Research

From The Circuits and Biology Lab at UMN

"You see things; and you say, 'Why?' But I dream things that never were; and I say, 'Why not?'"

–– George Bernard Shaw (1856–1950)

Our research spans different disciplines ranging from digital circuit design, to algorithms, to mathematics, to synthetic biology. It tends to be inductive (as opposed to deductive) and conceptual (as opposed to applied). A recurring theme is building systems that compute in novel or unexpected ways with new and emerging technologies.

Storing Data with Molecules

All new ideas pass through three stages:

  1. It can't be done.
  2. It probably can be done, but it's not worth doing.
  3. I knew it was a good idea all along!

––Arthur C. Clarke (1917–2008)

Ever since Watson and Crick first described the molecular structure of DNA, its information-bearing potential has been apparent. With each nucleotide in the sequence drawn from the four-valued alphabet of {A, T, C, G}, a molecule of DNA with n nucleotides stores 2n bits of data.

  • Could we store data for our computer systems in DNA? "Can't be done – too hard."
  • Is it worth doing? "Definitely not. It will never work as well as our hard drives do."
  • But one can store so much data so efficiently! "I knew it was a good idea all along!"
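
To make the two-bits-per-nucleotide arithmetic concrete, here is a minimal sketch of an encoder and decoder. It is an illustration only – the mapping of bit pairs to bases is an arbitrary choice of ours, and a real DNA storage pipeline adds sequence constraints and error correction on top of it.

    # Illustrative sketch only: map each pair of bits to one of the four bases,
    # so n nucleotides carry 2n bits. (Real DNA storage pipelines also enforce
    # sequence constraints and add error correction; this particular mapping is ours.)
    BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
    BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

    def encode(data: bytes) -> str:
        bits = "".join(f"{byte:08b}" for byte in data)
        return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

    def decode(strand: str) -> bytes:
        bits = "".join(BASE_TO_BITS[base] for base in strand)
        return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

    message = b"DNA"
    strand = encode(message)            # 3 bytes = 24 bits -> 12 nucleotides
    assert decode(strand) == message
    print(strand)                       # CACACATGCAAC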


title: Automated Routing of Droplets for DNA Storage on a Digital Microfluidics Platform
authors: Ajay Manicka, Andrew Stephan, Sriram Chari, Gemma Mendonsa, Peyton Okubo, John Stolzberg-Schray, Anil Reddy, and Marc Riedel
appeared in: Royal Society of Chemistry – Digital Discovery, Vol. 2, pp. 1436–1451, 2023
Paper
Slides


Storing data in DNA.


Computing with Molecules

"Biology is the most powerful technology ever created. DNA is software, protein are hardware, cells are factories."

––Arvind Gupta (1953– )

Computing has escaped! It has gone from desktops and data centers into the wild. Embedded microcontrollers – found in our gadgets, our buildings, and even our bodies – are transforming our lives. And yet, there are limits to where silicon can go and where it can compute effectively. It is a foreign object that requires an electrical power source.

We are studying novel types of computing systems that are not foreign, but rather an integral part of their physical and chemical environments: systems that compute directly with molecules. A simple but radical idea: compute with acids and bases. An acidic solution corresponds to a "1" and a basic solution to "0".

title: Digital Circuits and Neural Networks Based on Acid-Base Chemistry Implemented by Robotic Fluid Handling
authors: Ahmed Agiza, Kady Oakley, Jacob Rosenstein, Brenda Rubenstein, Eunsuk Kim, Marc Riedel, and Sherief Reda
appeared in: Nature Communications, Vol. 14, No. 496, 2023


Computing with Acids and Bases


It's more complex than acid-base chemistry, but DNA is a terrific chassis for computing. We have developed "CS 101" algorithms with DNA: Sorting, Shifting, and Searching:

title: Parallel Pairwise Operations on Data Stored in DNA: Sorting, XOR, Shifting, and Searching
authors: Arnav Solanki, Tonglin Chen, and Marc Riedel
appeared in: Natural Computing, 2023
presented at: International Conference on DNA Computing and Molecular Programming, 2021

Paper
Slides

Based on a bistable mechanism for representing bits, we have implemented logic gates such as AND, OR, and XOR, as well as sequential components such as latches and flip-flops, with DNA. Using these components, we have built full-fledged digital circuits such as binary counters and linear feedback shift registers.

title: Digital Logic with Molecular Reactions
authors: Hua Jiang, Marc Riedel, Keshab Parhi
presented at: The International Conference on Computer-Aided Design, San Jose, CA, 2013.

Paper

Simulations of DNA implementation of logic gates. The input signals are molecular concentrations X and Y; the output signal is a molecular concentration Z. (A) AND gate. (B) OR gate. (C) NOR gate. (D) XOR gate.

Also, we have performed signal processing with DNA, including operations such as filtering and fast Fourier transforms (FFTs).

title: Discrete-Time Signal Processing with DNA
authors: Hua Jiang, Ahmed Salehi, Marc Riedel and Keshab Parhi
appeared in: ACS Synthetic Biology, Vol. 2, No. 5, pp. 245–254, 2013.
Supplementary Information: List of Reactions
appeared in: IEEE Design & Test of Computers, Vol. 29, No. 3, pp. 21–31, 2012.
presented at: IEEE/ACM International Conference on Computer-Aided Design, San Jose, CA, 2010.
presented at: IEEE Workshop on Signal Processing Systems, San Francisco, 2010
Paper
Slides



Simulations of DNA implementation of a moving-average FIR filter. This filter removes the high-frequency component from an input signal, producing an output signal consisting of only the low-frequency component. Here the "signals" are molecular concentrations.

Please see our "Publications" page for more of our papers on these topics.

Computational Immunology

Physics is the study of the simple things in the Universe. Biology is the study of the complex ones.

–– Richard Dawkins (1941– )

We are studying a problem that computer science currently judges to be very difficult: predicting cellular immunity. It centers on the question of how strongly molecules bind to one another. The molecules in question are peptides – fragments of proteins from a virus – binding to cell-surface receptors. A peptide will only bind if it fits like a key into a lock.

A peptide (in blue) bound to a MHC Class I protein (in yellow).

The binding is a critical step in a critical component of the immune system: it allows circulating T-cells to kill off infected cells. If this mechanism succeeds, an infection is stopped in its tracks. If it fails, then infected cells become factories for reproducing copies of the virus; full-blown disease results. Given a novel pathogen, such as SARS-CoV-2, predicting whether the immune system of an individual will do its job of fighting off the disease comes down to predicting how well the viral peptides bind to the cell-surface receptors of that person. We are tackling the problem with cloud computing resources, donated by Oracle:

title: The UMN/Mayo Computational Human Immuno-Peptidome (CHIP) Project
Investigator: Marc Riedel
Agency: Oracle
Program: Oracle Research Fellowship
Award: $200,000
Duration: 2022 – 2024

Proposal

Computing with Random Bit Streams

"To invent, all you need is a pile of junk and a good imagination." –– Thomas A. Edison (1847–1931)

Humans are accustomed to counting in a positional number system – decimal radix. Nearly all computer systems operate on another – binary radix. We are so accustomed to these systems that it is counterintuitive to ask: can we compute using a different representation? And why would we want to?

Stochastic Logic

We advocate an alternative representation: computing on random bit streams, where the signal value is encoded by the probability of obtaining a one versus a zero. Why compute this way? Using stochastic logic, we can compute complex functions with very, very simple circuits. For instance, we can perform multiplication with a single AND gate and addition with a single MUX:

Multiplication with an AND gate. Here the variables a and b represent the probabilities of obtaining a 1 versus a 0 in the input bit streams. The AND gate produces an output probability c = a·b, the product of the input probabilities.

Scaled addition with a multiplexer (MUX). Given input probabilities a, b, and s, the MUX produces an output probability c = a·s + (1 − s)·b.
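
A quick way to see why this works is to simulate it. The sketch below is our own illustration (not code from the papers): it generates Bernoulli bit streams with the stated probabilities and checks that an AND gate multiplies them while a MUX computes the scaled sum.

    import random

    random.seed(1)
    N = 200_000   # bits per stream; accuracy improves roughly as 1/sqrt(N)

    def stream(p, n=N):
        """A stochastic bit stream: each bit is 1 with probability p."""
        return [1 if random.random() < p else 0 for _ in range(n)]

    def value(bits):
        """Decode a stream back to a number: the fraction of 1s."""
        return sum(bits) / len(bits)

    a, b, s = stream(0.75), stream(0.40), stream(0.30)

    # Multiplication: a single AND gate applied bit by bit.
    product = [x & y for x, y in zip(a, b)]
    print(value(product))               # ~0.30 = 0.75 * 0.40

    # Scaled addition: a MUX selects a bit of a when s = 1, a bit of b when s = 0.
    scaled = [x if sel else y for x, y, sel in zip(a, b, s)]
    print(value(scaled))                # ~0.505 = 0.30*0.75 + 0.70*0.40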

Using conventional binary, building a circuit that computes, say, a polynomial approximation to a function such as tanh(x) or cos(x) requires thousands of logic gates. With stochastic logic, we have shown that we can compute such functions with about a dozen logic gates – a 100X reduction in gate count. Our most important contribution is a general methodology for synthesizing polynomial functions with stochastic logic, one of the seminal contributions to the field:

title: An Architecture for Fault-Tolerant Computation with Stochastic Logic
authors: Weikang Qian, Xin Li, Marc Riedel, Kia Bazargan, and David Lilja
appeared in: IEEE Transactions on Computers, Vol. 60, No. 1, pp. 93–105, 2011

Paper
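
In a nutshell, the methodology rewrites the target polynomial in Bernstein form with coefficients in the unit interval, and then uses the count of 1s among the input bits to select among coefficient streams. The sketch below is our simplified software rendering of that idea, not the hardware architecture from the paper; the example polynomial and its Bernstein coefficients were chosen by us for illustration.

    import random

    random.seed(2)

    def bernstein_stochastic(x, b, n_bits=100_000):
        """Stochastic evaluation of a Bernstein polynomial with coefficients b,
        each in [0, 1]: count how many of n independent input bits are 1, and
        use that count to select a bit from the corresponding coefficient stream."""
        n = len(b) - 1                                  # polynomial degree
        ones = 0
        for _ in range(n_bits):
            k = sum(random.random() < x for _ in range(n))
            ones += random.random() < b[k]              # MUX picks coefficient k
        return ones / n_bits

    # Example (our choice): f(x) = 1/4 + (9/8)x - (15/8)x^2 + (5/4)x^3, which has
    # Bernstein coefficients (2/8, 5/8, 3/8, 6/8), all inside the unit interval.
    b = [2/8, 5/8, 3/8, 6/8]
    f = lambda x: 1/4 + (9/8)*x - (15/8)*x**2 + (5/4)*x**3

    for x in (0.2, 0.5, 0.8):
        print(x, round(bernstein_stochastic(x, b), 3), round(f(x), 3))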

Logic that Generates Probabilities

We have also shown how to synthesize logic that transforms a set of source probabilities into different target probabilities.

Given a set S of source probabilities {0.4, 0.5}, we can synthesize a combinational circuit to generate an arbitrary decimal output probability. The example shows how to generate 0.119. Each AND gate performs a multiplication and each inverter performs a "one-minus" operation.
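
As a concrete, hand-derived illustration – not necessarily the circuit in the figure – one chain of AND gates (multiplication) and inverters (one-minus) that reaches 0.119 from copies of the source values 0.4 and 0.5 is:

    # One possible chain (derived by hand, for illustration): each step is either
    # an AND gate, which multiplies probabilities, or an inverter, which computes
    # 1 - p. The sources 0.4 and 0.5 may be reused freely.
    p = 1 - 0.4        # 0.6
    p = 0.4 * p        # 0.24
    p = 1 - p          # 0.76
    p = 0.5 * p        # 0.38
    p = 0.5 * p        # 0.19
    p = 1 - p          # 0.81
    p = 0.5 * p        # 0.405
    p = 1 - p          # 0.595
    p = 0.4 * p        # 0.238
    p = 0.5 * p        # 0.119
    assert abs(p - 0.119) < 1e-9
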
title: Transforming Probabilities with Combinational Logic
authors: Weikang Qian, Marc Riedel, Hongchao Zhou, and Jehoshua Bruck
appeared in: IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 2012.
presented at: International Conference on Computer-Aided Design, San Jose, 2009
(nominated for IEEE/ACM William J. McCalla ICCAD Best Paper Award).

Paper
Slides

A Deterministic Approach

Having championed stochastic logic for many years, we decided to reexamine its foundations. Why can complex functions be computed with such simple circuits when we compute on probabilities? Intuition might suggest that somehow we are harnessing deep aspects of probability theory. This intuition is wrong.

The key is that we operate on a uniform representation rather than a positional one. We showed that we can compute deterministically using the same structures that we use when computing stochastically. There is no need to do anything randomly! This upended the field that we had pioneered.
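
One deterministic encoding along these lines represents each value as a periodic unary bit stream and gives the two operands relatively prime periods, so that every bit position of one stream meets every bit position of the other exactly once. An AND gate then computes the product exactly, with no randomness anywhere. The following is a minimal sketch of that idea (our illustration, not the circuitry from the paper):

    from fractions import Fraction

    def unary_stream(numerator, period, length):
        """Periodic unary stream: the first `numerator` slots of each period are 1."""
        return [1 if (i % period) < numerator else 0 for i in range(length)]

    # Represent 3/4 and 2/5 with periods 4 and 5. The periods are relatively
    # prime, so one full product period is lcm(4, 5) = 20 clock cycles.
    a = unary_stream(3, 4, 20)
    b = unary_stream(2, 5, 20)

    product = [x & y for x, y in zip(a, b)]
    result = Fraction(sum(product), len(product))

    print(result)                                        # 3/10
    assert result == Fraction(3, 4) * Fraction(2, 5)     # exact, not an estimate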

title: Performing Stochastic Computation Deterministically
authors: Devon Jenson, M. Hassan Najafi, David Lilja, and Marc Riedel
appeared in: IEEE Trans. on Very Large Scale Integration Systems, Vol. 27, No. 29, pp. 2925–2938, 2019
presented at: IEEE International Symposium on Circuits and Systems, 2020
presented at: IEEE/ACM International Conference on Computer-Aided Design, 2016
Paper
Slides

Time-Encoded Computing

Computing deterministically on bit streams really means that, instead of encoding data in space, we encode them in time. The time-encoding consists of periodic signals, with the value encoded as the fraction of the time that the signal is in the high (on) state compared to the low (off) state in each cycle.

Encoding a value in time. The value represented is the fraction of the time that the signal is high in each cycle, in this case 0.687.
Multiplication with a single AND gate, operating on deterministic periodic signals.

As technology has scaled and device sizes have gotten smaller, the supply voltages have dropped while the device speeds have improved. Control of the dynamic range in the voltage domain is limited; however, control of the length of pulses in the time domain can be precise. Encoding data in the time domain can be done more accurately and more efficiently than converting signals into binary radix. So we can compute more precisely, faster, and with fewer logic gates:

title: Time-Encoded Values for Highly Efficient Stochastic Circuits
authors: M. Hassan Najafi, S. Jamali-Zavareh, David Lilja, Marc Riedel, Kia Bazargan, and Ramesh Harjani
appeared in: IEEE Trans. on Very Large Scale Integration Systems, Vol. 25, No. 5, pp. 1644–1657, 2017
presented at: IEEE International Symposium on Circuits and Systems, 2017
Paper

Please see our "Publications" page for more of our papers on these topics.

Computing with Feedback

"A person with a new idea is a crank until the idea succeeds." –– Mark Twain (1835–1910)

The accepted wisdom is that combinational circuits (i.e., memoryless circuits) must have acyclic (i.e., loop-free or feed-forward) topologies. And yet simple examples suggest that this need not be so. We advocate the design of cyclic combinational circuits (i.e., circuits with loops or feedback paths). We have proposed a methodology for synthesizing such circuits and demonstrated that it produces significant improvements in area and in delay.

A circuit that has feedback and yet is combinational.
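
A toy example – our own, not the circuit in the figure – shows how a circuit can contain a feedback loop and still be combinational. Two multiplexers feed each other, yet for every input assignment a three-valued simulation that starts every wire at "unknown" settles to definite 0/1 outputs:

    X = None   # the "unknown" value in three-valued simulation

    def t_not(a):
        return X if a is X else 1 - a

    def t_and(a, b):
        if a == 0 or b == 0:
            return 0
        if a is X or b is X:
            return X
        return 1

    def t_or(a, b):
        if a == 1 or b == 1:
            return 1
        if a is X or b is X:
            return X
        return 0

    def evaluate(x, a, b, rounds=4):
        """Cyclic netlist: f = (~x & a) | (x & g) and g = (x & b) | (~x & f).
        Each gate reads the other's output, so there is a structural cycle."""
        f = g = X
        for _ in range(rounds):                  # iterate to a fixed point
            f = t_or(t_and(t_not(x), a), t_and(x, g))
            g = t_or(t_and(x, b), t_and(t_not(x), f))
        return f, g

    for x in (0, 1):
        for a in (0, 1):
            for b in (0, 1):
                f, g = evaluate(x, a, b)
                assert f is not X and g is not X     # every output resolves
                assert f == g == (b if x else a)     # both act as MUX(x; a, b)
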
title: Cyclic Boolean Circuits
authors: Marc Riedel and Shuki Bruck
appeared in: Discrete Applied Mathematics, Vol. 160, No. 13–14, pp. 1877–1900, 2011.
dissertation: Ph.D., Electrical Engineering, Caltech, 2004
(winner of Charles H. Wilts Prize for the Best Ph.D. Dissertation in EE at Caltech).
presented at: Design Automation Conference, Anaheim, CA, 2003
(winner of DAC Best Paper Award).
Paper
PhD Dissertation
Slides

Please see our Publications page for more of our papers on this topic.


Computing with Nanoscale Lattices

"Listen to the technology; find out what it’s telling you.” –– Carver Mead (1934–  )

In his seminal Master's Thesis, Claude Shannon made the connection between Boolean algebra and switching circuits. He considered two-terminal switches corresponding to electromagnetic relays. A Boolean function can be implemented in terms of connectivity across a network of switches, often arranged in a series/parallel configuration. We have developed a method for synthesizing Boolean functions with networks of four-terminal switches. Our model is applicable to a variety of nanoscale technologies, such as nanowire crossbar arrays and molecular switch-based structures.

Shannon's model: two-terminal switches. Each switch is either ON (closed) or OFF (open). A Boolean function is implemented in terms of connectivity across a network of switches, between the source S and the drain D.
               
Our model: four-terminal switches. Each switch is either mutually connected to its neighbors (ON) or disconnected (OFF). A Boolean function is implemented in terms of connectivity between the top and bottom plates. This network implements the same function as the two-terminal network on the left.
title: Logic Synthesis for Switching Lattices
authors: Mustafa Altun and Marc Riedel
appeared in: IEEE Transactions on Computers, 2011.
presented at: Design Automation Conference, Anaheim, CA, 2010.

Paper
Slides

The impetus for nanowire-based technology is its potential density, scalability, and manufacturability. Many other novel and emerging technologies fit the general model of four-terminal switches; for instance, researchers are investigating spin waves. A common feature of many emerging technologies for switching networks is that they exhibit high defect rates.

A nanowire crossbar switch. The connections between horizontal and vertical wires are FET-like junctions. When high or low voltages are applied to input nanowires, the FET-like junctions that cross these develop a high or low impedance, respectively.
              
In a switching network with defects, percolation can be exploited to produce robust Boolean functionality.

We have devised a novel framework for digital computation with lattices of nanoscale switches with high defect rates, based on the mathematical phenomenon of percolation. With random connectivity, percolation gives rise to a sharp non-linearity in the probability of global connectivity as a function of the probability of local connectivity. We exploit this phenomenon to compute Boolean functions robustly in the presence of defects.
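
The sharp non-linearity is easy to observe numerically. The sketch below is an illustration of the phenomenon (not our synthesis procedure): it estimates the probability that a random square lattice of switches connects its top edge to its bottom edge, as a function of the probability p that an individual switch is ON.

    import random
    from collections import deque

    def connects_top_to_bottom(grid):
        """Breadth-first search through ON cells (4-neighbor adjacency)
        from the top row of the lattice to the bottom row."""
        n = len(grid)
        queue = deque((0, c) for c in range(n) if grid[0][c])
        seen = set(queue)
        while queue:
            r, c = queue.popleft()
            if r == n - 1:
                return True
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < n and 0 <= nc < n and grid[nr][nc] and (nr, nc) not in seen:
                    seen.add((nr, nc))
                    queue.append((nr, nc))
        return False

    def global_connectivity(p, n=24, trials=400):
        """Estimate Pr[top edge connects to bottom edge] when each switch is ON
        with probability p."""
        hits = 0
        for _ in range(trials):
            grid = [[random.random() < p for _ in range(n)] for _ in range(n)]
            hits += connects_top_to_bottom(grid)
        return hits / trials

    random.seed(3)
    for p in (0.3, 0.5, 0.6, 0.7, 0.9):
        print(p, global_connectivity(p))
    # The estimate stays near 0 below the site-percolation threshold of a square
    # lattice (about 0.593) and climbs steeply toward 1 above it.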

title: Synthesizing Logic with Percolation in Nanoscale Lattices
authors: Mustafa Altun and Marc Riedel
appeared in: International Journal of Nanotechnology and Molecular Computation, Vol. 3, No. 2, pp. 12–30, 2011.
presented at: Design Automation Conference, San Francisco, CA, 2009.
Paper
Slides

Please see our "Publications" page for more of our papers on these topics.

Algorithms and Data Structures

"There are two kinds of people in the world: those who divide the world into two kinds of people, and those who don't." –– Robert Charles Benchley (1889–1945)

Consider the task of designing a digital circuit with 256 inputs. From a mathematical standpoint, such a circuit performs mappings from a space of 2^256 Boolean input values to Boolean output values. (The number of rows in a truth table for such a function is approximately equal to the number of atoms in the universe – 10^77 rows versus 10^79 atoms!) Verifying such a function, let alone designing the corresponding circuit, would seem to be an intractable problem.

Circuit designers have succeeded in their endeavor largely as a result of innovations in the data structures and algorithms used to represent and manipulate Boolean functions. We have developed novel, efficient techniques for synthesizing functional dependencies based on so-called SAT-solving algorithms. We use Craig Interpolation to generate circuits from the corresponding Boolean functions.
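
The underlying notion is simple to state: a target function f is expressible as a function of base functions g1, ..., gk exactly when no two input assignments yield the same base-function values but different values of f. The brute-force check below illustrates the definition on toy functions of our own choosing; the synthesis work itself replaces this enumeration with SAT solving and extracts the dependency function via Craig Interpolation.

    from itertools import product

    def functionally_depends(f, gs, n_vars):
        """True if f(x) is determined by the tuple (g1(x), ..., gk(x)) for all x."""
        seen = {}
        for x in product((0, 1), repeat=n_vars):
            key = tuple(g(*x) for g in gs)
            if key in seen and seen[key] != f(*x):
                return False          # same base values, different target value
            seen[key] = f(*x)
        return True

    # Toy functions of our own choosing, over variables (a, b, c):
    g1 = lambda a, b, c: a ^ b
    g2 = lambda a, b, c: c

    print(functionally_depends(lambda a, b, c: (a ^ b) & c, [g1, g2], 3))   # True
    print(functionally_depends(lambda a, b, c: a & c, [g1, g2], 3))         # False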

A circuit construct for SAT-based verification.
            
A squid.
title: Reduction of Interpolants For Logic Synthesis
authors: John Backes and Marc Riedel
presented at: The International Conference on Computer-Aided Design, San Jose, CA, 2010.

Paper
Slides

Please see our "Publications" page for more of our papers on this topic. (Papers on SAT-based circuit verification, that is, not on squids.)

Mathematics

"Mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true." –– Bertrand Russell (1872–1970)

Fields of study arranged by purity (xkcd).


The great mathematician John von Neumann articulated the view that research should never meander too far down theoretical paths; it should always be guided by potential applications. This view was not based on concerns about the relevance of his profession; rather, in his judgment, real-world applications give rise to the most interesting problems for mathematicians to tackle. At their core, most of our research contributions are mathematical contributions. The tools of our trade are discrete math, including combinatorics and probability theory.

title: Uniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval
authors: Weikang Qian, Marc Riedel, and Ivo Rosenberg
appeared in: European Journal of Combinatorics, Vol. 32, No. 3, pp. 448–463, 2011.
Paper

Please see our "Publications" page for more of our papers on this topic.