Research

From The Circuits and Biology Lab at UMN

"You see things; and you say, 'Why?' But I dream things that never were; and I say, 'Why not?'"

–– George Bernard Shaw (1856–1950)

Our research spans different disciplines ranging from digital circuit design, to algorithms, to mathematics, to synthetic biology. It tends to be inductive (as opposed to deductive) and conceptual (as opposed to applied). A recurring theme is building systems that compute in novel or unexpected ways with new and emerging technologies.

Storing Data with Molecules

All new ideas pass through three stages:

  1. It can't be done.
  2. It probably can be done, but it's not worth doing.
  3. I knew it was a good idea all along!

––Arthur C. Clarke (1917–2008)

Ever since Watson and Crick first described the molecular structure of DNA, its information-bearing potential has been apparent. With each nucleotide in the sequence drawn from the four-valued alphabet {A, T, C, G}, a molecule of DNA with n nucleotides stores 2n bits of data.
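To make the arithmetic concrete, here is a minimal sketch in Python of packing binary data into a strand at two bits per nucleotide. The bit-to-base mapping is an arbitrary choice of ours; practical DNA storage systems layer constraints (GC balance, homopolymer limits) and error-correcting codes on top of such a mapping.

  # Sketch: pack bits into DNA at 2 bits per nucleotide (arbitrary mapping).
  BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
  BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

  def encode(data: bytes) -> str:
      """Map every pair of bits to one nucleotide: n bytes -> 4n bases."""
      bits = "".join(f"{byte:08b}" for byte in data)
      return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

  def decode(strand: str) -> bytes:
      """Invert the mapping: every 4 bases -> 1 byte."""
      bits = "".join(BASE_TO_BITS[base] for base in strand)
      return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

  message = b"DNA"
  strand = encode(message)           # 3 bytes = 24 bits -> 12 nucleotides
  assert decode(strand) == message
  print(strand)                      # CACACATGCAAC with this mapping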

  • Could we store data for our computer systems in DNA? "Can't be done. Too hard."
  • Is it worth doing? "Definitely not. It will never work as well as our hard drives do."
  • But one can store so much data so efficiently! "I knew it was a good idea all along!"


title: Automated Routing of Droplets for DNA Storage on a Digital Microfluidics Platform
authors: Ajay Manicka, Andrew Stephan, Sriram Chari, Gemma Mendonsa, Peyton Okubo, John Stolzberg-Schray, Anil Reddy, and Marc Riedel

under revision: Royal Society of Chemistry – Digital Discovery, 2023
Pdf.jpg

Paper

Ppt.jpg

Slides


Storing data in DNA.


Computing with Molecules

"Biology is the most powerful technology ever created. DNA is software, protein are hardware, cells are factories."

––Arvind Gupta (1953– )

Computing has escaped! It has gone from desktops and data centers into the wild. Embedded microcontrollers – found in our gadgets, our buildings, and even our bodies – are transforming our lives. And yet, there are limits to where silicon can go and where it can compute effectively. It is a foreign object that requires an electrical power source.

We are studying novel types of computing systems that are not foreign, but rather an integral part of their physical and chemical environments: systems that compute directly with molecules. A simple but radical idea: compute with acids and bases. An acidic solution corresponds to a "1", while a basic solution corresponds to a "0".
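Purely as a conceptual sketch (a toy numerical model of ours, not the chemistry or the robotic protocol in the paper below): if the inputs are equal volumes of strong acid ("1") or strong base ("0") at the same concentration, then mixing them acts like voting, since the result is acidic exactly when the acids are in the majority. Majority plus inversion is enough to build any digital circuit.

  # Toy model of "voting by neutralization": 1 = strong acid, 0 = strong base,
  # all inputs of equal volume and concentration.  A conceptual sketch only,
  # not the construction used in the paper below.
  def mix_majority(bits):
      """Mix the inputs; the result is acidic (1) iff acids outnumber bases."""
      acid_excess = sum(+1 if b else -1 for b in bits)   # net H+ over OH-
      return 1 if acid_excess > 0 else 0

  def NOT(b):
      """A hypothetical inverting step (swap the reagent)."""
      return 1 - b

  # Majority plus inversion is functionally complete; for example, NAND:
  def NAND(a, b):
      return NOT(mix_majority([a, b, 0]))    # bias the vote with one base input

  for a in (0, 1):
      for b in (0, 1):
          print(a, b, "->", NAND(a, b))      # prints the NAND truth table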

title: Digital Circuits and Neural Networks Based on Acid-Base Chemistry Implemented by Robotic Fluid Handling
authors: Ahmed Agiza, Kady Oakley, Jacob Rosenstein, Brenda Rubenstein, Eunsuk Kim, Marc Riedel, and Sherief Reda

appeared in: Nature Communications, Vol. 14, No. 496, 2023


Computing with Acids and Bases


It's more complex than acids and bases, but DNA is a terrific chassis for computing. We have developed "CS 101" algorithms with DNA: sorting, shifting, and searching:

title: Parallel Pairwise Operations on Data Stored in DNA: Sorting, XOR, Shifting, and Searching
authors: Arnav Solanki, Tonglin Chen, and Marc Riedel
under review in: Natural Computing, 2023
presented at: International Conference on DNA Computing and Molecular Programming, 2021

Pdf.jpg
Paper

Ppt.jpg
Slides

Based on a bistable mechanism for representing bits, we have implemented logic gates such as AND, OR, and XOR, as well as sequential components such as latches and flip-flops, with DNA. Using these components, we have built full-fledged digital circuits such as binary counters and linear feedback shift registers.
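As a toy illustration of the idea (ordinary mass-action kinetics, not the specific DNA reaction scheme in the paper below), a single bimolecular reaction X + Y → Z already behaves like an AND gate on absent/present concentrations: Z accumulates only when both inputs are present.

  # Toy AND gate as the reaction X + Y -> Z under mass-action kinetics,
  # integrated with forward Euler.  Inputs are initial concentrations (0 or 1);
  # the output is the final concentration of Z.
  def and_gate(x0, y0, k=5.0, dt=0.001, t_end=50.0):
      x, y, z = float(x0), float(y0), 0.0
      for _ in range(int(t_end / dt)):
          rate = k * x * y           # d[Z]/dt = k [X][Y]
          x -= rate * dt
          y -= rate * dt
          z += rate * dt
      return z

  for x0 in (0, 1):
      for y0 in (0, 1):
          print(x0, y0, "->", round(and_gate(x0, y0), 2))
  # Z ends up near 1 only when both inputs are present (1 1 -> ~1.0; otherwise 0).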

title: Digital Logic with Molecular Reactions
authors: Hua Jiang, Marc Riedel, Keshab Parhi
presented at: The International Conference on Computer-Aided Design, San Jose, CA, 2013.

Pdf.jpg
Paper

Simulations of DNA implementation of logic gates. The input signals are molecular concentrations X and Y; the output signal is a molecular concentration Z. (A) AND gate. (B) OR gate. (C) NOR gate. (D) XOR gate.

Also, we have performed signal processing with DNA, including operations such as filtering and fast Fourier transforms (FFTs).

title: Discrete-Time Signal Processing with DNA
authors: Hua Jiang, Ahmed Salehi, Marc Riedel and Keshab Parhi
appeared in: ACS Synthetic Biology, Vol. 2, No. 5, pp. 245–254, 2013.
Supplementary Information: List of Reactions
appeared in: IEEE Design & Test of Computers, Vol. 29, No. 3, pp. 21–31, 2012.
presented at: IEEE/ACM International Conference on Computer-Aided Design,
San Jose, CA, 2010.
presented at: IEEE Workshop on Signal Processing Systems, San Francisco, 2010

Pdf.jpg
Paper

Ppt.jpg
Slides



Simulations of DNA implementation of a moving-average FIR filter. This filter removes the high-frequency component from an input signal, producing an output signal consisting of only the low-frequency component. Here the "signals" are molecular concentrations.
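For reference, the operation being implemented is the ordinary moving-average FIR filter, y[n] = (x[n] + x[n-1] + ... + x[n-M+1]) / M. A short sketch of it in conventional software form is below; the tap count and test signal are illustrative, not those of the DNA implementation.

  # Conventional form of a moving-average FIR filter (low-pass behavior).
  import numpy as np

  def moving_average(x, M=8):
      taps = np.ones(M) / M                  # M equal coefficients
      return np.convolve(x, taps, mode="same")

  n = np.arange(400)
  low  = np.sin(2 * np.pi * n / 200)         # slow component (kept)
  high = 0.5 * np.sin(2 * np.pi * n / 8)     # fast component (removed)
  y = moving_average(low + high, M=8)

  # Away from the boundaries, the fast component (period 8 = the filter length)
  # is removed, and the slow component passes with only a slight change.
  print(round(float(np.max(np.abs((y - low)[20:-20]))), 3))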

Please see our "Publications" page for more of our papers on these topics.

Computational Immunology

Biology is the study of the complex things in the Universe. Physics is the study of the simple ones.

–– Richard Dawkins (1941– )

We are studying a problem that computer science currently judges to be very difficult: predicting cellular immunity. It centers on the question of how strongly molecules bind to one another. The molecules in question are peptides – fragments of proteins from a virus – and cell-surface receptors that have a cleft. A peptide will only bind if it fits into the cleft like a key into a lock.

A peptide (in blue) bound to an MHC Class I protein (in yellow).

The binding is a critical step in a critical component of the immune system: it allows circulating T-cells to kill off infected cells. If this mechanism succeeds, an infection is stopped in its tracks. If it fails, then infected cells become factories for reproducing copies of the virus; full-blown disease results. Given a novel pathogen, such as SARS-CoV-2, predicting whether the immune system of an individual will do its job of fighting off the disease comes down to predicting how well the viral peptides bind to the cell-surface receptors of that person. We are tackling the problem with cloud computing resources, donated by Oracle:

title: The UMN/Mayo Computational Human Immuno-Peptidome (CHIP) Project
Investigator: Marc Riedel
Agency: Oracle
Program: Oracle Research Fellowship
Award: $200,000
Duration: 2022 – 2024

Pdf.jpg
Proposal

Computing with Random Bit Streams

"To invent, all you need is a pile of junk and a good imagination." –– Thomas A. Edison (1847–1931)

Humans are accustomed to counting in a positional number system – decimal radix. Nearly all computer systems operate on another positional number system – binary radix. We are so accustomed to these systems that it is counterintuitive to ask: can we compute using a different representation? And why would we want to?

Stochastic Logic

We advocate an alternative representation: computing on random bit streams, where the signal value is encoded by the probability of obtaining a one versus a zero. Why compute this way? Using stochastic logic, we can compute complex functions with very, very simple circuits. For instance, we can perform multiplication with a single AND gate and addition with a single MUX:

Multiplication with an AND gate. Here the variables a and b represent the probabilities of obtaining a 1 (versus a 0) in two independent stochastic bit streams. The AND gate produces an output probability equal to the product a × b of the input probabilities.

Scaled addition with a multiplexer (MUX). Given input probabilities a and b, and a select probability s, the MUX produces an output probability s × a + (1 − s) × b.
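A short Monte Carlo sketch of both operations (the variable names and values are ours, for illustration): generate Bernoulli bit streams, push them through a bitwise AND and a bitwise MUX, and compare the observed output probabilities with a × b and s × a + (1 − s) × b.

  # Stochastic computing in simulation: a value is the probability of a 1
  # in a random bit stream.
  import numpy as np

  rng = np.random.default_rng(0)
  N = 1_000_000                      # stream length (longer -> more precision)
  a, b, s = 0.3, 0.8, 0.5            # values to encode

  A = rng.random(N) < a              # independent Bernoulli bit streams
  B = rng.random(N) < b
  S = rng.random(N) < s

  product    = A & B                 # one AND gate: multiplication
  scaled_sum = np.where(S, A, B)     # one MUX: select A when S = 1, else B

  print(product.mean(), a * b)                    # both ~0.24
  print(scaled_sum.mean(), s * a + (1 - s) * b)   # both ~0.55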

Using a conventional representation, building a circuit that computes, say, a polynomial approximation to a trigonometric function such as tanh(x) or cos(x) requires thousands of logic gates. With stochastic logic, we have shown that we can compute such functions with about a dozen logic gates, so a 100X reduction in gate count. Our most important contribution is a general methodology for synthesizing polynomial functions with stochastic logic, one of the seminal contributions to the field:

title: An Architecture for Fault-Tolerant Computation with Stochastic Logic
authors: Weikang Qian, Xin Li, Marc Riedel, Kia Bazargan, and David Lilja
appeared in: IEEE Transactions on Computers, Vol. 60, No. 1, pp. 93–105, 2011

Pdf.jpg
Paper
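The methodology in the paper above is built on Bernstein polynomials with coefficients in the unit interval (see also the Mathematics section below). Here is a Monte Carlo sketch of the core idea; the degree and coefficients are arbitrary values of ours, not a fitted approximation of tanh or cos.

  # Bernstein-polynomial computation with stochastic logic: n independent copies
  # of the input stream feed a counter, and the count selects, bit by bit, among
  # constant streams whose probabilities are the Bernstein coefficients.
  import numpy as np
  from math import comb

  rng = np.random.default_rng(1)
  N, n = 500_000, 3
  coeffs = [0.2, 0.9, 0.4, 0.7]      # b_0..b_n, each in [0, 1]
  x = 0.6                            # input value

  X = rng.random((n, N)) < x                                 # n input streams
  C = rng.random((n + 1, N)) < np.array(coeffs)[:, None]     # coefficient streams
  k = X.sum(axis=0)                                          # counter output, 0..n
  out = C[k, np.arange(N)]                                   # MUX: pick the k-th constant bit

  bernstein = sum(coeffs[i] * comb(n, i) * x**i * (1 - x)**(n - i) for i in range(n + 1))
  print(out.mean(), bernstein)       # the two values agree closely (~0.596)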

Logic that Generates Probabilities

We have also shown how to synthesize logic that transforms a set of source probabilities into different target probabilities.

Given a set S of source probabilities {0.4, 0.5}, we can synthesize a combinational circuit to generate an arbitrary decimal output probability. The example shows how to generate 0.119. Each AND gate performs a multiplication and each inverter performs a "one-minus" operation.
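The target 0.119 really is reachable from copies of 0.4 and 0.5 using only AND gates (products) and inverters (one-minus). The sketch below checks one such decomposition by Monte Carlo, 0.119 = (1 − 0.5 × 0.6) × (0.4 × 0.5 × (1 − 0.5 × 0.5 × 0.6)); this is a circuit of ours for illustration, not necessarily the one shown in the figure or produced by the synthesis algorithm in the paper below.

  # Monte Carlo check of one circuit mapping source probabilities {0.4, 0.5}
  # to 0.119 with only AND (product) and NOT (one-minus).  Each gate input must
  # be an independent copy of its source stream.
  import numpy as np

  rng = np.random.default_rng(2)
  N = 2_000_000
  def src(p):                                    # independent Bernoulli(p) stream
      return rng.random(N) < p

  a1, a2, a3 = (src(0.4) for _ in range(3))      # copies of the 0.4 source
  b1, b2, b3, b4 = (src(0.5) for _ in range(4))  # copies of the 0.5 source

  left  = ~(b1 & ~a1)                  # 1 - 0.5*(1 - 0.4)               = 0.70
  right = a2 & b2 & ~(b3 & b4 & ~a3)   # 0.4*0.5*(1 - 0.5*0.5*(1 - 0.4)) = 0.17
  out = left & right                   # 0.70 * 0.17                     = 0.119

  print(out.mean())                    # ~0.119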
title: Transforming Probabilities with Combinational Logic
authors: Weikang Qian, Marc Riedel, Hongchao Zhou, and Jehoshua Bruck
will appear in: IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 2012.
presented at: International Conference on Computer-Aided Design, San Jose, 2009
(nominated for IEEE/ACM William J. McCalla ICCAD Best Paper Award).

Pdf.jpg
Paper

Ppt.jpg
Slides

A Deterministic Approach

Having pioneered the field of stochastic logic, we decided to reexamine its foundations. Why can complex functions be computed with such simple circuits when we compute on probabilities? Intuition might suggest that somehow we are harnessing deep aspects of probability theory. This intuition is wrong.

The key is that we operate on a uniform representation rather than a positional one. We showed that we can compute deterministically using the same structures that we use when computing stochastically. There is no need to do anything randomly! This upended the field that we had pioneered.
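One simple scheme along these lines, sketched here with illustrative values (the paper below develops deterministic approaches in full): represent values as unary bit streams, repeat each bit of one operand while replaying the whole other stream, and AND the two. Every bit of one stream meets every bit of the other exactly once, so the product is exact, with no randomness anywhere.

  # Deterministic multiplication on unary bit streams: the AND of the expanded
  # streams has exactly (ones_A * ones_B) ones out of len(A) * len(B) bits.
  import numpy as np

  def unary(k, n):                       # the value k/n as a length-n unary stream
      return np.array([1] * k + [0] * (n - k), dtype=np.uint8)

  A = unary(3, 8)                        # 3/8
  B = unary(5, 6)                        # 5/6

  A_ext = np.repeat(A, len(B))           # hold each bit of A for len(B) cycles
  B_ext = np.tile(B, len(A))             # replay all of B for each bit of A
  out = A_ext & B_ext                    # a single AND gate, clocked 8 * 6 = 48 times

  print(out.mean(), 3 * 5 / (8 * 6))     # both are exactly 0.3125
  assert out.sum() == 3 * 5              # exact, not statistical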

title: Performing Stochastic Computation Deterministically
authors: Devon Jenson, M. Hassan Najafi, David Lilja, and Marc Riedel
appeared in: IEEE Trans. on Very Large Scale Integration Systems,
Vol. 27, No. 29, pp. 2925–2938, 2019
presented at: IEEE International Symposium on Circuits and Systems, 2020
presented at: IEEE/ACM International Conference on Computer-Aided Design, 2016

Pdf.jpg
Paper

Ppt.jpg
Slides

Time-Encoded Computing

Computing deterministically on bit streams really means that, instead of encoding data in space, we encode them in time. The time encoding consists of periodic signals, with the value encoded as the fraction of the time that the signal is in the high (on) state versus the low (off) state in each cycle.

Encoding a value in time. The value represented is the fraction of the time that the signal is high in each cycle, in this case 0.687.
Multiplication with a single AND gate, operating on deterministic periodic signals.
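The same exactness carries over to periodic, duty-cycle-encoded signals. In the discrete-time sketch below (periods and values are illustrative), the two signals have relatively prime periods, so over one common period every phase alignment of the pulses occurs exactly once and the AND gate's duty cycle is exactly the product.

  # Time-encoded multiplication: each signal is periodic and its value is the
  # fraction of each period spent high.  With relatively prime periods, the AND
  # of the two signals has a duty cycle exactly equal to the product.
  import numpy as np

  def pwm(high, period, n_periods):      # value = high/period, as a 0/1 waveform
      cycle = np.array([1] * high + [0] * (period - high), dtype=np.uint8)
      return np.tile(cycle, n_periods)

  p1, p2 = 7, 10                         # relatively prime periods
  x = pwm(5, p1, p2)                     # duty cycle 5/7, over 70 time slots
  y = pwm(3, p2, p1)                     # duty cycle 3/10, over the same 70 slots
  z = x & y                              # a single AND gate

  print(z.mean(), (5 / 7) * (3 / 10))    # both ~0.2143 (= 15/70)
  assert z.sum() == 5 * 3                # exactly 15 ones: exact, not statistical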

As technology has scaled and device sizes have gotten smaller, the supply voltages have dropped while the device speeds have improved. Control of the dynamic range in the voltage domain is limited; however, control of the length of pulses in the time domain can be precise. Encoding data in the time domain can be done more accurately and more efficiently than converting signals into binary radix. So we can compute more precisely, faster, and with fewer logic gates:

title: Time-Encoded Values for Highly Efficient Stochastic Circuits
authors: M. Hassan Najafi, S. Jamali-Zavareh, David Lilja, Marc Riedel, Kia Bazargan and
Ramesh Harjani
appeared in: IEEE Trans. on Very Large Scale Integration Systems,
Vol. 25, No. 5, pp. 1644–1657, 2017
presented at: IEEE International Symposium on Circuits and Systems, 2017

Pdf.jpg
Paper

Please see our "Publications" page for more of our papers on these topics.

Computing with Feedback

"A person with a new idea is a crank until the idea succeeds." –– Mark Twain (1835–1910)

The accepted wisdom is that combinational circuits (i.e., memoryless circuits) must have acyclic (i.e., loop-free or feed-forward) topologies. And yet simple examples suggest that this need not be so. We advocate the design of cyclic combinational circuits (i.e., circuits with loops or feedback paths). We have proposed a methodology for synthesizing such circuits and demonstrated that it produces significant improvements in area and in delay.

A circuit that has feedback and yet is combinational.
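A minimal example of our own, for illustration (not one of the optimized circuits from the work below): two multiplexers wired in a loop, f = c ? g : x1 and g = c ? x2 : f. Evaluating the loop by three-valued (0/1/unknown) simulation to a fixed point shows that both outputs resolve for every input, so the circuit is combinational despite the feedback.

  # A cyclic circuit that is nonetheless combinational:
  #     f = c ? g : x1        g = c ? x2 : f
  # Ternary (0, 1, X = unknown) simulation to a fixed point: if every output
  # resolves to 0 or 1 for every input, the feedback is harmless.
  X = "X"

  def t_not(a):     return X if a == X else 1 - a
  def t_and(a, b):  return 0 if 0 in (a, b) else (X if X in (a, b) else 1)
  def t_or(a, b):   return 1 if 1 in (a, b) else (X if X in (a, b) else 0)
  def t_mux(s, a, b):                        # s ? b : a, built from AND/OR/NOT
      return t_or(t_and(s, b), t_and(t_not(s), a))

  def evaluate(c, x1, x2):
      f = g = X                              # start every wire at 'unknown'
      for _ in range(10):                    # iterate to a fixed point
          f_new = t_mux(c, x1, g)            # f = c ? g : x1
          g_new = t_mux(c, f, x2)            # g = c ? x2 : f
          if (f_new, g_new) == (f, g):
              break
          f, g = f_new, g_new
      return f, g

  for c in (0, 1):
      for x1 in (0, 1):
          for x2 in (0, 1):
              f, g = evaluate(c, x1, x2)
              assert X not in (f, g)         # every output resolves: combinational
              print(c, x1, x2, "->", f, g)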
title: Cyclic Boolean Circuits
authors: Marc Riedel and Shuki Bruck
appeared in: Discrete Applied Mathematics, Vol. 160, No. 13–14, pp. 1877–1900, 2011.
dissertation: Ph.D., Electrical Engineering, Caltech, 2004
(winner of Charles H. Wilts Prize for the Best Ph.D. Dissertation in EE at Caltech).
presented at: Design Automation Conference, Anaheim, CA, 2003
(winner of DAC Best Paper Award).

Pdf.jpg
Paper

Pdf.jpg
PhD Dissertation

Ppt.jpg
Slides

Please see our Publications page for more of our papers on this topic.


Computing with Nanoscale Lattices

"Listen to the technology; find out what it’s telling you.” –– Carver Mead (1934–  )

In his seminal Master's thesis, Claude Shannon made the connection between Boolean algebra and switching circuits. He considered two-terminal switches corresponding to electromagnetic relays. A Boolean function can be implemented in terms of connectivity across a network of switches, often arranged in a series/parallel configuration. We have developed a method for synthesizing Boolean functions with networks of four-terminal switches. Our model is applicable to a variety of nanoscale technologies, such as nanowire crossbar arrays and molecular switch-based structures.

Shannon's model: two-terminal switches. Each switch is either ON (closed) or OFF (open). A Boolean function is implemented in terms of connectivity across a network of switches, between the source S and the drain D.
               
Our model: four-terminal switches. Each switch is either mutually connected to its neighbors (ON) or disconnected (OFF). A Boolean function is implemented in terms of connectivity between the top and bottom plates. This network implements the same function as the two-terminal network on the left.
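Here is a sketch of how a function is read off such a lattice (the lattice and its literal assignment are a hypothetical example of ours, not the network in the figure): each site is controlled by a literal, a site is ON when its literal evaluates to 1, and the function is 1 exactly when the ON sites form a path of neighboring sites from the top row to the bottom row.

  # Evaluating a Boolean function implemented as top-to-bottom connectivity in a
  # lattice of four-terminal switches.  An ON site is mutually connected to its
  # ON neighbors (up/down/left/right).
  from collections import deque

  lattice = [["x1", "x2"],
             ["x2", "x3"]]                  # the literal controlling each switch

  def evaluate(lattice, assignment):
      rows, cols = len(lattice), len(lattice[0])
      on = [[assignment[lit] for lit in row] for row in lattice]
      frontier = deque((0, c) for c in range(cols) if on[0][c])   # ON sites on the top plate
      seen = set(frontier)
      while frontier:
          r, c = frontier.popleft()
          if r == rows - 1:
              return 1                      # reached the bottom plate
          for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
              if 0 <= nr < rows and 0 <= nc < cols and on[nr][nc] and (nr, nc) not in seen:
                  seen.add((nr, nc))
                  frontier.append((nr, nc))
      return 0

  # This 2x2 lattice computes x2(x1 + x3): a path down the left or right column.
  for x1 in (0, 1):
      for x2 in (0, 1):
          for x3 in (0, 1):
              print(x1, x2, x3, "->", evaluate(lattice, {"x1": x1, "x2": x2, "x3": x3}))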
title: Logic Synthesis for Switching Lattices
authors: Mustafa Altun and Marc Riedel
will appear in: IEEE Transactions on Computers, 2011.
presented at: Design Automation Conference, Anaheim, CA, 2010.

Pdf.jpg
Paper

Ppt.jpg
Slides

The impetus for nanowire-based technology is its potential density, scalability, and manufacturability. Many other novel and emerging technologies fit the general model of four-terminal switches. For instance, researchers are investigating spin waves. A common feature of many emerging technologies for switching networks is that they exhibit high defect rates.

A nanowire crossbar switch. The connections between horizontal and vertical wires are FET-like junctions. When high or low voltages are applied to input nanowires, the FET-like junctions that cross these develop a high or low impedance, respectively.
               
In a switching network with defects, percolation can be exploited to produce robust Boolean functionality. Unless the defect rate exceeds an error margin, with high probability no connection forms between the top and bottom plates for logical zero ("OFF"); with high probability, a connection forms for logical one ("ON").

We have devised a novel framework for digital computation with lattices of nanoscale switches with high defect rates, based on the mathematical phenomenon of percolation. With random connectivity, percolation gives rise to a sharp non-linearity in the probability of global connectivity as a function of the probability of local connectivity. We exploit this phenomenon to compute Boolean functions robustly in the presence of defects.
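A Monte Carlo sketch of that nonlinearity (the lattice size, trial count, and defect model are illustrative): switch each site ON independently with probability p and estimate the probability of a top-to-bottom path. The estimate stays near 0 for small p, near 1 for large p, and rises sharply in between; it is this steep transition that absorbs defects.

  # The percolation nonlinearity: probability of a top-to-bottom path through a
  # lattice of randomly-ON sites, as a function of the local ON probability p.
  import random
  from collections import deque

  def connected(on):                       # path of ON sites from top row to bottom row?
      rows, cols = len(on), len(on[0])
      frontier = deque((0, c) for c in range(cols) if on[0][c])
      seen = set(frontier)
      while frontier:
          r, c = frontier.popleft()
          if r == rows - 1:
              return True
          for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
              if 0 <= nr < rows and 0 <= nc < cols and on[nr][nc] and (nr, nc) not in seen:
                  seen.add((nr, nc))
                  frontier.append((nr, nc))
      return False

  random.seed(0)
  size, trials = 16, 2000
  for p in (0.1, 0.3, 0.5, 0.6, 0.7, 0.9):
      hits = sum(connected([[random.random() < p for _ in range(size)]
                            for _ in range(size)]) for _ in range(trials))
      print(f"p = {p:.1f}   P(connected) ~ {hits / trials:.2f}")
  # Near 0 below the percolation threshold (~0.59 for a square site lattice),
  # near 1 above it, with a sharp rise in between.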

title: Synthesizing Logic with Percolation in Nanoscale Lattices
authors: Mustafa Altun and Marc Riedel
appeared in: International Journal of Nanotechnology and Molecular Computation,
Vol. 3, No. 2, pp. 12–30, 2011.
presented at: Design Automation Conference, San Francisco, CA, 2009.

Pdf.jpg
Paper

Ppt.jpg
Slides

Please see our "Publications" page for more of our papers on these topics.

Algorithms and Data Structures

"There are two kinds of people in the world: those who divide the world into two kinds of people, and those who don't." –– Robert Charles Benchley (1889–1945)

Consider the task of designing a digital circuit with 256 inputs. From a mathematical standpoint, such a circuit performs a mapping from a space of Boolean input values to Boolean output values. (A truth table for such a function has 2^256, or roughly 10^77, rows: comparable to the number of atoms in the observable universe.) Verifying such a function, let alone designing the corresponding circuit, would seem to be an intractable problem. Circuit designers have succeeded in their endeavor largely as a result of innovations in the data structures and algorithms used to represent and manipulate Boolean functions. We have developed novel, efficient techniques for synthesizing functional dependencies based on so-called SAT-solving algorithms. We use Craig interpolation to generate circuits from the corresponding Boolean functions.
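The question behind functional dependency extraction can be stated without any SAT machinery: a target function f is expressible as some function of g1, ..., gk exactly when no two input vectors agree on every gi yet disagree on f. The toy checker below (an example of ours) tests that condition by brute-force enumeration; the contribution of the work cited below is answering the same question at scale with SAT solving and then extracting the dependency circuit itself via Craig interpolation.

  # Toy functional-dependency check: f is a function of (g1, ..., gk) iff no two
  # input assignments give the same g-values but different f-values.  Brute force
  # here; SAT solving answers the same question for circuits far too large to enumerate.
  from itertools import product

  def depends_functionally(f, gs, n_inputs):
      seen = {}                                   # g-values -> f-value observed
      for bits in product((0, 1), repeat=n_inputs):
          key = tuple(g(*bits) for g in gs)
          if seen.setdefault(key, f(*bits)) != f(*bits):
              return False                        # same g-values, different f-value
      return True

  parity = lambda a, b, c: a ^ b ^ c
  print(depends_functionally(parity, [lambda a, b, c: a ^ b, lambda a, b, c: c], 3))      # True
  print(depends_functionally(parity, [lambda a, b, c: a & b, lambda a, b, c: a | b], 3))  # False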

A circuit construct for SAT-based verification.
            
A squid.
title: Reduction of Interpolants For Logic Synthesis
authors: John Backes and Marc Riedel
presented at: The International Conference on Computer-Aided Design, San Jose, CA, 2010.

Pdf.jpg
Paper

Ppt.jpg
Slides

Please see our "Publications" page for more of our papers on this topic. (Papers on SAT-based circuit verification, that is, not on squids.)

Mathematics

"Mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true." –– Bertrand Russell (1872–1970)

The great mathematician John von Neumann articulated the view that research should never meander too far down theoretical paths; it should always be guided by potential applications. This view was not based on concerns about the relevance of his profession; rather, in his judgment, real-world applications give rise to the most interesting problems for mathematicians to tackle. At their core, most of our research contributions are mathematical contributions. The tools of our trade are discrete math, including combinatorics and probability theory.

Mathematics, before the era of LaTeX.
title: Uniform Approximation and Bernstein Polynomials with
Coefficients in the Unit Interval
authors: Weikang Qian, Marc Riedel, and Ivo Rosenberg
appeared in: European Journal of Combinatorics, Vol. 32, No. 3, pp. 448–463, 2011.

Pdf.jpg
Paper

Please see our "Publications" page for more of our papers on this topic.