Unconventional Fabrication: From Bits to Atoms and Back Again

 

K. Eric Drexler 

Nanorex

 

     Fabrication based on mechanically directed molecular manipulation can be regarded as an unconventional form of computation in which the output pattern is valued more for its physical embodiment than for its information content. Indeed, there are fundamental physical parallels between this mode of molecular fabrication and classical digital logic. Regarding applications, advanced molecular fabrication methods will enable the production of an unprecedented range of atomically precise structures, providing new means for implementing unconventional computational systems. Developments in molecular nanotechnologies, including DNA engineering, are steps toward implementing molecular fabrication systems of this kind.

     Among the similarities between classical digital logic and directed molecular manipulation are discreteness of states, noise margins separating those states, and the consequent potential for exponentially low error rates. Shared physical issues include thermal fluctuations and the role of bit-erasure and dissipative driving forces in determining energy efficiency. Both computing and fabrication systems have potential implementations based on functional components of nanometer scale and high operating frequency. Each provides the basis for a transition from specialized analog systems to programmable systems with new and broad capabilities.

Among the differences, however, are the diversity of non-substitutable operations in fabrication (where there is nothing analogous to NAND or NOR) and the absence of a crisp concept of universality. A difference of great practical importance is the asymmetry of their mutual causal relationships: Fabrication systems can make both fabrication and computing systems, while computing systems can make neither. Substituting "design" for "fabrication", these roles are reversed.

Many unconventional computing systems will require unconventional structures if they are to be implemented. In many instances, intricate structures, small devices, and great precision will be either desirable or necessary. Molecular fabrication systems can enable the realization of a broad class of structures of this sort, in a wide range of materials, with components manufactured to complex, atomic specifications. This prospect will, I hope, suggest possibilities that encourage investigation of novel concepts and physical embodiments for unconventional computing. (For example, I'd like to see theoretical exploration of computing devices that exploit atomically precise structures and competing electronic domains in strongly correlated electronic materials.)

Currently, progress in molecular nanotechnologies is dominated by self-assembly. Structural DNA nanotechnologies are now able to routinely fabricate precise, million-atom, 100-nm-scale structures with hundreds of independently addressable binding sites for other components. Exploiting this technology to provide frameworks for modular composite nanosystems offers a path toward technologies for both molecular fabrication and unconventional computing.

 

In vitro synthetic biology: from self-assembly to biochemical circuits

 

Erik Winfree

Caltech

 

     Biological organisms are a beautiful example of programming. The program and data are stored in biological molecules, and the algorithms are carried out by molecular and biochemical processes.  If we understood how to program biomolecular systems, what could we create?  Molecular machines, systems, an artificial cell, from scratch?  In this talk I will discuss our lab's attempts to get our heads around these questions by focusing on information and information processing in molecular systems.  Three examples will be how to program the self-assembly of DNA tiles to create algorithmically patterned crystals, how to program in vitro transcription networks to obtain dynamical behavior, and how to program nucleic acid interactions to create digital logic circuits.

 

Biological Computing Substrates

 

Klaus-Peter Zauner

University of Southampton, UK

 

      A crucial difference sets apart present computing technology from the information processing mechanisms utilised by organisms: the former is based on formalisms which are defined in disregard of the physical substrate used to implement them, while the latter directly exploit the physico-chemical properties of materials. There are many advantages to isolating operation from implementation, as is the case in current computers---but these come at the cost of low efficiency. In applications where size and energy consumption are tightly restricted, or where real-time response to ambiguous data is required, organisms cope well but existing technology is unsatisfactory.

Taking heed of these clues from biology, the question arises of how the realm of computer science can be extended from formal to physical information paradigms. The aim is to arrive at a technology in which the course of computation is driven by the physics of the implementation substrate rather than arbitrarily enforced. The traditional tools and approaches of computer science are ill suited to this task. In particular, it will be necessary to orchestrate autonomously acting components to collectively yield desired behaviour without the possibility of prescribing individual actions.

Bio-electronic hybrid systems can serve as a starting point to explore approaches to computing with autonomous components. We take a two-pronged approach in which we recruit both molecules and complete cells as biological computing substrates. Molecules offer reproducible nonlinearity, self-assembly, and high integration density of complex input-output mappings. Cells, on the other hand, provide cheap and fast nano-engineering through self-reproduction, built-in quality assurance through testing at the point of assembly, self-reconfiguration, and self-repair. Molecules, however, require infrastructure, and cells are typically too complex for efficient computation.  Our expectation therefore is that in the long term practical biological computing substrates will be situated at the supramolecular and subcellular level, i.e., at the interface between inanimate and animate matter.

 

 

Physarum polycephalum on a Chip for Bio-hybrid Information Processors

 

Ferran D. Revilla, Hywel Morgan and Klaus-Peter Zauner

University of Southampton, UK

 

The true slime mold Physarum polycephalum plasmodium, a giant amoeboid cell, has raised interest for its information processing capabilities [1]. Despite the lack of a centralised control organ, P. polycephalum is able to integrate information from different sources (temperature, light, chemical environment) and from different areas of its surface, and is capable of generating a global and seemingly smart response to adapt to its environment [2].  P. polycephalum can be seen as a highly distributed natural computing system in which signals are conveyed among parts of the plasmodial cell through the propagation of contraction waves along a mesh of intracellular veins.

In ongoing work towards integrating this organism into bio-hybrid devices we have created a novel micro-device.  A plasmodium is enclosed by a thin gas-permeable layer of poly-dimethyl-siloxane (PDMS) on the patterned copper side of a printed circuit board (PCB) and by a polycarbonate filtration membrane on the reverse. A micro-fluidic system, manufactured from PDMS, is clamped to the top of the filtration membrane to provide both a suitable environment to sustain the slime mold for over a week, and to permit localised stimulation with chemical signals. Furthermore, this micro-chip, with a size of 20 by 25 mm, also enables the monitoring of the intracellular oscillations implicated in P. polycephalum's information integration [3]. To this end, electrodes etched into the PCB and sealed with photo-resist are used for impedance measurements.  By performing simultaneous optical density and impedance measurements we have verified that the impedance measurement technique provides an alternative possibility to interface with the plasmodium.

The micro-chip is useful in the study and characterisation of natural computation and information integration processes in P. polycephalum.  It also provides a potential route for integrating plasmodia into devices such as living sensors or robot controllers [4].

 

[1] S. Tsuda, M. Aono, Y.-P. Gunji, Robust and emergent Physarum logical-computing, BioSystems 73 (2004) 45-55.

[2] Y. Miyake, S. Tabata, H. Murakami, M. Yano and H. Shimizu, Environment-dependent self-organization of positional information field in chemotaxis of Physarum plasmodium, Journal of Theoretical Biology 178 (1996) 341-353.

[3] Y. Miyake, H. Tada, M. Yano and H. Shimizu, Relationship between intracellular period modulation and external environment change in Physarum plasmodium, Cell Structure and Function 19 (1994) 363-370.

[4] S. Tsuda, K.-P. Zauner and Y.-P. Gunji, Robot control with biological cells, BioSystems 87 (2007) 215-223.

 

Computing by propagating patterns: semantics of reaction-diffusion and Physarum machines

 

Andrew Adamatzky

University of the West of England, UK

 

A paradigm of reaction-diffusion computing states that any problem, suitably encoded in the concentration profiles of a chemical medium, can in principle be solved by propagating and interacting chemical and phase waves. There is a huge gap, however, between constructing a couple of logical gates in a medium and embedding a realistic logical circuit, or solving a large-scale real-world optimization problem.  In this talk we therefore aim to fill the gap by adopting a goal-oriented approach to designing non-linear medium computers. To start with, we show how famous problems of computational geometry (plane tessellation and skeletonisation) and robot control (taxis and obstacle avoidance) can be solved in linear time in reaction-diffusion chemical systems. We then establish a deficiency of reaction-diffusion computing devices by demonstrating that it is impossible to invert a Voronoi diagram, or to approximate a minimum spanning tree or a shortest path, in either excitable or precipitating chemical systems without bringing in external computing structures. To cure this computational inadequacy we must allow just a few points of a wave-front to guide the propagation of the computing patterns. Such a constraint is realized in a slime mold, a membrane-enclosed analogue of a reaction-diffusion medium. We provide experimental evidence that the Physarum slime mold interactively computes a succession of proximity graphs: nearest-neighbour, spanning tree, relative neighbourhood and Gabriel graphs. The final part of the talk addresses the programmability of non-linear medium computers, the semantics of reaction-diffusion computing based on Hagiya automata rules, and an interpretation of Physarum computing in terms of Kolmogorov-Uspenskii machines.

 

Cell-Like Computation in Complex Synthetic Nanoscale Systems

 

M. L. Simpson

Oak Ridge National Laboratory

 

     In a landmark paper published in Science in 1972, Anderson observed “that at each level of complexity entirely new properties appear, and the understanding of the new behaviors requires research which I think is as fundamental in its nature as any other”. This observation highlights a grand challenge facing the nanoscience community: moving beyond just the synthesis of individual or homogeneous arrays to complex arrangements of nanoscale materials that lead to new classes of functionality.  Responding to this grand challenge presents insurmountable obstacles for the classical approach of reductionism, which works to understand phenomena in segments rather than as a system, when confronting the twin difficulties of scale and complexity. Instead, what is required is a rethinking of the usual paradigm of research flowing from synergies between theory, synthesis, and characterization, as there is no general theory to guide the synthesis and organization of nanoscale materials into highly interactive collectives. I will discuss a research paradigm that includes a fourth component – the top-down observation of the organization and associated function of natural complex systems, and the transfer of these principles into the bottom-up synthesis of complex synthetic nanoscale systems. The best examples of high levels of functionality emerging from ensembles of nanoscale elements are found in biological cells that perform extremely complex functions, including sensing, communication, navigation, cooperation, and even fabrication and organization of synthetic nanoscale materials, functions which undoubtedly include computation on many different levels of the system. There are striking differences when synthetic and natural nanoscale systems are compared, and these differences are found both in scale (density of functional elements) and complexity (density of interactions between the elements); it follows that nanoscience efforts should focus both on the synthesis of nanomaterials (scale) and their organization into ensembles (complexity). The philosophical underpinning of this approach is well captured in a phrase attributed to Feynman: “What I cannot create I do not understand.” This talk will focus on this connection between design and understanding as it relates to nanoscale systems that lead toward cell-like complexity in synthetic systems.

On the computational capabilities of physical systems

 

David H. Wolpert

NASA Ames

 

In this talk strong limits on computation in the physical universe are presented.  These limits are based on a novel definition of computation, "inference devices".  Inference devices model actual physical computers, in contrast to Chomsky hierarchy constructions like Turing machines.

1) There cannot be an inference device to which one can pose all computational tasks concerning the universe.

2) No inference device can correctly carry out any computational task in the subset of such tasks that can be posed to it.

In particular, no inference device can take the state of an arbitrary system external to that device as input, and correctly predict that system's future state before that future state actually occurs. This result also means that there cannot exist an infallible, general-purpose observation apparatus. It also means that there cannot be an infallible, general-purpose control system.

(1) and (2) do not rely on external systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold for an infinitely fast, infinitely dense inference device, with computational powers greater than those of a Turing machine.

3) There are impossibility results for inference devices that are analogous to Chomsky hierarchy results concerning universal Turing Machines and the Halting theorem, and results concerning the (im)possibility of certain kinds of error-correcting codes.

4) There is an analogue of algorithmic information complexity for inference devices, "prediction complexity".

A task-independent bound exists on how much the prediction complexity of a computational task can differ for two different universal inference devices used to solve that task. This bound is similar to the "encoding" bound governing how much the algorithmic information complexity of a Turing machine calculation can differ for two universal Turing machines.
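For reference, the classical "encoding" bound alluded to here is the invariance theorem of algorithmic information theory: for any two universal Turing machines U and V there is a constant c_{U,V}, independent of the string x, such that

\[ |K_U(x) - K_V(x)| \le c_{U,V} . \]

The analogous result stated above bounds, in a task-independent way, the difference in prediction complexity between two universal inference devices.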

5) Either the Hamiltonian of our universe prohibits a certain type of computation, or prediction complexity is unique (unlike algorithmic information complexity).

 

Both Continuous and Discrete

 

Norman Margolus

MIT

 

     In computational physics, continuous motion is usually regarded as a limit approached as resolution in space and time is increased.  An unconventional alternative is to employ computational models which exactly reproduce a finite-space and finite-time sampling of the behavior of a continuous dynamics.  Fredkin's Billiard Ball Model is a well-known example of a system of this sort: with constrained initial conditions and when viewed at integer times, the continuous classical mechanical time evolution of colliding hard spheres exactly reproduces a discrete digital dynamics.

     This talk is about these sorts of unconventional computational models, which "obey" a continuous dynamics.  Such models are interesting from the viewpoint of theoretical physics.  Like quantum systems, they have both continuous and discrete aspects.  They make contact between the continuous language of physics and the discrete language of computation, allowing continuum-physics ideas such as momentum and energy to be directly carried over into the discrete world.  They also provide examples of finite-state systems that obey classical mechanics, and so provide a classical substratum for counting states both in statistical mechanics and in classical dynamics.

 

CNN: A Brain-like Computer on a Chip

 

Leon  Chua

University of California, Berkeley

 

     CNN is an acronym for Cellular Nonlinear Networks. It is endowed with a non-von Neumann architecture reminiscent of von Neumann's Cellular Automata, but computes via continuous flows along vector fields. It is also endowed with the key feature of a von Neumann architecture in that it is fully programmable via a user-friendly C-like language. It is a massively parallel processing machine where each cell is a caricature of a neuron, and where computation is both brain-like and emergent, made possible via the principle of local activity. All these unconventional features have been encapsulated in a tiny 176 x 144 silicon chip, thereby providing an enabling technology for many mission-critical tasks currently solvable only by supercomputers.

 

 Cellular neural network computing in physics

 

Maria-Magdolna Ercsey-Ravasz

Babes-Bolyai University, Cluj-Napoca

 

     Unconventional computing has been a very important research topic since it became clear that the power of conventional digital computers cannot keep growing at the same speed as it did in the last decades. One of the many new computational paradigms being exploited is cellular neural network computing, often also called cellular wave computing. It is based on the theory of cellular neural/nonlinear networks (CNN) [1] and is experimentally realized by different physical principles in the architecture of the CNN Universal Machine (CNN-UM) [2]. This new computational paradigm is well suited to some special, very complex problems, and CNN-UM chips can be used in combination with conventional computers.

     Possibilities for performing fast image processing, developing different sensors, solving differential equations, and implementing cellular automata models have already been studied. Here we show that, using the fast realistic random number generator developed on the CNN-UM presented in [3], a new and very broad class of problems from physics can be handled on this computer. These problems are mainly lattice models, which may require random initial conditions or stochastic (Monte Carlo type) dynamics. Two basic problems are studied: the two-dimensional site-percolation problem and the two-dimensional Ising model.  Each represents a whole class of problems, and the algorithms developed could easily be adapted to many kindred problems, such as bond percolation, directed percolation, the diluted Ising model, etc.  The algorithms were tested on an experimental version of the CNN-UM (the ACE16K chip), which has 128 x 128 cells. The results are in good agreement with results obtained on digital computers. Time measurements suggest that the development trend of CNN-UM chips - increasing lattice size and the appearance of three-dimensional chips - will give the CNN-UM an important advantage in the future.
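As a point of reference for the problem class (not the CNN-UM implementation itself), the site-percolation task can be sketched in a few lines on a conventional computer; the sketch below assumes NumPy and SciPy and uses the same 128 x 128 lattice size as the ACE16K chip.

import numpy as np
from scipy.ndimage import label

def spans_vertically(p, size=128, rng=None):
    # Random initial condition: each site occupied with probability p.
    rng = np.random.default_rng() if rng is None else rng
    occupied = rng.random((size, size)) < p
    # 4-connected cluster labelling, then test for a top-to-bottom cluster.
    labels, _ = label(occupied)
    top = set(labels[0][labels[0] > 0])
    bottom = set(labels[-1][labels[-1] > 0])
    return bool(top & bottom)

# Crude estimate of the spanning probability around the 2D site-percolation
# threshold p_c ~ 0.5927 on the square lattice.
for p in (0.55, 0.59, 0.63):
    hits = sum(spans_vertically(p) for _ in range(200))
    print(f"p = {p:.2f}: spanning fraction = {hits / 200:.2f}")

On the CNN-UM, by contrast, the random configuration is generated and processed directly in the analog cell array; the digital sketch is only meant to make the problem statement concrete.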

 

[1] L. O. Chua, L. Yang, "Cellular neural networks: Theory", IEEE Transactions on Circuits and Systems, vol. 35, p. 1257, (1988)

[2] T. Roska, L. O. Chua, “The CNN Universal Machine: an analogic array computer”, IEEE Transactions on Circuits and Systems – II, vol. 40, p. 163, (1993)

[3] M. Ercsey-Ravasz, T. Roska, Z. Neda, "Perspectives for Monte Carlo simulations on the CNN-UM", Int. J. of Modern Physics C, Vol. 17, No. 6, p. 909, (2006)

 

Self-replicating materials, information processing, and the W-Machine

 

Steen Rasmussen

LANL

 

Self-replicating materials could presumably provide important properties for novel computing substrates, in particular if the self-replicating materials could communicate with conventional digital computing substrates.

We discuss recent advances in assembling self-replicating materials from scratch, as we have demonstrated experimentally (DeClue et al., 2007) that an informational molecule can catalyze the metabolic production of containers, connecting the main components of minimal life in a protocell (Rasmussen et al., 2004).  Theoretical results predict that self-replicating informational molecules catalytically coupled to the existing growing container system (integration not yet experimentally implemented) should ensure balanced reproduction of all the protocellular components when integrated (Rocheleau et al., 2007; Munteanu et al., 2007).

            The objective of the so-called W-machine (PACE) is to provide a computer programmable parallel microfluidics support system for the evolution of protocells and other complex chemical systems connecting digital computing and chemical information processing via microfluidics.  A not yet implemented W-machine design (Goranovic et al., 2006) is discussed both as a complementation mechanism (life-support system) for the development of protocells and as a potentially digitally controlled interface to program (evolve) protocellular functionalities.

 

M. DeClue, P-A. Monnard, J. Bailey, J. Boncella, S. Rasmussen & H. Ziock, Information molecule mediated photocatalysis of container formation in protocells, submitted, 2007.

G. Goranovic, S. Rasmussen & P. Nielsen, Artificial life-forms in microfluidic computers, in Proceedings of µTAS 2006, 10th International Conf. on Miniaturized Systems for Chemistry and Life Sciences, Tokyo, Japan, November 5-9, 2006.

Munteanu, Rasmussen, Ziock & Sole, Generic Darwinian selection in catalytic protocell assemblies, Phil. Trans. R. Soc. B, in press, 2007.

PACE: European Commission sponsored 6th Framework Project “Programmable Artificial Cell Evolution”, based on the W-machine concept, which was coined by Bedau, McCaskill, Packard & Rasmussen, in Cannobio, 2002.

Rasmussen, Chen, Deamer, Krakauer, Packard, Stadler & Bedau, Transitions between nonliving and living matter, Science 303 (2004) 963.

Rocheleau, Rasmussen, Nielsen, Jacobi & Ziock, Emergence of protocellular growth laws, Phil. Trans. R. Soc. B, in press, 2007.

 

Biomolecular Automata Using Deoxyribozymes:

Accomplishments and Open Problems

 

Darko Stefanovic

University of New Mexico

 

      Deoxyribozymes are nucleic acid enzymes which catalyze DNA reactions and which are largely made out of DNA.  In our case, the enzymes, called phosphodiesterases, catalyze the cleavage of an oligonucleotide substrate (a short sequence of single-stranded DNA) into two shorter oligonucleotides.  Also in our case, the enzymes are modified to include allosteric regulation sites to which specific control molecules can bind and so affect the catalytic activity. The enzyme is modularly designed to include one type of regulation site to which a control molecule must bind before the enzyme can complex with the substrate; thus this control molecule promotes catalytic activity. Another type of regulation site allows the control molecule to alter the conformation of the enzyme's catalytic core, such that even if the substrate has bound to the enzyme, no cleavage occurs; thus this control molecule suppresses catalytic activity.  We call the allosterically regulated enzyme a logic gate, we call the control molecules the inputs to the logic gate, and we call the cleavage products the outputs of the logic gate. This basic logic gate corresponds to a multi-input conjunction, possibly with various input polarities.  These are the first chemical logic gates in which inputs and outputs are of the same kind (namely oligonucleotides), so cascading gates is possible without any external interfaces (such as photoelectronics).  The primary inputs are compatible with sensor molecules that could detect small molecules or, more generally, cellular disease markers.  Final outputs can be tied to the release of small molecules, in particular drug molecules. Thus, it will eventually be possible to make therapeutic decisions cell-by-cell according to a complex decision function based on many attributes of the cell.  This is sometimes referred to as "intelligent drug delivery".  On the one hand, the limits of this approach to intelligent drug delivery are defined by the extent of clinical knowledge about what kind of decision making is needed; on the other, the limits depend on what decision functions are feasible. In this talk, I shall explore the limits of deoxyribozyme logic devices from a computational standpoint, examining both what has been achieved in the wet lab and what is on the drawing board.
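As a purely illustrative abstraction (the function and names below are hypothetical, not the wet-lab system), the gate described above can be modelled as a conjunction over inputs with per-input polarities:

def deoxyribozyme_gate(inputs, polarities):
    # inputs[i]     : True if control molecule i is present
    # polarities[i] : True for a promoting site (input must be present),
    #                 False for a suppressing site (input must be absent)
    # Returns True when the substrate would be cleaved (output released).
    return all(x == p for x, p in zip(inputs, polarities))

# A gate with two promoting sites and one suppressing site computes
# i1 AND i2 AND (NOT i3):
print(deoxyribozyme_gate([True, True, False], [True, True, False]))  # True
print(deoxyribozyme_gate([True, True, True],  [True, True, False]))  # False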

 

DNA Self-assembly and Computer System Fabrication

 

Chris Dwyer

Duke University

 

     The migration of circuit fabrication technology from the microscale to the nanoscale has generated a great deal of interest in how the fundamental physical limitations of materials will change the way we engineer computer systems. The changing relationships between performance, defects, and cost have motivated research into so-called disruptive or exotic technologies. This talk will present the theory, design, and methods of fabrication for DNA self-assembled nanostructures within the context of circuit fabrication. The advantages of this technology go beyond the simple scaling of device feature sizes (sub-20nm) to enable new modes of computation that are impractical under the constraints of conventional fabrication methods. A brief survey of several computer architectures that can take advantage of this new technology will also be presented.

 

Quantum Computation:  How Goes It?

 

Howard Barnum

LANL

 

I will survey the current state of quantum computation and the prospects and motivation for its further development.  Specifically, I'll summarize the quantum model of computation, emphasizing that this provides a theoretical model of computation distinct from the classical one, and explaining the reasons for thinking it may be more powerful than the classical model but not sufficiently powerful to do NP-complete problems in polynomial time.  This will provide concrete illustrations of the potential advantages of quantum computation for specific tasks, and an idea of the kinds of tasks quantum computation might be expected to speed up.

      I'll then explain why it seems to be so hard to implement quantum computation, survey the state of the art in experimental physical implementation and the difficulties it faces, sketch how error correction and fault-tolerant computation can in principle be implemented for quantum computation in a manner closely analogous to the way they can be implemented for classical computation, and give some reasons for my subjective probability, of around 0.95, that quantum computers will be in use and outperforming classical computers on some tasks within 40 years.

 

Intrinsic Computation in Quantum Processes: An Information-Theoretic Analysis Using Quantum Finite-State Machines

 

Karoline Wiesner, John Mahoney, and James P. Crutchfield

University of California, Davis

 

     We consider a quantum process as a quantum information source. Correlations in the generated sequences show that measured quantum systems store and process information in their behavior.  We analyze this form of intrinsic computation by means of various information-theoretic quantities: entropy rate, excess entropy and transient information. Using a quantum finite-state machine representation of a quantum system we analyze a set of small-scale quantum systems and find computational capacities of varying degree. The method exemplified here for small-scale systems is generalizable to large-scale systems.

     We recently introduced ways to measure information storage in quantum systems, using quantum finite-state machines (QFMs) as a computation-theoretic model that accounts for measurement effects [1,2]. We consider a quantum system, subject to a time-independent Hamiltonian and a measurement procedure, as a quantum information source. We refer to a quantum process as the joint probability distribution over a bi-infinite chain of variables produced by a quantum information source. A QFM is a representation of such a quantum process.

     The information measure we introduced is the quantum entropy rate h_u, an equivalent of the classical entropy rate introduced by Shannon [3]. It measures the amount of true randomness in the process. For the class of so-called deterministic QFMs we gave a closed-form expression for h_u in Ref. [2]. The next information measure, the quantum excess entropy, quantifies the shared information between a quantum process's past and its future. The third, the quantum transient information, determines the difficulty with which an observer comes to know the internal state of a quantum process through measurements. We use this set of measures to quantify the information storage and processing capacity, or, for short, the computational capacity, of a quantum process. In this study, we analyze a set of quantum processes generated by QFMs with only a few states. We find a wide range of computational capacities for these small-scale systems, depending on the Hamiltonian and the measurement procedure. The results encourage a new perspective on information storage and processing as being intrinsic to the behavior of even simple quantum systems. The methods presented here are generalizable to quantum systems of any size. This approach should lead to a new perspective on quantum processes and their computational capacities as they are observable in molecular systems. The importance of molecular systems as electronic memory has recently been illustrated [4]. Our information-theoretic analysis of dynamics is suggested as a guide in the search for new forms of computational substrates.
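For orientation, the classical counterparts of these three quantities can be written in terms of the Shannon block entropy H(L) of length-L measurement sequences (one standard formulation; the quantum versions in [1,2] adapt these to measured QFMs, with h_u above corresponding to h_\mu below):

\[ h_\mu = \lim_{L \to \infty} \frac{H(L)}{L}, \qquad
   \mathbf{E} = \lim_{L \to \infty} \bigl[ H(L) - h_\mu L \bigr], \qquad
   \mathbf{T} = \sum_{L=0}^{\infty} \bigl[ \mathbf{E} + h_\mu L - H(L) \bigr]. \]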

 

[1] K. Wiesner and J. P. Crutchfield. http://arxiv.org/abs/quant-ph/0608206, 2006.

[2] J. P. Crutchfield and K. Wiesner. http://arxiv.org/abs/quant-ph/0611202, 2006.

[3] C. E. Shannon. Bell Sys. Tech. J., 27:379-423, 623-656, 1948.

[4] J. E. Green et al. Nature, 445:414-417, 2007.

 

Programming Programmable Matter

 

Seth Copen Goldstein

Carnegie Mellon University

 

       Taking the inevitability of (sub-)micron-scale manufacturing as a starting point, I will describe Claytronics, a new form of programmable matter composed of individual sub-millimeter-scale units designed to scale to ensembles of millions of cooperating units.  One of our driving goals is for the ensemble to act coherently and thereby mimic, with high fidelity and in 3-dimensional solid form, the look, feel, and motion of macro-scale objects.

     One of the main challenges to realizing programmable matter is programming the ensemble.  The programmer must achieve a global goal for the ensemble by managing the collection of programs running on each unit in the ensemble.  In addition to the normally hard problem of programming a distributed system, claytronics imposes significant additional constraints, such as an ever-changing topology of connections, limited resources, and uncertainty from operating in the physical world.  In this talk we describe some of the challenges and proposed solutions for programming such ensembles.

 

 

Properties for Engineered Emergence

 

Jake Beal

MIT

 

      It is difficult to establish engineering control over the behavior of aggregates of unreliable devices with complicated interaction patterns.  I take a linguistic view of this problem, searching for mechanisms that simplify the composition and abstraction of complicated behaviors.  From my work on various problems of aggregate control in cognitive architectures and spatial computing, I have noticed common themes in mechanisms that solve them.  From these, I extract four properties which seem to help in engineering robust aggregate behavior---self-scaling, sparseness, gradual degradation, and failure simplification---and give examples of how they can be exploited.

 

 

Four Principles of Adaptive Information Processing in Decentralized Systems

 

Melanie Mitchell

Portland State University

 

     Since the beginning of the computer age, the terms "information processing" and "computing" have been used to describe the dynamics of natural adaptive systems, ranging from the brain to cellular metabolism and genetic regulation.  It is widely assumed that information processing in such systems takes place in a very different manner than in von Neumann-style computing architectures.  In particular, the architectural features of these natural systems---large and varying numbers of relatively simple, stochastic, and noisy components, limited, dynamic, and unreliable connections, and no central control---require a radically different model of computation than the traditional one.

     In this paper I describe the mechanisms underlying information processing in three different natural systems---the immune system, ant colonies, and cellular metabolism---and discuss the relevance of these mechanisms for adaptive behavior in these systems.  I then abstract four general principles that I claim are key to adaptive information processing in decentralized systems such as these.  These principles deal with the representation and transmission of information, the essential role of randomness, the importance of fine-grained parallel architectures, and the interplay of bottom-up and top-down processes in all such systems.  Finally, I describe how these principles might be used in artificial intelligence or artificial life applications to achieve robust and fluid pattern recognition and learning.

 

Robust, adaptable, powerful computation, as inspired by Nature: A Grand Challenge for Computing Research

 

Susan Stepney

University of York, UK

 

      How can we build complex computational systems – systems that  are autonomous, adaptable, and robust – from millions of less reliable and simpler components?  How can we build them to perform correctly and safely in an unpredictable, changing and hostile environment, and over long periods of time?  Such tasks are currently well beyond the state of our computational art, and as our technology moves towards ever smaller and ever more numerous nano-scale and quantum devices, these tasks will get only more difficult. 

      And yet biological processes manage to do such things routinely.  Living creatures are remarkably robust and adaptable.  They can survive injury, damage, wear and tear, and continual attack from other creatures.  Biology manages to take huge amounts of potentially unreliable matter and use self-checking, self-repair, self-reconfiguration, multiple levels of redundancy, multiple levels of defence, to develop adaptable complex biological organisms that continue to work for long periods of time in an extremely hostile environment. 

       So, in an attempt to cope with complexity, researchers are drawing inspiration from biology, which seems to have already discovered the answers, to develop a host of bio-inspired algorithms in evolution (genetic algorithms, genetic programming), neurology (artificial neural networks), immunology (artificial immune systems), plant growth (L-systems), social networks (ant colony optimisation), and more.

       Researchers are also beginning to explore open complex adaptive systems, where new resources, and new kinds of resources can be added at any time, either by external agency, or by the actions of the system itself.  Such new resources can fundamentally alter the character of the system dynamics, and so allow new possibilities, new adaptations.  Our current computational systems are beginning to open themselves, for example, through the continuing dialogue between user and machine, through continued new connections to networks such as the Internet, and through robotic systems controlling their own energy sources.

      One of the most exciting, and seemingly weird, recent developments is the non-classical paradigm of quantum computing.  This has emphasised the fundamental link between computation and its physical embodiment in the real world.  Still in relative infancy, it holds out the promise of massive increases in computation power, of untappable communication channels, and of spooky effects such as quantum teleportation.

      I will describe the background and progress of one of the UK’s Grand Challenges in Computing Research: “Journeys in Non-Classical Computation” [1], which seeks to uncover, explore, and generalise all the many diverse non-classical computational paradigms, to produce a fully mature and rich science of all forms of computation, that unifies the classical and non-classical (natural) computational paradigms.  Such a mature computational science will allow us to design and build robust, adaptable, powerful, safe, complex computational systems.  It will help researchers to uncover deep biological truths: which features of biology are necessary for correct robust functioning (so true of any living organism)?  Which are necessary only because of the particular physical embodiment (carbon-based terrestrial life-forms)?  Which are merely contingent evolutionary aspects (and so could be different if  “the tape were played again”)?  It will help researchers to uncover deep physical truths: Which are more fundamental, the laws of computation, or the laws of physics?  What is the relationship between logical information (bits) and physical reality?  Are there corresponding laws of computation for the various “non-elementary” physical laws (for example, solid state physics)?

[1]         S. Stepney, S. L. Braunstein, J. A. Clark, A. Tyrrell, A. Adamatzky, R. E. Smith, T. Addis, C. Johnson, J. Timmis, P. Welch, R. Milner, D. Partridge.  Journeys in Non-Classical Computation I: A Grand Challenge for computing research.  Int. J. Parallel, Emergent and Distributed Systems 20(1):5-19, March 2005; and II: Initial journeys and waypoints.  Int. J. Parallel, Emergent and Distributed Systems. 21(2):97-125, April 2006

 

Real-Number Computation through

High-Dimensional Analog Physical Chaotic Neuro-Dynamics

 

Yoshihiko Horio  and Kazuyuki Aihara

Tokyo Denki University

 

     Conventional von Neumann computers have difficulty in solving many complex, large-scale, and ill-posed real-world problems, which include combinatorial optimization problems. However, living things often face such problems in real life, and must quickly obtain suitable answers to them through physical, dynamical, and collective computation with a vast number of neuron assemblies. This highly parallel computation through high-dimensional dynamics (Computation through Dynamics) is completely different from the numerical computation of von Neumann computers (Computation through Algorithm).

      Chaotic itinerancy is a universal dynamics in high-dimensional chaotic systems. It is also observed in the activities of living things, in particular in brain activity. Therefore, it is highly probable that chaotic itinerancy dynamics is useful, effective, and important in information processing for living things. The complexity of the chaotic dynamics comes from the complexity of real numbers. However, conventional digital computers cannot handle almost all real numbers. Consequently, we must use a physical (real-number) system, such as analog electrical circuits with continuous variables, to fully exploit the computational ability of chaotic systems.

     In this paper, we explore a novel computational mechanism based on real-number processing with high-dimensional physical chaotic dynamics. We first develop two hardware systems using analog chaotic neuron integrated circuits, which can handle real numbers through their continuous state variables, in order to solve quadratic assignment problems (QAPs) with chaotic itinerancy dynamics. These systems are designed considering the hierarchy of information processing in the brain. The first system takes a top-down approach. That is, a solution of the QAP is constructed by observing an 800-dimensional chaotic dynamics of the analog chaotic neural network. In the second system, which takes a bottom-up approach, a heuristic algorithm, Tabu search in our case, is driven by a chaotic neuro-dynamics with 300 dimensions. Through experiments, we demonstrate that both systems efficiently solve the QAPs through the physical chaotic dynamics. Moreover, we analyze the underlying mechanism of the highly parallel and collective analog computation from a nonlinear dynamics point of view. Furthermore, we are currently extending our hardware systems to large-size QAPs that a conventional digital computer cannot solve.
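For completeness, the quadratic assignment problem referred to above can be stated as follows: given n facilities and n locations with flow matrix A = (a_{ij}) and distance matrix B = (b_{kl}), find a permutation \pi of \{1, \dots, n\} minimizing the cost

\[ C(\pi) = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij}\, b_{\pi(i)\pi(j)} , \]

a combinatorial optimization problem that is NP-hard in general.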

 

Analog State Space in Memories and in Stochastic Supra-Turing Computation

 

Hava Siegelmann

University of Massachusetts, Amherst

 

In many dynamical systems that model memory, discrete attractors are formed in a relatively small number of locations in the state space. In biological memory systems, however, there seems to be a continuum of attractors. Also, current models show how a single input triggers a well-specified attractor, while in biological systems it takes a stream of inputs to form a flow in memory space. We study Input-Driven Dynamic Attractors, which are affected by both input dynamics and the trade-off between input dynamics and inherent network dynamics. We demonstrate that because the dynamics are affected by continuously valued parameters, a continuum of attractors emerges. The flow in attractor space depends not only on the last inputs but on the complete history; it may be complex and bifurcations may emerge, and the attractors themselves can be more complex than fixed points. Our model has the ability to dynamically explain attention biases, memory formation and reconsolidation, as well as context effects.

We go on to explain why, when a continuous state space is involved, the computation surpasses that of the classical digital models of computation. We also demonstrate a similar effect for stochastic or asynchronous discrete models which sample from external analog processes.

The Chemical Paradigm, Programming in Space and
the Implementation of Autonomic Systems

 

Jean-Louis Giavitto

CNRS & Université d'Evry Val d'Essonne

 

     The chemical reaction metaphor (Banâtre, Le Métayer, 1993) describes computation in terms of a chemical solution in which molecules (representing data) interact freely according to reaction rules. Chemical solutions are represented by multisets (a data structure that allows several occurrences of the same element). Computation proceeds by rewritings which consume elements and produce new elements according to conditions and transformation rules. The result of a chemical program is obtained when a stable state is reached, i.e., when no reaction can take place anymore. For instance, the reaction:

primes = replace x, y by x if x div y

computes the prime numbers less than or equal to a given number N when applied to the multiset of all numbers between 2 and N. The reaction replaces any couple of elements x and y such that the reaction condition holds (x div y is true if and only if x divides y) by the divisor x alone, removing the divisible element y. This process goes on until a stable state is reached, that is to say, when no divisible element remains.
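A minimal sketch of this reaction on a conventional computer (illustrative only; it serializes what the chemical model performs as nondeterministic parallel rewriting) might look like:

import random

def chemical_primes(n):
    # The chemical solution: the multiset {2, ..., n} as a list.
    solution = list(range(2, n + 1))
    while True:
        # All couples (x, y) at distinct positions with x dividing y.
        pairs = [(i, j)
                 for i in range(len(solution))
                 for j in range(len(solution))
                 if i != j and solution[j] % solution[i] == 0]
        if not pairs:
            return sorted(solution)   # stable state: no reaction applies
        i, j = random.choice(pairs)   # reactions fire nondeterministically
        del solution[j]               # 'replace x, y by x if x div y'

print(chemical_primes(20))   # [2, 3, 5, 7, 11, 13, 17, 19]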

     The goal of Autonomic Computing is to realize computer and software systems that can manage themselves using self-organization, self-healing and self-optimization properties. The chemical paradigm has been advocated as well suited to express autonomic properties: the reaction rules correspond to the local actions to be taken to react to a perturbation.

      We believe that the relevance of the chemical paradigm for the specification and the high-level programming of large autonomic and parallel/distributed systems comes from two fundamental characteristics:

  1. the multiset data structure and the multiset rewriting device which suitably represent the orderless interactions (reactions) between elements that occur in large parallel or open systems;
  2. the computation of a stable state, such that self-healing, self-protection and self-optimization behaviors can be seen as the stabilization of the system on a fixed point after a transient perturbation (from this point of view, a chemical program is a dynamical system: a state is a multiset, a trajectory of the dynamical system is a program run and a stable state is the result of the program run).

However, these two general statements must be refined:

  1. The direct interactions of arbitrary elements in a system are not always allowed, nor always desirable. The system may exhibit some data organization and only “neighbor” elements may interact. The neighborhood relationship may represent physical (spatial distribution, localization of the resources) or logical constraints (inherent to the problem to be solved).
  2. A multiset stable w.r.t. the reactions represents a solution computed by the program or an admissible state of an autonomic system. This state is best characterized by global properties (e.g. the entire multiset of primes less than or equal to a given number) while the reactions represent local changes (e.g. the suppression of one integer fulfilling some conditions). Therefore, the real difficulty of chemical programming lies in the linking of the local changes to the desired global property.

In this presentation, we will present some concepts and tools in the field of algebraic topology that can be used to face these two problems.

     As shown in the MGS project (Giavitto, Michel; 2002) (Giavitto 2003), topology can be used to constrain the multiset structure to a more adequate organization. Indeed, constraining the universal topology provided by multisets to nested sequences leads to Lindenmayer systems (Rozenberg, Salomaa, 1992); restriction to a discrete lattice corresponds to cellular automata and more general crystalline computations (Margolus, 1999). Topologies of higher dimension can be used to give a direct finite formulation to the field equations of classical physical theories (Tonti, 1972), with obvious interest for numerical applications (Palmer, Shapiro 1994; Tonti, 2001).

     The formalization of such topological structures relies on the notion of chain complex (Munkres, 1984). A chain complex is a cellular space (a discrete space built by gluing elementary spaces or atomic localizations) labelled by values. Chain complexes can be used to formalize a data structure, its organization and its rewriting (Giavitto, Michel, 2002). As a consequence, a chemical reaction (on a multiset or on a more constrained structure) is a function from chains to chains. Such transformations can be very arbitrary, especially when the pattern language is sophisticated. However, it appears that, in a lot of applications, the reactions are restricted to chain homomorphisms (reactions that respect the algebraic structure of chains). These chain homomorphisms are also called cochains. The computational content of a cochain is clear; it corresponds to a simple pattern of distributed computations and hence can be viewed as a skeleton that packages useful and reusable patterns of parallel computations, along the lines of the work in (Skillicorn, 1996). Interestingly, cochains can be interpreted as discrete differential forms (Desbrun et al., 2006; Spicher 2006). Intuitively, a differential form is an entity that appears under the integral sign: it is a local function applied on each “point” of a domain to obtain a global result. Various theorems have been developed in mathematical analysis to link the local (differential) and the global (domain) level: existence of a solution of a differential equation, Stokes' theorem, integration by parts, etc. Each of these results can be translated into the discrete setting and provides tools to reason on the global properties emerging from the local reactions.

 

[Banâtre, Le Métayer, 1993] Jean-Pierre Banâtre and Daniel Le Métayer. Programming by multiset transformation. Communications of the ACM (CACM), 36(1):98–111 (January 1993).

[Giavitto, Michel; 2002] Jean-Louis Giavitto and Olivier Michel. Data Structure as Topological Spaces. In Proceedings of the Unconventional Models of Computation: Third International Conference (UMC 2002), LNCS 2509, Springer 2002.

[Giavitto; 2003] Jean-Louis Giavitto. Topological Collections, Transformations and Their Application to the Modelling and the Simulation of Dynamical Systems. In Proceedings of 14th International Conference on Rewriting Techniques and Applications (RTA 2003), LNCS 2706, pages 208-233, Springer 2003.

[Rozenberg, Saloma, 1992] G. Rozenberg and A. Salomaa. Lindenmayer Systems: Impacts on Theoretical Computer Science, Computer Graphics and Developmental Biology, Springer Verlag, 1992.

[Margolus, 1999] Norman H. Margolus. Crystalline Computation. In Feynman and computation: exploring the limits of computers, pages 267–305, Perseus Books, 1999.

[Tonti, 1972] Enzo Tonti. On the mathematical structure of a large class of physical theories. Accademia Nazionale dei Lincei, estratto dai Rendiconti della Classe di Scienze fisiche, matematiche e naturali, Serie VIII, Vol. LII, fasc. 1, January 1972.

[Tonti, 2001] Enzo Tonti. A Direct Discrete Formulation of Field Laws: The Cell Method. CMES - Computer Modeling in Engineering & Sciences, vol.2, no.2, 2001, pages 237–258.

[Palmer, Shapiro 1994] R. S. Palmer and V. Shapiro. Chain models of physical behavior for engineering analysis and design. Research in Engineering Design, Vol. 5, No. 3, 1994.

[Giavitto et al., 2005] Jean-Louis Giavitto, Olivier Michel, Julien Cohen, Antoine Spicher, Computation in Space and Space in Computation. In Unconventional Programming Paradigms (UPP'04), LNCS 3566, pages 137–152, Springer 2005.

[Bird, de Moor, 1997] Richard Bird and Oege de Moor. Algebra of programming. Prentice Hall, international series in computer science, 1997.

[Munkres, 1984] James Munkres. Elements of Algebraic Topology. Addison-Wesley, 1984.

[Giavitto, Michel, 2002] J.-L. Giavitto and O. Michel. The topological structures of membrane computing. Fundamenta Informaticae, 49:107-129, 2002.

[Skillicorn, 1996] David B. Skillicorn. Communication skeletons. In Abstract Machine Models for Parallel and Distributed Computing, pages 163–178, IOS Press, The Netherlands, 1996.

[Desbrun et al., 2006] Mathieu Desbrun, Eva Kanso, and Yiying Tong. Discrete differential forms for computational modeling. In Discrete differential geometry: an applied introduction, pages 39–54. SIGGRAPH'06 course notes, ACM, 2006.

[Spicher 2006] A. Spicher. Transformation de collections topologiques de dimension arbitraire - Application à la modélisation des systèmes dynamiques. PhD Thesis. Université d'Évry-Val d'Essonne, December 2006.

 

Dynamic Self-Assembly as Computation: An Unconventional Approach to Software Implementation

 

Ann M. Bouchard and Gordon C. Osbourn

Sandia

 

     We have shown that protein dynamic self-assembly processes map naturally onto a formal model of computing, the random access machine (RAM). Specifically, we have shown that many biological processes that do not appear to be information processing, such as synthesis of macromolecules, structural assembly, transport, disassembly, and degradation of molecules, actually do perform computation when viewed from the RAM perspective [1].  

       Dynamic self-assembly brings several novel properties to computational systems.  In biological systems, algorithms yield physical, functional structures, i.e., structures that perform some function, or act as a machine. The assembly of the structure from its constituent parts automatically enables the structure to perform its function: actuate, signal, build other structures, or tear them down. These structures are not fixed or rigid, but instead are easily altered, temporarily combined, and even disassembled, in response to changing conditions or signals. This provides pathways for information and material flow that can be created on-the-fly where needed, and dismantled to terminate these flows. This malleability is not found in many conventional hardware and software architectures.

     The inherent flexibility and reconfigurability of these processes inspired us to base a novel software technology on dynamic self-assembly. The fundamental software entity, termed a "machine," is designed to embody the dynamic self-assembly properties of proteins. Machines have dynamic binding sites that can selectively bind to sites of other machines. The binding or unbinding of sites can trigger data passing, execution, or the exposing, hiding or altering of other sites. The change of other sites can in turn trigger subsequent binding or unbinding of those sites, resulting in further data passing, execution, etc.

     The net result is the dynamic formation of functional software assemblies (a collection of bound machines) that provide data and execution pathways. The software functionality at any given time is determined by the current structure of the machine assemblies. This in turn is determined by the self-assembly and disassembly processes. A pathway can be reconfigured (machines unbound, and possibly re-bound to other machines), and this reconfiguration (effectively, assembly of a different pathway) results in different functionality.
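The following toy sketch (hypothetical names and API, not the authors' implementation) illustrates the flavour of this scheme: binding two exposed sites triggers execution on the owning machines, which can in turn expose further sites and extend the assembly.

class Site:
    def __init__(self, name, on_bind=None, exposed=True):
        # on_bind: callback run on the owning machine when this site binds
        self.name, self.on_bind, self.exposed = name, on_bind, exposed
        self.partner = None

class Machine:
    def __init__(self, name, sites):
        self.name = name
        self.sites = {s.name: s for s in sites}

    def bind(self, site_name, other, other_site_name):
        # Bind one of our exposed sites to an exposed site of another machine.
        a, b = self.sites[site_name], other.sites[other_site_name]
        if not (a.exposed and b.exposed):
            raise ValueError("site not exposed")
        a.partner, b.partner = b, a
        for machine, site in ((self, a), (other, b)):
            if site.on_bind:
                site.on_bind(machine)   # binding triggers execution

# Binding 'out' to 'in' runs the consumer's callback, which exposes a
# downstream site and so allows the assembly to grow further.
def expose_next(machine):
    machine.sites["next"].exposed = True
    print(f"{machine.name}: downstream site 'next' exposed")

producer = Machine("producer", [Site("out")])
consumer = Machine("consumer", [Site("in", on_bind=expose_next),
                                Site("next", exposed=False)])
producer.bind("out", consumer, "in")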

     In this talk, we will demonstrate the software functionality of self-assembling pathways using a graphical user interface (GUI) we have implemented with this approach. For a series of examples, we will describe the user interaction with the GUI (e.g., typing, mouse movement, left click), the pathway self-assembly that is triggered by that user event, and the functionality resulting from the assembly of that pathway. Some novel capabilities that are enabled by the inherently dynamic nature of the self-assembling executable code will also be discussed.

[1] A. M. Bouchard and G. C. Osbourn, “Dynamic self-assembly in living systems as computation,” Natural Computing, 5 (4), 321-362, 2006.

 

Programming self-developing "blob machines" for spatial computing.

 

Fredric Gruau

University of Paris Sud

 

     We present blob computing: a Blob is a generic primitive used to structure a uniform computing substrate into an easier-to-program parallel virtual machine. We find inherent limitations in the main trend of today's parallel computing, and propose an alternative model trying to combine both scalability and programmability. In order to allow scalability of hardware, we consider "spatial computing media" such as 2D cellular automata, amorphous computers, FPGAs, or 2D grids of processors. In order to achieve programmability, we program this hardware using two levels:

     The first is the "system" level: a run-time system implemented on the computing medium. It takes the form of a local rule which can maintain connected regions called blobs. Blobs can be encapsulated. A blob is similar to a deformable elastic membrane filled with a gas of atoms (elementary empty blobs). Blobs are interconnected using channels, which act as springs and bring connected blobs closer to each other. The system implements, in a distributed way, the movement, duplication and deletion of blobs and channels. It can also propagate successive waves to communicate signals in a pipelined way, intra-blob or inter-blob.

     The second is the "programmable" level: the run-time system installs a virtual machine called the "blob machine" above the computing medium. A blob is programmed using a finite-state automaton, with output actions triggering duplication, deletion, movement or communication. Execution starts with a unique ancestor blob that repeatedly duplicates and creates new blobs and new channels, thus generating a network of automata. This virtual blob machine is an example of a "self-developing machine", which is easier to program than a computing medium or a VLSI circuit because of its ability to dynamically create or delete nodes.
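A toy sketch of this programmable level (hypothetical and far simpler than the actual blob ml language) shows how a single ancestor running a small automaton can develop a network by repeated duplication:

import itertools

class Blob:
    _ids = itertools.count()
    def __init__(self, state):
        self.id = next(Blob._ids)
        self.state = state        # current automaton state
        self.channels = []        # links to neighbouring blobs

def step(blob, network):
    # States 'grow<k>' duplicate and count down; 'leaf' halts.
    if blob.state.startswith("grow"):
        depth = int(blob.state[4:])
        next_state = f"grow{depth - 1}" if depth > 1 else "leaf"
        child = Blob(next_state)
        blob.channels.append(child)   # duplication also creates a channel
        child.channels.append(blob)
        network.append(child)
        blob.state = next_state

# Development from a unique ancestor blob.
network = [Blob("grow3")]
while any(b.state != "leaf" for b in network):
    for b in list(network):
        if b.state != "leaf":
            step(b, network)
print(len(network), "blobs,",
      sum(len(b.channels) for b in network) // 2, "channels")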

      We present the blob machine, how to implement blobs, and how one can program with blobs using a higher-level language called "blob ml". We illustrate the execution of many examples of small programs. They all have optimal complexity in the VLSI model of complexity, under some reasonable hypotheses concerning the (not yet fully implemented) system level.

 

Dr. Strangestuff or: How I Learned to Stop Programming and Love Plastic Foam

 

Jonathan W. Mills

Indiana University

 

After ten years of experience with an increasingly sophisticated succession of Rubel's Extended Analog Computers (EACs), some general insights into this new paradigm of computing have emerged. They answer questions such as "How is it possible to compute with machines ranging from deviceless silicon ('empty space') to plastic foam to Jell-O(r) brand gelatin?", "Why does analogy compute efficiently?", "How are applications developed for these machines?" and "How do unconventional computers compare to digital computers, and why should we use them?"

     This session explains the difference between algorithmic digital computation and computation based on the direct use of the laws of physics and the mathematical principles that are implicitly and inherently present in matter. Applications of the EAC that illustrate this paradigm will be presented. Some have been prototyped at Indiana University, others are under evaluation in industry, and still others point toward the revolutionary devices that may be realized in the next ten years.

     Demonstrations of the current generation of EAC will be available during the workshop.