The poster session will be on the evening of Wednesday, March 21, from 5:45 to 8:00 PM.

 

Contributors who would like to set up their posters may do so starting at the lunch break on Wednesday.

 

Posters are listed in alphabetical order by first author:

 

  

Modelling the Growth of Artificial Organisms Using 2D Self-Replicating CAs

 

Eleonora Bilotta and Pietro Pantano

Università della Calabria, Italy

     Starting from the metaphor that self-reproducers have life-like properties, and that they are organisms (or agents) living in an environment, 2D self-replicating cellular automata (CAs), found by evolutionary techniques, provide insight into analogous processes in the biological world, computing in a highly unconventional way.

     We have identified many self-reproducing methods. In this talk we present three of these mechanisms and a new way of managing information, useful for developing new forms of computation. The first mechanism (the Universal Constructor self-reproducing method) is very similar to stem cell behavior, since it can reproduce any configuration given as input, according to how the structure to be reproduced is organized spatially. In particular, many of these self-reproducing systems require that the structures have the same spatial orientation, or be rotated by 180°, and that they be placed at a distance of about two cells, or at a multiple of four cells. These rules can in turn be combined to produce second-order dynamics, further complexifying the self-replicated structures. The second mechanism (the Developmental self-reproducing method) is organized in two distinct phases. In the early stage the structures grow larger, and after a number of simulation steps they produce a copy of themselves. In this stage the self-reproducers grow as biological organisms do, giving rise to child copies of themselves, which in turn grow and procreate after many further simulation steps. Offspring develop in the opposite direction to the parent. Meanwhile, the parent continues to grow, gradually changing its configuration. In other words, the replication of developmental self-reproducers involves several nested processes operating on different time scales. The main process is driven by the original self-reproducers and their coordinated production of offspring. Each of these offspring gives rise, in turn, to new asynchronous processes. These are emergent and are not explicitly encoded in the original agents.

     The third mechanism is the method we have called ad hoc mixed systems, which allows us to explore the potential of creating 2D self-reproducing CA patterns at will. Ad hoc mixed system modules are independent mathematical entities, which can themselves be represented by Boolean CAs. In some cases the module overlaps with the self-reproducer's initial data; in others, the initial data can be recovered by combining different modules. In our work, we implemented a computational system which is able to identify modules in the patterns generated by a self-reproducer (i.e., the basic self-reproducer or a subset of it). This made it possible to simulate the 2D CA using an elementary CA (ECA) rule. In other words, the final pattern is obtained by combining subsets of the same pattern, each governed by an ECA rule that defines its development on different scales.

     All the self-reproducing systems presented above use genetic rules to guide the building of their final configurations, in the same way that the biological genetic code governs the production of proteins. To extend this metaphor and to create a new way of computing, we mapped the states of one system into the language of DNA bases, associating 0 with A (adenine), 1 with G (guanine), 2 with T (thymine), 3 with C (cytosine) and 4 with U (uracil). This enabled us to count the number of each “base” present at different stages of the growth process, obtaining dynamical networks that shed light on how the system encodes this information and how it is maintained, transmitted and transcribed.
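 
     To make the mapping concrete, here is a minimal sketch (ours, not the authors' implementation) of counting the “bases” of a CA configuration under the stated encoding; the two stages are made-up toy configurations, not actual self-reproducer patterns.

        from collections import Counter

        # State-to-base mapping described above: 0->A, 1->G, 2->T, 3->C, 4->U.
        BASE = {0: "A", 1: "G", 2: "T", 3: "C", 4: "U"}

        def base_counts(config):
            # config: 2D list of cell states drawn from {0, ..., 4}
            return Counter(BASE[s] for row in config for s in row)

        # two illustrative (made-up) stages of a growing pattern
        stage1 = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
        stage2 = [[0, 2, 0], [1, 1, 3], [0, 4, 0]]
        for i, stage in enumerate((stage1, stage2), 1):
            print(f"stage {i}: {dict(base_counts(stage))}")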

The relevance of this work lies in the possibility of modeling the growth of artificial systems, and of developing these methods into new physical and computational devices, especially those devoted to nanotechnologies and DNA computing.

 

In-Network Computation through On-Chip Stochastic Communication

 

Paul Bogdan

Carnegie Mellon University

     Shrinking feature sizes, increasing interconnect and logic density, and the continuous scaling of supply voltage and clock speed make future Systems-on-Chip (SoCs) highly sensitive to neutron and alpha radiation and susceptible to a large number of timing violations. Consequently, the manufacturability of VLSI chips in the deep-submicron (DSM) domain requires new design and verification methodologies. The unpredictable failure of such VLSI chips calls for the relaxation of correctness standards for physical interconnections and the use of alternative communication paradigms. Thus, the Network-on-Chip (NoC), which is a promising solution to the communication problems faced in the SoC context, needs to support fault-tolerant communication. Our work aims to investigate the theoretical foundations of a biologically inspired communication approach. More specifically, we propose an analytical model for the on-chip stochastic communication paradigm. We utilize our model to analyze the number of nodes aware of the disseminated packet as a function of time, and validate our conclusions against simulation results.
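 
     As a rough illustration of the paradigm (our toy Monte Carlo, not the analytical model of the abstract), the sketch below spreads a packet over a 16 x 16 mesh NoC by probabilistic forwarding and tracks how many nodes are aware of it per round; the mesh size and forwarding probability are arbitrary choices.

        import random

        # Gossip-style dissemination on a K x K mesh: each aware node
        # forwards the packet to each mesh neighbor with probability P
        # per round.  All parameters are illustrative.
        random.seed(0)
        K, P, ROUNDS = 16, 0.5, 20

        aware = {(0, 0)}                   # packet injected at a corner tile
        for t in range(1, ROUNDS + 1):
            new = set()
            for x, y in aware:
                for u, v in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                    if 0 <= u < K and 0 <= v < K and random.random() < P:
                        new.add((u, v))
            aware |= new
            print(f"round {t:2d}: {len(aware):3d} of {K*K} nodes aware")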

 

Embryomorphic Engineering: How to Design Hyper-Distributed Architectures Capable of Autonomous Segmentation, Rescaling and Shaping

 

René Doursat

CNRS & Ecole Polytechnique, France

     Exploding growth in hardware components, software modules and network users forces us to find an alternative to rigidly designing computational systems in every detail. Future progress in information and communication technologies will depend on our ability to, instead, “meta-design” mechanisms allowing those systems to self-assemble, self-regulate and evolve. Such decentralized, autonomous systems are already pervasive in nature and called “complex”, although they are often less costly, more efficient and even simpler than intelligently designed, centralized systems. Complex systems are characterized by the self-organization of a great number of small, repeated elements into large-scale patterns, where each element may itself obey the dynamics of an inner network of smaller entities at a finer scale (microprogram). The new engineering challenge is to “guide” this self-organization, i.e., prepare the conditions and mechanisms favorable to a robust and reproducible—as opposed to random—pattern formation process (macroprogram). Yet, at the same time, it is also to let the parameters of this process evolve in order to freely generate innovative designs. Finding efficient systems requires matching loose selection criteria with productive variation mechanisms. The first point concerns the openness of the designers to “surprising” outcomes; the second point concerns the intrinsic ability of complex systems to create a “solution-rich” space by combinatorial tinkering on highly redundant parts. Embryogenesis, the development of an entire organism from a single cell, provides the most striking example of self-organization guided by evolvable genetic information. This work presents an original model of artificial *embryomorphic* system growth. A virtual organism is represented by a mass of cells that proliferate, migrate and self-pattern into differentiated domains. Each cell contains an internal gene regulatory network and acquires specific gene expression identity by interaction with neighboring cells. Different identities trigger different cell behaviors, which in turn induce new identities. The final organism’s architecture depends on the detailed interplay between the various rates of cell division and movement, propagation of genetic expression and positional information. Ultimately, on this score of “theme and variations” (developmental laws and parameters), evolution will be the player. In possible hardware applications of this model, nano-units containing the same instructions could self-organize without the need for reliability or precise arrangement as in traditional VLSI. In software or network applications (servers, security, etc.), a swarm of small-footprint software agents could diversify and self-deploy to achieve a desired level of functionality. In all cases, embryomorphic architectures suggest a “fine-grain” approach to systems design, i.e., one based on hyper-distributed collectives of a great number of very simple and relatively ignorant cloned elements.

 

Self-replicating Nanocells as Possible Substrate for Chemical Computation

 

Harold Fellermann, Steen Rasmussen, and Hans-Joachim Ziock

LANL

       This work presents the design of an artificial organism [1] that is currently being pursued both in vitro and in silico in an international, interdisciplinary research project [2]. In our approach, a micellar or vesicular proto-container is catalytically coupled to a proto-genome and a light-driven metabolic reaction. Although inspired by present-day single-celled organisms, the approach differs significantly in its actual chemical implementation and can be understood as a minimal implementation of a chemical system that possesses features attributed to living systems, namely self-assembly, growth, self-replication, and the possibility to evolve.

     Here, a mesoscopic computer model of the minimal protocell is presented [3,4], in which molecules are represented in a coarse-grained fashion and their 3D motion is calculated by the dissipative particle dynamics (DPD) formalism [5,6]. Additionally, the model incorporates chemical reactions by means of stochastic processes. Simulation results are presented for the self-assembly, the growth by incorporation and metabolic turnover of precursor molecules, the template-directed replication of the genome, and the fission of the aggregate into two separate daughter cells.

     We outline a possible concept for using protocellular biofilms as a substrate for chemical computation, in which protocells maintain internal states by means of hydrophobic reactants that undergo multistable reactions within the cells, while state information is propagated among cells by diffusion of hydrophilic messengers. Logical gates are achieved by stoichiometric coupling between these messengers. We hope to increase the complexity of these gates by using genetic biopolymers either as messengers or as catalysts. Furthermore, we can confine the metabolic activity of the protocells to arbitrary topologies by shining light patterns on the artificial biofilm. This allows individual gates to be wired into more complex logical networks. While this design adopts prominent approaches in chemical computation [7-9], the protocellular system presented in this work additionally offers the features of self-replication and evolution. We are currently seeking computational concepts that harvest these features.
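 
     The stoichiometric-gate idea can be caricatured in a few lines. The sketch below (our illustration; species names and rate constant are invented) runs a Gillespie-style stochastic simulation of the single reaction A + B -> C, so that the output C accumulates only when both messenger inputs are present, i.e., an AND-like gate.

        import math
        import random

        # Gillespie simulation of A + B -> C with invented parameters.
        random.seed(0)
        state = {"A": 50, "B": 50, "C": 0}
        k = 0.01                           # rate constant, arbitrary units
        t = 0.0
        while state["A"] > 0 and state["B"] > 0:
            a = k * state["A"] * state["B"]          # reaction propensity
            t += -math.log(1.0 - random.random()) / a  # exponential waiting time
            state["A"] -= 1
            state["B"] -= 1
            state["C"] += 1
        print(f"t = {t:.2f}: {state}")     # C forms only if A AND B are present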

[1] Rasmussen et al., Artificial Life 9:269-316, 2003. [2] The Los Alamos National Laboratory sponsored Protocell Assembly (PAs) project and the European Commission sponsored Programmable Artificial Cell Evolution (PACE) project, 2004-08. [3] Fellermann & Sole, Phil. Trans. R. Soc. B (in press, 2007). [4] Fellermann et al., Artificial Life (submitted, 2006). [5] Hoogerbrugge & Koelman, Europhys. Lett. 19:155-160, 1992. [6] Groot & Warren, J. Chem. Phys. 107(11):4423-4435, 1997. [7] Hjelmfelt, Weinberger & Ross, PNAS 88(24):10983-10987, 1991. [8] Kuhnert, Agladze & Krinsky, Nature 337:244-247, 1989. [9] Seelig et al., Science 314:1585-1588, 2006.

 

Computing with noise takes time

 

Luca Gammaitoni

Universita' di Perugia, Italy

     The present tendency to scale down CMOS-based devices toward the nanometer region makes the discussion of the role of noise in computation increasingly relevant. Noise immunity in a low energy dissipation scenario has become the recurring objective of significant research efforts in this field. On the other hand, some authors have focused their attention on the potential role of noise in nanoscale devices, where noise-driven dynamics has been invoked to explain experiments and to optimize future designs. In order to address a non-negligible error probability, a number of strategies have been devised, often invoking a probabilistic approach to the computational task. It is not our aim to propose a new computing strategy. Instead, we demonstrate that careful consideration of noise effects, together with a proper choice of the “idle time interval”, leads to a prescription for minimizing the error probability that can be summarized as follows: in a noisy environment you can still compute by using the usual threshold-crossing logic gates, provided that you wait long enough, but not too long. We anticipate this result to be relevant to the design and realization of nanoscale computers where ambient noise, instead of being a mere source of disturbance, could be an essential component of the computing process itself.
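 
     The prescription can be illustrated with a toy Monte Carlo experiment (ours, not the author's actual system): a noisy bistable element is switched by an input pulse, and a threshold read-out taken too early catches unfinished switching, while one taken too late catches a result already degraded by noise-induced escapes. All parameters below are invented for illustration.

        import numpy as np

        # x' = x - x^3 + F(t) + noise; the pulse F drives x from the "0"
        # well (x ~ -1) to the "1" well (x ~ +1); after the pulse ends,
        # noise slowly randomizes the stored value.  Read-out is the
        # threshold-crossing test x >= 0.
        rng = np.random.default_rng(1)
        DT, F_IN, T_PULSE, SIGMA, TRIALS = 0.01, 0.6, 8.0, 0.45, 2000

        def error_probability(t_read):
            x = np.full(TRIALS, -1.0)
            for step in range(int(t_read / DT)):
                f = F_IN if step * DT < T_PULSE else 0.0
                x += (x - x**3 + f) * DT \
                     + SIGMA * np.sqrt(DT) * rng.standard_normal(TRIALS)
            return float(np.mean(x < 0.0))

        for t_read in (1.0, 10.0, 100.0):  # too early, about right, too late
            print(f"read at t = {t_read:5.1f}: "
                  f"P(error) = {error_probability(t_read):.2f}")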

 

RAIN Brains: Mammalian Neocortex as a Hybrid Analog-Digital Computer

 

Philip H. Goodman1, Rene Doursat1,2, Quan Zou1, Milind Zirpe1, and Oscar Sessions1

1University of Nevada, Reno

2CNRS & Ecole Polytechnique, Paris, France

What kind of computer is the mammalian brain? To improve upon simple rate-based artificial neural networks, computational neuroscience research over the past decade has focused on more biologically realistic spiking neuron models—but still ascribing, on the millisecond time scale, a digital overtone to brain processing. A more recent development has been to explore the spectral properties of subthreshold membrane potentials, emphasizing an analog mode of computing. Together, by modeling the fine temporal structure of neural signals, these research trends have revealed a great diversity of collective spatiotemporal regimes: synchronization and phase locking, delayed correlations and traveling waves, rhythms and chaos, etc. Through recurrent (and plastic) synaptic connections, neural cells transiently interact as dynamical subnetworks that promise an immense richness of coding expression and computational power, combining the discrete and the continuous. What repertoire of dynamical regimes (“phase diagrams”) can such subnetworks sustain? In the classical feedforward view, subnetworks (layers, cell assemblies) are initially mostly silent and have to be literally activated by an input or a “lower” area. Our work subscribes to a new paradigm, in which subnetworks already possess viable and complex endogenous activity modes that are only perturbed through coupling with an input or other subnetworks. Using spiking neuronal simulations, we describe here progress to date towards building cohesive “analog-digital perturbation” principles that can underlie biological attention, pattern recognition, short- and long-term memory, and motor responsiveness to natural environmental stimuli. In particular, we describe the performance and sensitivity of dynamically igniting-and-quenching Recurrent Asynchronous Irregular Networks (RAINs). We explore the regimes and phase transitions of RAINs under conditions of calibrated voltage-sensitive ionic membrane channels, synaptic facilitation and depression, and Hebbian spike-timing dependent plasticity (STDP). Specifically, we demonstrate the spontaneous emergence of alternating sub-100 millisecond states of subthreshold (i.e., analog) correlation-decorrelation, suggesting a natural mechanism of intrinsic clocking. We also study “lock and key” properties of RAIN activation, i.e., a model of pattern recognition and nondiscrete memory storage based on a dynamics of coherence induction triggered by input stimuli (the “keys”). Here, learning a pattern (a “lock”) means tuning synaptic efficacies to a point of maximal postsynaptic response. Finally, we discuss the importance of embodied social robotics to “teach” intelligent behavior to RAIN brains, and speculate on the instantiation of RAIN brains in compact analog VLSI architectures.
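 
     For readers who want a concrete starting point, the following is a minimal sketch of a sparse recurrent excitatory-inhibitory network of leaky integrate-and-fire neurons, the generic substrate in which asynchronous irregular activity is studied; it is our toy illustration with invented parameters, not the authors' RAIN model.

        import numpy as np

        # Sparse recurrent LIF network, 80% excitatory / 20% inhibitory,
        # with inhibition-dominated random connectivity (values invented).
        rng = np.random.default_rng(0)
        N, N_EXC = 1000, 800
        TAU, V_TH, V_RESET, DT = 20.0, 1.0, 0.0, 1.0   # ms, normalized volts
        J = 0.1 * (rng.random((N, N)) < 0.1)           # 10% connectivity
        J[:, N_EXC:] *= -5.0                           # inhibitory columns

        v = rng.random(N) * V_TH
        spk = np.zeros(N, dtype=bool)
        for step in range(300):
            v += DT * (-v / TAU) + J @ spk + 0.06      # leak + recurrence + bias
            spk = v >= V_TH
            v[spk] = V_RESET
            if step % 50 == 0:
                print(f"t = {step:3d} ms: {int(spk.sum())} spikes")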

 

Spatial Design Using Binding P-Systems

 

Ralph Hollingsworth

Muskingum College

Numerous computing technologies are under current or proposed development, in fields ranging from biology to classical electronics. A novel aspect of much of this work is the dynamic nature of the computing devices, i.e., the ability to extensively reconfigure physical structures during computation. To shift classic solution designs into such devices, the languages used to implement solutions will need to be cognizant of topological and geometric factors. In order to study issues related to such "programming" languages, a conceptual framework has been developed. This environment extends P-Systems [1] to include inter-object binding. Initial prototype studies have used PyMOL [2] and Python to create examples of docking among membrane structures found in P-Systems. By using the classes in PyMOL, originally designed for molecular studies, it is possible to create abstract objects that can attract/repel each other via mutual docking sites.  The membrane character of P-Systems provides a methodology for enclosure, and for the exchange of parameter information with buffering (trans-membrane transport). Enclosed structures may be further composed, while sheltered from outside environments. The current study includes the external binding of membrane objects to each other, and implements their binding using geometric/topological hashing. Such hashes provide a significant heuristic capability for reducing the intractability found in complex, 3-dimensional docking. As a goal, a language structure based on these ideas could be applied to computing devices, including fixed-matrix substrates, chemical assemblages, and biological cells, and could be used for scales ranging from nano-sized devices to macro-sized clusters of computing entities.

1. Paun, G., "Computing with Membranes", Journal of Computer and System Sciences, 61, 1 (2000), 108-143. 2. DeLano, W.L., "The PyMOL Molecular Graphics System (2002)", http://www.pymol.org

 

Spatial Swarms: A New Computing Paradigm?

 

Christian Jacob

University of Calgary, Canada

     The study of social interactions within human societies as well as insect colonies has led to a new field in artificial intelligence—the study of collective intelligence. The emergent properties of 'swarms of agents' are apparent in many facets. An ant colony, for example, exhibits properties of a hive mind: the colony grows, takes in food, has to get rid of its waste, has to defend its foraging space against other colonies—in short, it displays many properties of a living entity. However, the colony only exists through the network of its more or less tightly connected building blocks, the ants.

     Particularly interesting from a computing perspective is the fact that no matter on which scale we observe a collectively intelligent system, we can utilize the same or very similar principles to capture a system's emergent properties. Consequently, we can reuse algorithms for traffic systems [1,9] to model army ant raiding patterns [11]. Algorithms that describe the interaction of immune system cells [6,7] can as well be used for modeling how gene regulatory processes work inside a bacterial cell [2,3,4,10]. We have even applied swarm computing principles in art exhibitions with interactive computer installations that provide playful exposures to swarms that create art and music [5].

     Swarm systems like these perform massively parallel computations in a spatial environment, which cannot be captured well enough by relatively static cellular automata. My lab is exploring new ways to model, understand, and utilize swarm-based systems. We have recently introduced Swarm Grammars [8,12] (as an extension of Lindenmayer systems) to control thousands of swarm agents interacting in 3D space through rule-based systems. The rule systems can be evolved, so that we breed a co-evolutionary ecology of swarm computations, which are represented as 3-dimensional, structural designs—thus creating computational landscapes, similar to the hive minds of social insects.

[1] R. Hoar, J. Penner, and C. Jacob. Evolutionary swarm traffic: If ant roads had traffic lights. In IEEE World Congress on Computational Intelligence, Honolulu, Hawaii, 2002. IEEE Press. [2] R. Hoar, J. Penner, and C. Jacob. Transcription and evolution of a virtual bacteria culture. In IEEE Congress on Evolutionary Computation, Canberra, Australia, 2003. IEEE Press. [3] C. Jacob and I. Burleigh. Biomolecular swarms: An agent-based model of the lactose operon. Natural Computing, 3(4):361-376, December 2004. [4] C. Jacob and I. Burleigh. Genetic programming inside a cell. In T. Yu, R. L. Riolo, and B. Worzel, editors, Genetic Programming Theory and Practice III, pages 191-206. Springer Verlag, 2006. [5] C. Jacob, G. Hushlak, J. Boyd, P. Nuytten, M. Sayles, and M. Pilat. Swarmart: Interactive art from swarm intelligence. Leonardo (in print), 40(3), 2007. [6] C. Jacob, J. Litorco, and L. Lee. Immunity through swarms: Agent-based simulations of the human immune system. In Artificial Immune Systems, ICARIS 2004, Third International Conference, Catania, Italy, 2004. LNCS 3239, Springer. [7] C. Jacob, S. Steil, and K. Bergmann. The swarming body: Simulating the decentralized defenses of immunity. In Artificial Immune Systems, ICARIS 2006, 5th International Conference, Oeiras, Portugal, September 2006. Springer. [8] C. Jacob and S. von Mammen. Swarm grammars: growing dynamic structures in 3d agent spaces. Digital Creativity, 18(1), 2007. [9] J. Penner, R. Hoar, and C. Jacob. Swarm-based traffic simulation with evolutionary traffic light adaptation. In L. Ubertini, editor, Applied Simulation and Modelling, Crete, Greece, 2002. ACTA Press, Zurich. [10] J. Penner, R. Hoar, and C. Jacob. Bacterial chemotaxis in silico. In ACAL 2003, First Australian Conference on Artificial Life, Canberra, Australia, 2003. [11] G. Suen. Modelling and simulating army ant raids. M.Sc. thesis, University of Calgary, Dept. of Computer Science, Calgary, Canada, April 2004. [12] S. von Mammen, C. Jacob, and G. Kokai. Evolving swarms that build 3d structures. In CEC 2005, IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2005. IEEE Press.

 

Continuous States and Distributed Symbols: Toward a Biological Theory of Computation

 

Simon D. Levy

Washington & Lee University

     The classical theory of computation rests on two fundamental assumptions: states are finite, and symbols are atomic. Although automata built on these assumptions are extremely successful at solving many computational tasks, the assumptions are highly implausible for human and animal cognition.  First, the signals used by the brain and other biological systems are mainly continuous, as evidenced by the widespread use of differential equations in modeling these systems.   For this reason, it makes little sense to view mental states as countable, let alone finite.  Second, there is very little reason to believe that mental representations involve locally-stored atomic symbols.  Consequently, classical  pointer-based discrete structures over such symbols, and algorithms operating on such structures,  are not biologically realistic.  Experimental evidence instead favors a view in which the representations of entities, concepts, relations, etc., are distributed over a large number of individually meaningless elements in a way that supports similarity metrics and content-based retrieval. 

     Although both continuous-state computation and distributed representations have received a fair amount of research attention, it is uncommon to see them discussed together in the unconventional-computation literature (except,  perhaps, as part of a general survey).  In our presentation we argue that a biologically plausible theory of computation will require both a continuous-state automaton component and a distributed-memory component,  much as a classical pushdown automaton uses both a finite-state automaton and a pushdown stack.    This view is supported by current research in clinical psychiatry suggesting hemispheric differentiation for sequence processing and conceptual structure.

     We show further that stack-like operations (PUSH and POP) over distributed representations can be performed as simple vector addition and scalar multiplication, in a way reminiscent of moiré or foreground/background effects in visual processing.   This possibility suggests that “higher” mental functions like language and abstract thought might be exploiting existing neural circuitry already available for other purposes.  We conclude with a simple application example from language parsing, and some speculation about possible new directions and guiding principles for biologically-inspired unconventional computation.
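 
     A minimal sketch of the PUSH/POP idea (our illustration; the dimensionality, scaling factor, and cleanup rule are arbitrary choices): items are random high-dimensional vectors, PUSH is vector addition plus down-scaling of the previous stack contents, and POP recovers the most recent item by similarity-based cleanup before undoing the scaling.

        import numpy as np

        # Stack over distributed representations: PUSH = addition with
        # scalar down-weighting; POP = cleanup by dot-product similarity.
        rng = np.random.default_rng(0)
        D, GAMMA = 2048, 0.5
        items = {w: rng.choice([-1.0, 1.0], D) for w in ("the", "cat", "sat")}

        def push(stack, v):
            return v + GAMMA * stack       # newest item has the largest weight

        def pop(stack):
            top = max(items, key=lambda w: items[w] @ stack)  # cleanup memory
            return top, (stack - items[top]) / GAMMA

        s = np.zeros(D)
        for w in ("the", "cat", "sat"):
            s = push(s, items[w])
        out = []
        for _ in range(3):
            w, s = pop(s)
            out.append(w)
        print(out)                         # ['sat', 'cat', 'the']: LIFO order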

 

Designing collision-based computers in elementary cellular automata

 

Genaro Juarez Martinez1, Harold V. McIntosh, Andrew Adamatzky1, and Juan C. Seck Tuoh Mora

1University of the West of England

     A collision-based computer is an architecture-less device, where signals are represented by particles (gliders or self-localizations), and logical operations are computed through collisions among particles. In this work we present results on detecting and classifying all possible collisions between particles in one-dimensional cellular automata, and on constructing a catalogue of binary collisions. Using this catalogue, we develop formal languages via de Bruijn diagrams to represent particles in the evolution space of the Rule 54 and Rule 110 cellular automata by means of regular expressions. Finally, we describe a subset of regular expressions coding each known particle in Rule 54 and Rule 110, in order to address relevant problems such as self-organization in particle dynamics, the production of glider guns, and logical operations.
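 
     As a self-contained illustration of the setting (not of the authors' de Bruijn diagram construction), the sketch below evolves Rule 110 from a simple seed; gliders and their collisions appear in exactly this kind of space-time diagram.

        # Evolve an elementary CA with periodic boundaries; Rule 110
        # supports the gliders (particles) discussed above.
        def eca_step(cells, rule=110):
            n = len(cells)
            return [(rule >> (4 * cells[i - 1] + 2 * cells[i]
                              + cells[(i + 1) % n])) & 1
                    for i in range(n)]

        row = [0] * 60
        row[30] = 1                        # single-cell seed
        for _ in range(30):
            print("".join(".#"[c] for c in row))
            row = eca_step(row)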

 

A Hierarchical Dynamical System Analog Computation Architecture for Embodied Cognition

 

Nebu John Mathai and Takis Zourntos

Texas A&M University

     We present an unconventional analog computation paradigm based on the application of cybernetics to the synthesis of cognitive faculties for mobile robots with life-like behavior. We are encouraged by a long history of ideas linking life, cognition and computation. Sommerhoff, formulating the basis for an analytical biology, sought to explain the apparent purposiveness of living systems. Cyberneticists sought to identify and exploit the control and communication mechanisms underlying natural and artificial autonomous goal-directed systems, believing regulation to apply as much to an organism's externally directed cognition as to the homeostatic control of its internal environment. Maturana and Varela (echoing Wiener) very directly state, "living systems are cognitive systems and living is a process of cognition." Finally, recent work on the "artificial life route to artificial intelligence" seeks to enrich AI---and, by extension, the underlying computational mechanisms that facilitate cognition---through the development of embodied robotic agents with life-like structure and behavior.

     Our approach is cybernetic and influenced by artificial life insights. We view the computational machinery underlying cognition to be regulatory and hence use control theoretic synthesis tools. Moreover, we develop physically embodied cognition and utilize a hierarchical dynamical systems structure. Our architecture consists of a cascade of regulators separated by time scale, each having access to sensor data and which control problems (with varying levels of abstraction) relating to the agent's motion in the world. To synthesize the regulators we employ a methodology of plant-controller co-design; after constructing a plant that models the salient environmental dynamics, we apply nonlinear control theoretic toolsets to synthesize a corresponding controller. In addition to providing a systematic means of developing analog computation systems, we find that the use of the feedback regulator motif as a computational primitive naturally facilitates parallelism.

     Conceptually our approach has affinities with Ashby's notion of intelligence amplification, i.e., we design a cognitive dynamical system coupled to a "problem" via sensor/actuator channels, and whose stability yields a solution in a more complex problem domain. Indeed, we highlight the emergence of satisficing intelligence to demonstrate this, appealing to Bedau's definition of weak emergence. As well, we show how satisficing intelligence arises as a result of our unconventional approach to stability---we relax the pursuit of asymptotic stability to ultimate boundedness, giving the system flexibility to consider more varied actions.  Finally, simulation results and animations (available at http://www.ece.tamu.edu/~takis/robotics_main_page.html) for the system in non-trivial environment topologies are provided, illustrating the life-like behavior of the cognitive system and the emergence of satisficing intelligence.

 

Bits and Bases: Parallel Paradigms for Communication

 

Elebeoba E. May

Sandia

      Constructing parallels between engineering frameworks for communication and biological information processing mechanisms is a challenge that, if embraced, can contribute to our quantitative understanding of complex molecular processes.  Forming a quantitative understanding of how biological organisms can communicate their genetic message efficiently in the presence of noise can improve and advance current communication protocols.  Given this objective, the underlying hypothesis of the work presented is that the genetic system can be operationally paralleled to an engineering communication system which transmits and operates on bases as opposed to bits [May, et al., 2004].  The central dogma of genetics can be viewed as the paradigm for biological communication, where an organism's redundancy-containing DNA is the result of an error control encoding process.  We use this framework to study several problems in molecular biology: 1) the informational capacity of the genetic replication process; 2) aging-related mutagenesis; and 3) detection of single nucleotide polymorphisms in DNA sequences.

     Mutations are replication errors that remain or are missed by genetic proofreading mechanisms.  Mutation-derived capacity values can provide insight regarding the information capacity of the replication process.  We calculate the genetic channel capacity using u_b, the single-base mutation rates reported in Drake et al. [Drake et al., 1998].  Assuming the DNA replication process can be paralleled to a discrete memoryless channel, we analyze the information capacity of replication using empirical mutagenesis data and the Shannon entropy to calculate the channel capacity for DNA microbes (Bacteriophage M13, Bacteriophage Lambda, Bacteriophages T2 and T4, Escherichia coli, Saccharomyces cerevisiae, and Neurospora crassa) and higher eukaryotes (Caenorhabditis elegans, Drosophila melanogaster, Mus musculus (mouse), and Homo sapiens).  We found that the prokaryotic organisms have larger channel capacity values, ranging from 1.95 to 1.975 bits, than the higher eukaryotes, whose capacity values range from 0.4 to 1.85 bits.  This suggests that for DNA microbes the maximum coding rate R is closer to (n-1)/n, leaving few bases for error control coding.  In contrast, the channel capacity values for higher eukaryotes imply a distinctly smaller maximum value for R, suggesting that eukaryotic genomes have more bases available for error control coding, i.e., more redundancy in their genome.  It is generally accepted that eukaryotic organisms have a greater number of "extra" bases in their genome (bases that are not used to specify amino acids) than prokaryotes; our findings are consistent with this belief.
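 
     For concreteness, the following sketch computes the capacity of a 4-ary symmetric channel, one natural reading of the replication channel (our assumption; the abstract does not spell out the channel law): each base is miscopied with total probability p, uniformly into the three other bases.

        import math

        # C = 2 + (1-p)*log2(1-p) + p*log2(p/3) bits per base for a
        # 4-ary symmetric channel with total substitution probability p.
        def capacity_4ary(p):
            if p == 0.0:
                return 2.0
            return 2 + (1 - p) * math.log2(1 - p) + p * math.log2(p / 3)

        for p in (1e-7, 1e-3, 1e-2, 0.1):
            print(f"p = {p:g}: C = {capacity_4ary(p):.4f} bits/base")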

      Given the probability of reduced replication fidelity as the number of times a chromosome is copied increases, and the existence of biological mechanisms that prevent the continued transmission of error-containing chromosomes, it may be possible to view aging and related mutation-engendered diseases as inevitable communication failures.

      Extending the genome replication channel model, it is evident that for a fixed u_b < 1, the quantity (1-u_b)^G decreases as the genome size G increases.  Consequently the channel error probability, 1-(1-u_b)^G, increases, reducing the overall channel capacity.  We evaluate the probability of error for an organism's replication channel after N_CD cell divisions by equating multiple cell divisions to the transmission of a genome of size G*N_CD.  The information capacity of the replication channel decreases over N_CD = 1...75 cellular generations for higher eukaryotic organisms, substantiating the need for error control within DNA in order to ensure the survivability of the organism and ultimately the species.
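 
     The arithmetic of the argument is easy to reproduce; the mutation rate and genome size below are illustrative stand-ins, not the paper's fitted values.

        # After N_CD divisions the channel effectively transmits G * N_CD
        # bases, so the per-lineage error probability is
        # 1 - (1 - u_b)^(G * N_CD).
        u_b = 1e-10                        # per-base mutation rate (illustrative)
        G = 3.2e9                          # genome size in bases (human-sized)

        for n_cd in (1, 25, 50, 75):
            p_err = 1 - (1 - u_b) ** (G * n_cd)
            print(f"N_CD = {n_cd:2d}: P(error) = {p_err:.3f}")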

     Hybridization-based target recognition and discrimination is not only central to in vivo biological processes, but also vital to the operation of nucleic acid sensor systems.  Therefore, developing algorithms to accurately detect polymorphisms in DNA sequences is critical.  Using the genetic communication theory paradigm, we investigate coding theory algorithms for uniquely categorizing single nucleotide polymorphisms (SNPs) in silico based on the calculation of syndromes.  We develop a syndrome generation algorithm that can be used to uniquely distinguish SNPs in the middle to 3' end of the 15-base probe sequence of computational catalytic molecular beacons (deoxyribozyme gates) [Stojanovic and Stefanovic, 2003].  This error control coding algorithm not only detects a mutation in the sequence but can also categorize the location and type of base mutation based on the unique syndrome vectors.  We continue to investigate a complementary algorithm that produces similar results for SNPs in the 5' end of the input sequence, as well as the biological implications of our communication theory algorithms.
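 
     To illustrate the syndrome idea on a 15-symbol block, the sketch below uses a binary (15,11) Hamming code (our choice for illustration; the authors' actual code over DNA bases is not specified here): a nonzero syndrome both detects a single substitution and pinpoints its position.

        import numpy as np

        # (15,11) Hamming code: column i of the parity-check matrix H is
        # the binary expansion of i+1, so the syndrome of a single error
        # at position i reads off i+1 directly.
        n = 15
        H = np.array([[((i + 1) >> b) & 1 for i in range(n)]
                      for b in range(4)])

        received = np.zeros(n, dtype=int)  # an all-zero codeword...
        received[9] = 1                    # ...hit by one substitution

        syndrome = (H @ received) % 2
        position = int("".join(str(b) for b in syndrome[::-1]), 2) - 1
        print(f"syndrome = {syndrome}, error located at position {position}")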

Acknowledgment

SNP work performed in collaboration with S. Brozik, P. Dolan, and P. Crozier of Sandia National Laboratories.  Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

References: JW Drake, B Charlesworth, D Charlesworth, JF Crow.  Genetics, 148: 1667-1686, 1998.  EE May, MA Vouk, DL Bitzer, DI Rosnick.  BioSystems, 2004.  MN Stojanovic, D Stefanovic.  Nature Biotechnology, 21: 1069-1074, 2003.

 

Fault Tolerance via Local Adaptation

 

Reid Porter and Garrett Kenyon

LANL

     For applications like image and signal processing and real-time control, artificial neural networks, due to their distributed cellular architecture, can provide natural solutions for computing with fault-prone devices.  The neural network community has proposed several architectures, like Hopfield networks and self-organizing maps, that exploit system dynamics to solve computational problems.  Solutions are represented as equilibrium points, and the convergence of the system effectively provides self-organized fault tolerance.  This type of approach appears ideal for fault-tolerant computing with next generation devices, but it is currently limited to the specific applications for which these systems have been proposed, e.g., associative memories and clustering.  In this poster we investigate how fault-tolerant equilibria can be used in applications which are not explicitly encoded in the system dynamics.  The basic approach is very simple and amounts to relaxing the requirements: instead of requiring the system dynamics to directly encode the desired computation, we require only that the system dynamics include the desired computation as a local equilibrium.  We then use the initial conditions to bootstrap the system to the desired local equilibrium, and hence exploit fault-tolerant equilibria for a much wider class of applications.
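 
     The bootstrap idea is easy to demonstrate with the best-known example of a fault-tolerant equilibrium, a Hopfield network (our minimal illustration, not the poster's experiments): a pattern stored as an attractor is recovered even when the initial conditions contain faults.

        import numpy as np

        # One stored pattern as a Hopfield attractor; start the dynamics
        # from a corrupted initial state and let convergence repair it.
        rng = np.random.default_rng(1)
        N = 100
        pattern = rng.choice([-1, 1], N)
        W = np.outer(pattern, pattern) / N            # Hebbian weights
        np.fill_diagonal(W, 0)

        state = pattern.copy()
        faults = rng.choice(N, 20, replace=False)
        state[faults] *= -1                           # 20% of units start wrong

        for _ in range(10):                           # synchronous relaxation
            state = np.sign(W @ state).astype(int)

        print("pattern recovered:", bool(np.array_equal(state, pattern)))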

 

An analogical definition of computation

 

Brian F. Rossa

Lockheed Martin Advanced Technology Laboratories

      The word "computation" is overloaded. Its semantics lend application to countless situations, both spontaneous and contrived, natural and artificial. Our intuition tells us what computation is and what it isn't, but the scientific vernacular often colludes with what can be said, formally, about mathematical constructs, physical systems, and the act of computing. This work proposes a rigorous qualitative definition of computation that hopes to resolve these ambiguities. We define computation as a process of analogy-making between physical systems and give evidence to support that this is a particularly good definition. We go on to describe classic results in computer science in terms of this new model, disambiguating it from other definitions of computation. We show that this definition succeeds in unifying much thinking about the relationship between computation and physics.

 

     "Computation" is evoked notionally in a variety contexts. So many, in fact, that the naive investigator cannot say with much certainty what exactly computation “is.” In the academic computer science community, "computation" is associated almost exclusively with the Turing Machine, and historically with mathematics. If a problem cannot be solved with a Turing Machine, then it's said to be "uncomputable." Yet we find that recently this association has been made less salient by those who are interested in analog computation. In most contexts, "computation" implies a computer, a physical system (or perhaps a “physically realizable” system) whose states encode the entities of interest. Yet the formal connection between abstract computation and its physical realization is not addressed outside the work of a small group of prescient physicists. Finally, it is difficult to imagine "computation" as anything but a human endeavor, and yet we find the entire field of “natural computation” as a counterpoint to this intuition.

     The nascent study of computation across all such contexts – let us call it "computation science" to avoid further overloading – cannot come to fruition without resolving such ambiguities. We argue that the development of a computation science depends upon a rigorous, unified definition of computation. This work presents a body of evidence – albeit circumstantial evidence – to support our argument. When taken as a whole, this evidence admits a very strong candidate:

     We find that computation, in the most general sense, is about the making of a very particular analogy between two physical systems. These systems we will call the "source" and the "computer" for obvious reasons. Once constructed, this analogy between source and computer is simple and precise. It says that the states of the computer, evolving according to its own dynamics, encode what “could” happen to the source system. In order for us to say that a physical process is an instance of computation in this sense, it must satisfy certain criteria.  First, the source and computer must be distinct systems. Second, we must identify a process whereby observables of the source system are mapped to controls on the computer. This process we call "embedding." Embedding, we claim, requires a third distinct physical system that is as important to computation as the source or computer. This system we call the "constructor."

     The notion of a constructor is implicit in previous attempts to define computation in physical terms. Toffoli [2005] argues very nicely that computation does not exist outside a very particular context. Many others have described computation as something of a goal-directed process. Latent in all of this is a physical system – usually a human – that embodies goals, establishes context, and ultimately serves as the constructor in our sense. We choose to make this system's role explicit because, as we will show, it serves a very important “physical” role in computation. Examples will be given shortly of many different instances of computation that support our intuition.

     We choose not to define computation in terms of symbols or abstract machines or languages, but rather in terms of physical systems. Our analogical definition of computation (ADC), then, is a “physical” definition of computation.  This is motivated by a great deal of contemporary research in “nonstandard computation” that allows our intuition about what computation “is” to grow beyond the digital model. We are also motivated by a number of very difficult problems in computer engineering that remain unresolved, presumably, because previous definitions failed to make explicit the relationship between computation and its physical substrate. In this endeavor, we draw from a wonderful body of work, starting with Landauer, that is concerned with the physical aspects of computation. Finally, we hope to collect under our banner a number of curiosities having to do with the computing of physics (rather than the physics of computation). These, we believe, offer a strange paradox against which to test the fitness of our ADC and, ultimately, pose questions about the so-called Church-Turing-Deutsch Thesis.

 

Computing with cortical microcircuits: new suggestions for parallel programming models

 

Gabriele Scheler

Stanford University

   I propose a computational model of cortical micro-circuitry consisting of the following parts: a) local circuits, which compute by aligning and convolving input signals with stored patterns, using parametrization of afferent and recurrent filters (data stream); b) the linking of local circuits as coupled oscillators in global landscapes, interacting through synchronized signals (command level). I will focus here on computation in the local microcircuit.

System layout. Each local circuit is an arrangement of processors with afferent filters, which are sigmoidals parametrized by a midpoint m0, upper and lower bounds u and l, and a slope s or k, and recurrent filters, which express temporal modulation (p-modulation) of a sigmoidal function.

                   "afferent filter"

                   and

                                           "recurrent filter"

Afferent signals consist of systematic (statistically dependent) variations of a time-varying signal. The use of sigmoidals retains peak information across all filters and has the capacity to transform signals into a series of impulses. Clusters of m0 variation perform horizontal segmentation of the family of time-varying signals, and clusters of slope (s, k) variation perform the matching of its signal shapes to stored templates. The recurrent connectivity is matched to the afferent filter properties of each unit and is specific to each local circuit. The recurrent filter system adds up signal components according to the topology of connections and re-inserts signals into the processors of the local circuit through p-modulated sigmoidal filters. P-modulation allows parametrized temporal dislocation of peaks in the input signals.

Computation: The system can be tuned such that prominent matches of signal shape between input and stored template generate strong synchronized signals (which are distributed to other local modules). This provides parallel processing by global connectivity on the command level, resulting in phase-reset, phase-advance or phase-delay signals. On the data stream, the output is a combination of the components of the input signals and the signals that are stored by the parametrization of the processors (the local memory). This means that local circuits work both as acceptors (comparators) of input and stored signals, and as computational processors using input and stored signals. Storage or memory in the system consists of parameters as programmed elements vs. auto-tuned parameters necessary for calibration. We present a number of integer computations which can be realized with the system. Learning algorithms would allow programming by slow and selective adaptation to input strings (not implemented). We also raise the question of (resource-restricted) universality vs. the limitations of integer computation dependent on the choice of filters.
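 
     Since the filter equations did not survive reproduction here, the sketch below shows one plausible reading (our assumption) of the afferent filter: a bounded sigmoid with midpoint m0, slope s, and bounds l and u, applied to a time-varying signal by a small bank of m0-clustered filters.

        import numpy as np

        # A bounded sigmoid as a guess at the afferent filter's form.
        def afferent_filter(x, m0=0.0, s=4.0, l=0.0, u=1.0):
            return l + (u - l) / (1.0 + np.exp(-s * (x - m0)))

        t = np.linspace(0.0, 1.0, 200)
        signal = np.sin(2 * np.pi * 3 * t)             # toy time-varying input
        for m0 in (-0.5, 0.0, 0.5):    # clustered m0 values segment the family
            r = afferent_filter(signal, m0=m0, s=10.0)
            print(f"m0 = {m0:+.1f}: mean response = {r.mean():.3f}")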

 

Towards Self-organized Computation with a Network

 

Hideaki Suzuki

National Institute of Information and Communications Technology

     Network Artificial Chemistry (NAC) [1,2,3,4] is a research approach that represents molecular interactions purely as a mathematical graph, with no coordinate information.  Molecular activities such as collisions and reactions are governed by spatial constraints in real chemistry, so in order for the NAC to closely emulate bio-molecular activities in water, its rewiring rule should be able to reproduce regular structure in the graph.

      From this notion, this paper proposes placing an active program in each NAC node, and tests the possibility of creating regular structures.  We take the two-dimensional square lattice as an example and implement a program in each node.  The algorithm extends two paths from a node and, if they do not already meet to form a length-four loop, creates a new edge to close such a loop.  All the nodes in the graph have the same program, and through their parallel and independent execution a quasi-regular structure is created.
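 
     One plausible reading of the active rule (ours; the paper's exact procedure may differ, and this sketch omits any compensating edge removal the full model performs) closes length-four loops around a node:

        import random
        import networkx as nx              # assumed available

        # From node v take two paths v-a and v-b, extend one to v-a-x,
        # and close the length-four loop v-a-x-b-v if edge x-b is missing.
        random.seed(0)
        G = nx.random_regular_graph(4, 512, seed=0)

        def rewire_step(G, v):
            a, b = random.sample(list(G[v]), 2)
            ext = [x for x in G[a] if x not in (v, b)]
            if ext:
                x = random.choice(ext)
                if not G.has_edge(x, b):
                    G.add_edge(x, b)       # creates the 4-cycle v-a-x-b-v

        sq = lambda g: sum(nx.square_clustering(g).values()) / g.number_of_nodes()
        print(f"4-cycle (square) clustering before: {sq(G):.3f}")
        for _ in range(512):
            rewire_step(G, random.randrange(512))
        print(f"4-cycle (square) clustering after:  {sq(G):.3f}")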

     Figure 1 shows the results obtained from a typical simulation run.  We start from an initial random graph with 512 nodes and uniform degree 4.  After one hundred active operations per node, a regular structure emerges, and the graph resembles a folded sheet of a 2D square lattice (Fig. 1(a)).  As time goes on, the regular structure is basically retained, but the sheet gradually unravels, and after one thousand operations we obtain the half-sheet, half-string topology shown in Fig. 1(b).

     In previous studies of the NAC, we classified the types of edges from (weakest) van der Waals to (strongest) covalent, and assumed that the weakest edges are rewired by a passive rewiring rule incorporating the physical/spatial constraints [2,3].  Solvent nodes were assumed to be inert (inactive), and active operations were realized only by a data-flow cluster created through the mathematical folding of an active node chain [4].  The present results, however, suggest that when we try to create organized structure in a network, one of the easiest and most straightforward ways is to implement in each node an active program that makes the nodes behave in a somewhat complicated manner.

     Conclusion: We designed an active program that locally rewires the edges surrounding a solvent node in the NAC, conducted numerical experiments of its execution, and succeeded in organizing a global regular structure in the NAC graph.

     Future extensions of this study include improving the program so that it organizes a lump structure, putting heterogeneous programs in the NAC nodes to organize more complex structures, and so on.

     This study was supported in part by Doshisha University's Research Promotion Funds, "Academic Frontier, Intelligent Information Science", in Japan.

[1] Suzuki, H.: Proceedings of the Ninth International Conference on the Simulation and Synthesis of Living Systems (Artificial Life IX) (2004) 507-513.  [2] Suzuki, H., Ono, N.: The 8th European Conference on Artificial Life (ECAL) Workshop Proceedings CD-ROM, Canterbury, UK (2005).  [3] Suzuki, H.: Australian Journal of Chemistry 59 (2006) 869-873.  [4] Suzuki, H.: BioSystems 87 (2007) 125-135.

 

A Component- and Aspect-Oriented Language for Amorphous Computers

 

Moise Valvassori

Paris 8 University, France

     We design a component- and aspect-oriented language for amorphous computers [1]. A "software component" is a piece of software which exposes predefined aspects and is able to use other components. An aspect [2] is a part of a program that cross-cuts components. Components allow us to build libraries, and aspects let us build multiple backends, such as simulations or programs for embedded devices. Our amorphous computers consist of several thousand cells distributed uniformly on a square surface. Each cell contains: a program p, the same for all cells; n private states; n' public states, called messages; and a blackboard b containing the neighborhood's messages.

  In our language, a program is a collection of components. A component has a name and a list of aspects: parameter declarations; a list of the components used by this component (each of which may be parameterized); a list of states representing the private state of the cell; a list of messages, which are automatically transmitted to the cells in the neighborhood when modified; initialization programs; message-handling programs; and general programs.

     The previously listed aspects are expressed in a simplified Lisp dialect.  In addition to traditional functions and programming structures, five cell-specific functions are available: a pair of functions for reading and modifying the private state of the cell, a pair of functions for reading and modifying the public state of the cell, a function for reading the public states in the blackboard of nearby cells, and an access function for the parameters.

       The components constituting an amorphous program are compiled into an autonomous simulator written in the C programming language.

       Simulations created by the compiler can model an amorphous computer of several tens of thousands to several million cells. The simulator is designed around a simple and efficient architecture: a table containing the complete state of all cells; a table containing the neighborhood of each cell; a set of threads dedicated to message management; a set of threads dedicated to cell-behavior management; a thread specialized for data analysis; and a thread coordinating the other simulation threads.

     Figure 1 shows the source code of a Voronoi diagram component. This component uses another component which propagates the source cells' ids to other cells; cells detecting an id collision lie on a partition frontier. Figure 2 shows a visualization of a Voronoi diagram on an amorphous computer.
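 
     The component itself is not reproduced here; the sketch below is our own rough rendering, in Python rather than the paper's Lisp dialect, of the id-propagation idea: seed cells flood their ids through random local neighborhoods, and a cell that hears two different ids marks itself as lying on a Voronoi frontier.

        import random

        # Amorphous layout: cells scattered at random; neighbors within R.
        random.seed(2)
        cells = [(random.random(), random.random()) for _ in range(2000)]
        R = 0.05
        neigh = [[j for j, q in enumerate(cells)
                  if j != i and (q[0]-p[0])**2 + (q[1]-p[1])**2 < R*R]
                 for i, p in enumerate(cells)]

        owner = [None] * len(cells)        # which seed's id reached the cell
        frontier = [False] * len(cells)    # id collision detected here
        wave = random.sample(range(len(cells)), 5)
        for s in wave:
            owner[s] = s

        while wave:                        # round-based flooding of ids
            nxt = []
            for i in wave:
                for j in neigh[i]:
                    if owner[j] is None:
                        owner[j] = owner[i]
                        nxt.append(j)
                    elif owner[j] != owner[i]:
                        frontier[j] = True # two ids meet: Voronoi frontier
            wave = nxt

        print(sum(frontier), "cells on Voronoi frontiers")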

[1] H. Abelson et al. Amorphous Computing. CACM, 43(5):74-82, May 2000. [2] G. Kiczales et al. Aspect-Oriented Programming. In ECOOP'97, 1997.