20041201

Adaptive Quantum Networks [IJTP] Our paper on adaptive quantum networks appears in this month's International Journal of Theoretical Physics. We introduce a novel model of superposed adaptive quantum networks, with considerations for high-dimensional dissipative quantum systems in both quantum computation and molecular biology. A preprint of the article is available as quant-ph/0311016.

20041125

Quantum networks, quantum registers and developments in quantum computing (Future Salon): news summaries of recent developments in quantum information science and technology.

20041121

Room-temperature Bose-Einstein condensation?

Hideyo OKUSHI, AIST Tsukuba Diamond Research Center, Japan. The center has observed extremely sharp 235-nm exciton emission in fabricated single-crystal diamond film semiconductors at 300 K. If the exciton lifetimes are long enough, Bose-Einstein condensation may occur in these diamond films, even at room temperature.
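To put a number on what room-temperature condensation would require, the ideal Bose gas condition relates the critical temperature to the boson density and mass. The back-of-the-envelope sketch below estimates the exciton density needed for condensation at 300 K; it is not taken from the AIST report, and the exciton effective mass is an assumed round value of one free-electron mass.

# Rough estimate of the exciton density needed for BEC at 300 K, using the
# ideal Bose gas condition n = zeta(3/2) * (m * kB * T / (2*pi*hbar^2))^(3/2).
# The exciton effective mass is an assumption, taken here as the free-electron mass.
import math

hbar = 1.0545718e-34        # J*s
kB   = 1.380649e-23         # J/K
m_e  = 9.1093837e-31        # kg, free-electron mass
zeta_3_2 = 2.612            # Riemann zeta(3/2)

T = 300.0                   # K, room temperature
m_exciton = 1.0 * m_e       # assumed exciton effective mass

n_c = zeta_3_2 * (m_exciton * kB * T / (2 * math.pi * hbar**2)) ** 1.5
print(f"critical exciton density ~ {n_c:.1e} m^-3 ({n_c * 1e-6:.1e} cm^-3)")

Under these assumptions the required density comes out to a few times 10^19 excitons per cm^3, which is why the achievable steady-state density, and hence the exciton lifetime, is the decisive question.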

20041026

Transfer of Nonclassical Properties from Microscopic Superpositions to Macroscopic Thermal States, H. Jeong and T.C. Ralph

Abstract (quant-ph/0410210): "We have studied a more reasonable analogy of Schrodinger’s cat paradox where the virtual cat is a significantly mixed thermal state. Our discussion was motivated by the observation that a truly classical system cannot be in a pure quantum state. We have found that non-classical properties of microscopic quantum superpositions can be transferred to thermal states of large average photon numbers. The resulting states show strong quantum coherence and entanglement between severely mixed thermal states. Our examples are feasible in real physical systems and may be realized for some moderate cases using current technology. Finally, it will be an interesting future work to explore the possibility of quantum information processing using the thermal-state “superpositions” and entanglement studied in this paper."

20041024

A Quantum Perceptron, M. Andrecut and M. K. Ali, Department of Physics, University of Lethbridge, Canada. "The task of a classical perceptron is to classify two classes of patterns by generating a separation hyperplane. Here, we give a complete description of a quantum perceptron. The quantum algorithms for classification and learning are formulated in terms of unitary quantum gates operators. In the quantum case, the concept of separable or non-separable classes is irrelevant because the quantum perceptron can learn a superposition of patterns which are not separable by a hyperplane." Contact: mircea.andrecut@uleth.ca
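For readers who want the classical baseline the abstract starts from, here is a minimal sketch of an ordinary perceptron learning a separating hyperplane. It is purely illustrative and is not the quantum algorithm of Andrecut and Ali; the toy data and parameters are made up.

# Minimal classical perceptron: learns a separating hyperplane w.x + b = 0
# for two linearly separable classes of points. Illustrative only.
import random

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    dim = len(samples[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):            # y is +1 or -1
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:                  # misclassified: nudge the hyperplane
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Toy data: class +1 above the line x1 + x2 = 1, class -1 below it.
pts = [(random.random(), random.random()) for _ in range(200)]
lbls = [1 if x1 + x2 > 1 else -1 for (x1, x2) in pts]
w, b = train_perceptron(pts, lbls)
print("learned hyperplane:", w, b)

According to the abstract, the quantum version recasts this classification and learning in terms of unitary gate operations on superposed patterns, which is why hyperplane separability is no longer the relevant criterion.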

20041018

"Brain" in a dish acts as autopilot, living computer

Oct. 21, 2004 GAINESVILLE, Fla. --- A University of Florida scientist has grown a living “brain” that can fly a simulated plane, giving scientists a novel way to observe how brain cells function as a network.

The “brain” -- a collection of 25,000 living neurons, or nerve cells, taken from a rat’s brain and cultured inside a glass dish -- gives scientists a unique real-time window into the brain at the cellular level. By watching the brain cells interact, scientists hope to understand what causes neural disorders such as epilepsy and to determine noninvasive ways to intervene.

As living computers, such cultured networks may someday be used to fly small unmanned airplanes or handle tasks that are dangerous for humans, such as search-and-rescue missions or bomb damage assessments.

“We’re interested in studying how brains compute,” said Thomas DeMarse, the UF professor of biomedical engineering who designed the study. “If you think about your brain, and learning and the memory process, I can ask you questions about when you were 5 years old and you can retrieve information. That’s a tremendous capacity for memory. In fact, you perform fairly simple tasks that you would think a computer would easily be able to accomplish, but in fact it can’t.”

While computers are very fast at processing some kinds of information, they can’t approach the flexibility of the human brain, DeMarse said. In particular, brains can easily make certain kinds of computations – such as recognizing an unfamiliar piece of furniture as a table or a lamp – that are very difficult to program into today’s computers.

“If we can extract the rules of how these neural networks are doing computations like pattern recognition, we can apply that to create novel computing systems,” he said.

DeMarse's experimental "brain" interacts with an F-22 fighter jet flight simulator through a specially designed plate called a multi-electrode array and a common desktop computer.

“It’s essentially a dish with 60 electrodes arranged in a grid at the bottom,” DeMarse said. “Over that we put the living cortical neurons from rats, which rapidly begin to reconnect themselves, forming a living neural network – a brain.”

The brain and the simulator establish a two-way connection, similar to how neurons receive and interpret signals from each other to control our bodies. By observing how the nerve cells interact with the simulator, scientists can decode how a neural network establishes connections and begins to compute, DeMarse said.

When DeMarse first puts the neurons in the dish, they look like little more than grains of sand sprinkled in water. However, individual neurons soon begin to extend microscopic lines toward each other, making connections that represent neural processes. “You see one extend a process, pull it back, extend it out – and it may do that a couple of times, just sampling who’s next to it, until over time the connectivity starts to establish itself,” he said. “(The brain is) getting its network to the point where it’s a live computation device.”

To control the simulated aircraft, the neurons first receive information from the computer about flight conditions: whether the plane is flying straight and level or is tilted to the left or to the right. The neurons then analyze the data and respond by sending signals to the plane’s controls. Those signals alter the flight path and new information is sent to the neurons, creating a feedback system.

“Initially when we hook up this brain to a flight simulator, it doesn’t know how to control the aircraft,” DeMarse said. “So you hook it up and the aircraft simply drifts randomly. And as the data comes in, it slowly modifies the (neural) network so over time, the network gradually learns to fly the aircraft.”
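As a rough illustration of the sense-compute-act loop described above (and not the actual UF software), the sketch below stands in a trivial proportional rule for the decoded output of the cultured network and shows how stimulus and response cycle between the "brain" and the simulator. Every function and number in it is hypothetical.

# Toy closed-loop sketch of the stimulate/record cycle described in the article.
# A real system would deliver electrode stimulation patterns and decode spike
# activity; here a simple proportional rule stands in for the neural culture.

def culture_response(roll_error, pitch_error):
    # Hypothetical stand-in for the decoded network output (control commands).
    return -0.5 * roll_error, -0.5 * pitch_error

def step_aircraft(roll, pitch, roll_cmd, pitch_cmd):
    # Extremely simplified dynamics: commands nudge roll and pitch back toward level.
    return roll + roll_cmd, pitch + pitch_cmd

roll, pitch = 10.0, -5.0                    # initial attitude errors, in degrees
for _ in range(20):
    roll_cmd, pitch_cmd = culture_response(roll, pitch)            # "brain" reacts to flight data
    roll, pitch = step_aircraft(roll, pitch, roll_cmd, pitch_cmd)  # simulator updates and feeds back
print("final roll/pitch error:", round(roll, 3), round(pitch, 3))

In the real experiment the controller is not programmed at all; the stimulation feedback itself gradually reshapes the network until its responses keep the plane level.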

Although the brain currently is able to control the pitch and roll of the simulated aircraft in weather conditions ranging from blue skies to stormy, hurricane-force winds, the underlying goal is a more fundamental understanding of how neurons interact as a network, DeMarse said.

“There’s a lot of data out there that will tell you that the computation that’s going on here isn’t based on just one neuron. The computational property is actually an emergent property of hundreds or thousands of neurons cooperating to produce the amazing processing power of the brain.”

With Jose Principe, a UF distinguished professor of electrical engineering and director of UF's Computational NeuroEngineering Laboratory, DeMarse has a $500,000 National Science Foundation grant to create a mathematical model that reproduces how the neurons compute.

These living neural networks are being used to pursue a variety of engineering and neurobiology research goals, said Steven Potter, an assistant professor in the Georgia Tech/Emory Department of Biomedical Engineering who uses cultured brain cells to study learning and memory. DeMarse was a postdoctoral researcher in Potter’s laboratory at Georgia Tech before he arrived at UF.

“A lot of people have been interested in what changes in the brains of animals and people when they are learning things,” Potter said. “We’re interested in getting down into the network and cellular mechanisms, which is hard to do in living animals. And the engineering goal would be to get ideas from this system about how brains compute and process information.”

Though the "brain" can successfully control a flight simulation program, more elaborate applications are a long way off, DeMarse said.

“We’re just starting out. But using this model will help us understand the crucial bit of information between inputs and the stuff that comes out,” he said. “And you can imagine the more you learn about that, the more you can harness the computation of these neurons into a wide range of applications.”

20041008

Energy-time entanglement preservation in plasmon-assisted light transmission (quant-ph/0410064): "...the only soliton particle quantum state compatible with [our] results is a superposition of a single soliton particle existing at two different moments in time separated one from the other by a duration of thousands of times longer than its own lifetime. At a macroscopic level this would lead to a "Schrodinger cat" living at two epochs that differ by much more than a cat's lifetime."

20041006

Under the Surface, the Brain Seethes With Undiscovered Activity

There’s an old myth that we only use 10 percent of our brains, but researchers at the University of Rochester have found that, in reality, roughly 80 percent of our cognitive power may be cranking away on tasks completely unknown to us. Curiously, this clandestine activity does not exist in the youngest brains, leading scientists to believe that the mysterious goings-on that absorb the majority of our minds are dedicated to subconsciously reprocessing our initial thoughts and experiences. The research, which has potentially profound implications for our very basis of understanding reality, appears in this week’s issue of the journal Nature.

“We found neural activity that frankly surprised us,” says Michael Weliky, associate professor of brain and cognitive sciences at the University of Rochester. “Adult ferrets had neural patterns in their visual cortex that correlated very well with images they viewed, but that correlation didn’t exist at all in very young ferrets, suggesting the very basis of comprehending vision may be a very different task for young brains versus old brains.”

A second surprise was in store for Weliky. Placing the ferrets in a darkened room revealed that older ferrets’ brains were still humming along at 80 percent as if they were processing visual information. Since this activity was absent in the youngsters, Weliky and his colleagues were left to wonder: What is the visual cortex so busy processing when there’s no image to process?

Initially, Weliky’s research was aimed at studying whether visual processing bore any resemblance to the way real-world images appear. This finding may help lead to a better understanding of how neurons decode our world and how our perception of reality is shaped.

Weliky, in a bit of irony, set 12 ferrets watching the reality-stretching film The Matrix. He recorded how their brains responded to the film, as well as to a null pattern like enlarged television static, and a darkened room. Movies capture the visual elements that are present in the real world. For instance, as Keanu’s hand moves across the screen for a karate chop, the image of the hand and all the lines and color it represents moves across a viewer’s visual realm essentially the same way it would in real life. By contrast, the enlarged static—blocks of random black and white—has no such motion. Weliky was able to graph the movie-motion statistically, showing essentially how objects move in the visual field.

The test was then to see if there was any relationship between the statistical motion of the movie and the way visual neurons in the ferrets fired. Each visual neuron is keyed to respond to certain visual elements, such as a vertical line that appears in a specific area of the ferret’s vision. A great number of these cells combine to process an image of many lines, colors, etc. By watching the patterns of how these cells fired during The Matrix, Weliky could describe the pattern statistically and match the statistics of the ferrets’ responses to the film with the statistics of the film’s actual visual content.
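In outline, the comparison amounts to correlating a summary statistic of the stimulus with a summary statistic of the population response. The sketch below applies a plain Pearson correlation to made-up per-time-bin values; it is only a schematic of the idea, not the analysis published in Nature.

# Schematic version of the stimulus/response comparison: correlate a summary
# statistic of the movie (e.g., frame-to-frame motion energy) with a summary
# statistic of the recorded activity (e.g., binned population firing rate).
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-time-bin values, for illustration only.
motion_energy   = [0.10, 0.40, 0.90, 0.70, 0.20, 0.05, 0.30, 0.80]
population_rate = [12.0, 18.0, 31.0, 27.0, 14.0, 10.0, 16.0, 29.0]
print("stimulus/response correlation:", round(pearson(motion_energy, population_rate), 3))

A high correlation in the adult animals and a near-zero correlation in the young animals would correspond to the pattern described in the next paragraphs.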

Weliky found two surprises. First, while the neurons of adult ferrets statistically seemed to respond similarly to the statistics of the film itself, younger ferrets had almost no relationship. This suggests that though the young ferrets are taking in and processing visual stimuli, they’re not processing the stimuli in a way that reflects reality.

“You might think of this as a sort of dyslexia,” explains Weliky. “It may be that in very young brains, the processing takes place in a way that’s not necessarily disordered, but not analogous to how we understand reality to be. It’s thought that dyslexia works somewhat like this—that some parts of the brain process written words in an unusual way and seem to make beginnings of words appear at their ends and vice versa. Infant brains may see the entire world the same way, as a mass of disparate scenes and sounds.” Weliky is quick to point out that whatever way infant brains may interpret the world, just because they’re different from an adult pattern of perception does not mean the infants have the wrong perception. After all, the film’s visual statistics were computed by adults with adult brains, so it shouldn’t be surprising that other adult brains interpret those visual aspects the same way. If an infant drew up the statistics, they might very well match the neural patterns of other infants.

The second, and more surprising, result of the study came directly from the fact that Weliky’s research is one of the first to test these visual neurons while the subject is awake and watching something. In the past, researchers would perhaps shine a light at an unconscious ferret and note which areas of the brain responded, but while that method narrowed the focus to how a single cell responds, it eliminated the chance to understand how the neural network of a conscious animal would respond. Accepting all the neural traffic of a conscious brain as part of the equation let Weliky get a better idea of the actual processing going on. As it turned out, one of his control tests yielded insight into neural activity no one expected.

When the ferrets were in a darkened room, Weliky expected their visual neurons to lack any kind of activity that correlated with visual reality. Neurologists have long known that there is substantial activity in the brain, even in darkness, but the pattern of that activity had never been investigated. Weliky discovered that while young ferrets displayed almost no patterns that correlated with visual reality, the adult ferrets’ brains were humming along, producing the patterns even though there was nothing to see. When watching the film, the adult ferrets’ neurons increased their patterned activity by about 20 percent.

“This means that in adults, there is a tremendous amount of real-world processing going on—80 percent—when there is nothing to process,” says Weliky. “We think that if you’ve got your eyes closed, your visual processing is pretty much at zero, and that when you open them, you’re running at 100 percent. This suggests that with your eyes closed, your visual processing is already running at 80 percent, and that opening your eyes only adds the last 20 percent. The big question here is what is the brain doing when it’s idling, because it’s obviously doing something important.”

Since the young ferrets do not display similar patterns, the “idling” isn’t necessary for life or consciousness, but since it’s present in the adults even without stimulus, Weliky suggests it may be in a sense what gives the ferret its understanding of reality. The eye takes in an image and the brain processes the image, but 80 percent of the activity may be a representation of the world replicated inside the ferret’s brain.

“The basic findings are exciting enough, but you can’t help but speculate on what they might mean in a deeper context,” says Weliky. “It’s one thing to say a ferret’s understanding of reality is being reproduced inside his brain, but there’s nothing to say that our understanding of the world is accurate. In a way, our neural structure imposes a certain structure on the outside world, and all we know is that at least one other mammalian brain seems to impose the same structure. Either that or The Matrix freaked out the ferrets the way it did everyone else.”

This research was funded by the National Institutes of Health.

20041004

Testing Bell's inequality in a capacitively coupled Josephson circuit L.F. Wei, Yu-xi Liu, Franco Nori (quant-ph/0408089): "Bell's inequalities have been experimentally tested by using, e.g., far apart photons and very-closely-spaced trapped ions. Here, we propose a way to test Bell's inequality with a pair of capacitively coupled Josephson qubits; these coupled qubits exhibit macroscopic quantum entanglement as demonstrated by recent spectral-analysis experiments [Nature 421, 823 (2003); Science 300, 1548 (2003)]. We propose an effective dynamical decoupling approach to overcome the "fixed-interaction" difficulty for implementing the required single-qubit operations. The obtained long-lived entanglement and realizable simultaneous measurements of the two qubits should allow the testing of Bell's inequality using this coupled Josephson circuit."
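For the numbers behind such a test: for an ideal maximally entangled pair the correlation at analyzer settings a and b is E(a, b) = -cos(a - b), and the standard CHSH settings give |S| = 2*sqrt(2), above the local-realistic bound of 2. The sketch below is textbook CHSH, not the circuit-specific protocol of the paper.

# CHSH value for an ideal singlet pair: E(a, b) = -cos(a - b).
# With the standard analyzer settings the quantum prediction is 2*sqrt(2) ~ 2.83,
# exceeding the local-realistic bound |S| <= 2. Textbook illustration only.
import math

def E(a, b):
    return -math.cos(a - b)

a1, a2 = 0.0, math.pi / 2               # first party's two measurement settings
b1, b2 = math.pi / 4, 3 * math.pi / 4   # second party's two measurement settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print("CHSH S =", abs(S), "(local-realistic bound: 2)")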

20040921

Quantum entanglement by classical computer: a crucial experiment
Luigi Accardi, Centro V. Volterra, Roma

A simple experiment is described in which two experimenters, by performing independent, local, binary choices on a common classical, deterministic, macroscopic source of randomness (in fact a generator of random points in the unit disk in the plane) and computing the empirical correlations among their results, arrive at a violation of Bell's inequalities. The local binary choices satisfy all the standard conditions of the EPR experiment: singlet, equiprobability, rotation invariance, etc.

In addition, the experiment suggests a new interpretation of the usual EPR experiment, more natural and appealing from the physical point of view than the standard one and fully in line with the "chameleon effect" which is at the basis of the quantum probabilistic approach to the theory of quantum measurement.

A mathematical formulation of the "chameleon effect" will be discussed and illustrated with the mathematical model behind the computer programme used in the experiment. The result of the present experiment, long considered impossible by the majority of physicists, fully confirms the point of view advocated since the late 1970s by quantum probability, in absolute isolation and against the strong opposition of the majority of physicists who, following Bell's interpretation, attributed the violation of Bell's inequality to a nonlocality effect.

In particular the experiment proves that:

(i) it is possible to produce non-Kolmogorovian correlations by local realistic classical deterministic macroscopic systems

(ii) it is possible to produce quantum entanglement by classical computer


This opens the way to a series of new possibilities, for example implementing quantum cryptography by classical computer. The experiment will be described and performed during the talk, and the audience will be able to check the procedure by choosing the measurement parameters themselves. An earlier version of the experiment is available at the Volterra Institute.

20040915

Chips Coming to a Brain Near You

In this era of high-tech memory management, next in line to get that memory upgrade isn't your computer, it's you.

Professor Theodore W. Berger, director of the Center for Neural Engineering at the University of Southern California, is creating a silicon chip implant that mimics the hippocampus, an area of the brain known for creating memories. If successful, the artificial brain prosthesis could replace its biological counterpart, enabling people who suffer from memory disorders to regain the ability to store new memories.

And it's no longer a question of "if" but "when." The six teams involved in the multi-laboratory effort, including USC, the University of Kentucky and Wake Forest University, have been working together on different components of the neural prosthetic for nearly a decade. They will present the results of their efforts at the Society for Neuroscience's annual meeting in San Diego, which begins Saturday.

While they haven't tested the microchip in live rats yet, their research using slices of rat brain indicates the chip functions with 95 percent accuracy. It's a result that's got the scientific community excited.

"It's a new direction in neural prosthesis," said Howard Eichenbaum , director of the Laboratory of Cognitive Neurobiology at Boston University. "The Berger enterprise is ambitious, aiming to provide a prosthesis for memory. The need is high, because of the prevalence of memory disorder in aging and disease associated with loss of function in the hippocampus."

Forming new long-term memories may involve such tasks as learning to recognize a new face, or remembering a telephone number or directions to a new location. Success depends on the proper functioning of the hippocampus. While this part of the brain doesn't store long-term memories, it re-encodes short-term memory so it can be stored as long-term memory.

It's the area that's often damaged as a result of head trauma, stroke, epilepsy and neurodegenerative disorders such as Alzheimer's disease. Currently, no clinically recognized treatments exist for a damaged hippocampus and the accompanying memory disorders.

Berger's team began its research by studying the re-encoding process performed by neurons in slices of rat hippocampi kept alive in nutrients. By stimulating these neurons with randomly generated computer signals and studying the output patterns, the group determined a set of mathematical functions that transformed any given arbitrary input pattern in the same manner that the biological neurons do. And according to the researchers, that's the key to the whole issue.
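In spirit, the characterization described above is a system-identification problem: drive the tissue with known inputs, record the outputs, and fit a transformation that reproduces the mapping regardless of what the inputs represent. The sketch below fits a simple least-squares linear map as a stand-in; Berger's actual models are nonlinear dynamical models, so this only illustrates the workflow, and all the data in it are synthetic.

# Schematic "identify the input -> output transformation" workflow:
# stimulate with known inputs, record responses, fit a map that reproduces them.
# A plain least-squares linear fit stands in for the nonlinear hippocampal models.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for recorded data: each row is one stimulation trial.
X = rng.normal(size=(500, 16))             # input patterns delivered to the tissue
true_map = rng.normal(size=(16, 8))        # unknown transformation to be identified
Y = np.tanh(X @ true_map) + 0.05 * rng.normal(size=(500, 8))   # recorded responses

# Fit a linear approximation W minimizing ||X W - Y||^2.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Check how well the fitted map predicts held-out responses.
X_test = rng.normal(size=(100, 16))
Y_test = np.tanh(X_test @ true_map)
print("mean squared prediction error:", round(float(np.mean((X_test @ W - Y_test) ** 2)), 4))

As Berger explains next, the point is that the prosthesis never needs to know what a given input "means", only how the healthy tissue would transform it.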

"It's an impossible task to figure out what your grandmother looks like and how I would encode that," said Berger. "We all do a lot of different things, so we can't create a table of all the things we can possibly look at and how it's encoded in the hippocampus. What we can do is ask, 'What kind of transformation does the hippocampus perform?'

"If you can figure out how the inputs are transformed, then you do have a prosthesis. Then I could put that into somebody's brain to replace it, and I don't care what they look at -- I've replaced the damaged hippocampus with the electronic one, and it's going to transform inputs into outputs just like the cells of the biological hippocampus."

Dr. John J. Granacki, director of the Advanced Systems Division at USC, has been working on translating these mathematical functions onto a microchip. The resulting chip is meant to simulate the processing of biological neurons in the slice of rat hippocampus: accepting electrical impulses, processing them and then sending on the transformed signals. The researchers say the microchip is doing exactly that, with a stunning 95 percent accuracy rate.

"If you were looking at the output right now, you wouldn't be able to tell the difference between the biological hippocampus and the microchip hippocampus," Berger said. "It looks like it's working."

The team next plans to work with live rats that are moving around and learning, and will study monkeys later. The researchers will investigate drugs or other means that could temporarily deactivate the biological hippocampus, then implant the microchip on the animal's head with electrodes extending into its brain.

"We will attempt to adapt the artificial hippocampus to the live animal and then show that the animal's performance -- dependent in these tasks on an intact hippocampus -- will not be compromised when the device is in place and we temporarily interrupt the normal function of the hippocampus," said Sam A. Deadwyler , "thus allowing the neuro-prosthetic device to take over that normal function." Deadwyler, a professor at Wake Forest University, is working on measuring the hippocampal neuron activity in live rats and monkeys.

The team expects it will take two to three years to develop the mathematical models for the hippocampus of a live, active rat and translate them onto a microchip, and seven or eight years for a monkey. They hope to apply this approach to clinical applications within 10 years. If everything goes well, they anticipate seeing an artificial human hippocampus, potentially usable for a variety of clinical disorders, in 15 years.

Overall, experts find the results promising.

"We are nowhere near applicability," said Boston University's Eichenbaum. "But the next decade will prove whether this strategy is truly feasible."

"There is a big gap in making the microchip work in a slice preparation and getting it to work in a human being," added Norbert Fortin, a neuroscientist from the Cognitive Neurobiology Lab at Boston University. "However, their approach is very methodical, and it is not unreasonable to think that in 15 to 20 years such a chip could help, to some degree, a patient who suffered from hippocampal damage."

Research Group, Wired Link




20040822

Remote Sensing Applications

Quantum information science offers a number of advantages in metrology and remote sensing applications. NASA's JPL Quantum Computing Technologies division is actively pursuing research in these areas:

Quantum Lithography
Quantum Gyroscopes
Quantum Clock Synchronization
SQUID-Based Atom Interferometric Gravity Gradiometers

20040309

Quantum Information Science and Technology Project, Tokyo, Japan. The QUIST/Tokyo team recently traveled to Korea to visit national quantum information research centers in the region, with a focus on recent developments in quantum algorithms. We visited the Quantum Information Sciences laboratory at the Korea Institute for Advanced Study, led by Dr. Jaewan Kim, and the School of Mathematical Sciences at Seoul National University, led by principal researcher Dr. Dong Pyo Chi. A public copy of the report is available as ATIP04.004.