Monday, October 31, 2011

Astronomers discover complex organic matter in the universe

Organic compounds of unexpected complexity exist throughout the universe, Prof. Sun Kwok and Dr. Yong Zhang of the University of Hong Kong have discovered, suggesting that complex organic compounds can be synthesized in space even when no life forms are present.

The organic substance they found contains a mixture of aromatic (ring-like) and aliphatic (chain-like) components that are so complex, their chemical structures resemble those of coal and petroleum. Since coal and oil are remnants of ancient life, this type of organic matter was thought to arise only from living organisms.

Unidentified radiation from the universe

The researchers investigated an unsolved phenomenon: a set of infrared emissions detected in stars, interstellar space, and galaxies, known as “Unidentified Infrared Emission features.” From observations taken by the Infrared Space Observatory and the Spitzer Space Telescope, Kwok and Zhang showed that the astronomical spectra have chemical structures that are much more complex than previously thought. By analyzing spectra of star dust formed in exploding stars called novae, they showed that stars are making these complex organic compounds on extremely short time scales of weeks and ejecting them into general interstellar space, the region between stars.

“Our work has shown that stars have no problem making complex organic compounds under near-vacuum conditions,” says Kwok. “Theoretically, this is impossible, but observationally we can see it happening.”

Most interestingly, this organic star dust is similar in structure to complex organic compounds found in meteorites. Since meteorites are remnants of the early Solar System, the findings raise the possibility that stars enriched the early Solar System with organic compounds. The early Earth was subjected to severe bombardments by comets and asteroids, which potentially could have carried organic star dust. Whether these delivered organic compounds played any role in the development of life on Earth remains an open question.

Read more:

Global Source and/or more resources and/or read more: ─ Publisher and/or Author and/or Managing Editor: Andres Agostini ─ @Futuretronium at Twitter! Futuretronium Book at
Invisibility tiles can cloak any shape

Oliver Paul at the University of Kaiserslautern in Germany and associates have revealed a practical way of making invisibility cloaks of any size and shape, The Physics ArXiv Blog reports.

Creating a cloak that exactly follows the shape of the object it is intended to hide is difficult because curved cloaks are hard to fabricate, so the team approximated the shape using flat facets.
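The facet approach is essentially the same trade-off as approximating a circle with a regular polygon: more facets mean a smaller deviation from the true curve. A minimal sketch (the 10 cm radius and facet counts are illustrative assumptions, not figures from the paper):

```python
import math

def facet_error(radius, n_facets):
    """Max radial deviation when a circle of the given radius is
    approximated by a regular polygon with n_facets flat sides.
    The midpoint of each facet sits at the apothem distance
    radius * cos(pi / n_facets) from the center."""
    return radius * (1 - math.cos(math.pi / n_facets))

# Approximating a 10 cm boundary with 36 flat facets:
err = facet_error(0.10, 36)
print(f"max deviation: {err * 1000:.3f} mm")
```

Doubling the facet count roughly quarters the deviation, which is why a modest number of flat tiles can already follow an arbitrary shape closely.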

The making of Arduino

Arduino recently unveiled the Arduino Due, a board with a 32-bit Cortex-M3 ARM processor that offers more computing power for makers with complex projects such as FM radios, 3-D printer kits, or drones.

Google has also released an Arduino-based developer board that lets an Android phone interact with motors, sensors, and other devices. This permits building Android apps that use the phone’s camera, motion sensors, touch screen, and Internet connectivity to control a display or a robot.

Arduino is a low-cost microcontroller board that can be connected to all kinds of sensors, lights, motors, and other devices, with easy-to-learn programming software. Arduino has spawned an international do-it-yourself revolution in electronics. More than 250,000 Arduino boards have been sold around the world. You can buy an Arduino board for about US $30 or build your own from scratch: all hardware schematics and source code are available for free under public licenses. As a result, Arduino has become the most influential open-source hardware movement of its time.

HyQ quadruped robot from Italy can trot, kick

The HyQ (hydraulic quadruped) robot, developed by engineers at the Istituto Italiano di Tecnologia (IIT), is designed to perform highly dynamic tasks such as running and jumping, an IEEE Spectrum blog reports.

Legged locomotion remains one of the biggest challenges in robotics, and the Italian team hopes that their robot can become a platform for research and collaboration among different groups — a kind of open-source BigDog (more like “LittleDog,” but this one can kick).

HyQ, which weighs in at 70 kilograms, can walk and trot at speeds up to 6 kilometers per hour.

The IIT researchers say it could be used for search and rescue missions in dangerous environments. You could send the robot to navigate autonomously looking for victims, for example, or teleoperate it to investigate a disaster-stricken zone.

ARM CTO predicts chips the size of blood cells
The chip design company is on its way to making chips no bigger than a red blood cell, its CTO says

In less than a decade, that smartphone you're holding could have 32 times the memory, 20 times the bandwidth and a microprocessor core no bigger than a red blood cell, the CTO of chip design company ARM said on Thursday.

ARM has already helped develop a prototype, implantable device for monitoring eye-pressure in glaucoma patients that measures just 1 cubic millimeter, CTO Mike Muller said at ARM's TechCon conference in Silicon Valley Thursday. The device includes a microprocessor sandwiched between sensors at the top and a battery at the bottom.

Strip away those extra components, rearrange the transistors into a cube and apply the type of advanced manufacturing process expected in 2020, and you'd end up with a device that occupies about the same volume as a blood cell, Muller said.
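As a rough back-of-envelope check of that claim (the ~90 cubic micrometer red-blood-cell volume is an assumed typical figure, not from the talk):

```python
# Compare the 1 mm^3 implant to an assumed red-blood-cell volume.
cell_um3 = 90.0          # assumed typical red blood cell volume, in um^3
device_um3 = 1e9         # 1 cubic millimeter expressed in cubic micrometers

volume_ratio = device_um3 / cell_um3
linear_shrink = volume_ratio ** (1 / 3)   # shrink factor per dimension
print(f"volume ratio: {volume_ratio:.1e}, linear shrink: ~{linear_shrink:.0f}x")
```

A roughly ten-million-fold volume reduction sounds extreme, but it corresponds to only a couple of hundredfold shrink in each linear dimension, which is the kind of gap a decade of process scaling plus removing the battery and sensors could plausibly close.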

ARM designs the processor cores used in most of today's smartphones and tablets, and smaller cores are generally more energy efficient, he said. That helps to extend battery life.

That's a good thing, because battery technology is advancing much more slowly, and Muller expects only twice the improvement in battery performance by the end of the decade.

That could be a gating factor for all the other improvements, so the electrical systems inside portable devices will have to be redesigned so that people don't have to recharge them multiple times a day.

For example, smartphones today contain basically a single compute system, with one type of CPU and some memory attached. But the tasks performed by smartphones, such as making a call or playing a 3D game, require very different levels of performance.

So in the future, Muller said, "some systems will have entire subsystems within them, including their own CPU and their own memory," devoted to a particular task such as music playback. That way, other subsystems in a device can be shut down, conserving battery life.

It's a model ARM is already pursuing with its big.LITTLE architecture, announced last week. That design puts two types of processor core in the same device, one powerful and one less so, and uses the more power-appropriate core for the task at hand. The idea of entire subsystems takes that a step further.
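As an illustrative sketch only (not ARM's actual scheduling logic; the task names and threshold are invented for illustration), the big/little idea amounts to routing each task to the least powerful core that can handle it:

```python
def pick_core(task_demand, threshold=0.5):
    """Route a task to the 'big' or 'little' core.
    task_demand is a normalized 0..1 estimate of required performance;
    the 0.5 threshold is an arbitrary illustrative choice."""
    return "big" if task_demand > threshold else "little"

# Hypothetical workloads with assumed demand levels:
assignments = {name: pick_core(d) for name, d in
               [("3d_game", 0.9), ("music_playback", 0.1), ("voice_call", 0.3)]}
print(assignments)
```

Light tasks stay on the small, efficient core while the big core (and, in the subsystem model, entire unused blocks) can be powered down.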

The bandwidth gains in 2020 will come mostly from advances in topology, according to Muller: basically, increasing the number of cellular base stations. Spectrum, and the technologies used to send bits across that spectrum, won't advance much, he predicted.

That's okay for people in cities, where it can make financial sense to install more base stations. "If you're out in the middle of nowhere, I'm sorry, there's not going to be much big change for you," Muller said.

Scientists measure dream content for the first time

Researchers at the Max Planck Institute for Psychiatry and Charité Hospital have succeeded in analyzing the activity of the brain during dreaming.

They did this with the help of lucid dreamers, people who become aware of their dreaming state and are able to alter the content of their dreams. The lucid dreamers were asked to become aware of their dream while sleeping in an MRI scanner and to report this “lucid” state to the researchers by means of eye movements. They were then asked to voluntarily “dream” that they were repeatedly clenching first their right fist and then their left one for ten seconds.

This enabled the scientists to measure the entry into REM sleep — a phase in which dreams are perceived particularly intensively — with the help of the subject’s electroencephalogram (EEG) and to detect the beginning of a lucid phase. The brain activity measured from this time onwards corresponded with the arranged “dream” involving the fist clenching.

A region in the sensorimotor cortex of the brain, which is responsible for the execution of movements, was actually activated during the dream. The coincidence of the brain activity measured during dreaming and the conscious action shows that dream content can be measured. “With this combination of sleep EEGs, imaging methods and lucid dreamers, we can measure not only simple movements during sleep but also the activity patterns in the brain during visual dream perceptions,” says Martin Dresler, a researcher at the Max Planck Institute for Psychiatry.

The researchers were able to confirm the data obtained using MR imaging in another subject using a different technology. With the help of near-infrared spectroscopy, they also observed increased activity in a region of the brain that plays an important role in the planning of movements. “Our dreams are therefore not a ‘sleep cinema’ in which we merely observe an event passively, but involve activity in the regions of the brain that are relevant to the dream content,” explains Michael Czisch, research group leader at the Max Planck Institute for Psychiatry.

Kurzweil responds: Don’t underestimate the Singularity

Last week, Paul Allen and a colleague challenged the prediction that computers will soon exceed human intelligence. Now Ray Kurzweil, the leading proponent of the “Singularity,” offers a rebuttal. — Technology Review, Oct. 10, 2011.

Although Paul Allen paraphrases my 2005 book, The Singularity Is Near, in the title of his essay (cowritten with his colleague Mark Greaves), it appears that he has not actually read the book. His only citation is to an essay I wrote in 2001 (“The Law of Accelerating Returns”), and his article does not acknowledge or respond to arguments I actually make in the book.

When my 1999 book, The Age of Spiritual Machines, was published, and augmented a couple of years later by the 2001 essay, it generated several lines of criticism: that Moore’s law will come to an end, that hardware capability may be expanding exponentially but software is stuck in the mud, that the brain is too complicated, that there are capabilities in the brain that inherently cannot be replicated in software, and several others. I specifically wrote The Singularity Is Near to respond to those critiques.

I cannot say that Allen would necessarily be convinced by the arguments I make in the book, but at least he could have responded to what I actually wrote. Instead, he offers de novo arguments as if nothing has ever been written to respond to these issues. Allen’s descriptions of my own positions appear to be drawn from my 10-year-old essay. While I continue to stand by that essay, Allen does not summarize my positions correctly even from that essay.

Allen writes that “the Law of Accelerating Returns (LOAR). . . is not a physical law.” I would point out that most scientific laws are not physical laws, but result from the emergent properties of a large number of events at a finer level. A classical example is the laws of thermodynamics (LOT). If you look at the mathematics underlying the LOT, it models each particle as following a random walk, so by definition we cannot predict where any particular particle will be at any future time. Yet the overall properties of the gas are predictable to a high degree of precision according to the laws of thermodynamics. So it is with the law of accelerating returns. Each technology project and contributor is unpredictable, yet the overall trajectory, as quantified by basic measures of price-performance and capacity, nonetheless follows remarkably predictable paths.
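The random-walk analogy can be shown numerically: any single particle's endpoint is unpredictable, yet the ensemble behaves predictably. A minimal sketch (the unit ±1 steps and sample sizes are arbitrary illustrative choices):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def random_walk(steps):
    """Position after `steps` unit steps, each +1 or -1 with equal probability."""
    return sum(random.choice((-1, 1)) for _ in range(steps))

# Individual walks scatter widely, but the ensemble mean is tightly
# concentrated near zero, as the law of large numbers predicts.
walks = [random_walk(1000) for _ in range(5000)]
mean = sum(walks) / len(walks)
print(f"ensemble mean after 1000 steps: {mean:.2f}")
print(f"widest single excursion: {max(abs(w) for w in walks)}")
```

The same pattern (unpredictable individual contributors, predictable aggregate) is what the LOAR claims for technology trends.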

If computer technology were being pursued by only a handful of researchers, it would indeed be unpredictable. But it’s being pursued by a sufficiently dynamic system of competitive projects that a basic measure such as instructions per second per constant dollar follows a very smooth exponential path going back to the 1890 American census. I discuss the theoretical basis for the LOAR extensively in my book, but the strongest case is made by the extensive empirical evidence that I and others present.

Allen writes that “these ‘laws’ work until they don’t.” Here, Allen is confusing paradigms with the ongoing trajectory of a basic area of information technology. If we were examining the trend of creating ever-smaller vacuum tubes, the paradigm for improving computation in the 1950s, it’s true that this specific trend continued until it didn’t. But as the end of this particular paradigm became clear, research pressure grew for the next paradigm. The technology of transistors kept the underlying trend of the exponential growth of price-performance going, and that led to the fifth paradigm (Moore’s law) and the continual compression of features on integrated circuits. There have been regular predictions that Moore’s law will come to an end. The semiconductor industry’s roadmap projects seven-nanometer features by the early 2020s. At that point, key features will be the width of 35 carbon atoms, and it will be difficult to continue shrinking them. However, Intel and other chip makers are already taking the first steps toward the sixth paradigm, which is computing in three dimensions to continue exponential improvement in price-performance. Intel projects that three-dimensional chips will be mainstream by the teen years. Three-dimensional transistors and three-dimensional memory chips have already been introduced.

This sixth paradigm will keep the LOAR going with regard to computer price performance to the point, later in this century, where a thousand dollars of computation will be trillions of times more powerful than the human brain. And it appears that Allen and I are at least in agreement on what level of computation is required to functionally simulate the human brain.

Allen then goes on to give the standard argument that software is not progressing in the same exponential manner of hardware. In The Singularity Is Near, I address this issue at length, citing different methods of measuring complexity and capability in software that demonstrate a similar exponential growth. One recent study (“Report to the President and Congress, Designing a Digital Future: Federally Funded Research and Development in Networking and Information Technology” by the President’s Council of Advisors on Science and Technology) states the following:

“Even more remarkable — and even less widely understood — is that in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed. The algorithms that we use today for speech recognition, for natural language translation, for chess playing, for logistics planning, have evolved remarkably in the past decade … Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later—in 2003—this same model could be solved in roughly one minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008. The design and analysis of algorithms, and the study of the inherent computational complexity of problems, are fundamental subfields of computer science.”

I cite many other examples like this in the book.

Regarding AI, Allen is quick to dismiss IBM’s Watson as narrow, rigid, and brittle. I get the sense that Allen would dismiss any demonstration short of a valid passing of the Turing test. I would point out that Watson is not so narrow. It deals with a vast range of human knowledge and is capable of dealing with subtle forms of language, including puns, similes, and metaphors. It’s not perfect, but neither are humans, and it was good enough to get a higher score than the best two human Jeopardy! players put together.

Allen writes that Watson was put together by the scientists themselves, building each link of narrow knowledge in specific areas. Although some areas of Watson’s knowledge were programmed directly, according to IBM, Watson acquired most of its knowledge on its own by reading natural language documents such as encyclopedias. That represents its key strength. It not only is able to understand the convoluted language in Jeopardy! queries (answers in search of a question), but it acquired its knowledge by reading vast amounts of natural-language documents. IBM is now working with Nuance (a company I originally founded as Kurzweil Computer Products) to have Watson read tens of thousands of medical articles to create a medical diagnostician.

A word on the nature of Watson’s “understanding” is in order here. A lot has been written that Watson works through statistical knowledge rather than “true” understanding. Many readers interpret this to mean that Watson is merely gathering statistics on word sequences. The term “statistical information” in the case of Watson refers to distributed coefficients in self-organizing methods such as Markov models. One could just as easily refer to the distributed neurotransmitter concentrations in the human cortex as “statistical information.” Indeed, we resolve ambiguities in much the same way that Watson does by considering the likelihood of different interpretations of a phrase.

Allen writes: “Every structure [in the brain] has been precisely shaped by millions of years of evolution to do a particular thing, whatever it might be. It is not like a computer, with billions of identical transistors in regular memory arrays that are controlled by a CPU with a few different elements. In the brain, every individual structure and neural circuit has been individually refined by evolution and environmental factors.”

Allen’s statement that every structure and neural circuit is unique is simply impossible. That would mean that the design of the brain would require hundreds of trillions of bytes of information. Yet the design of the brain (like the rest of the body) is contained in the genome. And while the translation of the genome into a brain is not straightforward, the brain cannot have more design information than the genome. Note that epigenetic information (such as the peptides controlling gene expression) does not appreciably add to the amount of information in the genome. Experience and learning do add significantly to the amount of information, but the same can be said of AI systems. I show in The Singularity Is Near that after lossless compression (due to massive redundancy in the genome), the amount of design information in the genome is about 50 million bytes, roughly half of which pertains to the brain. That’s not simple, but it is a level of complexity we can deal with, and it represents less complexity than many software systems in the modern world.

How do we get on the order of 100 trillion connections in the brain from only tens of millions of bytes of design information? Obviously, the answer is through redundancy. There are on the order of a billion pattern-recognition mechanisms in the cortex. They are interconnected in intricate ways, but even in the connections there is massive redundancy. The cerebellum also has billions of repeated patterns of neurons. It is true that the massively repeated structures in the brain learn different items of information as we learn and gain experience, but the same thing is true of artificially intelligent systems such as Watson.
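As a back-of-envelope consistency check of this redundancy argument (using only the round figures quoted in the text, not independent neuroscience data):

```python
# ~100 trillion connections encoded by ~25 MB of brain-related design data
# (about half of the ~50 MB losslessly compressed genome).
connections = 100e12
brain_design_bytes = 25e6

redundancy = connections / brain_design_bytes
print(f"implied redundancy factor: {redundancy:.0e}")
```

The quoted figures imply each byte of design information is reused on the order of a few million times, which is the quantitative content of the "massive redundancy" claim.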

Dharmendra S. Modha, manager of cognitive computing for IBM Research, writes: “…neuroanatomists have not found a hopelessly tangled, arbitrarily connected network, completely idiosyncratic to the brain of each individual, but instead a great deal of repeating structure within an individual brain and a great deal of homology across species … The astonishing natural reconfigurability gives hope that the core algorithms of neurocomputation are independent of the specific sensory or motor modalities and that much of the observed variation in cortical structure across areas represents a refinement of a canonical circuit; it is indeed this canonical circuit we wish to reverse engineer.”

Allen articulates what I describe in my book as the “scientist’s pessimism.” Scientists working on the next generation are invariably struggling with that next set of challenges, so if someone describes what the technology will look like in 10 generations, their eyes glaze over. One of the pioneers of integrated circuits recently described to me the struggles, over 30 years ago, to go from 10-micron (10,000-nanometer) feature sizes to five-micron (5,000-nanometer) features. The engineers were cautiously confident of that goal, but when people predicted that someday we would have circuitry with feature sizes under one micron (1,000 nanometers), most of the scientists struggling to get to five microns thought that was too wild to contemplate. Objections were raised about the fragility of circuitry at that level of precision, thermal effects, and so on. Well, today, Intel is starting to use chips with 22-nanometer gate lengths.

We saw the same pessimism with the genome project. Halfway through the 15-year project, only 1 percent of the genome had been sequenced, and critics were proposing basic limits on how quickly the genome could be sequenced without destroying the delicate genetic structures. But the exponential growth in both capacity and price-performance continued (both roughly doubling every year), and the project was finished seven years later. The project to reverse-engineer the human brain is making similar progress. Only recently, for example, have noninvasive scanning techniques reached a threshold at which we can see individual interneuronal connections forming and firing in real time.
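The arithmetic behind that finish is simple doubling: 1 percent doubled every year passes 100 percent after seven doublings.

```python
# 1% of the genome done at the halfway mark, throughput doubling yearly:
done = 0.01
years = 0
while done < 1.0:
    done *= 2
    years += 1
print(f"years to completion: {years}")  # 0.01 -> 0.02 -> ... -> 1.28
```

Seven doublings take 1 percent to 128 percent, matching the quoted seven years, which is why the "halfway and only 1 percent done" criticism misread an exponential process as a linear one.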

Allen’s “complexity brake” confuses the forest with the trees. If you want to understand, model, simulate, and re-create a pancreas, you don’t need to re-create or simulate every organelle in every pancreatic islet cell. You would want, instead, to fully understand one islet cell, then abstract its basic functionality, and then extend that to a large group of such cells. This approach is well understood with regard to islet cells; artificial pancreases that use this functional model are now being tested. Although there is certainly far more intricacy and variation in the brain than in the massively repeated islet cells of the pancreas, there is nonetheless massive repetition of functions.

Allen mischaracterizes my proposal to learn about the brain from scanning the brain to understand its fine structure. It is not my proposal to simulate an entire brain “bottom up” without understanding the information processing functions. We do need to understand in detail how individual types of neurons work, and then gather information about how functional modules are connected. The functional methods that are derived from this type of analysis can then guide the development of intelligent systems. Basically, we are looking for biologically inspired methods that can accelerate work in AI, much of which has progressed without significant insight as to how the brain performs similar functions. From my own work in speech recognition, I know that our work was greatly accelerated when we gained insights as to how the brain prepares and transforms auditory information.

The way that these massively redundant structures in the brain differentiate is through learning and experience. The current state of the art in AI does, however, enable systems to also learn from their own experience. The Google self-driving cars (which have driven over 140,000 miles through California cities and towns) learn from their own driving experience as well as from Google cars driven by human drivers. As I mentioned, Watson learned most of its knowledge by reading on its own.

It is true that Watson is not quite at human levels in its ability to understand human language (if it were, we would be at the Turing test level now), yet it was able to defeat the best humans. This is because of the inherent speed and reliability of memory that computers have. So when a computer does reach human levels, which I believe will happen by the end of the 2020s, it will be able to go out on the Web and read billions of pages as well as have experiences in online virtual worlds. Combining human-level pattern recognition with the inherent speed and accuracy of computers will be very powerful. But this is not an alien invasion of intelligent machines—we create these tools to make ourselves smarter. I think Allen will agree with me that this is what is unique about the human species: we build these tools to extend our own reach.

The New Einsteins Will Be Scientists Who Share

From cancer to cosmology, researchers could race ahead by working together—online and in the open

In January 2009, a mathematician at Cambridge University named Tim Gowers decided to use his blog to run an unusual social experiment. He picked out a difficult mathematical problem and tried to solve it completely in the open, using his blog to post ideas and partial progress. He issued an open invitation for others to contribute their own ideas, hoping that many minds would be more powerful than one. He dubbed the experiment the Polymath Project.

Several hours after Mr. Gowers opened up his blog for discussion, a Canadian-Hungarian mathematician posted a comment. Fifteen minutes later, an Arizona high-school math teacher chimed in. Three minutes after that, the UCLA mathematician Terence Tao commented. The discussion ignited, and in just six weeks, the mathematical problem had been solved.

Other challenges have followed, and though the polymaths haven't found solutions every time, they have pioneered a new approach to problem-solving. Their work is an example of the experiments in networked science that are now being done to study everything from galaxies to dinosaurs.

These projects use online tools as cognitive tools to amplify our collective intelligence. The tools are a way of connecting the right people to the right problems at the right time, activating what would otherwise be latent expertise.

Networked science has the potential to speed up dramatically the rate of discovery across all of science. We may well see the day-to-day process of scientific research change more fundamentally over the next few decades than over the past three centuries.

But there are major obstacles to realizing this goal. Though you might think that scientists would aggressively adopt new tools for discovery, they have been surprisingly inhibited. Ventures such as the Polymath Project remain the exception, not the rule.

Consider the idea of sharing scientific data online. The best-known example of this is the human genome project, whose data may be downloaded by anyone. When you read in the news that a certain gene is associated with a particular disease, you're almost certainly seeing a discovery made possible by the project's open-data policy.

Despite the value of open data, most labs make no systematic effort to share data with other scientists. As one biologist told me, he had been "sitting on [the] genome" for an entire species of life for more than a year. A whole species of life! Just imagine the vital discoveries that other scientists could have made if that genome had been uploaded to an online database.

Why don't scientists share?

If you're a scientist applying for a job or a grant, the biggest factor determining your success will be your record of scientific publications. If that record is stellar, you'll do well. If not, you'll have a problem. So you devote your working hours to tasks that will lead to papers in scientific journals.

Even if you personally think it would be far better for science as a whole if you carefully curated and shared your data online, that is time away from your "real" work of writing papers. Except in a few fields, sharing data is not something your peers will give you credit for doing.

There are other ways in which scientists are still backward in using online tools. Consider, for example, the open scientific wikis launched by a few brave pioneers in fields like quantum computing, string theory and genetics (a wiki allows the sharing and collaborative editing of an interlinked body of information, the best-known example being Wikipedia).

Specialized wikis could serve as up-to-date reference works on the latest research in a field, like rapidly evolving super-textbooks. They could include descriptions of major unsolved scientific problems and serve as a tool to find solutions.

But most such wikis have failed. They have the same problem as data sharing: Even if scientists believe in the value of contributing, they know that writing a single mediocre paper will do far more for their careers. The incentives are all wrong.

If networked science is to reach its potential, scientists will have to embrace and reward the open sharing of all forms of scientific knowledge, not just traditional journal publication. Networked science must be open science. But how to get there?

A good start would be for government grant agencies (like the National Institutes of Health and the National Science Foundation) to work with scientists to develop requirements for the open sharing of knowledge that is discovered with public support. Such policies have already helped to create open data sets like the one for the human genome. But they should be extended to require earlier and broader sharing. Grant agencies also should do more to encourage scientists to submit new kinds of evidence of their impact in their fields—not just papers!—as part of their applications for funding.

The scientific community itself needs to have an energetic, ongoing conversation about the value of these new tools. We have to overthrow the idea that it's a diversion from "real" work when scientists conduct high-quality research in the open. Publicly funded science should be open science.

Improving the way that science is done means speeding us along in curing cancer, solving the problem of climate change and launching humanity permanently into space. It means fundamental insights into the human condition, into how the universe works and what it's made of. It means discoveries not yet dreamt of.

In the years ahead, we have an astonishing opportunity to reinvent discovery itself. But to do so, we must first choose to create a scientific culture that embraces the open sharing of knowledge.

Read more:


Global Source and/or and/or more resources and/or read more: ─ Publisher and/or Author and/or Managing Editor:__Andres Agostini ─ @Futuretronium at Twitter! Futuretronium Book at

Thursday, October 20, 2011


Wednesday, October 19, 2011

iPhone accelerometer detects keystrokes

What if a hacker could use your cell phone to track what you are typing on a keyboard?

A research team at Georgia Tech has discovered how to do exactly that, using a smartphone accelerometer — the internal device that detects when and how the phone is tilted — to sense keyboard vibrations and decipher complete sentences with up to 80 percent accuracy.

“We first tried our experiments with an iPhone 3GS, and the results were difficult to read,” said Patrick Traynor, assistant professor in Georgia Tech’s School of Computer Science. “But then we tried an iPhone 4, which has an added gyroscope to clean up the accelerometer noise, and the results were much better. We believe that most smartphones made in the past two years are sophisticated enough to launch this attack.”

Previously, Traynor said, researchers had accomplished similar results using microphones, but a microphone is a much more sensitive instrument than an accelerometer. A typical smartphone’s microphone samples vibration roughly 44,000 times per second, while even newer phones’ accelerometers sample just 100 times per second — more than two orders of magnitude less often.
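The gap between the two sensors can be made concrete with a quick back-of-the-envelope calculation (44,100 Hz is the standard audio sampling rate, which the article rounds to 44,000; the figures below are just that arithmetic, not measurements):

```python
import math

mic_rate = 44_100    # typical smartphone microphone sample rate, Hz
accel_rate = 100     # accelerometer sample rate on newer phones, Hz

ratio = mic_rate / accel_rate
orders_of_magnitude = math.log10(ratio)

print(ratio)                          # 441.0
print(round(orders_of_magnitude, 2))  # 2.64
```

So the microphone samples about 441 times as often — a bit more than two and a half orders of magnitude — which is why recovering keystrokes from accelerometer data alone is so much harder.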

Wireless microelectronic stimulators for spinal cord injuries tested in animals

NJIT researchers have carried out animal tests of wireless neural stimulators called FLAMES (floating light activated micro-electrical stimulators), intended for individuals with spinal cord injuries.

The FLAMES technology uses tiny semiconductor devices energized by a near-infrared light beam delivered through an optical fiber located just outside the spinal cord. The devices are designed to activate the nerves in the spinal cord below the point of injury and thus allow the use of paralyzed muscles. Each device is implanted into the spinal cord and then allowed to float in the tissue; there are no attached wires. A patient pushes a button on the external unit to activate the laser, which in turn activates the FLAMES device.

“The unique aspect of the project is that the implanted stimulators are very small, in the sub-millimeter range,” said Mesut Sahin, the NJIT professor leading the project. “A key benefit is that since our device is wireless, the connections can’t deteriorate over time; plus, the implant causes minimal reaction in the tissue, which is a common problem with similar wired devices.”

The electrical activation of the central and peripheral nervous system has been investigated as a treatment for neural disorders for many decades, and a number of devices have already moved successfully into clinical use, such as cochlear implants and spinal cord stimulation for pain management. Others are on the way, such as microstimulation of the spinal cord to restore locomotion; of the cochlear nucleus, midbrain, or auditory cortex to better restore hearing; and of the visual cortex in blind subjects. All of them, however, are wired.

The work is now in its third year of support from a four-year, $1.4 million National Institutes of Health (NIH) grant.

 Does future hold ‘Avatar’-like bodies for us?

Dmitry Itskov introduced his “Project Immortality 2045: Russian Experience” at the Singularity Summit in New York, reports MSNBC’s Innovation blog.  His plans include creating a humanoid avatar body within five to seven years, transplanting a human brain into a new “body B” in 10 to 15 years, digitally uploading a human brain’s consciousness in 20 to 25 years, and moving human consciousness to hologram-like bodies in 30 to 35 years.

He claimed support from the Russian Federation’s Ministry of Education and Science, as well as from actor Steven Seagal, to create a research center capable of giving humans life-extending bodies.

 Power from the people

Scientists at Joseph Fourier University of Grenoble have built a biofuel cell that uses glucose and oxygen at concentrations found in the body to generate electricity.

They are the first group in the world to demonstrate their device working while implanted in a living animal. Within a decade or two, biofuel cells may be used to power a range of medical implants, from sensors and drug delivery devices to entire artificial organs.

Glucose and oxygen are both freely available in the human body, so hypothetically, a biofuel cell could keep working indefinitely.

The electrodes are made by compressing a paste of carbon nanotubes mixed with glucose oxidase for one electrode, and glucose and polyphenol oxidase for the other. The electrodes have a platinum wire inserted in them to carry the current to the circuit. Then the electrodes are wrapped in a special material that prevents any nanotubes or enzymes from escaping into the body.

Finally, the whole package is wrapped in a mesh that protects the electrodes from the body’s immune system, while still allowing the free flow of glucose and oxygen to the electrodes. The whole package is then implanted in the rat.

 New media bypassing TV channels, book publishers

YouTube has been striking deals with several content providers (such as Warner Bros., BermanBraun, FremantleMedia and Shine Group) to add about two dozen channels offering original shows, with TV-style entertainment and news, says Hollywood Reporter. Sources indicated Google would spend as much as $150 million on the effort.

Meanwhile, Amazon is signing up new authors, bypassing book publishers and agents, says The New York Times.  Amazon will publish 122 books this fall in both physical and e-book form, and has signed its first deal with the self-help author Tim Ferriss.

 Three new developments in creating better graphene

Here are three promising new research developments in enhancing graphene — the thinnest and strongest material known (more than 100 times stronger than steel). Graphene’s properties make it ideal for advances in green electronics, superconductors, super-strong materials, flexible screens and electronic devices, ultra-efficient solar power cells, and computers with 1,000 GHz processors that run on virtually no energy.

Grow it in large sheets

UC Santa Barbara researchers have discovered how to use low-pressure chemical vapor deposition (LPCVD) to grow graphene in large, uniform sheets while preserving its high conductivity (which allows for high current).

How: Disintegrate methane at a specific high temperature to build uniform layers of carbon (as graphene) on a pretreated copper substrate.

Uses: “Intel has a keen interest in graphene due to the many possibilities it holds for the next generation of energy-efficient computing, but there are many roadblocks along the way,” says Intel Fellow Shekhar Borkar. “The scalable synthesis technique developed by Professor Banerjee’s group at UCSB is an important step forward.”

Crumple it up

Researchers at Northwestern University have developed a new form of graphene inspired by a trash can full of crumpled-up papers.

How: Create freely suspended water droplets containing graphene-based sheets, then use a carrier gas to blow the aerosol droplets through a furnace. As the water quickly evaporates, capillary forces compress the thin sheets into near-spherical particles whose reduced surface area weakens the van der Waals attraction that would otherwise make them stick together.

Uses: The rigid crumpled graphene balls are remarkably stable against mechanical deformation — ideal for energy storage and energy conversion.

Simulate it

Helmholtz-Zentrum Dresden-Rossendorf (HZDR) has developed a material with physical properties similar to graphene’s, but one that can be doped with foreign atoms. It resembles the iron pnictides used in high-temperature superconductors.

How: Combine strontium, manganese, and bismuth to form SrMnBi2.

Uses: New magnets and superconductors.

 Scientists uncover previously hidden network that regulates cancer genes

Researchers at Columbia University Medical Center (CUMC) and two other institutions have uncovered a vast gene regulatory network (the “mPR network”) in mammalian cells that could explain why there is such genetic variability in cancer.

The researchers say the findings could broaden inquiry into how tumors develop and grow and who is at risk for cancer, and could even suggest ways to inactivate key cancer molecules.

“The discovery of this regulatory network fills in a missing piece in the puzzle of cell regulation and allows us to identify genes never before associated with a particular type of tumor or disease,” said Andrea Califano, professor of systems biology and director of the Columbia Initiative in Systems Biology.

For decades, scientists have thought that the primary role of messenger RNA (mRNA) is to shuttle information from the DNA to the ribosomes, the sites of protein synthesis. The new studies suggest that the mRNA of one gene can control, and be controlled by, the mRNA of other genes via a large pool of microRNA molecules, with dozens to hundreds of genes working together in complex self-regulating subnetworks.

For example, deletions of mRNA regulators of the phosphatase and tensin homolog (PTEN) gene, a major tumor suppressor, appear to be as damaging as mutations of the gene itself in several types of cancer, the studies show.

mPR network

The newly identified regulatory network (called the mPR network by the CUMC investigators) allows mRNAs to communicate through small bits of RNA called microRNAs. Researchers first realized about a decade ago that microRNAs, by binding to complementary genetic sequences on mRNAs, can prevent those mRNAs from making proteins. Turning this concept on end, the new studies reveal that mRNAs actually use microRNAs to influence the expression of other genes.

When two genes share a set of microRNA regulators, changes in expression of one gene affect the other. If, for instance, one of those genes is highly expressed, the increase in its mRNA molecules will “sponge up” more of the available microRNAs.
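The sponge effect described above can be illustrated with a deliberately simplified toy model: two mRNAs compete for one fixed pool of shared microRNA, split in proportion to their abundance. All the quantities and the proportional-binding rule here are invented for illustration, not taken from the study:

```python
# Toy "sponge" model: two mRNAs compete for one shared pool of microRNA.
# Quantities are arbitrary illustrative units, not measured values.

MIRNA_POOL = 100.0   # total shared microRNA copies


def free_mrna(expr_a, expr_b):
    """Split the microRNA pool in proportion to each mRNA's abundance,
    then return how much of each mRNA escapes repression."""
    total = expr_a + expr_b
    bound_a = MIRNA_POOL * expr_a / total
    bound_b = MIRNA_POOL * expr_b / total
    return max(expr_a - bound_a, 0.0), max(expr_b - bound_b, 0.0)


# Baseline: both genes expressed equally.
a0, b0 = free_mrna(100.0, 100.0)

# Gene A is overexpressed: it "sponges up" more of the microRNA,
# leaving more of gene B's mRNA free to make protein.
a1, b1 = free_mrna(300.0, 100.0)

print(b0, b1)   # free B mRNA rises from 50.0 to 75.0
```

Even in this crude sketch, tripling gene A's expression raises gene B's unrepressed mRNA, which is the qualitative behavior the CUMC researchers describe.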

The researchers said this is the first time the range and relevance of this kind of interaction has been characterized.

In the CUMC study, Pavel Sumazin, research scientist in systems biology, and colleagues analyzed glioblastoma mRNA and microRNA expression data from the Cancer Genome Atlas, a public database, uncovering a regulatory layer comprising more than 248,000 microRNA-mediated interactions.

The researchers found that the tumor suppressor gene PTEN is part of a subnetwork of more than 500 genes. Of these genes, 13 are frequently deleted in glioblastoma and seem to work together through microRNAs to stop PTEN activity — achieving the same result as if the tumors had inactivating mutations or deletions of PTEN itself.

The finding explains, at least in part, why patients with glioblastoma do not all share the same genetic profile. In about 80 percent of patients, the tumor has a deletion of PTEN. In most of the remaining 20 percent, PTEN is intact, but the gene is not expressed — an observation that had confounded researchers.

“This suggested that there must be some other mechanism by which PTEN can be completely suppressed,” said Sumazin. “Now we know that there are at least 13 other genes — none of which had ever been implicated in cancer — that can ‘gang up’ on PTEN to suppress its activity, with different combinations of deletions in different patients.

“The network helps explain the so-called dark matter of the genome. For years, scientists have been cataloging all the genes involved in particular diseases. But if you add up all the genetic and epigenetic alterations that have been identified, even with high-resolution studies, there are still many cases where you cannot explain why a person has the disease. Now we have a new tool for explaining these genetic variations, for gaining a better understanding of the disease and, ultimately, for finding new treatments.”

Wearable projection system turns any surface into a multitouch interface

OmniTouch, a wearable projection system developed by researchers at Microsoft Research and Carnegie Mellon University, lets you turn pads of paper, walls, or even your own hands, arms, and legs into graphical, interactive surfaces.

OmniTouch uses a depth-sensing camera, similar to the Microsoft Kinect, to track your fingers on everyday surfaces. You control interactive applications by tapping or dragging your fingers. The projector can superimpose keyboards, keypads, and other controls onto any surface, automatically adjusting for the surface’s shape and orientation to minimize distortion of the projected images.

You can use the palm of your hand as a phone keypad, or as a tablet for jotting down brief notes. Maps projected onto a wall can be panned and zoomed with the same finger motions that work with a conventional multitouch screen.

“It’s conceivable that anything you can do on today’s mobile devices, you will be able to do on your hand using OmniTouch,” said Chris Harrison, a Ph.D. student in Carnegie Mellon’s Human-Computer Interaction Institute.

The OmniTouch device includes a short-range depth camera and laser pico-projector and is mounted on your shoulder. But Harrison said the device ultimately could be the size of a deck of cards, or even a matchbox, so that it could fit in a pocket, be easily wearable, or be integrated into future handheld devices.

Harrison previously worked with Microsoft Research to develop Skinput, a technology that used bioacoustic sensors to detect finger taps on a person’s hands or forearm to control smartphones or other compact computing devices.

Harrison was an intern at Microsoft Research when he developed OmniTouch in collaboration with Microsoft Research’s Hrvoje Benko and Andrew D. Wilson. Harrison will describe the technology on Wednesday (Oct. 19) at the Association for Computing Machinery’s Symposium on User Interface Software and Technology (UIST) in Santa Barbara, Calif.

 Researchers do precise gene therapy without a needle

L. James Lee and his colleagues at Ohio State University have successfully inserted specific doses of an anti-cancer gene into individual leukemia cells to kill them without a needle. The technique uses electricity to “shoot” bits of therapeutic biomolecules through a tiny channel and into a cell in a fraction of a second.

They have dubbed the method “nanochannel electroporation” (NEP).

“NEP allows us to investigate how drugs and other biomolecules affect cell biology and genetic pathways at a level not achievable by any existing techniques,” said Lee, the Helen C. Kurtz Professor of Chemical and Biomolecular Engineering and director of the NSF Nanoscale Science and Engineering Center for Affordable Nanoengineering of Polymeric Biomedical Devices at Ohio State.

There have long been ways to insert random amounts of biomaterial into bulk quantities of cells for gene therapy. And fine needles can inject specific amounts of material into large cells. But most human cells are too small for even the smallest needles to be of any use.

NEP gets around the problem by suspending a cell inside an electronic device with a reservoir of therapeutic agent nearby. Electrical pulses push the agent out of the reservoir, through a nanoscale channel in the device, through the cell membrane, and into the cell. Researchers control the dose by adjusting the number of pulses and the width of the channel.

They constructed prototype devices using polymer stamps, and used individual strands of DNA as templates for the nanometer-sized channels.

Lee invented the technique for uncoiling strands of DNA and forming them into precise patterns so that they could work as wires in biologically based electronics and medical devices. But for this study, gold-coated DNA strands were stretched between two reservoirs and then etched away, in order to leave behind a nano-channel of precise dimensions connecting the reservoirs within the polymeric device.

Electrodes in the channels turn the device into a tiny circuit, and electrical pulses of a few hundred volts travel from the reservoir with the therapeutic agent through the nano-channel and into a second reservoir with the cell. This creates a strong electric field at the outlet of the nano-channel, which interacts with the cell’s natural electric charge to force open a hole in the cell membrane, one large enough to deliver the agent, but small enough not to kill the cell.

In tests, they were able to insert agents into cells in as little as a few milliseconds.

First, they tagged bits of synthetic DNA with fluorescent molecules and used NEP to insert them into human immune cells. After a single 5-millisecond pulse, they began to see spots of fluorescence scattered within the cells. They tested different pulse lengths up to 60 milliseconds, which filled the cells with fluorescence.

To test whether NEP could deliver active therapeutic agents, they inserted bits of therapeutic RNA into leukemia cells. Pulses as short as 5 milliseconds delivered enough RNA to kill some of the cells. Longer pulses — approaching 10 milliseconds — killed almost all of them. They also inserted some harmless RNA into other leukemia cells for comparison, and those cells lived.

At the moment, the process is best suited for laboratory research, Lee said, because it only works on one cell or several cells at a time. But he and his team are working on ways to inject many cells simultaneously. They are currently developing a mechanical cell-loading system that would inject up to 100,000 cells at once, which would potentially make clinical diagnostics and treatments possible.

“We hope that NEP could eventually become a tool for early cancer detection and treatment, for instance, inserting precise amounts of genes or proteins into stem cells or immune cells to guide their differentiation and changes, without the safety concerns caused by overdosing, and then placing the cells back in the body for cell-based therapy,” Lee added.

He sees potential applications for diagnosing and treating leukemia, lung cancer, and other tumors. He’s working with researchers at Ohio State’s Comprehensive Cancer Center to explore those possibilities.

 Psychopathic killers: computerized text analysis uncovers the word patterns of a predator

The words of psychopathic murderers match their personalities, which reflect selfishness, detachment from their crimes and emotional flatness, says Jeff Hancock, Cornell professor of computing and information science, and colleagues at the University of British Columbia in the journal Legal and Criminological Psychology.

Computerized text analysis shows that psychopathic killers make identifiable word choices, beyond conscious control, when talking about their crimes. This research could lead to new tools for diagnosis and treatment, and has implications for law enforcement and social media.

Hancock and his colleagues analyzed stories told by 14 psychopathic male murderers held in Canadian prisons and compared them with stories told by 38 convicted murderers who were not diagnosed as psychopathic. Each subject was asked to describe his crime in detail. The stories were taped, transcribed, and subjected to computer analysis.

Clues: conjunctions, physical needs, past tense

Psychopaths used more conjunctions like “because,” “since” or “so that,” implying that the crime “had to be done” to obtain a particular goal. They used twice as many words relating to physical needs, such as food, sex or money, while non-psychopaths used more words about social needs, including family, religion and spirituality. Unveiling their predatory nature, the psychopaths’ own descriptions often included details of what they had to eat on the day of their crime.

Psychopaths were more likely to use the past tense, suggesting a detachment from their crimes, say the researchers. They tended to be less fluent in their speech, using more “ums” and “uhs.” The exact reason for this is not clear, but the researchers speculate that the psychopath is trying harder to make a positive impression, needing to use more mental effort to frame the story.

Two text analysis tools were used to examine the crime narratives. Psychopathy was determined using the Psychopathy Checklist-Revised (PCL-R). The Wmatrix linguistic analysis tool was used to examine parts of speech and semantic content while the Dictionary of Affect and Language (DAL) tool was used to examine the emotional characteristics of the narratives.
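The core of this kind of analysis — counting words by category across a transcript — is easy to sketch. The word lists below are illustrative stand-ins chosen for this example; the actual study used the Wmatrix and DAL tools, not these hand-picked sets:

```python
import re
from collections import Counter

# Illustrative stand-ins for the study's categories -- NOT the real
# Wmatrix/DAL lexicons, just small example word sets.
SUBORDINATORS = {"because", "since", "so"}      # cause-and-effect framing
PHYSICAL_NEEDS = {"food", "eat", "money", "sex"}
SOCIAL_NEEDS = {"family", "church", "religion"}
DISFLUENCIES = {"um", "uh"}


def narrative_profile(text):
    """Count category hits in a crime narrative's transcript."""
    words = Counter(re.findall(r"[a-z']+", text.lower()))
    return {
        "cause_words": sum(words[w] for w in SUBORDINATORS),
        "physical": sum(words[w] for w in PHYSICAL_NEEDS),
        "social": sum(words[w] for w in SOCIAL_NEEDS),
        "disfluencies": sum(words[w] for w in DISFLUENCIES),
    }


sample = "Um, I did it because I needed the money, so, uh, I took it."
print(narrative_profile(sample))
```

A real classifier would compare such profiles between groups statistically; this sketch only shows the feature-extraction step.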

“Previous work has looked at how psychopaths use language,” Hancock said. “Our paper is the first to show that you can use automated tools to detect the distinct speech patterns of psychopaths.” This can be valuable to clinical psychologists, he said, because the approach to treatment of psychopaths can be very different.

 The mechanism that gives shape to life

Researchers at EPFL (Ecole Polytechnique Fédérale de Lausanne) and the University of Geneva (Unige) have solved the mystery of how genes determine the shape that many animals take.

During the development of an embryo, everything happens at a specific moment. In about 48 hours, it will grow from the top to the bottom, one slice at a time — scientists call this the embryo’s segmentation. “We’re made up of thirty-odd horizontal slices,” explains Denis Duboule, a professor at EPFL and Unige. “These slices correspond more or less to the number of vertebrae we have.”

Every hour and a half, a new segment is built. The genes corresponding to the cervical vertebrae, the thoracic vertebrae, the lumbar vertebrae and the tailbone become activated at exactly the right moment one after another.

DNA acts like a mechanical clock

Very specific genes, known as “Hox,” responsible for the formation of limbs and the spinal column, are involved in this process. “Hox genes are situated one exactly after the other on the DNA strand, in four groups. First the neck, then the thorax, then the lumbar, and so on,” explains Duboule.

The process is astonishingly simple. In the embryo’s first moments, the Hox genes are dormant, packaged like a spool of wound yarn on the DNA. When the time is right, the strand begins to unwind. When the embryo begins to form the upper levels, the genes encoding the formation of cervical vertebrae come off the spool and become activated. Then it is the thoracic vertebrae’s turn, and so on down to the tailbone. The DNA strand acts a bit like an old-fashioned computer punchcard, delivering specific instructions as it progressively goes through the machine.

“A new gene comes out of the spool every 90 minutes, which corresponds to the time needed for a new layer of the embryo to be built,” explains Duboule. “It takes two days for the strand to completely unwind; this is the same time that’s needed for all the layers of the embryo to be completed.” This system is the first “mechanical” clock ever discovered in genetics. And it explains why the system is so remarkably precise.
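Duboule's timing figures are self-consistent: one segment every 90 minutes over two days yields the "thirty-odd" slices mentioned earlier, as a one-line calculation confirms:

```python
minutes_per_segment = 90   # one Hox gene activated every 90 minutes
total_hours = 48           # two days for the strand to fully unwind

segments = total_hours * 60 / minutes_per_segment
print(segments)   # 32.0 -- the "thirty-odd" slices of the vertebrate body
```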

Player-piano music

The structure of all animals — the distribution of their vertebrae, limbs and other appendages along their bodies — is programmed like a sheet of player-piano music by the sequence of Hox genes along the DNA strand.

The sinuous body of the snake is a perfect illustration. A few years ago, Duboule discovered in these animals a defect in the Hox gene that normally stops the vertebrae-making process. “Now we know what’s happening. The process doesn’t stop, and the snake embryo just keeps on making vertebrae, all identical, until the process just runs out of steam.”

The Hox clock is a demonstration of the extraordinary complexity of evolution. One notable property of the mechanism is its extreme stability, explains Duboule. “Circadian or menstrual clocks involve complex chemistry. They can thus adapt to changing contexts, but in a general sense are fairly imprecise. The mechanism that we have discovered must be infinitely more stable and precise. Even the smallest change would end up leading to the emergence of a new species.”

Robot biologist solves complex problem from scratch

An interdisciplinary team of scientists has taken a major step toward automating the scientific process with the Automated Biology Explorer (ABE) system, which can analyze raw experimental data from a biological system and derive the basic mathematical equations that describe the way the system operates.

According to the researchers at Vanderbilt University, Cornell University and CFD Research Corporation, it is one of the most complex scientific modeling problems that a computer has solved completely from scratch.

The work was a collaboration between John P. Wikswo, the Gordon A. Cain University Professor at Vanderbilt, Michael Schmidt and Hod Lipson at the Creative Machines Lab at Cornell University and Jerry Jenkins and Ravishankar Vallabhajosyula at CFDRC in Huntsville, Ala.

ABE’s “brain” is a unique piece of software called Eureqa, developed at Cornell in 2009. One of Eureqa’s initial achievements was identifying the basic laws of motion by analyzing the motion of a double pendulum. What took Sir Isaac Newton years to discover, Eureqa did in a few hours when running on a personal computer.

Software derives biochemical equations automatically

The biological system that the researchers used to test ABE is glycolysis, the primary process that produces energy in a living cell. They focused on how yeast cells control glycolytic oscillations because it is one of the most extensively studied biological control systems. ABE derived the equations a priori. The only thing the software knew in advance was addition, subtraction, multiplication and division.
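Eureqa itself uses evolutionary search; the far cruder random-search sketch below only illustrates the core idea of "knowing only the four operations" — propose candidate formulas built from + − × ÷, score each against the data, and keep the best. The hidden law, the search budget, and the expression generator are all invented for this example:

```python
import random


def random_expr(depth=0):
    """A random formula over x built only from + - * / and small constants."""
    if depth >= 2 or random.random() < 0.4:
        return random.choice(["x", str(random.randint(1, 5))])
    op = random.choice("+-*/")
    return f"({random_expr(depth + 1)} {op} {random_expr(depth + 1)})"


def error(expr, data):
    """Sum of squared errors of a candidate formula over the observations."""
    try:
        return sum((eval(expr, {"x": x}) - y) ** 2 for x, y in data)
    except ZeroDivisionError:
        return float("inf")


# "Experimental data" from a hidden law, y = x**2 + 2*x, that the search
# must rediscover knowing only the four arithmetic operations.
data = [(x, x * x + 2 * x) for x in range(1, 8)]

random.seed(0)
best, best_err = None, float("inf")
for _ in range(50_000):
    expr = random_expr()
    err = error(expr, data)
    if err < best_err:
        best, best_err = expr, err

print(best, best_err)   # e.g. an expression equivalent to x*(x+2), error 0.0
```

Real symbolic-regression systems replace the blind random proposals with evolutionary operators (crossover and mutation of expression trees) and penalize formula complexity, which is what lets them scale to problems like the double pendulum.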

The ability to generate mathematical equations from scratch is what sets ABE apart from Adam, the robot scientist developed by Ross King and his colleagues at the University of Wales at Aberystwyth. Adam runs yeast genetics experiments and made international headlines two years ago by making a novel scientific discovery without direct human input. King fed Adam with a model of yeast metabolism and a database of genes and proteins involved in metabolism in other species. He also linked the computer to a remote-controlled genetics laboratory. This allowed the computer to generate hypotheses, then design and conduct actual experiments to test them.

To give ABE the ability to run experiments like Adam, the researchers are currently developing “laboratory-on-a-chip” technology that can be controlled by Eureqa. This will allow ABE to design and perform a wide variety of basic biology experiments. Their initial effort is focused on developing a microfluidics device that can test cell metabolism.

Why biology needs automation

“Biology is more complex than astronomy or physics or chemistry,” maintained Wikswo. “In fact, it may be too complex for the human brain to comprehend.”

This complexity stems from the fact that biological processes range in size from the dimensions of an atom to those of a whale and in time from a billionth of a second to billions of seconds. Biological processes also have a tremendous dynamic range: for example, the human eye can detect a star at night that is one billionth as bright as objects viewed on a sunny day.

Then there is the matter of sheer numbers. A cell expresses between 10,000 and 15,000 proteins at any one time. Proteins perform all the basic tasks in the cell, including producing energy, maintaining cell structures, regulating these processes and serving as signals to other cells. At any one time, there can be anywhere from three to 10 million copies of a given protein in the cell.

According to Wikswo, the crowning source of complication is that processes at all these different scales interact with one another: “These multi-scale interactions produce emergent phenomena, including life and consciousness.”

Looked at from a mathematical point of view, creating an accurate model of a single mammalian cell may require generating and then solving between 100,000 and one million equations.

Balanced against this complexity is the capability of the human brain. The biophysicist cites research that has found that the human brain can only process seven pieces of data at a time and quotes a 1938 assessment of brain research by Emerson Pugh: “If the human brain were so simple that we could understand it, we would be so simple that we couldn’t.”

That is where robot scientists like ABE and Adam come in, Wikswo argues. They have the potential for both generating and analyzing the tremendous amounts of data required to really understand how biological systems work and predict how they will react to different conditions.


“We set out to work with robots, but our path took us, through many twists and turns, to automating science,” said Hod Lipson at the Creative Machines Lab at Cornell University.

His starting point was an attempt to breed robot control systems using an approach modeled on natural selection, instead of having a programmer code in all the steps. Individual programming had largely broken down as robots became more complex because the robots didn’t perform correctly without extensive and time-consuming debugging.

Lipson used genetic programming for the breeding process. It involves starting with the basic components of a robot, randomly combining them in millions of different configurations and then testing how well they perform against a specific criterion, such as how fast they can move. The designs that work best are then randomly recombined and tested again. These steps are repeated until the process yields an acceptable design. However, this process also proved to be too slow.
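The generate-test-recombine loop described above can be sketched generically. Everything in this sketch is a toy stand-in (the “individuals” are bit strings rather than robot designs, and the fitness function, mutation rate and population size are illustrative choices, not Lipson’s actual parameters), but the structure of the evolutionary search is the same:

```python
import random

def evolve(random_individual, crossover, mutate, fitness,
           pop_size=100, generations=50):
    """Generic evolutionary loop: score the population, keep the fittest
    half unchanged, and fill the rest with mutated recombinations."""
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:pop_size // 2]          # elitism: keep the best half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

# Toy problem ("OneMax"): evolve a 20-bit string toward all 1s.
best = evolve(
    random_individual=lambda: [random.randint(0, 1) for _ in range(20)],
    crossover=lambda a, b: a[:10] + b[10:],       # single-point crossover
    mutate=lambda g: [bit ^ (random.random() < 0.05) for bit in g],
    fitness=sum,
)
```

Because the parents are carried over unmutated, the best fitness in the population can never decrease from one generation to the next, which is why this simple truncation scheme reliably converges on the toy problem.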

So Lipson combined the breeding and the debugging processes in an approach he calls co-evolution. He started with a crude simulator, used it to design a robot, tested the design, and studied how it failed. He used this information to improve the simulator so that it could predict the failure. Then he used the improved simulator to design another robot, tested the design, watched how it failed and improved the simulator once again. Repeating these steps of co-evolving simulators and robots produced increasingly competent designs, he found.
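The alternating structure of co-evolution can be reduced to a deliberately tiny numeric sketch. Here the “simulator” is just one mis-estimated constant that gets corrected each time a design fails its reality test; the real process co-evolves whole simulators and robot designs, but the feedback loop has the same shape (all names and numbers below are illustrative):

```python
TRUE_K = 3.7   # the hidden "real-world" behavior the simulator must learn

def co_evolve(rounds=20):
    """Alternate between designing against the current simulator and
    folding each real-world failure back into the simulator."""
    sim_k = 1.0                      # crude initial simulator
    design = sim_k
    for _ in range(rounds):
        design = sim_k               # best design under the current simulator
        error = TRUE_K - design      # how that design fails in "reality"
        if abs(error) < 1e-3:        # design finally survives its test
            break
        sim_k += 0.5 * error         # improve the simulator from the failure
    return design

final_design = co_evolve()
```

Each round the discrepancy between simulation and reality shrinks, so the designs become progressively more competent without the simulator ever having been correct at the start.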

After proving that co-evolution works for robot design, Lipson realized that it could be generalized to solve other problems. Specifically, he adapted it for the mathematical process of curve fitting, more generally called symbolic regression. This involves deriving equations that can describe various data sets. Lipson’s software package, Eureqa, proved to be extremely successful. As the word got around, he began getting requests for copies of the program and decided to make it into a citizen science project, available for anyone to download on the Internet.
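Symbolic regression differs from ordinary curve fitting in that it searches over the space of formulas themselves, not just the coefficients of a fixed formula. A deliberately naive sketch (pure random search over tiny expression trees; Eureqa evolves expressions far more efficiently, and none of this reflects its internals) illustrates the idea:

```python
import random

# Candidate building blocks for expressions: binary operators, the input
# variable 'x', and small integer constants.
OPS = [('+', lambda a, b: a + b),
       ('-', lambda a, b: a - b),
       ('*', lambda a, b: a * b)]

def random_expr(depth=2):
    """Build a random expression tree of bounded depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', random.randint(1, 3)])
    op = random.choice(OPS)
    return (op, random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if expr == 'x':
        return x
    if isinstance(expr, int):
        return expr
    (_name, fn), left, right = expr
    return fn(evaluate(left, x), evaluate(right, x))

def fit(xs, ys, tries=20000):
    """Random search for the expression with the lowest squared error."""
    best, best_err = None, float('inf')
    for _ in range(tries):
        e = random_expr()
        err = sum((evaluate(e, x) - y) ** 2 for x, y in zip(xs, ys))
        if err < best_err:
            best, best_err = e, err
    return best, best_err

xs = [0, 1, 2, 3, 4]
ys = [x * x + 1 for x in xs]          # hidden law: y = x^2 + 1
expr, err = fit(xs, ys)
```

On this toy data the search readily recovers an expression equivalent to x² + 1; the hard part that Eureqa addresses is doing this efficiently on noisy experimental data with far larger expression spaces.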

“Today, it has more than 20,000 users. People are using it to solve problems in a wide variety of areas including traffic, business and neighborhood problems,” Lipson said. Wikswo says this approach will give scientists the ability to control biological systems even if they can’t completely explain how they work, and will allow for developing significantly improved drugs and other therapies.
