Saturday, July 30, 2011

Cold Fusion #1 Claims NASA Chief

by Hank Mills 

Dennis Bushnell is Chief Scientist at NASA's Langley Research Center in Hampton, Virginia. He is also an inventor and author, and has been a consultant to countless government and military agencies, including the DOD, the Air Force, DARPA, and the NRC. Recently, he was interviewed on an EV World podcast. 

Dennis Bushnell, a chief NASA scientist, has come out in support of Andrea Rossi's E-Cat technology, but denies that any type of nuclear fusion is taking place, saying it is probably beta decay per the Widom-Larsen theory. Repackaging the terminology to avoid embarrassment will not erase over twenty years of suppression and the reality of cold fusion! 

During the show, he addressed what he called "Low Energy Nuclear Reactions" as being the most interesting and promising alternative energy technology being developed. In fact, it was first on his list, ahead of salt water agriculture, cyanobacteria, energy conservation, geothermal power, nano-plastic solar panels, solar thermal concentrators, and high altitude wind power. 

"The most interesting and promising [technology sector] at this point ... [is] low energy nuclear reactions."

Bushnell went on to say that LENR technology could potentially solve all of our energy and climate problems. He stated the technology could be used for any application, including to power rockets for space travel. It is quite refreshing to hear such positive statements, in support of cold fusion, from a mainstream, credible, and respected scientist! 

Actually, I cannot think of another scientist, off the top of my head, whom I would rather have seen make a statement in support of LENR (cold fusion). His extensive scientific background, career history, and status as a chief NASA scientist make his supportive statements very significant. Hopefully, they will inspire other scientists to take LENR research seriously! I would like to hear a naysayer like Bob Park (who has attacked cold fusion researchers for 20 years) try to criticize him for his comments.

During the interview, Bushnell specifically mentioned Andrea Rossi's E-Cat (Energy Catalyzer) technology, and seemed very supportive of it. He reviewed the tests that have been performed and the large amount of excess heat produced. At one point he made a remark scientists across the world should notice... 

"I think we are almost over the "we do not understand it" problem. I think we are almost over the "this does not produce anything useful" problem. I think this will go forward fairly rapidly now. If it does, this is capable of, by itself, completely changing geo-economics, geo-politics, and solving climate issues."

Despite his positive statements about LENR, he also made a few statements that indicate his inability to admit that nuclear fusion at low temperatures could be a reality. He stated that all of the so-called "cold fusion" experiments performed over the last twenty years did not produce fusion reactions. His position is that they produced energy via the process described by the Widom-Larsen theory, which does not involve fusion at all, but only "beta decay."

They Dare Not Call It Fusion

Fusion is the process in which two atoms collide, merge or "fuse" together, and form another element. During the process, a large amount of energy is released. The problem is that achieving fusion is difficult, due to electrostatic repulsion. This electrostatic wall that prevents fusion reactions is called the "Coulomb barrier." The star at the center of our solar system produces fusion reactions using millions of degrees of heat. With enough heat, the atoms smash into each other with so much force that the Coulomb barrier can be overcome. This is what mainstream scientists call "hot fusion."
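To put rough numbers on that wall, here is a back-of-envelope sketch (my own illustration, not from the article) of the electrostatic energy two protons must overcome to reach nuclear distances, and the naive temperature that energy would correspond to:

```python
# Rough Coulomb-barrier estimate for two protons brought ~1 femtometer apart.
# Constants in SI units.
K = 8.9875e9           # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602e-19   # elementary charge, C
K_B = 1.381e-23        # Boltzmann constant, J/K
R_NUCLEAR = 1e-15      # ~1 fm, a typical nuclear separation, m

barrier_joules = K * E_CHARGE**2 / R_NUCLEAR
barrier_mev = barrier_joules / 1.602e-13       # 1 MeV = 1.602e-13 J
naive_temperature = barrier_joules / K_B       # T at which kT ~ barrier

print(f"Barrier: {barrier_mev:.2f} MeV")       # ~1.4 MeV
print(f"Naive temperature: {naive_temperature:.1e} K")
```

The naive answer is on the order of ten billion kelvin, yet the Sun's core is only around fifteen million kelvin: quantum tunneling and the high-energy tail of the thermal distribution do the rest, which is exactly why every cold fusion theory centers on some mechanism for getting past the barrier without brute thermal force.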

Cold fusion is a phenomenon in which atoms fuse together and release energy at much lower temperatures. Instead of millions of degrees, the reactions can take place at temperatures as low as a few hundred degrees. Somehow, in cold fusion setups such as Andrea Rossi's, the Coulomb barrier is apparently being penetrated. There are many ideas and theories about the possible mechanisms that allow this barrier to be broken, allowing fusion reactions to happen at such low energy levels. 

Many of the theories have similar themes. Quite a few involve a proton from a hydrogen atom being made "invisible," shielded, or rendered electrostatically neutral by an electron. In other theories, hydrogen atoms are shrunken and turned into mini-atoms or "virtual neutrons." Basically, in these theories the protons and their electrons (in some kind of altered form) do not experience the full repulsion of the Coulomb barrier, or are able to quantum tunnel through it. After they penetrate the barrier, a transmutation occurs in the metal (the atom gains a proton) and a large amount of energy is released. The end result is nuclear fusion at low temperatures. 

The Widom-Larsen theory is just another variation of the above. In the theory, an exotic type of electron called a "heavy surface plasmon polariton" combines with a proton to form an "ultra low momentum neutron." This neutron can then penetrate the Coulomb barrier of an atom of nickel (or another metal) to produce transmutations and release energy. Its proponents claim that this theory does not violate any "laws" of physics, and is not nuclear fusion. 

However, I propose that Widom-Larsen describes a form of nuclear fusion, just like the other theories. The only difference is that it uses a few more fancy names for exotic subatomic particles. Just like in many of the other theories, the following sequence of events takes place according to the Widom-Larsen theory.

- A shrunken or mini-hydrogen atom, a virtual neutron, or a proton shielded by an electron sneaks past the Coulomb barrier of another atom.

- A transmutation into a heavier element can take place.

- A large release of energy takes place.
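One concrete number is worth adding here (my own check, not from the article): the electron-plus-proton step in the Widom-Larsen picture is energetically uphill, which is precisely why the theory has to invoke "heavy" electrons. The rest masses show the deficit:

```python
# Rest masses in MeV/c^2 (CODATA values, rounded).
M_NEUTRON = 939.565
M_PROTON = 938.272
M_ELECTRON = 0.511

# A free electron plus a free proton are lighter than a neutron, so forming
# an "ultra low momentum neutron" requires this much energy from somewhere:
deficit_mev = M_NEUTRON - M_PROTON - M_ELECTRON
print(f"Energy deficit: {deficit_mev:.3f} MeV")   # ~0.782 MeV
```

That roughly 0.78 MeV has to be supplied by the collective electromagnetic fields the theory posits; whatever one calls the resulting capture-and-transmutation chain, the bookkeeping of mass and energy is the same as in the other low-temperature fusion proposals.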

This is indeed a fusion reaction, but the Widom-Larsen proponents still try to argue otherwise. They claim that true "nuclear fusion" can only occur if a proton is pushed through the Coulomb barrier while its full repulsion is felt. Anything else, they claim, is a "neutron capture" event. 

The first untenable thing about that position is the claim that an "ultra low momentum neutron" is composed of a proton and an electron. If it is composed of a proton and an electron (just pretending to be a neutron), how can absorbing it be a neutron capture event? For example, if I catch a dog dressed up like a cat, I have really caught a dog, not a cat! Yet they want you to believe that a dog dressed up like a cat really is a cat! 

The second untenable thing about their assertion is the claim that fusion cannot be taking place unless the full repulsion of the Coulomb barrier is felt. They go to the dictionary and produce the following definitions.

- Neutron capture involves a single particle with no electric charge, such as a neutron, entering a nucleus.

- Nuclear fusion involves two like-charged nuclei overcoming the electromagnetic force between them (the Coulomb barrier).

Basically, they claim that if you find a way to make a nucleus or proton sneak into another atom (without using lots of energy to overwhelm the Coulomb barrier), you have cheated, and you have not produced nuclear fusion. 

For example, did you "climb" a wall if you used a ladder? The "Widom Larsen" supporters say you did not "climb" the wall unless you scaled it by hand! Using a ladder was cheating! According to them you did not climb the wall, but only "went over it." 

They want to call your success something less than what it was, because you found a smarter/faster way to do it! 

However, regardless of how you got over the wall, the end result is the same: you are on the other side! The same is true of cold fusion and LENR (which they claim is only neutron capture and not fusion). The fact is, fusion happened regardless of the method by which the proton/electron/neutron got into the atom's nucleus!

In reality, there may be some sort of Widom-Larsen-like phenomenon taking place in Andrea Rossi's cold fusion technology, and in others. However, I doubt that the entire theory is correct. To be blunt, I doubt any of the current cold fusion theories are 100% correct, but fusion does seem to be occurring! 

Any theory claiming that taking an atom, putting all or part of it into the nucleus of another atom, transmuting the second atom into another element, and releasing energy in the process is anything other than *some* kind of fusion is total nonsense. It defies logic and rationality!

Hiding the Legacy of Suppression

The main reasons I think many mainstream scientists like Bushnell and the Widom-Larsen supporters want to abolish the term "cold fusion," deny the obvious truth that fusion is taking place, and claim that only LENR neutron capture is taking place are as follows.

First, they have to do "something" to make themselves "stand out." By promoting their Widom-Larsen theory, they can avoid the "stigma" of the term cold fusion, make it seem that they have a special theory better than all the others, and claim that they are not producing fusion at low temperatures (which is still heresy in their opinion). 

Secondly, by naming the emerging technology "LENR" or "neutron capture", they can possibly avoid the history of cold fusion being brought up. If the history of cold fusion is brought up, it makes the mainstream scientific community look like a bunch of evil, greedy monsters and ignorant fools. Cold fusion has been suppressed for over 20 years, and they do not want the history of that suppression being a focus of the media's attention. They would rather give it a new name and a non-fusion explanation. That way, their dirty little secret can be kept hidden. The population of the world is going to be angry when they realize that we could have had practical cold fusion many years ago, if not for those who put their own interests ahead of the good of mankind! 

Finally, if something as "impossible" as cold fusion becomes a reality, then it raises anew the question of what other technologies the scientific community deemed "impossible" might actually be possible after all. Anti-gravity, free energy, and faster-than-light space travel are all considered by the mainstream to be science fiction, but in the post-cold-fusion world that paradigm would be shattered (for political reasons more than scientific ones). By not using the term "cold fusion," it may be easier for the scientific community to downplay the significance of this technology actually existing when they previously said it was impossible, to claim that they were never involved in suppressing it, and to convince the masses it does not represent damning proof of the failure of mainstream science on something so fundamental.


Global Source and/or more resources ─ Publisher and/or Author and/or Managing Editor: Andres Agostini ─ @Futuretronium at Twitter ─ Futuretronium Book

Inside the Mind of Microsoft’s Chief Futurist (Interview)

If I encountered Craig Mundie on the street, met his kind but humorless gaze and heard that slight southern drawl, I'd guess he was a golf pro—certainly not Microsoft's Chief of the future.

As chief research and strategy officer at Microsoft, Mundie is a living portal of future technology, a focal point between thousands of scattered research projects and the boxes of super-neat products we'll be playing with 5 years, 20 years, maybe 100 years from now. And he's not allowed to even think about anything shipping within the immediate 3 years. I'm pretty sure the guy has his own personal teleporter and hoverboard, but when you sit and talk to him for an hour about his ability to see tomorrow, it's all very matter of fact. So what did we talk about? Quantum computing did come up, as did neural control, retinal implants, Windows-in-the-cloud, multitouch patents and the suspension of disbelief in interface design.

Seeing the Future
Your job is to look not at next year or even the next five years. Is there a specific number of years you're supposed to be focused on?

I tell people it ranges from about 3 to 20. There's no specific year that's the right amount, in part because the things we do in Research start at the physics level and work their way up. The closer you are to fundamental change in the computing ecosystem, the longer that lead time is.

When you say 3 years, you're talking about new UIs and when you say 20 you're talking about what, holographic computing?

Yeah, or quantum computing or new models of computation, completely different ways of writing programs, things where we don't know the answer today, and it would take some considerable time to merge it into the ecosystem.

So how do you organize your thoughts?

I don't try to sort by time. Time is a by-product of the specific task that we seek to solve. Since it became clear that we were going to ultimately have to change the microprocessor architecture, even before we knew exactly what it would evolve to be from the hardware guys, we knew they'd be parallel in nature, that there'd be more serial interconnections, that you'd have a different memory hierarchy. Roughly from the time we started to the time that those things become commonplace in the marketplace will be 10 to 12 years.

Most people don't really realize how long it takes from when you can see the glimmer of things that are big changes in the industry to when they actually show up on store shelves.

Is it hard for you to look at things that far out?

[Chuckles] No, not really. One of the things I think is sort of a gift or a talent that I have, and I think Bill Gates had to some significant degree too, is to assimilate a lot of information from many sources, and your brain tends to work in a way where you integrate it and have an opinion about it. I see all these things and have enough experience that I say, OK, I think that this must be going to happen. Your ability to say exactly when or exactly how isn't all that good, but at least you get a directional statement.

When you look towards the future, there's inevitability of scientific advancement, and then there's your direction, your steering. How do you reconcile those two currents?

There are thousands of people around the world who do research in one form or another. There's a steady flow of ideas that people are advancing. The problem is, each one doesn't typically represent something that will redefine the industry.

So the first problem is to integrate across these things and say, are there some set of these when taken together, the whole is greater than the sum of the parts? The second is to say, by our investment, either in research or development, how can we steer the industry or the consumer towards the use of these things in a novel way? That's where you create differentiated products.

Interface Design and the Suspension of Disbelief
In natural interface and natural interaction, how much is computing power, how much is sociological study and how much is simply Pixar-style animation?

It's a little bit of all of them. When you look at Pixar animation, something you couldn't do in realtime in the past, or if you just look at the video games we have today, the character realism, the scene realism, can be very very good. What that teaches us is that if you have enough compute power, you can make pictures that are almost indistinguishable from real life.

On the other hand, when you're trying to create a computer program that maintains the essence of human-to-human interaction, then many of the historical fields of psychology, people who study human interaction and reasoning, these have to come to the fore. How do you make a model of a person that retains enough essential attributes that people suspend disbelief?

When you go to the movies, what's the goal of the director and the actors? They're trying to get you to suspend disbelief. You know that those aren't real people. You know Starship Enterprise isn't out there flying around—

Don't tell our readers that!

[Grins] Not yet at least. But you suspend disbelief. Today we don't have that when people interact with the computer. We aren't yet trying to get people to think they're someplace else. People explore around the edges of these things with things like Second Life. But there you're really putting a representative of yourself into another world that you know is a make-believe environment. I think that the question is, can we use these tools of cinematography, of human psychology, of high-quality rendering to create an experience that does feel completely natural, to the point that you suspend disbelief—that you're dealing with the machine just as if you were dealing with another person.

So the third component is just raw computing, right?

As computers get more powerful, two things happen. Each component of the interaction model can be refined for better and better realism. Speech becomes more articulate, character images become more lifelike, movements become more natural, recognition of language becomes more complete. Each of those drives a requirement for more computing power.

But it's the union of these that creates the natural suspension of disbelief, something you don't get if you're only dealing with one of these modalities of interaction. You need more and more computing, not only to make each element better, but to integrate across them in better ways.

When it comes to solving problems, when do you not just say, "Let's throw more computing power at it"?

That actually isn't that hard to decide. On any given day, a given amount of computing costs a given amount of money. You can't require a million dollars' worth of computer if you want to put it on everybody's desk. What we're really doing is looking at computer evolutions and the improvements in algorithms, and recognizing that those two things eventually bring new problem classes within the bounds of an acceptable price.

So even within hypothetical research, price is still a factor?

It's absolutely a consideration. We can spend a lot more on the computing to do the research, because we know that while we're finishing research and converting it into a product, there's a continuing reduction in cost. But trying to jockey between those two things and come out at the right place and the right time, that's part of the art form.

Hardware Revolutions, Software Evolutions
Is there some sort of timeline where we're going to shift away from silicon chips?

That's really a question you should ask Intel or AMD or someone else. We aren't trying to do the basic semiconductor research. The closest we get is some of the work we're doing with universities exploring quantum computers, and that's a very long term thing. And even there, a lot of work is with gallium arsenide crystals, not exactly silicon, but a silicon-like material.

Is that the same for flexible screens or non-moving carbon-fiber speakers that work like lightning—are these things you track, but don't research?

They're all things that we track because, in one form or another, they represent the computer, the storage system, the communication system or the human-interaction capabilities. One of the things that Microsoft does at its core is provide an abstraction in the programming models, the tools that allow the introduction of new technologies.

When you talk about this "abstraction," do you mean something like the touch interface in Windows 7, which works with new and different kinds of touchscreens?

Yeah, there are a lot of different ways to make touch happen. The Surface products detect it using cameras. You can have big touch panels that have capacitance overlays or resistive overlays. The TouchSmart that HP makes actually is optical.

The person who writes the touch application just wants to know, "Hey, did he touch it?" He doesn't want to have to write the program six times today and eight times tomorrow for each different way in which someone can detect the touch. What we do is we work with the companies to try to figure out what is the abstraction of this basic notion. What do you have to detect? And what is the right way to represent that to the programmer so they don't have to track every activity, or even worse, know whether it was an optical detector, a capacitive detector or an infrared detector? They just want to know that the guy touched the screen.
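Mundie's abstraction point can be sketched in code. The names below are hypothetical (this is not Microsoft's actual API); the idea is simply that each detection technology reports through one normalized event shape, so the application is written once:

```python
# Hypothetical touch-abstraction sketch: different detection technologies,
# one event shape the application programmer ever sees.
from dataclasses import dataclass

@dataclass
class TouchEvent:
    x: float  # normalized 0..1 screen coordinates
    y: float

class OpticalDetector:
    def poll(self) -> TouchEvent:
        # Camera/optical detection (Surface- or TouchSmart-style) would go here.
        return TouchEvent(x=0.25, y=0.75)

class CapacitiveDetector:
    def poll(self) -> TouchEvent:
        # Capacitance-overlay panel detection would go here.
        return TouchEvent(x=0.25, y=0.75)

def on_touch(event: TouchEvent) -> str:
    # The app just asks "did he touch it, and where?" regardless of hardware.
    return f"touched at ({event.x}, {event.y})"

# The same application code runs against either backend.
for detector in (OpticalDetector(), CapacitiveDetector()):
    print(on_touch(detector.poll()))
```

The design choice is the one he describes: the per-technology code lives behind the `poll` boundary, so adding an eighth way of detecting touch never touches the application.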

Patents and Inventor's Rights
You guys recently crossed the 10,000-patent line—is that all your Research division?

No, that's from the whole company. Every year we make a budget for investment in patent development in all the different business groups including Research. They all go and look for the best ideas they've got, and file patents within their areas of specialization. It's done everywhere in the company.

So, take multitouch, something whose patents have been discussed lately. When it comes to inevitability vs. unique product development, how much is something like multitouch simply inevitable? How much can a single company own something that seems so generally accepted in interface design?

The goal of the patent system is to protect novel inventions. The whole process is supposed to weed out things that are already known, things that have already been done. That process isn't perfect—sometimes people get patents on things that they shouldn't, and sometimes they're denied patents on things they probably should get—but on balance you get the desired result.

If you can't identify in the specific claims of a particular patent what is novel, then you don't get a patent. Just writing a description of something—even if you're the first person to write it down—doesn't qualify as invention if it's already obvious to other people. You have to trust that somehow obvious things aren't going to be withheld from everybody.

That makes sense. We like to look at patents to get an idea of what's coming next—

That's what they were intended to do; that was the deal with the inventor: If you'll share your inventions with the public in the spirit of sharing knowledge, then we'll give you some protection in the use of that invention for a period of time. You're rewarded for doing it, but you don't sequester the knowledge. It's that tradeoff that actually makes the patent system work.

Windows in the Cloud, Lasers in the Retina
Let's get some quick forecasts. How soon until we see Windows in the cloud? I turn on my computer, and even my operating system exists somewhere else.

That's technologically possible, but I don't think it's going to be commonplace. We tend to believe the world is trending towards cloud plus client, not timeshared mainframe and dumb display. The amount of intrinsic computing capability in all these client devices—whether they're phones, cars, game consoles, televisions or computers—is so large, and growing larger still exponentially, that the bulk of the world's computing power is always going to be in the client devices. The idea that the programmers of the world would let that lie fallow, wouldn't try to get any value out of it, isn't going to happen.

What you really want to do is find which component is best solved in the shared facility and which is best computed locally. We do think that people will want to write arbitrary applications in the cloud. We just don't think that's going to be the predominating usage of it. It's not like the whole concept of computing is going to be sucked back up the wire and put in some giant computing utility.

What happens when the processors are inside our heads and the displays are projected on the inside of our eyeballs?

It'll be interesting to see how that evolution takes place. It's clear that embedding computing inside people is starting to happen fairly regularly. These are special processors, not general processors. But there are now cochlear implants, and people are even exploring ways to give those who've lost sight some kind of vision or a way to detect light.

But I don't think you are going to end up with some nanoprojector trying to scribble on your retina. To the extent that you could posit that you're going to get to that level, you might even bypass that and say, "Fine, let me just go into the visual cortex directly." It's hard to know how the man-machine interface will evolve, but I do know that the physiology of it is possible and the electronics of it are becoming possible. Who knows how long it will take? But I certainly think that day will come.

And neural control of our environment? There's already a Star Wars toy that uses brain waves to control a ball—

Yeah, it's been quite a few years since I saw some of the first demos inside Microsoft Research where people would have a couple of electrical sensors on their skull, in order to detect enough brain wave functionality to do simple things like turn a light switch on and off reliably. And again, these are not invasive techniques.

You'll see the evolution of this come from the evolution of diagnostic equipment in medicine. As people learn more about non-invasive monitoring for medical purposes, what gets created as a byproduct are non-invasive sensing people can use for other things. Clearly the people who will benefit first are people with physical disabilities—you want to give them a better interface than just eye-tracking on screens and keyboards. But each of these things is a godsend, and I certainly think that evolution will continue.

I wonder what your dream diary must look like—must have some crazy concepts.

I don't know, I just wake up some mornings and say, yeah, there's a new idea.

Really? Just jot it down and run with it?

Yeah, that's oftentimes the way it is. Just, wasn't there yesterday, it's there today. You know, you just start thinking about it.



Google and NASA Create University for Would-be Futurists

Maybe you've wondered how someone gets to be a "futurist." Simple: go to futurist university. Google and NASA will unveil just such a school at this week's TED conference; the head of the school will be one of the leading futurists of our time, Ray Kurzweil. Fittingly, it's been dubbed Singularity University, after Kurzweil's much-discussed idea that computers will soon reach a threshold of such great power that they'll reshape our world.

The institute will offer courses in Kurzweil's traditional pet subjects: nanotechnology, biotechnology, and artificial intelligence. There will also be seven additional courses, geared to the world's greatest challenges, including energy and finance. As Kurzweil told the AP: "One of the objectives of the university is to really dive in depth into these exponentially growing technologies, to create connections between them, and to apply these ideas to the great challenges [facing humanity]."

Classes will take place at NASA's Ames campus. Startup costs are being footed by Google, which has anted up $1 million, and by several companies yet to be named, each of which will donate $250,000. The university's first chancellor will be Peter Diamandis, chairman of the X Prize Foundation.

Applications will be accepted online (the site is currently crashed). The first year will see 30 students accepted; the year after that, 100. Penniless cranks need not apply, though rich ones can: tuition will be $25,000 for a nine-week course that begins with a three-week general curriculum and culminates in a subject specialty.



Google and Nasa back new school for futurists

Google and Nasa are throwing their weight behind a new school for futurists in Silicon Valley to prepare scientists for an era when machines become cleverer than people.

The new institution, known as “Singularity University”, is to be headed by Ray Kurzweil, whose predictions about the exponential pace of technological change have made him a controversial figure in technology circles.

Google and Nasa’s backing demonstrates the growing mainstream acceptance of Mr Kurzweil’s views, which include a claim that before the middle of this century artificial intelligence will outstrip human beings, ushering in a new era of civilisation.

To be housed at Nasa’s Ames Research Center, a stone’s-throw from the Googleplex, the Singularity University will offer courses on biotechnology, nano-technology and artificial intelligence.

The so-called “singularity” is a theorised period of rapid technological progress in the near future. Mr Kurzweil, an American inventor, popularised the term in his 2005 book “The Singularity is Near”.

Proponents say that during the singularity, machines will be able to improve themselves using artificial intelligence and that smarter-than-human computers will solve problems including energy scarcity, climate change and hunger.

Yet many critics call the singularity dangerous. Some worry that a malicious artificial intelligence might annihilate the human race.

Mr Kurzweil said the university was launching now because many technologies were approaching a moment of radical advancement. “We’re getting to the steep part of the curve,” said Mr Kurzweil. “It’s not just electronics and computers. It’s any technology where we can measure the information content, like genetics.”

The school is backed by Larry Page, Google co-founder, and Peter Diamandis, chief executive of X-Prize, an organisation which provides grants to support technological change.

“We are anchoring the university in what is in the lab today, with an understanding of what’s in the realm of possibility in the future,” said Mr Diamandis, who will be vice-chancellor. “The day before something is truly a breakthrough, it’s a crazy idea.”

Despite its title, the school will not be an accredited university. Instead, it will be modelled on the International Space University in Strasbourg, France, the interdisciplinary, multi-cultural school that Mr Diamandis helped establish in 1987.



Thursday, July 28, 2011

Molecular cut and paste

A combination of cheap DNA synthesis, freely accessible databases, and our ever-expanding knowledge of protein science is conspiring to permit a revolution in creating powerful molecular tools, suggests William McEwan, Ph.D., a virologist at the MRC Laboratory of Molecular Biology, Cambridge, U.K., in this excerpt from the new book Future Science: Essays From The Cutting Edge, edited by Max Brockman.

This afternoon I received in the post a slim FedEx envelope containing four small vials of DNA. The DNA had been synthesized according to my instructions in under three weeks, at a cost of 39 U.S. cents per base pair (the rungs adenine-thymine or guanine-cytosine in the DNA ladder). The 10 micrograms I ordered are dried, flaky, and barely visible to the naked eye, yet once I have restored them in water and made an RNA copy of this template, they will encode a virus I have designed.
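The quoted rate makes the cost of any construct easy to estimate. The construct length below is hypothetical (the excerpt doesn't state one); only the 39-cents-per-base-pair figure comes from the text:

```python
# Gene-synthesis cost at the quoted rate of $0.39 per base pair.
COST_PER_BP = 0.39

def synthesis_cost(length_bp: int) -> float:
    """Dollar cost to synthesize a construct of the given length in base pairs."""
    return length_bp * COST_PER_BP

# A hypothetical 10 kb viral construct:
print(f"${synthesis_cost(10_000):,.2f}")  # $3,900.00
```

At that price, a complete designed genome in the ten-kilobase range costs a few thousand dollars — well within a standard research grant, which is the essay's point about how recently this became routine.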

My virus will be self-replicating, but only in certain tissue-culture cells; it will cause any cell it infects to glow bright green and will serve as a research tool to help me answer questions concerning antiviral immunity. I have designed my virus out of parts—some standard and often used, some particular to this virus—using sequences that hail from bacteria, bacteriophages, jellyfish, and the common cold virus. By simply putting these parts together, I have infinitely increased their usefulness. What is extraordinary is that if I had done this experiment a mere eight years ago, it would have been a world first and unthinkable on a standard research grant. A combination of cheap DNA synthesis, freely accessible databases, and our ever expanding knowledge of protein science is conspiring to permit a revolution in creating powerful molecular tools.

Nature is already an expert in splicing together her existing repertoire to generate proteins with new functions. Her unit of operation is the protein domain, an evolutionarily independent protein structure that specializes in a particular task, such as an enzymatic activity or recognition of other proteins. We can trace the evolutionary descent of the protein domains by examining their sequences and grouping them into family trees. We find that over the eons of evolutionary time the DNA that encodes protein domains has been duplicated and combined in countless ways through rare genetic events, and that such shuffling is one of the main drivers of protein evolution.

The result is an array of single and multidomain proteins that make up an organism’s proteome. We can now view the protein domain as a functional module, which can be cut and pasted into new multidomain contexts while remaining able to perform the same task. This modular capability immediately lends itself to engineering: we don’t have to go about finding or artificially evolving a protein that performs our chosen task; we merely combine components that together are greater than the sum of their parts.

I’m interested in the defense mechanisms within cells — mechanisms that specifically recognize and disable intracellular pathogens. This type of defense is considered separate from the two main branches of immunity that are more intensely studied: the evolutionarily ancient “innate” immune system and the vertebrate-specific “adaptive” immune system. Innate immunity is the recognition of conserved features of pathogens—for example, the detection by specialized cells, such as macrophages, of the sugary capsule that surrounds many bacteria. Adaptive immunity works by fielding a huge diversity of immune recognition molecules, such as antibodies, and then producing large quantities of those that recognize nonself, pathogen-derived targets.

The newly discovered kind of immunity on which I work, sometimes termed “intrinsic immunity,” shares features with innate immunity but tends to be widely expressed, instead of residing just within “professional” immune cells, and is always “on.” In other words, every cell in an organism is primed and ready to disable an invading pathogen. The intrinsic immune system is at a strategic disadvantage, as its targets are often fast-evolving viruses that can rapidly mutate to evade recognition. Unlike the adaptive immune system, which can quickly generate a response to an almost infinite diversity of targets, the intrinsic immune system must rely on rare mutations and blind selection over evolutionary time to compete with its opponents.

So far, the study of the intrinsic immune system has been dominated by its interaction with retroviruses. The retroviruses, an ancient affliction of vertebrates, violate the central dogma of biology—that DNA makes RNA makes protein: they are RNA viruses able to generate DNA copies of themselves and insert this Trojan-horse code into the host’s genome. Almost one-tenth of the human genome is the defunct relic of this sort of infection.

Within the past 7 million to 12 million years, a comparatively recent member of the retrovirus family, the lentivirus, has emerged and spread slowly through the branches of the mammalian family tree. The oldest known traces of lentivirus have been found in the genome of rabbits, but current infections occur in horses, cats, ruminants, and primates. Lentiviruses arrived in humans in the form of HIV, through several cross-species transmission events from other primates. Only one of those viral transmissions—from chimpanzee to human, sometime in the late nineteenth or early twentieth century—has adapted to its new host in such a devastating manner, the virus being HIV-1 M-group, which causes AIDS and currently infects 33 million people worldwide.

One of the major players in intrinsic immunity is TRIM5, a four-domain protein that is expressed in virtually every cell in the human body. By virtue of one of its domains—the RING (which stands for Really Interesting New Gene) domain—TRIM5 has an enormously high turnover rate; that is, each of its molecules is degraded within about an hour of the cell’s having synthesized it. By virtue of another of its domains, it can recognize and engage retroviruses soon after their entry into the cell. As a result, incoming viruses can be degraded along with TRIM5 and thus made noninfective.

A classic arms-race situation has developed, wherein TRIM5 has tried to maintain its ability to recognize the rapidly evolving retroviruses, placing the gene under some of the strongest Darwinian selection in the entire primate genome. However, HIV-1 seems to have the upper hand at the moment: the human TRIM5 variant only marginally reduces the replication of HIV-1. Could this be one of the failures in human immunity that has permitted such a dramatic invasion by this pathogen? And what does human TRIM5 need to do in order to gain the upper hand? Or, to ask a bolder question, what can we do to it to engineer resistance to the disease?

One surprising answer is provided by protein-domain fusions in other primate species. A fascinating thing has happened in South American owl monkeys: TRIM5 has been fused with a small protein that HIV-1 depends on for optimal replication. The resulting fusion protein is called TRIMCyp and can reduce the replication of lentiviruses by orders of magnitude, essentially rendering the owl monkeys’ cells immune to the virus. Almost unbelievably (and it amazes me that espousers of Intelligent Design aren’t onto this), this feat of genomic plasticity has happened twice: versions of TRIMCyp have also been described in the unrelated macaque lineage. Since no wild populations of owl monkeys or macaques have been shown to harbor lentiviruses, it is difficult to say whether TRIMCyps have been selected specifically to combat lentiviruses, but there remains the intriguing possibility that TRIMCyp has helped lead to the current lentivirus-free status of one or both of these species.
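The domain cut-and-paste behind TRIMCyp can be caricatured in code. This is purely schematic: proteins as ordered lists of domain names. The domain names are real, but the representation models nothing biochemical:

```python
# Schematic "cut and paste" of protein domains: a protein is an ordered list
# of domain names. TRIM5's B30.2 domain is its virus-recognition module;
# CypA (cyclophilin A) is the small protein HIV-1 exploits for replication.
TRIM5 = ["RING", "B-box", "coiled-coil", "B30.2"]
CypA = ["cyclophilin"]

def fuse(host, donor):
    """Swap the host's last domain for the donor's domains,
    mimicking the natural fusion found in owl-monkey TRIMCyp."""
    return host[:-1] + donor

TRIMCyp = fuse(TRIM5, CypA)
print(TRIMCyp)  # ['RING', 'B-box', 'coiled-coil', 'cyclophilin']
```

The point of the toy is the modularity claim from earlier in the essay: because each domain keeps its function in a new context, recombining lists is all the "engineering" the sketch needs.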

So how can we benefit from gene fusions in other species? The first lesson is that by splicing together domains from seemingly unrelated proteins, unexpected and useful products can be generated. Researchers have already generated human TRIMCyps, which would avoid the immune-system rejection that introducing an owl monkey gene would produce. My colleagues and I have also engineered a feline TRIMCyp that prevents replication of the feline immunodeficiency virus in tissue-culture systems. However, in a clinical setting TRIMCyp must be expressed within cells to be useful as an antiviral, and the only effective means of achieving this is through gene therapy to alter the target cell’s genetic material. In a neat twist of roles, the best means we have of doing this is with a modified retroviral vehicle, or vector, to introduce a stretch of engineered DNA into the genome.



Thinking quantitatively about technological progress

I have been thinking about progress a bit recently, mainly because I would like to develop a mathematical model of how brain scanning technology and computational neuroscience might progress.

In general, I think the most solid evidence of technological progress is Wrightean experience curves. These are well documented in economics and found everywhere: typically the cost (or time) of manufacturing per unit behaves as x^a, where a<0 (typically something like -0.1) and x is the number of units produced so far. When you make more things, you learn how to make the process better.
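A minimal sketch of such an experience curve, assuming the power-law form cost(x) = c0 · x^a with the a ≈ -0.1 mentioned above:

```python
# Wright's experience curve: unit cost falls as a power law of cumulative
# production, cost(x) = c0 * x**a with a < 0 (here a = -0.1, as in the text).
def unit_cost(c0: float, x: int, a: float = -0.1) -> float:
    return c0 * x ** a

# Each doubling of cumulative output multiplies cost by 2**a (about 0.933
# here), i.e. roughly a 6.7% cost reduction per doubling.
c0 = 100.0
for x in (1, 10, 100, 1000):
    print(x, round(unit_cost(c0, x), 2))
```

Note the log-log linearity this implies: every tenfold increase in cumulative units cuts cost by the same factor (about 21% here), which is why experience curves plot as straight lines on log-log axes.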

Performance curves

On the output side, we have performance curves: how many units of something useful can we get per dollar. The Santa Fe Institute performance curve database is full of interesting evidence of things getting better/cheaper. Bela Nagy has argued that typically we see “Sahal’s Law”: exponentially increasing sales (since a tech becomes cheaper and more ubiquitous), together with exponential progress, produce Wright’s experience curves.

One interesting problem is that some technologies are limited because the number of units sold will eventually level off. In sales of new technology we see Bass curves: a sigmoid curve where at first a handful of early adopters get it, then more and more follow (since people copy each other, this phase is roughly exponential), and then adoption levels off as most potential buyers already have the product.

There is a large literature on these curves, but they are nearly useless for forecasting (due to noise sensitivity in the early days). If Bela is right, a technology obeying the Moore-Sahal-Wright relations would follow a straight line in the “total units sold” vs. “cost per unit” diagram, but there would be a limit point, since total units sold eventually levels off (once you have railroads to every city, building another one will not be useful; once everybody has a good enough graphics card, people buy far fewer).

The technology stagnates, and this is not because of any fundamental physics or engineering limit. The real limit is lack of economic incentives for becoming much better.
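The Bass adoption dynamic described above can be sketched as a discrete-time iteration. The parameter values p and q below are illustrative textbook-style choices, not fitted to any product:

```python
# Discrete-time Bass diffusion: each step, the remaining market (M - F)
# adopts at hazard rate p (innovation) + q * F (imitation).
def bass_curve(p=0.03, q=0.38, M=1.0, steps=40):
    F = 0.0          # cumulative adoption fraction
    series = []
    for _ in range(steps):
        F += (p + q * F) * (M - F)
        series.append(F)
    return series

s = bass_curve()
# Sigmoid shape: slow start, rapid middle, saturation near M.
print(round(s[0], 3), round(s[9], 3), round(s[-1], 3))
```

Once F approaches M, per-step adoption collapses toward zero, which is exactly the cumulative-sales plateau that caps the experience-curve descent.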


Another aspect that I find really interesting is whether a field has sudden jumps or continuous growth. Consider how many fluid dynamics calculations you can get per dollar. You have an underlying Moore’s law exponential, but discrete algorithmic improvements create big jumps as more efficient ways of calculating are discovered.

Typically, these improvements are big, a decade's worth of Moore's law or so. But this mainly happens in some fields like software (chess program performance behaves like this, and I suspect — if we ever could get a good performance measure — AI does too), where a bright idea changes the process a lot. It is much rarer in fields constrained by physics (mining?) or where the tech is composed of a myriad of interacting components (cars?).



Tuesday, July 26, 2011

Sharper, deeper, faster optical imaging of live biological samples

Researchers from the California Institute of Technology (Caltech) have developed a novel approach that could redefine optical imaging of live biological samples, simultaneously achieving high resolution, high penetration depth (for seeing deep inside 3D samples), and high imaging speed.

The research team employed an unconventional imaging method called light-sheet microscopy: a thin, flat sheet of light is used to illuminate a biological sample from the side, creating a single illuminated optical section through the sample.

The light given off by the sample is then captured with a camera oriented perpendicularly to the light sheet, harvesting data from the entire illuminated plane at once. This allows millions of image pixels to be captured simultaneously, reducing the light intensity that needs to be used for each pixel.

To achieve sharper image resolution with light-sheet microscopy deep inside biological samples, the team used a process called two-photon excitation for the illumination.

“The goal is to create ‘digital embryos,’ providing insights into how embryos are built, which is critical not only for basic understanding of how biology works but also for future medical applications such as robotic surgery, tissue engineering, or stem-cell therapy,” said biologist Scott Fraser, director of the Biological Imaging Center at Caltech’s Beckman Institute.



Minority rules: scientists discover tipping point for the spread of ideas

Scientists at Rensselaer Polytechnic Institute have found that when just 10 percent of the population holds an unshakable belief, their belief will always be adopted by the majority of the society.

The scientists developed computer models of various types of social networks. One of the networks had each person connect to every other person in the network. The second model included certain individuals who were connected to a large number of people, making them opinion hubs or leaders.

The final model gave every person in the model roughly the same number of connections. The initial state of each of the models was a sea of traditional-view holders. Each of these individuals held a view but was also, importantly, open-minded to other views.

Once the networks were built, the scientists then “sprinkled” in some true believers throughout each of the networks.

“As agents of change start to convince more and more people, the situation begins to change,” Sreenivasan said. “People begin to question their own views at first and then completely adopt the new view to spread it even further. If the true believers just influenced their neighbors, that wouldn’t change anything within the larger system, as we saw with percentages less than 10.”

“When the number of committed opinion holders is below 10 percent, there is no visible progress in the spread of ideas. It would literally take an amount of time comparable to the age of the universe for this size group to reach the majority,” said SCNARC Director Boleslaw Szymanski. “Once that number grows above 10 percent, the idea spreads like flame.”
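The SCNARC work is based on a "binary agreement" (naming-game) dynamic. A toy version written from the description above, with illustrative parameters, shows the same qualitative tipping behavior: below roughly 10 percent committed agents the majority view persists, above it the committed view takes over:

```python
import random

def run(n=200, committed_frac=0.15, steps=100_000, seed=0):
    """Fraction of uncommitted agents holding only 'A' after the run."""
    rng = random.Random(seed)
    n_c = int(n * committed_frac)          # committed 'A' agents
    # Committed agents (index < n_c) always hold {'A'};
    # everyone else starts with the traditional view {'B'}.
    ops = [{'A'} for _ in range(n_c)] + [{'B'} for _ in range(n - n_c)]
    for _ in range(steps):
        s, l = rng.randrange(n), rng.randrange(n)
        if s == l:
            continue
        word = rng.choice(sorted(ops[s]))  # speaker utters one of its names
        if word in ops[l]:                 # listener agrees: both collapse
            ops[l] = {word}
            if s >= n_c:
                ops[s] = {word}
        elif l >= n_c:                     # listener learns the new name
            ops[l] = ops[l] | {word}
    return sum(o == {'A'} for o in ops[n_c:]) / (n - n_c)

for frac in (0.05, 0.15):
    print(frac, round(run(committed_frac=frac), 2))
```

The network here is the fully connected case (the first of the three models described above); the parameters n, steps, and the seed are assumptions for the sketch, not values from the paper.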

The research has broad implications for understanding how opinion spreads. “There are clearly situations in which it helps to know how to efficiently spread some opinion or how to suppress a developing opinion,” said Associate Professor of Physics and co-author of the paper Gyorgy Korniss. “Some examples might be the need to quickly convince a town to move before a hurricane or spread new information on the prevention of disease in a rural village.”

The researchers are now looking for partners within the social sciences and other fields to compare their computational models to historical examples. They are also looking to study how the percentage might change when input into a model where the society is polarized. Instead of simply holding one traditional view, the society would instead hold two opposing viewpoints. An example of this polarization would be Democrat versus Republican.



Saturday, July 23, 2011

A Semiconductor DNA Sequencer

Unlike other systems, Ion Torrent's technology promises to improve in step with advances in electronics—and it's already proving useful for public health.

Last December, Ion Torrent, something of an upstart in the sequencing industry, launched its new semiconductor-based sequencing machine. At $50,000, it was a comparatively inexpensive device designed to move DNA sequencing from large, specialized centers to the standard lab bench. Now the company says its machine is en route to becoming the most popular one in a competitive market.

Life Technologies, which bought Ion Torrent for $375 million in cash and stock last August, is feeling good about its bet. The technology has already proved its worth as a valuable public health tool. In June, two different groups used the Ion Torrent machine to rapidly sequence the genome of a new strain of E. coli that killed more than 20 people in Europe. The effort helped identify the microbe's drug-resistance genes. And researchers across the globe are using it to sequence genes involved in cancer and other diseases, with the aim of creating rapid tests to determine the best medicine for a patient. 

Ion Torrent is competing with a number of sequencing technologies, all racing to become the fastest and cheapest: a landmark goal in the field is to sequence an entire human genome for $1,000, which would put it on par with many other routine medical tests. But Jonathan Rothberg, Ion Torrent's founder, says his technology, which is based on semiconductors, is getting better faster than anyone else's. 

Most advanced sequencing technologies rely on fluorescently tagged molecules and a microscope to sequence DNA. At the heart of Ion Torrent's machine are sequencing chips that detect DNA sequences electronically. This approach removes the need for expensive lasers and cameras. The chips are made in the same semiconductor fabs as computer microprocessors. And just as with computer chips, production costs per chip drop as larger numbers are produced. As sales of the Ion Torrent machine have risen, the cost of the sequencing chips has dropped from $250 to $99.

Researchers have also improved the chip's sequencing capacity by tenfold; each chip can generate 100 million base pairs, up from 10 million base pairs when the technology first launched. Rothberg says a third-generation chip capable of sequencing a billion bases will be available next year.

As a tip of the hat to the power of semiconductors, Ion Torrent has now sequenced the full genome of Intel cofounder Gordon E. Moore, now 82. Moore is best known as the originator of Moore's law, which posits that the processing power of new chips doubles approximately every two years. Ion Torrent's chip has improved tenfold over six months, a rapid advance that Rothberg attributes to "accumulated Moore's law," or the decades of research and billions of dollars that have gone into making faster microprocessors.
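The compounding arithmetic behind these two growth rates is easy to check (illustrative only):

```python
import math

# Moore's law compounding: processing power doubles roughly every two years.
def doublings(years: float, doubling_period: float = 2.0) -> float:
    """Improvement factor after `years` under the given doubling period."""
    return 2 ** (years / doubling_period)

print(doublings(10))  # a decade at Moore's pace: 32.0x

# Ion Torrent's reported tenfold capacity gain in six months corresponds to
# a doubling period of 0.5 * ln(2) / ln(10) years -- about 1.8 months,
# far faster than the canonical two years.
period_years = 0.5 * math.log(2) / math.log(10)
print(round(period_years * 12, 1), "months per doubling")
```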

At this point, Ion Torrent's technology isn't well-suited to sequencing entire human genomes. Moore's sequence required about 1,000 chips and a total cost of about $200,000. Other technologies, in contrast, have brought the cost of a whole genome down to $5,000 to $20,000, depending on how the cost is calculated.

Ion Torrent's technology is most adept at sequencing small genomes, like those of microbes, or a selection of genes, such as those that have been linked to cancer. "We are not the cheapest machine for a human genome, but we are the cheapest if you want to look at 200 genes or a pathogen behind an outbreak," says Rothberg. He predicts that by 2013, Ion Torrent will have developed a chip capable of sequencing an entire human genome. 

However, many of the most medically relevant tests that physicians want to run today encompass only tens or hundreds of genes. And in these cases, the biggest advantage of the new technology is its speed; it can sequence a sample of DNA in a couple of hours, rather than the week or more required by most of the machines now on the market. For genetic diagnostics, physicians want results fast. The Ion Torrent machine, however, is still considered a research device; it has not yet been approved by the U.S. Food and Drug Administration for clinical use.

Yemi Adesokan, cofounder of the genomics startup Pathogenica, is using Ion Torrent's technology to develop a test for human papillomavirus in Pap smear samples. Unlike existing tests, the Ion Torrent one will be able to detect infection with multiple strains of the virus, which can be linked to an increased risk of cancer. "It works really well, particularly in terms of turnaround time," says Adesokan.

