Saturday, April 30, 2011

Ultrafast fibre optics set new speed record

THINK your broadband internet connection is fast? Two separate research groups have just lapped the field, setting a world record by sending more than 100 terabits of information per second through a single optical fibre. That's enough to deliver three solid months of HD video, or the contents of 250 double-sided Blu-ray discs.
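The disc comparison checks out with quick arithmetic, assuming a double-sided Blu-ray disc holds two 25 GB single-layer sides:

```python
# 100 terabits per second, expressed as Blu-ray discs per second.
TERABITS_PER_SECOND = 100
bits = TERABITS_PER_SECOND * 10**12   # total bits sent in one second
gigabytes = bits / 8 / 10**9          # bits -> bytes -> gigabytes
discs = gigabytes / 50                # 50 GB per double-sided disc
print(round(gigabytes), round(discs)) # 12500 GB, i.e. 250 discs
```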

This marks "a critical milestone in fibre capacity", says Ting Wang at NEC Laboratories in Princeton, New Jersey.

Such lab results are far beyond today's commercial needs. Total capacity between New York and Washington DC, one of the world's busiest routes, is only a few terabits per second, says Tim Strong of TeleGeography Research in Washington. But "traffic has been growing about 50 per cent a year for the last few years", he adds. With bandwidth-hungry video-streaming and social media growing relentlessly, network planners are always searching for ways to expand capacity.

Today's fibre optics use several tricks to enhance bandwidth. Like the radio band, the optical spectrum can be sliced into many distinct channels that simultaneously carry information at different frequencies. The laser light is pulsed on and off rapidly, and each pulse is further divided by polarisation, amplitude and phase, each combination encoding bits of information. The trick is to pack all these signals together in one fibre so that they reach the receiver as one pulse without interfering with one another.

At the Optical Fiber Communications Conference in Los Angeles last month, Dayou Qian, also of NEC, reported a total data-sending rate of 101.7 terabits per second through 165 kilometres of fibre. He did this by multiplexing the light from 370 separate lasers into the single pulse that reaches the receiver. Each laser emitted its own narrow sliver of the infrared spectrum, and each carried several polarisations, phases and amplitudes of light waves to encode each packet of information.
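Dividing the aggregate rate across the lasers gives the implied average throughput per wavelength channel; this is arithmetic only, not a detail reported by NEC:

```python
# Average throughput implied per laser; the modulation details that
# produce it are not spelled out here, so this is a back-of-the-envelope figure.
total_tbps = 101.7
lasers = 370
per_laser_gbps = total_tbps * 1000 / lasers
print(round(per_laser_gbps))  # roughly 275 Gb/s per wavelength channel
```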

At the same conference, Jun Sakaguchi of Japan's National Institute of Information and Communications Technology in Tokyo also reported reaching the 100-terabit benchmark, this time using a different method. Instead of a fibre with only one light-guiding core, as in today's commercial fibres, Sakaguchi's team developed a fibre with seven. Each core carried 15.6 terabits per second, yielding a total of 109 terabits per second. "We introduced a new dimension, spatial multiplexing, to increase transmission capacity," Sakaguchi says.

Multi-core fibres are complex to make, as is amplifying signals for long-distance transmission in either technique. For this reason, Wang thinks the first application of 100-terabit transmission will be inside the giant data centres that power Google, Facebook and Amazon.

Source: http://goo.gl/mZcnH

Source and more resources: http://goo.gl/JujXk ─ Publisher: Andres Agostini ─ @Futuretronium on Twitter ─ Futuretronium Book at http://goo.gl/JujXk

Friday, April 29, 2011


Like GPS, the brain’s navigation system for the external world requires precisely timed pulses

April 29, 2011

Biologists at the University of California, San Diego have discovered that precisely timed electrical oscillations in neuronal “grid cells” in the brain allow us to navigate through our physical environment by maintaining an internal hexagonal representation.

Located in the entorhinal cortex next to the hippocampus, grid cells use precisely timed theta waves from another part of the brain that serves as a kind of neural pacemaker.

To test this hypothesis, the scientists monitored the electrical activity of grid cells in rats exploring a small four-foot by four-foot enclosure.

As an animal navigates through its environment, a given grid cell becomes active when the animal’s position coincides with any of the vertices within the grid. The scientists silenced the theta waves by manipulating a small group of pacemaker cells in the brain and observed a significant deterioration of the grid cells’ maps of the environment.

Their discovery has important implications for understanding the underlying causes of neurological diseases such as Alzheimer’s disease and for restoring memory in areas of the brain that are necessary for orientation, the researchers said.

http://goo.gl/CJE82



Tuesday, April 26, 2011


Tapping into "The Information Seeker"

To date, health device makers have primarily targeted consumers who are either fitness focused or chronically ill. But between these two extremes sits a large, fragmented and often overlooked population who seek better information to effectively manage their health.

Recent advances in technology are enabling smarter, connected personal healthcare "systems" that can supply crucial information to significantly improve diagnosis, treatment and condition management. These developments now make it feasible to deliver health device solutions that meet the needs of these "information seekers" and help reduce long-term healthcare costs.

Further, as consumers begin to pay a higher share of their total healthcare costs, we believe the average person may become more motivated to manage his or her health to help contain healthcare expenses and reduce insurance premiums.

Our research suggests that successful solution providers will approach this market opportunity as an ecosystem of partners - with an integrated solution that extends beyond the health device itself. By plugging the information gap for these consumers, solution providers can help fuel healthcare innovation.

To learn more, download the complete IBM Institute for Business Value executive report, "The future of connected health devices: Liberating the Information Seeker."

http://goo.gl/L5Ign

Digital Transformation

Businesses today are undertaking digital transformations, rethinking what customers value most and creating operating models that take advantage of what’s newly possible for competitive differentiation. The challenge for business is how fast and how far to go.

Business leaders have long used information technology to improve productivity and efficiency, reach new markets and optimize supply chains. What’s new is that customer expectations have also changed. People everywhere are using social networks to find jobs and restaurants, lost friends and new partners – and, as citizens, to achieve common political goals. They are using the Internet for entertainment, shopping, socializing and household management.

How can businesses best respond to this shift? How can they take advantage of the opportunity to innovate, differentiate and grow? And how can they do all this cost efficiently, leveraging and optimizing the newest information technologies as part of their overall physical operations?

In our analysis of leading companies and our work with clients, we have found that companies with a cohesive strategy for integrating digital and physical elements can successfully transform their business models – and set new directions for entire industries. These companies focus on two complementary activities: reshaping their customer value propositions and reconfiguring operating models using digital technologies for greater customer and partner interaction and collaboration. To do so, they are building a new set of capabilities that allows them to progress along both dimensions.

http://goo.gl/0XyEl

Stable, self-renewing neural stem cells created

April 26, 2011

Researchers at the University of California, San Diego School of Medicine, the Gladstone Institutes in San Francisco and colleagues have reported the creation of long-term, self-renewing, primitive neural precursor cells from human embryonic stem cells (hESCs) that can be directed to become many types of neurons without increased risk of tumor formation.

To produce the neural stem cells, the researchers added small molecules in a chemically defined culture condition that induces hESCs to become primitive neural precursor cells, but then halts the further differentiation process.

Because the process doesn’t use any gene transfer technologies or exogenous cell products, there’s minimal risk of introducing mutations or outside contamination, the researchers said.

The scientists were able to direct the precursor cells to differentiate into different types of mature neurons. “You can generate neurons for specific conditions like amyotrophic lateral sclerosis (ALS, or Lou Gehrig’s disease), Parkinson’s disease or, in the case of my particular research area, eye-specific neurons that are lost in macular degeneration, retinitis pigmentosa or glaucoma,” said Kang Zhang, M.D., Ph.D.

The same method can be used to push induced pluripotent stem cells (stem cells artificially derived from adult, differentiated mature cells) to become neural stem cells, Zhang said.

http://goo.gl/8BHSQ

Machines will achieve human-level intelligence in the 2028 to 2150 range: poll

April 26, 2011

Machines will achieve human-level intelligence by 2028 (median estimate: 10% chance), by 2050 (median estimate: 50% chance), or by 2150 (median estimate: 90% chance), according to an informal poll at the Future of Humanity Institute (FHI) Winter Intelligence conference on machine intelligence in January.
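The three median answers amount to three points on a cumulative distribution over arrival years. As an illustration only, one can linearly interpolate between them; the interpolation is an assumption, not something the survey reported:

```python
# The three median poll answers treated as points on a cumulative
# distribution over arrival years; linear interpolation between the
# points is an illustrative assumption, not part of the survey.
cdf_points = [(2028, 0.10), (2050, 0.50), (2150, 0.90)]

def prob_by(year):
    """Interpolated probability that human-level AI arrives by `year`."""
    if year <= cdf_points[0][0]:
        return cdf_points[0][1]
    for (y0, p0), (y1, p1) in zip(cdf_points, cdf_points[1:]):
        if year <= y1:
            return p0 + (p1 - p0) * (year - y0) / (y1 - y0)
    return cdf_points[-1][1]

print(round(prob_by(2039), 2))  # midway between 2028 and 2050 -> 0.3
```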

“Human‐level machine intelligence, whether due to a de novo AGI (artificial general intelligence) or biologically inspired/emulated systems, has a macroscopic probability of occurring mid‐century,” the report authors, Dr. Anders Sandberg and Dr. Nick Bostrom, both researchers at FHI, found.

“This development is more likely to occur from a large organization than as a smaller project. The consequences might be potentially catastrophic, but there is great disagreement and uncertainty about this — radically positive outcomes are also possible.”

Other findings:

Industry, academia and the military are the types of organizations most likely to first develop a human‐level machine intelligence.
Responses to “How positive or negative are the ultimate consequences of the creation of a human‐level (and beyond human‐level) machine intelligence likely to be?” were bimodal, with more weight given to extremely good and extremely bad outcomes.
Of the 32 responses to “How similar will the first human‐level machine intelligence be to the human brain?”, 8 thought “very biologically inspired machine intelligence” most likely, 12 thought “brain‐inspired AGI” and 12 thought “entirely de novo AGI” most likely.
Most participants were only mildly confident of an eventual win by IBM’s Watson over human contestants in the “Jeopardy!” contest.
“This survey was merely an informal polling of an already self‐selected group, so the results should be taken with a large grain of salt,” the authors advise. “The small number of responses, the presence of visiting groups with presumably correlated views, the simple survey design and the limitations of the questionnaire all contribute to make this of limited reliability and validity.”

“While the validity is questionable, the results are consistent with earlier surveys,” Sandberg told KurzweilAI. “The kind of people who respond to this tend to think mid-century human-level AI is fairly plausible, with a tail towards the far future. Opinions on the overall effect were not divided but bimodal — it will likely be really good or really bad, not something in between.”

Brent Allsop, a Senior Software Engineer at 3M, has started a “Human Level AI Milestone?” Canonizer (a consensus-building open survey system) to encourage public participation on this question: “Can you think of any milestone such that if it were ever reached you would expect human‐level machine intelligence to be developed within five years thereafter?”

http://goo.gl/Qqorj


When There’s No Such Thing as Too Much Information

By STEVE LOHR

INFORMATION overload is a headache for individuals and a huge challenge for businesses. Companies are swimming, if not drowning, in wave after wave of data — from increasingly sophisticated computer tracking of shipments, sales, suppliers and customers, as well as e-mail, Web traffic and social-network comments. These Internet-era technologies, by one estimate, are doubling the quantity of business data every 1.2 years.

Yet the data explosion is also an enormous opportunity. In a modern economy, information should be the prime asset — the raw material of new products and services, smarter decisions, competitive advantage for companies, and greater growth and productivity.

Is there any real evidence of a “data payoff” across the corporate world? It has taken a while, but new research led by Erik Brynjolfsson, an economist at the Sloan School of Management at the Massachusetts Institute of Technology, suggests that the beginnings are now visible.

Mr. Brynjolfsson and his colleagues, Lorin Hitt, a professor at the Wharton School of the University of Pennsylvania, and Heekyung Kim, a graduate student at M.I.T., studied 179 large companies. Those that adopted “data-driven decision making” achieved productivity that was 5 to 6 percent higher than could be explained by other factors, including how much the companies invested in technology, the researchers said.

In the study, based on a survey and follow-up interviews, data-driven decision making was defined not only by collecting data, but also by how it is used — or not — in making crucial decisions, like whether to create a new product or service. The central distinction, according to Mr. Brynjolfsson, is between decisions based mainly on “data and analysis” and on the traditional management arts of “experience and intuition.”

A 5 percent increase in output and productivity, he says, is significant enough to separate winners from losers in most industries.

The companies that are guided by data analysis, Mr. Brynjolfsson says, are “harbingers of a trend in how managers make decisions.”

“And it has huge implications for competitiveness and growth,” he adds.

The research is not yet published, but it was presented at an academic conference this month. The conclusion that companies that rely heavily on data analysis are likely to outperform others is not new. Notably, Thomas H. Davenport, a professor of information technology and management at Babson College, has made that point, and his most recent book, with Jeanne G. Harris and Robert Morison, is “Analytics at Work: Smarter Decisions, Better Results” (Harvard Business Press, 2010).

And companies like Google, whose search and advertising business is based on exploiting and organizing online information, are testimony to the power of intelligent data sifting.

But the new research appears to be broader and to apply economic measurement to the impact of data-led decision making in a way not done before.

“To the best of our knowledge,” Mr. Brynjolfsson says, “this is the first quantitative evidence of the anecdotes we’ve been hearing about.”

Mr. Brynjolfsson emphasizes that the spread of such decision making is just getting started, even though the data surge began at least a decade ago. That pattern is familiar in history. The productivity payoff from a new technology comes only when people adopt new management skills and new ways of working.

The electric motor, for example, was introduced in the early 1880s. But that technology did not generate discernible productivity gains until the 1920s. It took that long for the use of motors to spread, and for businesses to reorganize work around the mass-production assembly line, the efficiency breakthrough of its day.

The story was much the same with computers. By 1987, the personal computer revolution was more than a decade old when Robert M. Solow, an economist and Nobel laureate, dryly observed, “You can see the computer age everywhere but in the productivity statistics.”

It was not until 1995 that productivity in the American economy really started to pick up. The Internet married computing to low-cost communications, opening the door to automating all kinds of commercial transactions. The gains continued through 2004, well after the dot-com bubble burst and investment in technology plummeted.

The technology absorption lag accounts for the delayed productivity benefits, observes Robert J. Gordon, an economist at Northwestern University.

“It’s never pure technology that makes the difference,” Mr. Gordon says. “It’s reorganizing things — how work is done. And technology does allow new forms of organization.”

Since 2004, productivity has slowed again. Historically, Mr. Gordon notes, productivity wanes when innovation based on fundamental new technologies runs out. The steam engine and railroads fueled the first industrial revolution, he says; the second was powered by electricity and the internal combustion engine. The Internet, according to Mr. Gordon, qualifies as the third industrial revolution — but one that will prove far more short-lived than the previous two.

“I think we’re seeing hints that we’re running through inventions of the Internet revolution,” he says.



STILL, the software industry is making a big bet that the data-driven decision making described in Mr. Brynjolfsson’s research is the wave of the future. The drive to help companies find meaningful patterns in the data that engulfs them has created a fast-growing industry in what is known as “business intelligence” or “analytics” software and services. Major technology companies — I.B.M., Oracle, SAP and Microsoft — have collectively spent more than $25 billion buying up specialist companies in the field.

I.B.M. alone says it has spent $14 billion on 25 companies that focus on data analytics. That business now employs 8,000 consultants and 200 mathematicians. I.B.M. said last week that it expected its analytics business to grow to $16 billion by 2015.

“The biggest change facing corporations is the explosion of data,” says David Grossman, a technology analyst at Stifel Nicolaus. “The best business is in helping customers analyze and manage all that data.”  

http://goo.gl/tL0VW




Budget crunch mothballs telescopes built to search for alien signals

By John Matson | Sunday, April 24, 2011

The hunt for extraterrestrial life just lost one of its best tools. The Allen Telescope Array (ATA), a field of radio dishes in rural northern California built to seek out transmissions from distant alien civilizations, has been shuttered, at least temporarily, as its operators scramble to find a way to continue to fund it.

In an April 22 letter to donors, Tom Pierson, CEO of the SETI Institute in Mountain View, Calif., explained that the ATA has been put into "hibernation," meaning that "starting this week, the equipment is unavailable for normal observations and is being maintained in a safe state by a significantly reduced staff." The ATA is a partnership between the SETI Institute, which is responsible for building the telescope array, and the University of California, Berkeley, which is responsible for operating it. Astronomer Franck Marchis, who is affiliated with both institutions, broke the news on his blog April 22.

The search for extraterrestrial intelligence—SETI for short—is hardly fringe science, but the field has not enjoyed the financial support available to disciplines that return more immediate, predictable benefits to society. The nonprofit SETI Institute was founded in 1984 and has mostly relied on private donations to support its research. NASA had bankrolled a number of early SETI Institute projects, but Congress canceled NASA's short-lived SETI program in 1993. 

The plans for the ATA called for a total of 350 individual six-meter radio antennas, all working in concert to detect radio emissions from civilizations that might exist elsewhere in the galaxy. But the array's growth stalled after the first phase of construction in 2007, when 42 dishes were completed at a cost of $50 million. Microsoft co-founder Paul Allen, the telescope array's billionaire namesake, contributed half of that sum, according to the SETI Institute. 

Funding is considerably scarcer now. U.C. Berkeley's Radio Astronomy Laboratory has relied on funds from the National Science Foundation and the state of California to operate the Hat Creek Radio Observatory (HCRO) where the ATA is based, Pierson explained in his letter, and both of those sources have dried up. "NSF University Radio Observatory funding for HCRO has been reduced to approximately one-tenth of its former sum," Pierson wrote. "This is compounded by growing State of California budget shortfalls that have severely reduced the amount of state funds available to the Radio Astronomy Lab." ATA operations cost about $1.5 million per year, Pierson said, and the SETI science campaign at ATA costs another $1 million annually. 

The SETI Institute would like to use the ATA to listen in on any radio waves that might be emanating from the extrasolar planets now being found by NASA's Kepler spacecraft. In February, Kepler scientists announced that they had compiled a list of 1,235 possible planets orbiting distant stars, including several that might be habitable. A current SETI Institute fundraising campaign is now aimed at raising $5 million to conduct a two-year search of Kepler's most promising finds using the ATA, in the hopes that one of those worlds is inhabited by a technological civilization sending out radio waves. 

The ATA is not the only radio telescope facility that can be used for SETI searches, but it is probably the instrument most committed to the task. SETI researchers elsewhere have to borrow time on telescopes where competition for observing time can be fierce or piggyback their searches on other ongoing observations. 

Pierson said that the SETI Institute has been working for more than two years to find a new funding stream, for instance by offering up the ATA's services to the U.S. Air Force to assist in tracking orbital debris that can endanger defense satellites. "We are continuing discussions with the USAF and remain hopeful that this effort will help provide future operating funds," he wrote.

http://goo.gl/VzEKF


Cloud computing and Internet use suck energy, emit CO2, says Greenpeace

April 22, 2011 |  8:00 am

Clicking on all those viral videos, chain emails, celebrity tweets and paparazzi photos online sucks up enough energy to rank the Internet –- if it were a country -– fifth in the world for electricity use.

That’s more power than Russia uses, according to a new report about cloud-computing from Greenpeace.

Computer servers in data centers account for about 2% of global energy demand, growing about 12% a year, according to the group.  The servers, Greenpeace said, can suck up as much power as 50,000 average U.S. homes.
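Those two Greenpeace figures can be projected forward. Under the simplifying assumptions that the 12 per cent annual growth continues and total global demand stays flat, the share compounds like this:

```python
# Projecting the data-centre share of global electricity demand under
# two simplifying assumptions: 12% annual growth continues, and total
# global demand stays flat.
share = 0.02          # ~2% of global demand today (per Greenpeace)
for _ in range(10):   # ten years out
    share *= 1.12
print(round(share * 100, 1))  # ~6.2% after a decade, on these assumptions
```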

But most of what powers the cloud comes from coal and nuclear energy rather than renewable sources such as wind and solar, according to Greenpeace. Clusters of data centers are emerging in places like the Midwest, where coal-powered electricity is cheap and plentiful, the group said.

In its report, the organization zeroed in on 10 major tech companies, including Apple, Twitter and Amazon. Recently, the group has waged a feisty fight against Facebook, which relies on coal for 53.2% of its electricity, according to Greenpeace.

Many companies, the organization said, tightly guard data about the environmental impact and energy consumption of their IT operations. They also focus more on using energy efficiently than on sourcing it cleanly, Greenpeace said.

Yahoo landed bonus points for siting facilities near clean energy hot spots and using coal-based power for just 18.3% of its portfolio. Google got love for its extensive support of wind and solar projects and for creating a subsidiary, Google Energy, that can buy electricity directly from independent renewable power producers.

In 2005, data centers in the U.S. housed some 10.3 million servers, gobbling up enough energy to power all of Britain for two months, according to Internet marketing company WordStream.

Each month, electricity used to power searches on Google produces 260,000 kilograms of carbon dioxide and is enough to power a freezer for 5,400 years, according to WordStream. The searches use up 3.9 million kilowatt-hours -– the equivalent of 5 million loads of laundry.

Each of the 62 trillion spam emails sent annually creates 0.3 grams of carbon dioxide. A Google search for “Soylent Green” spawns the same amount as driving a car three inches.
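The two WordStream figures together imply a carbon intensity for the electricity involved; the quick division below is a consistency check, not a number from the report:

```python
# Consistency check on the WordStream numbers: implied carbon
# intensity of the electricity behind Google searches.
co2_kg_per_month = 260_000
kwh_per_month = 3_900_000
g_per_kwh = co2_kg_per_month * 1000 / kwh_per_month
print(round(g_per_kwh))  # about 67 g of CO2 per kWh on these figures
```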

http://goo.gl/nAxFw



Controlling Prosthetic Limbs with Electrode Arrays

A new nerve-cell-support design could give amputees better control over prosthetic limbs.

By Nidhi Subbaraman

To design prosthetic limbs with motor control and a sense of touch, researchers have been looking at ways to connect electrodes to nerve endings on the arm or leg and then to translate signals from those nerves into electrical instructions for moving the mechanical limb. However, severed nerve cells on an amputated limb can only grow if a structure is present to support them—much the way a trellis supports a growing vine. And they are notoriously fussy about the shape and size of that structure.

"Cells are like people: they like furniture to sit in that's just the right size," says David Martin, a biomedical engineer at the University of Delaware. "They're looking for a channel that's got the 'Goldilocks'-length scale to it—how far apart the ridges are, how tall they are, how [wide] they are."

Ravi Bellamkonda's lab at Georgia Tech has designed a tubular support scaffold with tiny channels that fit snugly around bundles of nerve cells. The group recently tested the structure with dorsal root ganglion cells and presented the results at the Society for Biomaterials conference earlier this month.

The scaffold begins as a flat sheet with tiny grooves, similar to corrugated iron or cardboard. It is then rolled to form a porous cylinder with many tiny channels suited for healthy nerve-cell growth. The floors of the conduits double as electrodes, brushing up close to the nerve bundles and picking up nerve signals. "The thing that's different is that the patterns can be much more precisely controlled, and the orientation of the nerve bundles is essentially perfect here," says Martin. "It's a nice model system, and the ability to control nerve growth is what's really going to be valuable."

The ultimate goal is to enable two-way communication between the prosthetic limb and the wearer. Eventually, this design could separate the two kinds of nerve cells within a bundle, so neural cues directing hand movement would travel along one channel and information about touch and temperature from the prosthetic limb would travel to the brain along another channel. "The 'jellyroll' should in principle allow [them] to select through those channels—that to me is where the real excitement is," says Martin. "That's news for the future, but you've got to be able to walk before you can run."

In previous attempts to tap into neural signals, scientists have fitted severed nerve cells with "sieve electrodes"—flat metal disks with holes intended for nerves to grow through. "The problem with the sieve electrode is that the nerves wouldn't grow into it reliably," says Bellamkonda.

Current work on growing aligned nerve bundles includes foam supports with pores suited for nerve growth, and fabrics with aligned nanofibers along which nerves are intended to grow. But the jellyroll design has the potential to be a cut above the rest.

The multichannel scaffold could give added dexterity to prosthetic limbs. "You need to be able to stimulate as many axons as possible for movement, and you need to be able to pick up signals from as many axons as possible," says Akhil Srinivasan, primary researcher on the project. The most sophisticated of the electrodes currently used at nerve endings have about 16 channels to control movement. But the arm has 22 degrees of freedom. "You need at least 22 reliable channels," says  Mario Romero-Ortega, associate professor of bioengineering at the University of Texas, Arlington. "That's the limitation—we only have a few, but you need more."

"The novelty, from my perspective, is the materials they use [are ones they can] scale up," Romero-Ortega says. The electrode-roll design builds on previous work, but the new scaffold is made of materials that are safe for biological use. "They're the first to show in vitro growth," Romero-Ortega says.

To make the microarrays, a coat of the polymer polydimethyl siloxane is laid down on a glass slide to create a thin, uniform base, and a layer of a light-sensitive polymer, SU-8, is added. Ultraviolet light is shined on the SU-8 through a grating, and the parts of the surface exposed to the light bond together to form walls. The unbonded sections in between are then washed away, leaving behind row upon row of conduits. The grooved surface is capped with a second layer of base polymer, and the polymer sandwich is rolled into a cylinder.

So far, the rolled-up microarray still lacks electrodes, but Srinivasan says the next steps will be to insert gold electrodes into the base of the scaffold. The wired microarray will then be tested in a rat model.

"I think it's a clever design," says Dominique Durand, a professor of biomedical engineering at Case Western Reserve University. "They still haven't shown the electrodes, but that's a problem for another day."

http://goo.gl/Fak73

Monday, April 25, 2011


‘Time machine’ allows visual exploration of space and time

April 25, 2011 

Researchers at Carnegie Mellon University’s Robotics Institute have leveraged the latest browser technology to create GigaPan Time Machine.

The system enables viewers to explore gigapixel-scale, high-resolution videos and image sequences by panning or zooming in and out of the images while simultaneously moving back and forth through time.

Viewers, for instance, can use the system to watch some plants move wildly as they grow while others get eaten by caterpillars, or view a computer simulation of the early universe as gravity works across 600 million light-years to condense matter into filaments and finally into stars that can be seen by zooming in for a closeup.

The system is an extension of the GigaPan technology developed by the CREATE Lab and NASA, which can capture a mosaic of thousands of digital pictures and stitch those frames into a panorama that can be interactively explored via computer. To extend GigaPan into the time dimension, image mosaics are repeatedly captured at set intervals, then stitched across both space and time to create a video in which each frame can contain hundreds of millions, or even billions, of pixels.

Using HTML5, CREATE Lab computer scientists have developed algorithms and software architecture that make it possible to shift seamlessly from one video portion to another as viewers zoom in and out of Time Machine imagery. To keep bandwidth manageable, the GigaPan site streams only those video fragments that pertain to the segment and/or time frame being viewed.
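The bandwidth-saving idea, streaming only the fragments a viewer can see, resembles standard map-style tiling. A hypothetical sketch (the tile size and function name are illustrative, not from the GigaPan code):

```python
# Hypothetical sketch of map-style tiling: cut each gigapixel frame
# into fixed-size tiles and fetch only those overlapping the viewport.
TILE = 256  # tile edge length in pixels (assumed)

def visible_tiles(x, y, width, height):
    """Return (col, row) indices of tiles overlapping a viewport whose
    top-left corner is (x, y), in frame pixel coordinates."""
    first_col, first_row = x // TILE, y // TILE
    last_col = (x + width - 1) // TILE
    last_row = (y + height - 1) // TILE
    return [(col, row)
            for row in range(first_row, last_row + 1)
            for col in range(first_col, last_col + 1)]

print(len(visible_tiles(0, 0, 1024, 768)))  # a 1024x768 view touches 12 tiles
```

Panning or zooming changes only the viewport rectangle, so the client re-runs the same lookup and fetches just the newly exposed tiles.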

Source: http://goo.gl/G5QbZ


Elon Musk: I’ll put a man on Mars in 10 years

April 25, 2011 

SpaceX will launch a rocket into orbit in three years and will “go all the way to Mars” in 10 to 20 years, SpaceX CEO Elon Musk told The Wall Street Journal’s MarketWatch on Friday.

The statement follows a SpaceX announcement last week that NASA has awarded the company $75 million to develop a revolutionary launch escape system that will enable the company’s Dragon spacecraft to carry astronauts.

Source: http://goo.gl/l4mhf


Neuron migration in the brain suggests how cancer cells might also travel

April 25, 2011 

Researchers at Fred Hutchinson Cancer Research Center have found a new mechanism by which neurons migrate in the developing brain, suggesting how other types of cells, including cancer cells, may also travel within the body in metastasis.

New neurons initially move in a straight line, from the inside to the outside, until they reach a layer called the intermediate zone in the cortex. This zone contains relatively few neurons but many connecting fibers, or axons. When new neurons reach this layer, they lose their way and start wandering — up, down, left and right, frequently changing direction.

When, seemingly by chance, neurons emerge from the intermediate zone, they realign with their original direction of movement and speed ahead through layers of differentiated neurons towards the outer surface of the cortex.

The researchers aimed to determine how neurons get back on track after they emerge from the chaos of the intermediate zone. They identified a signaling protein, called Reelin, which is made by cells in the outermost layer of the cortex. It has been known that mutations in the Reelin gene cause profound cortical layering abnormalities in rodents and people, but it has been unclear which stage of neuron migration goes awry when Reelin is absent.

The researchers showed that new neurons respond to Reelin as they emerge from the intermediate zone. They also showed that a membrane protein called N-cadherin increases on the surface of neurons when the neurons encounter Reelin. The surface increase in N-cadherin allows the cell to choose the appropriate direction for its next stage of migration.

“The new role for N-cadherin in orienting migrating cells is quite unexpected and suggests that cadherins on the surface of other types of normal or cancer cells may also be involved in helping them move rather than stay in place,” the researchers suggest.

Tracking the life cycle of RNA molecules to detect cancer

In a related study, scientists at the Broad Institute have developed an approach that offers many windows into the life cycle of RNA molecules that will enable other scientists to investigate what happens when something in a cell goes wrong.

The scientists developed a method that allows them to tease apart the different stages of this life cycle by measuring how much messenger RNA (mRNA) is produced and how much is degraded. The balance of these two processes contributes to the changes seen in RNA levels in a cell over time, much the way that birth and death rates contribute to a country’s total population.
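The production-versus-degradation balance the researchers measure can be written as a simple birth-and-death model, in which the mRNA level rises with a constant production rate and falls in proportion to how much is present. The rate constants below are illustrative placeholders, not measurements from the study.

```python
# Toy birth-death model of mRNA abundance: dR/dt = production - degradation * R.
# Production adds molecules at a constant rate; degradation removes a fixed
# fraction per unit time, like birth and death rates in a population.

def simulate_mrna(production=10.0, degradation=0.5, r0=0.0, dt=0.01, t_end=20.0):
    """Euler integration of dR/dt = production - degradation * R."""
    r, t = r0, 0.0
    while t < t_end:
        r += (production - degradation * r) * dt
        t += dt
    return r

# At steady state the two processes balance: R* = production / degradation.
print(round(simulate_mrna(), 2))  # approaches 10 / 0.5 = 20
```

The point of the sequencing method is precisely that measuring RNA levels alone cannot distinguish the two rates: many (production, degradation) pairs give the same steady state, so both have to be measured separately.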

The scientists harnessed an existing technique to trace the fate of newly produced RNA and paired it with a new sequencing-based technology that counts molecules of mRNA. The results also gave the researchers a view of some of the in-between steps, during which mRNA is edited or processed — an unexpected but serendipitous finding.

The researchers were able to take “snapshots” of RNA levels over very short time intervals. Strung together, these snapshots reveal not only how the amount of RNA changes, but also the short-lived, intermediate phases of the RNA life cycle that are otherwise impossible to detect.

One critical application of the new method is in following up on leads from disease studies, such as mutated genes in cancer or other diseases that impact the RNA life cycle, the scientists said.

Source: http://goo.gl/jKHta



Can Hobbyists and Hackers Transform Biotechnology?

In his new book Biopunk: DIY Scientists Hack the Software of Life, Marcus Wohlsen explores the new movement in garage-based biotech.

By Amanda Gefter

For most of us, managing our health means visiting a doctor. The more serious our concerns, the more specialized a medical expert we seek. Our bodies often feel like foreign and frightening lands, and we are happy to let someone with an MD serve as our tour guide. For most of us, our own DNA never makes it onto our personal reading list.

Biohackers are on a mission to change all that. These do-it-yourself biology hobbyists want to bring biotechnology out of institutional labs and into our homes. Following in the footsteps of revolutionaries like Steve Jobs and Steve Wozniak, who built the first Apple computer in Jobs's garage, and Sergey Brin and Larry Page, who invented Google in a friend's garage, biohackers are attempting bold feats of genetic engineering, drug development, and biotech research in makeshift home laboratories.

In Biopunk, journalist Marcus Wohlsen surveys the rising tide of the biohacker movement, which has been made possible by a convergence of better and cheaper technologies. For a few hundred dollars, anyone can send some spit to a sequencing company and receive a complete DNA scan, and then use free software to analyze the results. Custom-made DNA can be mail-ordered off websites, and affordable biotech gear is available on Craigslist and eBay.

Wohlsen discovers that biohackers, like the open-source programmers and software hackers who came before, are united by a profound idealism. They believe in the power of individuals as opposed to corporate interests, in the wisdom of crowds as opposed to the single-mindedness of experts, and in the incentive to do good for the world as opposed to the need to turn a profit. Suspicious of scientific elitism and inspired by the success of open-source computing, the bio DIYers believe that individuals have a fundamental right to biological information, that spreading the tools of biotech to the masses will accelerate the pace of progress, and that the fruits of the biosciences should be delivered into the hands of the people who need them the most.

With all their ingenuity and idealism, it's difficult not to root for the biohackers Wohlsen meets. Take MIT grad student Kay Aull, who built her own genetic testing kit in her closet after her father was diagnosed with the hereditary disease hemochromatosis. "Aull's test does not represent new science but a new way of doing science," Wohlsen writes. Aull's self-test for the disease-causing mutation came back positive.

Or take Meredith Patterson, who is trying to create a cheap, decentralized way to test milk for melamine poisoning without relying on government regulators. Patterson has written a "Biopunk Manifesto" that reads in part, "Scientific literacy empowers everyone who possesses it to be active contributors to their own health care, the quality of their food, water and air, their very interactions with their own bodies and the complex world around them."

Biohackers Josh Perfetto and Tito Jankowski created OpenPCR, a cheap, hackable DNA Xerox machine (PCR stands for "polymerase chain reaction," the name for a method of replicating DNA). Interested biohackers can pre-order one for just over $500 or, once it's ready, download the blueprint free and make their own. According to the website, its apps include DNA sequencing and a test to "check that sushi is legit." Jankowski "hopes to introduce young people to the tools and techniques of biotech in a way that makes gene tweaking as much a part of everyday technology as texting," Wohlsen writes. Jankowski, together with Joseph Jackson and Eri Gentry, also founded BioCurious, a collaborative lab space for biohackers in the Bay Area. "Got an idea for a startup? Join the DIY, 'garage biology' movement and found a new breed of biotech," their website exhorts.
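What a machine like OpenPCR automates is conceptually simple: cycle the sample through textbook temperature steps, with the target DNA roughly doubling each cycle. The temperatures below are the standard textbook values; the cycle count and efficiency figure are illustrative, not OpenPCR specifications.

```python
# Sketch of the PCR idea: repeat a three-step thermal cycle, and the amount
# of target DNA grows exponentially with the number of cycles.

CYCLE = [("denature", 95), ("anneal", 55), ("extend", 72)]  # degrees Celsius

def amplify(copies, cycles, efficiency=1.0):
    """Return the copy number after the given number of PCR cycles.

    efficiency is the fraction of templates successfully duplicated per
    cycle (1.0 = perfect doubling; real reactions fall somewhat short).
    """
    for _ in range(cycles):
        copies *= 1 + efficiency
    return copies

# 30 perfect cycles turn one template molecule into about a billion copies:
print(f"{amplify(1, 30):,.0f}")  # 2**30 = 1,073,741,824
```

That exponential gain is why PCR is called a "DNA Xerox machine": a trace amount of DNA becomes enough to sequence or test.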

Then there's Andrew Hessel, a biohacker fed up with the biotech business model, which he believes is built on the hoarding of intellectual property and leads companies to prioritize one-size-fits-all blockbuster drugs. "During the sixty years or so that computers went from a roomful of vacuum tubes to iPhones, the pace of drug development has never quickened," Hessel tells Wohlsen. Hoping to change that, Hessel is developing the first DIY drug development company, the Pink Army Cooperative, whose goal is to bioengineer custom-made viruses that will battle breast cancer. "Personalized therapies made just for you. In weeks or days, not years. Believe it. It's time for a revolution," the company's website proclaims. "We are trying to be the Linux of cancer," Hessel explains.

Of course, some of these possibilities are frightening. If biohackers can engineer organisms to cure diseases, surely they can engineer organisms to inflict them. Wohlsen, however, isn't overly concerned. The technology just isn't in place for biohackers to bioengineer weapons worth worrying about, he says. Not only is genetic engineering unnecessary to commit acts of bioterror, he writes, but it's also much more complex than other options available for manufacturing biotoxins. In fact, the FBI has expressed interest in using DIY biohackers as "sentries on biosecurity's front lines."

And yet, writes Wohlsen, the biohackers have yet to produce any truly novel results, and he isn't convinced that they will. "They are not about to cure cancer when an eleven-thousand-employee, $80 billion company like Genentech has so far failed. They are not going to unleash the world's first artificial amoeba tomorrow or graft wings onto house cats," he writes. "The real significance of DIY biotechnologists might lie not in any particular technological achievement but in the provocative questions they raise."

Wohlsen, while sympathetic to the biohackers' ideals, remains neutral about the merits of their activities. He offers few opinions of his own but raises the questions we need to begin asking: What is the value of expertise relative to the wisdom of crowds? Do intellectual property laws further or slow scientific progress? Should access to information about our own bodies be held as a basic human right? How much regulatory oversight is warranted when it comes to tinkering with life? And, ultimately, should just anyone be able to do science?

Personally, I'd still rather have a physician in charge of my health than tinker with it myself using partial knowledge and makeshift tools. But it's fun to know that the latter is possible. I won't hold my breath waiting for someone to cure cancer in his or her garage, but I am glad to know people are out there trying—and it will be profoundly cool if they succeed.

Source: http://goo.gl/pDo3t




Fog harvesting for water

April 25, 2011 

MIT researcher Shreerang Chhatre and associates have developed new ways to use “fog harvesting” to provide water to the world’s poor.

A fog-harvesting device consists of a fence-like mesh panel, which attracts droplets, connected to receptacles into which water drips. To build larger fog harvesters, researchers generally use mesh rather than a solid surface, because a completely impermeable object creates wind currents that drag water droplets away from it.

In some field tests, fog harvesters have captured one liter of water (roughly a quart) per one square meter of mesh, per day. Chhatre is conducting laboratory tests to improve the water collection ability of existing meshes.
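The field-test figure above makes the sizing arithmetic easy to sketch. The daily household water need used below is an assumed illustrative number, not one from the article.

```python
# Back-of-envelope sizing from the article's field-test capture rate of
# about 1 litre of water per square metre of mesh per day.

YIELD_L_PER_M2_DAY = 1.0   # capture rate reported in field tests
DAILY_NEED_L = 50.0        # assumed drinking/cooking need for a small household

mesh_area_m2 = DAILY_NEED_L / YIELD_L_PER_M2_DAY
print(f"{mesh_area_m2:.0f} m^2 of mesh")  # 50 m^2
```

A 50-square-metre panel is large but buildable, which is why improving the per-square-metre yield of the mesh, as Chhatre's lab tests aim to do, matters so much.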

Source: http://goo.gl/ZllyA




Japanese robots await call to action

Kyodo

Japanese robots designed for heavy lifting and data collection have been prepared for deployment at the irradiated reactor buildings of the Fukushima No. 1 nuclear power station, where remotely controlled U.S.-made robots have already taken radiation and temperature readings and captured visual images of the crippled facility.

At the request of the Ministry of Economy, Trade and Industry, Tmsuk Co., a robot builder based in Munakata, Fukuoka Prefecture, has put its rescue robot T-53 Enryu on standby at a dedicated facility in Tsukuba, Ibaraki Prefecture, about 170 km southwest of the power plant in Fukushima Prefecture devastated by the March 11 magnitude-9.0 quake and tsunami.

Enryu (rescue dragon) was developed in the aftermath of the magnitude-7.3 Great Hanshin Earthquake that hit the Kobe area in 1995. Designed to engage in rescue work, the remote-controlled robot has two arms that can lift objects up to 100 kg. It has "undergone training" at the Kitakyushu municipal fire department in Fukuoka Prefecture.

Tmsuk President Yoichi Takamoto said, "We don't know what we can do at a nuclear power plant until we give it a try, but we do believe we can do something about removing rubble" from explosions that have blocked human operations around the plant.

Satoshi Tadokoro, a Tohoku University professor specializing in robots used for disaster operations, said, "Japan doesn't have any military-use robots, but it has technology on a par with the United States."

Tadokoro said a plan is under way to employ at the power plant a highly mobile research robot that he was involved in developing.

In early April, the Robotics Society of Japan and other related organizations jointly set up a task force and sent engineers to the government's project team that is brainstorming with Tokyo Electric Power Co. about how robots may be used at the plant.

But given the urgency of the mission and circumstances, European and U.S.-built robots with a proven track record in military use and nuclear plant accidents have drawn attention.

A pair of PackBots from iRobot Corp. of the United States entered the buildings of reactors 1 to 3 Sunday and Monday to take video footage and check radiation levels, temperatures, oxygen concentrations and other data inside.

A robotics industry source expressed frustration about the absence of Japanese robots in the initial crisis response at Fukushima No. 1. "We hope to obtain for Japanese manufacturers critical data that may be acquired only through operating machines at a site and use them for robot development," the source said.

Nuclear power plant builders Toshiba Corp., Hitachi Ltd. and Mitsubishi Heavy Industries Ltd. have promoted development of robots for use in accidents at atomic plants since a major accident at a nuclear fuel processing plant in Tokai, Ibaraki Prefecture, in 1999 that claimed the lives of two people and exposed hundreds of others to radioactive materials.

The central government initially contributed ¥3 billion in subsidies for the robot project, but the funding did not last long and development was halted before any units were perfected for actual use.

An official of the Manufacturing Science and Technology Center, which was in charge of the development at that time, said, "There was a strong sense among us that those types of robot would never have a real-life chance to flex their muscles."

Some prototype robots developed in the process have been put on display at Sendai Science Museum. A museum employee said of the halted development initiative, "It was like stopping premium payments for a nonrefundable insurance policy."

While the Enryu is ready for its mission to remove rubble at the stricken plant, the biggest challenge is combating the spread of radiation.

University of Tokyo professor Hajime Asama said, "Mobilizing a robot without any consideration (for radiation) could complicate the situation and may even hinder work."

A robot's ability to work in a highly radioactive environment should be checked beforehand and, if need be, it should be fitted with lead shielding against radiation, he said.

Source: http://goo.gl/3K8mC
