QUANTA

Saturday, July 30, 2011


Inside the Mind of Microsoft’s Chief Futurist (Interview)

If I encountered Craig Mundie on the street, met his kind but humorless gaze and heard that slight southern drawl, I'd guess he was a golf pro—certainly not Microsoft's Chief of the future.

As chief research and strategy officer at Microsoft, Mundie is a living portal of future technology, a focal point between thousands of scattered research projects and the boxes of super-neat products we'll be playing with 5 years, 20 years, maybe 100 years from now. And he's not even allowed to think about anything shipping within the next 3 years. I'm pretty sure the guy has his own personal teleporter and hoverboard, but when you sit and talk to him for an hour about his ability to see tomorrow, it's all very matter of fact. So what did we talk about? Quantum computing did come up, as did neural control, retinal implants, Windows-in-the-cloud, multitouch patents and the suspension of disbelief in interface design.

Seeing the Future
Your job is to look not at next year, or even the next five years. Is there a specific number of years you're supposed to be focused on?

I tell people it ranges from about 3 to 20. There's no specific year that's the right amount, in part because the things we do in Research start at the physics level and work their way up. The closer you are to fundamental change in the computing ecosystem, the longer that lead time is.

When you say 3 years, you're talking about new UIs and when you say 20 you're talking about what, holographic computing?

Yeah, or quantum computing or new models of computation, completely different ways of writing programs, things where we don't know the answer today, and it would take some considerable time to merge it into the ecosystem.

So how do you organize your thoughts?

I don't try to sort by time. Time is a by-product of the specific task that we seek to solve. Once it became clear that we were ultimately going to have to change the microprocessor architecture, even before we knew exactly what it would evolve to be from the hardware guys, we knew the chips would be parallel in nature, that there'd be more serial interconnections, that you'd have a different memory hierarchy. From roughly the time we started to the time those things become commonplace in the marketplace will be 10 to 12 years.

Most people don't really realize how long it takes from when you can see the glimmer of things that are big changes in the industry to when they actually show up on store shelves.

Is it hard for you to look at things that far out?

[Chuckles] No, not really. One of the things I think is sort of a gift or a talent that I have, and I think Bill Gates had to some significant degree too, is to assimilate a lot of information from many sources, and your brain tends to work in a way where you integrate it and have an opinion about it. I see all these things and have enough experience that I say, OK, I think that this must be going to happen. Your ability to say exactly when or exactly how isn't all that good, but at least you get a directional statement.

When you look towards the future, there's the inevitability of scientific advancement, and then there's your direction, your steering. How do you reconcile those two currents?

There are thousands of people around the world who do research in one form or another. There's a steady flow of ideas that people are advancing. The problem is, each one doesn't typically represent something that will redefine the industry.

So the first problem is to integrate across these things and say, are there some set of these when taken together, the whole is greater than the sum of the parts? The second is to say, by our investment, either in research or development, how can we steer the industry or the consumer towards the use of these things in a novel way? That's where you create differentiated products.

Interface Design and the Suspension of Disbelief
In natural interface and natural interaction, how much is computing power, how much is sociological study and how much is simply Pixar-style animation?

It's a little bit of all of them. When you look at Pixar animation, something you couldn't do in real time in the past, or if you just look at the video games we have today, the character realism, the scene realism, can be very, very good. What that teaches us is that if you have enough compute power, you can make pictures that are almost indistinguishable from real life.

On the other hand, when you're trying to create a computer program that maintains the essence of human-to-human interaction, then many of the historical fields of psychology, people who study human interaction and reasoning, these have to come to the fore. How do you make a model of a person that retains enough essential attributes that people suspend disbelief?

When you go to the movies, what's the goal of the director and the actors? They're trying to get you to suspend disbelief. You know that those aren't real people. You know the Starship Enterprise isn't out there flying around—

Don't tell our readers that!

[Grins] Not yet at least. But you suspend disbelief. Today we don't have that when people interact with the computer. We aren't yet trying to get people to think they're someplace else. People explore around the edges of this with things like Second Life. But there you're really putting a representative of yourself into another world that you know is a make-believe environment. I think the question is, can we use these tools of cinematography, of human psychology, of high-quality rendering to create an experience that feels completely natural, to the point that you suspend disbelief—that you're dealing with the machine just as if you were dealing with another person?

So the third component is just raw computing, right?

As computers get more powerful, two things happen. Each component of the interaction model can be refined for better and better realism. Speech becomes more articulate, character images become more lifelike, movements become more natural, recognition of language becomes more complete. Each of those drives a requirement for more computing power.

But it's the union of these that creates the natural suspension of disbelief, something you don't get if you're only dealing with one of these modalities of interaction. You need more and more computing, not only to make each element better, but to integrate across them in better ways.

When it comes to solving problems, when do you not just say, "Let's throw more computing power at it"?

That actually isn't that hard to decide. On any given day, a given amount of computing costs a given amount of money. You can't require a million dollars' worth of computer if you want to put it on everybody's desk. What we're really doing is looking at computer evolutions and the improvements in algorithms, and recognizing that those two things eventually bring new problem classes within the bounds of an acceptable price.

So even within hypothetical research, price is still a factor?

It's absolutely a consideration. We can spend a lot more on the computing to do the research, because we know that while we're finishing research and converting it into a product, there's a continuing reduction in cost. But trying to jockey between those two things and come out at the right place and the right time, that's part of the art form.

Hardware Revolutions, Software Evolutions
Is there some sort of timeline where we're going to shift away from silicon chips?

That's really a question you should ask Intel or AMD or someone else. We aren't trying to do the basic semiconductor research. The closest we get is some of the work we're doing with universities exploring quantum computers, and that's a very long term thing. And even there, a lot of work is with gallium arsenide crystals, not exactly silicon, but a silicon-like material.

Is that the same for flexible screens or non-moving carbon-fiber speakers that work like lightning—are these things you track, but don't research?

They're all things that we track because, in one form or another, they represent the computer, the storage system, the communication system or the human-interaction capabilities. One of the things that Microsoft does at its core is provide an abstraction in the programming models, the tools that allow the introduction of new technologies.

When you talk about this "abstraction," do you mean something like the touch interface in Windows 7, which works with new and different kinds of touchscreens?

Yeah, there are a lot of different ways to make touch happen. The Surface products detect it using cameras. You can have big touch panels that have capacitance overlays or resistive overlays. The TouchSmart that HP makes is actually optical.

The person who writes the touch application just wants to know, "Hey, did he touch it?" He doesn't want to have to write the program six times today and eight times tomorrow for each different way in which someone can detect the touch. What we do is we work with the companies to try to figure out what is the abstraction of this basic notion. What do you have to detect? And what is the right way to represent that to the programmer so they don't have to track every activity, or even worse, know whether it was an optical detector, a capacitive detector or an infrared detector? They just want to know that the guy touched the screen.
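To make that abstraction concrete, here is a minimal sketch in TypeScript (invented names, not Microsoft's or Windows 7's actual touch API): each detection technology is wrapped in a driver that reports the same simple event, so the application only ever asks one question, "did he touch it?"

// A minimal sketch with invented names; not Microsoft's actual touch API.
// Every detection technology is adapted to one common event shape,
// and applications are written once against that shape.

interface TouchPoint {
  x: number;        // screen coordinates of the contact
  y: number;
  pressed: boolean; // true on touch-down, false on release
}

// Each sensing technology -- optical, capacitive, resistive, infrared --
// supplies a driver that translates its raw signal into TouchPoint events.
interface TouchDriver {
  poll(): TouchPoint[];
}

class OpticalTouchDriver implements TouchDriver {
  poll(): TouchPoint[] {
    // ...read camera frames, find contact blobs, map them to screen space...
    return [];
  }
}

class CapacitiveTouchDriver implements TouchDriver {
  poll(): TouchPoint[] {
    // ...read the capacitance grid and report contact points...
    return [];
  }
}

// The application only asks "did he touch it?" -- it never learns
// which kind of detector produced the event.
function runApp(driver: TouchDriver, onTouch: (p: TouchPoint) => void): void {
  for (const point of driver.poll()) {
    onTouch(point);
  }
}

Written this way, the application runs unchanged whether the screen underneath is a camera-based Surface, a capacitive overlay or HP's optical TouchSmart; only the driver changes.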

Patents and Inventor's Rights
You guys recently crossed the 10,000-patent line—is that all from your Research division?

No, that's from the whole company. Every year we make a budget for investment in patent development in all the different business groups including Research. They all go and look for the best ideas they've got, and file patents within their areas of specialization. It's done everywhere in the company.

So, take multitouch, something whose patents have been discussed lately. When it comes to inevitability vs. unique product development, how much is something like multitouch simply inevitable? How much can a single company own something that seems so generally accepted in interface design?

The goal of the patent system is to protect novel inventions. The whole process is supposed to weed out things that are already known, things that have already been done. That process isn't perfect—sometimes people get patents on things that they shouldn't, and sometimes they're denied patents on things they probably should get—but on balance you get the desired result.

If you can't identify in the specific claims of a particular patent what is novel, then you don't get a patent. Just writing a description of something—even if you're the first person to write it down—doesn't qualify as invention if it's already obvious to other people. You have to trust that somehow obvious things aren't going to be withheld from everybody.

That makes sense. We like to look at patents to get an idea of what's coming next—

That's what they were intended to do; that was the deal with the inventor: If you'll share your inventions with the public in the spirit of sharing knowledge, then we'll give you some protection in the use of that invention for a period of time. You're rewarded for doing it, but you don't sequester the knowledge. It's that tradeoff that actually makes the patent system work.

Windows in the Cloud, Lasers in the Retina
Let's get some quick forecasts. How soon until we see Windows in the cloud? I turn on my computer, and even my operating system exists somewhere else.

That's technologically possible, but I don't think it's going to be commonplace. We tend to believe the world is trending towards cloud plus client, not timeshared mainframe and dumb display. The amount of intrinsic computing capability in all these client devices—whether they're phones, cars, game consoles, televisions or computers—is so large, and still growing exponentially, that the bulk of the world's computing power is always going to be in the client devices. The idea that the programmers of the world would let that lie fallow, wouldn't try to get any value out of it, isn't going to happen.

What you really want to do is figure out which component is best solved in the shared facility and which component is best computed locally. We do think that people will want to write arbitrary applications in the cloud. We just don't think that's going to be the predominant use of it. It's not like the whole concept of computing is going to be sucked back up the wire and put in some giant computing utility.
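As a rough illustration of that split, here is a hypothetical TypeScript sketch; the service URL, the task fields and the cost threshold are all invented for the example, not a description of any Microsoft service. Work that depends on shared data or is too heavy for the device goes to the shared facility, and everything else runs on the client.

// A hypothetical sketch of "cloud plus client" task placement.
// The endpoint and the 5-second threshold are invented for illustration.

interface Task {
  name: string;
  needsSharedData: boolean; // must see data that lives in the shared facility
  estimatedLocalMs: number; // rough cost of running it on this device
}

const CLOUD_URL = "https://cloud.example.com/compute"; // placeholder endpoint

async function run(task: Task): Promise<string> {
  // Work that depends on shared data, or that would overwhelm the client,
  // is best solved in the shared facility.
  if (task.needsSharedData || task.estimatedLocalMs > 5_000) {
    const response = await fetch(`${CLOUD_URL}/${task.name}`);
    return response.text();
  }
  // Everything else uses the large (and growing) computing capability
  // already sitting in the client device.
  return computeLocally(task);
}

function computeLocally(task: Task): string {
  return `${task.name} computed on the client`;
}

In practice the placement decision would also weigh connectivity, privacy and cost; the point is only that the client's computing power keeps doing real work rather than lying fallow.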

What happens when the processors are inside our heads and the displays are projected on the inside of our eyeballs?

It'll be interesting to see how that evolution will take place. It's clear that embedding computing inside people is starting to happen fairly regularly. These are special processors, not general processors. But there are now cochlear implants, and people are even exploring ways to give those who've lost their sight some kind of vision, or a way to detect light.

But I don't think you are going to end up with some nanoprojector trying to scribble on your retina. To the extent that you could posit that you're going to get to that level, you might even bypass that and say, "Fine, let me just go into the visual cortex directly." It's hard to know how the man-machine interface will evolve, but I do know that the physiology of it is possible and the electronics of it are becoming possible. Who knows how long it will take? But I certainly think that day will come.

And neural control of our environment? There's already a Star Wars toy that uses brain waves to control a ball—

Yeah, it's been quite a few years since I saw some of the first demos inside Microsoft Research where people would have a couple of electrical sensors on their skull to detect enough brain wave activity to do simple things like turning a light switch on and off reliably. And again, these are not invasive techniques.

You'll see the evolution of this come from the evolution of diagnostic equipment in medicine. As people learn more about non-invasive monitoring for medical purposes, what gets created as a byproduct is non-invasive sensing that people can use for other things. Clearly the people who will benefit first are people with physical disabilities—you want to give them a better interface than just eye-tracking on screens and keyboards. But each of these things is a godsend, and I certainly think that evolution will continue.

I wonder what your dream diary must look like—must have some crazy concepts.

I don't know, I just wake up some mornings and say, yeah, there's a new idea.

Really? Just jot it down and run with it?

Yeah, that's oftentimes the way it is. Just, wasn't there yesterday, it's there today. You know, you just start thinking about it.

Source: http://goo.gl/LnPyb

