As Microsoft took the wraps off two dozen research and development projects at an open day at its Cambridge UK research lab it revealed two things: the amount of information in the world is growing exponentially, and the world’s capacity to absorb, store and use it productively is approaching a limit.
Microsoft spends around $9.5bn on R&D. It has pure R&D labs in Redmond; Cambridge, UK; Cambridge, Massachusetts; Bangalore; and China, with satellite labs in places such as Aachen, Barcelona, Egypt and Israel.
Research in the UK focuses on computational science, computer-mediated living, innovation development, constraint reasoning, machine learning and perception, online services and advertising, programming principles and tools, and systems and networking.
Ever since the iPhone crashed AT&T’s network, it has become commonly accepted that mobile data, or rather data transmitted over the air, is going to grow. A lot.
Networks are already straining to cope. But that’s not the big problem, says Andrew Herbert, director of Microsoft’s Cambridge Lab. The real problem, he says, is latency.
Latency is the delay between sending a signal and receiving an acknowledgment of receipt. The time it takes light to circumnavigate the Earth, about 133 milliseconds, is a fundamental constraint on network speed.
In practice, latency delays are much longer, thanks to “friction” in the media through which the message travels, the route it takes, the time spent traversing switches and routers, and the time to display the result.
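The physical lower bound cited here can be checked with a few lines of arithmetic (a sketch; real round trips add the switching, routing and display delays described above on top):

```python
# Lower bound on round-the-world latency set by the speed of light.
C_VACUUM_KM_S = 299_792          # speed of light in vacuum, km/s
EARTH_CIRCUMFERENCE_KM = 40_075  # equatorial circumference, km

vacuum_ms = EARTH_CIRCUMFERENCE_KM / C_VACUUM_KM_S * 1000
print(f"around the world in vacuum: {vacuum_ms:.1f} ms")  # ~133.7 ms

# Light in optical fibre travels at roughly two-thirds of c,
# so the same trip over fibre takes about half as long again.
fibre_ms = EARTH_CIRCUMFERENCE_KM / (C_VACUUM_KM_S * 2 / 3) * 1000
print(f"around the world in fibre:  {fibre_ms:.1f} ms")   # ~200.5 ms
```

Even before any routers get involved, a round-the-world trip over fibre costs about a fifth of a second.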
Humans can mostly tolerate the delays caused by network or application latency, but having to wait more than a second for a response after pressing a key soon becomes vexing.
Modern computers are far less tolerant of latency, partly because more things happen simultaneously these days. This happens at a macro level, as in the huge data centres that host Microsoft’s Azure cloud computing products and services, as well as the micro level, as in smartphones that have four or more radios, web browsers and streaming video cameras.
Clever ways to overcome latency have existed for a long time, but mostly they involve asking processors to wait while something else happens. That overhead is less and less tolerable, not only because of the volumes of data that must be processed, but also because waiting wastes energy. Multiply that by tens of thousands of processors, and you begin to see why tackling latency would solve some important problems.
But data-level concurrency raises issues: how to manage data flows through the unit, whether it is a data centre or a smartphone; how to cope with the volumes using the least possible energy; how to ensure that once in, data can be found again.
These problems are the object of at least three Microsoft research projects. The latest, which started in December, is a joint investigation with the Barcelona Supercomputer Centre to explore the use of vector processing in data centres and smartphones.
The researchers believe they can build a grid of simple processors that uses a single instruction to process multiple data streams at once. In addition, the processor would reconfigure itself on the fly to accommodate the workload.
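The single-instruction, multiple-data idea behind this can be sketched in plain Python (a conceptual illustration only; the function name and values are invented, and a loop stands in for the hardware lanes):

```python
# One "instruction" (a multiply-add) applied across many data lanes in
# lockstep: the essence of single-instruction, multiple-data execution.
def vector_multiply_add(xs, ys, scale):
    # A real vector unit would do this in one hardware instruction;
    # here the comprehension stands in for the parallel lanes.
    return [x * scale + y for x, y in zip(xs, ys)]

lane_a = [1.0, 2.0, 3.0, 4.0]
lane_b = [10.0, 20.0, 30.0, 40.0]

result = vector_multiply_add(lane_a, lane_b, 2.0)
print(result)  # [12.0, 24.0, 36.0, 48.0]
```

The saving comes from issuing one instruction for the whole batch of data instead of one per element.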
Microsoft is also working on Sierra, a storage management system to avoid the high cost of moving data in a data centre in order to be able to switch off (not merely power down) servers when they are not required.
In separate projects that may find application in the work above, the Cambridge lab is working on new operating systems and programming languages. Barrelfish is a research operating system that Cambridge is developing with ETH Zurich for multicore and many-core processors, while F# (F sharp) is a new programming language, now in Visual Studio, that Herbert says gives programmers a better experience.
At the user end, Microsoft is exploring a new search tool, The Gathering Engine. This assumes that you are not searching for anything special, but trawls the internet looking for stuff that looks interesting and might appeal to you. Which is great, but can it find my car keys? No doubt, as auto makers shift to electronic locks, that will be a trivial request, with the answer delivered to your mobile phone.
- Ad Predictor
Being an industrial lab, commercial considerations are never far away from researchers’ minds at Microsoft’s Cambridge UK laboratory.
One former R&D project, still under refinement, is now at the heart of Microsoft’s attack on Google. Ad Predictor is the lever it hopes will overthrow Google as the search-based advertisement server of choice.
Ad Predictor aims to give Bing users a better experience by not serving advertisements that do not match the user’s interests, and selecting only those with a high probability of being clicked through.
Senior researcher Thore Graepel explains that Microsoft can charge more for ads that match users’ interests to the point where they click through to view the offer. It must evaluate tens of millions of searches overnight to arrive at a score for an ad. It then uses the score to set reserve prices for an auction for the presentation and click-through of ads.
A hefty dose of game theory informs the auction process. Rather than trying to maximise the price, Graepel says, Microsoft always charges the winner the second-highest price bid. This allows it to capture all the information in the market about how much people are willing to pay for an ad, its position, its frequency and so on.
But Microsoft also has to balance the price advertisers are willing to pay against the probability that people will click through to the offer. That means no matter how much someone is prepared to pay, Microsoft may not serve an ad if it has a low chance of being clicked. Naturally, Microsoft offers help in selecting the right keywords to include in the ad.
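The mechanism described above, ranking ads by bid weighted by predicted click probability and charging a second price, can be sketched as follows (a simplified model; the ad names, numbers and click-rate threshold are invented, not Microsoft's actual auction):

```python
# Simplified generalised second-price auction: each ad is scored as
# bid x predicted click-through rate (CTR), and the winner pays just
# enough per click to keep its score ahead of the runner-up's.
ads = [
    {"name": "ad_a", "bid": 2.00, "ctr": 0.05},
    {"name": "ad_b", "bid": 3.00, "ctr": 0.02},
    {"name": "ad_c", "bid": 1.50, "ctr": 0.08},
]

MIN_CTR = 0.03  # below this, the ad is not served at any price

eligible = [ad for ad in ads if ad["ctr"] >= MIN_CTR]
ranked = sorted(eligible, key=lambda ad: ad["bid"] * ad["ctr"], reverse=True)

winner, runner_up = ranked[0], ranked[1]
# Second-price rule: charge the runner-up's score divided by the
# winner's CTR, which is always at most the winner's own bid.
price_per_click = runner_up["bid"] * runner_up["ctr"] / winner["ctr"]
print(winner["name"], round(price_per_click, 2))  # ad_c 1.25
```

Note that ad_b bids the most but is filtered out by its low click probability, and the winner pays 1.25 per click despite bidding 1.50, which is the "capture all the information in the market" property Graepel describes.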
If Microsoft can get this balance right, it could well upset the juggernaut that is Google. At the very least, for users it means a browsing environment less cluttered with irrelevant ads. That alone may be a blessing devoutly to be wished.
In this special programme of content from Computer Weekly, in association with Microsoft, we examine the tools, technologies and best practices to create a productive, collaborative modern workforce.
Read the full transcript from this video below:
Microsoft researchers take aim at the latency problem, among others
Andrew Herbert: I am Andrew Herbert. I am the managing director of Microsoft's
Research Lab in Cambridge, and I also look after Innovations
Centers in Germany and Egypt.
Interviewer: Most of the work that you do here, from the presentations
this morning, are related to programming. A couple of things
that I am interested in about programming is getting software
that is resistant to hackers and resistant to error. Is this a
thing that Microsoft is interested in?
Andrew Herbert: Absolutely. The reason that I talked about software is we are a
software company, so programming is at the bottom of everything
we do. We have a very strong group called the Programming
Principles and Tools Group here in Cambridge, and really, they
have two themes. One theme is the design of programming
languages, and I talked about F Sharp and that is a very
expressive language with a very powerful type system that
includes things like generic types and a whole batch of
advanced features, and that is obviously improving the quality
of the software itself, so that your chances of writing errors
into your programs are reduced.
Another strand of the work that we do is more in the area of
verification, which is using mathematics to prove properties
about software. One of the leaders in that work is Byron Cook,
who is internationally recognized for his achievements. He works
using something called Model Checking, which essentially is
analyzing a piece of software, exploring all the different
states it can go into, and convincing yourself that none of
those states are erroneous. His best known work was with device
drivers for the Windows operating system, and as you probably
know, if a device driver has a bug, it trips up and brings the
whole machine crashing down. Device drivers are the source of
many of the blue screens that used to trouble Microsoft
operating systems in the past. You might even have noticed you
do not see blue screens as often these days.
What Byron built was something which ultimately shipped as the
Device Driver Verification Tool. Essentially, this will take the
binary code of a device driver and model-check it to make sure
that there is no way that it can make an incorrect call to the
operating system. Since then, he has been working on extending
that technology to verify there is no way that the device driver
can either go into an infinite loop, or put itself in a dead
lock. That is an interesting example because there is a
fundamental result in computer science that says, 'One program
cannot prove the termination of another,' but what I have just
described is a termination prover. It turns out there is a piece
of fine print, there is always fine print, and the fine print on
Turing's result is that in the general case, one program cannot
prove termination of another. For 30 years that stopped people
trying to write termination provers, and Byron got up one
morning and said, 'I wonder which cases are not general.' It
turns out Windows Device Drivers are not general, and indeed
most practical programs are not general.
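The model checking Herbert describes, exhaustively exploring every state a program can reach and confirming none is erroneous, can be sketched as an explicit-state search (a toy lock-contention model for illustration, not the actual driver-verification tooling):

```python
from collections import deque

# Toy model: two threads competing for one lock; each thread is
# "idle", "waiting" or "critical". Safety property to check:
# the two threads are never in the critical section at the same time.
def transitions(state):
    s1, s2, lock = state
    moves = []
    for who, status in ((0, s1), (1, s2)):
        if status == "idle":
            moves.append(step(state, who, "waiting", lock))
        elif status == "waiting" and not lock:
            moves.append(step(state, who, "critical", True))
        elif status == "critical":
            moves.append(step(state, who, "idle", False))
    return moves

def step(state, who, new_status, lock):
    threads = list(state[:2])
    threads[who] = new_status
    return (threads[0], threads[1], lock)

def is_error(state):
    return state[0] == "critical" and state[1] == "critical"

# Breadth-first exploration of every reachable state, checking the
# safety property in each one: the exhaustive search Herbert describes.
seen, queue = set(), deque([("idle", "idle", False)])
while queue:
    state = queue.popleft()
    if state in seen:
        continue
    seen.add(state)
    assert not is_error(state), f"safety violated in {state}"
    queue.extend(transitions(state))

print(f"{len(seen)} reachable states, none erroneous")
```

Real device drivers have vastly larger state spaces, which is why tools like the driver verifier need cleverer abstractions than this brute-force enumeration, but the principle is the same.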
Interviewer: Does that hold for all the popular operating systems?
Andrew Herbert: I would not know, we have done the work in the context of
Windows. I would think so. He did the termination checker, and
his current work is looking at how memory is shared between
device drivers and operating systems, and making sure there are
no conflicts over who is scribbling on the data when, and that
things are given back when they have been finished with. That kind of
checking is improving the robustness of the code, and it does
not require you to write the code in any special language.
Interviewer: Andrew, in your presentation you talked a lot about
concurrency and parallel processing.
Andrew Herbert: Right.
Interviewer: Does this work have any relevance in that scenario?
Andrew Herbert: It absolutely does. A device driver is a classic example of a
concurrent program because when the device does something, that
signal has to go in the operating system, so there are messages
going in one way as the device is saying, 'What is happening?'
The operating system is now wanting the device to read and write
things, or if it is a disk, to seek new data, so you have these
very asynchronous dialogues, that is why it is often hard to get
them correct. It is not just a case of reading the program from
the top to the bottom; you have got to think about all the
subtle interactions as well as the
interactions between the device and the operating system. You
have probably got various timeouts. If the user cancels the
application, the device driver has got to be cleaned up.
Interviewer: Is this being applied in the development of the new
operating systems that you were mentioning this morning?
Andrew Herbert: In the context of operating systems research, it has been one of
the strands. There was a project done by colleagues in the
Redmond Laboratory, that is work that is substantially complete
now, a system called Singularity. They experimented with: can
you build an operating system using a very strictly typed
programming language? They were going down the path of having the
right programming language so you cannot write bugs. That
illustrates an important point: what we do in research is explore
ideas, so the idea they were exploring there was, what is it
like to write an operating system in one of these heavily
typed languages? I talked today about Barrelfish, which asks:
what is a different way of organizing communications? When we
start talking to
product groups, they start picking up those ideas and putting
them together. There is a piece of research yet to be done,
which is what is the right kind of type checking for the kind of
communications model that Barrelfish has.
Interviewer: Do you have a sense of the time frame that we are
operating to before we see these in commercial products?
Andrew Herbert: Some of the ideas are actually getting taken much more rapidly
in the world of Cloud computing because there, Microsoft is
provisioning its own data centers. We do not have to wait for
the next release of Windows in those; it is not the same as the
systems we are shipping to customers, so that can evolve
much more quickly, it can be much more customized. One of the
fastest paths for technology transfer is through something
called the Extreme Computing Group, which is a new research
group that was set up about 18 months ago just to look at how we
should go about building and programming these big data centers.
There is a project in this lab that is looking at very different
communication architecture. Rather than having network cards on
the individual machines, why not just connect the network
directly to the back of the machine because on multiple core
machines, one of the cores can look after the network protocols.
That saves you some electronic costs, and it brings the
communications right into the heart of the operating system.
Interviewer: Scott McNealy was right, the network is the computer?
Andrew Herbert: Absolutely.
Interviewer: I have seen some research which says there is a
fundamental constraint for Cloud computing, and that is
essentially availability of bandwidth in the world. There is
also the latency question, in terms of it takes 100 milliseconds
for light to travel around the earth, so you have to deal with
those fundamental constraints. Where is Microsoft in this?
Andrew Herbert: I am less worried about bandwidth. One of the legacies of the
dot-com bust actually is a huge amount of fiber. It is not
everywhere -- there is work to be done -- but there is a lot of
bandwidth available, as such. I think latency is the bigger
challenge, and that is why I talked about, in Barrelfish, how we
structure our operating systems so that we do not start paying
the latency penalty inside the data center itself. Clearly, to
get from the data center to the end user, you have got to go
across the distance, and that is why companies like ourselves
and our competitors are building global networks of these data
centers so that we can move the computation as close as we
physically can to the user.
There is a lot to be done in thinking about how you tie the user
interface to the application, clever techniques to mask latency
from the end user. Some of it is that, if you have got a smart client, it
can often predict what the outcome is likely to be.
Anything you carry or wear is inevitably going to be limited in
some way, essentially by battery life. While there are fantastic
strides in the amount of computing power and memory we see in
things like phones and cameras, such devices are, to some extent, limited.
That is where I think there is an important role for the Cloud
because the cloud can be the back office for those devices. I
confidently predict there will be more things connected to the
Cloud than there will be people.
We had wearable computing in the sense of you wearing a full
body harness and carrying all you need; I do not think that is
terribly practical. What we are seeing now, and smartphones are
the real example in people's minds today, is devices that
increasingly are always connected to the net and accessing data
across the network. Thinking about how you divide the computing
between what runs on the device and what runs on the other side
of the network, in your enterprise or in your Cloud, that is
the important programming model.
I see a lot of richness in devices, whether we wear them or they
are bits of furniture in the house, that is all part of the
place to be explored.
Interviewer: I have seen some research which suggests communications
between sensor-based devices is going to be a lot more than the
stuff between people and devices.
Andrew Herbert: Absolutely. I think computers outnumber us. Is it 11 billion
processors that are manufactured to date? There are only 6
billion of us on the planet, so we have either got two each or someone
has got really rather a lot. Those things are all talking to
each other. I think one of the subtexts of what I was talking
about today is the way in which the computers are becoming more
autonomous, and their relationship to us is changing. They are
computing things themselves, undertaking tasks, giving us
feedback on that stuff and recognizing what we are trying to do
and becoming . . . The model is much more assistive in what they
do than just being a tool you command to do something.
Interviewer: Taking that to its extreme, do we need robot laws, then?
Andrew Herbert: We got them already. I do not live in fear of my computer
taking over. I think there is a long distance from technologies
that are helpful day-to-day to the nightmare visions of robots
marching around trying to exterminate us because we are not
quite bright enough.
Interviewer: Perhaps there is an interim step, and governments are
increasingly using this technology to watch the populations.
Andrew Herbert: That is a different question. Certainly, I think one of the key
roles that we have as a research laboratory is to talk about
technology and look at the various possibilities and
applications and encourage that kind of debate. It is an
interesting debate, if you talk to many people in the UK, they
will tell you surveillance is good because it captures
criminals. You go to people in other countries and they say,
'Gosh. How can you Brits live in such a country where the Prime
Minister has you on his screen?' Which is a gross extrapolation.
Interviewer: Perhaps, it could be fair if people could have him on
their screen, if we do not have him enough already.
Andrew Herbert: After last week, I think we have seen enough of all politicians.
Those are things where society has to arrive at its own
conclusions, and the role of a scientist is to be part of that
debate and to show both the good uses of the technology and talk
about some of the issues.
Interviewer: Andrew, let us leave it there.
Andrew Herbert: Great.
Interviewer: Thank you very much, indeed, for your time.