APIs lead the way to better interoperability

What do government intelligence spooks and YouTube have in common? Not much, on the face of it, except that they have both bought into the same fundamental principle of what we have come to call Web 2.0: web-based application programming interfaces (APIs).

The Defence Intelligence Agency (DIA) is the part of the US intelligence community that sources battlefield intelligence for all three branches of the military. Things don't always move quickly through the shadowy corridors of the US intelligence world, and software development projects are no exception. That's a problem, because information has to move more rapidly - and there's more of it today than ever before.

Analysts have to find it, collate it and use it to save lives in the field. And now they're blogging it, and putting it in wikis. They're a much more Web 2.0-savvy bunch than you might imagine, according to Bob Gourley, who was formerly chief technology officer at the DIA. The problem is getting all of the information and analysis from a variety of systems into a form that is easily digestible.

"Before, we created functionality in a very formal, rigid way that took a long time to produce. It gave us some control over the visualisation system, but not a true mashup. It takes years to develop, and it's very expensive," he says. The agency turned to mashups - a way of collating data from multiple sources using the types of API typically seen in consumer-facing applications such as Google Maps and eBay. Information from different databases could be automatically married and presented to analysts and Pentagon officials in the same interface.

"A very small team of developers were able to successfully deliver and deploy this capability in a matter of months, after we acquired the software," says Gourley.

The DIA was able to do this in part thanks to Alien - a service-oriented architecture (SOA) that it created to expose the data in its applications. SOA and Web 2.0 applications have a lot in common: both are generally based on XML, the markup language widely used to structure data for exchange across IP networks.

Data exchange used to be a far less tractable problem. In the 1980s, you'd use a complicated electronic data interchange (EDI) format to get your data from one application to the other. Then, remote procedure calls enabled applications to probe each other for data across networks. But all of this happened using complicated protocols running over dedicated network ports that usually had to be opened up for that purpose.

Around 10 years after Tim Berners-Lee invented HTTP (the basis for what became the web), the simple object access protocol (Soap) emerged as a way for applications to transmit messages and data over it. Coding Soap support into applications was still a relatively involved procedure, because wrapping the application message in an XML-based envelope added a layer of complexity to the messaging process.
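To see the overhead involved, here is a minimal sketch of a Soap call in Python. The endpoint, namespace and operation are invented; the point is simply that the real request sits inside an XML envelope that has to be built, posted and unpacked.

import urllib.request

# Hypothetical endpoint, namespace and operation: the request itself is a
# short question ("how many of product WIDGET-42 are in stock?"), but it
# must be wrapped in a Soap envelope before it travels over HTTP.
envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetStockLevel xmlns="http://example.org/inventory">
      <ProductCode>WIDGET-42</ProductCode>
    </GetStockLevel>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    "http://example.org/inventory-service",
    data=envelope.encode("utf-8"),
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "http://example.org/inventory/GetStockLevel",
    },
)
# The reply is another XML envelope that the caller must unpick:
# response = urllib.request.urlopen(request)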

An alternative, representational state transfer (Rest), uses uniform resource identifiers (URIs) - the same kind of online address that you might use to reach a website. Adding parameters to such an identifier (in the same way that you add variables to a web URL to access a web application's data) returns a set of results. These results can be delivered in machine-readable XML, forming the basis for a web-based API.
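A Rest-style request against a hypothetical endpoint might look like the short sketch below: parameters are appended to the resource's address, and the XML that comes back parses directly into usable records.

import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

# Build the request: parameters are simply appended to the resource's address.
params = urllib.parse.urlencode({"postcode": "SW1A 1AA", "radius": "5"})
url = "http://example.org/api/branches?" + params
print(url)  # http://example.org/api/branches?postcode=SW1A+1AA&radius=5

# Against a real endpoint, the XML results would parse straight into records:
# with urllib.request.urlopen(url) as response:
#     tree = ET.parse(response)
#     for branch in tree.findall(".//branch"):
#         print(branch.findtext("name"), branch.findtext("distance"))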

The other popular web-based API technology is JavaScript object notation (JSON), a lightweight, text-based data format used to deliver data from an online application to a back-end application or a web browser. Jackbe, which supplied some of the technology for the DIA's mashed-up applications, sells a server-based system designed to aggregate multiple Web 2.0 data streams and deliver them in JSON format to the browser.
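The attraction of JSON is that the payload maps straight onto the objects a script already works with, as in this short sketch of an invented feed.

import json

# An invented feed: the structure is the point, not the contents.
payload = """
{
  "source": "field-reports",
  "items": [
    {"id": 101, "headline": "Bridge reopened", "lat": 33.31, "lon": 44.36},
    {"id": 102, "headline": "Road closed", "lat": 33.34, "lon": 44.40}
  ]
}
"""

# json.loads turns the text straight into dictionaries and lists.
feed = json.loads(payload)
for item in feed["items"]:
    print(item["id"], item["headline"], (item["lat"], item["lon"]))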

The DIA uses this architecture to channel information between systems in its own organisation, but it's naturally a sensitive organisation, protective of its own information. At the other end of the spectrum is the consumer-focused internet. Companies like eBay, Amazon, and Flickr all make their information available in the same fundamental way, exposing their data in machine-readable forms that are easy for other applications, or in some cases web browsers, to consume. They use Rest, JSON, and in some cases, Soap, to deliver their data.

Making data accessible over the web with APIs is at least partly what Web 2.0 is about. The concept emerged as online application providers empowered users to submit their own data and share it both inside and outside the application. The business benefits are significant: companies can rapidly prototype projects with minimal investment, capitalise on data resources they don't own, and open up their own data to partners.

The challenges are equally significant: companies may run into security problems when relying on third-party XML streams. Applications that use extensive JavaScript code to consume and manipulate that data may fall foul of client-side vulnerabilities. And companies basing their applications on third-party APIs may have little, if any, recourse to service-level agreements. If Google Maps goes down and your CRM application is built on it, what then?

But in spite of these challenges, APIs are appearing all over the web, and small start-ups are thriving on them. One of the most ubiquitous platforms for mashups is Google Maps, which has clear commercial possibilities. Theo Burry, co-founder of Fanueil Media, develops applications that layer third party data over Google Maps, producing visualisation applications for clients. "Google provides an API that lets you get from addresses to geographical co-ordinates," he says. "It can find out where a particular location is, based on any free-form address that you give them." Burry has used Google Maps to mash up everything from real-time crime data to prices at petrol pumps for corporate clients.
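A geocoding lookup of the kind Burry describes might look like the sketch below, which uses Google's JSON geocoding endpoint (the exact URL and parameters have varied over the years) and a placeholder API key.

import json
import urllib.parse
import urllib.request

# Placeholder key; the endpoint is Google's JSON geocoding service.
address = "10 Downing Street, London"
url = ("https://maps.googleapis.com/maps/api/geocode/json?"
       + urllib.parse.urlencode({"address": address, "key": "YOUR_API_KEY"}))
print(url)

# With a valid key, the free-form address comes back as co-ordinates:
# with urllib.request.urlopen(url) as response:
#     result = json.load(response)
#     location = result["results"][0]["geometry"]["location"]
#     print(location["lat"], location["lng"])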

Podbop, a sideline business for Florida-based web developer Daniel Westermann-Clark, uses a Web 2.0 API to provide users with a searchable database of bands playing in their town. The service combines the band information with MP3s freely available online, creating what amounts to a personalised preview of bands playing in the area.

"We pull events from Eventful.com along with free MP3s from around the web," says Westermann-Clark. Eventful.com is one of the growing numbers of web sites publishing its data as a Web API. However, as with many of these services, some level of back-end data transformation is still necessary, depending on where different data streams are coming from.

He says, "We contact places like Magnatune.com and get them to provide essentially a CSV file containing their MP3s, which we then bring into our own database. There are a lot of algorithms I've developed to parse out the event description and match that to the band's name."

This notion of transforming non-machine-readable data for consumption by applications is gaining traction. The software industry wants the commercial value of data locked up in legacy formats, and it's coming to get it, ready or not. Kapow Technologies has developed a tool called OpenKapow, which lets users create robots to query more traditional websites and extract their data. It is not alone: another site, Dapper, offers a similar service. Folding these into Web 2.0-enabled apps brings legacy sites into the modern world, and industry groups like the Data Portability Project are promoting the idea.
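OpenKapow and Dapper wrap this kind of extraction in friendly tools, but the underlying idea can be sketched in a few lines: fetch an ordinary web page and pull structured values out of its HTML. The page fragment and pattern below are hypothetical.

import re

# A hypothetical fragment of an ordinary, non-API web page; in practice the
# HTML would be fetched with urllib.request.urlopen() from a live site.
html = '<tr><td class="item">Widget</td><td class="price">£4.99</td></tr>'

# A pattern that pulls the structured values back out of the markup.
for item, price in re.findall(
        r'<td class="item">(.*?)</td><td class="price">(.*?)</td>', html):
    print(item, price)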

"Our web service APIs are so integral to how companies use our applications and platforms that as a company we now deliver more API traffic than we do web traffic," says Adam Gross, vice-president of developer marketing at Salesforce.com, who says that the switch happened in the last 12 to 18 months. "Companies use APIs to access information in any of the applications that we're running, just as you might use a database connection to do something similar in a traditional server environment." For a company predicated on selling software as service, Web 2.0 APIs are fundamental to its business model.

Gross cites Nick Carr, former Harvard Business Review editor and author of The Big Switch: Rewiring the World from Edison to Google, which describes how businesses built their own power plants in the early days of electricity. Over time, as energy became a commodity, they began buying it in. The same is now happening with computing power, Carr argues. If computing power and data are the new electricity, then the opportunity exists to outsource them in the same way. In that case, Web 2.0 APIs are arguably the way of hooking companies up to such resources.

With many companies only beginning to understand and implement Web 2.0, advocates of the technology are already looking at the next improvements. John Crupi, CTO of Jackbe, is working on semantic technologies that do more than simply integrate Web 2.0 data: they help applications understand what the data means.

Consider two product suppliers, both providing Web 2.0 APIs for their inventory, and both selling the same type of widget. One provider might tag its widget with one label, while the other might use a different one. The applications don't know what each other's data means. A supply chain manager at a distribution firm might be able to use the Web 2.0 APIs to get both inventory lists on to the same screen, but still has to know enough about the data to draw inferences about them.

That's a start, but it would be more useful if the application integrating the two suppliers' data could automatically show how many widgets from one supplier were available when the other was out of stock. For that, it has to know that one supplier's part can stand in for the other's. The suppliers are unlikely to spend time mapping their APIs to each other, so someone else will have to do it.
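In code, the missing piece is a small mapping that only a person can supply. In the sketch below, which uses invented part codes, once someone has recorded that the two labels describe the same item, the application can answer the out-of-stock question automatically.

# Invented part codes and stock levels for the two suppliers' feeds.
supplier_a = {"widget-std": 0}           # part code -> units in stock
supplier_b = {"standard_widget": 40}

# The supply chain manager records, once, that the two labels are equivalent.
equivalences = {"widget-std": "standard_widget"}

for part, stock in supplier_a.items():
    if stock == 0:
        alternative = equivalences.get(part)
        if alternative and supplier_b.get(alternative, 0) > 0:
            print(part, "is out of stock; the equivalent", alternative,
                  "is available:", supplier_b[alternative], "units")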

"What I want to be able to do is to get the supply chain manager to tell me that those two things are the same pieces of data," says Crupi. Coding what amounts to a specialised form of Web 2.0 social tagging into a system in an intuitive way for non-technical end-users will be a challenge.

While some companies work on enhancing the web API concept, others are still realising the significance of these APIs: they bring us closer to solving a problem that we have struggled with for at least a decade. When open computing emerged in the 1980s, software connectivity was still a black art. Now, the underlying language of software integration is becoming standardised enough to make application interoperability much easier over commoditised, well-understood networking protocols. It may still be too early to say the interoperability problem is completely solved, but these APIs have kick-started a revolution that will get us a large part of the way there.






This was first published in February 2008

 
