No longer just the province of specialist sectors, agent-based computing is changing the way systems interact and how they are managed.
Agent-based computing has already transformed processes such as automated financial market trading, logistics and industrial robotics. Now it is moving into the mainstream commercial sector as more complex systems with many different components are used by a wider range of businesses.
Organisations that have successfully implemented agent technologies include DaimlerChrysler, IBM and the Ministry of Defence.
So what are agent technologies? In essence, they are autonomous software systems that can decide for themselves what they need to do. Agents are capable of operating in dynamic and open environments and often interact with other agents - including both people and software.
"Agents are a way to manage interactions between different kinds of computational entities, and to get the right kind of behaviour out of large-scale distributed systems," says Michael Luck of the School of Electronics and Computer Science at the University of Southampton and executive director of the EU-funded AgentLink action co-ordination programme.
"The idea of grid computing is based on large-scale distributed computation in support of what are called virtual organisations. All they need to do is to be able to interact.
"We have built small-scale systems, and we are starting to build large-scale systems, where the component software entities will determine what to do. It is about machines joining and leaving dynamically as they see fit and as the system allows."
Luck argues that the growing complexity of the interactions in emerging distributed systems means new dynamic techniques need to be introduced to provide more flexible mediation and management.
One of the basic ideas of agent-based computing is that multiple agents inhabit a shared environment and communicate with one another, each deciding for itself what it needs to do.
For example, laws, norms and guides for behaviour - even policing and trust between electronic components - can all help in the mediation and management of such computational systems.
"We can build these systems, but we have no experience of how to manage such large-scale, open and dynamic systems," Luck says.
"Management of these systems is concerned with mediating the interactions of components, whether they are supercomputers or groups of low-level factory floor devices like sensors and actuators.
"In human societies we have developed laws, norms, regulations and systems of policing, but we do not have that in computational systems. We need computational entities that will do what we do in the real world.
"We need norms and rules of behaviour within systems, so that if agents joining and leaving a system do not comply, there must be some sort of sanction."
Some aspects of agent-based computing seek to capture human notions such as trust, reputation, dependence, obligations, permissions, institutions and other social structures in electronic form.
Luck adds that computational analogues of trust and reputation need to be developed so that agents can make judgments based on past histories of interactions.
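One generic way to build such a computational analogue of trust - not a specific model from Luck's or AgentLink's work - is to score each partner by the smoothed ratio of successful to total past interactions, so that unknown partners start at a neutral 0.5:

```python
class ReputationModel:
    """Estimate trust in a partner from the history of past interactions,
    using a Laplace-smoothed success ratio. This is a generic textbook
    technique; real reputation systems add decay, witnesses, etc."""
    def __init__(self):
        self.history = {}  # partner -> [successes, failures]

    def record(self, partner, success):
        counts = self.history.setdefault(partner, [0, 0])
        counts[0 if success else 1] += 1

    def trust(self, partner):
        s, f = self.history.get(partner, [0, 0])
        return (s + 1) / (s + f + 2)  # unknown partners score 0.5

model = ReputationModel()
for outcome in [True, True, True, False]:   # three good deals, one bad
    model.record("supplier-A", outcome)
print(round(model.trust("supplier-A"), 2))  # 0.67
print(model.trust("stranger"))              # 0.5
```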
The notion of agent-based computing has been adopted enthusiastically in the financial trading community, where autonomous market trading agents are said to outperform human commodity traders by 7%.
"One example is the Zero Intelligence Plus (Zip) autonomous adaptive trading agent algorithm developed by Dave Cliff, a colleague of mine at Southampton University," Luck says.
"Inevitably, machines can monitor stock market movements much more quickly than humans, and if you can encode the kinds of rules that you want, then it is not unreasonable to imagine that computational traders will be able to outperform humans.
"I am surprised that the figure is only 7%. This is based on experiments we have carried out, but there are robo-trader programs being used in the market not just to provide information, but to do actual trading."
Cliff developed the Zip algorithm at HP Labs between 1998 and 2005. It works by calculating the best trading strategy for continuous double auctions, the trading basis of most financial markets. Zip traders have the ability to "learn" from their actions, using simple machine learning rules.
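The heart of a ZIP trader is a Widrow-Hoff style (delta rule) update of its profit margin toward a target price. The sketch below simplifies Cliff's published algorithm - it omits the momentum term and stochastic target perturbation, and the parameter values are illustrative:

```python
class ZipSeller:
    """Simplified ZIP-style seller: adapts its profit margin toward a
    target price with a Widrow-Hoff (delta) rule. Cliff's full algorithm
    adds momentum and randomised targets, omitted here for clarity."""
    def __init__(self, limit_price, margin=0.2, beta=0.3):
        self.limit = limit_price   # cost: never sell below this
        self.margin = margin       # current profit margin
        self.beta = beta           # learning rate

    def quote(self):
        return self.limit * (1.0 + self.margin)

    def observe(self, market_price, trade_happened):
        # Raise the margin when trades clear above our quote (we could
        # have charged more); lower it when we are being undercut. Either
        # way, move the quote a fraction beta of the way to the target.
        if trade_happened and market_price > self.quote():
            target = market_price
        elif market_price < self.quote():
            target = market_price
        else:
            return
        delta = self.beta * (target - self.quote())
        self.margin = max(0.0, (self.quote() + delta) / self.limit - 1.0)

seller = ZipSeller(limit_price=100)        # quotes 120 initially
seller.observe(market_price=130, trade_happened=True)
print(round(seller.quote(), 2))            # 123.0: margin has risen
```

Because the update is just one multiply-add per market event, thousands of such traders can react to every tick far faster than a human could.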
In the manufacturing sector, DaimlerChrysler implemented an agent-based system on one factory floor to allow individual work pieces to be directed dynamically around the production area.
The intention was to implement flexible manufacturing to meet rapidly changing operations targets. The result was claimed to be a 20% increase in productivity.
The military has also got in on the act. The Ministry of Defence has used an agent-based system to model changes in human behaviour in military environments due to factors such as heat, fatigue and caffeine.
On the commercial front, Magenta - an Anglo-Russian software company specialising in the commercial use of multi-agent technology - has worked with clients in scheduling supply chains, semantic search, text understanding and document classification, and pattern recognition.
One of these projects is for Newgistics, which offers returned-goods management services for the retail, healthcare, service parts, telecommunications equipment and computing industries.
Magenta has developed an Intelligent Returns Management system that controls both the package and information flows from the point of order, or shipment, to the final destination, and gives visibility across the entire reverse logistics chain.
Mark Hinton, chief technology officer at Magenta, says, "When you buy from the internet or by mail order and want to send something back, it is a big problem for retailers and suppliers.
"Newgistics gathers the returns, and gets them back to the supplier. We built them an agent-based system to manage that return supply chain, handling something in the order of 100,000 parcels a week.
"It needs to be an agent-based solution because it is a very dynamic situation. You do not know which items are going to be returned on a day-by-day basis, and the numbers are very high, as are the costs of dealing with them.
"Any small gains that you can get by having an agent-based system that can route individual parcels back through the supply chain, scale up to be significant savings for the retailer."
The alternative is to supply all customers with a postage-paid return package, but that is very expensive. There is also the added complexity of dealing with environmental considerations, for example when returns have to go to landfill.
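Magenta has not published its routing algorithm, but a common agent-based approach to this kind of dynamic allocation is a contract-net style auction: each parcel is announced, carrier agents bid their estimated handling cost, and the cheapest bidder wins. All names and costs below are made up for illustration:

```python
class CarrierAgent:
    """A carrier bids its cost to handle a parcel; cost rises as it fills."""
    def __init__(self, name, base_cost, capacity):
        self.name, self.base_cost, self.capacity = name, base_cost, capacity
        self.load = []

    def bid(self, parcel):
        if len(self.load) >= self.capacity:
            return float("inf")   # full carriers price themselves out
        return self.base_cost * (1 + len(self.load) / self.capacity)

def allocate_parcel(parcel, carrier_agents):
    """Contract-net style round: announce, collect bids, award to cheapest."""
    cost, winner = min((a.bid(parcel), a) for a in carrier_agents)
    winner.load.append(parcel)
    return winner.name, cost

carriers = [CarrierAgent("van-A", 5.0, 2), CarrierAgent("van-B", 6.0, 2)]
print(allocate_parcel("P-1", carriers))  # van-A is cheapest while empty
print(allocate_parcel("P-2", carriers))  # van-A now bids 7.5, so van-B wins
```

Each small routing decision is locally cheap, but repeated 100,000 times a week those marginal savings compound, which is the economics Hinton describes.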
In traditional object-oriented systems, the software is controlled by a central thread. "However, agents are more active than regular objects, and the key difference is that agents are event-driven," says Hinton.
"As something comes into the system, agents wake up and do things according to their goals and objectives. We are also interested in aspects of agent technology as it would apply to the semantic web, understanding language and text, and doing smarter searching of data.
"We are talking about recognising patterns in unstructured data in a way that would have been done traditionally with statistical analysis techniques, but you can get agents to self-organise and find that information.
"It also has elements of data mining, and can find things that are obvious, like correlations between spending on certain types of food and alcohol, for example, or the fact that people who rent DVDs then go and buy take-away meals, but it can also find things that are not so obvious," Hinton says.
Case study: better performance is Grass Roots' motivation
The Grass Roots Group helps clients design programmes that motivate employees, customers and partners. It has deployed ASG's Tevista Performance Manager (TPM) to monitor network performance across its web and call centre-based services.
The company's websites run on Microsoft Internet Information Server across a number of web servers, backed by a mixture of SQL and AS/400 databases.
Grass Roots' incentive participants accumulate rewards before redeeming them in an online store for goods, services or holidays. There are also web-based childcare accounts, funded by regular contributions and used to make Bacs payments to childcarers - all of whom are enrolled with their banking details.
TPM provides what ASG calls "Tevista Synthetic User", an approach to testing based on agent technology that is able to measure the response times for specific application transactions.
This means that services are not only checked for availability, but the performance experienced by a user is measured against defined service level agreements.
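A synthetic-user probe of this kind boils down to running a scripted transaction, timing it, and comparing the result against the SLA. The sketch below is a generic illustration of the idea - ASG's actual product API is not public, and `run_transaction` is a hypothetical stand-in for the scripted login or database query:

```python
import time

def check_transaction(run_transaction, sla_seconds):
    """Run a scripted transaction (e.g. open site, log in, query the
    database), time it, and report pass/fail against the SLA."""
    start = time.monotonic()
    try:
        run_transaction()
        ok = True
    except Exception:
        ok = False
    elapsed = time.monotonic() - start
    if not ok:
        return "FAIL: transaction error"
    if elapsed > sla_seconds:
        return f"FAIL: {elapsed:.2f}s exceeds {sla_seconds}s SLA"
    return "PASS"

# A stand-in transaction for illustration: a short simulated page load
print(check_transaction(lambda: time.sleep(0.01), sla_seconds=1.0))  # PASS
```

Note that a mere availability check would stop at "did the request succeed"; the SLA comparison is what turns it into a measure of the experience a user actually gets.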
Steve Parkinson, group IS manager at Grass Roots, says, "Before we had an automated monitoring system, people had to do a lot of manual checking to make sure services were up and running.
"We needed a tool that could answer questions like: 'Is the service available? Can I log in? Can I see the database?' You might get to a website, but if you cannot log in, or if it takes forever, you are going to have a bad customer experience."
If a network performance metric monitored by one of the agents exceeds a predefined threshold, network administrators are alerted automatically. The system not only notifies the support team, but also checks whether anyone has acknowledged the problem.
"You need to layer up the tests to see if the server is available, if the site is available, if the user can log in and how long it takes. If it meets those and other certain criteria, then it passes," Parkinson says.
"Otherwise, if it fails and an alert is raised, the system will very quickly identify the failure point, and will send out alerts by e-mail and SMS on a 24x7 basis.
"We run these monitors every two minutes, so we will know of any issues before the customer does. The solution is designed so that if the first engineer does not acknowledge the alarm within a defined period, the alarm is escalated onto the next engineer, and so on, up to chief executive level."