Agents and mashups

I’ve been thinking about software agents. Again. I’ve just read Paul Browne’s post, What comes after Java and .Net? Agents. The concept of autonomous software agents is one that has preoccupied me for years, ever since I researched it for my MSc thesis. There is so much that is attractive about the idea, but it somehow never quite gains traction.

The software agent paradigm is problematic, both conceptually and technically. Definitions of software agents vary (I don’t want to rehearse them here, but Wikipedia has this to offer), and the concept presents some tricky technical challenges.

Recently, however, I’ve been thinking about how the Web 2.0 meme has caught on, despite refusing to submit to a clear definition. Applications and services seem to fall under the Web 2.0 umbrella if they exhibit certain features or properties. Similarly, software agents are usually described in terms of a set of aspects such as ‘autonomy’, ‘goal-oriented behaviour’ and ‘an ability to react to changes in environment’.

What I find most interesting about current developments in web technologies is best summed up in the phenomenon of the mashup. From a user/business point of view, the significance of mashups is that a new service can be quickly created and offered through the combination of two or more existing services. From the perspective of a developer, the significance of all this activity is that the underlying interfaces are really demonstrating their usefulness. In many cases, simple APIs are succeeding and delivering where complex web service architectures are still being designed. The ‘bottom-up’ development approach is king in Web 2.0, for the time being at least. Although mashups often follow a client/server approach, an implicit peer-peer relationship is also formed.
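To make the mashup idea concrete, here is a toy sketch that joins the responses of two imaginary XML-over-HTTP services into a new combined service. Everything here is invented for illustration: the feed contents are inlined where a real mashup would fetch them over HTTP, and the element names and fields are assumptions, not any real API.

```python
import xml.etree.ElementTree as ET

# Responses two hypothetical services might return over plain HTTP.
# In a real mashup these would come from urllib.request.urlopen(...).
EVENTS_XML = """
<events>
  <event name="Jazz Night" venue="The Vortex"/>
  <event name="Poetry Slam" venue="The Troubadour"/>
</events>
"""

VENUES_XML = """
<venues>
  <venue name="The Vortex" lat="51.55" lon="-0.07"/>
  <venue name="The Troubadour" lat="51.49" lon="-0.19"/>
</venues>
"""

def mashup(events_xml, venues_xml):
    """Join events to venue coordinates -- the combined 'new service'."""
    venues = {
        v.get("name"): (float(v.get("lat")), float(v.get("lon")))
        for v in ET.fromstring(venues_xml).findall("venue")
    }
    return [
        {"event": e.get("name"), "coords": venues.get(e.get("venue"))}
        for e in ET.fromstring(events_xml).findall("event")
    ]

for item in mashup(EVENTS_XML, VENUES_XML):
    print(item)
```

The point is how little machinery is needed: a few lines of parsing and a dictionary join produce a service neither provider offers on its own, which is exactly why simple XML-over-HTTP interfaces keep beating heavier web service stacks here.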

Quite a lot of early research and development into software agents focussed on autonomous agents negotiating for resources in some virtual ‘marketplace’. The idea was that small software constructs, acting in a semi-autonomous, goal-oriented manner, would ‘discover’ better solutions to certain types of complex problem. This is emergence, which is also an important aspect of the rise of social software. Some sort of peer-peer relationship is present here too.
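A minimal sketch of that marketplace idea, assuming a simple ascending (English) auction: each agent follows only a local rule ("bid a small increment while below my private valuation"), and the final allocation and price emerge from the interaction rather than from any central plan. The agent names and valuations are invented for illustration.

```python
class BidderAgent:
    """A semi-autonomous agent that bids up to a private valuation."""
    def __init__(self, name, valuation):
        self.name = name
        self.valuation = valuation  # the most this agent will pay

    def bid(self, current_price):
        # Local rule: raise the price by one while below valuation.
        if current_price < self.valuation:
            return min(self.valuation, current_price + 1)
        return None  # drop out of the bidding

def english_auction(agents, start_price=0):
    """Ascending auction: the outcome emerges from local bidding rules."""
    price, leader = start_price, None
    active = True
    while active:
        active = False
        for agent in agents:
            if agent is leader:
                continue  # the current leader need not outbid itself
            offer = agent.bid(price)
            if offer is not None and offer > price:
                price, leader = offer, agent
                active = True
    return leader, price

agents = [BidderAgent("a1", 5), BidderAgent("a2", 12), BidderAgent("a3", 9)]
winner, price = english_auction(agents)
print(winner.name, price)
```

No agent knows any other's valuation, yet the resource ends up with the agent who values it most, at roughly the second-highest valuation: the classic emergent result this line of research was chasing.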

It seems to me that, with simple APIs (typically involving XML over HTTP), microformats and even the semantic web, it becomes simpler, at least on paper, to equip a software agent with the rules it needs to constrain its behaviour to meet certain clear goals. With the ‘web as platform’ aspect of Web 2.0, the environment for agents is more established and easier to program for.
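What "rules that constrain behaviour toward a clear goal" might look like in miniature, using a thermostat-style reactive agent as a stand-in (all names and thresholds here are invented for illustration, not any real agent framework):

```python
class ReactiveAgent:
    """A minimal agent: a goal plus rules that react to its environment."""
    def __init__(self, goal_temp):
        self.goal_temp = goal_temp

    def act(self, observed_temp):
        # Rules constrain behaviour toward the goal, within a tolerance.
        if observed_temp < self.goal_temp - 1:
            return "heat"
        if observed_temp > self.goal_temp + 1:
            return "cool"
        return "idle"

def simulate(agent, temp, steps):
    """Crude environment loop: each action nudges the temperature."""
    for _ in range(steps):
        action = agent.act(temp)
        if action == "heat":
            temp += 1
        elif action == "cool":
            temp -= 1
    return temp

agent = ReactiveAgent(goal_temp=20)
print(simulate(agent, 15, 10))
```

The agent never stores a plan; it simply reacts to each observation, and the goal-directed behaviour falls out of the rules. The hard part that Web 2.0 arguably eases is not this loop but giving the agent a well-structured environment to observe, which is what simple APIs and microformats provide.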

To quote from Paul’s piece:

How does Web 2.0 give a push to Agents? Before, Systems were standalone, and everything planned in advance. With Web 2.0 everything is connected and too complex to manage by one person. We need to look at what works successfully in real life. Just as Market economies overcame the ‘Command and control’ of communism, so Agents will overcome the Command and control of Objects. It may not be perfect, but it will be (slightly) better.

I don’t disagree with Paul’s thesis that agents might come after Java and .Net. While there are certainly examples of successful agent-like software written in Java, and most likely in .Net too, these languages were not designed with autonomous software in mind. Where I depart from Paul is in the idea that agents will replace objects. I think agents are a higher-level concept. Perhaps agents will replace (passive) *services*…?

Do software agents have a larger role to play in the Brave New Web?

This was previously published at and was retrieved from the Internet Archive
