Jörg Denzinger's

Multi-Agent Systems Projects

Multi-agent systems are a very wide field of research that connects with many other fields in computer science, but also with fields outside of computer science, such as engineering, the social sciences, or business. Although researchers in this field still do not agree on a single definition of an agent, the importance of studying, modeling, developing, and analysing the possible interactions between more or less autonomous entities acting in a more or less common environment is very widely recognized. Today, many computing systems span networks of computers and interact with each other and with human beings, and in order either to understand what these systems do or to develop them to perform certain tasks, concepts from the field of multi-agent systems are needed.

Building, using, or "working" in a multi-agent system adds an additional dimension to "conventional" programs that can be exploited very successfully, but that also results in additional obstacles, risks, and problems. When implementing or acting as an agent in a multi-agent system, one might observe behavior of other agents that ranges from cooperative and friendly to indifferent to even hostile (and this behavior might not always be intentional on the part of the other agents, their designers, or their "masters"). So, the behavior of the green robbie in the following picture is something the red robbie might have to deal with.

A robot hiding requested information and misleading another robot

On the other hand, when designing either a whole multi-agent system or at least a group of agents within such a system, the goal of a developer in most cases is that his or her agents work together to fulfill one or several tasks (taking what they can get from other agents along the way). If there are several tasks, then agents might have different priorities, which makes it necessary to develop some form of conflict management that hopefully achieves the best compromise between the different goals (naturally, already defining "best" in such a context is not easy, and the definition is either very subjective or refers to concepts like "the best for the society").
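One simple way to make such a compromise concrete is to weight each agent's utility by its priority and pick the option with the highest total score. The following sketch is purely illustrative (all names and numbers are invented, and this is not the conflict management used in our systems); it only shows the kind of trade-off such a mechanism computes.

```python
# Hypothetical sketch: resolving conflicting agent priorities by choosing the
# option that maximizes the sum of priority-weighted utilities.
# The agent names, priorities, and utilities below are invented examples.

def best_compromise(agents, options):
    """Return the option with the highest total priority-weighted utility."""
    def social_welfare(option):
        return sum(agent["priority"] * agent["utility"][option]
                   for agent in agents)
    return max(options, key=social_welfare)

agents = [
    {"name": "A", "priority": 2.0, "utility": {"task1": 5, "task2": 1}},
    {"name": "B", "priority": 1.0, "utility": {"task1": 0, "task2": 9}},
]
# task1 scores 2*5 + 1*0 = 10, task2 scores 2*1 + 1*9 = 11, so task2 wins.
print(best_compromise(agents, ["task1", "task2"]))
```

Note that the result depends entirely on how the priorities are set, which is exactly where the subjectivity of "best" enters.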

Usually, if all the agents have the same, clearly defined goal, then the resulting conflicts are not as hard as in the other case, but conflicts about how to reach the goal (or how to do the task) can still arise. Such conflicts do not have to be a bad thing, however, especially if there is no way to determine, before taking on a task, what the best way to do it is. Our concepts for distributed knowledge-based search, TEAMWORK and TECHS, show how to successfully exploit this additional dimension of multi-agent systems to find better solutions faster.
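The core idea of cooperative search can be illustrated with a toy example: several search agents attack the same problem with different strategies and periodically exchange their best intermediate results, so that the whole team can continue from the best one. The sketch below is an assumption-laden illustration of this principle only, not the actual TEAMWORK or TECHS algorithms.

```python
# Illustrative sketch of cooperative search: agents with different strategies
# (here, different step sizes) search in parallel phases and then all adopt
# the team's best result before the next phase. Not the TEAMWORK/TECHS code.
import random

def local_search(f, start, step, rounds, rng):
    """Simple stochastic hill-climbing minimizing f, starting at `start`."""
    best = start
    for _ in range(rounds):
        cand = best + rng.uniform(-step, step)
        if f(cand) < f(best):
            best = cand
    return best

def team_search(f, n_agents=4, periods=5):
    rng = random.Random(0)
    # Each agent gets a different step size as its individual "strategy".
    steps = [0.1 * (i + 1) for i in range(n_agents)]
    positions = [rng.uniform(-10, 10) for _ in range(n_agents)]
    for _ in range(periods):
        positions = [local_search(f, p, s, 50, rng)
                     for p, s in zip(positions, steps)]
        # Cooperation phase: everyone continues from the team's best result.
        champion = min(positions, key=f)
        positions = [champion] * n_agents
    return min(positions, key=f)

result = team_search(lambda x: (x - 3) ** 2)  # minimum is at x = 3
```

Even in this toy setting, the exchange step lets a badly started agent benefit from a luckier colleague, which is the effect the text describes.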

In addition to these projects, we are also interested in how to develop groups of agents that work together well. Even more, we want to automate this development by enabling the agents to learn, and to use their learning capabilities to adapt to the given task and to the abilities and behavior of other agents. Our OLEMAS system provides us with a testbed and a first collection of agent models and learning methods for tackling this direction of research.

Out of our experience with learning of behavior, we developed a new research direction: using learning of behavior to test computer systems for unwanted behavior. Our general approach is explained here.


Last Change: 21/3/2017