Junges, Robert
Publications (10 of 13)
Junges, R. (2017). A Learning-driven Approach for Behavior Modeling in Agent-based Simulation. (Doctoral dissertation). Örebro: Örebro University
A Learning-driven Approach for Behavior Modeling in Agent-based Simulation
2017 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Agent-based simulation is a prominent application of the agent-based system metaphor. One of the main characteristics of this simulation paradigm is the generative nature of the outcome: the macro-level system behavior is generated from the micro-level agent behavior. Designing this agent behavior becomes challenging, as it is not clear how much each individual agent will contribute to the macro-level phenomenon in the simulation.

Agent learning has proven successful for behavior configuration and calibration in many domains. It can also be used to mitigate the design challenge here: agents learn their behaviors, adapted towards their micro-level and some macro-level goals in the simulation. However, the machine learning techniques that could in principle be used in this context usually constitute black boxes, giving the modeler no insight into what was learned.

This thesis proposes an engineering method for developing agent behavior using agent learning. The focus of learning here is not on improving performance but on supporting a modeling endeavor: the results must be readable and explainable to and by the modeler. Instead of pre-equipping the agents with a behavior program, a model of the behavior is learned from scratch within a given environmental model.

The following are the contributions of the research conducted: a) a study of the general applicability of machine learning as a means to support agent behavior modeling, reviewing different techniques for learning and for abstracting the learned behavior; b) the formulation of a novel engineering method encapsulating the general approach for learning behavior models: MABLe (Modeling Agent Behavior by Learning); c) the construction of a general framework for applying the devised method inside an easily accessible agent-based simulation tool; d) an evaluation of the proposed method and framework.

This thesis contributes to advancing the state of the art in agent-based simulation engineering: the design of individual agent behavior is supported by a novel engineering method, which may be better adapted to the general way modelers proceed than methods inspired by software engineering.

Place, publisher, year, edition, pages
Örebro: Örebro University, 2017. p. 58
Series
Örebro Studies in Technology, ISSN 1650-8580 ; 75
Keywords
agent-based simulation, agent modeling, agent learning
National Category
Information Systems
Identifiers
urn:nbn:se:oru:diva-61117 (URN)978-91-7529-208-3 (ISBN)
Public defence
2017-11-13, Örebro universitet, Teknikhuset, Hörsal T, Fakultetsgatan 1, Örebro, 09:00 (English)
Opponent
Supervisors
Available from: 2017-09-25 Created: 2017-09-25 Last updated: 2018-01-13. Bibliographically approved
Junges, R. & Klügl, F. (2013). Learning Agent Models in SeSAm: (Demonstration). In: Ito, Jonker, Gini and Shehory (Ed.), : . Paper presented at 12th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2013), May 2013, St. Paul, USA. The International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
Learning Agent Models in SeSAm: (Demonstration)
2013 (English) In: / [ed] Ito, Jonker, Gini and Shehory, The International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), 2013. Conference paper, Published paper (Refereed)
Abstract [en]

Designing the agent model in a multiagent simulation is a challenging task due to the generative nature of such systems. In this contribution we present an extension to the multiagent simulation platform SeSAm, introducing a learning-based design strategy for building agent behavior models.

Place, publisher, year, edition, pages
The International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), 2013
National Category
Computer Sciences
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-29234 (URN)
Conference
12th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2013), May 2013, St. Paul, USA
Available from: 2013-05-29 Created: 2013-05-29 Last updated: 2018-01-11. Bibliographically approved
Junges, R. & Klügl, F. (2013). Learning Tools for Agent-based Modeling and Simulation. Künstliche Intelligenz, 27(3), 273-280
Learning Tools for Agent-based Modeling and Simulation
2013 (English) In: Künstliche Intelligenz, ISSN 0933-1875, E-ISSN 1610-1987, Vol. 27, no 3, p. 273-280. Article in journal (Refereed) Published
Abstract [en]

In this project report, we describe ongoing research on supporting the development of agent-based simulation models. The vision is that the agents themselves should learn their (individual) behavior model, instead of letting a human modeler test which of the many possible agent-level behaviors leads to the correct macro-level observations. To that aim, we integrate a suite of agent learning tools into SeSAm, a fully visual platform for agent-based simulation models. This integration is the focus of this contribution.

Place, publisher, year, edition, pages
Heidelberg: Springer, 2013
National Category
Computer Sciences
Research subject
Information technology; Computer Science
Identifiers
urn:nbn:se:oru:diva-33882 (URN)10.1007/s13218-013-0258-z (DOI)
Available from: 2014-02-20 Created: 2014-02-20 Last updated: 2018-01-11. Bibliographically approved
Junges, R. & Klügl, F. (2012). Behavior abstraction robustness in agent modeling. In: Web Intelligence and Intelligent Agent Technology (WIIAT): . Paper presented at 2012 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology (WI-IAT), Dec, 4-7 2012, Macau, China (pp. 228-235). IEEE Computer Society Digital Library
Behavior abstraction robustness in agent modeling
2012 (English) In: Web Intelligence and Intelligent Agent Technology (WIIAT), IEEE Computer Society Digital Library, 2012, p. 228-235. Conference paper, Published paper (Refereed)
Abstract [en]

Due to the "generative" nature of the macro phenomena, agent-based systems require experience from the modeler to determine the proper low-level agent behavior. Adaptive and learning agents can facilitate this task: partial or preliminary learnt versions of the behavior can serve as inspiration for the human modeler. Using a simulation process, we develop agents that explore sensors and actuators inside a given environment. The exploration is guided by the attribution of rewards to their actions, expressed in an objective function. These rewards are used to develop a situation-action mapping, later abstracted to a human-readable format. In this contribution we test the robustness of a decision-tree representation of the agent's decision-making process with regard to changes in the objective function. The importance of this study lies in understanding how sensitive the final abstraction of the model is to the definition of the objective function, not merely its effect on a performance evaluation.
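
The sensitivity the abstract studies can be illustrated with a toy example; the situations, actions, and weights below are invented for illustration and are not taken from the paper. A weighted objective function induces a greedy situation-action mapping, and changing the weights can flip that mapping, so the abstracted behavior model changes too:

```python
# Effect of each action in each situation: (progress gained, risk incurred).
# All features, actions, and numbers here are hypothetical.
EFFECTS = {
    ("open",    "move"): (1.0, 0.1),
    ("open",    "wait"): (0.0, 0.0),
    ("crowded", "move"): (1.0, 0.8),
    ("crowded", "wait"): (0.0, 0.1),
}

def induced_mapping(w_progress, w_risk):
    """Greedy situation-action mapping under one objective function."""
    mapping = {}
    for situation in ["open", "crowded"]:
        mapping[situation] = max(
            ["move", "wait"],
            key=lambda a: (w_progress * EFFECTS[(situation, a)][0]
                           - w_risk * EFFECTS[(situation, a)][1]),
        )
    return mapping

print(induced_mapping(1.0, 0.5))  # progress-heavy objective
print(induced_mapping(1.0, 2.0))  # risk-averse objective: "crowded" flips to "wait"
```

Here only the weight on risk changes, yet the rule learned for the "crowded" situation flips from "move" to "wait" — exactly the kind of abstraction-level sensitivity the paper examines.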

Place, publisher, year, edition, pages
IEEE Computer Society Digital Library, 2012
Keywords
Multiagent systems
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-29233 (URN)10.1109/WI-IAT.2012.157 (DOI)
Conference
2012 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology (WI-IAT), Dec, 4-7 2012, Macau, China
Available from: 2013-05-29 Created: 2013-05-29 Last updated: 2018-01-11. Bibliographically approved
Junges, R. & Klügl, F. (2012). Behavior modeling from learning agents: sensitivity to objective function details. In: Conitzer, Winikoff, Padgham, and van der Hoek (Ed.), Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2012): volume 3. Paper presented at 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2012), 4-8 June 2012, Valencia, Spain (pp. 1335-1336). Richland SC: The International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
Behavior modeling from learning agents: sensitivity to objective function details
2012 (English) In: Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2012): volume 3 / [ed] Conitzer, Winikoff, Padgham, and van der Hoek, Richland SC: The International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), 2012, p. 1335-1336. Conference paper, Published paper (Refereed)
Abstract [en]

The process of finding the appropriate agent behavior is a cumbersome task, no matter whether it is for agent-based software or simulation models. Machine learning can help by generating partial or preliminary versions of the agent low-level behavior. However, to be of actual use to the human modeler, the results must be interpretable, which may require a post-processing step after the actual behavior learning. In this contribution we test the sensitivity of the resulting, interpretable behavior program with respect to parameters and components of the function that describes the intended behavior.

Place, publisher, year, edition, pages
Richland SC: The International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), 2012
National Category
Computer Systems
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-23808 (URN)978-0-9817381-3-0 (ISBN)
Conference
11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2012), 4-8 June 2012, Valencia, Spain
Available from: 2012-07-02 Created: 2012-07-02 Last updated: 2018-03-05. Bibliographically approved
Junges, R. & Klügl, F. (2012). Generating inspiration for agent design by reinforcement learning. Information and Software Technology, 54(6), 639-649
Generating inspiration for agent design by reinforcement learning
2012 (English) In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 54, no 6, p. 639-649. Article in journal (Refereed) Published
Abstract [en]

One major challenge in developing multiagent systems is to find an agent design that is able to generate the intended overall dynamics but does not contain unnecessary features. In this article we suggest using agent learning to support the development of an agent model during the analysis phase of agent-based software engineering. Here, the designer defines the environmental model and the agent interfaces. A reward function captures a description of the overall agent performance with respect to the intended outcome of the agent behavior. Based on this setup, reinforcement learning techniques can be used to learn rules that optimally govern the agent behavior. However, for the learnt behavior program to be really useful for analysis, the human developer must be able to review and fully understand it. We propose using additional learning mechanisms in a post-processing step to support the usage of the learnt model.

Place, publisher, year, edition, pages
Elsevier, 2012
Keywords
Agent-oriented software engineering, Multiagent systems, Multiagent simulation
National Category
Computer and Information Sciences
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-22709 (URN)10.1016/j.infsof.2011.12.002 (DOI)000302587100009 ()2-s2.0-84858074569 (Scopus ID)
Available from: 2012-05-03 Created: 2012-05-03 Last updated: 2018-01-12. Bibliographically approved
Junges, R. & Klügl, F. (2012). How to design agent-based simulation models using agent learning. In: Winter Simulation Conference Proceedings: . Paper presented at Winter Simulation Conference (WSC 2012), Berlin, Germany, December 9-12, 2012 (pp. 1-10). Institute of Electrical and Electronics Engineers (IEEE)
How to design agent-based simulation models using agent learning
2012 (English) In: Winter Simulation Conference Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 2012, p. 1-10. Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

The question of how best to develop an agent-based simulation model becomes more important as the paradigm sees increasing use. Clearly, general model development processes can be applied, but these do not solve the central problem of actually deciding on the agents' structure and behavior. In this contribution we introduce MABLe, a methodology for analyzing and designing agent simulation models that relies on adaptive agents: the agent helps the modeler by proposing a suitable behavior program. We test our methodology in a pedestrian evacuation scenario. Results demonstrate that the agents can learn, and report back to the modeler, a behavior that is, interestingly, better than a hand-made model.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2012
Series
Winter Simulation Conference Proceedings, ISSN 0891-7736
National Category
Computer Systems
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-24164 (URN)10.1109/WSC.2012.6465017 (DOI)000319225500037 ()2-s2.0-84874704749 (Scopus ID)978-1-4673-4779-2 (ISBN)
Conference
Winter Simulation Conference (WSC 2012), Berlin, Germany, December 9-12, 2012
Available from: 2012-07-25 Created: 2012-07-25 Last updated: 2017-10-27. Bibliographically approved
Junges, R. & Klügl, F. (2012). Programming agent behavior by learning in simulation models. Applied Artificial Intelligence, 26(4), 349-375
Programming agent behavior by learning in simulation models
2012 (English) In: Applied Artificial Intelligence, ISSN 0883-9514, E-ISSN 1087-6545, Vol. 26, no 4, p. 349-375. Article in journal (Refereed) Published
Abstract [en]

Designing the proper agent behavior for a multiagent system is a complex task. Often it is not obvious which of the agents' actions, and which interactions among them and with their environment, produce the intended macro-phenomenon. We assume that the modeler can benefit from using agent-learning techniques. Learning can help modeling in several ways, for example calibration through self-adaptive agents. In this contribution we deal with another one: the use of learning to support system analysis and model design. A candidate learning architecture is the combination of reinforcement learning and decision-tree learning: the former generates a policy for agent behavior, and the latter is used for abstraction and interpretation. Here we focus on the relation between policy-learning convergence and the quality of the model abstracted from it.
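
The two-stage architecture the abstract describes — learn a policy by reinforcement, then abstract it into something the modeler can read — can be sketched minimally as follows. This is a hypothetical toy illustration (a one-dimensional corridor world with invented rewards and a range-merging abstraction instead of full decision-tree induction), not the authors' implementation:

```python
import random

random.seed(0)

N = 6                 # corridor cells 0..5, goal at cell 5
ACTIONS = ["left", "right"]
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

def step(state, action):
    """One environment transition; reward 1.0 on reaching the goal."""
    nxt = max(0, state - 1) if action == "left" else min(N - 1, state + 1)
    return nxt, (1.0 if nxt == N - 1 else 0.0)

# --- Stage 1: reinforcement learning builds a situation-action value table ---
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
for _ in range(500):
    s = random.randrange(N - 1)          # exploring starts
    while s != N - 1:
        a = random.choice(ACTIONS) if random.random() < EPS \
            else max(ACTIONS, key=lambda b: Q[(s, b)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# --- Stage 2: abstract the greedy policy into human-readable rules ---
policy = {s: max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N - 1)}
rules, start = [], 0
for s in range(1, N - 1):
    if policy[s] != policy[start]:       # merge adjacent states with equal action
        rules.append((start, s - 1, policy[start]))
        start = s
rules.append((start, N - 2, policy[start]))

for lo, hi, action in rules:
    print(f"IF cell in [{lo}, {hi}] THEN {action}")
```

The point of the second stage is that the modeler never reads the raw Q-table; the learned situation-action mapping is compressed into a few readable rules that can inspire the hand-written model.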

Place, publisher, year, edition, pages
Taylor & Francis, 2012
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Research subject
Information technology
Identifiers
urn:nbn:se:oru:diva-23067 (URN)10.1080/08839514.2012.652906 (DOI)000303822500004 ()2-s2.0-84861056993 (Scopus ID)
Available from: 2012-05-31 Created: 2012-05-31 Last updated: 2018-02-02. Bibliographically approved
Junges, R. & Klügl, F. (2011). Evolution for modeling: a genetic programming framework for SeSAm. In: GECCO '11: Proceedings of the 13th annual conference companion on Genetic and evolutionary computation. Paper presented at Evolutionary computation and multi-agent systems and simulation (ECoMASS) - fifth annual workshop, GECCO’11, July 12–16, 2011, Dublin, Ireland (pp. 551-558). ACM Digital Library
Evolution for modeling: a genetic programming framework for SeSAm
2011 (English) In: GECCO '11: Proceedings of the 13th annual conference companion on Genetic and evolutionary computation, ACM Digital Library, 2011, p. 551-558. Conference paper, Published paper (Refereed)
Abstract [en]

Developing a valid agent-based simulation model is not always straightforward, but involves a lot of prototyping, testing and analyzing until the right low-level behavior is fully specified and calibrated. Our aim is to replace the modeler's trial-and-error search with adaptive agents that learn a behavior which can then serve as a source of inspiration for the modeler. In this contribution we suggest genetic programming as the learning mechanism. To this end we developed a genetic programming framework integrated into the visual agent-based modeling and simulation tool SeSAm, providing similarly easy-to-use functionality.
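
The core genetic-programming loop behind such a framework can be sketched generically; this is a minimal, self-contained toy (evolving arithmetic program trees against an invented target behavior with mutation-only variation), not the SeSAm framework:

```python
import random

random.seed(1)

# Function set and terminal set for the evolved program trees.
FUNCS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
TERMS = ["x", 1.0]

def random_tree(depth=3):
    """Grow a random expression tree up to the given depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return (random.choice(list(FUNCS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if not isinstance(tree, tuple):
        return tree
    f, left, right = tree
    return FUNCS[f](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    """Squared error against a toy target behavior, y = x*x + 1 (lower is better)."""
    try:
        return sum((evaluate(tree, x) - (x * x + 1)) ** 2 for x in range(-3, 4))
    except OverflowError:
        return float("inf")

def mutate(tree):
    """Replace a randomly chosen subtree with a freshly grown one."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(2)
    f, left, right = tree
    return (f, mutate(left), right) if random.random() < 0.5 else (f, left, mutate(right))

# Evolve: keep the 20 fittest, refill the population with mutants of survivors.
pop = [random_tree() for _ in range(60)]
for _ in range(40):
    pop.sort(key=fitness)
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(40)]
best = min(pop, key=fitness)
print("best fitness:", fitness(best))
```

In the modeling setting described above, the program trees would be built from the agent's sensors and actuators rather than arithmetic primitives, and the fittest tree would be shown to the modeler as a candidate behavior, not deployed directly.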

Place, publisher, year, edition, pages
ACM Digital Library, 2011
Keywords
Multiagent Systems, Multiagent Simulation, Artificial Intelligence
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-16609 (URN)10.1145/2001858.2002047 (DOI)978-1-4503-0690-4 (ISBN)
Conference
Evolutionary computation and multi-agent systems and simulation (ECoMASS) - fifth annual workshop, GECCO’11, July 12–16, 2011, Dublin, Ireland
Available from: 2011-08-19 Created: 2011-08-19 Last updated: 2018-03-06. Bibliographically approved
Junges, R. & Klügl, F. (2011). Modeling agent behavior through online evolutionary and reinforcement learning. In: Federated Conference on Computer Science and Information Systems (FedCSIS), 2011: . Paper presented at 5th International Workshop on Multi-Agent Systems and Simulation (MAS&S), Szczecin, Poland, September 18-21, 2011 (pp. 643-650). IEEE
Modeling agent behavior through online evolutionary and reinforcement learning
2011 (English) In: Federated Conference on Computer Science and Information Systems (FedCSIS), 2011, IEEE, 2011, p. 643-650. Conference paper, Published paper (Refereed)
Abstract [en]

The process of creating and validating an agent-based simulation model requires the modeler to undergo a number of prototyping, testing, analyzing and re-designing rounds. The aim is to specify and calibrate the proper low-level agent behavior that truly produces the intended macro-level phenomena. We assume that this development can be supported by agent learning techniques, especially by generating inspiration about behaviors as starting points for the modeler. In this contribution we address this learning-driven modeling task and compare two methods that produce decision trees: reinforcement learning with a post-processing step for generalization, and genetic programming.

Place, publisher, year, edition, pages
IEEE, 2011
Keywords
Multiagent Systems, Multiagent Simulation, Artificial Intelligence
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-16610 (URN)978-1-4577-0041-5 (ISBN)978-83-60810-39-2 (ISBN)
Conference
5th International Workshop on Multi-Agent Systems and Simulation (MAS&S), Szczecin, Poland, September 18-21, 2011
Available from: 2011-08-19 Created: 2011-08-19 Last updated: 2018-01-12. Bibliographically approved