A Learning-driven Approach for Behavior Modeling in Agent-based Simulation
Örebro University, School of Science and Technology.
2017 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Agent-based simulation is a prominent application of the agent-based system metaphor. One of the main characteristics of this simulation paradigm is the generative nature of the outcome: the macro-level system behavior is generated from the micro-level agent behavior. Designing this agent behavior becomes challenging, as it is not clear how much each individual agent will contribute to the macro-level phenomenon in the simulation.

Agent learning has proven successful for behavior configuration and calibration in many domains, and it can also be used to mitigate this design challenge: agents learn their behaviors, adapted towards their micro-level and some macro-level goals in the simulation. However, the machine learning techniques that could in principle be used in this context are usually black boxes, giving the modeler no access to, or understanding of, what was learned.

This thesis proposes an engineering method for developing agent behavior using agent learning. The focus of learning here is not on improving performance but on supporting a modeling endeavor: the results must be readable and explainable to and by the modeler. Instead of pre-equipping the agents with a behavior program, a model of the behavior is learned from scratch within a given environmental model.

The contributions of the research conducted are: a) a study of the general applicability of machine learning as a means to support agent behavior modeling, reviewing different techniques for learning behavior and for abstracting the learned behavior; b) the formulation of a novel engineering method, MABLe (Modeling Agent Behavior by Learning), encapsulating the general approach for learning behavior models; c) the construction of a general framework for applying the devised method inside an easily accessible agent-based simulation tool; and d) an evaluation of the proposed method and framework.

This thesis contributes to advancing the state of the art in agent-based simulation engineering: the design of individual agent behavior is supported by a novel engineering method, which may be better adapted to the way modelers generally proceed than methods inspired by software engineering.
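
To make the approach concrete, the sketch below learns a behavior model from scratch inside a deliberately tiny environment and then renders it as rules a modeler could read. It is only an illustration of the idea under stated assumptions: the corridor environment, the tabular Q-learning learner and every name in the code are hypothetical stand-ins, not the MABLe method or the framework developed in the thesis.

```python
# Minimal sketch: learn agent behavior from scratch in a given environment model,
# then abstract it into human-readable rules. All names are illustrative stand-ins.
import random

class ToyCorridor:
    """A 5-cell corridor; the intended macro outcome is that agents reach the exit."""
    def __init__(self, length=5):
        self.length = length
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):                      # action: -1 = step left, +1 = step right
        self.pos = max(0, min(self.length - 1, self.pos + action))
        done = self.pos == self.length - 1
        reward = 1.0 if done else -0.01          # the reward encodes the modeler's intent
        return self.pos, reward, done

def learn_behavior(env, episodes=300, alpha=0.5, gamma=0.9, eps=0.1):
    """Learning phase: build a state-action value table from scratch by exploration."""
    actions = (-1, +1)
    q = {(s, a): 0.0 for s in range(env.length) for a in actions}
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = random.choice(actions) if random.random() < eps \
                else max(actions, key=lambda x: q[(s, x)])
            s2, r, done = env.step(a)
            # Standard Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in actions) - q[(s, a)])
            s = s2
    return q

def abstract_behavior(q, env):
    """Abstraction phase: condense the learned table into rules the modeler can read."""
    rules = []
    for s in range(env.length - 1):              # skip the terminal (exit) cell
        best = max((-1, +1), key=lambda x: q[(s, x)])
        rules.append(f"in cell {s}: move {'right' if best > 0 else 'left'}")
    return rules

env = ToyCorridor()
for rule in abstract_behavior(learn_behavior(env), env):
    print(rule)
```

Running the sketch prints one rule per cell; the point argued in the thesis is that the modeler reviews artefacts of this readable kind rather than an opaque learned policy.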

Place, publisher, year, edition, pages
Örebro: Örebro University, 2017. p. 58
Series
Örebro Studies in Technology, ISSN 1650-8580 ; 75
Keywords [en]
agent-based simulation, agent modeling, agent learning
National Category
Information Systems
Identifiers
URN: urn:nbn:se:oru:diva-61117. ISBN: 978-91-7529-208-3 (print). OAI: oai:DiVA.org:oru-61117. DiVA, id: diva2:1144028
Public defence
2017-11-13, Örebro universitet, Teknikhuset, Hörsal T, Fakultetsgatan 1, Örebro, 09:00 (English)
Available from: 2017-09-25. Created: 2017-09-25. Last updated: 2018-01-13. Bibliographically approved.
List of papers
1. Evaluation of techniques for a learning-driven modeling methodology in multiagent simulation
2010 (English) In: Multiagent system technologies / [ed] Jürgen Dix, Cees Witteveen, Berlin, Germany: Springer, 2010, p. 185-196. Conference paper, Published paper (Refereed)
Abstract [en]

There have been a number of suggestions for methodologies supporting the development of multiagent simulation models. In this contribution we introduce a learning-driven methodology that exploits learning techniques for generating suggestions for agent behavior models based on a given environmental model; the output must be human-interpretable. We compare different candidate learning techniques - classifier systems, neural networks and reinforcement learning - concerning their appropriateness for such a modeling methodology.

Place, publisher, year, edition, pages
Berlin, Germany: Springer, 2010
Series
Lecture Notes in Computer Science ; 6251
Keywords
Multiagent Systems, Multiagent Simulation
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-14660 (URN); 10.1007/978-3-642-16178-0_18 (DOI); 000289106100018 (); 2-s2.0-78049373494 (Scopus ID); 978-3-642-16177-3 (ISBN)
Conference
8th German Conference, MATES 2010, Leipzig, Germany, September 27-29, 2010
Available from: 2011-02-17. Created: 2011-02-17. Last updated: 2018-01-12. Bibliographically approved.
2. Generating inspiration for multi-agent simulation design by Q-Learning
2010 (English) In: MALLOW-2010: proceedings of the multi-agent logics, languages, and organisations federated workshops 2010, 2010. Conference paper, Published paper (Refereed)
Abstract [en]

One major challenge in developing multiagent simulations is to find an agent design that is able to generate the intended overall phenomenon dynamics but does not contain unnecessary details. In this paper we suggest using agent learning to support the development of an agent model: the modeler defines the environmental model and the agent interfaces. Using rewards that capture the intended agent behavior, reinforcement learning techniques can learn the rules that optimally govern the agent behavior. However, to be really useful in a modeling and simulation context, a human modeler must be able to review and understand the outcome of the learning. We propose to use additional forms of learning as a post-processing step that supports the analysis of the learned model. We test our ideas using a simple evacuation simulation scenario.

Keywords
Multiagent Systems, Multiagent Simulation
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-14663 (URN)
Conference
MAS&S at MALLOW 2010, Lyon, France, August 30 - September 2
Available from: 2011-02-17. Created: 2011-02-17. Last updated: 2018-01-12. Bibliographically approved.
3. Modeling agent behavior through online evolutionary and reinforcement learning
2011 (English) In: Federated Conference on Computer Science and Information Systems (FedCSIS), 2011, IEEE, 2011, p. 643-650. Conference paper, Published paper (Refereed)
Abstract [en]

The process of creating and validating an agent-based simulation model requires the modeler to undergo a number of rounds of prototyping, testing, analysis and re-design. The aim is to specify and calibrate the proper low-level agent behavior that truly produces the intended macro-level phenomena. We assume that this development can be supported by agent learning techniques, especially by generating inspiration about behaviors as starting points for the modeler. In this contribution we address this learning-driven modeling task and compare two methods that produce decision trees: reinforcement learning with a post-processing step for generalization, and genetic programming.

Place, publisher, year, edition, pages
IEEE, 2011
Keywords
Multiagent Systems, Multiagent Simulation, Artificial Intelligence
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-16610 (URN); 978-1-4577-0041-5 (ISBN); 978-83-60810-39-2 (ISBN)
Conference
5th International Workshop on Multi-Agent Systems and Simulation (MAS&S), Szczecin, Poland, September 18-21, 2011
Available from: 2011-08-19. Created: 2011-08-19. Last updated: 2018-01-12. Bibliographically approved.
4. Generating inspiration for agent design by reinforcement learning
2012 (English) In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 54, no 6, p. 639-649. Article in journal (Refereed) Published
Abstract [en]

One major challenge in developing multiagent systems is to find an agent design that is able to generate the intended overall dynamics but does not contain unnecessary features. In this article we suggest using agent learning to support the development of an agent model during an analysis phase in agent-based software engineering. Here, the designer defines the environmental model and the agent interfaces. A reward function captures a description of the overall agent performance with respect to the intended outcome of the agent behavior. Based on this setup, reinforcement learning techniques can be used to learn rules that optimally govern the agent behavior. However, to be really useful for analysis, the human developer must be able to review and fully understand the learnt behavior program. We propose to use additional learning mechanisms in a post-processing step that supports the use of the learnt model.
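
As an illustration of such a reward function, a minimal sketch under assumed details is given below: the evacuation-style scenario, the observation fields and the weights are made up for the example and are not taken from the article.

```python
# Hypothetical reward function for an evacuation-style agent. The observation
# fields and weights are illustrative assumptions, not values from the article.
def reward(observation, action, next_observation):
    r = -0.01                                   # small cost per step: encourages fast egress
    if next_observation["reached_exit"]:
        r += 1.0                                # the intended outcome: the agent leaves the building
    if next_observation["collided"]:
        r -= 0.5                                # discourages pushing into walls or other agents
    return r

# Example: a step in which the agent bumps into a wall.
print(reward({"reached_exit": False, "collided": False}, "move_north",
             {"reached_exit": False, "collided": True}))    # -> -0.51
```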

Place, publisher, year, edition, pages
Elsevier, 2012
Keywords
Agent-oriented software engineering, Multiagent systems, Multiagent simulation
National Category
Computer and Information Sciences
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-22709 (URN); 10.1016/j.infsof.2011.12.002 (DOI); 000302587100009 (); 2-s2.0-84858074569 (Scopus ID)
Available from: 2012-05-03. Created: 2012-05-03. Last updated: 2018-01-12. Bibliographically approved.
5. Programming agent behavior by learning in simulation models
2012 (English) In: Applied Artificial Intelligence, ISSN 0883-9514, E-ISSN 1087-6545, Vol. 26, no 4, p. 349-375. Article in journal (Refereed) Published
Abstract [en]

Designing the proper agent behavior for a multiagent system is a complex task. Often it is not obvious which of the agents' actions, and which of the interactions among them and with their environment, can produce the intended macro-phenomenon. We assume that the modeler can benefit from using agent-learning techniques. There are several issues with which learning can help modeling, for example by using self-adaptive agents for calibration. In this contribution we deal with another example: the use of learning for supporting system analysis and model design. A candidate learning architecture is the combination of reinforcement learning and decision tree learning: the former generates a policy for agent behavior, and the latter is used for abstraction and interpretation purposes. Here, we focus on the relation between policy-learning convergence and the quality of the abstracted model produced from it.
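
A minimal sketch of the abstraction half of that architecture is given below. It assumes the policy has already been learned and is available as a situation-action mapping; the two features, the synthetic policy and the use of scikit-learn's DecisionTreeClassifier are assumptions for illustration, not the article's actual setup.

```python
# Abstracting a learned situation-action mapping into a readable decision tree.
# The features and the 'learned' policy below are synthetic stand-ins for the
# output of reinforcement learning; scikit-learn is used only for brevity.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each situation: [distance_to_exit, crowd_density]; label: the action the policy chose.
situations = [[d, c] for d in range(10) for c in range(5)]
actions = ["wait" if c >= 3 and d > 2 else "move_to_exit" for d, c in situations]

tree = DecisionTreeClassifier(max_depth=3).fit(situations, actions)

# The exported rules are what the modeler reviews instead of the raw policy table.
print(export_text(tree, feature_names=["distance_to_exit", "crowd_density"]))
```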

Place, publisher, year, edition, pages
Taylor & Francis, 2012
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Research subject
Information technology
Identifiers
urn:nbn:se:oru:diva-23067 (URN); 10.1080/08839514.2012.652906 (DOI); 000303822500004 (); 2-s2.0-84861056993 (Scopus ID)
Available from: 2012-05-31. Created: 2012-05-31. Last updated: 2018-02-02. Bibliographically approved.
6. Behavior abstraction robustness in agent modeling
2012 (English) In: Web Intelligence and Intelligent Agent Technology (WIIAT), IEEE Computer Society Digital Library, 2012, p. 228-235. Conference paper, Published paper (Refereed)
Abstract [en]

Due to the "generative" nature of the macro phenomena, agent-based systems require experience from the modeler to determine the proper low-level agent behavior. Adaptive and learning agents can facilitate this task: partial or preliminary learnt versions of the behavior can serve as inspiration for the human modeler. Using a simulation process, we develop agents that explore sensors and actuators inside a given environment. The exploration is guided by the attribution of rewards to their actions, expressed in an objective function. These rewards are used to develop a situation-action mapping, which is later abstracted to a human-readable format. In this contribution we test the robustness of a decision-tree representation of the agent's decision-making process with regard to changes in the objective function. The importance of this study lies in understanding how sensitive the final abstraction of the model, and not merely its performance, is to the definition of the objective function.
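
The robustness question can be illustrated with a short sketch: two synthetic situation-action mappings stand in for behaviors learned under two variants of the objective function, each is abstracted to a decision tree, and the trees are compared on how often they prescribe the same action. All data, feature names and thresholds are assumptions made up for the example.

```python
# Comparing the trees abstracted from two policies that were (hypothetically)
# learned under slightly different objective-function weightings.
from sklearn.tree import DecisionTreeClassifier

situations = [[d, c] for d in range(10) for c in range(5)]   # [distance_to_exit, crowd_density]

# Synthetic stand-ins for situation-action mappings learned under two reward
# weightings; the second weighting tolerates slightly denser crowds.
policy_a = ["wait" if c >= 3 and d > 2 else "move_to_exit" for d, c in situations]
policy_b = ["wait" if c >= 4 and d > 2 else "move_to_exit" for d, c in situations]

tree_a = DecisionTreeClassifier(max_depth=3).fit(situations, policy_a)
tree_b = DecisionTreeClassifier(max_depth=3).fit(situations, policy_b)

# One simple robustness indicator: on how many situations do the two
# abstracted behavior models still prescribe the same action?
matches = sum(a == b for a, b in zip(tree_a.predict(situations), tree_b.predict(situations)))
print(f"abstractions agree on {matches / len(situations):.0%} of situations")
```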

Place, publisher, year, edition, pages
IEEE Computer Society Digital Library, 2012
Keywords
Multiagent systems
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-29233 (URN); 10.1109/WI-IAT.2012.157 (DOI)
Conference
2012 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology (WI-IAT), December 4-7, 2012, Macau, China
Available from: 2013-05-29. Created: 2013-05-29. Last updated: 2018-01-11. Bibliographically approved.
7. How to design agent-based simulation models using agent learning
2012 (English) In: Winter Simulation Conference Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 2012, p. 1-10. Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

The question of how best to develop an agent-based simulation model becomes more important as this paradigm is used more and more. Clearly, general model development processes can be used, but these do not solve the major problem of actually deciding on the agents' structure and behavior. In this contribution we introduce the MABLe methodology for analyzing and designing agent simulation models, which relies on adaptive agents: the agent helps the modeler by proposing a suitable behavior program. We test our methodology in a pedestrian evacuation scenario. Results demonstrate that the agents can learn, and report back to the modeler, a behavior that is interestingly better than a hand-made model.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2012
Series
Winter Simulation Conference Proceedings, ISSN 0891-7736
National Category
Computer Systems
Research subject
Computer and Systems Science
Identifiers
urn:nbn:se:oru:diva-24164 (URN); 10.1109/WSC.2012.6465017 (DOI); 000319225500037 (); 2-s2.0-84874704749 (Scopus ID); 978-1-4673-4779-2 (ISBN)
Conference
Winter Simulation Conference (WSC 2012), Berlin, Germany, December 9-12, 2012
Available from: 2012-07-25. Created: 2012-07-25. Last updated: 2017-10-27. Bibliographically approved.
8. Learning Tools for Agent-based Modeling and Simulation
2013 (English) In: Künstliche Intelligenz, ISSN 0933-1875, E-ISSN 1610-1987, Vol. 27, no 3, p. 273-280. Article in journal (Refereed) Published
Abstract [en]

In this project report, we describe ongoing research on supporting the development of agent-based simulation models. The vision is that the agents themselves should learn their (individual) behavior model, instead of letting a human modeler test which of the many possible agent-level behaviors leads to the correct macro-level observations. To that end, we integrate a suite of agent learning tools into SeSAm, a fully visual platform for agent-based simulation models. This integration is the focus of this contribution.

Place, publisher, year, edition, pages
Heidelberg: Springer, 2013
National Category
Computer Sciences
Research subject
Information technology; Computer Science
Identifiers
urn:nbn:se:oru:diva-33882 (URN); 10.1007/s13218-013-0258-z (DOI)
Available from: 2014-02-20. Created: 2014-02-20. Last updated: 2018-01-11. Bibliographically approved.

Authority records

Junges, Robert
