Generating inspiration for agent design by reinforcement learning
Junges, Robert: Örebro University, School of Science and Technology.
Klügl, Franziska: Örebro University, School of Science and Technology. ORCID iD: 0000-0002-1470-6288
2012 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 54, no. 6, p. 639-649. Article in journal (Refereed). Published.
Abstract [en]

One major challenge in developing multiagent systems is finding an agent design that generates the intended overall dynamics without containing unnecessary features. In this article we suggest using agent learning to support the development of an agent model during the analysis phase of agent-based software engineering. The designer defines the environmental model and the agent interfaces, and a reward function captures the overall agent performance with respect to the intended outcome of the agent behavior. Based on this setup, reinforcement learning techniques can be used to learn rules that optimally govern the agent behavior. However, to be genuinely useful for analysis, the learnt behavior program must be something the human developer can review and fully understand. We therefore propose additional learning mechanisms as a post-processing step that supports the use of the learnt model.
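
To make the setup described above concrete, the following is a minimal, self-contained sketch of the kind of reinforcement learning loop the article builds on. It is not the authors' implementation: the 4x4 grid environment, the goal cell, the reward function and all parameter values below are hypothetical stand-ins for the designer-supplied environmental model, agent interface and reward function.

```python
# A minimal sketch, NOT the authors' implementation: tabular Q-learning for a
# single agent in a toy 4x4 grid world. Environment, goal and reward are
# hypothetical stand-ins for the designer-supplied environmental model,
# agent interface and reward function the abstract refers to.
import random
from collections import defaultdict

ACTIONS = ["up", "down", "left", "right"]
GOAL = (3, 3)

def step(state, action):
    """Hypothetical environmental model: move on the grid, reward 1.0 at the goal."""
    x, y = state
    if action == "up":
        y = min(y + 1, 3)
    elif action == "down":
        y = max(y - 1, 0)
    elif action == "left":
        x = max(x - 1, 0)
    elif action == "right":
        x = min(x + 1, 3)
    next_state = (x, y)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

def q_learning(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.2, max_steps=500):
    q = defaultdict(float)  # (state, action) -> estimated return
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(max_steps):
            if random.random() < epsilon:              # explore
                action = random.choice(ACTIONS)
            else:                                      # exploit current estimates
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            next_state, reward, done = step(state, action)
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
            if done:
                break
    return q

if __name__ == "__main__":
    q = q_learning()
    # The greedy policy per state is the raw learnt "behavior program".
    states = [(x, y) for x in range(4) for y in range(4)]
    policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in states}
    print(policy)
```

The greedy policy read off the Q-table is the kind of learnt behavior program that, according to the article, still needs a post-processing step before a developer can review and fully understand it.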

Place, publisher, year, edition, pages
Elsevier, 2012. Vol. 54, no. 6, p. 639-649
Keywords [en]
Agent-oriented software engineering, Multiagent systems, Multiagent simulation
National Category
Computer and Information Sciences
Research subject
Computer and Systems Science
Identifiers
URN: urn:nbn:se:oru:diva-22709
DOI: 10.1016/j.infsof.2011.12.002
ISI: 000302587100009
Scopus ID: 2-s2.0-84858074569
OAI: oai:DiVA.org:oru-22709
DiVA id: diva2:524742
Available from: 2012-05-03. Created: 2012-05-03. Last updated: 2018-01-12. Bibliographically approved.
In thesis
1. A Learning-driven Approach for Behavior Modeling in Agent-based Simulation
2017 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Agent-based simulation is a prominent application of the agent-based system metaphor. One of the main characteristics of this simulation paradigm is the generative nature of the outcome: the macro-level system behavior is generated from the micro-level agent behavior. Designing this agent behavior becomes challenging, as it is not clear how much each individual agent will contribute to the macro-level phenomenon in the simulation.

Agent learning has proven successful for behavior configuration and calibration in many domains, and it can also be used to mitigate the design challenge here: agents learn their behaviors, adapting them towards their micro-level and, to some extent, macro-level goals in the simulation. However, the machine learning techniques that could in principle be used in this context are usually black boxes that give the modeler no insight into what was learned.

This thesis proposes an engineering method for developing agent behavior using agent learning. The focus of learning here is not on improving performance but on supporting a modeling endeavor: the results must be readable by, and explainable to, the modeler. Instead of pre-equipping the agents with a behavior program, a model of the behavior is learned from scratch within a given environmental model.

The research conducted makes the following contributions: a) a study of the general applicability of machine learning as a means of supporting agent behavior modeling, reviewing different techniques for learning behavior and for abstracting what was learned; b) the formulation of a novel engineering method, MABLe (Modeling Agent Behavior by Learning), encapsulating the general approach for learning behavior models; c) the construction of a general framework for applying the devised method inside an easily accessible agent-based simulation tool; d) an evaluation of the proposed method and framework.

This thesis advances the state of the art in agent-based simulation engineering: the design of individual agent behavior is supported by a novel engineering method, which may be better adapted to the way modelers generally proceed than methods inspired by software engineering.
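
As an illustration of the kind of post-processing for readability that both the article and the thesis call for, the sketch below fits a shallow decision tree to state/action pairs sampled from a learned policy and prints it as if-then rules. This is a generic technique and assumes scikit-learn is available; it is not the MABLe implementation described in the thesis, and the placeholder policy is invented for the example.

```python
# Illustration only (assumes scikit-learn is installed): one generic way to make
# a learned policy readable is to fit a shallow decision tree to sampled
# state/action pairs and print it as if-then rules. This stands in for the kind
# of post-processing the thesis describes; it is not the MABLe implementation.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical data: grid states as (x, y) features and the action a previously
# learned policy chose in each state (placeholder policy: go right, then up).
states = [(x, y) for x in range(4) for y in range(4)]
actions = ["right" if x < 3 else "up" for x, y in states]

tree = DecisionTreeClassifier(max_depth=3).fit(states, actions)
print(export_text(tree, feature_names=["x", "y"]))   # human-readable behavior rules
```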

Place, publisher, year, edition, pages
Örebro: Örebro University, 2017. p. 58
Series
Örebro Studies in Technology, ISSN 1650-8580 ; 75
Keywords
agent-based simulation, agent modeling, agent learning
National Category
Information Systems
Identifiers
URN: urn:nbn:se:oru:diva-61117
ISBN: 978-91-7529-208-3
Public defence
2017-11-13, Örebro universitet, Teknikhuset, Hörsal T, Fakultetsgatan 1, Örebro, 09:00 (English)
Available from: 2017-09-25. Created: 2017-09-25. Last updated: 2018-01-13. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Junges, Robert; Klügl, Franziska
