oru.se Publications
1 - 13 of 13
  • 1.
    Junges, Robert
    Örebro University, School of Science and Technology.
    A Learning-driven Approach for Behavior Modeling in Agent-based Simulation, 2017. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Agent-based simulation is a prominent application of the agent-based system metaphor. One of the main characteristics of this simulation paradigm is the generative nature of the outcome: the macro-level system behavior is generated from the micro-level agent behavior. Designing this agent behavior becomes challenging, as it is not clear how much each individual agent will contribute to the macro-level phenomenon in the simulation.

    Agent learning has proven successful for behavior configuration and calibration in many domains, and it can also be used to mitigate the design challenge here. Agents learn their behaviors, adapted towards their micro-level and some macro-level goals in the simulation. However, the machine learning techniques that could in principle be used in this context usually constitute black boxes, giving the modeler no access to understand what was learned.

    This thesis proposes an engineering method for developing agent behavior using agent learning. The focus of learning here is not on improving performance but on supporting a modeling endeavor: the results must be readable and explainable to and by the modeler. Instead of pre-equipping the agents with a behavior program, a model of the behavior is learned from scratch within a given environmental model.

    The research conducted makes the following contributions: a) a study of the general applicability of machine learning as a means to support agent behavior modeling, reviewing different techniques for learning and for abstracting the learned behavior; b) the formulation of a novel engineering method encapsulating the general approach for learning behavior models: MABLe (Modeling Agent Behavior by Learning); c) the construction of a general framework for applying the devised method inside an easily accessible agent-based simulation tool; d) an evaluation of the proposed method and framework.

    This thesis contributes to advancing the state of the art in agent-based simulation engineering: the design of individual agent behavior is supported by a novel engineering method, which may be better adapted to the general way modelers proceed than methods inspired by software engineering.
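    The core loop of the thesis (learn the agent behavior from rewards instead of hand-coding it, then read the result back as explicit rules) can be sketched minimally as below. The corridor scenario, parameters, and code are hypothetical illustrations, not material from the thesis:

```python
import random

# Hypothetical toy scenario: an agent in a 1-D corridor of 6 cells must
# learn to reach the exit at cell 5. The modeler specifies only the
# environment and a reward; the behavior itself is learned.
N_CELLS, EXIT = 6, 5
ACTIONS = [-1, +1]                       # step left / step right
q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}

random.seed(0)
alpha, gamma, eps = 0.5, 0.9, 0.1
for episode in range(500):
    s = 0
    while s != EXIT:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda x: q[(s, x)])
        s2 = min(max(s + a, 0), N_CELLS - 1)
        r = 1.0 if s2 == EXIT else -0.01  # the reward captures the goal
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS)
                              - q[(s, a)])
        s = s2

# Abstract the learned Q-values into one human-readable rule per state;
# every non-exit state should map to +1 ("move toward the exit").
policy = {s: max(ACTIONS, key=lambda x: q[(s, x)]) for s in range(N_CELLS - 1)}
print(policy)
```

    The final dictionary is the "readable and explainable" artifact the thesis argues for: the modeler inspects the learned rules rather than an opaque value function.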

    List of papers
    1. Evaluation of techniques for a learning-driven modeling methodology in multiagent simulation
    2010 (English). In: Multiagent system technologies / [ed] Jürgen Dix, Cees Witteveen. Berlin, Germany: Springer, 2010, p. 185-196. Conference paper, Published paper (Refereed)
    Abstract [en]

    There have been a number of suggestions for methodologies supporting the development of multiagent simulation models. In this contribution we introduce a learning-driven methodology that exploits learning techniques to generate suggestions for agent behavior models based on a given environmental model. The output must be human-interpretable. We compare different candidate learning techniques - classifier systems, neural networks and reinforcement learning - concerning their appropriateness for such a modeling methodology.

    Place, publisher, year, edition, pages
    Berlin, Germany: Springer, 2010
    Series
    Lecture Notes in Computer Science ; 6251
    Keywords
    Multiagent Systems, Multiagent Simulation
    National Category
    Computer Sciences
    Research subject
    Computer Science
    Identifiers
    urn:nbn:se:oru:diva-14660 (URN); 10.1007/978-3-642-16178-0_18 (DOI); 000289106100018 (); 2-s2.0-78049373494 (Scopus ID); 978-3-642-16177-3 (ISBN)
    Conference
    8th German Conference, MATES 2010, Leipzig, Germany, September 27-29, 2010
    Available from: 2011-02-17. Created: 2011-02-17. Last updated: 2018-01-12. Bibliographically approved
    2. Generating inspiration for multi-agent simulation design by Q-Learning
    2010 (English). In: MALLOW-2010: proceedings of the multi-agent logics, languages, and organisations federated workshops 2010, 2010. Conference paper, Published paper (Refereed)
    Abstract [en]

    One major challenge in developing multiagent simulations is to find the appropriate agent design that is able to generate the intended overall phenomenon dynamics, but does not contain unnecessary details. In this paper we suggest to use agent learning for supporting the development of an agent model: the modeler defines the environmental model and the agent interfaces. Using rewards capturing the intended agent behavior, reinforcement learning techniques can be used for learning the rules that are optimally governing the agent behavior. However, for really being useful in a modeling and simulation context, a human modeler must be able to review and understand the outcome of the learning. We propose to use additional forms of learning as post-processing step for supporting the analysis of the learned model. We test our ideas using a simple evacuation simulation scenario.
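    The "rewards capturing the intended agent behavior" mentioned above might, for an evacuation setting, be sketched as below. The scenario structure and all numbers are hypothetical, not taken from the paper:

```python
def evacuation_reward(reached_exit: bool, collided: bool,
                      step_cost: float = 0.01) -> float:
    """Hypothetical reward shaping for an evacuation agent: the modeler
    encodes WHAT good behavior is (leave quickly, avoid collisions) and
    lets reinforcement learning work out HOW to achieve it."""
    r = -step_cost          # time pressure: every step costs a little
    if collided:
        r -= 0.5            # discourage pushing into other agents
    if reached_exit:
        r += 1.0            # the actual goal
    return r

print(evacuation_reward(False, False))  # -0.01
print(evacuation_reward(True, False))   # close to 1.0
```

    The point of the paper is that such a compact objective, rather than a hand-written behavior program, becomes the modeler's main design artifact.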

    Keywords
    Multiagent Systems, Multiagent Simulation
    National Category
    Computer Sciences
    Research subject
    Computer Science
    Identifiers
    urn:nbn:se:oru:diva-14663 (URN)
    Conference
    MAS&S at MALLOW 2010, Lyon, France, August 30 - September 2
    Available from: 2011-02-17. Created: 2011-02-17. Last updated: 2018-01-12. Bibliographically approved
    3. Modeling agent behavior through online evolutionary and reinforcement learning
    2011 (English). In: Federated Conference on Computer Science and Information Systems (FedCSIS), 2011. IEEE, 2011, p. 643-650. Conference paper, Published paper (Refereed)
    Abstract [en]

    The process of creating and validating an agent-based simulation model requires the modeler to undergo a number of prototyping, testing, analyzing and re-designing rounds. The aim is to specify and calibrate the proper low-level agent behavior that truly produces the intended macro-level phenomena. We assume that this development can be supported by agent learning techniques, especially by generating inspiration about behaviors as starting points for the modeler. In this contribution we address this learning-driven modeling task and compare two methods that produce decision trees: reinforcement learning with a post-processing step for generalization, and Genetic Programming.

    Place, publisher, year, edition, pages
    IEEE, 2011
    Keywords
    Multiagent Systems, Multiagent Simulation, Artificial Intelligence
    National Category
    Computer Sciences
    Research subject
    Computer Science
    Identifiers
    urn:nbn:se:oru:diva-16610 (URN); 978-1-4577-0041-5 (ISBN); 978-83-60810-39-2 (ISBN)
    Conference
    5th International Workshop on Multi-Agent Systems and Simulation (MAS&S), Szczecin, Poland, September 18-21, 2011
    Available from: 2011-08-19. Created: 2011-08-19. Last updated: 2018-01-12. Bibliographically approved
    4. Generating inspiration for agent design by reinforcement learning
    2012 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 54, no 6, p. 639-649. Article in journal (Refereed). Published
    Abstract [en]

    One major challenge in developing multiagent systems is to find the appropriate agent design that is able to generate the intended overall dynamics, but does not contain unnecessary features. In this article we suggest to use agent learning for supporting the development of an agent model during an analysis phase in agent-based software engineering. Hereby, the designer defines the environmental model and the agent interfaces. A reward function captures a description of the overall agent performance with respect to the intended outcome of the agent behavior. Based on this setup, reinforcement learning techniques can be used for learning rules that are optimally governing the agent behavior. However, for really being useful for analysis, the human developer must be able to review and fully understand the learnt behavior program. We propose to use additional learning mechanisms for a post-processing step supporting the usage of the learnt model.

    Place, publisher, year, edition, pages
    Elsevier, 2012
    Keywords
    Agent-oriented software engineering, Multiagent systems, Multiagent simulation
    National Category
    Computer and Information Sciences
    Research subject
    Computer and Systems Science
    Identifiers
    urn:nbn:se:oru:diva-22709 (URN); 10.1016/j.infsof.2011.12.002 (DOI); 000302587100009 (); 2-s2.0-84858074569 (Scopus ID)
    Available from: 2012-05-03. Created: 2012-05-03. Last updated: 2018-01-12. Bibliographically approved
    5. Programming agent behavior by learning in simulation models
    2012 (English). In: Applied Artificial Intelligence, ISSN 0883-9514, E-ISSN 1087-6545, Vol. 26, no 4, p. 349-375. Article in journal (Refereed). Published
    Abstract [en]

    Designing the proper agent behavior for a multiagent system is a complex task. Often it is not obvious which of the agents' actions, and the interactions among them and with their environment, can produce the intended macro-phenomenon. We assume that the modeler can benefit from using agent-learning techniques. There are several issues with which learning can help modeling; for example, by using self-adaptive agents for calibration. In this contribution we deal with another example: the use of learning for supporting system analysis and model design. A candidate learning architecture is the combination of reinforcement learning and decision tree learning. The former generates a policy for agent behavior and the latter is used for abstraction and interpretation purposes. Here, we focus on the relation between policy-learning convergence and the quality of the abstracted model produced from it.
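    The abstraction step of that architecture, compressing a learned situation-action mapping into a readable rule, can be sketched as below. The sensors, actions, and the one-level split are hypothetical simplifications; the paper uses full decision tree learning:

```python
from collections import Counter

# Hypothetical situation -> action mapping produced by a learned policy.
# Each situation has two boolean sensors.
examples = [
    ({"sees_exit": True,  "blocked": False}, "forward"),
    ({"sees_exit": True,  "blocked": True},  "turn"),
    ({"sees_exit": False, "blocked": False}, "explore"),
    ({"sees_exit": False, "blocked": True},  "turn"),
]

def majority(rows):
    # Most common action among the given (situation, action) pairs.
    return Counter(a for _, a in rows).most_common(1)[0][0]

def errors_if_split(feature):
    """Misclassified examples if we split once on `feature` and predict
    the majority action in each branch."""
    err = 0
    for value in (True, False):
        branch = [(s, a) for s, a in examples if s[feature] == value]
        err += sum(1 for _, a in branch if a != majority(branch))
    return err

# Choose the sensor that best explains the learned actions.
best = min(examples[0][0], key=errors_if_split)
print("split on:", best)   # 'blocked' separates 'turn' cleanly
for value in (True, False):
    branch = [(s, a) for s, a in examples if s[best] == value]
    print(f"  if {best} == {value}: {majority(branch)}")
```

    The resulting one-split rule is deliberately lossy; the paper's question of how learning convergence affects the quality of such an abstraction shows up here as the residual errors of the compressed rule.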

    Place, publisher, year, edition, pages
    Taylor & Francis, 2012
    National Category
    Electrical Engineering, Electronic Engineering, Information Engineering
    Research subject
    Information technology
    Identifiers
    urn:nbn:se:oru:diva-23067 (URN); 10.1080/08839514.2012.652906 (DOI); 000303822500004 (); 2-s2.0-84861056993 (Scopus ID)
    Available from: 2012-05-31. Created: 2012-05-31. Last updated: 2018-02-02. Bibliographically approved
    6. Behavior abstraction robustness in agent modeling
    2012 (English). In: Web Intelligence and Intelligent Agent Technology (WIIAT). IEEE Computer Society Digital Library, 2012, p. 228-235. Conference paper, Published paper (Refereed)
    Abstract [en]

    Due to the "generative" nature of the macro phenomena, agent-based systems require experience from the modeler to determine the proper low-level agent behavior. Adaptive and learning agents can facilitate this task: partial or preliminary learnt versions of the behavior can serve as inspiration for the human modeler. Using a simulation process, we develop agents that explore sensors and actuators inside a given environment. The exploration is guided by the attribution of rewards to their actions, expressed in an objective function. These rewards are used to develop a situation-action mapping, later abstracted to a human-readable format. In this contribution we test the robustness of a decision-tree representation of the agent's decision-making process with regard to changes in the objective function. The importance of this study lies in understanding how sensitive the final abstraction of the model, and not merely its measured performance, is to the definition of the objective function.

    Place, publisher, year, edition, pages
    IEEE Computer Society Digital Library, 2012
    Keywords
    Multiagent systems
    National Category
    Computer Sciences
    Research subject
    Computer Science
    Identifiers
    urn:nbn:se:oru:diva-29233 (URN); 10.1109/WI-IAT.2012.157 (DOI)
    Conference
    2012 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology (WI-IAT), Dec, 4-7 2012, Macau, China
    Available from: 2013-05-29. Created: 2013-05-29. Last updated: 2018-01-11. Bibliographically approved
    7. How to design agent-based simulation models using agent learning
    2012 (English). In: Winter Simulation Conference Proceedings. Institute of Electrical and Electronics Engineers (IEEE), 2012, p. 1-10. Conference paper, Oral presentation with published abstract (Refereed)
    Abstract [en]

    The question of how best to develop an agent-based simulation model becomes more important as this paradigm is used more and more. Clearly, general model development processes can be applied, but these do not solve the major problems of actually deciding on the agents' structure and behavior. In this contribution we introduce the MABLe methodology for analyzing and designing agent simulation models, which relies on adaptive agents: the agent helps the modeler by proposing a suitable behavior program. We test our methodology in a pedestrian evacuation scenario. Results demonstrate that the agents can learn, and report back to the modeler, a behavior that is, interestingly, better than a hand-made model.

    Place, publisher, year, edition, pages
    Institute of Electrical and Electronics Engineers (IEEE), 2012
    Series
    Winter Simulation Conference Proceedings, ISSN 0891-7736
    National Category
    Computer Systems
    Research subject
    Computer and Systems Science
    Identifiers
    urn:nbn:se:oru:diva-24164 (URN); 10.1109/WSC.2012.6465017 (DOI); 000319225500037 (); 2-s2.0-84874704749 (Scopus ID); 978-1-4673-4779-2 (ISBN)
    Conference
    Winter Simulation Conference (WSC 2012), Berlin, Germany, December 9-12, 2012
    Available from: 2012-07-25. Created: 2012-07-25. Last updated: 2017-10-27. Bibliographically approved
    8. Learning Tools for Agent-based Modeling and Simulation
    2013 (English). In: Künstliche Intelligenz, ISSN 0933-1875, E-ISSN 1610-1987, Vol. 27, no 3, p. 273-280. Article in journal (Refereed). Published
    Abstract [en]

    In this project report, we describe ongoing research on supporting the development of agent-based simulation models. The vision is that the agents themselves should learn their (individual) behavior model, instead of letting a human modeler test which of the many possible agent-level behaviors leads to the correct macro-level observations. To that aim, we integrate a suite of agent learning tools into SeSAm, a fully visual platform for agent-based simulation models. This integration is the focus of this contribution.

    Place, publisher, year, edition, pages
    Heidelberg: Springer, 2013
    National Category
    Computer Sciences
    Research subject
    Information technology; Computer Science
    Identifiers
    urn:nbn:se:oru:diva-33882 (URN); 10.1007/s13218-013-0258-z (DOI)
    Available from: 2014-02-20. Created: 2014-02-20. Last updated: 2018-01-11. Bibliographically approved
  • 2.
    Junges, Robert
    et al.
    Örebro University, School of Science and Technology.
    Klügl, Franziska
    Örebro University, School of Science and Technology.
    Behavior abstraction robustness in agent modeling, 2012. In: Web Intelligence and Intelligent Agent Technology (WIIAT). IEEE Computer Society Digital Library, 2012, p. 228-235. Conference paper (Refereed)
    Abstract [en]

    Due to the "generative" nature of the macro phenomena, agent-based systems require experience from the modeler to determine the proper low-level agent behavior. Adaptive and learning agents can facilitate this task: partial or preliminary learnt versions of the behavior can serve as inspiration for the human modeler. Using a simulation process, we develop agents that explore sensors and actuators inside a given environment. The exploration is guided by the attribution of rewards to their actions, expressed in an objective function. These rewards are used to develop a situation-action mapping, later abstracted to a human-readable format. In this contribution we test the robustness of a decision-tree representation of the agent's decision-making process with regard to changes in the objective function. The importance of this study lies in understanding how sensitive the final abstraction of the model, and not merely its measured performance, is to the definition of the objective function.

  • 3.
    Junges, Robert
    et al.
    Örebro University, School of Science and Technology.
    Klügl, Franziska
    Örebro University, School of Science and Technology.
    Behavior modeling from learning agents: sensitivity to objective function details, 2012. In: Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2012): volume 3 / [ed] Conitzer, Winikoff, Padgham, and van der Hoek. Richland, SC: The International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), 2012, p. 1335-1336. Conference paper (Refereed)
    Abstract [en]

    The process of finding the appropriate agent behavior is a cumbersome task – no matter whether it is for agent-based software or simulation models. Machine Learning can help by generating partial or preliminary versions of the agent low-level behavior. However, for actually being useful for the human modeler the results should be interpretable, which may require some post-processing step after the actual behavior learning. In this contribution we test the sensitivity of the resulting, interpretable behavior program with respect to parameters and components of the function that describes the intended behavior.

  • 4.
    Junges, Robert
    et al.
    Örebro University, School of Science and Technology.
    Klügl, Franziska
    Örebro University.
    Evaluation of techniques for a learning-driven modeling methodology in multiagent simulation, 2010. In: Multiagent system technologies / [ed] Jürgen Dix, Cees Witteveen. Berlin, Germany: Springer, 2010, p. 185-196. Conference paper (Refereed)
    Abstract [en]

    There have been a number of suggestions for methodologies supporting the development of multiagent simulation models. In this contribution we introduce a learning-driven methodology that exploits learning techniques to generate suggestions for agent behavior models based on a given environmental model. The output must be human-interpretable. We compare different candidate learning techniques - classifier systems, neural networks and reinforcement learning - concerning their appropriateness for such a modeling methodology.

  • 5.
    Junges, Robert
    et al.
    Örebro University, School of Science and Technology.
    Klügl, Franziska
    Örebro University, School of Science and Technology.
    Evolution for modeling: a genetic programming framework for SeSAm, 2011. In: GECCO '11: Proceedings of the 13th annual conference companion on Genetic and evolutionary computation. ACM Digital Library, 2011, p. 551-558. Conference paper (Refereed)
    Abstract [en]

    Developing a valid agent-based simulation model is not always straightforward, but involves a lot of prototyping, testing and analyzing until the right low-level behavior is fully specified and calibrated. Our aim is to replace the trial-and-error search of a modeler by adaptive agents which learn a behavior that can then serve as a source of inspiration for the modeler. In this contribution, we suggest using genetic programming as the learning mechanism. To this end we developed a genetic programming framework integrated into the visual agent-based modeling and simulation tool SeSAm, providing similarly easy-to-use functionality.
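    The evolution-for-modeling idea can be sketched as below. Note that this is a heavily simplified evolutionary search over a single rule parameter, not the tree-based genetic programming of the SeSAm framework, and every name and number is hypothetical:

```python
import random

# Evolve the threshold in the rule "flee if distance_to_danger < t" so
# that the agent's behavior matches the intended outcome on a handful
# of scenario cases. TARGET plays the role of the (unknown) ideal rule.
random.seed(1)
TARGET = 3.0
cases = [0.5, 1.0, 2.5, 3.5, 4.0, 6.0]

def fitness(t):
    # Number of cases where the candidate rule agrees with the intent.
    return sum((d < t) == (d < TARGET) for d in cases)

pop = [random.uniform(0.0, 10.0) for _ in range(20)]
for gen in range(30):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:5]                          # truncation selection
    pop = parents + [p + random.gauss(0, 0.5)  # mutated offspring
                     for p in random.choices(parents, k=15)]

best = max(pop, key=fitness)
print(f"evolved rule: flee if distance < {best:.2f}")
```

    The evolved threshold is itself the readable artifact: the modeler can inspect the rule directly, which is the selling point of evolutionary approaches over black-box learners in this line of work.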

  • 6.
    Junges, Robert
    et al.
    Örebro University, School of Science and Technology.
    Klügl, Franziska
    Örebro University, School of Science and Technology.
    Generating inspiration for agent design by reinforcement learning, 2012. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 54, no 6, p. 639-649. Article in journal (Refereed)
    Abstract [en]

    One major challenge in developing multiagent systems is to find the appropriate agent design that is able to generate the intended overall dynamics, but does not contain unnecessary features. In this article we suggest to use agent learning for supporting the development of an agent model during an analysis phase in agent-based software engineering. Hereby, the designer defines the environmental model and the agent interfaces. A reward function captures a description of the overall agent performance with respect to the intended outcome of the agent behavior. Based on this setup, reinforcement learning techniques can be used for learning rules that are optimally governing the agent behavior. However, for really being useful for analysis, the human developer must be able to review and fully understand the learnt behavior program. We propose to use additional learning mechanisms for a post-processing step supporting the usage of the learnt model.

  • 7.
    Junges, Robert
    et al.
    Örebro University, School of Science and Technology.
    Klügl, Franziska
    Örebro University, School of Science and Technology.
    Generating inspiration for multi-agent simulation design by Q-Learning, 2010. In: MALLOW-2010: proceedings of the multi-agent logics, languages, and organisations federated workshops 2010, 2010. Conference paper (Refereed)
    Abstract [en]

    One major challenge in developing multiagent simulations is to find the appropriate agent design that is able to generate the intended overall phenomenon dynamics, but does not contain unnecessary details. In this paper we suggest to use agent learning for supporting the development of an agent model: the modeler defines the environmental model and the agent interfaces. Using rewards capturing the intended agent behavior, reinforcement learning techniques can be used for learning the rules that are optimally governing the agent behavior. However, for really being useful in a modeling and simulation context, a human modeler must be able to review and understand the outcome of the learning. We propose to use additional forms of learning as post-processing step for supporting the analysis of the learned model. We test our ideas using a simple evacuation simulation scenario.

  • 8.
    Junges, Robert
    et al.
    Örebro University, School of Science and Technology.
    Klügl, Franziska
    Örebro University, School of Science and Technology.
    How to design agent-based simulation models using agent learning, 2012. In: Winter Simulation Conference Proceedings. Institute of Electrical and Electronics Engineers (IEEE), 2012, p. 1-10. Conference paper (Refereed)
    Abstract [en]

    The question of how best to develop an agent-based simulation model becomes more important as this paradigm is used more and more. Clearly, general model development processes can be applied, but these do not solve the major problems of actually deciding on the agents' structure and behavior. In this contribution we introduce the MABLe methodology for analyzing and designing agent simulation models, which relies on adaptive agents: the agent helps the modeler by proposing a suitable behavior program. We test our methodology in a pedestrian evacuation scenario. Results demonstrate that the agents can learn, and report back to the modeler, a behavior that is, interestingly, better than a hand-made model.

  • 9.
    Junges, Robert
    et al.
    Örebro University, School of Science and Technology.
    Klügl, Franziska
    Örebro University, School of Science and Technology.
    Learning Agent Models in SeSAm: (Demonstration), 2013. In: / [ed] Ito, Jonker, Gini and Shehory. The International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), 2013. Conference paper (Refereed)
    Abstract [en]

    Designing the agent model in a multiagent simulation is a challenging task due to the generative nature of such systems. In this contribution we present an extension to the multiagent simulation platform SeSAm, introducing a learning-based design strategy for building agent behavior models.

  • 10.
    Junges, Robert
    et al.
    Örebro University, School of Science and Technology.
    Klügl, Franziska
    Örebro University, School of Science and Technology.
    Learning convergence and agent behavior interpretation for designing agent-based simulations, 2010. Conference paper (Refereed)
    Abstract [en]

    Designing a proper agent behavior for a multiagent simulation is a complex task, as it is not obvious how the agents' actions, and the interactions among them and with their environment, result in an intended macro-phenomenon. To cope with the complexity involved in this challenge, and to achieve the intended overall result, the modeler may benefit from using agent learning techniques. In this contribution we focus on testing different configurations of the interface between the learning algorithm and the simulation scenario. The learned result is post-processed by a decision tree learner to derive a comprehensible model for the agent behavior.

  • 11.
    Junges, Robert
    et al.
    Örebro University, School of Science and Technology.
    Klügl, Franziska
    Örebro University, School of Science and Technology.
    Learning Tools for Agent-based Modeling and Simulation, 2013. In: Künstliche Intelligenz, ISSN 0933-1875, E-ISSN 1610-1987, Vol. 27, no 3, p. 273-280. Article in journal (Refereed)
    Abstract [en]

    In this project report, we describe ongoing research on supporting the development of agent-based simulation models. The vision is that the agents themselves should learn their (individual) behavior model, instead of letting a human modeler test which of the many possible agent-level behaviors leads to the correct macro-level observations. To that aim, we integrate a suite of agent learning tools into SeSAm, a fully visual platform for agent-based simulation models. This integration is the focus of this contribution.

  • 12.
    Junges, Robert
    et al.
    Örebro University, School of Science and Technology.
    Klügl, Franziska
    Örebro University, School of Science and Technology.
    Modeling agent behavior through online evolutionary and reinforcement learning, 2011. In: Federated Conference on Computer Science and Information Systems (FedCSIS), 2011. IEEE, 2011, p. 643-650. Conference paper (Refereed)
    Abstract [en]

    The process of creating and validating an agent-based simulation model requires the modeler to undergo a number of prototyping, testing, analyzing and re-designing rounds. The aim is to specify and calibrate the proper low-level agent behavior that truly produces the intended macro-level phenomena. We assume that this development can be supported by agent learning techniques, especially by generating inspiration about behaviors as starting points for the modeler. In this contribution we address this learning-driven modeling task and compare two methods that produce decision trees: reinforcement learning with a post-processing step for generalization, and Genetic Programming.

  • 13.
    Junges, Robert
    et al.
    Örebro University, School of Science and Technology.
    Klügl, Franziska
    Örebro University, School of Science and Technology.
    Programming agent behavior by learning in simulation models, 2012. In: Applied Artificial Intelligence, ISSN 0883-9514, E-ISSN 1087-6545, Vol. 26, no 4, p. 349-375. Article in journal (Refereed)
    Abstract [en]

    Designing the proper agent behavior for a multiagent system is a complex task. Often it is not obvious which of the agents' actions, and the interactions among them and with their environment, can produce the intended macro-phenomenon. We assume that the modeler can benefit from using agent-learning techniques. There are several issues with which learning can help modeling; for example, by using self-adaptive agents for calibration. In this contribution we deal with another example: the use of learning for supporting system analysis and model design. A candidate learning architecture is the combination of reinforcement learning and decision tree learning. The former generates a policy for agent behavior and the latter is used for abstraction and interpretation purposes. Here, we focus on the relation between policy-learning convergence and the quality of the abstracted model produced from it.
