Örebro University Publications (oru.se)
Accelerating route choice learning with experience sharing in a commuting scenario: An agent-based approach
Örebro University, School of Science and Technology (AASS). ORCID iD: 0000-0002-1470-6288
Instituto de Informática, Universidade Federal do Rio Grande do Sul (UFRGS), Brazil.
2021 (English). In: AI Communications, ISSN 0921-7126, E-ISSN 1875-8452, Vol. 34, no 1, p. 105-119. Article in journal (Refereed). Published.
Abstract [en]

Navigation apps have become more and more popular, as they give information about the current traffic state to drivers, who then adapt their route choice. In commuting scenarios, where people repeatedly travel between a particular origin and destination, people tend to learn and adapt to different situations. What if the experience gained from such a learning task is shared via an app? In this paper, we analyse the effects that adaptive driver agents have on the overall network when those agents share their aggregated experience about route choice in a reinforcement learning setup. In particular, in this investigation, Q-learning is used and drivers share what they have learnt about the system, not just information about their current travel times. Using a classical commuting scenario, we show that experience sharing can improve the convergence times that underlie a typical learning task. Further, we analyse individual learning dynamics to get an impression of how aggregate and individual dynamics are related to each other. Based on that, interesting patterns of individual learning dynamics can be observed that would otherwise remain hidden in a purely aggregate analysis.
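The setup described in the abstract, Q-learning drivers who repeatedly choose a commuting route and periodically share what they have learnt rather than only their current travel times, can be sketched roughly as follows. This is a minimal illustration only: the route set, the congestion model, all parameter values, and the choice of population-averaging as the sharing mechanism are assumptions for the sketch, not details taken from the article.

```python
import random

ROUTES = [0, 1, 2]          # three alternative routes between one origin and destination
CAPACITY = [10, 20, 30]     # illustrative per-route capacities (assumption)
ALPHA, EPSILON = 0.1, 0.1   # learning rate and exploration rate (assumption)

def travel_time(route, load):
    # Toy congestion model: travel time grows with the number of drivers on the route.
    return 1.0 + load / CAPACITY[route]

def run(n_drivers=60, episodes=200, share_every=None, seed=0):
    rng = random.Random(seed)
    # One Q-value per route per driver (stateless, bandit-style route choice).
    q = [[0.0] * len(ROUTES) for _ in range(n_drivers)]
    for ep in range(episodes):
        # Each driver picks a route epsilon-greedily from its own Q-values.
        choices = []
        for i in range(n_drivers):
            if rng.random() < EPSILON:
                choices.append(rng.randrange(len(ROUTES)))
            else:
                choices.append(max(ROUTES, key=lambda r: q[i][r]))
        loads = [choices.count(r) for r in ROUTES]
        # Q-update: reward is the negative travel time the driver experienced.
        for i, r in enumerate(choices):
            reward = -travel_time(r, loads[r])
            q[i][r] += ALPHA * (reward - q[i][r])
        # Experience sharing: every `share_every` episodes, drivers replace their
        # Q-values with the population average (one possible aggregation scheme).
        if share_every and (ep + 1) % share_every == 0:
            avg = [sum(q[i][r] for i in range(n_drivers)) / n_drivers
                   for r in ROUTES]
            q = [list(avg) for _ in range(n_drivers)]
    # Mean travel time in the final episode, as a crude convergence indicator.
    return sum(travel_time(r, loads[r]) for r in choices) / n_drivers

no_share = run(share_every=None)
sharing = run(share_every=10)
```

Comparing `no_share` and `sharing` across seeds gives a rough feel for the kind of convergence experiment the paper reports, though the article's actual scenario, aggregation rule, and metrics may differ.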

Place, publisher, year, edition, pages
IOS Press, 2021. Vol. 34, no 1, p. 105-119
Keywords [en]
Route choice, reinforcement learning, traffic app
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:oru:diva-89767
DOI: 10.3233/AIC-201582
ISI: 000620785700008
Scopus ID: 2-s2.0-85101226729
OAI: oai:DiVA.org:oru-89767
DiVA, id: diva2:1529698
Note

Funding Agencies:

National Council for Scientific and Technological Development (CNPq) 307215/2017-2

CAPES 001

Available from: 2021-02-19. Created: 2021-02-19. Last updated: 2021-03-25. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Klügl, Franziska
