Örebro University Publications
Saffiotti, Alessandro, Professor (ORCID: orcid.org/0000-0001-8229-1363)
Biography [eng]

My research interests encompass Artificial Intelligence (AI), autonomous robotics, and technology for elderly care. I have been active for more than 25 years in the integration of AI and Robotics into "cognitive robots" - you may say: how to give a brain to a body, or a body to a brain! I also organize a number of international activities on combining AI and Robotics, including the "Lucia" series of PhD schools. I enjoy collaborative work, and I have participated in 12 EU projects, several EU networks, and many national projects. I serve on the editorial boards of the Artificial Intelligence journal and the International Journal of Social Robotics. I am a member of AAAI, a senior member of IEEE, and a EurAI fellow.

Publications (10 of 205)
Sabu, K. M., Renoux, J. & Saffiotti, A. (2024). Deliberative Communication for Human-Agent Interaction: A Position Paper. In: HAI 2024 - Proceedings of the 12th International Conference on Human-Agent Interaction: . Paper presented at 12th International Conference on Human-Agent Interaction, HAI 2024, Swansea, November 24-27, 2024 (pp. 11-16). Association for Computing Machinery, Inc
Deliberative Communication for Human-Agent Interaction: A Position Paper
2024 (English). In: HAI 2024 - Proceedings of the 12th International Conference on Human-Agent Interaction, Association for Computing Machinery, Inc, 2024, pp. 11-16. Conference paper, Published paper (Refereed)
Abstract [en]

In this position paper, we argue for the need for deliberation in communication for artificial agents that perform tasks together with humans. Existing works use a set of terms and concepts with different meanings, resulting in ambiguity which does not allow for a general framework. As an initial step towards such a framework, we propose the notion of deliberative communication, clarify the necessary concepts and terminology, highlight the capabilities required in using deliberation for agents that communicate with human users, and discuss the main challenges.

Place, publisher, year, edition, pages
Association for Computing Machinery, Inc, 2024
Keywords
Human-Robot Interaction, Human-Virtual Agent Interaction, Chatbots, Intelligent virtual agents, Microrobots, Agent interaction, Artificial agents, Human users, Human-agent interaction, Humans-robot interactions, Position papers, Virtual agent, Human robot interaction
National subject category
Human-Computer Interaction (interaction design)
Identifiers
urn:nbn:se:oru:diva-119083 (URN), 10.1145/3687272.3688299 (DOI), 001436563800002 (), 2-s2.0-85215510952 (Scopus ID), 9798400708244 (ISBN)
Conference
12th International Conference on Human-Agent Interaction, HAI 2024, Swansea, November 24-27, 2024
Available from: 2025-02-04 Created: 2025-02-04 Last updated: 2025-09-08 Bibliographically reviewed
Faridghasemnia, M., Renoux, J. & Saffiotti, A. (2024). Visual Noun Modifiers: The Problem of Binding Visual and Linguistic Cues. In: 2024 IEEE International Conference on Robotics and Automation (ICRA): . Paper presented at IEEE International Conference on Robotics and Automation, ICRA 2024, Yokohama, May 13-17, 2024 (pp. 11178-11185). Institute of Electrical and Electronics Engineers Inc.
Visual Noun Modifiers: The Problem of Binding Visual and Linguistic Cues
2024 (English). In: 2024 IEEE International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers Inc., 2024, pp. 11178-11185. Conference paper, Published paper (Refereed)
Abstract [en]

In many robotic applications, especially those involving humans and the environment, linguistic and visual information must be processed jointly and bound together. Existing works either encode the image or the language into a subsymbolic space, like the CLIP model, or create a symbolic space of extracted information, like the object detection models. In this paper, we propose to describe images by nouns and modifiers and introduce a new embedded binding space where the linguistic and visual cues can effectively be bound. We investigate how state-of-the-art models perform in recognizing nouns and modifiers from images, and propose our method by introducing a dataset and CLIP-like recognition techniques based on transfer learning and metric learning. We show real-world experiments that demonstrate the practical applicability of our approach to robotics applications. Our results indicate that our method can surpass the state-of-the-art in recognizing nouns and modifiers from images. Interestingly, our method exhibits a language characteristic related to context sensitivity.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2024
Keywords
Adversarial machine learning, Contrastive Learning, Image coding, Object detection, Object recognition, Robot learning, Transfer learning, Visual languages, ART model, Detection models, Environment information, Linguistic information, Objects detection, Robotics applications, State of the art, Sub-symbolic, Visual cues, Visual information, Linguistics
National subject category
Robotics and automation
Identifiers
urn:nbn:se:oru:diva-118593 (URN), 10.1109/ICRA57147.2024.10611332 (DOI), 001369728001128 (), 2-s2.0-85202445079 (Scopus ID), 9798350384574 (ISBN)
Conference
IEEE International Conference on Robotics and Automation, ICRA 2024, Yokohama, May 13-17, 2024
Research funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); EU, Horizon 2020, 101016442
Note

This work has been partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation, and has also been supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 101016442 (AIPlan4EU).

Available from: 2025-01-16 Created: 2025-01-16 Last updated: 2025-09-08 Bibliographically reviewed
De Filippo, A., Milano, M., Presutti, V. & Saffiotti, A. (2023). CREAI 2023: Preface to the Second Workshop on Artificial Intelligence and Creativity. In: CEUR Workshop Proceedings: . Paper presented at 2nd Workshop on Artificial Intelligence and Creativity, CREAI 2023, Roma, Italy, 6 November, 2023.. CEUR-WS, Article ID 193706.
CREAI 2023: Preface to the Second Workshop on Artificial Intelligence and Creativity
2023 (English). In: CEUR Workshop Proceedings, CEUR-WS, 2023, article id 193706. Conference paper, Published paper (Refereed)
Abstract [en]

In recent years, Artificial Intelligence (AI) has gained increasing popularity in the area of art creation, demonstrating its great potential. Research on this topic has produced AI systems able to generate creative outputs in fields such as music, painting, games, design and scientific discovery, either autonomously or in collaboration with humans. AI has also helped to analyze and study the mechanisms of creativity from a broader perspective: from the socio-anthropological to the psychological, as well as the cognitive impact of the autonomous creative processes of artificial intelligence. These advances are opening new research opportunities and perspectives, while also posing challenging questions related to authorship, integrity, bias and the evaluation of AI artistic outputs. CREAI, the workshop on AI and creativity, addresses these research lines and aims to provide a forum for the AI community to discuss problems, challenges and innovative approaches in the various sub-fields of AI and creativity.

Place, publisher, year, edition, pages
CEUR-WS, 2023
Keywords
Artificial intelligence systems, Creative process, Creatives, Design discoveries, Game design, In-field, Innovative approaches, Intelligence communities, Scientific discovery, Sub fields, Artificial intelligence
National subject category
Human-Computer Interaction (interaction design)
Identifiers
urn:nbn:se:oru:diva-118334 (URN), 2-s2.0-85176606442 (Scopus ID)
Conference
2nd Workshop on Artificial Intelligence and Creativity, CREAI 2023, Roma, Italy, 6 November, 2023
Available from: 2025-01-13 Created: 2025-01-13 Last updated: 2025-09-08 Bibliographically reviewed
Gugliermo, S., Schaffernicht, E., Koniaris, C. & Saffiotti, A. (2023). Extracting Planning Domains from Execution Traces: a Progress Report. In: : . Paper presented at ICAPS 2023, Workshop on Knowledge Engineering for Planning and Scheduling (KEPS 2023), Prague, Czech Republic, July 9-10, 2023.
Extracting Planning Domains from Execution Traces: a Progress Report
2023 (English). Conference paper, Published paper (Refereed)
Abstract [en]

One of the difficulties of using AI planners in industrial applications pertains to the complexity of writing planning domain models. These models are typically constructed by domain planning experts and can become increasingly difficult to codify for large applications. In this paper, we describe our ongoing research on a novel approach to automatically learn planning domains from previously executed traces using Behavior Trees as an intermediate human-readable structure. By involving human planning experts in the learning phase, our approach can benefit from their validation. This paper outlines the initial steps we have taken in this research, and presents the challenges we face in the future.

National subject category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-110796 (URN)
Conference
ICAPS 2023, Workshop on Knowledge Engineering for Planning and Scheduling (KEPS 2023), Prague, Czech Republic, July 9-10, 2023
Research funder
Swedish Foundation for Strategic Research (SSF)
Available from: 2024-01-17 Created: 2024-01-17 Last updated: 2024-06-03 Bibliographically reviewed
Lamanna, L., Faridghasemnia, M., Gerevini, A., Saetti, A., Saffiotti, A., Serafini, L. & Traverso, P. (2023). Learning to Act for Perceiving in Partially Unknown Environments. In: Edith Elkind (Ed.), Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI 2023): . Paper presented at 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023), Macao, S.A.R., August 19-25, 2023 (pp. 5485-5493). International Joint Conferences on Artificial Intelligence
Learning to Act for Perceiving in Partially Unknown Environments
2023 (English). In: Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI 2023) / [ed] Edith Elkind, International Joint Conferences on Artificial Intelligence, 2023, pp. 5485-5493. Conference paper, Published paper (Refereed)
Abstract [en]

Autonomous agents embedded in a physical environment need the ability to correctly perceive the state of the environment from sensory data. In partially observable environments, certain properties can be perceived only in specific situations and from certain viewpoints that can be reached by the agent by planning and executing actions. For instance, to understand whether a cup is full of coffee, an agent, equipped with a camera, needs to turn on the light and look at the cup from the top. When the proper situations to perceive the desired properties are unknown, an agent needs to learn them and plan to get in such situations. In this paper, we devise a general method to solve this problem by evaluating the confidence of a neural network online and by using symbolic planning. We experimentally evaluate the proposed approach on several synthetic datasets, and show the feasibility of our approach in a real-world scenario that involves noisy perceptions and noisy actions on a real robot.

Place, publisher, year, edition, pages
International Joint Conferences on Artificial Intelligence, 2023
Series
IJCAI International Joint Conference on Artificial Intelligence, ISSN 1045-0823
Keywords
Artificial intelligence, General method, Learn+, Neural-networks, Partially observable environments, Physical environments, Property, Real-world scenario, Sensory data, Synthetic datasets, Unknown environments, Autonomous agents
National subject category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-112138 (URN), 10.24963/ijcai.2023/609 (DOI), 001202344205065 (), 2-s2.0-85170365795 (Scopus ID), 9781956792034 (ISBN)
Conference
32nd International Joint Conference on Artificial Intelligence (IJCAI 2023), Macao, S.A.R., August 19-25, 2023
Research funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); Knut and Alice Wallenberg Foundation; EU, Horizon 2020, 101016442
Note

We acknowledge the support of the PNRR project FAIR - Future AI Research (PE00000013), under the NRRP MUR program funded by the NextGenerationEU. This work has also been partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation, and AIPlan4EU funded by the EU Horizon 2020 research and innovation program under GA n. 101016442.

Available from: 2024-03-06 Created: 2024-03-06 Last updated: 2024-08-13 Bibliographically reviewed
Köckemann, U., Calisi, D., Gemignani, G., Renoux, J. & Saffiotti, A. (2023). Planning for Automated Testing of Implicit Constraints in Behavior Trees. In: Sven Koenig; Roni Stern; Mauro Vallati (Ed.), Proceedings of the Thirty-Third International Conference on Automated Planning and Scheduling: . Paper presented at 33rd International Conference on Automated Planning and Scheduling (ICAPS 2023), Prague, Czech Republic, July 8-13, 2023 (pp. 649-658). AAAI Press, 33
Planning for Automated Testing of Implicit Constraints in Behavior Trees
2023 (English). In: Proceedings of the Thirty-Third International Conference on Automated Planning and Scheduling / [ed] Sven Koenig; Roni Stern; Mauro Vallati, AAAI Press, 2023, Vol. 33, pp. 649-658. Conference paper, Published paper (Refereed)
Abstract [en]

Behavior Trees (BTs) are a formalism increasingly used to control the execution of robotic systems. The strength of BTs resides in their compact, hierarchical and transparent representation. However, when BTs are used in practical applications, transparency is often hindered by the introduction of implicit run-time relations between nodes, e.g., because of data dependencies or hardware-related ordering constraints. Manually verifying the correctness of a BT with respect to these hidden relations is a tedious and error-prone task. This paper presents a modular planning-based approach for automatically testing BTs offline at design time, to identify possible executions that may violate given data and ordering constraints and to exhibit traces of these executions to help debugging. Our approach supports both basic and advanced BT node types, e.g., supporting parallel behaviors, and can be extended with other node types as needed. We evaluate our approach on BTs used in a commercially deployed robotics system and on a large set of randomly generated trees, showing that our approach scales to realistic sizes of more than 3000 nodes.

Place, publisher, year, edition, pages
AAAI Press, 2023
Series
Proceedings of the ... International Conference on Automated Planning and Scheduling, ISSN 2334-0835, E-ISSN 2334-0843 ; 33
Keywords
Automated Planning, Robotics, Behavior Trees
National subject category
Computer and Information Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-112201 (URN), 10.1609/icaps.v33i1.27247 (DOI), 2-s2.0-85169788442 (Scopus ID)
Conference
33rd International Conference on Automated Planning and Scheduling (ICAPS 2023), Prague, Czech Republic, July 8-13, 2023
Project
AIPlan4EU
Research funder
European Commission, 101016442
Available from: 2024-03-07 Created: 2024-03-07 Last updated: 2024-06-03 Bibliographically reviewed
Lamanna, L., Serafini, L., Faridghasemnia, M., Saffiotti, A., Saetti, A., Gerevini, A. & Traverso, P. (2023). Planning for Learning Object Properties. In: Proceedings of the AAAI Conference on Artificial Intelligence: Vol. 37 No. 10: AAAI-23 Technical Tracks 10. Paper presented at 37th AAAI Conference on Artificial Intelligence, Washington, D.C., USA, February 7-14, 2023 (pp. 12005-12013). AAAI Press, 37:10
Planning for Learning Object Properties
2023 (English). In: Proceedings of the AAAI Conference on Artificial Intelligence: Vol. 37 No. 10: AAAI-23 Technical Tracks 10, AAAI Press, 2023, Vol. 37:10, pp. 12005-12013. Conference paper, Published paper (Refereed)
Abstract [en]

Autonomous agents embedded in a physical environment need the ability to recognize objects and their properties from sensory data. Such a perceptual ability is often implemented by supervised machine learning models, which are pre-trained using a set of labelled data. In real-world, open-ended deployments, however, it is unrealistic to assume to have a pre-trained model for all possible environments. Therefore, agents need to dynamically learn/adapt/extend their perceptual abilities online, in an autonomous way, by exploring and interacting with the environment where they operate. This paper describes a way to do so, by exploiting symbolic planning. Specifically, we formalize the problem of automatically training a neural network to recognize object properties as a symbolic planning problem (using PDDL). We use planning techniques to produce a strategy for automating the training dataset creation and the learning process. Finally, we provide an experimental evaluation in both a simulated and a real environment, which shows that the proposed approach is able to successfully learn how to recognize new object properties.

Place, publisher, year, edition, pages
AAAI Press, 2023
Series
Proceedings of the AAAI Conference on Artificial Intelligence, ISSN 2159-5399, E-ISSN 2374-3468 ; Vol. 37 No. 10
Keywords
Learning systems, Supervised learning, Labeled data, Learn+, Learning objects, Machine learning models, Object property, Physical environments, Property, Real-world, Sensory data, Supervised machine learning, Autonomous agents
National subject category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-112139 (URN), 10.1609/aaai.v37i10.26416 (DOI), 001243749200056 (), 2-s2.0-85165143019 (Scopus ID), 9781577358800 (ISBN)
Conference
37th AAAI Conference on Artificial Intelligence, Washington, D.C., USA, February 7-14, 2023
Research funder
EU, Horizon 2020, 101016442; 952215; Wallenberg AI, Autonomous Systems and Software Program (WASP); Knut and Alice Wallenberg Foundation
Note

This work has been partially supported by AI-Plan4EU and TAILOR, two projects funded by the EU Horizon 2020 research and innovation program under GA n. 101016442 and n. 952215, respectively, and by MUR PRIN-2020 project RIPER (n. 20203FFYLK). We acknowledge the support of the PNRR project FAIR - Future AI Research (PE00000013), under the NRRP MUR program funded by the NextGenerationEU. This work has also been partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

Available from: 2024-03-06 Created: 2024-03-06 Last updated: 2024-08-21 Bibliographically reviewed
Buyukgoz, S., Grosinger, J., Chetouani, M. & Saffiotti, A. (2022). Two ways to make your robot proactive: Reasoning about human intentions or reasoning about possible futures. Frontiers in Robotics and AI, 9, Article ID 929267.
Two ways to make your robot proactive: Reasoning about human intentions or reasoning about possible futures
2022 (English). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 9, article id 929267. Journal article (Refereed) Published
Abstract [en]

Robots sharing their space with humans need to be proactive to be helpful. Proactive robots can act on their own initiatives in an anticipatory way to benefit humans. In this work, we investigate two ways to make robots proactive. One way is to recognize human intentions and to act to fulfill them, like opening the door that you are about to cross. The other way is to reason about possible future threats or opportunities and to act to prevent or to foster them, like recommending you to take an umbrella since rain has been forecast. In this article, we present approaches to realize these two types of proactive behavior. We then present an integrated system that can generate proactive robot behavior by reasoning on both factors: intentions and predictions. We illustrate our system on a sample use case including a domestic robot and a human. We first run this use case with the two separate proactive systems, intention-based and prediction-based, and then run it with our integrated system. The results show that the integrated system is able to consider a broader variety of aspects that are required for proactivity.

Place, publisher, year, edition, pages
Frontiers Media S.A., 2022
Keywords
Autonomous robots, human intentions, human-centered AI, human–robot interaction, proactive agents, social robot
National subject category
Robotics and automation
Identifiers
urn:nbn:se:oru:diva-101051 (URN), 10.3389/frobt.2022.929267 (DOI), 000848417400001 (), 36045640 (PubMedID), 2-s2.0-85136846004 (Scopus ID)
Research funder
European Commission, 765955; 952026
Available from: 2022-09-02 Created: 2022-09-02 Last updated: 2025-02-09 Bibliographically reviewed
Bontempi, G., Chavarriaga, R., De Canck, H., Girardi, E., Hoos, H., Kilbane-Dawe, I., . . . Maratea, M. (2021). The CLAIRE COVID-19 initiative: approach, experiences and recommendations. Ethics and Information Technology, 23(Suppl. 1), 127-133
The CLAIRE COVID-19 initiative: approach, experiences and recommendations
2021 (English). In: Ethics and Information Technology, ISSN 1388-1957, E-ISSN 1572-8439, Vol. 23, no Suppl. 1, pp. 127-133. Journal article (Refereed) Published
Abstract [en]

A volunteer effort by Artificial Intelligence (AI) researchers has shown it can deliver significant research outcomes rapidly to help tackle COVID-19. Within two months, CLAIRE's self-organising volunteers delivered the world's first comprehensive curated repository of COVID-19-related datasets useful for drug repurposing, drafted review papers on the roles CT/X-ray scan analysis and robotics could play, and progressed research in other areas. Given the pace required and the nature of voluntary efforts, the teams faced a number of challenges. These offer insights into how better to prepare for future volunteer scientific efforts and large-scale, data-dependent AI collaborations in general. We offer seven recommendations on how best to leverage such efforts and collaborations in the context of managing future crises.

Place, publisher, year, edition, pages
Springer, 2021
Keywords
Artificial intelligence, COVID-19, Emergency response
National subject category
Software Engineering
Identifiers
urn:nbn:se:oru:diva-89619 (URN), 10.1007/s10676-020-09567-7 (DOI), 000616464600001 (), 33584129 (PubMedID), 2-s2.0-85101426290 (Scopus ID)
Available from: 2021-02-16 Created: 2021-02-16 Last updated: 2023-12-08 Bibliographically reviewed
Thörn, O., Knudsen, P. & Saffiotti, A. (2020). Human-Robot Artistic Co-Creation: a Study in Improvised Robot Dance. In: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN): . Paper presented at 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2020), Virtual, Naples, Italy, August 31 - September 4, 2020 (pp. 845-850). IEEE
Human-Robot Artistic Co-Creation: a Study in Improvised Robot Dance
2020 (English). In: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), IEEE, 2020, pp. 845-850. Conference paper, Published paper (Refereed)
Abstract [en]

Joint artistic performance, like music, dance or acting, provides an excellent domain to observe the mechanisms of human-human collaboration. In this paper, we use this domain to study human-robot collaboration and co-creation. We propose a general model in which an AI system mediates the interaction between a human performer and a robotic performer. We then instantiate this model in a case study, implemented using fuzzy logic techniques, in which a human pianist performs jazz improvisations, and a robot dancer performs classical dancing patterns in harmony with the artistic moods expressed by the human. The resulting system has been evaluated in an extensive user study, and successfully demonstrated in public live performances.

Place, publisher, year, edition, pages
IEEE, 2020
Series
IEEE RO-MAN, ISSN 1944-9445
National subject category
Computer graphics and computer vision
Identifiers
urn:nbn:se:oru:diva-88686 (URN), 10.1109/RO-MAN47096.2020.9223446 (DOI), 000598571700122 (), 2-s2.0-85090918508 (Scopus ID), 978-1-7281-6075-7 (ISBN)
Conference
29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2020), Virtual, Naples, Italy, August 31 - September 4, 2020
Research funder
EU, Horizon 2020, 825619
Available from: 2021-01-20 Created: 2021-01-20 Last updated: 2025-02-07 Bibliographically reviewed