Örebro University Publications (oru.se)
Saffiotti, Alessandro, Professor (ORCID iD: orcid.org/0000-0001-8229-1363)
Biography [eng]

My research interests encompass Artificial Intelligence (AI), autonomous robotics, and technology for elderly care. I have been active for more than 25 years in the integration of AI and Robotics into "cognitive robots" - you may say: how to give a brain to a body, or a body to a brain! I also organize a number of international activities on combining AI and Robotics, including the "Lucia" series of PhD schools. I enjoy collaborative work, and I have participated in 12 EU projects, several EU networks, and many national projects. I am on the editorial boards of the Artificial Intelligence journal and of the International Journal of Social Robotics. I am a member of AAAI, a senior member of IEEE, and a EurAI fellow.

Publications (10 of 199)
Gugliermo, S., Schaffernicht, E., Koniaris, C. & Saffiotti, A. (2023). Extracting Planning Domains from Execution Traces: a Progress Report. Paper presented at ICAPS 2023, Workshop on Knowledge Engineering for Planning and Scheduling (KEPS 2023), Prague, Czech Republic, July 9-10, 2023.
2023 (English)Conference paper, Published paper (Refereed)
Abstract [en]

One of the difficulties of using AI planners in industrial applications pertains to the complexity of writing planning domain models. These models are typically constructed by domain planning experts and can become increasingly difficult to codify for large applications. In this paper, we describe our ongoing research on a novel approach to automatically learn planning domains from previously executed traces using Behavior Trees as an intermediate human-readable structure. By involving human planning experts in the learning phase, our approach can benefit from their validation. This paper outlines the initial steps we have taken in this research, and presents the challenges we face in the future.
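As a toy illustration of learning action models from execution traces, candidate preconditions can be induced by intersecting the facts observed before each occurrence of an action. This is a hypothetical simplification using set intersection, not the Behavior-Tree-based approach the paper describes; all names are illustrative.

```python
# Hypothetical sketch: induce candidate preconditions for each action by
# intersecting the state facts that held immediately before every execution
# of that action in the traces.

def candidate_preconditions(traces):
    """traces: iterable of (state_facts, action) pairs, where state_facts
    is a frozenset of ground facts observed before the action ran."""
    pre = {}
    for state, action in traces:
        pre[action] = state if action not in pre else pre[action] & state
    return pre

# Two executions of "open": only "door_closed" holds in both, so it
# survives as the induced precondition.
steps = [
    (frozenset({"door_closed", "at_door"}), "open"),
    (frozenset({"door_closed", "holding_key"}), "open"),
]
```

A human planning expert could then validate or prune the induced conditions, in the spirit of the human-in-the-loop step the abstract mentions.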

National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-110796 (URN)
Conference
ICAPS 2023, Workshop on Knowledge Engineering for Planning and Scheduling (KEPS 2023), Prague, Czech Republic, July 9-10, 2023
Funder
Swedish Foundation for Strategic Research
Available from: 2024-01-17 Created: 2024-01-17 Last updated: 2024-01-18. Bibliographically approved.
Buyukgoz, S., Grosinger, J., Chetouani, M. & Saffiotti, A. (2022). Two ways to make your robot proactive: Reasoning about human intentions or reasoning about possible futures. Frontiers in Robotics and AI, 9, Article ID 929267.
2022 (English)In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 9, article id 929267Article in journal (Refereed) Published
Abstract [en]

Robots sharing their space with humans need to be proactive to be helpful. Proactive robots can act on their own initiative in an anticipatory way to benefit humans. In this work, we investigate two ways to make robots proactive. One way is to recognize human intentions and to act to fulfill them, like opening the door that you are about to cross. The other way is to reason about possible future threats or opportunities and to act to prevent or to foster them, like recommending that you take an umbrella because rain has been forecast. In this article, we present approaches to realize these two types of proactive behavior. We then present an integrated system that can generate proactive robot behavior by reasoning on both factors: intentions and predictions. We illustrate our system on a sample use case including a domestic robot and a human. We first run this use case with the two separate proactive systems, intention-based and prediction-based, and then run it with our integrated system. The results show that the integrated system is able to consider a broader variety of aspects that are required for proactivity.
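The two proactivity channels above can be sketched in a few lines. This is a hypothetical simplification (all names such as `Intention`, `Forecast`, and `propose_actions` are illustrative); the paper's integrated system is far richer, and fostering opportunities is omitted here for brevity.

```python
# Hypothetical sketch of the two proactivity channels: acting on recognized
# human intentions, and acting to prevent predicted future threats.
from dataclasses import dataclass

@dataclass
class Intention:
    goal: str           # e.g. "exit_room"
    confidence: float   # recognizer's belief in this intention

@dataclass
class Forecast:
    event: str          # e.g. "rain"
    probability: float  # predicted likelihood
    desirable: bool     # opportunity (True) or threat (False)

# Illustrative mappings from situations to helpful robot actions.
INTENTION_ACTIONS = {"exit_room": "open_door"}
THREAT_ACTIONS = {"rain": "offer_umbrella"}

def propose_actions(intentions, forecasts, threshold=0.7):
    """Combine both channels: fulfill confident intentions and prevent
    likely threats."""
    actions = []
    for i in intentions:
        if i.confidence >= threshold and i.goal in INTENTION_ACTIONS:
            actions.append(INTENTION_ACTIONS[i.goal])
    for f in forecasts:
        if f.probability >= threshold and not f.desirable and f.event in THREAT_ACTIONS:
            actions.append(THREAT_ACTIONS[f.event])
    return actions
```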

Place, publisher, year, edition, pages
Frontiers Media S.A., 2022
Keywords
Autonomous robots, human intentions, human-centered AI, human–robot interaction, proactive agents, social robot
National Category
Robotics
Identifiers
urn:nbn:se:oru:diva-101051 (URN)10.3389/frobt.2022.929267 (DOI)000848417400001 ()36045640 (PubMedID)2-s2.0-85136846004 (Scopus ID)
Funder
European Commission, 765955 952026
Available from: 2022-09-02 Created: 2022-09-02 Last updated: 2022-09-13. Bibliographically approved.
Bontempi, G., Chavarriaga, R., De Canck, H., Girardi, E., Hoos, H., Kilbane-Dawe, I., . . . Maratea, M. (2021). The CLAIRE COVID-19 initiative: approach, experiences and recommendations. Ethics and Information Technology, 23(Suppl. 1), 127-133
2021 (English)In: Ethics and Information Technology, ISSN 1388-1957, E-ISSN 1572-8439, Vol. 23, no Suppl. 1, p. 127-133Article in journal (Refereed) Published
Abstract [en]

A volunteer effort by Artificial Intelligence (AI) researchers has shown it can deliver significant research outcomes rapidly to help tackle COVID-19. Within two months, CLAIRE's self-organising volunteers delivered the world's first comprehensive curated repository of COVID-19-related datasets useful for drug repurposing, drafted review papers on the role that CT/X-ray scan analysis and robotics could play, and progressed research in other areas. Given the pace required and the nature of voluntary efforts, the teams faced a number of challenges. These offer insights into how to better prepare for future volunteer scientific efforts and large-scale, data-dependent AI collaborations in general. We offer seven recommendations on how to best leverage such efforts and collaborations in the context of managing future crises.

Place, publisher, year, edition, pages
Springer, 2021
Keywords
Artificial intelligence, COVID-19, Emergency response
National Category
Software Engineering
Identifiers
urn:nbn:se:oru:diva-89619 (URN)10.1007/s10676-020-09567-7 (DOI)000616464600001 ()33584129 (PubMedID)2-s2.0-85101426290 (Scopus ID)
Available from: 2021-02-16 Created: 2021-02-16 Last updated: 2023-12-08. Bibliographically approved.
Thörn, O., Knudsen, P. & Saffiotti, A. (2020). Human-Robot Artistic Co-Creation: a Study in Improvised Robot Dance. In: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). Paper presented at 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2020), Virtual, Naples, Italy, August 31 - September 4, 2020 (pp. 845-850). IEEE
2020 (English)In: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), IEEE , 2020, p. 845-850Conference paper, Published paper (Refereed)
Abstract [en]

Joint artistic performance, like music, dance or acting, provides an excellent domain to observe the mechanisms of human-human collaboration. In this paper, we use this domain to study human-robot collaboration and co-creation. We propose a general model in which an AI system mediates the interaction between a human performer and a robotic performer. We then instantiate this model in a case study, implemented using fuzzy logic techniques, in which a human pianist performs jazz improvisations, and a robot dancer performs classical dancing patterns in harmony with the artistic moods expressed by the human. The resulting system has been evaluated in an extensive user study, and successfully demonstrated in public live performances.

Place, publisher, year, edition, pages
IEEE, 2020
Series
IEEE RO-MAN, ISSN 1944-9445
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-88686 (URN)10.1109/RO-MAN47096.2020.9223446 (DOI)000598571700122 ()2-s2.0-85090918508 (Scopus ID)978-1-7281-6075-7 (ISBN)
Conference
29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2020), Virtual, Naples, Italy, August 31 - September 4, 2020
Funder
EU, Horizon 2020, 825619
Available from: 2021-01-20 Created: 2021-01-20 Last updated: 2021-01-20. Bibliographically approved.
Tomic, S., Pecora, F. & Saffiotti, A. (2020). Learning Normative Behaviors through Abstraction. In: Giuseppe De Giacomo; Alejandro Catala; Bistra Dilkina; Michela Milano; Senén Barro; Alberto Bugarín; Jérôme Lang (Ed.), ECAI 2020. Paper presented at 24th European Conference on Artificial Intelligence (ECAI 2020), Santiago de Compostela, Spain, August 29 - September 8, 2020 (pp. 1547-1554). IOS Press, 325
2020 (English)In: ECAI 2020 / [ed] Giuseppe De Giacomo; Alejandro Catala; Bistra Dilkina; Michela Milano; Senén Barro; Alberto Bugarín; Jérôme Lang, IOS Press, 2020, Vol. 325, p. 1547-1554Conference paper, Published paper (Refereed)
Abstract [en]

Future robots should follow human social norms to be useful and accepted in human society. In this paper, we show how prior knowledge about social norms, represented using an existing normative framework, can be used to (1) guide reinforcement learning agents towards normative policies, and (2) re-use (transfer) learned policies in novel domains. The proposed method is not dependent on a particular reinforcement learning algorithm and can be seen as a means to learn abstract procedural knowledge based on declarative domain-independent semantic specifications.
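The algorithm-agnostic guidance idea can be sketched as reward shaping against an abstract norm predicate. This is a hypothetical simplification, not the paper's normative framework; the function names and the toy norm are illustrative.

```python
# Hypothetical sketch: prior normative knowledge as a reward-shaping term.
# Because the shaping only wraps the reward signal, it is independent of
# the particular RL algorithm, and the abstract predicate can be re-used
# (transferred) across domains.

def violates_norm(state, action):
    """Toy domain-independent norm: do not interrupt a speaking human."""
    return action == "interrupt" and state.get("human") == "speaking"

def shaped_reward(base_reward, state, action, penalty=1.0):
    """Subtract a norm-violation penalty from the task reward."""
    return base_reward - (penalty if violates_norm(state, action) else 0.0)
```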

Place, publisher, year, edition, pages
IOS Press, 2020
Series
Frontiers in Artificial Intelligence and Applications, ISSN 0922-6389, E-ISSN 1879-8314 ; 325
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-90586 (URN)10.3233/FAIA200263 (DOI)000650971301101 ()2-s2.0-85091786020 (Scopus ID)978-1-64368-100-9 (ISBN)978-1-64368-101-6 (ISBN)
Conference
24th European Conference on Artificial Intelligence (ECAI 2020), Santiago de Compostela, Spain, August 29 - September 8, 2020
Funder
EU, Horizon 2020, 825619 "AI4EU"
Available from: 2021-03-19 Created: 2021-03-19 Last updated: 2021-06-21. Bibliographically approved.
Saffiotti, A., Fogel, P., Knudsen, P., de Miranda, L. & Thörn, O. (2020). On human-AI collaboration in artistic performance. In: Alessandro Saffiotti, Luciano Serafini, Paul Lukowicz (Ed.), NeHuAI 2020: First International Workshop on New Foundations for Human-Centered AI: Proceedings of the First International Workshop on New Foundations for Human-Centered AI (NeHuAI) co-located with 24th European Conference on Artificial Intelligence (ECAI 2020). Paper presented at First International Workshop on New Foundations for Human-Centered AI (NeHuAI) co-located with 24th European Conference on Artificial Intelligence (ECAI 2020), Santiago de Compostela, Spain, September 4, 2020 (pp. 38-43). CEUR-WS, 2659
2020 (English)In: NeHuAI 2020: First International Workshop on New Foundations for Human-Centered AI: Proceedings of the First International Workshop on New Foundations for Human-Centered AI (NeHuAI) co-located with 24th European Conference on Artificial Intelligence (ECAI 2020) / [ed] Alessandro Saffiotti, Luciano Serafini, Paul Lukowicz, CEUR-WS , 2020, Vol. 2659, p. 38-43Conference paper, Published paper (Refereed)
Abstract [en]

Live artistic performance, like music, dance or acting, provides an excellent domain to observe and analyze the mechanisms of human-human collaboration. In this note, we use this domain to study human-AI collaboration. We propose a model for collaborative artistic performance, in which an AI system mediates the interaction between a human and an artificial performer. We then instantiate this model in three case studies involving different combinations of human musicians, human dancers, robot dancers, and a virtual drummer. All case studies have been demonstrated in public live performances involving improvised artistic creation, with audiences of up to 250 people. We speculate that our model can be used to enable human-AI collaboration beyond the domain of artistic performance. 

Place, publisher, year, edition, pages
CEUR-WS, 2020
Series
CEUR Workshop Proceedings, ISSN 1613-0073
Keywords
AI systems, Artistic creations, Case-studies, Artificial intelligence
National Category
Computer Sciences; Performing Arts; Musicology
Identifiers
urn:nbn:se:oru:diva-86062 (URN)2-s2.0-85090911355 (Scopus ID)
Conference
First International Workshop on New Foundations for Human-Centered AI (NeHuAI) co-located with 24th European Conference on Artificial Intelligence (ECAI 2020), Santiago de Compostela, Spain, September 4, 2020
Funder
EU, Horizon 2020, 825619 "AI4EU"
Note

CC BY 4.0

Available from: 2020-09-28 Created: 2020-09-28 Last updated: 2023-05-29. Bibliographically approved.
Faridghasemnia, M., Nardi, D. & Saffiotti, A. (2020). Towards Abstract Relational Learning in Human Robot Interaction. In: CognitIve RobotiCs for intEraction (CIRCE). Paper presented at 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2020), Virtual Conference, 31 Aug. - 4 Sept., 2020.
2020 (English)In: CognitIve RobotiCs for intEraction (CIRCE), 2020Conference paper, Published paper (Refereed)
Abstract [en]

Humans have a rich representation of the entities in their environment. Entities are described by their attributes, and entities that share attributes are often semantically related. For example, if two books have "Natural Language Processing" as the value of their `title' attribute, we can expect that their `topic' attribute will also be equal, namely, "NLP". Humans tend to generalize such observations, and infer sufficient conditions under which the `topic' attribute of any entity is "NLP". If robots are to interact successfully with humans, they need to represent entities, attributes, and generalizations in a similar way. This results in a contextualized cognitive agent that can adapt its understanding, where context provides sufficient conditions for a correct understanding. In this work, we address the problem of how to obtain these representations through human-robot interaction. We integrate visual perception and natural language input to incrementally build a semantic model of the world, and then use inductive reasoning to infer logical rules that capture generic semantic relations, true in this model. These relations can be used to enrich the human-robot interaction, to populate a knowledge base with inferred facts, or to remove uncertainty in the robot's sensory inputs.
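The book example above can be sketched as a trivial rule-induction step over entity-attribute-value facts. This is a hypothetical illustration, not the system's inductive reasoner; `induce_rule` and the sample data are made up for this sketch.

```python
# Hypothetical sketch of the book example: induce the rule
# "title = 'Natural Language Processing' => topic = 'NLP'" by checking
# that all entities matching the condition agree on the target attribute.

def induce_rule(entities, cond_attr, cond_value, target_attr):
    """Return the shared target value if every entity whose cond_attr
    equals cond_value agrees on target_attr; otherwise None."""
    values = {e[target_attr] for e in entities if e.get(cond_attr) == cond_value}
    return values.pop() if len(values) == 1 else None

books = [
    {"title": "Natural Language Processing", "topic": "NLP"},
    {"title": "Natural Language Processing", "topic": "NLP"},
    {"title": "Mobile Robotics", "topic": "Robotics"},
]
```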

Keywords
Relational learning, Human Robot Interaction, Natural Language Processing, Inductive logic, Entity Attribute Value model, Grounding, Scene Understanding
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-89142 (URN)
Conference
29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2020), Virtual Conference, 31 Aug. - 4 Sept., 2020.
Available from: 2021-01-31 Created: 2021-01-31 Last updated: 2021-04-20. Bibliographically approved.
Bacciu, D., Di Rocco, M., Dragone, M., Gallicchio, C., Micheli, A. & Saffiotti, A. (2019). An ambient intelligence approach for learning in smart robotic environments. Computational intelligence, 35(4), 1060-1087
2019 (English)In: Computational intelligence, ISSN 0824-7935, E-ISSN 1467-8640, Vol. 35, no 4, p. 1060-1087Article in journal (Refereed) Published
Abstract [en]

Smart robotic environments combine traditional (ambient) sensing devices and mobile robots. This combination extends the type of applications that can be considered, reduces their complexity, and enhances the individual values of the devices involved by enabling new services that cannot be performed by a single device. To reduce the amount of preparation and preprogramming required for their deployment in real-world applications, it is important to make these systems self-adapting. The solution presented in this paper is based upon a type of compositional adaptation where (possibly multiple) plans of actions are created through planning and involve the activation of pre-existing capabilities. All the devices in the smart environment participate in a pervasive learning infrastructure, which is exploited to recognize which plans of actions are most suited to the current situation. The system is evaluated in experiments run in a real domestic environment, showing its ability to proactively and smoothly adapt to subtle changes in the environment and in the habits and preferences of their user(s), in presence of appropriately defined performance measuring functions.

Place, publisher, year, edition, pages
Wiley-Blackwell, 2019
Keywords
Adaptive planning, ambient intelligence, recurrent neural networks, robotic ecology, self-adaptive system, smart environment
National Category
Computer Vision and Robotics (Autonomous Systems); Computer Sciences
Identifiers
urn:nbn:se:oru:diva-75959 (URN)10.1111/coin.12233 (DOI)000481324700001 ()2-s2.0-85070095236 (Scopus ID)
Note

Funding Agency: European Commission FP7-ICT-269914
Available from: 2019-08-29 Created: 2019-08-29 Last updated: 2023-12-08. Bibliographically approved.
Sgorbissa, A., Papadopoulos, I., Papadopoulos, C., Saffiotti, A., Pandey, A. K., Merton, L., . . . Mastrolonardo, R. (2019). CARESSES: The Flower that Taught Robots about Culture. In: HRI '19: 2019 14TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION. Paper presented at 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2019), Daegu, South Korea, March 11-14, 2019 (pp. 371-371). IEEE, Article ID 8673086.
2019 (English)In: HRI '19: 2019 14TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, IEEE , 2019, p. 371-371, article id 8673086Conference paper, Published paper (Refereed)
Abstract [en]

The video describes the novel concept of "culturally competent robotics", which is the main focus of the project CARESSES (Culturally-Aware Robots and Environmental Sensor Systems for Elderly Support). CARESSES is a multidisciplinary project whose goal is to design the first socially assistive robots that can adapt to the culture of the older people they are taking care of. Socially assistive robots are required to help users in many ways, including reminding them to take their medication, encouraging them to keep active, and helping them keep in touch with family and friends. The video describes a new generation of robots that will perform their actions with attention to the older person's customs, cultural practices and individual preferences.

Place, publisher, year, edition, pages
IEEE, 2019
Series
ACM IEEE International Conference on Human-Robot Interaction, ISSN 2167-2121, E-ISSN 2167-2148
Keywords
Culturally competent robots, elderly care
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-74427 (URN)10.1109/HRI.2019.8673086 (DOI)000467295400053 ()2-s2.0-85064002082 (Scopus ID)978-1-5386-8555-6 (ISBN)
Conference
14th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2019), Daegu, South Korea, March 11-14, 2019
Available from: 2019-05-28 Created: 2019-05-28 Last updated: 2019-05-28. Bibliographically approved.
Bruno, B., Recchiuto, C. T., Papadopoulos, I., Saffiotti, A., Koulouglioti, C., Menicatti, R., . . . Sgorbissa, A. (2019). Knowledge Representation for Culturally Competent Personal Robots: Requirements, Design Principles, Implementation, and Assessment. International Journal of Social Robotics, 11(3), 515-538
2019 (English)In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 11, no 3, p. 515-538Article in journal (Refereed) Published
Abstract [en]

Culture, intended as the set of beliefs, values, ideas, language, norms and customs which compose a person's life, is an essential element for any robot for personal assistance to know. Culture, intended as that person's background, can be an invaluable source of information to drive and speed up the process of discovering and adapting to the person's habits, preferences and needs. This article discusses the requirements posed by cultural competence on the knowledge management system of a robot. We propose a framework for cultural knowledge representation that relies on (i) a three-layer ontology for storing concepts of relevance, culture-specific information and statistics, and person-specific information and preferences; (ii) an algorithm for the acquisition of person-specific knowledge, which uses culture-specific knowledge to drive the search; (iii) a Bayesian Network for speeding up the adaptation to the person by propagating the effects of acquiring one specific piece of information onto interconnected concepts. We have conducted a preliminary evaluation of the framework involving 159 Italian and German volunteers and considering 122 habits, attitudes and social norms.
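The propagation step (iii) can be illustrated with a toy belief update: confirming one person-specific concept raises the belief in concepts linked to it. This is a hypothetical stand-in for the paper's Bayesian Network, with all names and numbers invented for the sketch.

```python
# Hypothetical sketch of step (iii): confirming one concept propagates a
# bounded belief increase to interconnected concepts, so fewer questions
# are needed to adapt to the person.

def update_beliefs(beliefs, links, observed, boost=0.2):
    """Set the observed concept to certainty and nudge linked concepts up."""
    new = dict(beliefs)
    new[observed] = 1.0
    for neighbor in links.get(observed, []):
        new[neighbor] = min(1.0, new[neighbor] + boost)
    return new
```

For example, confirming that a person drinks tea would also raise the prior belief that they observe an afternoon-tea habit, before the robot ever asks about it.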

Place, publisher, year, edition, pages
Springer, 2019
Keywords
Culture-aware robotics, Companion robot, Knowledge representation
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-75376 (URN)10.1007/s12369-019-00519-w (DOI)000474401100010 ()2-s2.0-85068880768 (Scopus ID)
Funder
EU, Horizon 2020, 737858
Available from: 2019-07-29 Created: 2019-07-29 Last updated: 2019-07-29. Bibliographically approved.