Örebro University Publications

Publications (10 of 195)
Aregbede, V., Abraham, S. S., Persson, A., Längkvist, M. & Loutfi, A. (2024). Affordance-Based Goal Imagination for Embodied AI Agents. In: 2024 IEEE International Conference on Development and Learning (ICDL). Paper presented at IEEE International Conference on Development and Learning (ICDL 2024), Austin, Texas, USA, May 20-23, 2024 (pp. 1-6). IEEE
2024 (English) In: 2024 IEEE International Conference on Development and Learning (ICDL), IEEE, 2024, p. 1-6. Conference paper, Published paper (Refereed)
Abstract [en]

Goal imagination in robotics is an emerging concept and involves the capability to automatically generate realistic goals, which, in turn, requires the assessment of the feasibility of transitioning from the current conditions of an initial scene to the desired goal state. Existing research has explored the utilization of diverse image-generative models to create images depicting potential goal states based on the current state and instructions. In this paper, we illustrate the limitations of current state-of-the-art image-generative models in accurately assessing the feasibility of specific actions in particular situations. Consequently, we present how integrating large language models, which possess profound knowledge of real-world objects and affordances, can enhance the performance of image-generative models in discerning plausible from implausible actions and simulating the outcomes of actions in a given context. This will be a step towards achieving the pragmatic goal of imagination in robotics.

Place, publisher, year, edition, pages
IEEE, 2024
Keywords
Embodiment, Affordance
National Category
Computer graphics and computer vision
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-118193 (URN)
10.1109/ICDL61372.2024.10644764 (DOI)
001338553000023 ()
2-s2.0-85203835311 (Scopus ID)
9798350348552 (ISBN)
9798350348569 (ISBN)
Conference
IEEE International Conference on Development and Learning (ICDL 2024), Austin, Texas, USA, May 20-23, 2024
Funder
Swedish Research Council, 2021-05229
Available from: 2025-01-09 Created: 2025-01-09 Last updated: 2025-02-07. Bibliographically approved
Morillo-Mendez, L., Schrooten, M. G. S., Loutfi, A. & Martinez Mozos, O. (2024). Age-Related Differences in the Perception of Robotic Referential Gaze in Human-Robot Interaction. International Journal of Social Robotics, 16(6), 1069-1081
2024 (English) In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 16, no 6, p. 1069-1081. Article in journal (Refereed), Published
Abstract [en]

There is an increased interest in using social robots to assist older adults during their daily life activities. As social robots are designed to interact with older users, it becomes relevant to study these interactions under the lens of social cognition. Gaze following, the social ability to infer where other people are looking, deteriorates with older age. Therefore, the referential gaze from robots might not be an effective social cue to indicate spatial locations to older users. In this study, we explored the performance of older adults, middle-aged adults, and younger controls in a task assisted by the referential gaze of a Pepper robot. We examined age-related differences in task performance and in self-reported social perception of the robot. Our main findings show that referential gaze from a robot benefited task performance, although the magnitude of this facilitation was lower for older participants. Moreover, perceived anthropomorphism of the robot varied less as a result of its referential gaze in older adults. This research supports that social robots, even if limited in their gazing capabilities, can be effectively perceived as social entities. Additionally, this research suggests that robotic social cues, usually validated with young participants, might be less optimal signs for older adults.

Supplementary Information: The online version contains supplementary material available at 10.1007/s12369-022-00926-6.

Place, publisher, year, edition, pages
Springer, 2024
Keywords
Aging, Gaze following, Human-robot interaction, Non-verbal cues, Referential gaze, Social cues
National Category
Gerontology, specialising in Medical and Health Sciences; Robotics and automation
Identifiers
urn:nbn:se:oru:diva-101615 (URN)
10.1007/s12369-022-00926-6 (DOI)
000857896500001 ()
36185773 (PubMedID)
2-s2.0-85138680591 (Scopus ID)
Funder
European Commission, 754285; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

Funding agency:

RobWell project - Spanish Ministerio de Ciencia, Innovacion y Universidades RTI2018-095599-A-C22

Available from: 2022-10-04 Created: 2022-10-04 Last updated: 2025-02-05. Bibliographically approved
Rahaman, G. M., Längkvist, M. & Loutfi, A. (2024). Deep learning based automated estimation of urban green space index from satellite image: A case study. Urban Forestry & Urban Greening, 97, Article ID 128373.
2024 (English) In: Urban Forestry & Urban Greening, ISSN 1618-8667, E-ISSN 1610-8167, Vol. 97, article id 128373. Article in journal (Refereed), Published
Abstract [en]

The green area factor model is a crucial tool for conserving and creating urban greenery and ecosystem services within neighborhood land. This model serves as a valuable index, streamlining the planning, assessment, and comparison of local-scale green infrastructures. However, conventional on-site measurements of the green area factor are resource intensive. In response, this study pioneers a computational approach that integrates ecological and social dimensions to estimate the green area factor. Employing satellite remote sensing and advanced deep learning techniques, the methodology utilizes satellite orthophotos of urban areas subjected to semantic segmentation, identifying and categorizing diverse green elements. Ground truths are established through on-site measurements of green area factors and satellite orthophotos from benchmarking sites in Örebro, Sweden. Results reveal an 82.0% average F1-score for semantic segmentations, signifying a favourable correlation between computationally estimated and measured green area factors. The proposed methodology has the potential to be adapted to various urban settings. In essence, this research introduces a promising, cost-effective solution for assessing urban greenness, particularly beneficial for urban administrators and planners aiming for insightful and comprehensive green strategies in city planning.

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Deep convolutional neural networks (CNN), Green infrastructure, Green index, Semantic segmentation, Urban greenery, Urban planning
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:oru:diva-114996 (URN)
10.1016/j.ufug.2024.128373 (DOI)
001247062200001 ()
2-s2.0-85194227352 (Scopus ID)
Funder
Region Örebro County, 20294202
Available from: 2024-07-25 Created: 2024-07-25 Last updated: 2024-07-25. Bibliographically approved
Mironenko, O., Banaee, H. & Loutfi, A. (2024). Evaluation of Human Interaction with Fleets of Automated Vehicles in Dynamic Underground Mining Environments. In: Angelo Ferrando; Rafael C. Cardoso (Ed.), Agents and Robots for reliable Engineered Autonomy: 4th Workshop, AREA 2024, Santiago de Compostela, Spain, October 19, 2024, Proceedings. Paper presented at Agents and Robots for reliable Engineered Autonomy (AREA 2024), in conjunction with ECAI 2024, Santiago de Compostela, Spain, October 19, 2024 (pp. 54-72). Springer
2024 (English) In: Agents and Robots for reliable Engineered Autonomy: 4th Workshop, AREA 2024, Santiago de Compostela, Spain, October 19, 2024, Proceedings / [ed] Angelo Ferrando; Rafael C. Cardoso, Springer, 2024, p. 54-72. Conference paper, Published paper (Refereed)
Abstract [en]

This study investigates the complexities of Mixed Traffic with Fleets of Automated Vehicles (MTF-AVs) in underground mining environments characterized by confined spaces, limited visibility, and strict navigation requirements. The research focuses on integrating human-controlled vehicles into coordinated AV fleets, addressing the unpredictable interactions that arise from human behaviour. The ORU coordination framework, originally designed for a fully autonomous system, is adapted for mixed traffic scenarios to evaluate the impact of human behaviour on system efficiency and safety. Through a series of simulations, the study explores how fleet coordination algorithms adapt to human driver behaviour. These simulations demonstrate that human error and rule violations significantly reduce performance, increasing safety risks and decreasing efficiency. Findings emphasize the need for advanced coordination algorithms that dynamically adapt to unpredictable human behaviour in MTF-AVs. Such algorithms would optimize interactions between automated and human-controlled vehicles, enhancing both safety and efficiency in these complex and dynamic environments. Future research will further explore the influence of human behaviour on the coordination system and develop advanced coordination algorithms with methods to evaluate these interactions effectively.

Place, publisher, year, edition, pages
Springer, 2024
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937
Keywords
Human behaviour in driving, Mixed traffic with fleets of automated vehicles, Centralised coordination, Underground mining
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-117072 (URN)
10.1007/978-3-031-73180-8_4 (DOI)
9783031731808 (ISBN)
9783031731792 (ISBN)
Conference
Agents and Robots for reliable Engineered Autonomy (AREA 2024), in conjunction with ECAI 2024, Santiago de Compostela, Spain, October 19, 2024
Funder
Knowledge Foundation, 20190128
Available from: 2024-10-30 Created: 2024-10-30 Last updated: 2024-10-31. Bibliographically approved
Neelakantan, S., Hansson, A., Norell, J., Schött, J., Längkvist, M. & Loutfi, A. (2024). Machine Learning for Lithology Analysis using a Multi-Modal Approach of Integrating XRF and XCT data. In: 14th Scandinavian Conference on Artificial Intelligence SCAI 2024, June 10-11, 2024, Jönköping, Sweden. Paper presented at 14th Scandinavian Conference on Artificial Intelligence (SCAI 2024), Jönköping, Sweden, June 10-11, 2024.
2024 (English) In: 14th Scandinavian Conference on Artificial Intelligence SCAI 2024, June 10-11, 2024, Jönköping, Sweden, 2024. Conference paper, Published paper (Refereed)
Abstract [en]

We explore the use of various machine learning (ML) models for classifying lithologies utilizing data from X-ray fluorescence (XRF) and X-ray computed tomography (XCT). Typically, lithologies are identified over several meters, which restricts the use of ML models due to limited training data. To address this issue, we augment the original interval dataset, where lithologies are marked over extensive sections, into finer segments of 10 cm, to produce a high-resolution dataset with a vastly increased sample size. Additionally, we examine the impact of adjacent lithologies on building a more generalized ML model. We also demonstrate that combining XRF and XCT data leads to improved classification accuracy compared to using only XRF data, which is the common practice in current studies, or solely relying on XCT data.

National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-115219 (URN)
10.3384/ecp208021 (DOI)
Conference
14th Scandinavian Conference on Artificial Intelligence (SCAI 2024), Jönköping, Sweden, June 10-11, 2024
Funder
Knowledge Foundation, Dnr:20190128
Available from: 2024-08-06 Created: 2024-08-06 Last updated: 2024-08-12. Bibliographically approved
Neelakantan, S., Norell, J., Hansson, A., Längkvist, M. & Loutfi, A. (2024). Neural network approach for shape-based euhedral pyrite identification in X-ray CT data with adversarial unsupervised domain adaptation. Applied Computing and Geosciences, 21, Article ID 100153.
2024 (English) In: Applied Computing and Geosciences, E-ISSN 2590-1974, Vol. 21, article id 100153. Article in journal (Refereed), Published
Abstract [en]

We explore an attenuation and shape-based identification of euhedral pyrites in high-resolution X-ray Computed Tomography (XCT) data using deep neural networks. To deal with the scarcity of annotated data we generate a complementary training set of synthetic images. To investigate and address the domain gap between the synthetic and XCT data, several deep learning models, with and without domain adaption, are trained and compared. We find that a model trained on a small set of human annotations, while displaying over-fitting, can rival the human annotators. The unsupervised domain adaptation approaches are successful in bridging the domain gap, which significantly improves their performance. A domain-adapted model, trained on a dataset that fuses synthetic and real data, is the overall best-performing model. This highlights the possibility of using synthetic datasets for the application of deep learning in mineralogy.

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Mineral identification, Unsupervised domain adaptation, Deep convolutional neural network, Semantic segmentation, Euhedral pyrites
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-111189 (URN)
10.1016/j.acags.2023.100153 (DOI)
001155327400001 ()
2-s2.0-85182272087 (Scopus ID)
Funder
Knowledge Foundation, 20190128
Note

This work has been supported by the Industrial Graduate School Collaborative AI and Robotics funded by the Swedish Knowledge Foundation Dnr:20190128 and in collaboration with the industrial partner Orexplore Technologies.

Available from: 2024-01-29 Created: 2024-01-29 Last updated: 2024-02-14. Bibliographically approved
Gutiérrez Maestro, E., Banaee, H. & Loutfi, A. (2024). Towards Addressing Label Ambiguity in Sequential Emotional Responses Through Distribution Learning. In: 12th International Conference on Affective Computing and Intelligent Interactions, Glasgow, United Kingdom, September 15-18, 2024. Paper presented at 12th International Conference on Affective Computing and Intelligent Interaction (ACII 2024), Glasgow, UK, September 15-18, 2024.
2024 (English) In: 12th International Conference on Affective Computing and Intelligent Interactions, Glasgow, United Kingdom, September 15-18, 2024, 2024. Conference paper, Published paper (Refereed)
Abstract [en]

This work highlights the challenge of labeling data with single-label categories, as there may be ambiguity in the assigned labels. This ambiguity arises when a data sample, which can be influenced by previous affective events (known as priming), is labeled with a single-label category. Label distribution learning (LDL) is proposed as an approach to contend with the ambiguity among labels. This approach has been relatively unexplored in the field of affective computing. In this work, an experiment is designed to explore the benefits of employing LDL, specifically using the SEED and SEED-V datasets. In these datasets, different emotions are induced by exposing participants to a sequence of stimuli (video-clip watching). However, these datasets provide single labels, where each data point corresponds to one affective state or emotion. Due to the lack of label distributions within existing benchmarks, label enhancement serves as a preparatory step, whose goal is to compute label distributions from the feature space and single labels before training a label distribution learning model. Experimental results show that the LDL approach reduces confusion with respect to the emotion induced in the previous trial. Distribution learning is an approach that can help to further improve the prediction of affect, which to date remains a difficult and ambiguous concept to label.

National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-117858 (URN)
Conference
12th International Conference on Affective Computing and Intelligent Interaction (ACII 2024), Glasgow, UK, September 15-18, 2024
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2025-01-17 Created: 2025-01-17 Last updated: 2025-01-17. Bibliographically approved
Akalin, N., Kiselev, A., Kristoffersson, A. & Loutfi, A. (2023). A Taxonomy of Factors Influencing Perceived Safety in Human-Robot Interaction. International Journal of Social Robotics, 15, 1993-2004
2023 (English) In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 15, p. 1993-2004. Article in journal (Refereed), Published
Abstract [en]

Safety is a fundamental prerequisite that must be addressed before any interaction of robots with humans. Safety has generally been understood and studied as the physical safety of robots in human-robot interaction, whereas how humans perceive these robots has received less attention. Physical safety is a necessary condition for safe human-robot interaction. However, it is not a sufficient condition. A robot that is safe by hardware and software design can still be perceived as unsafe. This article focuses on perceived safety in human-robot interaction. We identified six factors that are closely related to perceived safety based on the literature and the insights obtained from our user studies. The identified factors are the context of robot use, comfort, experience and familiarity with robots, trust, the sense of control over the interaction, and transparent and predictable robot actions. We then conducted a literature review to identify the robot-related factors that influence perceived safety. Based on the literature, we propose a taxonomy that includes human-related and robot-related factors. These factors can help researchers to quantify the perceived safety of humans during their interactions with robots. The quantification of perceived safety can yield computational models that would allow mitigating psychological harm.

Place, publisher, year, edition, pages
Springer, 2023
Keywords
Perceived safety, Human-robot interaction, Comfort, Sense of control, Trust
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:oru:diva-107200 (URN)
10.1007/s12369-023-01027-8 (DOI)
001024550100001 ()
2-s2.0-85164166548 (Scopus ID)
Funder
Örebro University
Available from: 2023-08-01 Created: 2023-08-01 Last updated: 2025-02-07. Bibliographically approved
Landin, C., Zhao, X., Längkvist, M. & Loutfi, A. (2023). An Intelligent Monitoring Algorithm to Detect Dependencies between Test Cases in the Manual Integration Process. In: 2023 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW): . Paper presented at 16th IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW 2023), Dublin, Ireland, April 16-20, 2023 (pp. 353-360). IEEE
2023 (English) In: 2023 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), IEEE, 2023, p. 353-360. Conference paper, Published paper (Refereed)
Abstract [en]

Finding a balance between meeting test coverage and minimizing testing resources is always a challenging task in both software (SW) and hardware (HW) testing. Therefore, employing machine learning (ML) techniques for test optimization purposes has received a great deal of attention. However, utilizing machine learning techniques frequently requires large volumes of valuable data for training. Data gathering is hard and expensive, and manual data analysis takes most of the time needed to locate the source of a failure once it has occurred, in so-called fault localization. Moreover, by applying ML techniques to historical production test data, relevant and irrelevant features can be found using strength association, such as correlation- and mutual information-based methods. In this paper, we use production data records of 100 units of a 5G radio product containing more than 7000 test results. The obtained results show that insightful information can be found after clustering the test results by their strength association, mostly linear and monotonic, which would otherwise be challenging to identify by traditional manual data analysis methods.

Place, publisher, year, edition, pages
IEEE, 2023
Series
IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW, ISSN 2159-4848
Keywords
Test Optimization, Machine Learning, Fault Localization, Dependence Analysis, Mutual Information
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-107727 (URN)
10.1109/ICSTW58534.2023.00066 (DOI)
001009223100052 ()
2-s2.0-85163076493 (Scopus ID)
9798350333350 (ISBN)
9798350333367 (ISBN)
Conference
16th IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW 2023), Dublin, Ireland, April 16-20, 2023
Funder
Knowledge Foundation; Vinnova
Available from: 2023-08-28 Created: 2023-08-28 Last updated: 2023-10-05. Bibliographically approved
Liao, Q., Sun, D., Zhang, S., Loutfi, A. & Andreasson, H. (2023). Fuzzy Cluster-based Group-wise Point Set Registration with Quality Assessment. IEEE Transactions on Image Processing, 32, 550-564
2023 (English) In: IEEE Transactions on Image Processing, ISSN 1057-7149, E-ISSN 1941-0042, Vol. 32, p. 550-564. Article in journal (Refereed), Published
Abstract [en]

This article studies group-wise point set registration and makes the following contributions: "FuzzyGReg", a new fuzzy cluster-based method to register multiple point sets jointly, and "FuzzyQA", the associated quality assessment to check registration accuracy automatically. Given a group of point sets, FuzzyGReg creates a model of fuzzy clusters and treats all the point sets equally as the elements of the fuzzy clusters. Then, the group-wise registration is turned into a fuzzy clustering problem. To resolve this problem, FuzzyGReg applies a fuzzy clustering algorithm to identify the parameters of the fuzzy clusters while jointly transforming all the point sets to achieve an alignment. Next, based on the identified fuzzy clusters, FuzzyQA calculates the spatial properties of the transformed point sets and then checks the alignment accuracy by comparing the similarity degrees of the spatial properties of the point sets. When a local misalignment is detected, a local re-alignment is performed to improve accuracy. The proposed method is cost-efficient and convenient to implement. In addition, it provides reliable quality assessments in the absence of ground truth and user intervention. In the experiments, different point sets are used to test the proposed method and make comparisons with state-of-the-art registration techniques. The experimental results demonstrate the effectiveness of our method. The code is available at https://gitsvn-nt.oru.se/qianfang.liao/FuzzyGRegWithQA

Place, publisher, year, edition, pages
IEEE, 2023
Keywords
Quality assessment, Measurement, Three-dimensional displays, Registers, Probability distribution, Point cloud compression, Optimization, Group-wise registration, registration quality assessment, joint alignment, fuzzy clusters, 3D point sets
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:oru:diva-102755 (URN)
10.1109/TIP.2022.3231132 (DOI)
000908058200002 ()
Funder
Vinnova, 2019-05878; Swedish Research Council Formas, 2019-02264
Available from: 2022-12-16 Created: 2022-12-16 Last updated: 2025-02-07. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-3122-693X