Örebro University Publications (oru.se)
Bhatt, Mehul, Professor (ORCID: orcid.org/0000-0002-6290-5492)
Publications (10 of 133)
Chaudhri, V. K., Baru, C., Bennett, B., Bhatt, M., Cassel, D., Cohn, A. G., . . . Witbrock, M. (2025). A community-driven vision for a new knowledge resource for AI. The AI Magazine, 46(4), Article ID e70035.
A community-driven vision for a new knowledge resource for AI
2025 (English). In: The AI Magazine, ISSN 0738-4602, E-ISSN 2371-9621, Vol. 46, no. 4, article ID e70035. Journal article (Refereed) Published
Abstract [en]

The long-standing goal of creating a comprehensive, multi-purpose knowledge resource, reminiscent of the 1984 Cyc project, still persists in AI. Despite the success of knowledge resources like WordNet, ConceptNet, Wolfram|Alpha and other commercial knowledge graphs, verifiable, general-purpose, widely available sources of knowledge remain a critical deficiency in AI infrastructure. Large language models struggle due to knowledge gaps; robotic planning lacks necessary world knowledge; and the detection of factually false information relies heavily on human expertise. What kind of knowledge resource is most needed in AI today? How can modern technology shape its development and evaluation? A recent AAAI workshop gathered over 50 researchers to explore these questions. This paper synthesizes our findings and outlines a community-driven vision for a new knowledge infrastructure. In addition to leveraging contemporary advances in knowledge representation and reasoning, one promising idea is to build an open engineering framework to exploit knowledge modules effectively within the context of practical applications. Such a framework should include sets of conventions and social structures that are adopted by contributors.

Place, publisher, year, edition, pages
John Wiley & Sons, 2025
National subject category
Artificial Intelligence
Identifiers
urn:nbn:se:oru:diva-124792 (URN), 10.1002/aaai.70035 (DOI), 001599790300001 ()
Note
Funding agency: National Science Foundation (NSF) 2514820
Available from: 2025-11-05. Created: 2025-11-05. Last updated: 2025-11-05. Bibliographically reviewed
Kondyli, V. & Bhatt, M. (2024). Effects of Temporal Load on Attentional Engagement: Preliminary Outcomes with a Change Detection Task in a VR Setting. In: Rachel McDonnell; Lauren Buck; Julien Pettré; Manfred Lau; Göksu Yamaç (Ed.), ACM Symposium on Applied Perception 2024: Proceedings. Paper presented at ACM Symposium on Applied Perception (SAP'24), Dublin, Ireland, August 30-31, 2024. Association for Computing Machinery, Article 18.
Effects of Temporal Load on Attentional Engagement: Preliminary Outcomes with a Change Detection Task in a VR Setting
2024 (English). In: ACM Symposium on Applied Perception 2024: Proceedings / [ed] Rachel McDonnell; Lauren Buck; Julien Pettré; Manfred Lau; Göksu Yamaç, Association for Computing Machinery, 2024, article ID 18. Conference paper, Published paper (Refereed)
Abstract [en]

Situation awareness in driving involves detection of events and environmental changes. Failure in detection can be attributed to the density of these events in time, amongst other factors. In this research, we explore the effect of temporal proximity and event duration in a change detection task during driving in VR. We replicate real-world interaction events in the streetscape and systematically manipulate temporal proximity among them. The results demonstrate that events occurring simultaneously deteriorate detection performance, while performance improves as the temporal gap increases. Moreover, attentional engagement with an event of 5-10 sec leads to compromised perception of the following event. We discuss the importance of naturalistic embodied perception studies for evaluating driving assistance and driver education.

Place, publisher, year, edition, pages
Association for Computing Machinery, 2024
Series
SAP '24
National subject category
Computer and Information Sciences; Psychology
Research subject
Computer Science; Psychology; Human-Computer Interaction
Identifiers
urn:nbn:se:oru:diva-115594 (URN), 10.1145/3675231.3687149 (DOI), 001325273000018 (), 2-s2.0-85205102252 (Scopus ID), 9798400710612 (ISBN)
Conference
ACM Symposium on Applied Perception (SAP'24), Dublin, Ireland, August 30-31, 2024
Research funder
Vetenskapsrådet, 2022-02960
Available from: 2024-08-23. Created: 2024-08-23. Last updated: 2024-11-26. Bibliographically reviewed
Bhatt, M. (2024). Neurosymbolic Visual Commonsense: On Integrated Reasoning and Learning about Space and Motion in Embodied Multimodal Interaction. In: Parisa Kordjamshidi; Jae Hee Lee; Mehul Bhatt; Michael Sioutis; Zhiguo Long (Ed.), Proceedings of the 3rd International Workshop on Spatio-Temporal Reasoning and Learning (STRL 2024) co-located with the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024), Jeju island, South Korea, August 5, 2024: . Paper presented at 3rd International Workshop on Spatio-Temporal Reasoning and Learning (STRL 2024) co-located with the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024), Jeju island, South Korea, August 5, 2024. Technical University of Aachen, 3827
Neurosymbolic Visual Commonsense: On Integrated Reasoning and Learning about Space and Motion in Embodied Multimodal Interaction
2024 (English). In: Proceedings of the 3rd International Workshop on Spatio-Temporal Reasoning and Learning (STRL 2024) co-located with the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024), Jeju island, South Korea, August 5, 2024 / [ed] Parisa Kordjamshidi; Jae Hee Lee; Mehul Bhatt; Michael Sioutis; Zhiguo Long, Technical University of Aachen, 2024, Vol. 3827. Conference paper, Published paper (Refereed)
Abstract [en]

We present recent and emerging advances in computational cognitive vision addressing artificial visual and spatial intelligence at the interface of (spatial) language, (spatial) logic and (spatial) cognition research. With a primary focus on explainable sensemaking of dynamic visuospatial imagery, we highlight the (systematic and modular) integration of methods from knowledge representation and reasoning, computer vision, spatial informatics, and computational cognitive modelling. A key emphasis here is on generalised (declarative) neurosymbolic reasoning & learning about space, motion, actions, and events relevant to embodied multimodal interaction under ecologically valid naturalistic settings in everyday life. Practically, this translates to general-purpose mechanisms for computational visual commonsense encompassing capabilities such as (neurosymbolic) semantic question-answering, relational spatio-temporal learning, visual abduction etc.

The presented work is motivated by and demonstrated in the applied backdrop of areas as diverse as autonomous driving, cognitive robotics, design of digital visuoauditory media, and behavioural visual perception research in cognitive psychology and neuroscience. More broadly, our emerging work is driven by an interdisciplinary research mindset addressing human-centred responsible AI through a methodological confluence of AI, Vision, Psychology, and (human-factors centred) Interaction Design.

Place, publisher, year, edition, pages
Technical University of Aachen, 2024
Series
CEUR Workshop Proceedings, E-ISSN 1613-0073 ; 3827
Keywords
Cognitive vision, Knowledge representation and reasoning (KR), Machine Learning, Integration of reasoning & learning, Commonsense reasoning, Declarative spatial reasoning, Relational Learning, Computational cognitive modelling, Human-Centred AI, Responsible AI
National subject category
Computer Science; Human-Computer Interaction (Interaction Design); Computer Graphics and Computer Vision
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-117535 (URN), 2-s2.0-85210239908 (Scopus ID)
Conference
3rd International Workshop on Spatio-Temporal Reasoning and Learning (STRL 2024) co-located with the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024), Jeju island, South Korea, August 5, 2024
Research funder
Stiftelsen för strategisk forskning (SSF); Vetenskapsrådet
Available from: 2024-12-03. Created: 2024-12-03. Last updated: 2025-02-01. Bibliographically reviewed
Kordjamshidi, P., Lee, J. H., Bhatt, M., Sioutis, M. & Long, Z. (Eds.). (2024). Proceedings of the 3rd International Workshop on Spatio-Temporal Reasoning and Learning (STRL 2024) co-located with the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024), Jeju island, South Korea, August 5, 2024. Paper presented at 3rd International Workshop on Spatio-Temporal Reasoning and Learning (STRL 2024) co-located with the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024), Jeju island, South Korea, August 5, 2024. Technical University of Aachen, 3827
Proceedings of the 3rd International Workshop on Spatio-Temporal Reasoning and Learning (STRL 2024) co-located with the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024), Jeju island, South Korea, August 5, 2024
2024 (English). Proceedings (editorship) (Other academic)
Place, publisher, year, edition, pages
Technical University of Aachen, 2024
Series
CEUR Workshop Proceedings, E-ISSN 1613-0073 ; 3827
National subject category
Computer Science; Human-Computer Interaction (Interaction Design); Computer Graphics and Computer Vision
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-117536 (URN)
Conference
3rd International Workshop on Spatio-Temporal Reasoning and Learning (STRL 2024) co-located with the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024), Jeju island, South Korea, August 5, 2024
Research funder
Vetenskapsrådet
Note
Scopus ID: 2-s2.0-85210227088
Available from: 2024-12-03. Created: 2024-12-03. Last updated: 2025-08-11. Bibliographically reviewed
Kondyli, V. & Bhatt, M. (2024). Temporal proximity and events duration affects change detection during driving. In: 9th International Conference on Driver Distraction and Inattention. Paper presented at 9th International Conference on Driver Distraction and Inattention (DDI 2024), Ann Arbor, Michigan, USA, October 22-24, 2024.
Temporal proximity and events duration affects change detection during driving
2024 (English). In: 9th International Conference on Driver Distraction and Inattention, 2024. Conference paper, Published paper (Refereed)
Abstract [en]

Failure to detect changes in the surrounding environment while driving is attributed, among other factors, to the number of incidents and their temporal density during the task, as this is directly related to an increase in cognitive load. Here, we investigate the role of temporal proximity between events on detection performance during a naturalistic driving task in a virtual simulation. Participants performed a change detection task while driving, in which we systematically manipulated the time difference between changes, and we analysed the effect on the driver's detection performance. Our research demonstrates that events occurring simultaneously deteriorate detection performance (in terms of detection rate and detection time), while performance improves as the temporal gap increases. Moreover, the outcomes suggest that the duration of an event affects the detection of the following one, with better performance recorded for very short or very long events and worse for medium-duration events (5-10 sec). These outcomes are crucial for driving assistance and training, considering the detection of safety-critical events or efficient, timely attentional disengagement from irrelevant targets.

Keywords
visual attention, multimodality, naturalistic studies, embodied interactions, driving, cognitive technologies
National subject category
Computer Science; Psychology
Research subject
Computer Science; Psychology
Identifiers
urn:nbn:se:oru:diva-116529 (URN)
Conference
9th International Conference on Driver Distraction and Inattention (DDI 2024), Ann Arbor, Michigan, USA, October 22-24, 2024
Research funder
Vetenskapsrådet
Available from: 2024-10-03. Created: 2024-10-03. Last updated: 2024-10-04. Bibliographically reviewed
Bhatt, M. & Suchan, J. (2023). Artificial Visual Intelligence: Perceptual Commonsense for Human-Centred Cognitive Technologies. In: Chetouani, Mohamed; Dignum, Virginia; Lukowicz, Paul; Sierra, Carles (Ed.), Human-Centered Artificial Intelligence: Advanced Lectures (pp. 216-242). Springer
Artificial Visual Intelligence: Perceptual Commonsense for Human-Centred Cognitive Technologies
2023 (English). In: Human-Centered Artificial Intelligence: Advanced Lectures / [ed] Chetouani, Mohamed; Dignum, Virginia; Lukowicz, Paul; Sierra, Carles, Springer, 2023, pp. 216-242. Book chapter, part of anthology (Refereed)
Abstract [en]

We address computational cognitive vision and perception at the interface of language, logic, cognition, and artificial intelligence. The chapter presents general methods for the processing and semantic interpretation of dynamic visuospatial imagery, with a particular emphasis on the ability to abstract, learn, and reason with cognitively rooted structured characterisations of commonsense knowledge pertaining to space and motion. The presented work constitutes a systematic model and methodology integrating diverse, multi-faceted AI methods pertaining to Knowledge Representation and Reasoning, Computer Vision, and Machine Learning towards realising practical, human-centred artificial visual intelligence.

Place, publisher, year, edition, pages
Springer, 2023
Series
Lecture Notes in Computer Science (LNCS), ISSN 0302-9743, E-ISSN 1611-3349 ; 13500
Keywords
Cognitive vision, Knowledge representation and reasoning, Commonsense reasoning, Deep semantics, Declarative spatial reasoning, Computer vision, Computational models of narrative, Human-centred computing and design, Spatial cognition and AI, Visual perception, Multimodal interaction, Autonomous driving, HRI, Media, Visual art
National subject category
Computer Science; Human-Computer Interaction (Interaction Design); Psychology
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-105389 (URN), 10.1007/978-3-031-24349-3_12 (DOI), 9783031243486 (ISBN), 9783031243493 (ISBN)
Research funder
Vetenskapsrådet
Available from: 2023-04-06. Created: 2023-04-06. Last updated: 2023-04-11. Bibliographically reviewed
Kondyli, V., Daniel, L. & Bhatt, M. (2023). Drivers avoid attentional elaboration under safety-critical situations and complex environments. In: 17th European Workshop on Imagery and Cognition. Paper presented at 17th European Workshop on Imagery and Cognition, Anglia Ruskin University, Cambridge, UK, June 20-22, 2023 (pp. 18-18).
Drivers avoid attentional elaboration under safety-critical situations and complex environments
2023 (English). In: 17th European Workshop on Imagery and Cognition, 2023, pp. 18-18. Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

In everyday activities where continuous visual awareness is critical, such as driving, several cognitive processes pertaining to visual attention are of the essence, for instance, change detection, anticipation, and monitoring. Research suggests that environmental load and task difficulty contribute to failures in visual perception that can be essential for detecting and reacting to safety-critical incidents. However, it is unclear how gaze patterns and attentional strategies are compromised by environmental complexity in naturalistic driving. In a change detection task during everyday simulated driving, we investigate inattention blindness in relation to environmental complexity and the kind of interaction incidents drivers address. We systematically analyse and evaluate safety-critical situations from real-world driving videos and replicate a number of them in a virtual driving experience. Participants (N = 80), aged 23-45 years, drove along three levels of environmental complexity (low-medium-high) and various incidents of interaction with roadside users (e.g., pedestrians, cyclists, pedestrians in a wheelchair), categorised as safety-critical or not. Participants detected changes in the behaviour of road users and in object properties. We collect multimodal data including eye-tracking, egocentric view videos, movement trace, head movements, driving behaviour, and detection button presses. Results suggest that gaze behaviour (number and duration of fixations, 1st fixation on AOI) is affected negatively by an increase in environmental complexity, but the effect is moderate for safety-critical incidents. Moreover, anticipatory and monitoring attention was crucial for detecting critical changes in behaviour and reacting in time.
However, in highly complex environments participants effectively limited attentional monitoring and lingering for non-critical changes, and they also controlled "look-but-fail-to-see" errors, especially while addressing a safety-related event. We conclude that drivers change attentional strategies, avoiding non-productive forms of attentional elaboration (anticipatory and monitoring) and efficiently disengaging from targets when task difficulty is high. We discuss the implications for driving education and research-driven development of autonomous driving.

Keywords
Visual perception, Change blindness, Visuospatial complexity, Attentional strategies, Naturalistic observation, Everyday driving
National subject category
Psychology; Computer Science; Transport Systems and Logistics
Research subject
Psychology; Computer Science
Identifiers
urn:nbn:se:oru:diva-108117 (URN)
Conference
17th European Workshop on Imagery and Cognition, Anglia Ruskin University, Cambridge, UK, June 20-22, 2023
Project
Counterfactual Commonsense
Research funder
Örebro universitet; EU, Horizon 2020, 754285; Vetenskapsrådet
Available from: 2023-09-06. Created: 2023-09-06. Last updated: 2023-09-07. Bibliographically reviewed
Kondyli, V., Bhatt, M., Levin, D. & Suchan, J. (2023). How do drivers mitigate the effects of naturalistic visual complexity? On attentional strategies and their implications under a change blindness protocol. Cognitive Research: Principles and Implications, 8(1), Article ID 54.
How do drivers mitigate the effects of naturalistic visual complexity? On attentional strategies and their implications under a change blindness protocol
2023 (English). In: Cognitive Research: Principles and Implications, E-ISSN 2365-7464, Vol. 8, no. 1, article ID 54. Journal article (Refereed) Published
Abstract [en]

How do the limits of high-level visual processing affect human performance in naturalistic, dynamic settings of (multimodal) interaction where observers can draw on experience to strategically adapt attention to familiar forms of complexity? In this backdrop, we investigate change detection in a driving context to study attentional allocation aimed at overcoming environmental complexity and temporal load. Results indicate that visuospatial complexity substantially increases change blindness but also that participants effectively respond to this load by increasing their focus on safety-relevant events, by adjusting their driving, and by avoiding non-productive forms of attentional elaboration, thereby also controlling “looked-but-failed-to-see” errors. Furthermore, analyses of gaze patterns reveal that drivers occasionally, but effectively, limit attentional monitoring and lingering for irrelevant changes. Overall, the experimental outcomes reveal how drivers exhibit effective attentional compensation in highly complex situations. Our findings uncover implications for driving education and development of driving skill-testing methods, as well as for human-factors guided development of AI-based driving assistance systems.

Place, publisher, year, edition, pages
Springer, 2023
Keywords
Visual perception, Change blindness, Visuospatial complexity, Attentional strategies, Naturalistic observation, Everyday driving
National subject category
Psychology; Computer Science; Transport Systems and Logistics
Research subject
Psychology; Computer Science
Identifiers
urn:nbn:se:oru:diva-107517 (URN), 10.1186/s41235-023-00501-1 (DOI), 001044388200001 (), 37556047 (PubMedID), 2-s2.0-85167370133 (Scopus ID)
Project
Counterfactual Commonsense
Research funder
Örebro universitet; Vetenskapsrådet; EU, Horizon 2020, 754285
Available from: 2023-08-10. Created: 2023-08-10. Last updated: 2023-09-27. Bibliographically reviewed
Nair, V., Hemeren, P., Vignolo, A., Noceti, N., Nicora, E., Sciutti, A., . . . Sandini, G. (2023). Kinematic primitives in action similarity judgments: A human-centered computational model. IEEE Transactions on Cognitive and Developmental Systems, 15(4), 1981-1992
Kinematic primitives in action similarity judgments: A human-centered computational model
2023 (English). In: IEEE Transactions on Cognitive and Developmental Systems, ISSN 2379-8920, E-ISSN 2379-8939, Vol. 15, no. 4, pp. 1981-1992. Journal article (Refereed) Published
Abstract [en]

This article investigates the role that kinematic features play in human action similarity judgments. The results of three experiments with human participants are compared with a computational model that solves the same task. The chosen model has its roots in developmental robotics and performs action classification based on learned kinematic primitives. The comparative experimental results show that both the model and human participants can reliably identify whether two actions are the same or not. Specifically, most of the given actions could be judged for similarity based on very limited information from a single feature domain (velocity or spatial). Both velocity and spatial features were, however, necessary to reach human-level performance on the evaluated actions. The experimental results also show that, in an action identification task, human participants clearly relied on kinematic information rather than on action semantics. Overall, both the model and humans are highly accurate in an action similarity task based on kinematic-level features, which can provide an essential basis for classifying human actions.

Place, publisher, year, edition, pages
IEEE, 2023
Keywords
Action similarity, action matching, biological motion, optical flow, point light display, kinematic primitives, computational model, comparative study
National subject category
Psychology; Computer Science; Human-Computer Interaction (Interaction Design); Computer Graphics and Computer Vision
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-103866 (URN), 10.1109/TCDS.2023.3240302 (DOI), 001126639000035 (), 2-s2.0-85148457281 (Scopus ID)
Research funder
KK-stiftelsen, 2014022; EU, Horizon 2020, 804388
Note
Funding agency: AFOSR FA8655-20-1-7035
Available from: 2023-01-31. Created: 2023-01-31. Last updated: 2025-02-01. Bibliographically reviewed
Lloret, E., Barreiro, A., Bhatt, M., Bugarín-Diz, A., Modoni, G. E., Silberztein, M., . . . Erdem, A. (2023). Multi3Generation: Multitask, Multilingual, and Multimodal Language Generation. Open Research Europe, 3, Article ID 176.
Multi3Generation: Multitask, Multilingual, and Multimodal Language Generation
2023 (English). In: Open Research Europe, E-ISSN 2732-5121, Vol. 3, article ID 176. Journal article (Refereed) Published
Abstract [en]

The purpose of this article is to highlight the critical importance of language generation today. In particular, language generation is explored from three aspects: multimodality, multilinguality, and multitask, all of which play a crucial role for the Natural Language Generation (NLG) community. We present the activities conducted within the Multi3Generation COST Action (CA18231), as well as current trends and future perspectives for multitask, multilingual, and multimodal language generation.

Place, publisher, year, edition, pages
European Commission, 2023
Keywords
Language Technologies, Multi-task, Multi3Generation, Multilinguality, Multimodality, Natural Language Generation
National subject category
Natural Language Processing and Computational Linguistics
Identifiers
urn:nbn:se:oru:diva-110610 (URN), 10.12688/openreseurope.16307.2 (DOI), 38131050 (PubMedID), 2-s2.0-85180942557 (Scopus ID)
Research funder
Örebro universitet; Vetenskapsrådet
Note

This project has received funding from the European Cooperation in Science and Technology (COST) under the agreement no. CA18231 - Multi3Generation: Multitask, Multilingual, Multimodal Language Generation. In addition, this research work is partially conducted within the R&D projects “CORTEX: Conscious Text Generation” (PID2021-123956OB-I00) partially funded by MCIN/ AEI/10.13039/501100011033/ and by “ERDF A way of making Europe” and “Enhancing the modernization public sector organizations by deploying Natural Language Processing to make their digital content CLEARER to those with cognitive disabilities” (TED2021-130707B-I00), funded by MCIN/AEI/10.13039/501100011033 and “European Union NextGenerationEU/PRTR”, and by the Generalitat Valenciana through the project “NL4DISMIS: Natural Language Technologies for dealing with dis- and misinformation with grant reference (CIPROM/2021/21)”, Project Counterfactual Commonsense (Örebro University), funded by the Swedish Research Council (VR).

Available from: 2024-01-09. Created: 2024-01-09. Last updated: 2025-02-07. Bibliographically reviewed