oru.se Publications
Bhatt, Mehul, Professor (ORCID iD: orcid.org/0000-0002-6290-5492)
Publications (10 of 84)
Bhatt, M. (2018). Cognitive media studies: Potentials for spatial cognition and AI research. Cognitive Processing, 19(Suppl. 1), S6-S6
Cognitive media studies: Potentials for spatial cognition and AI research
2018 (English). In: Cognitive Processing, ISSN 1612-4782, E-ISSN 1612-4790, Vol. 19, no. Suppl. 1, p. S6-S6. Article in journal, Meeting abstract (Other academic). Published.
Abstract [en]

Cognitive media studies has developed as an area of research at the interface of disciplines as diverse as aesthetics, psychology, neuroscience, film theory, and cognitive science. In this context, the focus of this talk is on the foundational significance of artificial intelligence and visuo-spatial cognition and computation for the design of integrated analytical–empirical methods for the (multi-modal) analysis of human behaviour data vis-a-vis a range of digital visuo-auditory narrative media (e.g., narrative film). The presentation focusses on the methodological foundations and assistive technologies for systematic formalization and empirical analyses aimed at, for instance, the generation of evidence, establishing and characterizing correlates between principles for the synthesis of the moving image (e.g., from a cinematographic viewpoint), and its perceptual recipient effects and influence on observers.

Against the backdrop of a range of completed and ongoing experiments, we emphasize the core results on the semantic interpretation of human behaviour vis-a-vis narrative film and its visuo-auditory reception. We demonstrate the manner in which AI-based models for machine coding of narrative, and relational inference and learning, serve as a basis for externalizing explicit and inferred knowledge about embodied visuo-auditory reception, e.g., using modalities such as diagrammatic representations, natural language, and complex (dynamic) data visualizations.

Demonstration: The presentation will particularly showcase methods and tools developed to perform perceptual narrativisation or sensemaking with multi-modal, dynamic human-behaviour data (combining visuo-spatial imagery such as film/video, eye-tracking, head-tracking during a perception task) for a chosen set of experimental material based on existing films, as well as lab-developed experimental content.
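
As a purely illustrative aside (not the tooling referred to above), the short Python sketch below shows one way gaze samples from an eye-tracker might be aggregated against annotated areas of interest (AOIs) on film frames; the AOI names, frame ranges, and the sample format are assumptions made for the example.

```python
# Hypothetical sketch: aggregating eye-tracking samples over annotated
# areas of interest (AOIs) in a film clip. Data layout is assumed.
from collections import Counter

# Each AOI is a named screen rectangle valid for a frame range (assumed format).
AOIS = [
    {"name": "protagonist_face", "frames": (0, 120), "rect": (600, 100, 900, 400)},
    {"name": "doorway",          "frames": (60, 240), "rect": (100, 50, 300, 500)},
]

def aoi_at(frame, x, y):
    """Return the name of the first AOI containing gaze point (x, y) at `frame`."""
    for aoi in AOIS:
        f0, f1 = aoi["frames"]
        x0, y0, x1, y1 = aoi["rect"]
        if f0 <= frame <= f1 and x0 <= x <= x1 and y0 <= y <= y1:
            return aoi["name"]
    return "background"

def dwell_profile(gaze_samples):
    """Count gaze samples per AOI; samples are (frame, x, y) tuples."""
    return Counter(aoi_at(f, x, y) for f, x, y in gaze_samples)

# Example: three synthetic gaze samples.
print(dwell_profile([(10, 700, 200), (90, 150, 300), (90, 2000, 50)]))
```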

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2018
National Category
Psychology
Identifiers
urn:nbn:se:oru:diva-68757 (URN), 10.1007/s10339-018-0884-3 (DOI), 000442849900015
Available from: 2018-09-10. Created: 2018-09-10. Last updated: 2018-09-10. Bibliographically approved.
Kondyli, V. & Bhatt, M. (2018). Decision points in architectural space: How they affect users' visuo-locomotive experience during wayfinding. Cognitive Processing, 19(Suppl. 1), S43-S43
Decision points in architectural space: How they affect users' visuo-locomotive experience during wayfinding
2018 (English). In: Cognitive Processing, ISSN 1612-4782, E-ISSN 1612-4790, Vol. 19, no. Suppl. 1, p. S43-S43. Article in journal, Meeting abstract (Other academic). Published.
Abstract [en]

Decision points along a wayfinding path include not only intersections but also changes in geometry and direction, mergings of paths, and transitions. Carpman and Simmon (1986) pinpoint the need for environmental cues at these points, where users' confusion arises. In this study, we investigate the morphology and the manifest cues of decision points in relation to the visuo-locomotive behaviour of users recorded during a wayfinding case study conducted in two healthcare buildings at the Parkland Hospital (Dallas).

We collect and analyse the embodied visuo-locomotive experience of 25 participants, using eye-tracking, external cameras, behavioural mapping, questionnaires, interviews, and orientation tasks. In our multi-modal qualitative analysis, founded in Spatial Reasoning, Cognitive Vision, and Environmental Psychology, we focus on the aspects of visual perception, decision making, orientation, and spatial knowledge acquisition. The comparison of users' transitions across eight decision points involves correlations between occurrences of confusion-related events, detection and categorisation of manifest cues, navigation performance, as well as visual attention analysis in relation to the available spatial features.

Primary results suggest that (1) stopping and looking-around behaviour emerges mostly at decision points; (2) behaviour indicating confusion is mostly coded at narrow and enclosed decision points; (3) transitional spaces intensify visual search; (4) visibility ahead of time and visual disruptions affect visuo-locomotive behaviour; and (5) detection of manifest cues is affected by the morphology of decision points. The correlations between behavioural and morphological data, encoded in a conceptual language, can serve as a baseline for computationally driven behavioural analysis.
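
To make the idea of encoding behavioural observations against decision-point morphology concrete, here is a minimal, hypothetical Python sketch; the morphology categories, decision-point labels, and counts are invented for illustration and are not the study's data.

```python
# Hypothetical sketch: relating confusion-related events to decision-point
# morphology. Categories and counts are invented for illustration.
from collections import defaultdict

# Assumed encoding: (decision_point, morphology, confusion_events).
observations = [
    ("DP1", "narrow_enclosed", 7),
    ("DP2", "open_atrium",     1),
    ("DP3", "narrow_enclosed", 5),
    ("DP4", "transitional",    3),
]

def events_by_morphology(rows):
    """Sum confusion-related events per qualitative morphology class."""
    totals = defaultdict(int)
    for _dp, morphology, events in rows:
        totals[morphology] += events
    return dict(totals)

print(events_by_morphology(observations))
# e.g. {'narrow_enclosed': 12, 'open_atrium': 1, 'transitional': 3}
```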

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2018
National Category
Psychology
Identifiers
urn:nbn:se:oru:diva-68758 (URN), 10.1007/s10339-018-0884-3 (DOI), 000442849900135
Available from: 2018-09-10. Created: 2018-09-10. Last updated: 2018-09-10. Bibliographically approved.
Bhatt, M. (2018). Embodied architecture design: On people-centered design of visuo-locomotive cognitive experiences. Cognitive Processing, 19(Suppl. 1), S5-S5
Embodied architecture design: On people-centered design of visuo-locomotive cognitive experiences
2018 (English). In: Cognitive Processing, ISSN 1612-4782, E-ISSN 1612-4790, Vol. 19, no. Suppl. 1, p. S5-S5. Article in journal, Meeting abstract (Other academic). Published.
Abstract [en]

This presentation focusses on the analysis and design of human-centered, embodied, cognitive user experiences from the perspectives of spatial cognition and computation, artificial intelligence, and human-computer interaction research. Focusing on large-scale built-up spaces (in particular hospitals), this presentation will particularly address:

‘how can human-centered cognitive modalities of visuo-locomotive perception constitute the foundational building blocks of design education, discourse, systems, and the professional practice of spatial design for architecture’.

The presentation will emphasize evidence-based multimodality studies from the viewpoints of visuo-locomotive (i.e., pertaining to vision, movement, and wayfinding) cognitive experiences. Modalities being investigated include: (1) visual attention (by eye-tracking), gesture, language, facial expressions; (2) human expert guided event segmentation (e.g., coming from behavioral or environmental psychologists, designers, annotators); (3) deep analysis based on dialogic components, think-aloud protocols. We demonstrate (1–3) in the context of a large-scale study conducted at the Old and New Parkland Hospitals in Dallas, Texas.

This research (and symposium) calls for a tightly integrated approach combining analytical methods (rooted in AI and computational cognition) and empirical methods (rooted in psychology and perception studies) for developing human-centered architectural design technologies, and technology-mediated (architectural) design synthesis.

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2018
National Category
Psychology
Identifiers
urn:nbn:se:oru:diva-68756 (URN), 10.1007/s10339-018-0884-3 (DOI), 000442849900011
Available from: 2018-09-10. Created: 2018-09-10. Last updated: 2018-09-10. Bibliographically approved.
Bhatt, M. (2018). Minds. Movement. Moving image. Cognitive Processing, 19(Suppl. 1), S5-S5
Minds. Movement. Moving image
2018 (English). In: Cognitive Processing, ISSN 1612-4782, E-ISSN 1612-4790, Vol. 19, no. Suppl. 1, p. S5-S5. Article in journal, Meeting abstract (Other academic). Published.
Abstract [en]

This symposium—conducted in two parts—explores the confluence of empirically-based qualitative research in the cognitive and psychological sciences (focusing on visual and spatial cognition) with computationally-driven analytical methods (rooted in artificial intelligence) in the service of communications, media, design, and human behavioural studies. With a focus on architecture and visuo-auditory media design, the twin-symposia will demonstrate recent results and explore the synergy of research methods for the study of human behaviour in the chosen (design) contexts of socio-cultural, and socio-technological significance.

The symposium brings together experts and addresses methods and perspectives from:

•  Visuo-Spatial Cognition and Computation

•  Artificial Intelligence, Cognitive Systems

•  Multimodality and Interaction

•  Cognitive Science and Psychology

•  Neuroscience

•  Design Cognition and Computation

•  Communications and Media Studies

•  Architecture, Built Environment

•  Design Studies (focus on architecture and visuo-auditory media)

•  Evidence Based Design

The symposium particularly emphasises the role of multimodality and mediated interaction for the analysis and design of human-centered, embodied, cognitive user experiences in everyday life and work. Here, the focus is on multimodality studies aimed at the semantic interpretation of human behaviour, and the empirically-driven synthesis of embodied interactive experiences in real world settings. In focus are narrative media design, architecture and built environment design, product design, cognitive media studies (film, animation, VR, sound and music design), and user interaction studies. In these contexts, the symposium emphasizes evidence-based multimodality studies from the viewpoints of visual (e.g., attention and recipient effects), visuo-locomotive (e.g., movement, wayfinding), and visuo-auditory (e.g., narrative media) cognitive experiences. Modalities being investigated include, but are not limited to:

•  visual attention (by eye-tracking), gesture, speech, language, facial expressions, tactile interactions, olfaction, biosignals;

•  human expert guided event segmentation (e.g., coming from behavioral or environmental psychologists, designers, annotators, crowd-sensing);

•  deep analysis based on dialogic components, think-aloud protocols.

The scientific agenda of the twin-symposia also emphasizes the multi-modality of the embodied visuo-spatial thinking involved in "problem-solving" for the design of objects, artefacts, and interactive people-experiences emanating therefrom. Universality and inclusion in "design thinking" are an overarching focus in all design contexts relevant to the symposium; here, the implications of multimodality studies for inclusive design, e.g., creation of presentations of the same content in different modalities, are also of interest. The symposium provides a platform to discuss the development of next-generation embodied interaction design systems, practices, and (human-centered) assistive frameworks and technologies encompassing the multi-faceted nature of embodied design conception and synthesis. Individual contributions/talks within the two symposia address the themes under consideration from formal, computational, cognitive, design, engineering, empirical, and philosophical perspectives.

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2018
National Category
Psychology
Identifiers
urn:nbn:se:oru:diva-68755 (URN), 10.1007/s10339-018-0884-3 (DOI), 000442849900010
Available from: 2018-09-10. Created: 2018-09-10. Last updated: 2018-09-10. Bibliographically approved.
Lieto, A., Bhatt, M., Oltramari, A. & Vernon, D. (2018). The role of cognitive architectures in general artificial intelligence. Cognitive Systems Research, 48, 1-3
The role of cognitive architectures in general artificial intelligence
2018 (English). In: Cognitive Systems Research, ISSN 2214-4366, E-ISSN 1389-0417, Vol. 48, p. 1-3. Article in journal (Refereed). Published.
Abstract [en]

The term "Cognitive Architectures" indicates both abstract models of cognition, in natural and artificial agents, and the software instantiations of such models which are then employed in the field of Artificial Intelligence (AI). The main role of Cognitive Architectures in AI is that one of enabling the realization of artificial systems able to exhibit intelligent behavior in a general setting through a detailed analogy with the constitutive and developmental functioning and mechanisms underlying human cognition. We provide a brief overview of the status quo and the potential role that Cognitive Architectures may serve in the fields of Computational Cognitive Science and Artificial Intelligence (AI) research.

Place, publisher, year, edition, pages
Elsevier, 2018
Keywords
Cognitive architectures; Artificial intelligence; Autonomous systems; General artificial intelligence
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-64438 (URN), 10.1016/j.cogsys.2017.08.003 (DOI), 000419552300001
Available from: 2018-01-20. Created: 2018-01-20. Last updated: 2018-08-20. Bibliographically approved.
Suchan, J., Bhatt, M., Wałęga, P. & Schultz, C. (2018). Visual Explanation by High-Level Abduction: On Answer-Set Programming Driven Reasoning about Moving Objects. In: AAAI 2018: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence. Paper presented at Thirty-Second AAAI Conference on Artificial Intelligence (AAAI 2018), New Orleans, USA, February 2-7, 2018.
Visual Explanation by High-Level Abduction: On Answer-Set Programming Driven Reasoning about Moving Objects
2018 (English). In: AAAI 2018: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, 2018. Conference paper, Published paper (Refereed).
Abstract [en]

We propose a hybrid architecture for systematically computing robust visual explanation(s) encompassing hypothesis formation, belief revision, and default reasoning with video data. The architecture consists of two tightly integrated synergistic components: (1) (functional) answer set programming based abductive reasoning with space-time tracklets as native entities; and (2) a visual processing pipeline for detection-based object tracking and motion analysis.

We present the formal framework, its general implementation as a (declarative) method in answer set programming, and an example application and evaluation based on two diverse video datasets: the MOTChallenge benchmark developed by the vision community, and a recently developed Movie Dataset.
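
As a rough illustration of the style of encoding described above (not the paper's actual ASP program), the following Python sketch uses the clingo API to abduce an explanation for a temporal gap between two tracklets; the predicates, rules, toy data, and preference are assumptions made for the example.

```python
# Hypothetical sketch of ASP-based abduction over tracklets using the clingo
# Python API (pip install clingo). The encoding is invented for illustration.
import clingo

PROGRAM = """
% Two observed tracklets with start/end frames (toy data).
tracklet(t1, 0, 40).   tracklet(t2, 55, 120).

% Abducibles: a temporal gap between two tracklets is explained either by the
% same object being occluded, or by distinct objects leaving and entering.
1 { explain(T1, T2, occlusion) ; explain(T1, T2, distinct) } 1 :-
    tracklet(T1, _, E1), tracklet(T2, S2, _), T1 != T2, E1 < S2.

% Prefer occlusion explanations when the temporal gap is short (< 30 frames).
short_gap(T1, T2) :- tracklet(T1, _, E1), tracklet(T2, S2, _), S2 - E1 < 30.
:~ explain(T1, T2, distinct), short_gap(T1, T2). [1@1, T1, T2]

#show explain/3.
"""

ctl = clingo.Control(["0"])          # "0": enumerate answer sets
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
# clingo prints improving candidate models until the optimum is reached.
ctl.solve(on_model=lambda m: print("Answer set:", m))
```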

Keywords
artificial intelligence, cognitive vision, knowledge representation and reasoning, computer vision, robotics
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-63758 (URN)
Conference
Thirty-Second AAAI Conference on Artificial Intelligence (AAAI 2018), New Orleans, USA, February 2-7, 2018
Available from: 2018-01-02. Created: 2018-01-02. Last updated: 2018-08-16. Bibliographically approved.
Lieto, A., Bhatt, M., Oltramari, A. & Vernon, D. (Eds.). (2017). Artificial Intelligence and Cognition 2016: Proceedings of the 4th International Workshop on Artificial Intelligence and Cognition co-located with the Joint Multi-Conference on Human-Level Artificial Intelligence (HLAI 2016), New York City, NY, USA, July 16-17, 2016. Paper presented at 4th International Workshop on Artificial Intelligence and Cognition (AIC 2016), New York, USA, July 16-17, 2016. Technical University of Aachen, 1895
Artificial Intelligence and Cognition 2016: Proceedings of the 4th International Workshop on Artificial Intelligence and Cognition co-located with the Joint Multi-Conference on Human-Level Artificial Intelligence (HLAI 2016), New York City, NY, USA, July 16-17, 2016
2017 (English). Conference proceedings (editor) (Other academic).
Place, publisher, year, edition, pages
Technical University of Aachen, 2017
Series
CEUR Workshop Proceedings, E-ISSN 1613-0073 ; 1895
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-63589 (URN)
Conference
4th International Workshop on Artificial Intelligence and Cognition (AIC 2016), New York, USA, July 16-17, 2016
Available from: 2017-12-21. Created: 2017-12-21. Last updated: 2018-08-30. Bibliographically approved.
Schultz, C., Bhatt, M. & Borrmann, A. (2017). Bridging qualitative spatial constraints and feature-based parametric modelling: Expressing visibility and movement constraints. Advanced Engineering Informatics, 31, 2-17
Bridging qualitative spatial constraints and feature-based parametric modelling: Expressing visibility and movement constraints
2017 (English). In: Advanced Engineering Informatics, ISSN 1474-0346, E-ISSN 1873-5320, Vol. 31, p. 2-17. Article in journal (Refereed). Published.
Abstract [en]

We present a concept for integrating state-of-the-art methods in geometric and qualitative spatial representation and reasoning with feature-based parametric modelling systems. Using a case study involving a combination of topological, visibility, and movement constraints, we demonstrate the manner in which a parametric model may be constrained by the spatial aspects of conceptual design specifications and higher-level semantic design requirements. We demonstrate the proposed methodology by applying it to architectural floor plan layout design, where a number of spaces with well-defined functionalities have to be arranged such that particular functional design constraints are maintained. The case study is developed through an integration of the declarative spatial reasoning system CLP(QS) (www.spatial-reasoning.com) with the parametric CAD system FreeCAD.
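
As a minimal sketch of what a qualitative spatial constraint over a floor plan can look like (independent of the CLP(QS)/FreeCAD integration itself), the Python fragment below checks a toy adjacency requirement between axis-aligned rooms; the room names, coordinates, and constraint set are invented for illustration.

```python
# Hypothetical sketch: checking a simple qualitative constraint (room adjacency)
# over an axis-aligned floor plan layout. Rooms and coordinates are assumed.

Rect = tuple  # (xmin, ymin, xmax, ymax)

rooms = {
    "reception": (0, 0, 6, 4),
    "corridor":  (6, 0, 8, 10),
    "ward":      (8, 0, 14, 6),
}

def adjacent(a: Rect, b: Rect) -> bool:
    """True if the two rectangles share a boundary segment (externally connected)."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    share_x = min(ax1, bx1) - max(ax0, bx0) > 0   # overlapping x-extent
    share_y = min(ay1, by1) - max(ay0, by0) > 0   # overlapping y-extent
    touch_x = ax1 == bx0 or bx1 == ax0            # touching along a vertical edge
    touch_y = ay1 == by0 or by1 == ay0            # touching along a horizontal edge
    return (touch_x and share_y) or (touch_y and share_x)

# Toy design requirement: reception adjacent to corridor, corridor adjacent to ward.
constraints = [("reception", "corridor"), ("corridor", "ward")]
for r1, r2 in constraints:
    print(r1, "adjacent to", r2, ":", adjacent(rooms[r1], rooms[r2]))
```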

Place, publisher, year, edition, pages
Elsevier, 2017
Keywords
Feature-based parametric modelling; Geometric constraint solving; Declarative spatial reasoning; Knowledge representation and reasoning
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-63583 (URN), 10.1016/j.aei.2015.10.004 (DOI), 000392044800002, 2-s2.0-84948799892 (Scopus ID)
Note

Funding Agencies:

German Research Foundation (DFG) via the Spatial Cognition Research Center SFB/TR 8

German Research Foundation (DFG) under the grant for the SketchMapia project, SCHW 1372/7-1

Available from: 2017-12-21. Created: 2017-12-21. Last updated: 2018-08-13. Bibliographically approved.
Bhatt, M., Cutting, J., Levin, D. & Lewis, C. (2017). Cognition, Interaction, Design: Discussions as Part of the Codesign Roundtable 2017. Künstliche Intelligenz, 31(4), 363-371
Cognition, Interaction, Design: Discussions as Part of the Codesign Roundtable 2017
2017 (English). In: Künstliche Intelligenz, ISSN 0933-1875, E-ISSN 1610-1987, Vol. 31, no. 4, p. 363-371. Article in journal (Refereed). Published.
Abstract [en]

This transcript documents select parts of discussions on the confluence of cognition, interaction, design, and human behaviour studies. The interview and related events were held as part of the CoDesign 2017 Roundtable (Bhatt in CoDesign 2017—The Bremen Summer of Cognition and Design/CoDesign Roundtable. University of Bremen, Bremen, 2017) at the University of Bremen (Germany) in June 2017. The Q/A sessions were moderated by Mehul Bhatt (University of Bremen, Germany, and Örebro University, Sweden) and Daniel Levin (Vanderbilt University, USA). Daniel Levin served in a dual role: as co-moderator of the discussion, as well as interviewee. The transcript is published as part of a KI Journal special issue on "Semantic Interpretation of Multi-Modal Human Behaviour Data" (Bhatt and Kersting in Special Issue on: Semantic Interpretation of Multimodal Human Behaviour Data, Artif Intell, 2017).

Place, publisher, year, edition, pages
Springer, 2017
Keywords
Artificial intelligence; Cognitive science; Psychology; Design cognition and computation; Engineering; Human behaviour studies
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-63585 (URN), 10.1007/s13218-017-0512-x (DOI), 000424411300008
Available from: 2017-12-21. Created: 2017-12-21. Last updated: 2018-02-22. Bibliographically approved.
Suchan, J. & Bhatt, M. (2017). Commonsense Scene Semantics for Cognitive Robotics: Towards Grounding Embodied Visuo-Locomotive Interactions. In: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW): . Paper presented at ICCV 2017 Workshop - Vision in Practice on Autonomous Robots (ViPAR), 16th IEEE International Conference on Computer Vision (ICCV), Venice, Italy, October 22-29, 2017 (pp. 742-750). Institute of Electrical and Electronics Engineers (IEEE)
Commonsense Scene Semantics for Cognitive Robotics: Towards Grounding Embodied Visuo-Locomotive Interactions
2017 (English). In: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 742-750. Conference paper, Published paper (Refereed).
Abstract [en]

We present a commonsense, qualitative model for the semantic grounding of embodied visuo-spatial and locomotive interactions. The key contribution is an integrative methodology combining low-level visual processing with high-level, human-centred representations of space and motion rooted in artificial intelligence. We demonstrate practical applicability with examples involving object interactions and indoor movement.
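
For illustration only (not the paper's model), the following Python sketch derives a coarse qualitative motion relation between two tracked objects from per-frame centroids; the track data, relation labels, and threshold are assumptions made for the example.

```python
# Hypothetical sketch: deriving a qualitative motion relation ("approaching",
# "receding", "static") between two tracked objects from per-frame centroids.
import math

def centroid_distance(p, q):
    """Euclidean distance between two 2D centroids."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def qualitative_motion(track_a, track_b, eps=1.0):
    """Label each frame transition by how the inter-object distance changes."""
    labels = []
    for t in range(1, min(len(track_a), len(track_b))):
        d_prev = centroid_distance(track_a[t - 1], track_b[t - 1])
        d_curr = centroid_distance(track_a[t], track_b[t])
        if d_curr < d_prev - eps:
            labels.append("approaching")
        elif d_curr > d_prev + eps:
            labels.append("receding")
        else:
            labels.append("static")
    return labels

# Toy tracks: a person walking toward a stationary table.
person = [(0, 0), (2, 0), (4, 0), (6, 0)]
table  = [(10, 0)] * 4
print(qualitative_motion(person, table))  # ['approaching', 'approaching', 'approaching']
```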

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2017
Series
IEEE International Conference on Computer Vision Workshops, ISSN 2473-9936, E-ISSN 2473-9944
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-63590 (URN), 10.1109/ICCVW.2017.93 (DOI), 000425239600085, 978-1-5386-1034-3 (ISBN), 978-1-5386-1035-0 (ISBN)
Conference
ICCV 2017 Workshop - Vision in Practice on Autonomous Robots (ViPAR), 16th IEEE International Conference on Computer Vision (ICCV), Venice, Italy, October 22-29, 2017
Note

Funding Agency:

German Research Foundation (DFG) via the Collaborative Research Center (CRC) EASE - Everyday Activity Science and Engineering

Available from: 2017-12-21. Created: 2017-12-21. Last updated: 2018-03-12. Bibliographically approved.