Örebro University Publications (oru.se)
Persson, Andreas
Publications (10 of 18)
Aregbede, V., Sygkounas, A., Persson, A., Längkvist, M. & Loutfi, A. (2025). Generative to Discriminative Knowledge Distillation for Object Affordance. In: 2025 IEEE International Conference on Development and Learning (ICDL): . Paper presented at 2025 IEEE International Conference on Development and Learning (ICDL 2025), Prague, Czech Republic, September 16-19, 2025. IEEE
2025 (English). In: 2025 IEEE International Conference on Development and Learning (ICDL), IEEE, 2025. Conference paper, published paper (refereed)
Abstract [en]

In this paper, we present a novel approach to relational object affordance learning by leveraging the knowledge distillation paradigm, where large language models (LLMs) serve as generative teacher models. Distinct from traditional affordance learning approaches, which heavily depend on manual annotations, our approach leverages LLMs to automatically generate binary affordance labels and functional rationale explanations, grounded in object semantics and physical plausibility. This reduces the need for labor-intensive labeling while harnessing the rich semantic knowledge embedded in LLMs. To transfer this knowledge, we train a discriminative student model on the generated outputs, ensuring both predictive accuracy and semantic alignment with the teacher model. The student benefits from dual supervision: affordance labels guide classification, while rationales enhance functional understanding. Experimental results demonstrate that our generative-to-discriminative distillation method improves computational efficiency while maintaining a generalizable understanding of affordances across diverse object-object-action scenarios.
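The dual supervision described in the abstract can be sketched as a weighted sum of a label term and a rationale-alignment term. This is an illustrative reconstruction, not the paper's implementation: the function names, the cosine-alignment term, and the weight `alpha` are all assumptions.

```python
import numpy as np

def dual_supervision_loss(student_logit, teacher_label,
                          student_emb, teacher_emb, alpha=0.5):
    """Illustrative distillation objective: binary cross-entropy on the
    teacher-generated affordance label plus a cosine-alignment term on the
    rationale embeddings. All names and the weighting are hypothetical."""
    # Classification term: BCE between the student prediction and the label.
    p = 1.0 / (1.0 + np.exp(-student_logit))  # sigmoid
    bce = -(teacher_label * np.log(p) + (1 - teacher_label) * np.log(1 - p))
    # Alignment term: 1 - cosine similarity between rationale embeddings.
    cos = np.dot(student_emb, teacher_emb) / (
        np.linalg.norm(student_emb) * np.linalg.norm(teacher_emb))
    return bce + alpha * (1.0 - cos)

# A confidently correct, perfectly aligned student incurs near-zero loss.
loss = dual_supervision_loss(8.0, 1, np.array([1.0, 0.0]), np.array([1.0, 0.0]))
```

A misaligned or uncertain student is penalized by both terms, which is the intended effect of the dual supervision.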

Place, publisher, year, edition, pages
IEEE, 2025
National subject category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-126319 (URN), 10.1109/ICDL63968.2025.11204436 (DOI), 2-s2.0-105021813137 (Scopus ID), 9798331543433 (ISBN), 9798331543426 (ISBN), 9798331543440 (ISBN)
Conference
2025 IEEE International Conference on Development and Learning (ICDL 2025), Prague, Czech Republic, September 16-19, 2025
Research funder
Vetenskapsrådet, 2021-05229
Available from: 2026-01-15. Created: 2026-01-15. Last updated: 2026-01-19. Bibliographically reviewed.
Sygkounas, A., Athanasiadis, I., Persson, A., Felsberg, M. & Loutfi, A. (2025). Interactive Double Deep Q-network: Integrating Human Interventions and Evaluative Predictions in Reinforcement Learning of Autonomous Driving. In: 2025 IEEE Intelligent Vehicles Symposium (IV): Proceedings. Paper presented at 36th Intelligent Vehicles Symposium-IV-Annual, Cluj-Napoca, Romania, June 22-25, 2025 (pp. 2325-2332). IEEE
2025 (English). In: 2025 IEEE Intelligent Vehicles Symposium (IV): Proceedings, IEEE, 2025, pp. 2325-2332. Conference paper, published paper (refereed)
Abstract [en]

Integrating human expertise with machine learning is crucial for applications demanding high accuracy and safety, such as autonomous driving. This study introduces Interactive Double Deep Q-network (iDDQN), a Human-in-the-Loop (HITL) approach that enhances Reinforcement Learning (RL) by merging human insights directly into the RL training process, improving model performance. Our proposed iDDQN method modifies the Q-value update equation to integrate human and agent actions, establishing a collaborative approach for policy development. Additionally, we present an offline evaluative framework that simulates the agent's trajectory as if no human intervention had occurred, in order to assess the effectiveness of human interventions. Empirical results in simulated autonomous driving scenarios demonstrate that iDDQN outperforms established approaches, including Behavioral Cloning (BC), HG-DAgger, Deep Q-Learning from Demonstrations (DQfD), and vanilla DRL in leveraging human expertise for improving performance and adaptability.
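As a minimal sketch of how a human action can enter a double-DQN update: the standard target uses the online network to select the next action and the target network to evaluate it, and a HITL rule can substitute the human's action for the agent's in the value update. The update rule and names below are hypothetical stand-ins, not the paper's iDDQN equations.

```python
import numpy as np

def iddqn_target(reward, next_q_online, next_q_target, gamma=0.99):
    """Standard double-DQN target: the online network selects the next
    action, the target network evaluates it."""
    a_star = int(np.argmax(next_q_online))
    return reward + gamma * next_q_target[a_star]

def hitl_update(q, state, agent_action, human_action, target, lr=0.1):
    """Hypothetical HITL rule in the spirit described above: when the human
    intervenes, the executed (human) action is the one whose Q-value is
    moved toward the target."""
    executed = agent_action if human_action is None else human_action
    q[state][executed] += lr * (target - q[state][executed])
    return executed
```

With `human_action=None` the rule reduces to the ordinary agent-driven update, so interventions can be sparse.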

Place, publisher, year, edition, pages
IEEE, 2025
Series
IEEE Intelligent Vehicles Symposium (IV), ISSN 1931-0587, E-ISSN 2642-7214
National subject category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-124089 (URN), 10.1109/IV64158.2025.11097638 (DOI), 001556907500332 (), 9798331538040 (ISBN), 9798331538033 (ISBN)
Conference
36th Intelligent Vehicles Symposium-IV-Annual, Cluj-Napoca, Romania, June 22-25, 2025
Research funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2025-10-02. Created: 2025-10-02. Last updated: 2025-10-02. Bibliographically reviewed.
Hazra, R., Sygkounas, A., Persson, A., Loutfi, A. & Zuidberg dos Martires, P. (2025). REvolve: Reward Evolution with Large Language Models using Human Feedback. In: 13th International Conference on Learning Representations (ICLR 2025): Proceedings. Paper presented at 13th International Conference on Learning Representations (ICLR 2025), Singapore, April 24-28, 2025 (pp. 25710-25751). International Conference on Learning Representations, ICLR
2025 (English). In: 13th International Conference on Learning Representations (ICLR 2025): Proceedings, International Conference on Learning Representations, ICLR, 2025, pp. 25710-25751. Conference paper, published paper (refereed)
Abstract [en]

Designing effective reward functions is crucial to training reinforcement learning (RL) algorithms. However, this design is non-trivial, even for domain experts, due to the subjective nature of certain tasks that are hard to quantify explicitly. In recent works, large language models (LLMs) have been used for reward generation from natural language task descriptions, leveraging their extensive instruction tuning and commonsense understanding of human behavior. In this work, we hypothesize that LLMs, guided by human feedback, can be used to formulate reward functions that reflect human implicit knowledge. We study this in three challenging settings - autonomous driving, humanoid locomotion, and dexterous manipulation - wherein notions of “good” behavior are tacit and hard to quantify. To this end, we introduce REvolve, a truly evolutionary framework that uses LLMs for reward design in RL. REvolve generates and refines reward functions by utilizing human feedback to guide the evolution process, effectively translating implicit human knowledge into explicit reward functions for training (deep) RL agents. Experimentally, we demonstrate that agents trained on REvolve-designed rewards outperform other state-of-the-art baselines. 
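The evolution process described above can be sketched as a generic loop: score candidate reward functions, keep the fittest, and ask a mutation operator for refinements. In REvolve the mutation operator is an LLM and fitness comes from human feedback; both are stubbed with plain functions here, and all names are assumptions.

```python
import random

def evolve_rewards(population, fitness_fn, mutate_fn, generations=5, k=2):
    """Generic evolutionary loop in the spirit described above: rank the
    candidate reward functions by fitness (human feedback in REvolve),
    carry over the k fittest, and refill the population with mutations
    of the elites (an LLM-driven refinement in REvolve, stubbed here)."""
    for _ in range(generations):
        scored = sorted(population, key=fitness_fn, reverse=True)
        elites = scored[:k]
        population = elites + [mutate_fn(random.choice(elites))
                               for _ in range(len(population) - k)]
    return max(population, key=fitness_fn)

# Toy usage: evolve a single reward coefficient toward the optimum 3.0.
random.seed(0)
best = evolve_rewards([0.0, 1.0, 2.0, 10.0],
                      fitness_fn=lambda x: -(x - 3.0) ** 2,
                      mutate_fn=lambda x: x + random.gauss(0, 0.5),
                      generations=30)
```

Because elites survive each generation, the best fitness is monotonically non-decreasing, which mirrors the refinement behavior the abstract describes.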

Place, publisher, year, edition, pages
International Conference on Learning Representations, ICLR, 2025
National subject category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-123277 (URN), 10.48550/arXiv.2406.01309 (DOI), 2-s2.0-105010222426 (Scopus ID), 9798331320850 (ISBN)
Conference
13th International Conference on Learning Representations (ICLR 2025), Singapore, April 24-28, 2025
Research funders
Wallenberg AI, Autonomous Systems and Software Program (WASP); Knut och Alice Wallenbergs Stiftelse
Available from: 2025-09-01. Created: 2025-09-01. Last updated: 2026-01-16. Bibliographically reviewed.
Persson, A. & Loutfi, A. (2022). Embodied Affordance Grounding using Semantic Simulations and Neural-Symbolic Reasoning: An Overview of the PlayGround Project. In: AIC 2022: Abstracts of accepted papers. Paper presented at 8th International Workshop on Artificial Intelligence and Cognition (AIC 2022), Örebro, Sweden, June 15-17, 2022. Technical University of Aachen
2022 (English). In: AIC 2022: Abstracts of accepted papers, Technical University of Aachen, 2022. Conference paper, published paper (refereed)
Abstract [en]

In this paper, we present a synopsis of the PlayGround project. Through neural-symbolic learning and reasoning, the PlayGround project assumes that high-level concepts and reasoning processes can be used to advance both symbol grounding and object affordance inference. However, a prerequisite for reasoning about objects and their affordances is integrated object representations that concurrently maintain symbolic values (e.g., high-level concepts) and sub-symbolic features (e.g., spatial aspects of objects). These integrated representations should, preferably, be based upon neural-symbolic computation, such that neural-symbolic models can subsequently be used for high-level reasoning processes. Nevertheless, reasoning processes for symbol grounding and affordance inference often require multiple inference steps. Taking inspiration from the cognitive prospects in simulation semantics, the PlayGround project further presumes that these reasoning processes can be simulated by neural rendering complementary to high-level reasoning processes.

Place, publisher, year, edition, pages
Technical University of Aachen, 2022
Series
CEUR Workshop Proceedings, E-ISSN 1613-0073
Keywords
Symbol Grounding, Semantic World Modeling, Affordance Inference, Semantic Simulation, Neural-Symbolic Reasoning
National subject category
Computer Graphics and Computer Vision
Identifiers
urn:nbn:se:oru:diva-103950 (URN)
Conference
8th International Workshop on Artificial Intelligence and Cognition (AIC 2022), Örebro, Sweden, June 15-17, 2022
Research funder
Vetenskapsrådet, 2021-05229
Available from: 2023-02-01. Created: 2023-02-01. Last updated: 2025-02-07. Bibliographically reviewed.
Persson, A., Martires, P. Z., De Raedt, L. & Loutfi, A. (2021). ProbAnch: a Modular Probabilistic Anchoring Framework. In: Christian Bessiere (Ed.), Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20: . Paper presented at International Joint Conference on Artificial Intelligence (IJCAI 2020), Yokohama, Japan, January 7-15, 2021. (pp. 5285-5287). International Joint Conferences on Artificial Intelligence Organization (IJCAI)
2021 (English). In: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20 / [ed] Christian Bessiere, International Joint Conferences on Artificial Intelligence Organization (IJCAI), 2021, pp. 5285-5287. Conference paper, published paper (refereed)
Abstract [en]

Modeling object representations derived from perceptual observations, in a way that is semantically meaningful for humans as well as autonomous agents, is a prerequisite for joint human-agent understanding of the world. A practical approach that aims to model such representations is perceptual anchoring, which handles the problem of mapping sub-symbolic sensor data to symbols and maintains these mappings over time. In this paper, we present ProbAnch, a modular data-driven anchoring framework, whose implementation requires a variety of well-orchestrated components, including a probabilistic reasoning system.

Place, publisher, year, edition, pages
International Joint Conferences on Artificial Intelligence Organization (IJCAI), 2021
Keywords
Computer Vision, Uncertainty in AI
National subject category
Computer Graphics and Computer Vision
Identifiers
urn:nbn:se:oru:diva-88923 (URN), 10.24963/ijcai.2020/771 (DOI)
Conference
International Joint Conference on Artificial Intelligence (IJCAI 2020), Yokohama, Japan, January 7-15, 2021
Research funders
Vetenskapsrådet, 2016-05321; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note
Demo
Available from: 2021-01-25. Created: 2021-01-25. Last updated: 2025-02-07. Bibliographically reviewed.
Längkvist, M., Persson, A. & Loutfi, A. (2020). Learning Generative Image Manipulations from Language Instructions. Paper presented at Concepts in Action: Representation, Learning, and Application (CARLA 2020), Virtual workshop, September 22-23, 2020.
2020 (English). Conference paper, oral presentation with published abstract (refereed)
Abstract [en]

This paper studies whether a perceptual visual system can simulate human-like cognitive capabilities by training a computational model to predict the output of an action using a language instruction. The aim is to ground action words such that an AI is able to generate an output image that shows the effect of a certain action on a given object. The output of the model is a synthetically generated image that demonstrates the effect that the action has on the scene. This work combines an image encoder, language encoder, relational network, and image generator to ground action words, and then visualizes the effect an action would have on a simulated scene. The focus of this work is to learn meaningful shared image and text representations for relational learning and object manipulation.
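The relational module mentioned above can be sketched as relation-network-style pooling: a pairwise function is applied to every ordered pair of object embeddings together with the instruction embedding, and the results are summed. The pairwise function `g` and the sum aggregation are illustrative assumptions, not the paper's architecture.

```python
def relational_aggregate(objects, instruction, g):
    """Relation-network-style pooling: apply the pairwise function g to
    every ordered pair of object embeddings, conditioned on the language
    instruction embedding, and sum the results into one relational code.
    A hypothetical stand-in for the relational module described above."""
    pairs = [(o_i, o_j) for o_i in objects for o_j in objects]
    return sum(g(o_i, o_j, instruction) for o_i, o_j in pairs)

# Toy usage with scalar "embeddings" and a simple pairwise function.
code = relational_aggregate([1.0, 2.0], 0.5,
                            g=lambda a, b, instr: a * b + instr)
```

The summed relational code would then condition the image generator; here it is just a number, to keep the sketch self-contained.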

Keywords
image manipulation, predictive learning, relational network, cognitive learning, image generation
National subject category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-88913 (URN)
Conference
Concepts in Action: Representation, Learning, and Application (CARLA 2020), Virtual workshop, September 22-23, 2020
Available from: 2021-01-25. Created: 2021-01-25. Last updated: 2021-01-26. Bibliographically reviewed.
Persson, A., Zuidberg Dos Martires, P., Loutfi, A. & De Raedt, L. (2020). Semantic Relational Object Tracking. IEEE Transactions on Cognitive and Developmental Systems, 12(1), 84-97
2020 (English). In: IEEE Transactions on Cognitive and Developmental Systems, ISSN 2379-8920, E-ISSN 2379-8939, Vol. 12, no. 1, pp. 84-97. Journal article (refereed), published
Abstract [en]

This paper addresses the topic of semantic world modeling by conjoining probabilistic reasoning and object anchoring. The proposed approach uses a so-called bottom-up object anchoring method that relies on rich continuous attribute values measured from perceptual sensor data. A novel anchoring matching function learns to maintain object entities in space and time and is validated using a large set of human-annotated ground truth data of real-world objects. For more complex scenarios, a high-level probabilistic object tracker has been integrated with the anchoring framework and handles the tracking of occluded objects via reasoning about the state of unobserved objects. We demonstrate the performance of our integrated approach through scenarios such as the shell game scenario, where we illustrate how anchored objects are retained by preserving relations through probabilistic reasoning.
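An attribute-based matching score of the general kind described above can be sketched as a weighted combination of per-attribute similarities. The attributes, similarity measures, and weights below are illustrative assumptions, not the paper's learned matching function.

```python
import numpy as np

def match_score(anchor, percept, weights=None):
    """Illustrative anchoring match: compare continuous attribute values of
    an existing anchor against a new percept and combine the per-attribute
    similarities into a single score in [0, 1]. All attribute names,
    similarity measures, and weights are assumptions."""
    weights = weights or {"position": 0.5, "color": 0.3, "size": 0.2}
    sims = {
        # Position: decays exponentially with Euclidean distance.
        "position": float(np.exp(-np.linalg.norm(
            np.array(anchor["position"]) - np.array(percept["position"])))),
        # Color: cosine similarity between color feature vectors.
        "color": float(np.dot(anchor["color"], percept["color"]) /
                       (np.linalg.norm(anchor["color"]) *
                        np.linalg.norm(percept["color"]))),
        # Size: ratio of the smaller to the larger size estimate.
        "size": min(anchor["size"], percept["size"]) /
                max(anchor["size"], percept["size"]),
    }
    return sum(weights[k] * sims[k] for k in weights)

# An unmoved, otherwise identical object matches with score 1.0.
a = {"position": (0.0, 0.0), "color": np.array([0.2, 0.8]), "size": 1.0}
```

A learned variant would replace the fixed weights and hand-picked similarities with a trained model, as the abstract indicates.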

Place, publisher, year, edition, pages
IEEE, 2020
Keywords
Semantic World Modeling, Perceptual Anchoring, Probabilistic Reasoning, Probabilistic Logic Programming, Object Tracking, Relational Particle Filtering
National subject category
Computer Graphics and Computer Vision
Identifiers
urn:nbn:se:oru:diva-73529 (URN), 10.1109/TCDS.2019.2915763 (DOI), 000521175700009 (), 2-s2.0-85068148528 (Scopus ID)
Research funder
Vetenskapsrådet, 2016-05321
Note
Funding agencies: ReGROUND Project G0D7215N; ERC AdG SYNTH 694980
Available from: 2019-04-05. Created: 2019-04-05. Last updated: 2025-02-07. Bibliographically reviewed.
Zuidberg Dos Martires, P., Kumar, N., Persson, A., Loutfi, A. & De Raedt, L. (2020). Symbolic Learning and Reasoning With Noisy Data for Probabilistic Anchoring. Frontiers in Robotics and AI, 7, Article ID 100.
2020 (English). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 7, article id 100. Journal article (refereed), published
Abstract [en]

Robotic agents should be able to learn from sub-symbolic sensor data and, at the same time, be able to reason about objects and communicate with humans on a symbolic level. This raises the question of how to overcome the gap between symbolic and sub-symbolic artificial intelligence. We propose a semantic world modeling approach based on bottom-up object anchoring using an object-centered representation of the world. Perceptual anchoring processes continuous perceptual sensor data and maintains a correspondence to a symbolic representation. We extend the definitions of anchoring to handle multi-modal probability distributions and we couple the resulting symbol anchoring system to a probabilistic logic reasoner for performing inference. Furthermore, we use statistical relational learning to enable the anchoring framework to learn symbolic knowledge in the form of a set of probabilistic logic rules of the world from noisy and sub-symbolic sensor input. The resulting framework, which combines perceptual anchoring and statistical relational learning, is able to maintain a semantic world model of all the objects that have been perceived over time, while still exploiting the expressiveness of logical rules to reason about the state of objects which are not directly observed through sensory input data. To validate our approach we demonstrate, on the one hand, the ability of our system to perform probabilistic reasoning over multi-modal probability distributions, and on the other hand, the learning of probabilistic logical rules from anchored objects produced by perceptual observations. The learned logical rules are, subsequently, used to assess our proposed probabilistic anchoring procedure. We demonstrate our system in a setting involving object interactions where object occlusions arise and where probabilistic inference is needed to correctly anchor objects.

Place, publisher, year, edition, pages
Frontiers Media S.A., 2020
Keywords
semantic world modeling, perceptual anchoring, probabilistic anchoring, statistical relational learning, probabilistic logic programming, object tracking, relational particle filtering, probabilistic rule learning
National subject category
Computer Graphics and Computer Vision
Identifiers
urn:nbn:se:oru:diva-85553 (URN), 10.3389/frobt.2020.00100 (DOI), 000561679200001 (), 33501267 (PubMedID), 2-s2.0-85089550833 (Scopus ID)
Research funders
Vetenskapsrådet, 2016-05321; Knut och Alice Wallenbergs Stiftelse
Note
Funding agencies: FWO; Special Research Fund of the KU Leuven; European Research Council (ERC) 694980; ReGround project - EU H2020 framework program
Available from: 2020-09-11. Created: 2020-09-11. Last updated: 2025-02-07. Bibliographically reviewed.
Can, O. A., Zuidberg Dos Martires, P., Persson, A., Gaal, J., Loutfi, A., De Raedt, L., . . . Saffiotti, A. (2019). Learning from Implicit Information in Natural Language Instructions for Robotic Manipulations. In: Archna Bhatia, Yonatan Bisk, Parisa Kordjamshidi, Jesse Thomason (Ed.), Proceedings of the Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP): . Paper presented at Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP), Minneapolis, Minnesota, USA, June, 2019 (pp. 29-39). Association for Computational Linguistics, Article ID W19-1604.
2019 (English). In: Proceedings of the Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP) / [ed] Archna Bhatia, Yonatan Bisk, Parisa Kordjamshidi, Jesse Thomason, Association for Computational Linguistics, 2019, pp. 29-39, article id W19-1604. Conference paper, published paper (refereed)
Abstract [en]

Human-robot interaction often occurs in the form of instructions given from a human to a robot. For a robot to successfully follow instructions, a common representation of the world and objects in it should be shared between humans and the robot so that the instructions can be grounded. Achieving this representation can be done via learning, where both the world representation and the language grounding are learned simultaneously. However, in robotics this can be a difficult task due to the cost and scarcity of data. In this paper, we tackle the problem by separately learning the world representation of the robot and the language grounding. While this approach can address the challenges in getting sufficient data, it may give rise to inconsistencies between both learned components. Therefore, we further propose Bayesian learning to resolve such inconsistencies between the natural language grounding and a robot’s world representation by exploiting spatio-relational information that is implicitly present in instructions given by a human. Moreover, we demonstrate the feasibility of our approach on a scenario involving a robotic arm in the physical world.

Place, publisher, year, edition, pages
Association for Computational Linguistics, 2019
National subject category
Computer Sciences; Computer Graphics and Computer Vision; Human-Computer Interaction (interaction design)
Identifiers
urn:nbn:se:oru:diva-79501 (URN), 10.18653/v1/W19-1604 (DOI)
Conference
Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP), Minneapolis, Minnesota, USA, June, 2019
Research funders
Vetenskapsrådet, 2016-05321; EU, Horizon 2020
Note
This work has been supported by the ReGROUND project (http://reground.cs.kuleuven.be), which is a CHIST-ERA project funded by the EU H2020 framework program, the Research Foundation - Flanders, the Swedish Research Council (Vetenskapsrådet), and the Scientific and Technological Research Council of Turkey (TUBITAK). The work is also supported by Vetenskapsrådet under grant number 2016-05321 and by TUBITAK under grants 114E628 and 215E201.
Available from: 2020-01-29. Created: 2020-01-29. Last updated: 2025-02-01. Bibliographically reviewed.
Persson, A. (2019). Studies in Semantic Modeling of Real-World Objects using Perceptual Anchoring. (Doctoral dissertation). Örebro: Örebro University
2019 (English). Doctoral thesis, comprehensive summary (other academic)
Abstract [en]

Autonomous agents, situated in real-world scenarios, need to maintain consonance between the perceived world (through sensory capabilities) and their internal representation of the world in the form of symbolic knowledge. An approach for modeling such representations of objects is through the concept of perceptual anchoring, which, by definition, handles the problem of creating and maintaining, in time and space, the correspondence between symbols and sensor data that refer to the same physical object in the external world.

The work presented in this thesis leverages notions found within perceptual anchoring to address the problem of real-world semantic world modeling, emphasizing, in particular, sensor-driven bottom-up acquisition of perceptual data. The proposed method for handling the attribute values that constitute the perceptual signature of an object is to first integrate and explore available resources of information, such as a Convolutional Neural Network (CNN), to classify objects on the perceptual level. In addition, a novel anchoring matching function is proposed. This function both introduces the theoretical procedure for comparing attribute values and establishes the use of a learned model that approximates the anchoring matching problem. To verify the proposed method, an evaluation using human judgment to collect annotated ground truth data of real-world objects is further presented. The collected data is subsequently used to train and validate different classification algorithms, in order to learn how to correctly anchor objects, and thereby learn to invoke correct anchoring functionality.

There are, however, situations that are difficult to handle purely from the perspective of perceptual anchoring, e.g., situations where an object is moved during occlusion. In the absence of perceptual observations, it is necessary to couple the anchoring procedure with probabilistic object tracking to speculate about occluded objects, and hence, maintain a consistent world model. Motivated by the limitation in the original anchoring definition, which prohibited the modeling of the history of an object, an extension to the anchoring definition is also presented. This extension permits the historical trace of an anchored object to be maintained and used for the purpose of learning additional properties of an object, e.g., learning of the action applied to an object.

Place, publisher, year, edition, pages
Örebro: Örebro University, 2019. 93 pp.
Series
Örebro Studies in Technology, ISSN 1650-8580; 83
Keywords
Perceptual Anchoring, Semantic World Modeling, Sensor-Driven Acquisition of Data, Object Recognition, Object Classification, Symbol Grounding, Probabilistic Object Tracking
National subject category
Systems Science, Information Systems and Informatics
Identifiers
urn:nbn:se:oru:diva-73175 (URN), 978-91-7529-283-0 (ISBN)
Public defence
2019-04-29, Örebro universitet, Teknikhuset, Hörsal T, Fakultetsgatan 1, Örebro, 13:15 (English)
Available from: 2019-03-18. Created: 2019-03-18. Last updated: 2020-02-14. Bibliographically reviewed.