Örebro University Publications
Stoyanov, Todor, Associate Prof.
ORCID iD: orcid.org/0000-0002-6013-4874
Publications (10 of 81)
Gugliermo, S., Dominguez, D. C., Iannotta, M., Stoyanov, T. & Schaffernicht, E. (2024). Evaluating behavior trees. Robotics and Autonomous Systems, 178, Article ID 104714.
Evaluating behavior trees
2024 (English). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 178, article id 104714. Article in journal (Refereed). Published.
Abstract [en]

Behavior trees (BTs) are increasingly popular in the robotics community. Yet in the growing body of published work on this topic, there is a lack of consensus on what to measure and how to quantify BTs when reporting results. This is not only due to the lack of standardized measures, but also due to the sometimes ambiguous use of definitions to describe BT properties. This work provides a comprehensive overview of the BT properties the community is interested in, how they relate to each other, the metrics currently used to measure BTs, and whether those metrics appropriately quantify the properties of interest. Finally, we provide the practitioner with a set of metrics to measure, as well as insights into the properties that can be derived from those metrics. By providing this holistic view of properties and their corresponding evaluation metrics, we hope to improve clarity when using BTs in robotics. This more systematic approach will make reported results more consistent and comparable when evaluating BTs.
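
As a toy illustration of the structural measures this kind of evaluation draws on, the sketch below computes two quantities frequently reported for BTs, node count and tree depth. The node representation and the choice of metrics are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: structural metrics for a toy behavior tree.
# The node class and the metrics are illustrative, not the paper's.

class BTNode:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def node_count(node):
    """Total number of nodes in the subtree rooted at `node`."""
    return 1 + sum(node_count(c) for c in node.children)

def depth(node):
    """Length of the longest root-to-leaf path (a common size proxy)."""
    if not node.children:
        return 1
    return 1 + max(depth(c) for c in node.children)

# Toy tree: a sequence that checks a condition, then tries to act.
tree = BTNode("Sequence", [
    BTNode("Condition: object visible?"),
    BTNode("Fallback", [BTNode("Action: grasp"), BTNode("Action: re-plan")]),
])

print(node_count(tree))  # 5
print(depth(tree))       # 3
```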

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Behavior trees, Robotics, Artificial intelligence, Behavior-based systems
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:oru:diva-114983 (URN); 10.1016/j.robot.2024.104714 (DOI); 001246926800001 (ISI); 2-s2.0-85193904518 (Scopus ID)
Funder
Swedish Foundation for Strategic Research, ID19-0053; Knowledge Foundation, 20190128; EU, Horizon Europe, 101070596
Note

This work was partially supported by the Swedish Foundation for Strategic Research (SSF) (project ID19-0053), the Industrial Graduate School Collaborative AI & Robotics (CoAIRob), funded by the Swedish Knowledge Foundation under Grant Dnr:20190128, and by the European Union’s Horizon Europe Framework Programme under grant agreement No 101070596 (euROBIN).

Available from: 2024-07-25. Created: 2024-07-25. Last updated: 2025-02-07. Bibliographically approved.
Yang, S.-M., Magnusson, M., Stork, J. A. & Stoyanov, T. (2024). Learning Extrinsic Dexterity with Parameterized Manipulation Primitives. In: 2024 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024 (pp. 5404-5410). IEEE
Learning Extrinsic Dexterity with Parameterized Manipulation Primitives
2024 (English). In: 2024 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2024, p. 5404-5410. Conference paper, Published paper (Refereed).
Abstract [en]

Many practically relevant robot grasping problems feature a target object for which all grasps are occluded, e.g., by the environment. Single-shot grasp planning invariably fails in such scenarios. Instead, it is necessary to first manipulate the object into a configuration that affords a grasp. We solve this problem by learning a sequence of actions that utilize the environment to change the object’s pose. Concretely, we employ hierarchical reinforcement learning to combine a sequence of learned parameterized manipulation primitives. By learning the low-level manipulation policies, our approach can control the object’s state by exploiting interactions between the object, the gripper, and the environment. Designing such a complex behavior analytically would be infeasible under uncontrolled conditions, as an analytic approach requires accurate physical modeling of the interaction and contact dynamics. In contrast, we learn a hierarchical policy model that operates directly on depth perception data, without the need for object detection, pose estimation, or manual design of controllers. We evaluate our approach on picking box-shaped objects with various weights, shapes, and friction properties from a constrained table-top workspace. Our method transfers to a real robot and is able to successfully complete the object picking task in 98% of experimental trials.
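
The hierarchical decomposition described above can be sketched as a two-level policy: a high-level policy selects a discrete primitive and a low-level policy fills in its continuous parameters. Everything below, from the primitive names to the random stand-in policies, is a hypothetical illustration of that structure, not the paper's implementation.

```python
# Sketch of a hierarchical policy over parameterized manipulation
# primitives. Both policies are random placeholders standing in for
# the learned networks described in the abstract.
import numpy as np

PRIMITIVES = ["push_to_wall", "rotate_against_edge", "grasp"]  # hypothetical

def high_level_policy(depth_image):
    """Select which primitive to run (placeholder: uniform random)."""
    return np.random.randint(len(PRIMITIVES))

def low_level_policy(depth_image, primitive_id):
    """Output continuous parameters for the chosen primitive."""
    return np.random.uniform(-1.0, 1.0, size=3)  # e.g. contact point, angle

def execute(primitive, params):
    print(f"executing {primitive} with params {np.round(params, 2)}")

depth_image = np.zeros((64, 64))  # stand-in for real depth perception
for step in range(3):             # a short action sequence
    pid = high_level_policy(depth_image)
    params = low_level_policy(depth_image, pid)
    execute(PRIMITIVES[pid], params)
```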

Place, publisher, year, edition, pages
IEEE, 2024
Series
IEEE International Conference on Robotics and Automation (ICRA), ISSN 1050-4729, E-ISSN 2577-087X
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-117863 (URN); 10.1109/ICRA57147.2024.10611431 (DOI); 001294576204026 (ISI); 2-s2.0-85202434994 (Scopus ID); 9798350384574 (ISBN); 9798350384581 (ISBN)
Conference
IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024
Projects
DARKO
Funder
EU, Horizon 2020, 101017274; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

This work has received funding from the EU’s Horizon 2020 research and innovation programme under grant agreement No 101017274, and was supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

Available from: 2024-12-18. Created: 2024-12-18. Last updated: 2025-02-04. Bibliographically approved.
Yang, Y., Stork, J. A. & Stoyanov, T. (2024). Tracking Branched Deformable Linear Objects Using Particle Filtering on Depth Images. In: 2024 IEEE 20th International Conference on Automation Science and Engineering (CASE). Paper presented at 20th International Conference on Automation Science and Engineering (CASE 2024), Bari, Italy, August 28 - September 1, 2024 (pp. 912-919). IEEE
Tracking Branched Deformable Linear Objects Using Particle Filtering on Depth Images
2024 (English). In: 2024 IEEE 20th International Conference on Automation Science and Engineering (CASE), IEEE, 2024, p. 912-919. Conference paper, Published paper (Refereed).
Abstract [en]

Branched deformable linear objects (BDLOs), such as wire harnesses, are important connecting components in manufacturing industries. However, due to deformability, a lack of distinct visual features, and complex branched structure, automating tasks involving BDLOs remains a challenge. In this paper, we propose a particle-filter-based method to track the state of a BDLO. To circumvent the high cost of tracking the complex high-dimensional BDLO state, we instead track each branch as an individual B-spline. Our method learns a data-driven model to predict the likelihood of each particle conditioned on a depth image observation. In contrast to current state-of-the-art approaches based on non-rigid registration, we do not require pre-segmenting the BDLO, thus alleviating a strong and limiting assumption. We train our approach on domain-randomized depth data from simulation and achieve zero-shot transfer to real-world BDLOs, attaining state-of-the-art tracking performance even in cases where pre-segmentation fails.
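
A minimal sketch of per-branch particle filtering over B-spline control points follows. The Gaussian motion noise and the hand-written likelihood are placeholders for the paper's learned, depth-conditioned observation model.

```python
# Sketch of per-branch particle filtering over B-spline control points.
# Motion model and likelihood are placeholders for the learned models.
import numpy as np

rng = np.random.default_rng(0)
N_PARTICLES, N_CTRL = 100, 6          # particles; control points per branch

def predict(particles, sigma=0.01):
    """Diffuse control points with Gaussian noise (placeholder motion model)."""
    return particles + rng.normal(0.0, sigma, particles.shape)

def likelihood(particles, depth_image):
    """Placeholder for the learned observation model: here it simply
    prefers particles whose control points stay near the origin."""
    d = np.linalg.norm(particles, axis=(1, 2))
    return np.exp(-d)

def resample(particles, weights):
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

particles = rng.normal(0, 0.1, (N_PARTICLES, N_CTRL, 3))  # (x, y, z) ctrl pts
depth_image = np.zeros((64, 64))       # stand-in for a real depth frame
for t in range(10):
    particles = predict(particles)
    w = likelihood(particles, depth_image)
    particles = resample(particles, w / w.sum())

estimate = particles.mean(axis=0)      # tracked branch state
```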

Place, publisher, year, edition, pages
IEEE, 2024
Series
IEEE International Conference on Automation Science and Engineering, ISSN 2161-8070, E-ISSN 2161-8089
National Category
Computer graphics and computer vision
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-118139 (URN); 10.1109/CASE59546.2024.10711651 (DOI); 2-s2.0-85208279756 (Scopus ID); 9798350358513 (ISBN); 9798350358520 (ISBN)
Conference
20th International Conference on Automation Science and Engineering (CASE 2024), Bari, Italy, August 28 - September 1, 2024
Funder
Vinnova, 2021-04693; Vinnova, 2020-04467; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

This work was partially supported by Vinnova / SIP-STRIM projects 2020-04467 and 2021-04693, and was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

Available from: 2025-01-07. Created: 2025-01-07. Last updated: 2025-03-17. Bibliographically approved.
Rietz, F., Magg, S., Heintz, F., Stoyanov, T., Wermter, S. & Stork, J. A. (2023). Hierarchical goals contextualize local reward decomposition explanations. Neural Computing & Applications, 35(23), 16693-16704
Hierarchical goals contextualize local reward decomposition explanations
2023 (English). In: Neural Computing & Applications, ISSN 0941-0643, E-ISSN 1433-3058, Vol. 35, no 23, p. 16693-16704. Article in journal (Refereed). Published.
Abstract [en]

One-step reinforcement learning explanation methods account for individual actions but fail to consider the agent's future behavior, which can make their interpretation ambiguous. We propose to address this limitation by providing hierarchical goals as context for one-step explanations. By considering the current hierarchical goal as a context, one-step explanations can be interpreted with higher certainty, as the agent's future behavior is more predictable. We combine reward decomposition with hierarchical reinforcement learning into a novel explainable reinforcement learning framework, which yields more interpretable, goal-contextualized one-step explanations. With a qualitative analysis of one-step reward decomposition explanations, we first show that their interpretability is indeed limited in scenarios with multiple different optimal policies, a characteristic shared by other one-step explanation methods. Then, we show that our framework retains high interpretability in such cases, as the hierarchical goal can be considered as context for the explanation. To the best of our knowledge, our work is the first to investigate hierarchical goals not as an explanation directly but as additional context for one-step reinforcement learning explanations.
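
The sketch below illustrates the core idea: keep action values per reward component and present, for the chosen action, the component breakdown under the currently active hierarchical goal. The components, goals, and random value tables are made-up placeholders, not the paper's setup.

```python
# Sketch of a goal-contextualized reward-decomposition explanation.
# Q-values are kept per reward component; the explanation for an action
# is its component breakdown under the active hierarchical goal.
import numpy as np

components = ["progress", "safety", "energy"]   # hypothetical reward channels
actions = ["left", "right"]
goals = ["reach_door", "reach_window"]

# Q[goal] has shape (components, actions): per-goal decomposed action values.
Q = {g: np.random.default_rng(i).normal(size=(3, 2)) for i, g in enumerate(goals)}

def explain(goal):
    q = Q[goal]
    best = int(q.sum(axis=0).argmax())          # greedy in the total value
    print(f"goal={goal}, action={actions[best]}")
    for name, contrib in zip(components, q[:, best]):
        print(f"  {name:>8}: {contrib:+.2f}")

explain("reach_door")     # same state, different goal ->
explain("reach_window")   # different breakdown, hence a different explanation
```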

Place, publisher, year, edition, pages
Springer, 2023
Keywords
Reinforcement learning, Explainable AI, Reward decomposition, Hierarchical goals, Local explanations
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-99115 (URN); 10.1007/s00521-022-07280-8 (DOI); 000794083400001 (ISI); 2-s2.0-85129803505 (Scopus ID)
Note

Funding agencies: Örebro University; Wallenberg AI, Autonomous Systems and Software Program (WASP), funded by the Knut and Alice Wallenberg Foundation; Federal Ministry for Economic Affairs and Climate, FKZ 20X1905A-D.

Available from: 2022-05-23. Created: 2022-05-23. Last updated: 2023-11-28. Bibliographically approved.
Yang, Q., Stork, J. A. & Stoyanov, T. (2023). Learn from Robot: Transferring Skills for Diverse Manipulation via Cycle Generative Networks. In: 2023 IEEE 19th International Conference on Automation Science and Engineering (CASE). Paper presented at 19th International Conference on Automation Science and Engineering (IEEE CASE 2023), Cordis, Auckland, New Zealand, August 26-30, 2023. IEEE conference proceedings
Learn from Robot: Transferring Skills for Diverse Manipulation via Cycle Generative Networks
2023 (English). In: 2023 IEEE 19th International Conference on Automation Science and Engineering (CASE), IEEE conference proceedings, 2023. Conference paper, Published paper (Refereed).
Abstract [en]

Reinforcement learning (RL) has shown impressive results on a variety of robot tasks, but it requires a large amount of data to learn a single RL policy. In manufacturing, there is broad demand for reusing skills across different robots, yet it is hard to transfer a learned policy to different hardware due to diverse robot body morphologies, kinematics, and dynamics. In this paper, we address the problem of transferring policies between different robot platforms. We learn a set of skills on each specific robot and represent them in a latent space. We propose to transfer skills between different robots by mapping latent action spaces through a cycle generative network in a supervised learning manner. We extend the policy model learned on one robot with a pre-trained generative network to enable the robot to learn from the skills of another robot. We evaluate our method on several simulated experiments and demonstrate that our Learn from Robot (LfR) method accelerates new skill learning.
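
The cycle-consistency objective at the heart of such a mapping can be sketched in a few lines: a map G takes robot A's latent actions to robot B's space, F maps back, and training would minimize the reconstruction error of both round trips. The linear maps and random batches below are stand-ins for the paper's generative networks and skill data.

```python
# Sketch of the cycle-consistency idea for mapping latent action spaces
# between two robots. Linear maps replace the learned generative networks.
import numpy as np

rng = np.random.default_rng(0)
dim_a, dim_b = 8, 6
G = rng.normal(size=(dim_b, dim_a)) * 0.1   # A -> B (placeholder "network")
F = rng.normal(size=(dim_a, dim_b)) * 0.1   # B -> A

z_a = rng.normal(size=(32, dim_a))          # batch of latent skills, robot A
z_b = rng.normal(size=(32, dim_b))          # batch of latent skills, robot B

cycle_a = np.mean((z_a - z_a @ G.T @ F.T) ** 2)   # A -> B -> A reconstruction
cycle_b = np.mean((z_b - z_b @ F.T @ G.T) ** 2)   # B -> A -> B reconstruction
loss = cycle_a + cycle_b                          # training would minimize this
print(f"cycle-consistency loss: {loss:.3f}")
```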

Place, publisher, year, edition, pages
IEEE conference proceedings, 2023
Series
IEEE International Conference on Automation Science and Engineering, ISSN 2161-8070, E-ISSN 2161-8089
Keywords
Reinforcement Learning, Transfer Learning, Generative Models
National Category
Robotics and automation
Identifiers
urn:nbn:se:oru:diva-108719 (URN); 10.1109/CASE56687.2023.10260484 (DOI); 9798350320701 (ISBN); 9798350320695 (ISBN)
Conference
19th International Conference on Automation Science and Engineering (IEEE CASE 2023), Cordis, Auckland, New Zealand, August 26-30, 2023
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2023-10-03. Created: 2023-10-03. Last updated: 2025-02-09. Bibliographically approved.
Dominguez, D. C., Iannotta, M., Stork, J. A., Schaffernicht, E. & Stoyanov, T. (2022). A Stack-of-Tasks Approach Combined With Behavior Trees: A New Framework for Robot Control. IEEE Robotics and Automation Letters, 7(4), 12110-12117
A Stack-of-Tasks Approach Combined With Behavior Trees: A New Framework for Robot Control
2022 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 7, no 4, p. 12110-12117. Article in journal (Refereed). Published.
Abstract [en]

Stack-of-Tasks (SoT) control allows a robot to simultaneously fulfill a number of prioritized goals formulated in terms of (in)equality constraints in error space. Since this approach solves a sequence of Quadratic Programs (QPs) at each time-step, without taking into account any temporal state evolution, it is suitable for dealing with local disturbances. However, its limitation lies in the handling of situations that require non-quadratic objectives to achieve a specific goal, as well as situations where countering the control disturbance would require a locally suboptimal action. Recent works address this shortcoming by exploiting Finite State Machines (FSMs) to compose the tasks in such a way that the robot does not get stuck in local minima. Nevertheless, the intrinsic trade-off between reactivity and modularity that characterizes FSMs makes them impractical for defining reactive behaviors in dynamic environments. In this letter, we combine the SoT control strategy with Behavior Trees (BTs), a task switching structure that addresses some of the limitations of FSMs in terms of reactivity, modularity and reusability. Experimental results on a Franka Emika Panda 7-DOF manipulator show the robustness of our framework, which allows the robot to benefit from the reactivity of both SoT and BTs.
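
The priority handling in a Stack-of-Tasks can be illustrated with an equality-only, two-level example: solve the primary task in the least-squares sense, then solve the secondary task within the primary task's nullspace. A full SoT controller solves constrained QPs at each time-step; the made-up Jacobians below are purely illustrative.

```python
# Sketch of two-level prioritized task resolution (equality tasks only).
# J_i are task Jacobians, e_i the desired task-space velocities.
import numpy as np

J1, e1 = np.array([[1.0, 0.0, 0.0]]), np.array([0.5])   # priority 1
J2, e2 = np.array([[0.0, 1.0, 1.0]]), np.array([1.0])   # priority 2

q1 = np.linalg.pinv(J1) @ e1                  # best solution for task 1
N1 = np.eye(3) - np.linalg.pinv(J1) @ J1      # nullspace projector of J1
q2 = np.linalg.pinv(J2 @ N1) @ (e2 - J2 @ q1) # task 2, restricted to N1
q = q1 + N1 @ q2                              # joint velocity command

print(np.allclose(J1 @ q, e1))  # True: task 1 met exactly
print(np.allclose(J2 @ q, e2))  # True: task 2 met where the nullspace allows
```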

Place, publisher, year, edition, pages
IEEE Press, 2022
Keywords
Behavior-based systems, control architectures and programming
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:oru:diva-101946 (URN); 10.1109/LRA.2022.3211481 (DOI); 000868319800006 (ISI)
Funder
Knut and Alice Wallenberg Foundation
Note

Funding agencies: Industrial Graduate School Collaborative AI & Robotics (CoAIRob); General Electric, Dnr:20190128.

Available from: 2022-10-27. Created: 2022-10-27. Last updated: 2025-02-07. Bibliographically approved.
Hoang, D.-C., Stork, J. A. & Stoyanov, T. (2022). Context-Aware Grasp Generation in Cluttered Scenes. In: 2022 International Conference on Robotics and Automation (ICRA). Paper presented at IEEE International Conference on Robotics and Automation (ICRA 2022), Philadelphia, USA, May 23-27, 2022 (pp. 1492-1498). IEEE
Context-Aware Grasp Generation in Cluttered Scenes
2022 (English). In: 2022 International Conference on Robotics and Automation (ICRA), IEEE, 2022, p. 1492-1498. Conference paper, Published paper (Refereed).
Abstract [en]

Conventional approaches to autonomous grasping rely on a pre-computed database of known objects to synthesize grasps, which is not possible for novel objects. Recently proposed deep learning-based approaches, on the other hand, have demonstrated the ability to generalize grasps to unknown objects. However, grasp generation remains a challenging problem, especially in cluttered environments under partial occlusion. In this work, we propose an end-to-end deep learning approach for generating 6-DOF collision-free grasps given a 3D scene point cloud. To build robustness to occlusion, the proposed model generates candidates by casting votes and accumulating evidence for feasible grasp configurations. We exploit contextual information by encoding the dependency of objects in the scene into features to boost the performance of grasp generation. The contextual information enables our model to increase the likelihood that the generated grasps are collision-free. Our experimental results confirm that the proposed system performs favorably in predicting object grasps in cluttered environments in comparison to current state-of-the-art methods.
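
A rough sketch of the vote-casting step follows: every scene point votes for a nearby grasp centre, votes are accumulated on a coarse grid, and the strongest cells become candidates. The geometric vote offsets below stand in for the learned, context-aware votes described in the abstract.

```python
# Sketch of vote accumulation for grasp-candidate generation.
# Vote offsets are geometric placeholders for learned votes.
import numpy as np

rng = np.random.default_rng(1)
points = rng.uniform(0, 1, (500, 3))                 # toy scene point cloud
votes = points + rng.normal(0, 0.02, points.shape)   # placeholder offsets

# Accumulate votes in a coarse 10x10x10 grid over the unit cube.
grid = np.zeros((10, 10, 10))
idx = np.clip((votes * 10).astype(int), 0, 9)
np.add.at(grid, tuple(idx.T), 1)

# The strongest cell is a grasp-centre candidate.
peak = np.unravel_index(grid.argmax(), grid.shape)
print("strongest grasp-centre cell:", peak, "votes:", int(grid[peak]))
```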

Place, publisher, year, edition, pages
IEEE, 2022
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-98437 (URN); 10.1109/ICRA46639.2022.9811371 (DOI); 000941265701005 (ISI); 2-s2.0-85136323876 (Scopus ID); 9781728196824 (ISBN); 9781728196817 (ISBN)
Conference
IEEE International Conference on Robotics and Automation (ICRA 2022), Philadelphia, USA, May 23-27, 2022
Funder
EU, Horizon 2020, 101017274 (DARKO)
Available from: 2022-04-01. Created: 2022-04-01. Last updated: 2023-05-03. Bibliographically approved.
Iannotta, M., Dominguez, D. C., Stork, J. A., Schaffernicht, E. & Stoyanov, T. (2022). Heterogeneous Full-body Control of a Mobile Manipulator with Behavior Trees. In: IROS 2022 Workshop on Mobile Manipulation and Embodied Intelligence (MOMA): Challenges and Opportunities. Paper presented at International Conference on Intelligent Robots and Systems (IROS 2022), Kyoto, Japan, October 23-27, 2022.
Heterogeneous Full-body Control of a Mobile Manipulator with Behavior Trees
2022 (English). In: IROS 2022 Workshop on Mobile Manipulation and Embodied Intelligence (MOMA): Challenges and Opportunities, 2022. Conference paper, Published paper (Refereed).
Abstract [en]

Integrating the heterogeneous controllers of a complex mechanical system, such as a mobile manipulator, within the same structure and in a modular way is still challenging. In this work, we extend our Behavior Tree-based framework for the control of a redundant mechanical system to the problem of commanding more complex systems that involve multiple low-level controllers. This allows the integrated systems to achieve non-trivial goals that require coordination among the sub-systems.
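
A bare-bones illustration of a behavior tree ticking heterogeneous low-level controllers is given below; the controller bodies and the minimal Sequence node are invented for the sketch and do not reflect the framework's actual interfaces.

```python
# Sketch of a behavior tree dispatching to heterogeneous low-level
# controllers (arm vs. mobile base). All names are placeholders.

def arm_controller():
    print("arm: track end-effector target")
    return "SUCCESS"

def base_controller():
    print("base: drive toward goal")
    return "RUNNING"

class Sequence:
    """Tick children in order; stop at the first non-SUCCESS status."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child()
            if status != "SUCCESS":
                return status      # bail out, preserving reactivity
        return "SUCCESS"

tree = Sequence(arm_controller, base_controller)
print(tree.tick())  # "RUNNING": the base is still moving
```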

National Category
Robotics and automation
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-102984 (URN); 10.48550/arXiv.2210.08600 (DOI)
Conference
International Conference on Intelligent Robots and Systems (IROS 2022), Kyoto, Japan, October 23-27, 2022
Funder
Knowledge Foundation
Available from: 2023-01-09. Created: 2023-01-09. Last updated: 2025-02-09. Bibliographically approved.
Yang, Y., Stork, J. A. & Stoyanov, T. (2022). Learn to Predict Posterior Probability in Particle Filtering for Tracking Deformable Linear Objects. In: 3rd Workshop on Robotic Manipulation of Deformable Objects: Challenges in Perception, Planning and Control for Soft Interaction (ROMADO-SI), IROS 2022, Kyoto, Japan. Paper presented at 35th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022), Kyoto, Japan, October 23-27, 2022.
Learn to Predict Posterior Probability in Particle Filtering for Tracking Deformable Linear Objects
2022 (English). In: 3rd Workshop on Robotic Manipulation of Deformable Objects: Challenges in Perception, Planning and Control for Soft Interaction (ROMADO-SI), IROS 2022, Kyoto, Japan, 2022. Conference paper, Published paper (Refereed).
Abstract [en]

Tracking deformable linear objects (DLOs) is a key element of applications where robots manipulate DLOs. However, the lack of distinctive features or appearance on the DLO and the object’s high-dimensional state space make tracking challenging and still an open question in robotics. In this paper, we propose a method for tracking the state of a DLO by applying a particle filter approach, where the posterior probability of each sample is estimated by a learned predictor. Our method achieves accurate tracking without the prerequisite segmentation that many related works require. Due to the differentiability of the posterior probability predictor, our method can leverage the gradients of posterior probabilities with respect to the latent states to improve the motion model in the particle filter. Preliminary experiments suggest that the proposed method provides robust tracking results and that the estimated DLO state converges quickly to the true state even when the initial state is unknown.
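
The gradient-based refinement can be sketched with a differentiable stand-in for the learned log-posterior: particles are nudged along its gradient before the usual filtering steps. The quadratic predictor below is an assumption made only so the example is self-contained.

```python
# Sketch of gradient-informed particle moves using a differentiable
# (here: quadratic placeholder) log-posterior predictor.
import numpy as np

rng = np.random.default_rng(0)
true_state = np.array([0.3, -0.2])     # hidden state the filter should find

def grad_log_posterior(x):
    """Analytic gradient of a quadratic placeholder log-posterior
    that peaks at true_state (stands in for the learned network)."""
    return -2.0 * (x - true_state)

particles = rng.normal(0, 1, (200, 2))  # unknown initial state
for t in range(20):
    particles += 0.1 * grad_log_posterior(particles)   # gradient-informed move
    particles += rng.normal(0, 0.01, particles.shape)  # keep particle diversity

print("mean estimate:", particles.mean(axis=0))  # approaches true_state
```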

National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:oru:diva-102743 (URN)
Conference
35th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022), Kyoto, Japan, October 23-27, 2022
Funder
Vinnova, 2019-05175; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2023-01-27. Created: 2023-01-27. Last updated: 2025-02-07. Bibliographically approved.
Yang, Y., Stork, J. A. & Stoyanov, T. (2022). Learning differentiable dynamics models for shape control of deformable linear objects. Robotics and Autonomous Systems, 158, Article ID 104258.
Learning differentiable dynamics models for shape control of deformable linear objects
2022 (English). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 158, article id 104258. Article in journal (Refereed). Published.
Abstract [en]

Robots manipulating deformable linear objects (DLOs) – such as surgical sutures in medical robotics, or cables and hoses in industrial assembly – can benefit substantially from accurate and fast differentiable predictive models. However, off-the-shelf analytic physics models fall short of differentiability. Recently, neural-network-based data-driven models have shown promising results in learning DLO dynamics. These models have additional advantages compared to analytic physics models, as they are differentiable and can be used in gradient-based trajectory planning. Still, data-driven approaches demand a large amount of training data, which can be challenging for real-world applications. In this paper, we propose a framework for learning a differentiable data-driven model for DLO dynamics with a minimal set of real-world data. To learn DLO twisting and bending dynamics in a 3D environment, we first introduce a new suitable DLO representation. Next, we use a recurrent network module to propagate effects between different segments along a DLO, thereby addressing a critical limitation of current state-of-the-art methods. Then, we train a data-driven model on synthetic data generated in simulation, thereby forgoing the time-consuming and laborious data collection process for real-world applications. To achieve a good correspondence between real and simulated models, we choose a set of simulation model parameters through parameter identification, requiring only a few trajectories of a real DLO. We evaluate several optimization methods for parameter identification and demonstrate that the differential evolution algorithm is efficient and effective. In DLO shape control tasks with a model-based controller, the data-driven model trained on synthetic data generated by the resulting models performs on par with models trained on a comparable amount of real-world data, which would, however, be intractable to collect.
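
The parameter-identification step can be sketched with SciPy's differential evolution optimizer fitting a toy one-parameter model to a pretend "real" trajectory. The one-mass spring model is a stand-in for the actual DLO simulator, and SciPy availability is an assumption of this sketch.

```python
# Sketch of simulation parameter identification with differential
# evolution: fit a toy model's stiffness so simulated trajectories
# match a "real" one. The one-mass model replaces the DLO simulator.
import numpy as np
from scipy.optimize import differential_evolution

def simulate(k, steps=50, dt=0.05):
    """Toy mass-on-a-spring rollout with stiffness k."""
    x, v = 1.0, 0.0
    traj = []
    for _ in range(steps):
        v += -k * x * dt
        x += v * dt
        traj.append(x)
    return np.array(traj)

real_traj = simulate(k=2.7)   # pretend this came from a real DLO

def cost(params):
    """Mean squared error between simulated and 'real' trajectories."""
    return np.mean((simulate(params[0]) - real_traj) ** 2)

result = differential_evolution(cost, bounds=[(0.1, 10.0)], seed=0)
print("identified stiffness:", result.x[0])  # close to 2.7
```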

Place, publisher, year, edition, pages
Elsevier, 2022
Keywords
Deformable linear object, Model learning, Parameter identification, Model predictive control
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-101292 (URN); 10.1016/j.robot.2022.104258 (DOI); 000869528600006 (ISI); 2-s2.0-85138188346 (Scopus ID)
Funder
Vinnova, 2019-05175; Knut and Alice Wallenberg Foundation
Available from: 2022-09-19. Created: 2022-09-19. Last updated: 2023-09-18. Bibliographically approved.