D1: Invasive Software-Hardware Architectures for Robotics

Principal Investigators:

Prof. T. Asfour, Prof. W. Stechele

Scientific Researchers:

M. Kröhnert, J. Paul

Abstract

The main research topic of subproject D1 is the exploration of the benefits and limitations of Invasive Computing for humanoid robotics. Our goal is to use complex robotic scenarios with concurrent processes and time-varying resource demands to demonstrate high-level invasion of and negotiation for resources. Invasive computing mechanisms should allow for efficient resource usage and fast execution times while keeping resource utilisation and the predictability of non-functional properties (e.g. power, dependability, security) at an optimal level. Research on self-organisation techniques is therefore key to the efficient allocation of available resources in situations where multiple applications compete for the same resources. To take full advantage of invasive computing, our algorithms will be able to run on different hardware configurations and support dynamic switching between them.

In the first funding phase, we defined a robotics scenario comprising several algorithms required for the implementation of visually guided grasping on a humanoid robot. A number of selected applications were analysed to provide design specification requirements for the projects of the A, B, and C areas. Fixed resources under changing load situations were used for initial evaluations of these algorithms on an invasive hardware platform. Based on the knowledge gained from these experiments, we established performance metrics, which were later used to adapt parameters of running applications in order to provide optimal throughput while the available computing resources change. These adaptable parameters range from simple threshold values to more complex algorithm partitioning schemes. Exploiting the information provided by a given invasive computing platform and combining it with the previously defined metrics allowed us to demonstrate improved application performance and quality compared to state-of-the-art techniques.

During phase II, we will focus on three main tasks. First, we will expand the visually guided grasping scenario to consider human-robot interactions during execution. This will increase the complexity of the scenario, since unexpected events may occur during all processing stages, resulting in dynamic branching of the robot application. For example, when a human agent enters the scene, the robot has to consider potential collisions, new task goals triggered by speech commands, or a changing scene when the human agent interacts with the environment. Second, we will investigate how prediction mechanisms can be used to forecast future resource usage. To this end, we will build prediction models generated by learning from experience through self-monitoring concepts. Additionally, expert knowledge will be used, e.g. for predicting the intentions of human agents. By using these prediction models at run time, the robot control program will be able to estimate the characteristics of future resource usage and to inform the invasive run-time system about such future resource demands, leading to optimised resource allocation and improved run-time performance. Finally, we will investigate how the invasive computing mechanisms, which we already applied to selected algorithms in phase I, can be extended to cover complete robot program chains. A robot grasping pipeline, for example, comprises different algorithms such as object recognition, gaze selection, and motion planning. Compared to current implementations, where all these components interact in a non-predictive and non-resource-aware manner, we will investigate how resource awareness, and in particular prediction of the expected resource consumption profile, helps distribute the available resources in an optimal way. Based on this information, a prediction of resource demands and of future load profiles can be realised. The outcome of this project will be an enhanced grasping pipeline for humanoid robots which combines invasive computing with a monitoring and resource prediction concept realised within all stages of the pipeline.

Synopsis

At the beginning of the first funding period, we started by investigating how to enable manycore robot programs to dynamically adjust their resource usage in a self-organising way. Most robotic algorithms known from the literature are not easily deployable on future massively parallel MPSoCs. We therefore investigated several robotic algorithms in phase I to make them adaptable to varying amounts of available resources at run time. The focus was the evaluation of a robotic scenario consisting of a visually guided grasping task, which includes algorithms from multiple application classes. Common frame-based robot vision algorithms such as Harris corner detection, SIFT feature matching, and colour segmentation were used to recognise and track objects in video frames. All algorithms were implemented to make use of invasive computing and executed on the common FPGA demonstrator platform provided by Project Z2. Additionally, the demonstrator platform was integrated with the real robot head of ARMAR-III to provide a closed-loop demonstration. In the following paragraphs, we present our research on the HSV colour segmentation algorithm, which takes advantage of invasive computing mechanisms.

D1 demo scenario

Evaluation scenario with the ARMAR robot, the robot control PC, and the InvasIC hardware running on a Synopsys CHIPit system.

The developed invasive HSV colour segmentation algorithm is based on the assumption that changing resource availability results in changing segmentation quality. A robot program can therefore adjust the segmentation quality to meet a given deadline, or it can modify the execution time to meet a certain quality criterion. This approach makes the segmentation usable in a continuous control loop of a robot program. The execution steps of the HSV colour segmentation are illustrated in the figure below.

Resource-aware HSV colour segmentation algorithm

HSV colour segmentation steps: (a) image splitting, (b) distributed region calculation, (c) region fusion.
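To make the quality/deadline trade-off from above concrete, the following minimal sketch shows how such a resource-aware segmentation could sit inside a continuous control loop: after every frame, the measured execution time is compared against the deadline and the quality level for the next frame is adapted. All names (camera.grab, segment_frame) and the quality levels are placeholders for illustration, not the actual robot program interface.

```python
import time

# Candidate quality levels (image resolutions), highest quality first.
QUALITY_LEVELS = [(640, 480), (320, 240), (160, 120), (80, 60)]

def control_loop(camera, segment_frame, deadline_ms):
    """Yield segmentation results while adapting quality to the deadline."""
    level = 0  # start at the highest quality
    while True:
        frame = camera.grab(QUALITY_LEVELS[level])
        start = time.perf_counter()
        regions = segment_frame(frame)
        elapsed_ms = (time.perf_counter() - start) * 1e3
        # Deadline missed: degrade quality; ample slack: try a higher one.
        if elapsed_ms > deadline_ms and level < len(QUALITY_LEVELS) - 1:
            level += 1
        elif elapsed_ms < 0.5 * deadline_ms and level > 0:
            level -= 1
        yield regions
```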

Images received via Ethernet are split into multiple chunks according to the available computing resources. These chunks are then distributed to the tiles of the available resources for processing. Afterwards, the recognised regions are returned to the main program, where fusion techniques are applied to create contiguous regions; a simplified sketch of this split-process-fuse scheme is given below.
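The sketch illustrates the three steps from the figure above using NumPy and horizontal image strips. The strip count stands in for the number of invaded tiles, and the fusion step is reduced to merging per-strip bounding boxes whose x-ranges overlap; it is an illustration of the scheme under these simplifying assumptions, not the actual OctoPOS implementation.

```python
import numpy as np

def hsv_mask(strip, lo, hi):
    """Per-tile work: mark pixels whose HSV values lie inside [lo, hi]."""
    return np.all((strip >= lo) & (strip <= hi), axis=-1)

def strip_box(mask, y_off):
    """Bounding box of the segmented pixels in one strip (None if empty)."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return [int(xs.min()), y_off + int(ys.min()),
            int(xs.max()), y_off + int(ys.max())]

def fuse(boxes):
    """Fusion: merge boxes whose x-ranges overlap, i.e. regions that were
    cut apart by the strip borders."""
    merged = []
    for b in boxes:
        for m in merged:
            if b[0] <= m[2] and m[0] <= b[2]:  # x-ranges overlap
                m[:] = [min(m[0], b[0]), min(m[1], b[1]),
                        max(m[2], b[2]), max(m[3], b[3])]
                break
        else:
            merged.append(b)
    return merged

def segment(img_hsv, lo, hi, n_tiles):
    strips = np.array_split(img_hsv, n_tiles, axis=0)  # (a) split
    boxes, y_off = [], 0
    for s in strips:                                   # (b) per-tile work
        box = strip_box(hsv_mask(s, lo, hi), y_off)
        if box is not None:
            boxes.append(box)
        y_off += s.shape[0]
    return fuse(boxes)                                 # (c) fuse
```

Initial profiling of the actual implementation was performed on the x86 guest-level variant of OctoPOS; the results are presented in the figure below.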

Resource usage and execution time for HSV colour segmentation algorithm

Execution time of the HSV colour segmentation algorithm for different numbers of invaded processing elements and input image sizes.

It is clearly visible that the overall execution time of the algorithm decreases as the number of invaded processing elements increases. Reducing the size of the input image also has a considerable impact on the overall execution time. However, resizing the input image itself requires computation and memory accesses, which have to be taken into account. The times required for resizing images to a lower quality level are depicted in the table. The transformation time decreases with the size of the result image, because fewer memory accesses are required for smaller result images; a sketch of the resulting cost model is given below.
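The following sketch captures this cost model with illustrative constants (the per-pixel costs are assumptions, not values measured on the InvasIC platform): the total time is the resize overhead, which scales with the size of the result image, plus the segmentation time, which shrinks with the number of invaded processing elements.

```python
def total_time_ms(src, dst, n_pes,
                  resize_ns_per_px=10.0, seg_ns_per_px=50.0):
    """Illustrative cost model: resize cost depends on the target size
    (fewer memory accesses for smaller result images), while the
    segmentation work parallelises over the processing elements."""
    dst_px = dst[0] * dst[1]
    resize = 0.0 if dst == src else dst_px * resize_ns_per_px / 1e6
    return resize + dst_px * seg_ns_per_px / (n_pes * 1e6)

for res in [(640, 480), (320, 240), (160, 120), (80, 60)]:
    print(res, round(total_time_ms((640, 480), res, n_pes=4), 3), "ms")
```

The degradation of segmentation quality for a given input image resolution is illustrated in the figure below.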

Results from resource-aware HSV colour segmentation algorithm

HSV colour segmentation quality depending on image resolution (from left to right: 640x480, 320x240, 160x120, 80x60)

All segmentation results are scaled up to match the original image resolution of 640x480 pixels. Based on the observations above, the algorithm can dynamically decide how many resources to request: it can estimate the quality to expect for a given set of processing elements, or determine how many processing elements have to be requested in order to achieve a given quality.
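A hedged sketch of these two run-time decisions is given below, built on an inlined version of the illustrative cost model from the previous listing; all constants and limits (e.g. max_pes) are assumptions for illustration, not platform values.

```python
# Quality levels ordered from highest to lowest.
RESOLUTIONS = [(640, 480), (320, 240), (160, 120), (80, 60)]

def seg_time_ms(res, n_pes, ns_per_px=50.0):
    # Illustrative linear cost model: pixel count divided over the PEs.
    return res[0] * res[1] * ns_per_px / (n_pes * 1e6)

def expected_quality(n_pes, deadline_ms):
    """Best resolution a claim of n_pes can process within the deadline."""
    for res in RESOLUTIONS:
        if seg_time_ms(res, n_pes) <= deadline_ms:
            return res
    return None  # even the lowest quality misses the deadline

def pes_to_request(target_res, deadline_ms, max_pes=16):
    """Smallest number of PEs meeting the deadline at the target quality."""
    for n in range(1, max_pes + 1):
        if seg_time_ms(target_res, n) <= deadline_ms:
            return n
    return None  # not achievable with max_pes elements

print(expected_quality(n_pes=4, deadline_ms=2.0))   # -> (320, 240)
print(pes_to_request((640, 480), deadline_ms=2.0))  # -> 8
```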

A comprehensive summary of the major achievements of the first funding phase can be found on the Project D1 first phase website.

Publications

[1] Nidhi Anantharajaiah, Tamim Asfour, Michael Bader, Lars Bauer, Jürgen Becker, Simon Bischof, Marcel Brand, Hans-Joachim Bungartz, Christian Eichler, Khalil Esper, Joachim Falk, Nael Fasfous, Felix Freiling, Andreas Fried, Michael Gerndt, Michael Glaß, Jeferson Gonzalez, Frank Hannig, Christian Heidorn, Jörg Henkel, Andreas Herkersdorf, Benedict Herzog, Jophin John, Timo Hönig, Felix Hundhausen, Heba Khdr, Tobias Langer, Oliver Lenke, Fabian Lesniak, Alexander Lindermayr, Alexandra Listl, Sebastian Maier, Nicole Megow, Marcel Mettler, Daniel Müller-Gritschneder, Hassan Nassar, Fabian Paus, Alexander Pöppl, Behnaz Pourmohseni, Jonas Rabenstein, Phillip Raffeck, Martin Rapp, Santiago Narváez Rivas, Mark Sagi, Franziska Schirrmacher, Ulf Schlichtmann, Florian Schmaus, Wolfgang Schröder-Preikschat, Tobias Schwarzer, Mohammed Bakr Sikal, Bertrand Simon, Gregor Snelting, Jan Spieck, Akshay Srivatsa, Walter Stechele, Jürgen Teich, Furkan Turan, Isaías A. Comprés Ureña, Ingrid Verbauwhede, Dominik Walter, Thomas Wild, Stefan Wildermann, Mario Wille, Michael Witterauf, and Li Zhang. Invasive Computing. FAU University Press, August 16, 2022. [ DOI ]
[2] Zehang Weng, Fabian Paus, Anastasiia Varava, Hang Yin, Tamim Asfour, and Danica Kragic. Graph-based task-specific prediction models for interactions between deformable and rigid objects. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 5453–5460, 2021.
[3] Nael Fasfous, Manoj-Rohit Vemparala, Alexander Frickenstein, Mohamed Badawy, Felix Hundhausen, Julian Höfer, Naveen-Shankar Nagaraja, Christian Unger, Hans-Jörg Vögel, Jürgen Becker, Tamim Asfour, and Walter Stechele. Binary-LoRAX: Low-power and runtime adaptable XNOR classifier for semi-autonomous grasping with prosthetic hands. In International Conference on Robotics and Automation (ICRA), 2021. [ http ]
[4] Fabian Paus and Tamim Asfour. Probabilistic representation of objects and their support relations. In International Symposium on Experimental Robotics (ISER), 2020.
[5] Fabian Paus, Teng Huang, and Tamim Asfour. Predicting pushing action effects on spatial object relations by learning internal prediction models. In IEEE International Conference on Robotics and Automation (ICRA), pages 10584–10590, 2020.
[6] Felix Hundhausen, Denis Megerle, and Tamim Asfour. Resource-aware object classification and segmentation for semi-autonomous grasping with prosthetic hands. In IEEE/RAS International Conference on Humanoid Robots (Humanoids), pages 215–221, 2019.
[7] Akshay Srivatsa, Sven Rheindt, Dirk Gabriel, Thomas Wild, and Andreas Herkersdorf. CoD: Coherence-on-demand – runtime adaptable working set coherence for DSM-based manycore architectures. In Dionisios N. Pnevmatikatos, Maxime Pelcat, and Matthias Jung, editors, Embedded Computer Systems: Architectures, Modeling, and Simulation, pages 18–33, Cham, 2019. Springer International Publishing.
[8] Dirk Gabriel, Walter Stechele, and Stefan Wildermann. Resource-aware parameter tuning for real-time applications. In Martin Schoeberl, Christian Hochberger, Sascha Uhrig, Jürgen Brehm, and Thilo Pionteck, editors, Architecture of Computing Systems – ARCS 2019, pages 45–55. Springer International Publishing, 2019. [ DOI ]
[9] Daniel Krauß, Philipp Andelfinger, Fabian Paus, Nikolaus Vahrenkamp, and Tamim Asfour. Evaluating and optimizing component-based robot architectures using network simulation. In Winter Simulation Conference, Gothenburg, Sweden, December 2018.
[10] Rainer Kartmann, Fabian Paus, Markus Grotz, and Tamim Asfour. Extraction of physically plausible support relations to predict and validate manipulation action effects. IEEE Robotics and Automation Letters (RA-L), 3(4):3991–3998, October 2018. [ DOI ]
[11] Johny Paul. Image Processing on Heterogeneous Multiprocessor System-on-Chip using Resource-aware Programming. Dissertation, Technische Universität München, July 25, 2017.
[12] Fabian Paus, Peter Kaiser, Nikolaus Vahrenkamp, and Tamim Asfour. A combined approach for robot placement and coverage path planning for mobile manipulation. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 6285–6292, 2017.
[13] Jürgen Teich. Invasive computing – editorial. it – Information Technology, 58(6):263–265, November 24, 2016. [ DOI ]
[14] Stefan Wildermann, Michael Bader, Lars Bauer, Marvin Damschen, Dirk Gabriel, Michael Gerndt, Michael Glaß, Jörg Henkel, Johny Paul, Alexander Pöppl, Sascha Roloff, Tobias Schwarzer, Gregor Snelting, Walter Stechele, Jürgen Teich, Andreas Weichslgartner, and Andreas Zwinkau. Invasive computing for timing-predictable stream processing on MPSoCs. it – Information Technology, 58(6):267–280, September 30, 2016. [ DOI ]
[15] Manfred Kröhnert. A Contribution to Resource-Aware Architectures for Humanoid Robots. Dissertation, High Performance Humanoid Technologies (H2T), KIT-Faculty of Informatics, Karlsruhe Institute of Technology (KIT), Germany, July 22, 2016.
[16] Manfred Kröhnert, Raphael Grimm, Nikolaus Vahrenkamp, and Tamim Asfour. Resource-Aware Motion Planning. In IEEE International Conference on Robotics and Automation (ICRA), pages 32–39, May 2016. [ DOI ]
[17] Mirko Wächter, Simon Ottenhaus, Manfred Kröhnert, Nikolaus Vahrenkamp, and Tamim Asfour. The ArmarX Statechart Concept: Graphical Programming of Robot Behaviour. Frontiers in Robotics and AI, 3(33), 2016. [ DOI ]
[18] Johny Paul, Walter Stechele, Benjamin Oechslein, Christoph Erhardt, Jens Schedel, Daniel Lohmann, Wolfgang Schröder-Preikschat, Manfred Kröhnert, Tamim Asfour, Éricles R. Sousa, Vahid Lari, Frank Hannig, Jürgen Teich, Artjom Grudnitsky, Lars Bauer, and Jörg Henkel. Resource-awareness on heterogeneous MPSoCs for image processing. Journal of Systems Architecture, 61(10):668–680, November 6, 2015. [ DOI ]
[19] N. Vahrenkamp, M. Wächter, M. Kröhnert, K. Welke, and T. Asfour. The robot software framework ArmarX. it – Information Technology, 57(2):99–111, 2015.
[20] Johny Paul, Benjamin Oechslein, Christoph Erhardt, Jens Schedel, Manfred Kröhnert, Daniel Lohmann, Walter Stechele, Tamim Asfour, and Wolfgang Schröder-Preikschat. Self-adaptive corner detection on MPSoC through resource-aware programming. Journal of Systems Architecture, 2015. [ DOI ]
[21] Johny Paul, Walter Stechele, Éricles R. Sousa, Vahid Lari, Frank Hannig, Jürgen Teich, Manfred Kröhnert, and Tamim Asfour. Self-adaptive Harris corner detector on heterogeneous many-core processor. In Proceedings of the Conference on Design and Architectures for Signal and Image Processing (DASIP). IEEE, October 2014. [ DOI ]
[22] Manfred Kröhnert, Nikolaus Vahrenkamp, Johny Paul, Walter Stechele, and Tamim Asfour. Resource prediction for humanoid robots. In Proceedings of the First Workshop on Resource Awareness and Adaptivity in Multi-Core Computing (Racing 2014), pages 22–28, May 2014. [ arXiv ]
[23] Éricles Sousa, Vahid Lari, Johny Paul, Frank Hannig, Jürgen Teich, and Walter Stechele. Resource-aware computer vision application on heterogeneous multi-tile architecture. Hardware and Software Demo at the University Booth at Design, Automation and Test in Europe (DATE), Dresden, Germany, March 2014.
[24] Johny Paul, Walter Stechele, Manfred Kröhnert, Tamim Asfour, Benjamin Oechslein, Christoph Erhardt, Jens Schedel, Daniel Lohmann, and Wolfgang Schröder-Preikschat. Resource-aware Harris corner detection based on adaptive pruning. In Proceedings of the Conference on Architecture of Computing Systems (ARCS), number 8350 in LNCS, pages 1–12. Springer, February 2014. [ DOI ]
[25] Johny Paul, Walter Stechele, Manfred Kröhnert, Tamim Asfour, Benjamin Oechslein, Christoph Erhardt, Jens Schedel, Daniel Lohmann, and Wolfgang Schröder-Preikschat. A resource-aware nearest neighbor search algorithm for K-dimensional trees. In Proceedings of the Conference on Design and Architectures for Signal and Image Processing (DASIP), pages 80–87. IEEE Computer Society Press, October 2013.
[26] Éricles Sousa, Alexandru Tanase, Vahid Lari, Frank Hannig, Jürgen Teich, Johny Paul, Walter Stechele, Manfred Kröhnert, and Tamim Asfour. Acceleration of optical flow computations on tightly-coupled processor arrays. In Proceedings of the 25th Workshop on Parallel Systems and Algorithms (PARS), volume 30 of Mitteilungen – Gesellschaft für Informatik e. V., Parallel-Algorithmen und Rechnerstrukturen, pages 80–89. Gesellschaft für Informatik e.V., April 2013.
[27] Kai Welke, Nikolaus Vahrenkamp, Mirko Wächter, Manfred Kröhnert, and Tamim Asfour. The ArmarX framework – supporting high-level robot programming through state disclosure. In Informatik 2013 Workshop on Robot Control Architectures, 2013.
[28] David Schiebener, Julian Schill, and Tamim Asfour. Discovery, segmentation and reactive grasping of unknown objects. In 12th IEEE-RAS International Conference on Humanoid Robots (Humanoids), pages 71–77, November 2012.
[29] Johny Paul, Andreas Laika, Christopher Claus, Walter Stechele, Adam El Sayed Auf, and Erik Maehle. Real-time motion detection based on SW/HW-codesign for walking rescue robots. Journal of Real-Time Image Processing, pages 1–16, 2012. [ DOI ]
[30] Johny Paul, Walter Stechele, Manfred Kröhnert, Tamim Asfour, and Rüdiger Dillmann. Invasive computing for robotic vision. In Proceedings of the 17th Asia and South Pacific Design Automation Conference (ASP-DAC), pages 207–212, January 2012. [ DOI ]
[31] Jürgen Teich, Jörg Henkel, Andreas Herkersdorf, Doris Schmitt-Landsiedel, Wolfgang Schröder-Preikschat, and Gregor Snelting. Invasive computing: An overview. In Michael Hübner and Jürgen Becker, editors, Multiprocessor System-on-Chip – Hardware Design and Tool Integration, pages 241–268. Springer, Berlin, Heidelberg, 2011. [ DOI ]
[32] Jürgen Teich. Invasive algorithms and architectures. it – Information Technology, 50(5):300–310, 2008.