• driveblocks: Mapless Autonomy

    2021 - Present

    Summary

    driveblocks is a company I co-founded together with a group of highly skilled professionals, primarily PhD students from the Technical University of Munich, who gained valuable experience and knowledge through their participation in the Indy Autonomous Challenge.

    One of our main areas of focus is the use of sensors to model the environment for autonomous vehicles, rather than relying on high definition maps. This approach allows a vehicle to navigate even in areas where high quality maps may not be available, making our technology more flexible and adaptable to a wider range of situations.

    I am responsible for leading the development of our sensor-based environment modeling system.

    Details

    As a member of the team at driveblocks, a company focused on the development of autonomous vehicle technology, I have a number of responsibilities on both the organisational and the technical side.

    On the organisational side, I have been involved in setting up the company as a limited liability company (GmbH) in Germany, and have contributed to building our internal structure and defining our business model. Our team is made up of engineers with a variety of expertise, and we work closely together to tackle the challenges of developing self-driving technology. Our business model revolves around providing our modular autonomous vehicle software to other companies in the industry. We also work to maintain good media and investor relations, sharing our progress and insights with a wider audience and seeking out investors who share our passion for autonomous vehicle technology.

    On the technical side, one of my main tasks is leading the development of our environment model, which fuses information from multiple sensors to represent the environment in which the vehicle is operating. This model is updated continuously in real time as the vehicle moves, allowing it to make informed decisions about how to navigate and avoid obstacles. Figure 1 shows the feeds of two cameras mounted to a truck, with white dots representing camera features detected by a neural network. Using a stochastic filter, we fit patches (such as the green patches in the image) so that they best align with the features in all of the sensor streams. Once the filters have converged, the resulting driving corridors (Figure 2) are established and published within the robot’s network. These corridors can then be used for trajectory planning by the autonomous vehicle.
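    A minimal sketch of the patch-fitting idea, assuming a particle-filter-style stochastic filter and a single patch parameter (the lateral offset of a lane boundary); all numbers and the one-parameter setup are hypothetical simplifications for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated camera features along a lane boundary at lateral offset 1.5 m,
# observed with noise (hypothetical values for illustration).
true_offset = 1.5
features_y = true_offset + rng.normal(0.0, 0.1, size=50)

# Particle filter over the patch parameter (lateral offset of the boundary).
particles = rng.uniform(-3.0, 3.0, size=200)
for _ in range(10):
    # Likelihood: how well each candidate offset explains the detected features.
    residuals = features_y[None, :] - particles[:, None]
    log_w = -0.5 * np.sum((residuals / 0.1) ** 2, axis=1)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # Resample proportionally to the weights, then jitter for the next step.
    particles = rng.choice(particles, size=200, p=w)
    particles += rng.normal(0.0, 0.02, size=200)

estimate = particles.mean()
```

    In the real system the patch parameters are multidimensional and the likelihood combines features from all sensor streams; the sketch only shows the converge-by-resampling mechanism.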


    Figure 1: Two streams of cameras mounted to a truck.


    Figure 2: Resulting driving corridor model.

  • Indy Autonomous Challenge

    2019 - 2021

    Summary

    The Indy Autonomous Challenge is an autonomous racing competition regarded as a successor to the DARPA Grand Challenges. University teams from around the world develop the software for an autonomous racing vehicle and compete against each other at high speeds with multiple vehicles on track.

    At the Technical University of Munich, we expanded the Roborace team and worked on the challenge with 15 PhD students, each owning core competencies and supervised modules. With an average speed of 218.8 km/h, we took the win on October 23, 2021 at the Indianapolis Motor Speedway (IMS), along with the grand $1,000,000 prize.

    Primarily, I was responsible for the initial foundations of the dynamic trajectory planning (see “ROBORACE: Autonomous Motorsport”) and contributed to the safety concept.

    Details

    The challenge opened with registration in November 2019. Our team, which at that time consisted of 7 PhD students, convinced the jury in February 2020 with a white paper outlining our previous experience. Initially, 39 university teams worldwide took up the challenge (accepted white papers). Leading up to the Simulation Race on June 30, 2021, the teams further developed their software and competed against each other in self-organized sessions or at organized hackathons. During that time, we expanded the Roborace team and worked on the challenge with 15 PhD students, each owning core competencies and supervised modules. In the Simulation Race itself – a virtual head-to-head race – our TUM team came in second, winning a $50,000 prize.


    Figure 1: TUM Dallara IL-15 at the Indianapolis Motor Speedway (IMS).

    In September 2021, our real vehicle was completed, allowing us to proceed with real-world testing for the final race. Each team used an identical Dallara IL-15 IAC driverless race car; only the software was developed individually by each team. The final race took place on October 23, 2021 at the Indianapolis Motor Speedway (IMS) in the USA, with 9 teams having qualified. With an average speed of 218.8 km/h, we took the victory and the grand $1,000,000 prize.


    Figure 2: Handover of the $1 Million check.


    Figure 3: Group picture of the winning team.

    Media Coverage (Excerpt)

  • ROBORACE: Autonomous Motorsport

    2018 - 2020

    Summary

    Roborace was founded with the goal of being the first racing series for autonomous electric racing vehicles. The participating teams work on identical vehicles and develop all the software for the autonomous vehicle. As a team from the Technical University of Munich, we operated a vehicle with seven PhD students from different institutes.

    The tests were primarily carried out in England on an abandoned airfield (Figure 1), where any track layout could be marked out with cones. The actual races were then held on racetracks in various countries.

    The goal was to drive at the dynamic limit, with the PhDs focusing on their respective core competencies (Figure 2). My focus was on LIDAR localization, dynamic trajectory planning and later on safety assessment.


    Figure 1: Tests on Roborace Devbot 1 on an abandoned airfield in the UK.


    Figure 2: Track site analysis and development.

    LIDAR localization

    LIDAR localization for autonomous racing vehicles poses special challenges. In a scientific paper, I presented several improvements. These include the targeted selection of laser beams for localization: the conventional approach with equidistant sampling produces a lot of redundant information to the sides of the vehicle on racetracks lined with barriers, but generates little information in front of and behind the vehicle. Furthermore, the motion update in the underlying filter must allow for less lateral scatter for racing vehicles, in accordance with the dynamic limits at high speeds.
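    The beam-selection idea can be illustrated with a short sketch (beam counts, the weighting function and its floor value are hypothetical; the selection scheme in the paper may differ): beams are sampled with a probability that favors the directions ahead of and behind the vehicle rather than equidistantly.

```python
import numpy as np

n_total, n_keep = 360, 90
angles = np.linspace(0.0, 2.0 * np.pi, n_total, endpoint=False)

# Importance weight: beams pointing forward/backward (cos ~ +-1) carry more
# new information on a barrier-lined track than lateral beams (cos ~ 0).
weights = np.abs(np.cos(angles)) + 0.05   # small floor keeps some side beams
weights /= weights.sum()

rng = np.random.default_rng(0)
selected = rng.choice(n_total, size=n_keep, replace=False, p=weights)
```

    With this weighting, most of the retained beams point along the track direction, where new obstacles and barrier geometry appear first.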

    Dynamic trajectory planning

    Dynamic trajectory planning involves following the global racing line wherever possible and reacting appropriately to dynamic changes (e.g. other vehicles). For this purpose, I was the core developer of a multilayer graph-based approach that evaluates many edges at runtime with respect to their cost optimality and selects the best variant. The dynamic limits of the vehicle and the iterative feasibility/solvability of the respective states were also taken into account. The planner was evaluated on the real vehicle at speeds above 200 km/h and presented in a scientific paper. In addition, the core of the source code is published on GitHub. A video of the performance at the Monteblanco track in Spain can be found here (Figure 3).
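    A minimal sketch of the multilayer graph idea, assuming a simple quadratic edge cost (deviation from the racing line at offset 0 plus lateral jumps) and a single blocked node as obstacle; layer count, offsets, and cost weights are all hypothetical:

```python
import numpy as np

n_layers = 5
offsets = np.linspace(-2.0, 2.0, 5)   # lateral samples per graph layer

def edge_cost(o1, o2, blocked):
    # Penalize lateral jumps (a crude dynamics proxy) and deviation from the
    # racing line at offset 0; blocked offsets (obstacles) are infeasible.
    if o2 in blocked:
        return np.inf
    return (o2 - o1) ** 2 + 0.5 * o2 ** 2

# A hypothetical obstacle occupies the racing line in layer 2.
blocked_per_layer = {2: {0.0}}

# Dynamic programming over layers: cost[i] = best cost to reach offsets[i].
cost = 0.5 * offsets ** 2
choice = []
for layer in range(1, n_layers):
    blocked = blocked_per_layer.get(layer, set())
    new_cost = np.full_like(cost, np.inf)
    back = np.zeros(len(offsets), dtype=int)
    for j, o2 in enumerate(offsets):
        cands = [cost[i] + edge_cost(o1, o2, blocked)
                 for i, o1 in enumerate(offsets)]
        back[j] = int(np.argmin(cands))
        new_cost[j] = cands[back[j]]
    choice.append(back)
    cost = new_cost

# Backtrack the minimum-cost path through the graph.
path = [int(np.argmin(cost))]
for back in reversed(choice):
    path.append(int(back[path[-1]]))
path.reverse()
best_offsets = [offsets[i] for i in path]
```

    The resulting path swerves around the blocked node and returns to the racing line; the real planner additionally checks vehicle dynamic limits and state feasibility along each edge.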


    Figure 3: Autonomous multi-vehicle race in Monteblanco, Spain.

    Safety assessment

    In the last phase, my focus was on online safety assessment for autonomous vehicles – a topic that gained increased attention due to a crash that had occurred earlier. The goal in this context is to safeguard complex, online-learning algorithms in accordance with applicable regulations. For this purpose, an online verification for a trajectory planner was developed and evaluated with real data from the races as well as generated error-injected scenarios. The developed approach was published in a conference paper and a journal article and is available to the general public on GitHub.
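    A minimal illustration of online trajectory verification, assuming a simplified friction-circle (g-g diagram) check on combined accelerations; the published verification covers far more conditions, and the function name, sample format, and limit value here are hypothetical:

```python
import math

def verify_trajectory(points, a_max=10.0):
    """Online check: reject a trajectory whose combined longitudinal and
    lateral acceleration exceeds the friction limit (simplified g-g circle).
    `points` are (speed [m/s], curvature [1/m], a_long [m/s^2]) samples."""
    for v, kappa, a_long in points:
        a_lat = v * v * abs(kappa)          # lateral acceleration from curvature
        if math.hypot(a_long, a_lat) > a_max:
            return False                    # planner must fall back to a safe plan
    return True

safe = verify_trajectory([(50.0, 0.002, 2.0), (60.0, 0.001, 1.0)])
unsafe = verify_trajectory([(80.0, 0.01, 3.0)])
```

    Such a check runs on every planned trajectory before it is handed to the controller, so an unsafe plan can be rejected in favor of a verified fallback.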

    Media Coverage (Excerpt)

    • The German television broadcaster BR covered the first development steps in Germany and England: video documentary (German)
    • A race with two cars in Monteblanco, Spain was documented in three episodes on YouTube: video 1, video 2, video 3.
  • Autonomous 1:10 Vehicle

    2015 - 2019

    2015

    In a first phase, the basic commissioning of the vehicle based on the ROS middleware was carried out in 2015 by a team of three master's students. The goal was autonomous navigation on an indoor circuit, as well as autonomous LIDAR-based detection of, and trajectory planning through, a course of cones. My main tasks were LIDAR-based localization, waypoint and environmental feature recognition, and path planning.

    2016 – 2018

    In a second phase, as a PhD student at the Chair of Control Engineering, I further optimized the software stack – in particular with a more detailed bicycle vehicle model, to be able to plan more complex maneuvers in tighter spaces. In parallel, simulations of the vehicle were performed in Gazebo.
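    The kinematic bicycle model used for such planning can be sketched in a few lines (the wheelbase value for a 1:10 vehicle and the maneuver parameters are hypothetical):

```python
import math

def bicycle_step(x, y, yaw, v, steer, dt, wheelbase=0.26):
    """One Euler integration step of a kinematic bicycle model
    (hypothetical wheelbase for a 1:10-scale vehicle)."""
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += v / wheelbase * math.tan(steer) * dt
    return x, y, yaw

# Simulate one second of a constant left turn at 1 m/s.
state = (0.0, 0.0, 0.0)
for _ in range(100):
    state = bicycle_step(*state, v=1.0, steer=0.2, dt=0.01)
```

    Rolling the model forward like this lets a planner predict where a candidate steering sequence will take the vehicle before committing to it.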

    However, the main goal during this time was introspective fault detection: previously unknown faults in the drivetrain or the sensor system were to be detected automatically and appropriate countermeasures derived. In this context, different approaches were implemented, including fault detection with parity-space methods, principal component analysis (PCA) and bond graphs (Figure 1). The most promising methods, however, came from the field of machine learning. For example, a random forest was used for outlier detection; it was not limited to detecting new/unknown data, but was also capable of forming classes for individual recurring fault types (Figure 2).
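    The parity-space idea mentioned above can be illustrated with a minimal sketch for three redundant wheel-speed sensors; the sensor setup, values, and detection threshold are hypothetical:

```python
import numpy as np

def parity_residual(y):
    """Project three redundant measurements of the same quantity onto the
    parity space of H = [1, 1, 1]^T (rows orthogonal to H); the residual
    stays near zero while all sensors agree."""
    V = np.array([[1.0, -1.0, 0.0],
                  [1.0, 0.0, -1.0]]) / np.sqrt(2.0)
    return V @ y

healthy = np.array([10.0, 10.05, 9.98])
faulty = np.array([10.0, 10.05, 14.0])   # sensor 3 drifts away

r_ok = np.linalg.norm(parity_residual(healthy))
r_bad = np.linalg.norm(parity_residual(faulty))
fault_detected = r_bad > 0.5 > r_ok      # hypothetical threshold
```

    A residual that exceeds the threshold flags a fault without any model of the specific failure; the machine-learning methods go further by also classifying which fault occurred.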


    Figure 1: Bond graph representation of an autonomous vehicle drive train.


    Figure 2: Fault detection and classification targeting unforeseen faults.

    2018 – 2019

    In a third phase, as a PhD student at the Department of Automotive Engineering, I implemented a LIDAR localization algorithm (published paper) and a trajectory planning algorithm (published GitHub repository, published paper). The vehicle was able to drive fast autonomous laps indoors through different parts of buildings. The same algorithms were later used on a real racing vehicle (see project “ROBORACE: Autonomous Motorsport”).

  • Close Proximity Human-Robot-Collaboration

    2015 - 2016

    Summary

    Development of a framework that allows a robot to choose its actions with respect to a human co-worker when collaborating in close vicinity. The proposed approach models a human-robot collaboration scenario as an iterative game and selects the action strategies for the human-robot team by finding the Nash equilibrium of the game-theoretic state.
    – honored with the VDI Award 2016.

    Details

    In order to improve human-machine interaction, the interactions of two people in a joint construction task (here: assembly of a fixed, pre-planned structure) were first observed. For this purpose, 3D motion tracking was used to record upper-body, hand and head movements (Figure 1). The recorded data (Figure 2) was then analyzed to learn human behavior. A human cooperates best with a robot when he or she can anticipate the robot’s movements. For this reason, the robot uses dynamic movement primitives (DMPs) to derive its own movements to arbitrary target points based on recorded human training data (Figure 3). The resulting human-like movements make it easier for the cooperating human to anticipate what the robot will do.
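    The DMP idea can be sketched as a goal-directed spring-damper system plus a forcing term; in the real system the forcing term is learned from the recorded human motion, whereas here it is a zero placeholder and all gains are hypothetical:

```python
import numpy as np

def dmp_rollout(start, goal, forcing, dt=0.01, alpha=25.0, beta=6.25):
    """Roll out one DMP dimension: a critically damped spring-damper pulls
    the state toward the goal, while `forcing` (learned from demonstrations
    in the real system) shapes the path taken."""
    y, yd = start, 0.0
    traj = []
    for f in forcing:
        ydd = alpha * (beta * (goal - y) - yd) + f
        yd += ydd * dt
        y += yd * dt
        traj.append(y)
    return traj

forcing = np.zeros(200)              # zero forcing -> pure goal attraction
traj = dmp_rollout(0.0, 1.0, forcing)
```

    Because the same learned forcing term can be replayed toward any new goal, the robot reproduces human-like motion shapes to arbitrary target points.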


    Figure 1: Motion tracking of human-human interaction.

    The robot continuously uses a cost function to assess the efficiency of individual strategies. Using game theory, it then selects the action that brings the most benefit to the human-robot team. In this way, for example, interference with the human can be avoided. The efficiency of the selected approach was validated in a user study against alternative action-selection strategies.
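    The game-theoretic selection can be illustrated with a minimal pure-strategy Nash equilibrium search on hypothetical 2x2 payoff matrices (the real framework plays an iterative game with costs estimated online):

```python
import numpy as np

# Hypothetical payoffs for a human-robot team: rows = robot actions,
# columns = human actions; higher is better for each player.
robot_payoff = np.array([[3.0, 1.0],
                         [2.0, 4.0]])
human_payoff = np.array([[3.0, 2.0],
                         [1.0, 4.0]])

def pure_nash(A, B):
    """A cell (i, j) is a pure-strategy Nash equilibrium if neither player
    can gain by unilaterally deviating from it."""
    eqs = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                eqs.append((i, j))
    return eqs

equilibria = pure_nash(robot_payoff, human_payoff)
# Among multiple equilibria, pick the one best for the team as a whole.
best = max(equilibria, key=lambda ij: robot_payoff[ij] + human_payoff[ij])
```

    Selecting the equilibrium with the highest joint payoff is one simple way to resolve ties in favor of the team, mirroring the cost-function-based selection described above.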


    Figure 2: Wrist motion during recorded human-human interaction.



    Figure 3: Robot generated motion primitives based on trained human motion patterns.

  • Palletizing Robot Performance Prediction

    2013 - 2014

    Summary

    Development and evaluation of a tool for predicting the output performance of a palletizing robot. With this application, it is now possible to predict a machine's output performance before actually constructing it. Furthermore, the tool helps to identify performance-limiting components in the overall system. Based on the prediction's outcome, it can be determined whether customer requirements are implementable.
    – honored with the VDE Award 2014.

    Details

    Due to the industrial competition and confidentiality requirements, further details cannot be provided.