CIVIL459 - Final Project - EPFL Tandem Race 2022

1 Introduction

In this project, you will apply the concepts from the lectures and exercise sessions to a challenging real-world problem. You will develop a framework that can detect and track a specific target person. The end goal is to win the EPFL tandem race 2022! (Check out media coverage of the previous race here.)

In most machine learning projects, a data scientist starts from popular architectures and adapts them to their own task and data. In this project, we will build an intelligent agent for the tandem race using state-of-the-art methods.

Group formation. For this project, you will work in a team of 4 students. If you are still looking for teammates, please use the discussion forum on Moodle. A good data science team combines a diverse set of skills and benefits greatly from interdisciplinary backgrounds.

2 Milestones

The project has three milestones, which will walk you through building the final model step by step. You are highly encouraged to bring your knowledge and creativity to improve the performance of your models. Note that this race is different from the 2019 race, so please read the guidelines carefully.

2.1 Milestone 1 - Detection (Evaluation Date: April 28)
The goal of this milestone is to detect a specific person, whom you select using a scheme of your own choosing, e.g., the first person observed, the person making a specific gesture, or a person wearing particular clothes. You can use any off-the-shelf detector (pedestrian detector, body-pose detector, ...). Note that the code should have sufficient inference speed to run the demo in real time. A minimal detection sketch is given after the tips below.

Deliverable: You will give a live demo of the system on a laptop webcam to a TA. The demo should run on Google Colab. Each team member should know how the code works, as questions will be asked about it.

Tips:

•    This link provides a detector that runs on a webcam.

•    To increase the training speed, you can fine-tune existing models instead of training from scratch.
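
As a rough starting point, the sketch below runs a pretrained torchvision person detector on webcam frames with OpenCV. It is a minimal local example, not the required solution: the choice of detector, the 0.8 confidence threshold, and the target-selection rule are all placeholder assumptions, and on Google Colab webcam capture needs a JavaScript-based snippet instead of cv2.VideoCapture.

    # Minimal sketch: person detection on webcam frames with a pretrained
    # torchvision detector. Model choice and thresholds are illustrative.
    import cv2
    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn
    from torchvision.transforms.functional import to_tensor

    model = fasterrcnn_resnet50_fpn(pretrained=True).eval()

    cap = cv2.VideoCapture(0)  # local webcam; Colab needs a JS-based capture instead
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            pred = model([to_tensor(rgb)])[0]
        for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
            if label.item() == 1 and score.item() > 0.8:  # COCO class 1 = person
                x1, y1, x2, y2 = box.int().tolist()
                cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.imshow("detector", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

A heavier model such as Faster R-CNN may be too slow for real-time use on a laptop CPU, so consider lighter detectors or fine-tuned variants when you profile your demo.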

2.2 Milestone 2 - Detection + Tracking (Evaluation Date: May 12)
Develop a tracker that follows a person of interest. This means that if the person of interest moves, or other people enter the frame, the system should still follow the person of interest. If the person of interest leaves the frame, no one should be detected. Note that the method should select the person of interest according to the scheme decided in Milestone 1; for instance, the first person seen is selected as the person of interest. A toy tracking baseline is sketched after the tips below.

Deliverable: As in the previous milestone, you will give a live demo of the system on a laptop webcam to a TA. We will assess the robustness of the model in different situations, e.g., with multiple people in the camera view or when the person of interest leaves the view. The demo should run on Google Colab. Each team member should know how the code works, as questions will be asked about it.

Tips:

•    Start with simpler tracking systems first (e.g., DeepSORT, ...).

•    Check Re-ID methods as well.
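
Before reaching for DeepSORT or Re-ID models, it can help to see how little is needed for a first baseline. The sketch below locks onto the first person detected and follows them by bounding-box overlap; it is a toy example, where boxes is assumed to come from your Milestone 1 detector and the IoU threshold is an illustrative value.

    # Minimal sketch: lock onto the first detected person and keep following
    # them by bounding-box overlap (IoU). Not DeepSORT; just a baseline.

    def iou(a, b):
        """Intersection over union of two [x1, y1, x2, y2] boxes."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-6)

    class TargetTracker:
        def __init__(self, iou_threshold=0.3):
            self.target_box = None          # last known box of the person of interest
            self.iou_threshold = iou_threshold

        def update(self, boxes):
            """boxes: list of person detections in the current frame."""
            if self.target_box is None:     # selection rule: first person seen
                if boxes:
                    self.target_box = boxes[0]
                return self.target_box
            # keep the detection that overlaps most with the previous target box
            best = max(boxes, key=lambda b: iou(b, self.target_box), default=None)
            if best is not None and iou(best, self.target_box) > self.iou_threshold:
                self.target_box = best
                return best
            return None                     # target left the frame: detect no one

A pure IoU tracker loses the target under occlusion or fast motion, which is exactly where appearance-based Re-ID features become useful.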

2.3 Milestone 3 - Tandem Race (June 2)
In the final milestone, you will deploy your models on the Loomo robots. You should check the performance of your method on the robot and try to improve it. Note that during the race the environment is no longer controlled, so many factors can change, such as lighting, the number of people, people's clothing, and so on. We will provide more information on robot deployment and race policies later on Moodle.

Deliverable: Teams will be graded on functionality (whether all deliverables work) and originality (whether the methods used are state-of-the-art). Bonus points will be given to the top 3 teams in the race. You should provide a short final report (no more than 2 pages) and the code with a README file.

Tips:

•    Data augmentation is important for robust performance (see the sketch after these tips).

•    The tracker can help compensate for missed detections.

•    Check the performance of the model on the robot under different environmental conditions.
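
As an example of the data-augmentation tip, the sketch below lists torchvision transforms that mimic race-day variation in lighting, blur, and camera tilt for image crops (e.g., Re-ID training). The parameter values are illustrative assumptions; detector fine-tuning would additionally need box-preserving augmentations.

    # Minimal sketch: augmentations that mimic uncontrolled race conditions.
    from torchvision import transforms

    train_transform = transforms.Compose([
        transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.3),  # lighting changes
        transforms.RandomHorizontalFlip(p=0.5),                                # viewpoint variation
        transforms.RandomRotation(degrees=10),                                 # camera tilt on the robot
        transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),              # motion blur
        transforms.ToTensor(),
    ])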

3 Resources

For this project, you can use Google Colab. Colab lets you run a notebook for up to 12 consecutive hours, so it is recommended that you save model checkpoints so you can reload them and continue training after the time limit (a minimal checkpointing sketch follows). If you need more resources, you can use the SCITAS clusters: we will give you access so you can run your code for longer and train bigger models. Instructions for using SCITAS will be posted on Moodle. Note that your model should run in real time during the evaluation.
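
A minimal checkpointing sketch, assuming a PyTorch model and optimizer and a mounted Google Drive folder (the path below is a placeholder):

    # Minimal sketch: save and restore training state so a run can resume
    # after Colab's session limit. Path, model, and optimizer are assumptions.
    import torch

    ckpt_path = "/content/drive/MyDrive/tandem_race/checkpoint.pth"

    def save_checkpoint(model, optimizer, epoch):
        torch.save({
            "epoch": epoch,
            "model_state": model.state_dict(),
            "optimizer_state": optimizer.state_dict(),
        }, ckpt_path)

    def load_checkpoint(model, optimizer):
        ckpt = torch.load(ckpt_path, map_location="cpu")
        model.load_state_dict(ckpt["model_state"])
        optimizer.load_state_dict(ckpt["optimizer_state"])
        return ckpt["epoch"] + 1  # epoch to resume from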
