
track-detection

Codebase for "Weighted Branch Aggregation based Deep Learning Model for Track Detection in Autonomous Racing", published at ICLR 2024, Tiny Papers Track.

Our code is divided into two branches: FOV and webacnn. FOV contains all code related to generating the upper and lower bounds of the Field of Perception (FoP), including the annotated data files. Webacnn contains code related to generating the lane masks used for lane detection.

Dataset

The dataset used to train the webacnn model can be found under data/ in this repository. This is a custom dataset that was created by sampling online racing videos. The sampled frames are all compiled into a single dataset that is divided into train-test splits. The videos used can be found at the following links: Video 1 Video 2 Video 3 Video 4
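The compilation step described above boils down to pooling all sampled frames and partitioning them into train and test subsets. A minimal sketch of such a split, with hypothetical frame filenames (the actual split ratio and file layout under data/ may differ):

```python
import random

def train_test_split(frames, test_fraction=0.2, seed=0):
    """Shuffle frame identifiers and split them into train/test lists.

    A fixed seed keeps the split reproducible across runs.
    """
    rng = random.Random(seed)
    shuffled = list(frames)
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

# Hypothetical frame filenames standing in for frames sampled from the videos.
frames = [f"frame_{i:04d}.png" for i in range(100)]
train, test = train_test_split(frames)
print(len(train), len(test))  # 80 20
```

Splitting by shuffled frame identifiers rather than by slicing the video in order avoids putting near-identical consecutive frames on both sides of the split boundary.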

Environment

To reproduce our environment:

conda env create -f environment.yml

Run

To train and test webacnn on our dataset:

cd webacnn
python main.py

Run FoP model

The annotated train, test, and validation data splits for FoP are available at track-detection/FOV/data. The images were annotated manually using Roboflow in a YOLO-trainable format. Each label contains (class, x_center, y_center, width, height) of the annotated rectangle. The upper and lower bounds for the FoP are (y_center - height/2, y_center + height/2), and these are used to train the DNN.
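The bound computation above is a one-line transformation of each YOLO label. A minimal sketch, assuming standard normalized YOLO label lines (the function name and example values are illustrative, not from the repo):

```python
def fop_bounds(label_line):
    """Parse one YOLO label line '<class> <x_c> <y_c> <w> <h>' (values
    normalized to [0, 1]) and return the FoP (upper, lower) bounds
    (y_center - height/2, y_center + height/2)."""
    _cls, _x_c, y_c, _w, h = label_line.split()
    y_c, h = float(y_c), float(h)
    return y_c - h / 2, y_c + h / 2

# Example label: class 0, centered at y = 0.5 with height 0.25.
upper, lower = fop_bounds("0 0.5 0.5 0.9 0.25")
print(upper, lower)  # 0.375 0.625
```

Note that in normalized image coordinates y grows downward, so the "upper" bound is the numerically smaller value.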

To train and test the FoP model:

cd FOV
python main.py
