Better training data for driving by using a simulator to guide realistic image synthesis
Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation
arXiv abstract: https://arxiv.org/abs/2111.07971
arXiv PDF: https://arxiv.org/pdf/2111.07971.pdf
Project page: https://nv-tlabs.github.io/simulation-strategies
Autonomous driving relies on a huge volume of real-world data that must be labeled to high precision.
Alternative solutions seek to exploit driving simulators that can generate large amounts of labeled data with a plethora of content variations.
However, a domain gap between synthetic and real data remains.
The paper proposes methods for both sampling synthetic data and training on it so that models transfer well to real-world data.
The approach is showcased on the bird's-eye-view vehicle segmentation task with multi-sensor data (cameras, lidar): camera-based and lidar-based segmentation models are trained on data sampled from the open-source CARLA simulator and evaluated on real-world data from the nuScenes dataset.
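The paper's own implementation is not excerpted here. As a rough illustration of the sim-to-real setup described above, the sketch below trains a toy segmentation network on labeled synthetic batches while a domain-adversarial term (a common domain-adaptation technique, not necessarily the paper's exact loss) pushes features to be invariant between simulated and real inputs. All module names, tensor shapes, and the loss weight are illustrative assumptions, and random tensors stand in for CARLA and nuScenes data loaders.

# Hypothetical sketch: supervised BEV vehicle segmentation on simulator data
# plus a domain-adversarial term for sim-to-real invariance (assumed setup,
# not the paper's released code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses gradients so the encoder
    learns features the domain classifier cannot separate."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class BEVSegmenter(nn.Module):
    """Toy encoder + segmentation head standing in for a camera/lidar backbone."""
    def __init__(self, in_ch=3, feat_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(feat_ch, 1, 1)          # vehicle vs. background
        self.domain_head = nn.Sequential(                  # sim vs. real classifier
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_ch, 1)
        )

    def forward(self, x, lambd=1.0):
        feats = self.encoder(x)
        seg_logits = self.seg_head(feats)
        dom_logits = self.domain_head(GradReverse.apply(feats, lambd))
        return seg_logits, dom_logits


model = BEVSegmenter()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    # Placeholder batches: simulator images with BEV labels, real images without labels.
    sim_imgs = torch.rand(4, 3, 64, 64)
    sim_bev_labels = (torch.rand(4, 1, 64, 64) > 0.9).float()
    real_imgs = torch.rand(4, 3, 64, 64)

    sim_seg, sim_dom = model(sim_imgs)
    _, real_dom = model(real_imgs)

    # Supervised segmentation loss uses simulator labels only.
    seg_loss = F.binary_cross_entropy_with_logits(sim_seg, sim_bev_labels)
    # Domain classifier tries to tell sim from real; the reversed gradient
    # drives the encoder toward domain-invariant features.
    dom_loss = F.binary_cross_entropy_with_logits(
        torch.cat([sim_dom, real_dom]),
        torch.cat([torch.zeros_like(sim_dom), torch.ones_like(real_dom)]),
    )
    loss = seg_loss + 0.1 * dom_loss  # 0.1 is an assumed weighting
    opt.zero_grad()
    loss.backward()
    opt.step()

In a real pipeline the toy encoder would be replaced by the actual camera or lidar BEV model and the placeholder tensors by CARLA and nuScenes data loaders; evaluation would then run on the real-world nuScenes split as described above.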
If you enjoyed this post, please like and share it using the buttons at the bottom!
Stay up to date. Subscribe to my posts https://morrislee1234.wixsite.com/website/contact
Web site with my other posts by category https://morrislee1234.wixsite.com/website