The CVPR 2024 Vision and Language for Autonomous Driving and Robotics Workshop (https://vision-language-adr.github.io) centers on multimodal approaches to autonomous driving and robotics, with a particular focus on vision-based methods.
This workshop is intended to:
- Explore areas in robotics where vision and language methods could help
- Encourage communication and collaboration between the vision and language communities working on autonomous agents
- Provide an opportunity for the CVPR community to discuss this exciting and growing area of multimodal representations
We welcome paper submissions on all topics related to vision and language for autonomous driving and robotics, including but not limited to:
- Vision and language for autonomous driving
- Language-driven perception
- Language-driven sensor and traffic simulation
- Vision and language representation learning
- Multimodal motion prediction and planning for robotics
- New datasets and metrics for multimodal learning
- Safety: Ensuring that systems can correctly interpret and act upon visual and linguistic inputs in real-world situations to prevent accidents
- Language agents for robotics
- Language-based scene understanding for driving scenarios
- Multimodal fusion for end-to-end autonomous driving
- Large language models (LLMs) as task planners
- Other applications of LLMs to driving and robotics