WorldLens
:earth_asia: WorldLens: Full-Spectrum Evaluations of Driving World Models in Real World
English | 简体中文
:earth_asia: WorldBench Team
:grey_question: Is your driving world model an all-around player?
- This work presents WorldLens, a unified benchmark encompassing evaluations on $^1$Generation, $^2$Reconstruction, $^3$Action-Following, $^4$Downstream Task, and $^5$Human Preference, across a total of 24 dimensions spanning visual realism, geometric consistency, functional reliability, and perceptual alignment.
- We observe that no single model dominates across all axes, highlighting the need for balanced progress toward physically and behaviorally realistic world modeling.
- For additional visual examples, kindly refer to our :earth_asia: Project Page.
:books: Citation
If you find this work helpful for your research, please consider citing our papers:
```bibtex
@article{worldlens,
  title   = {{WorldLens}: Full-Spectrum Evaluations of Driving World Models in Real World},
  author  = {Ao Liang and Lingdong Kong and Tianyi Yan and Hongsi Liu and Wesley Yang and Ziqi Huang and Wei Yin and Jialong Zuo and Yixuan Hu and Dekai Zhu and Dongyue Lu and Youquan Liu and Guangfeng Jiang and Linfeng Li and Xiangtai Li and Long Zhuo and Lai Xing Ng and Benoit R. Cottereau and Changxin Gao and Liang Pan and Wei Tsang Ooi and Ziwei Liu},
  journal = {arXiv preprint arXiv:2512.10958},
  year    = {2025}
}
```

```bibtex
@article{survey_3d_4d_world_models,
  title   = {{3D} and {4D} World Modeling: A Survey},
  author  = {Lingdong Kong and Wesley Yang and Jianbiao Mei and Youquan Liu and Ao Liang and Dekai Zhu and Dongyue Lu and Wei Yin and Xiaotao Hu and Mingkai Jia and Junyuan Deng and Kaiwen Zhang and Yang Wu and Tianyi Yan and Shenyuan Gao and Song Wang and Linfeng Li and Liang Pan and Yong Liu and Jianke Zhu and Wei Tsang Ooi and Steven C. H. Hoi and Ziwei Liu},
  journal = {arXiv preprint arXiv:2509.07996},
  year    = {2025}
}
```
Updates
- [12/2025] - The official :balance_scale: WorldLens Leaderboard is online at HuggingFace Spaces. We invite researchers and practitioners to submit their models for evaluation on the leaderboard, enabling consistent comparison and supporting progress in world model research.
- [12/2025] - A collection of 3D and 4D world models is available at :hugs: awesome-3d-4d-world-models.
- [12/2025] - The Project Page is online. :rocket:
Outline
- Updates
- Outline
- :earth_asia: WorldLens Benchmark
- :balance_scale: WorldLens Leaderboard
  - Leaderboard
- :gear: Installation
- :hotsprings: Data Preparation
- :rocket: Getting Started
  - Visualizations
- :hugs: WorldLens-26K
- :robot: WorldLens-Agent
- :memo: TODO List
- License
- Acknowledgements
- Related Projects
:earth_asia: WorldLens Benchmark
- Generative world models must go beyond visual realism to achieve geometric consistency, physical plausibility, and functional reliability.
- WorldLens is a unified benchmark that evaluates these capabilities across five complementary aspects, from low-level appearance fidelity to high-level behavioral realism.
- Each aspect is decomposed into fine-grained, interpretable dimensions, forming a comprehensive framework that bridges human perception, physical reasoning, and downstream utility.
For additional details and visual examples, kindly refer to our :books: Paper and :earth_asia: Project Page.
:balance_scale: WorldLens Leaderboard
| Aspect | Description |
|---|---|
| Generation | Measuring whether a model can synthesize visually realistic, temporally stable, and semantically consistent scenes. Even state-of-the-art models that achieve low perceptual error (e.g., LPIPS, FVD) often suffer from view flickering or motion instability, revealing the limits of current diffusion-based architectures. |
| Reconstruction | Probing whether generated videos can be reprojected into a coherent 4D scene using differentiable rendering. Models that appear sharp in 2D frequently collapse when reconstructed, producing geometric "floaters": a gap that exposes how temporal coherence remains weakly coupled in most pipelines. |
| Action-Following | Testing whether a pre-trained action planner can operate safely inside the generated world. High open-loop realism does not guarantee safe closed-loop control; almost all existing world models trigger collisions or off-road drifts, underscoring that photometric realism alone cannot yield functional fidelity. |
| Downstream Task | Evaluating whether synthetic data can support downstream perception models trained on real-world datasets. Even visually appealing worlds may degrade detection or segmentation accuracy by 30-50%, highlighting that alignment to task distributions, not just image quality, is vital for practical usability. |
| Human Preference | Capturing subjective scores such as world realism, physical plausibility, and behavioral safety through large-scale human annotations. Our study reveals that models with strong geometric consistency are generally rated as more "real", confirming that perceptual fidelity is inseparable from structural coherence. |
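As one concrete illustration of the Generation axis, temporal stability can be approximated by the cosine similarity of CLIP features between consecutive frames (a ViT-B/32 checkpoint also appears in the metric config shown later). Below is a minimal sketch, assuming OpenAI's `clip` package and frames given as PIL images; it conveys the idea only and is not the benchmark's exact implementation:

```python
# Illustrative sketch: a CLIP-feature temporal-consistency score.
# NOT the exact WorldLens metric; it only conveys the idea.
import clip   # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def temporal_consistency(frames: "list[Image.Image]") -> float:
    """Mean cosine similarity between CLIP features of consecutive frames."""
    with torch.no_grad():
        feats = torch.cat(
            [model.encode_image(preprocess(f).unsqueeze(0).to(device)) for f in frames]
        )
        feats = feats / feats.norm(dim=-1, keepdim=True)  # L2-normalize per frame
        # Dot products of adjacent normalized features = cosine similarities.
        return (feats[:-1] * feats[1:]).sum(dim=-1).mean().item()
```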
Leaderboard
An interactive :balance_scale: WorldLens Leaderboard is online at :hugs: HuggingFace Spaces. We invite researchers and practitioners to submit their models for evaluation on the leaderboard, enabling consistent comparison and supporting progress in world model research.
 Benchmarked Models
- [x] MagicDrive, ICLR 2024.
- [x] Panacea, CVPR 2024.
- [x] DreamForge, arXiv 2024.
- [x] DriveDreamer-2, AAAI 2025.
- [x] DrivingSphere, CVPR 2025.
- [x] OpenDWM, CVPR 2025.
- [x] MagicDrive-V2, ICCV 2025.
- [x] DiST-4D, ICCV 2025.
- [x] RLGF, NeurIPS 2025.
- [x] X-Scene, NeurIPS 2025.
- [ ] . . .
:gear: Installation
The WorldLens evaluation toolkit is developed and tested under Python 3.9 + CUDA 11.8. We recommend using Conda to manage the environment.
- Create Environment:
```bash
conda create -n worldbench python=3.9.20
conda activate worldbench
```
- Install PyTorch:
```bash
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 \
    --index-url https://download.pytorch.org/whl/cu118
```
- Install MMCV (with CUDA):
```bash
cd worldbench/third_party/mmcv-1.6.0
MMCV_WITH_OPS=1 pip install -e .
```
Note: We modified the C++ standard to C++17 for better compatibility. You may adjust it in `worldbench/third_party/mmcv-1.6.0/setup.py` based on your system.
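If your toolchain rejects C++17, one way to switch the standard is a one-off patch of the flag. A hedged sketch, assuming the flag appears literally as `-std=c++17` in that setup.py:

```python
# Hedged one-off patch: lower the C++ standard used by the MMCV build.
# Assumes the flag appears literally as "-std=c++17" in setup.py.
from pathlib import Path

setup_py = Path("worldbench/third_party/mmcv-1.6.0/setup.py")
setup_py.write_text(setup_py.read_text().replace("-std=c++17", "-std=c++14"))
```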
- Install MMSegmentation:
```bash
pip install https://github.com/open-mmlab/mmsegmentation/archive/refs/tags/v0.30.0.zip
```
- Install MMDetection:
```bash
pip install mmdet==2.28.2
```
- Install BEVFusion-based MMDet3D:
```bash
git clone --recursive https://github.com/worldbench/WorldLens.git
cd worldbench/third_party/bevfusion
python setup.py develop
```
Additional Notes:
- The C++ standard was updated to C++17.
- We modified the sparse convolution import logic in `worldbench/third_party/bevfusion/mmdet3d/ops/spconv/conv.py`.
- Install MMDetection3D (v1.0.0rc6):
```bash
cd worldbench/third_party/mmdetection3d-1.0.0rc6
pip install -v -e .
```
Required dependency versions:
```
numpy==1.23.5
numba==0.53.0
```
- Pretrained Models:
WorldLens relies on several pretrained models (e.g., CLIP, segmentation, and depth networks). Please download them from HuggingFace and place them under:
```
./pretrained_models/
```
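As a convenience, the download can be scripted with `huggingface_hub`. A minimal sketch, where the repo id is a placeholder, not the actual WorldLens model hub:

```python
# Hedged sketch: fetch the pretrained checkpoints with huggingface_hub.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="worldbench/pretrained-models",  # placeholder: use the repo referenced by WorldLens
    local_dir="./pretrained_models",
)
```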
:hotsprings: Data Preparation
Here we take nuScenes as an example. Required files:
- The official nuScenes dataset
- 12 Hz interpolated annotations from the ECCV 2024 Workshop (CODA Track 2)
- Tracking & temporal .pkl files from HuggingFace (WorldLens Data Preparation)
Final Directory Structure
```
data
├── nuscenes
│   ├── can_bus
│   ├── lidarseg
│   ├── maps
│   ├── occ3d
│   ├── samples
│   ├── sweeps
│   ├── v1.0-mini
│   └── v1.0-trainval
├── nuscenes_map_aux_12Hz_interp
│   └── val_200x200_12Hz_interp.h5
├── nuscenes_mmdet3d-12Hz
│   ├── nuscenes_interp_12Hz_dbinfos_train.pkl
│   ├── nuscenes_interp_12Hz_infos_track2_eval.pkl
│   ├── nuscenes_interp_12Hz_infos_train.pkl
│   └── nuscenes_interp_12Hz_infos_val.pkl
├── nuscenes_mmdet3d-12Hz_description
│   ├── nuscenes_interp_12Hz_updated_description_train.pkl
│   └── nuscenes_interp_12Hz_updated_description_val.pkl
├── nuscenes_mmdet3d_2
│   └── nuscenes_infos_temporal_val_3keyframes.pkl
└── nuscenes_track
    ├── ada_track_infos_train.pkl
    └── ada_track_infos_val.pkl
```
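Before running evaluation, it can help to sanity-check this layout. A small hedged helper, with paths taken from the tree above (a subset; extend as needed):

```python
# Hedged helper: verify a subset of the expected data layout shown above.
from pathlib import Path

EXPECTED = [
    "nuscenes/v1.0-trainval",
    "nuscenes_map_aux_12Hz_interp/val_200x200_12Hz_interp.h5",
    "nuscenes_mmdet3d-12Hz/nuscenes_interp_12Hz_infos_val.pkl",
    "nuscenes_track/ada_track_infos_val.pkl",
]

root = Path("data")
missing = [p for p in EXPECTED if not (root / p).exists()]
print("Layout OK." if not missing else f"Missing entries: {missing}")
```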
:rocket: Getting Started
- Configure Metrics:
All evaluation metrics are defined in a unified YAML format under tools/configs/.
Example: Temporal (Depth) Consistency:
```yaml
temporal_consistency:
  - name: temporal_consistency
    method_name: ${method_name}
    need_preprocessing: true
    repeat_times: 1
    local_save_path: pretrained_models/clip/ViT-B-32.pt
```
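For reference, the `${method_name}` placeholder can be resolved before parsing. A hedged sketch of such a loader; the toolkit handles this internally, and the config path below is illustrative:

```python
# Illustrative loader for a WorldLens-style metric YAML; resolves the
# ${method_name} placeholder before parsing. Not the toolkit's actual code.
from string import Template
import yaml  # pip install pyyaml

def load_metric_config(path: str, method_name: str) -> dict:
    with open(path) as f:
        raw = f.read()
    # string.Template understands the ${method_name} syntax used in the YAML above.
    return yaml.safe_load(Template(raw).substitute(method_name=method_name))

cfg = load_metric_config("tools/configs/temporal_consistency.yaml", "magicdrive")  # illustrative path
print(cfg["temporal_consistency"][0]["method_name"])  # -> magicdrive
```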
- Run Evaluation:
```bash
bash tools/scripts/evaluate.sh $TASK $METHOD_NAME
```
- Example: evaluating MagicDrive (a video-based world model):
```bash
bash tools/scripts/evaluate.sh videogen magicdrive
```
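To evaluate several methods in one go, the same entry point can be looped from Python. A minimal sketch; method names follow the generated-results layout in the next section:

```python
# Hedged batch runner: invoke the evaluation script for several methods.
import subprocess

TASK = "videogen"
METHODS = ["magicdrive", "dreamforge", "opendwm"]  # names as used under ./generated_results

for method in METHODS:
    subprocess.run(["bash", "tools/scripts/evaluate.sh", TASK, method], check=True)
```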
Visualizations
- Prepare Generated Results: Download model outputs from HuggingFace and move them to:
```
./generated_results
├── dist4d
├── dreamforge
├── drivedreamer2
├── gt
├── magicdrive
├── opendwm
├── xscene
└── video_submission
```
- Visualization Tools:
  - Multi-view Panorama Viewer (Cross-view Consistency): `python tools/showcase/video_multi_view_app.py`
  - Method-to-Method Comparison: `python tools/showcase/video_method_compare_app.py`
  - GIF-based Comparison: `python tools/showcase/gif_method_compare_app.py`
:hugs: WorldLens-26K
To be updated.
:robot: WorldLens-Agent
To be updated.
:memo: TODO List
- [x] Initial release. :rocket:
- [ ] Release the WorldLens-26K dataset.
- [ ] Support additional datasets (Waymo, Argoverse, and more).
- [ ] Add agent-based automatic evaluators.
- [ ] . . .
License
This work is released under the Apache License, Version 2.0, while some specific implementations in this codebase might be under other licenses. Kindly refer to LICENSE.md for a careful check if you are using our code for commercial purposes.
Acknowledgements
To be added.
Related Projects
- 3D and 4D World Modeling: A Survey - [GitHub Repo] - [Project Page] - [Paper]
- VBench: Comprehensive Benchmark Suite for Video Generative Models - [GitHub Repo] - [Project Page] - [Paper]
- VBench++: Comprehensive and Versatile Benchmark Suite for Video Generative Models - [GitHub Repo] - [Project Page] - [Paper]
- LiDARCrafter: Dynamic 4D World Modeling from LiDAR Sequences - [GitHub Repo] - [Project Page] - [Paper]
- 3EED: Ground Everything Everywhere in 3D - [GitHub Repo] - [Project Page] - [Paper]
- Are VLMs Ready for Autonomous Driving? A Study from Reliability, Data & Metric Perspectives - [GitHub Repo] - [Project Page] - [Paper]
- Perspective-Invariant 3D Object Detection - [GitHub Repo] - [Project Page] - [Paper]
- DynamicCity: Large-Scale 4D Occupancy Generation from Dynamic Scenes - [GitHub Repo] - [Project Page] - [Paper]









