

DFormer for RGBD Semantic Segmentation

We are honored to have been invited by 3D视觉工坊 (3D Vision Workshop): on the evening of June 19 at 19:00 we gave a live talk on the DFormerv2 paper. If you are interested, you can watch the replay, and you are welcome to open an issue in this repository for questions and discussion. The slides used in the talk can be downloaded from BaiduNetDisk.

This repository contains the official implementation of the following papers:

DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation
Bowen Yin, Xuying Zhang, Zhongyu Li, Li Liu, Ming-Ming Cheng, Qibin Hou*
ICLR 2024. Paper Link | Homepage | WeChat Interpretation (by 集智书童) | DFormer-SOD | Jittor Version (Chinese deep-learning framework)

DFormerv2: Geometry Self-Attention for RGBD Semantic Segmentation
Bo-Wen Yin, Jiao-Long Cao, Ming-Ming Cheng, Qibin Hou*
CVPR 2025. Paper Link | Chinese Version | Live Replay | Slides (PPT) | Geometry prior demo | Jittor Version (Chinese deep-learning framework)

🤖 RGB-D ImageNet and pretraining code (you can train your own encoders)

Application to new datasets (how to add a new dataset)

We provide the geometry prior generation procedure used in DFormerv2, which you can build on to extend depth-related research; a minimal sketch of the idea is given below. We also provide the RGBD pretraining code in RGBD-Pretrain, so you can pretrain more powerful RGBD encoders and contribute to RGBD research.
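
To make the idea concrete, here is a minimal, hypothetical sketch of how a depth map can be turned into a geometry prior that biases self-attention. It is not the exact DFormerv2 formulation (see the paper, the Geometry prior demo, and the released code for that); the tensor shapes, the single decay factor, and the log-decay form are all simplifying assumptions.

import math
import torch

def depth_geometry_prior(depth, decay=0.9):
    # depth: (B, N) per-token depth values, e.g. a depth map pooled to the feature resolution.
    # Tokens that are far apart in depth receive a larger (more negative) penalty.
    diff = (depth.unsqueeze(2) - depth.unsqueeze(1)).abs()  # (B, N, N) pairwise depth distance
    return diff * math.log(decay)                           # log-decay prior added to attention logits

def geometry_attention(q, k, v, depth, decay=0.9):
    # q, k, v: (B, heads, N, d); depth: (B, N). Standard scaled dot-product attention plus the depth prior.
    logits = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    logits = logits + depth_geometry_prior(depth, decay).unsqueeze(1)  # broadcast over heads
    return torch.softmax(logits, dim=-1) @ v

Tokens with similar depth keep most of their attention weight, while tokens separated by a large depth gap are suppressed, which is the intuition behind the geometry self-attention.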

We invite everyone to contribute to making this project more accessible and useful. If you have any questions about our work, feel free to contact us via e-mail (bowenyin@mail.nankai.edu.cn, caojiaolong@mail.nankai.edu.cn). If you use our code and evaluation toolbox in your research, please cite these papers (BibTeX below).


Figure 1: Comparisons between the existing methods and our DFormer (RGB-D Pre-training).


Figure 2: Comparisons among the main RGBD segmentation pipelines and our approach. (a) Dual encoders encode RGB and depth separately and fusion modules merge them, as in CMX and GeminiFusion; (b) a unified RGBD encoder extracts and fuses RGBD features, as in DFormer; (c) DFormerv2 uses depth to form a geometry prior of the scene and then enhances the visual features.


Figure 3: The geometry attention map in our DFormerv2 and the effect of other attention mechanisms. Our geometry attention is endowed with 3D geometry perception and can focus on the related regions across the whole scene. A simple visualization demo is provided at https://huggingface.co/spaces/bbynku/DFormerv2.

0. Install

conda create -n dformer python=3.10 -y
conda activate dformer

# CUDA 11.8
conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=11.8 -c pytorch -c nvidia

pip install mmcv==2.1.0 -f https://download.openmmlab.com/mmcv/dist/cu118/torch2.1/index.html

pip install tqdm opencv-python scipy tensorboardX tabulate easydict ftfy regex

1. Download Datasets and Checkpoints.

By default, you can put datasets into the folder 'datasets' or use 'ln -s path_to_data datasets'.

Compared to the original datasets, we map the depth maps (.npy) to .png via 'plt.imsave(save_path, np.load(depth), cmap='Greys_r')', reorganize the file paths into a clear format, and add the split files (.txt). A conversion sketch is given below.
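
The following is a small sketch of that conversion, using the same plt.imsave call as above; the input and output paths are illustrative placeholders and should be adapted to your raw data layout.

import glob
import os
import numpy as np
import matplotlib.pyplot as plt

raw_dir = "path_to_raw_depth"               # folder with the original .npy depth files (placeholder)
out_dir = "datasets/DatasetName1/Depth"     # target folder from the structure shown below
os.makedirs(out_dir, exist_ok=True)

for depth in glob.glob(os.path.join(raw_dir, "*.npy")):
    save_path = os.path.join(out_dir, os.path.basename(depth).replace(".npy", ".png"))
    plt.imsave(save_path, np.load(depth), cmap="Greys_r")   # same mapping as described above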

ImageNet-1K pre-trained and NYUDepthv2- or SUNRGBD-trained DFormer-T/S/B/L and DFormerv2-S/B/L checkpoints can be downloaded at:


Organize the checkpoints and datasets folders in the following structure:

<checkpoints>
|-- <pretrained>
    |-- <DFormer_Large.pth.tar>
    |-- <DFormer_Base.pth.tar>
    |-- <DFormer_Small.pth.tar>
    |-- <DFormer_Tiny.pth.tar>
    |-- <DFormerv2_Large_pretrained.pth>
    |-- <DFormerv2_Base_pretrained.pth>
    |-- <DFormerv2_Small_pretrained.pth>
|-- <trained>
    |-- <NYUDepthv2>
        |-- ...
    |-- <SUNRGBD>
        |-- ...
<datasets>
|-- <DatasetName1>
    |-- <RGB>
        |-- <name1>.<ImageFormat>
        |-- <name2>.<ImageFormat>
        ...
    |-- <Depth>
        |-- <name1>.<DepthFormat>
        |-- <name2>.<DepthFormat>
    |-- train.txt
    |-- test.txt
|-- <DatasetName2>
|-- ...
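
Before training, it can help to verify that the folders follow this layout. Below is a hypothetical sanity check (the depth extension and dataset name are placeholders) that confirms every RGB image has a matching depth map.

import os

def check_dataset(root, depth_ext=".png"):
    # root: a dataset folder such as 'datasets/NYUDepthv2' laid out as shown above.
    rgb_dir, depth_dir = os.path.join(root, "RGB"), os.path.join(root, "Depth")
    for name in sorted(os.listdir(rgb_dir)):
        stem, _ = os.path.splitext(name)
        depth_path = os.path.join(depth_dir, stem + depth_ext)
        if not os.path.exists(depth_path):
            print(f"Missing depth map for {name}: {depth_path}")

check_dataset("datasets/NYUDepthv2")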

2. Train.

You can change the 'local_configs' file referenced in the training script to choose the model to train.

After training, the checkpoints will be saved under 'checkpoints/XXX', where XXX depends on the training config.
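
The training script loads the ImageNet-pretrained backbone weights according to the chosen config. If you want to do this manually (for example, to inspect a checkpoint), a hypothetical sketch is shown below; the checkpoint path is one of the files listed above, and whether the weights are wrapped in a 'state_dict' key is an assumption.

import torch

def load_pretrained(model, ckpt_path="checkpoints/pretrained/DFormer_Large.pth.tar"):
    # model: your DFormer instance built from the chosen local_configs entry.
    state = torch.load(ckpt_path, map_location="cpu")
    state = state.get("state_dict", state)          # unwrap if the checkpoint stores weights under 'state_dict'
    missing, unexpected = model.load_state_dict(state, strict=False)
    print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")
    return model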

3. Eval.

You can change the 'local_configs' file and the checkpoint path in the script to choose the model for testing.
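
For reference, the evaluation reports mean IoU over the semantic classes. Below is a small, independent sketch of the metric (not the repo's evaluation toolbox); the ignore index of 255 is a common convention and is an assumption here.

import numpy as np

def update_confusion(conf, pred, label, num_classes, ignore_index=255):
    # conf: (C, C) running confusion matrix; pred/label: integer class maps of the same shape.
    mask = label != ignore_index
    idx = num_classes * label[mask].astype(int) + pred[mask].astype(int)
    conf += np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)
    return conf

def mean_iou(conf):
    inter = np.diag(conf).astype(float)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    iou = inter / np.where(union > 0, union, np.nan)   # classes absent from both pred and label are skipped
    return float(np.nanmean(iou))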

4. Visualize.

5. FLOPs & Parameters.

PYTHONPATH="$(dirname $0)/..":$PYTHONPATH python benchmark.py --config local_configs.NYUDepthv2.DFormer_Large
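
If you only need the parameter count, a generic snippet like the following works for any PyTorch model (this is independent of benchmark.py, which also reports FLOPs):

import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    # Count trainable parameters and print them in millions.
    n = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"Trainable parameters: {n / 1e6:.2f} M")
    return n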

6. Latency.

PYTHONPATH="$(dirname $0)/..":$PYTHONPATH python utils/latency.py --config local_configs.NYUDepthv2.DFormer_Large

P.S.: Latency depends heavily on the device, so it is recommended to compare latencies measured on the same device.
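
If you want to measure latency outside of utils/latency.py, the sketch below shows the usual pattern: warm up, synchronize, then average over many forward passes. It assumes a CUDA device and a model called as model(rgb, depth); both are assumptions, not the script's actual interface.

import time
import torch

@torch.no_grad()
def measure_latency(model, rgb, depth, warmup=10, runs=50):
    model.eval()
    for _ in range(warmup):                   # warm-up passes stabilize cuDNN autotuning and GPU clocks
        model(rgb, depth)
    torch.cuda.synchronize()                  # wait for queued kernels before starting the timer
    start = time.perf_counter()
    for _ in range(runs):
        model(rgb, depth)
    torch.cuda.synchronize()                  # wait again before stopping the timer
    return (time.perf_counter() - start) / runs   # average seconds per forward pass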


Table 1: Comparisons between the existing methods and our DFormer.


Table 2: Comparisons between the existing methods and our DFormerv2.

We invite everyone to contribute to making this project more accessible and useful. If you have any questions or suggestions about our work, feel free to contact me via e-mail (bowenyin@mail.nankai.edu.cn) or raise an issue.

You may want to cite:

@inproceedings{yin2024dformer,
  title={DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation},
  author={Yin, Bowen and Zhang, Xuying and Li, Zhong-Yu and Liu, Li and Cheng, Ming-Ming and Hou, Qibin},
  booktitle={ICLR},
  year={2024}
}

@inproceedings{yin2025dformerv2,
  title={DFormerv2: Geometry Self-Attention for RGBD Semantic Segmentation},
  author={Yin, Bo-Wen and Cao, Jiao-Long and Cheng, Ming-Ming and Hou, Qibin},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={19345--19355},
  year={2025}
}

Our implementation is mainly based on mmsegmentation, CMX and CMNext. Thanks to their authors.

Code in this repo is for non-commercial use only.

