hustvl/MIMDet: [ICCV 2023] You Only Look at One Partial Sequence

Unleashing Vanilla Vision Transformer with Masked Image Modeling for Object Detection

Yuxin Fang¹*, Shusheng Yang¹*, Shijie Wang¹*, Yixiao Ge², Ying Shan², Xinggang Wang¹ 📧

¹ School of EIC, HUST; ² ARC Lab, Tencent PCG.

(*) equal contribution, (📧) corresponding author.

ICCV 2023 [paper]

This repo provides code and pretrained models for MIMDet (Masked Image Modeling for Detection).

| Model | Sample Ratio | Schedule | Aug | Box AP | Mask AP | #params | config | model / log |
|---|---|---|---|---|---|---|---|---|
| MIMDet-ViT-B | 0.5 | 3x | [480-800, 1333] w/crop | 51.7 | 46.2 | 127.96M | config | model / log |
| MIMDet-ViT-L | 0.5 | 3x | [480-800, 1333] w/crop | 54.3 | 48.2 | 349.33M | config | model / log |
| Benchmarking-ViT-B | - | 25ep | [1024, 1024] LSJ(0.1-2) | 48.0 | 43.0 | 118.67M | config | model / log |
| Benchmarking-ViT-B | - | 50ep | [1024, 1024] LSJ(0.1-2) | 50.2 | 44.9 | 118.67M | config | model / log |
| Benchmarking-ViT-B | - | 100ep | [1024, 1024] LSJ(0.1-2) | 50.4 | 44.9 | 118.67M | config | model / log |

To get started, clone this repo and create a conda environment:

git clone https://github.com/hustvl/MIMDet.git
cd MIMDet
conda create -n mimdet python=3.9
conda activate mimdet
# install dependencies (MIMDet builds on detectron2 and timm; pinned versions may differ)
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
pip install timm

MIMDet is built upon detectron2, so please organize the dataset directory following detectron2's conventions; we refer users to the detectron2 documentation for detailed instructions. The overall hierarchical structure is illustrated as follows:

MIMDet
├── datasets
│   ├── coco
│   │   ├── annotations
│   │   ├── train2017
│   │   ├── val2017
│   │   ├── test2017
│   ├── ...
├── ...
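As a quick sanity check, the snippet below verifies that detectron2 can locate the COCO splits laid out above. This is a minimal sketch, assuming detectron2's builtin COCO registration and the default `./datasets` root (configurable via the `DETECTRON2_DATASETS` environment variable):

```python
# Verify that detectron2 can find the COCO splits laid out above.
# Assumes detectron2's builtin dataset registration; the env var must be
# set before importing detectron2.data, which reads it at import time.
import os

os.environ.setdefault("DETECTRON2_DATASETS", "./datasets")

from detectron2.data import DatasetCatalog, MetadataCatalog

for split in ("coco_2017_train", "coco_2017_val"):
    records = DatasetCatalog.get(split)  # loads annotations; raises if paths are wrong
    classes = MetadataCatalog.get(split).thing_classes
    print(f"{split}: {len(records)} images, {len(classes)} classes")
```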

Download the full MAE-pretrained ViT-B and ViT-L model checkpoints (i.e., including the decoder). See issue #8 in the MAE repo.
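Because MIMDet consumes the MAE decoder as well as the encoder, it is worth confirming that the downloaded file really is the full checkpoint. A minimal sketch, assuming the usual MAE release layout (weights under a `model` key, decoder parameters prefixed with `decoder_`; the path is a placeholder):

```python
# Check that an MAE checkpoint contains decoder weights before training.
# The path is a placeholder; the key layout assumes the MAE release format.
import torch

ckpt_path = "<MAE_MODEL_PATH>"  # e.g. the full ViT-B checkpoint downloaded above
ckpt = torch.load(ckpt_path, map_location="cpu")
state = ckpt.get("model", ckpt)  # MAE releases wrap weights in a "model" dict

decoder_keys = [k for k in state if k.startswith("decoder_")]
print(f"{len(state)} tensors total, {len(decoder_keys)} in the decoder")
assert decoder_keys, "no decoder weights found: this is not the *full* checkpoint"
```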

# single-machine training
python lazyconfig_train_net.py --config-file <CONFIG_FILE> --num-gpus <GPU_NUM> mae_checkpoint.path=<MAE_MODEL_PATH>

# multi-machine training
python lazyconfig_train_net.py --config-file <CONFIG_FILE> --num-gpus <GPU_NUM> --num-machines <MACHINE_NUM> --master_addr <MASTER_ADDR> --master_port <MASTER_PORT> mae_checkpoint.path=<MAE_MODEL_PATH>

# inference
python lazyconfig_train_net.py --config-file <CONFIG_FILE> --num-gpus <GPU_NUM> --eval-only train.init_checkpoint=<MODEL_PATH>

# inference with 100% sample ratio (please refer to our paper for detailed analysis)
python lazyconfig_train_net.py --config-file <CONFIG_FILE> --num-gpus <GPU_NUM> --eval-only train.init_checkpoint=<MODEL_PATH> model.backbone.bottom_up.sample_ratio=1.0
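The dotted `model.backbone.bottom_up.sample_ratio=1.0` override goes through detectron2's lazy config system. Below is a minimal sketch of the programmatic equivalent, assuming detectron2's `LazyConfig` API (`<CONFIG_FILE>` and `<MODEL_PATH>` are placeholders, as above):

```python
# Programmatic equivalent of the command-line override above,
# using detectron2's lazy config API.
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.config import LazyConfig, instantiate

cfg = LazyConfig.load("<CONFIG_FILE>")  # one of the MIMDet lazy configs
cfg = LazyConfig.apply_overrides(cfg, ["model.backbone.bottom_up.sample_ratio=1.0"])

model = instantiate(cfg.model)                     # build the detector
DetectionCheckpointer(model).load("<MODEL_PATH>")  # load released weights
model.eval()
```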

This project is based on MAE, Detectron2, and timm. Thanks for their wonderful work.

MIMDet is released under the MIT License.

If you find our paper and code useful in your research, please consider giving a star ⭐ and citation 📝 :)

@article{MIMDet,
  title={Unleashing Vanilla Vision Transformer with Masked Image Modeling for Object Detection},
  author={Fang, Yuxin and Yang, Shusheng and Wang, Shijie and Ge, Yixiao and Shan, Ying and Wang, Xinggang},
  journal={arXiv preprint arXiv:2204.02964},
  year={2022}
}
