Custom Data Pipelines — MMClassification 0.25.0 documentation

Note

You are reading the documentation for MMClassification 0.x, which will soon be deprecated at the end of 2022. We recommend you upgrade to MMClassification 1.0 to enjoy fruitful new features and better performance brought by OpenMMLab 2.0. Check the installation tutorial, migration tutorial and changelog for more details.

Tutorial 4: Custom Data Pipelines

Design of Data Pipelines

Following typical conventions, we use Dataset and DataLoader for data loading with multiple workers. Indexing a Dataset returns a dict of data items corresponding to the arguments of the model's forward method.

The data preparation pipeline and the dataset are decomposed. Usually, a dataset defines how to process the annotations, and a data pipeline defines all the steps to prepare a data dict. A pipeline consists of a sequence of operations. Each operation takes a dict as input and outputs a dict for the next transform.
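
Conceptually, a pipeline is just function composition over a dict. The minimal sketch below (plain Python, not the actual MMClassification implementation; the transform names and keys are only illustrative) shows the dict-in/dict-out contract:

def load_image(results):
    # Hypothetical loading step: turn results['filename'] into results['img'].
    results['img'] = f"<pixels of {results['filename']}>"
    return results

def random_flip(results):
    # Hypothetical augmentation step: record the flip decision for later steps.
    results['flip'] = True
    return results

def run_pipeline(transforms, results):
    # Each transform receives the dict produced by the previous one.
    for transform in transforms:
        results = transform(results)
    return results

print(run_pipeline([load_image, random_flip], {'filename': 'demo.jpg'}))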

The operations are categorized into data loading, pre-processing and formatting.

Here is a pipeline example for ResNet-50 training on ImageNet.

img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='RandomResizedCrop', size=224),
    dict(type='RandomFlip', flip_prob=0.5, direction='horizontal'),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='ImageToTensor', keys=['img']),
    dict(type='ToTensor', keys=['gt_label']),
    dict(type='Collect', keys=['img', 'gt_label'])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', size=256),
    dict(type='CenterCrop', crop_size=224),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='ImageToTensor', keys=['img']),
    dict(type='Collect', keys=['img'])
]

For each operation, we list the related dict fields that are added/updated/removed. At the end of the pipeline, we use Collect to only retain the necessary items for forward computation.
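
As a sketch of how such a config is consumed, assuming MMClassification 0.x, where mmcls.datasets.pipelines.Compose builds each dict from the PIPELINES registry, and assuming the input keys (img_prefix, img_info, gt_label) expected by LoadImageFromFile and ToTensor; the image path below is only a placeholder:

import numpy as np
from mmcls.datasets.pipelines import Compose

# Build callable transforms from the train_pipeline config defined above.
pipeline = Compose(train_pipeline)

results = dict(
    img_prefix='data/imagenet/train',                 # placeholder prefix
    img_info=dict(filename='class_x/example.JPEG'),   # placeholder filename
    gt_label=np.array(0, dtype=np.int64),
)
results = pipeline(results)

# After Collect(keys=['img', 'gt_label']) the dict keeps 'img', 'gt_label'
# and the collected meta information under 'img_metas'.
print(results.keys())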

Data loading

LoadImageFromFile

By default, LoadImageFromFile loads images from disk, but disk IO may become the bottleneck for efficient small models. mmcv supports various backends to accelerate this process. For example, if the training machines have set up memcached, we can revise the config as follows.

import os.path as osp

memcached_root = '/mnt/xxx/memcached_client/'
train_pipeline = [
    dict(
        type='LoadImageFromFile',
        file_client_args=dict(
            backend='memcached',
            server_list_cfg=osp.join(memcached_root, 'server_list.conf'),
            client_cfg=osp.join(memcached_root, 'client.conf'))),
]

More supported backends can be found in mmcv.fileio.FileClient.
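
For reference, the same file_client_args map onto mmcv.FileClient, which can also be used directly; a minimal sketch with the default disk backend (the image path is a placeholder):

import mmcv
from mmcv.fileio import FileClient

# The disk backend is the default; memcached, petrel, etc. play the same role
# behind LoadImageFromFile when passed via file_client_args.
file_client = FileClient(backend='disk')
img_bytes = file_client.get('demo/demo.JPEG')      # placeholder path
img = mmcv.imfrombytes(img_bytes, flag='color')
print(img.shape)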

Pre-processing

Resize

RandomFlip

RandomCrop

Normalize

Formatting

ToTensor

ImageToTensor

Collect

For more information about other data transformation classes, please refer to Data Transformations.
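
To make the pre-processing and formatting steps more concrete, the sketch below reproduces roughly what Normalize and ImageToTensor do to results['img'], using mmcv.imnormalize and torch directly (an illustration of the effect, not the transforms' actual code):

import numpy as np
import torch
import mmcv

# Dummy HWC, BGR, uint8 image standing in for the output of LoadImageFromFile.
img = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)

# Roughly what Normalize does: subtract mean, divide by std, optionally BGR->RGB.
mean = np.array([123.675, 116.28, 103.53], dtype=np.float32)
std = np.array([58.395, 57.12, 57.375], dtype=np.float32)
img = mmcv.imnormalize(img, mean, std, to_rgb=True)

# Roughly what ImageToTensor does: HWC numpy array -> CHW torch tensor.
tensor = torch.from_numpy(img.transpose(2, 0, 1))
print(tensor.shape)  # torch.Size([3, 224, 224])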

Extend and use custom pipelines
  1. Write a new pipeline in any file, e.g., my_pipeline.py, and place it in the folder mmcls/datasets/pipelines/. The pipeline class needs to implement a __call__ method that takes a dict as input and returns a dict. A more concrete sketch is given after this list.

    from mmcls.datasets import PIPELINES
    
    @PIPELINES.register_module()
    class MyTransform(object):
    
        def __call__(self, results):
            # apply transforms on results['img']
            return results
    
  2. Import the new class in mmcls/datasets/pipelines/__init__.py.

    ...
    from .my_pipeline import MyTransform
    
    __all__ = [
        ..., 'MyTransform'
    ]
    
  3. Use it in config files.

    img_norm_cfg = dict(
        mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
    train_pipeline = [
        dict(type='LoadImageFromFile'),
        dict(type='RandomResizedCrop', size=224),
        dict(type='RandomFlip', flip_prob=0.5, direction='horizontal'),
        dict(type='MyTransform'),
        dict(type='Normalize', **img_norm_cfg),
        dict(type='ImageToTensor', keys=['img']),
        dict(type='ToTensor', keys=['gt_label']),
        dict(type='Collect', keys=['img', 'gt_label'])
    ]
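
As a slightly more concrete sketch of step 1, here is a hypothetical transform (the name AddGaussianNoise, its parameter, and its behaviour are illustrative only, not part of MMClassification); it would be registered, imported, and referenced in config files exactly like MyTransform above:

import numpy as np
from mmcls.datasets import PIPELINES


@PIPELINES.register_module()
class AddGaussianNoise(object):
    """Hypothetical transform: add Gaussian noise to the loaded image."""

    def __init__(self, std=5.0):
        self.std = std

    def __call__(self, results):
        img = results['img'].astype(np.float32)
        noise = np.random.normal(0.0, self.std, img.shape).astype(np.float32)
        results['img'] = img + noise
        return results

    def __repr__(self):
        return f'{self.__class__.__name__}(std={self.std})'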
    
Pipeline visualization

After designing data pipelines, you can use the visualization tools (see tools/visualizations/vis_pipeline.py in the MMClassification repository) to check the transformed images.

