Functions for calling the Deep Learning Tools.
Data Preparation Methods
export_training_data
This function generates training sample image chips from the input imagery data with labeled vector data or classified images. The output of this service tool is the data store string where the output image chips, labels, and metadata files will be stored.
Note
This function is supported with ArcGIS Enterprise (Image Server)
Parameter
Description
input_raster
Required ImageryLayer/Raster/Item/String (URL). Raster layer that needs to be exported for training.
input_class_data
Labeled data, either a feature layer or image layer. Vector inputs should follow a training sample format as generated by the ArcGIS Pro Training Sample Manager. Raster inputs should follow a classified raster format as generated by the Classify Raster tool.
chip_format
Optional string. The raster format for the image chip outputs.
TIFF : TIFF format
PNG : PNG format
JPEG : JPEG format
MRF : MRF (Meta Raster Format)
tile_size
Optional dictionary. The size of the image chips.
Example: {"x": 256, "y": 256}
stride_size
Optional dictionary. The distance to move in the X and Y when creating the next image chip. When stride is equal to the tile size, there will be no overlap. When stride is equal to half of the tile size, there will be 50% overlap.
Example: {"x": 128, "y": 128}
metadata_format
Optional string. The format of the output metadata labels. There are multiple options for output metadata labels for the training data, including KITTI rectangles, PASCAL VOC rectangles, Classified Tiles (a class map) and RCNN_Masks. If your input training sample data is a feature class layer such as a building layer or a standard classification training sample file, use the KITTI or PASCAL VOC rectangle option. The output metadata is a .txt file or .xml file containing the training sample data contained in the minimum bounding rectangle. The name of the metadata file matches the input source image name. If your input training sample data is a class map, use Classified Tiles as your output metadata format option.
KITTI_rectangles : The metadata follows the same format as the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) Object Detection Evaluation dataset. The KITTI dataset is a vision benchmark suite. This is the default. The label files are plain text files. All values, both numerical and strings, are separated by spaces, and each row corresponds to one object. This format can be used with FasterRCNN, RetinaNet, SingleShotDetector and YOLOv3 models.
PASCAL_VOC_rectangles : The metadata follows the same format as the Pattern Analysis, Statistical Modeling and Computational Learning, Visual Object Classes (PASCAL_VOC) dataset. The PASCAL VOC dataset is a standardized image dataset for object class recognition. The label files are XML files and contain information about the image name, class value, and bounding box(es). This format can be used with FasterRCNN, RetinaNet, SingleShotDetector and YOLOv3 models.
Classified_Tiles : This option will output one classified image chip per input image chip, with no other metadata for each image chip. Only the statistics output has more information on the classes, such as class names, class values, and output statistics. This format can be used with BDCNEdgeDetector, DeepLab, HEDEdgeDetector, MultiTaskRoadExtractor, PSPNetClassifier and UnetClassifier models.
RCNN_Masks : This option will output image chips that have a mask on the areas where the sample exists. The model generates bounding boxes and segmentation masks for each instance of an object in the image. This format can be used with the MaskRCNN model.
Labeled_Tiles : This option will label each output tile with a specific class. This format is used for image classification. This format can be used with the FeatureClassifier model.
MultiLabeled_Tiles : Each output tile will be labeled with one or more classes. For example, a tile may be labeled agriculture and also cloudy. This format is used for object classification. This format can be used with the FeatureClassifier model.
Export_Tiles : The output will be image chips with no label. This format is used for image enhancement techniques such as Super Resolution and Change Detection. This format can be used with ChangeDetector, CycleGAN, Pix2Pix and SuperResolution models.
CycleGAN : The output will be image chips with no label. This format is used for the image translation technique CycleGAN, which is used to train images that do not overlap.
Imagenet : Each output tile will be labeled with a specific class. This format is used for object classification; however, it can also be used for object tracking when the Deep Sort model type is used during training.
Panoptic_Segmentation : The output will be one classified image chip and one instance per input image chip. The output will also have image chips that mask the areas where the sample exists; these image chips will be stored in a different folder. This format is used for both pixel classification and instance segmentation, therefore there will be two output label folders.
classvalue_field
Optional string. Specifies the field which contains the class values. If no field is specified, the system will look for a 'value' or 'classvalue' field. If the feature does not contain a class field, the system will presume all records belong to one class.
buffer_radius
Optional integer. Specifies a radius for point feature classes to specify training sample area.
output_location
This is the output location for training sample data. It can be the server data store path or a shared file system path.
Example:
Server datastore path -
/fileShares/deeplearning/rooftoptrainingsamples
/rasterStores/rasterstorename/rooftoptrainingsamples
File share path -
\\servername\deeplearning\rooftoptrainingsamples
context
Optional dictionary. Context contains additional settings that affect task execution. The dictionary can contain values for the following keys:
exportAllTiles - Choose whether all image chips will be exported or only those that overlap the labeled data.
True - Export all the image chips, including those that do not overlap labeled data.
False - Export only the image chips that overlap the labelled data. This is the default.
startIndex - Allows you to set the start index for the sequence of image chips. This lets you append more image chips to an existing sequence. The default value is 0.
cellSize - Cell size can be set using this key in the context parameter.
extent - Sets the processing extent used by the function.
Setting the context parameter will override the values set using the arcgis.env variable (cellSize, extent) for this function.
Example:
{"exportAllTiles": False, "startIndex": 0}
input_mask_polygons
Optional FeatureLayer. The feature layer that delineates the area where image chips will be created. Only image chips that fall completely within the polygons will be created.
rotation_angle
Optional float. The rotation angle that will be used to generate additional image chips.
An image chip will be generated with a rotation angle of 0, which means no rotation. It will then be rotated at the specified angle to create an additional image chip. The same training samples will be captured at multiple angles in multiple image chips for data augmentation. The default rotation angle is 0.
reference_system
Optional string. Specifies the type of reference system to be used to interpret the input image. The reference system specified should match the reference system used to train the deep learning model.
MAP_SPACE : The input image is in a map-based coordinate system. This is the default.
IMAGE_SPACE : The input image is in image space, viewed from the direction of the sensor that captured the image, and rotated such that the tops of buildings and trees point upward in the image.
PIXEL_SPACE : The input image is in image space, with no rotation and no distortion.
process_all_raster_items
Optional bool. Specifies how all raster items in an image service will be processed.
False : all raster items in the image service will be mosaicked together and processed. This is the default.
True : all raster items in the image service will be processed as separate images.
blacken_around_feature
Optional bool. Specifies whether to blacken the pixels around each object or feature in each image tile. This parameter only applies when the metadata format is set to Labeled_Tiles and an input feature class or classified raster has been specified.
False : Pixels surrounding objects or features will not be blackened. This is the default.
True : Pixels surrounding objects or features will be blackened.
fix_chip_size
Optional bool. Specifies whether to crop the exported tiles such that they are all the same size. This parameter only applies when the metadata format is set to Labeled_Tiles and an input feature class or classified raster has been specified.
True : Exported tiles will be the same size and will center on the feature. This is the default.
False : Exported tiles will be cropped such that the bounding geometry surrounds only the feature in the tile.
additional_input_raster
Optional ImageryLayer/Raster/Item/String (URL). An additional input imagery source that will be used for image translation methods.
This parameter is valid when the metadata_format parameter is set to Classified_Tiles, Export_Tiles, or CycleGAN.
input_instance_data
Optional. The training sample data collected that contains classes for instance segmentation.
The input can also be a point feature without a class value field or an integer raster without any class information.
This parameter is only valid when the metadata_format parameter is set to Panoptic_Segmentation.
instance_class_value_field
Optional string. The field that contains the class values for instance segmentation. If no field is specified, the tool will use a value or class value field, if one is present. If the feature does not contain a class field, the tool will determine that all records belong to one class.
This parameter is only valid when the metadata_format parameter is set to Panoptic_Segmentation.
min_polygon_overlap_ratio
Optional float. The minimum overlap percentage for a feature to be included in the training data. If the percentage overlap is less than the value specified, the feature will be excluded from the training chip, and will not be added to the label file.
The percent value is expressed as a decimal. For example, to specify an overlap of 20 percent, use a value of 0.2. The default value is 0, which means that all features will be included.
This parameter improves the performance of the tool and also improves inferencing. The speed is improved because fewer training chips are created. Inferencing is improved because the model is trained to only detect large patches of objects and ignores small corners of features.
This parameter is honored only when the input_class_data parameter value is a feature service.
gis
Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.
estimate
Keyword only parameter. Optional Boolean. If True, the number of credits needed to run the operation will be returned as a float. Available only on ArcGIS Online
future
Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.
Output string containing the location of the exported training data
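Below is a minimal sketch of calling export_training_data, assuming an authenticated ArcGIS Enterprise connection; the layer variables and output path are placeholders for your own imagery, labels, and data store location.

from arcgis.gis import GIS
from arcgis.learn import export_training_data

gis = GIS("home")  # assumes an authenticated connection to an Image Server-enabled portal

# naip_layer and building_footprints are placeholders for your own
# imagery layer and labeled feature layer.
output = export_training_data(
    input_raster=naip_layer,
    input_class_data=building_footprints,
    chip_format="TIFF",
    tile_size={"x": 256, "y": 256},
    stride_size={"x": 128, "y": 128},
    metadata_format="PASCAL_VOC_rectangles",
    classvalue_field="classvalue",
    output_location="/fileShares/deeplearning/rooftoptrainingsamples",
    context={"exportAllTiles": False, "startIndex": 0},
    gis=gis,
)
print(output)  # data store path where chips, labels and metadata were written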
Note: This function has been deprecated starting from ArcGIS API for Python version 1.9.0. Export data using the Prepare Point Cloud Training Data tool, available in the 3D Analyst extension from ArcGIS Pro 2.8 onwards.
Function to calculate estimated batch size based on GPU capacity, size of model and data.
Parameter
Description
model
Required arcgis.learn imagery model. Model instance for which batch size should be estimated. Not supported for text, tabular, timeseries or tracking models such as FullyConnectedNetwork, MLModel, TimeSeriesModel, SiamMask, PSETAE and EfficientDet models.
mode
Optional string. Default 'train'. The mode for which the batch size is estimated. Supported values are 'train' and 'eval', for calculating the batch size in training mode and evaluation mode respectively. Note: max_batchsize is capped at 1024 for train and eval modes, and recommended_batchsize is capped at 64 for train mode.
Named tuple of recommended_batchsize and max_batchsize
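A minimal sketch of estimating a batch size for an imagery model, assuming estimate_batch_size is importable from arcgis.learn as referenced above; the export path is a placeholder and UnetClassifier is used only as an illustrative model.

from arcgis.learn import prepare_data, UnetClassifier, estimate_batch_size

data = prepare_data(r"/path/to/exported/chips", batch_size=8)  # placeholder path
model = UnetClassifier(data)

# Returns a named tuple of recommended_batchsize and max_batchsize
sizes = estimate_batch_size(model, mode="train")
print(sizes.recommended_batchsize, sizes.max_batchsize)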
Prepares a data object from training sample exported by the Export Training Data tool in ArcGIS Pro or Image Server, or training samples in the supported dataset formats. This data object consists of training and validation data sets with the specified transformations, chip size, batch size, split percentage, etc.
Parameter
Description
path
Required string. Path to data directory or a list of paths in case of multi-folder training.
class_mapping
Optional dictionary. Mapping from id to its string label. Not supported for MaskRCNN model.
chip_size
Optional integer, default 224. Size of the image to train the model. Images are cropped to the specified chip_size. If image size is less than chip_size, the image size is used as chip_size. A chip size that is a multiple of 32 pixels is recommended. Not supported for SuperResolution, SiamMask, WNet_cGAN, Pix2Pix and CycleGAN.
val_split_pct
Optional float. Percentage of training data to keep as validation.
batch_size
Optional integer. Default 64. Batch size for mini batch gradient descent (Reduce it if getting CUDA Out of Memory Errors). Batch size is required to be greater than 1. If None is provided, a recommended batch size is used. This is estimated based on GPU capacity, size of model and data. To explicitly find the recommended batch_size, use arcgis.learn.estimate_batch_size() method.
transforms
Optional tuple. Fast.ai transforms for data augmentation of the training and validation datasets respectively (good defaults that work well for satellite imagery have been set). If transforms is set to False, no transformation will take place and the chip_size parameter will also not take effect. If the dataset_type is 'PointCloud' or 'PointCloudOD', use Transform3d.
collate_fn
Optional function. Passed to PyTorch to collate data into batches (the default usually works).
seed
Optional integer. Random seed for reproducible train-validation split.
dataset_type
Optional string. The prepare_data() function will infer the dataset_type on its own if the path contains a map.txt file. If the path does not contain the map.txt file, pass one of 'PASCAL_VOC_rectangles', 'KITTI_rectangles', 'Imagenet'. This parameter is mandatory for the dataset types 'PointCloud', 'PointCloudOD', 'ImageCaptioning', 'ChangeDetection', 'WNet_cGAN' and 'ObjectTracking'. Note: For details on dataset_type please refer to this link.
resize_to
Optional integer or tuple of integers. A tuple should be of the form (height, width). Resize the images to a given size. Works only for 'PASCAL_VOC_rectangles', 'Labelled_Tiles', 'superres' and 'Imagenet'. First resizes the image to the given size and then crops images of size equal to chip_size. Note: If resize_to is less than chip_size, resize_to is used as chip_size.
working_dir
Optional string. Sets the default path to be used as a prefix for saving trained models and checkpoints.
Keyword Arguments
Parameter
Description
n_masks
Optional int. Default value is 30. Required for the MaXDeepLab panoptic segmentation model. It represents the maximum number of class labels and instances any image can contain. To compute the exact value for your dataset, use the compute_n_masks() method available with the MaXDeepLab model.
downsample_factor
Optional float. Factor to downsample the images for image SuperResolution. For example, if the value is 2 and the image size is 256x256, it will create label images of size 128x128. Default is 4.
min_points
For dataset_type='PointCloud' and 'PointCloudOD': Optional int. Filtering based on the minimum number of points in a block. Set min_points=1000 to filter out blocks with fewer than 1000 points.
For dataset_type='PSETAE': Optional int. Number of pixels, equal to or a multiple of 64, to sample from each masked region of the training data, i.e. 64, 128, etc.
extra_features
Optional list. Contains a list of strings which mentions extra features to be used for training, applicable with dataset_type 'PointCloud' and 'PointCloudOD'. By default only x, y, and z are considered for training, irrespective of what features were exported. For example: ['intensity', 'numberOfReturns', 'returnNumber', 'red', 'green', 'blue', 'nearInfrared'].
remap_classes
Optional dictionary {int:int}. Mapping from class values to user defined values, in both training and validation data.
For dataset_type='PointCloud': It will remap the LAS classcode structure. For example, {1:3, 2:4} will remap LAS classcode 1 to 3 and classcode 2 to 4.
For dataset_type='PointCloudOD': It will remap object class ids. When this parameter is set as remap_classes={5:3, 2:4}, the '5' and '2' class values will be considered as '3' and '4', respectively.
classes_of_interest
Optional list of int.
For dataset_type='PointCloud': This will filter training blocks based on classes_of_interest. If we have 1, 3, 5, 7 LAS classcodes in our dataset, but we are mainly interested in the 1 and 3 classcodes, set classes_of_interest=[1,3]. Only those blocks which have either the 1 or 3 LAS classcode in them will be considered for training; the rest of the blocks will be filtered out. If remapping of the rest of the classcodes is required, set background_classcode to some value.
For dataset_type='PointCloudOD': This will filter training blocks based on classes_of_interest. If we have 2, 3, 10, 16 object classes in the 3D feature class, but we are mainly interested in the 2 and 10 object classes, set classes_of_interest=[2,10]. Only those blocks which have either the 2 or 10 object class in them will be considered for training; the rest of the blocks will be filtered out. Set background_classcode to True to discard other classes.
Note: classes_of_interest is applied on the remapped class structure, if remap_classes is also used.
background_classcode
This parameter is only applicable when classes_of_interest is specified.
For dataset_type=âPointCloudâ: Optional int. Default: None. This will remap other class values, except classes_of_interest to background_classcode.
For dataset_type=âPointCloudODâ: Optional Bool. Default: False. If set to âTrueâ, only classes_of_interest class values will be considered and rest of the class values will be discarded.
stratify
Optional boolean, default False. If True, prepare_data will try to maintain the class proportion in the train and validation data according to the val_split_pct. The default value for feature classification is True; for pixel classification it is False.
Note: Applies to single label feature classification, object detection and pixel classification.
timesteps_of_interest
Optional list. List of time steps of interest. This will filter the multi-temporal timeseries based on timesteps_of_interest. If the dataset has time steps [0, 1, 2, 3], but we are mainly interested in 0, 1 and 2, set timesteps_of_interest=[0,1,2]. Only those time steps will be considered for training; the rest of the time steps will be filtered out. Applicable only for dataset_type='PSETAE'.
channels_of_interest
Optional list. List of spectral bands/channels of interest. This will filter out bands from the rasters of the multi-temporal timeseries based on the channels_of_interest list. If we have bands [0,1,2,3,4] in our dataset, but we are mainly interested in 0, 1 and 2, set channels_of_interest=[0,1,2]. Only those spectral bands will be considered for training. Applicable only for dataset_type='PSETAE'.
n_temporal
Required int. Number of temporal observations or time steps. Applicable only for dataset_type='PSETAE'.
n_temporal_dates
Required list of strings. The dates of the observations will be used for the positional encoding and should be stored as a list of date strings in YYYY-MM-DD format. For example, if we have stacked imagery of n bands each from two dates, then ['YYYY-MM-DD', 'YYYY-MM-DD']. Applicable only for dataset_type='PSETAE'.
num_workers
Optional int. Default 0. Number of subprocesses to use for data loading on the Windows operating system. 0 means that the data will be loaded in the main process.
forecast_timesteps
Required int. Default set to 1. How far the model should forecast into the future. A forecast timestep is the interval at which predictions are made. For example, if we have an 8-hourly data point and we want to make an 8 hr, 16 hr, or 24 hr forecast, forecast_timesteps is set to 1, 2, or 3 respectively, and so on. In the case of hourly and monthly data points, for forecasts of 1, 2, 3 hr/month, the forecast timestep is set to 1, 2, 3 respectively, and so on. Applicable only for the climaX model architecture.
hrs_each_step
Optional int. Default set to 1 (hrs). Number of hours over which each data point is collected. For example, if you have 8-hourly, hourly, monthly, or daily data, then hrs_each_step is to be set to 8, 1, 720 (30 days * 24), or 24 hrs respectively. Applicable only for the climaX model architecture.
data object
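A minimal sketch of preparing an imagery data object from a folder produced by Export Training Data; the path is a placeholder.

from arcgis.learn import prepare_data

data = prepare_data(
    path=r"/path/to/exported/training/data",  # placeholder export folder
    batch_size=16,
    chip_size=224,
    val_split_pct=0.1,
)
data.show_batch()  # visually inspect a few training chips and their labels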
Prepares a tabular data object from input_features and optionally rasters.
Parameter
Description
input_features
Optional FeatureLayer object or spatially enabled dataframe. This contains features denoting the value of the dependent variable. Leave empty when using rasters with MLModel.
variable_predict
Optional string or list, denoting the field names of the variable to predict. Keep None for unsupervised training using MLModel. For timeseries, it works for continuous variables. As of now, only binary classification is supported in fairness evaluation.
explanatory_variables
Optional list containing field names from input_features. By default the field type is continuous. To override the field type to categorical, pass a 2-sized tuple in the list containing:
1. Field to be taken as input from the input_features.
2. True/False denoting categorical/continuous variable. If the field is text, the value should be 'text', and if the field is an image path, the value should be 'image'.
For example:
['Field_1', ('Field_2', True)] or ['Field_1', ('Field_3', 'text')]
Here Field_1 is treated as continuous, Field_2 as categorical, and Field_3 as text.
explanatory_rasters
Optional list containing Raster objects. By default the rasters are continuous. To mark a raster categorical, pass a 2-sized tuple containing:
1. Raster object.
2. True/False denoting categorical/continuous variable.
For example:
[raster_1, (raster_2, True)]
Here raster_1 is treated as continuous and raster_2 as categorical. To select only specific bands of a raster, pass a 2/3-sized tuple containing:
1. Raster object.
2. True/False denoting categorical/continuous variable.
3. Tuple holding the indexes of the bands to be used.
For example:
[raster_1, (raster_2, True, (0,)), (raster_3, (0,1,2))]
Here the band with index 0 will be chosen from raster_2 and treated as a categorical variable, and bands with indexes 0, 1 and 2 will be chosen from raster_3 and treated as continuous.
date_field
Optional field_name. This field contains the date in the input_features. The field type can be a string or date time field. If specified, the field will be split into Year, month, week, day, dayofweek, dayofyear, is_month_end, is_month_start, is_quarter_end, is_quarter_start, is_year_end, is_year_start, hour, minute, second, elapsed and these will be added to the prepared data as columns. All fields other than elapsed and dayofyear are treated as categorical.
cell_sizes
Size of H3 cells (specified as H3 resolution) for spatially aggregating input features and passing in the cell ids as additional explanatory variables to the model. If a spatial dataframe is passed as input_features, ensure that the spatial reference is 4326, and the geometry type is Point. Not applicable when explanatory_rasters are provided. Not applicable for MLModel.
distance_features
Optional list of FeatureLayer objects. Distance is calculated from features in these layers to features in input_features. The nearest distance to each feature is added to the prepared data. The field names added to the prepared data are 'NEAR_DIST_1', 'NEAR_DIST_2', etc.
preprocessors
For FullyConnectedNetworks: All the transforms are applied by default, hence users need not pass any additional transforms/preprocessors. For MLModel, which uses scikit-learn transforms:
Supply a column transformer object, or
supply a list of tuples.
For example:
[('Col_1', 'Col_2', Transform1()), ('Col_3', Transform2())]
Categorical data is encoded by default. If nothing is specified, default transforms are applied to fill missing values and normalize categorical data. For a Raster, use raster.name for the first band, raster.name_1 for the 2nd band, raster.name_2 for the 3rd, and so on.
val_split_pct
Optional float. Percentage of training data to keep as validation. By default 10% data is kept for validation.
seed
Optional integer. Random seed for reproducible train-validation split. Default value is 42.
batch_size
Optional integer. Batch size for mini batch gradient descent (Reduce it if getting CUDA Out of Memory Errors). Default value is 64.
index_field
Optional string. Field Name in the input features which will be used as index field for the data. Used for Time Series, to visualize values on the x-axis.
working_dir
Optional string. Sets the default path to be used as a prefix for saving trained models and checkpoints.
Keyword Arguments
Parameter
Description
stratify
Optional boolean. If True, prepare_tabulardata will try to maintain the class proportion in train and validation data according to the val_split_pct. Default value is False.
Note
Applies to classification problems.
random_split
Optional boolean. Sets the behaviour of the train and validation split to random or last n steps. If set to True, random sampling will be performed; otherwise, the last n steps will be used as validation. val_split_pct determines the number of records for validation. Default value is True.
Note
Applies to timeseries
TabularData object
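A minimal sketch of preparing a tabular data object from a spatially enabled dataframe; the dataframe and field names below are hypothetical.

from arcgis.learn import prepare_tabulardata

# sdf is a placeholder spatially enabled dataframe; "price", "sqft" and
# "zoning" are hypothetical field names.
data = prepare_tabulardata(
    input_features=sdf,
    variable_predict="price",
    explanatory_variables=["sqft", ("zoning", True)],  # continuous + categorical
    val_split_pct=0.1,
    batch_size=64,
)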
Prepares a text data object from the files present in the data folder.
Parameter
Description
path
Required directory path. The directory path where the training and validation files are present.
task
Required string. The task for which the dataset is prepared. Available choices at this point are 'classification', 'sequence_translation' or 'entity_recognition'.
text_columns
Optional string. This parameter is mandatory when the task is 'classification' or 'sequence_translation', and when the task is 'entity_recognition' with the input dataset_type as csv. The column that will contain the input text.
label_columns
Optional list. This parameter is mandatory when the task is 'classification' or 'sequence_translation'. The list of columns denoting the class label/translated text to predict. Provide a list of columns in case of a multi-label classification problem.
train_file
Optional string. The file name containing the training data. Supported file formats/extensions are .csv and .tsv. Default value is train.csv.
valid_file
Optional string. The file name containing the validation data. Supported file formats/extensions are .csv and .tsv. Default value is None. If None, a portion of the training data will be kept for validation (based on the value of the val_split_pct parameter).
val_split_pct
Optional float. Percentage of training data to keep as validation. By default 10% data is kept for validation.
seed
Optional integer. Random seed for reproducible train-validation split. Default value is 42.
batch_size
Optional integer. Batch size for mini batch gradient descent (Reduce it if getting CUDA Out of Memory Errors). Default value is 16.
process_labels
Optional boolean. If true, default processing functions will be called on label columns as well. Default value is False.
remove_html_tags
Optional boolean. If true, remove html tags from text. Default value is False.
remove_urls
Optional boolean. If true, remove urls from text. Default value is False.
working_dir
Optional string. Sets the default path to be used as a prefix for saving trained models and checkpoints.
dataset_type
Optional list. This parameter is mandatory when the task is 'entity_recognition'. Accepted data formats for this model are 'ner_json', 'BIO', 'LBIOU', or 'csv'. For the csv dataset type, if an entity has multiple values, they should be separated by ','.
class_mapping
Optional dictionary. This parameter is optional and can only be used when the task is entity recognition. The dictionary specifies the location entity. Use the format: class_mapping={'address_tag': 'location'}. The value linked to the 'address_tag' key will be identified as a location entity. If the model extracts multiple location entities from a single document, each location will be listed separately in the results.
Keyword Arguments
Parameter
Description
stratify
Optional boolean. If True, prepare_textdata will try to maintain the class proportion in train and validation data according to the val_split_pct. The default value is True.
Note
Applies only to single-label text classification.
encoding
Optional string. Applicable only when task is entity_recognition: the encoding used to read the csv/json file. Default is 'UTF-8'.
TextData object
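A minimal sketch of preparing a text classification dataset, assuming a folder containing a train.csv; the column names are hypothetical.

from arcgis.learn import prepare_textdata

data = prepare_textdata(
    path=r"/path/to/text/data",      # placeholder folder containing train.csv
    task="classification",
    text_columns="review_text",      # hypothetical text column
    label_columns=["sentiment"],     # hypothetical label column
    batch_size=16,
)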
Creates transformations for 3D datasets that can be used in prepare_data() to apply data augmentation with a 50% probability. Applicable for dataset_type='PointCloud' and dataset_type='PointCloudOD'.
Parameter
Description
rotation
An optional list of floats. It defines a value in degrees for each of the X, Y, and Z dimensions, which will be used to rotate a block around the X, Y, and Z axes.
Example: A value of [2, 3, 180] means a random value for each of X, Y, and Z will be selected within [-2, 2], [-3, 3], and [-180, 180], respectively. The block will rotate around the respective axis by the selected random value.
Note: For dataset_type='PointCloudOD', rotation around the X and Y axes will not be considered. Default: [2.5, 2.5, 45]
scaling
An optional float. It defines a percentage value that will be used to apply a scaling transformation to a block.
Example: A value of 5 means that for each of the X, Y, and Z dimensions a random value will be selected within the range [0, 5], and the block might be scaled up or scaled down randomly in the respective dimension.
Note: For dataset_type='PointCloudOD', the same scale percentage in all three directions is used. Default: 5
jitter
Optional float within [0, 1]. It defines a value in meters, which is used to add random variations to the X, Y, and Z of all points.
Example: If the value provided is 0.1, a random value is selected within the range [-0.1, 0.1] and added to the point's X coordinate. The same is applied to the Y and Z coordinates.
Note: Only applicable for dataset_type='PointCloud'. Default: 0.0.
Transform3d object
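A minimal sketch of building 3D transforms and passing them to prepare_data for a point cloud dataset; the path is a placeholder, and passing the transform via the transforms parameter is an assumption based on the prepare_data description above.

from arcgis.learn import prepare_data, Transform3d

# Augmentations applied with a 50% probability during training
tfm = Transform3d(rotation=[2.5, 2.5, 45], scaling=5, jitter=0.0)

data = prepare_data(
    path=r"/path/to/pointcloud/export",  # placeholder path
    dataset_type="PointCloud",
    transforms=tfm,                      # assumption: Transform3d passed here
)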
Automates the process of model selection, training and hyperparameter tuning of machine learning models within a specified time limit. Based upon MLJar (https://github.com/mljar/mljar-supervised/) and scikit-learn.
Note that automated machine learning support is provided only for supervised learning. Refer to https://supervised.mljar.com/
Parameter
Description
data
Required TabularDataObject. Returned data object from the prepare_tabulardata() function.
total_time_limit
Optional Int. The total time limit in seconds for AutoML training. Default is 3600 (1 Hr)
mode
Optional Str. Can be {Basic, Intermediate, Advanced}. This parameter defines the goal of AutoML and how intensive the AutoML search will be.
Basic : To be used when the user wants to explain and understand the data. Uses a 75%/25% train/test split. Uses the following models: Baseline, Linear, Decision Tree, Random Trees, XGBoost, Neural Network, and Ensemble. Has full explanations in reports: learning curves, importance plots, and SHAP plots.
Intermediate : To be used when the user wants to train a model that will be used in real-life use cases. Uses 5-fold CV (Cross-Validation). Uses the following models: Linear, Random Trees, LightGBM, XGBoost, CatBoost, Neural Network, and Ensemble. Has learning curves and importance plots in reports.
Advanced : To be used for machine learning competitions (maximum performance). Uses 10-fold CV (Cross-Validation). Uses the following models: Decision Tree, Random Trees, Extra Trees, XGBoost, CatBoost, Neural Network, Nearest Neighbors, Ensemble, and Stacking. It has only learning curves in the reports.
Default is Basic.
algorithms
Optional. List of str. The list of algorithms that will be used in the training. The algorithms can be: Linear, Decision Tree, Random Trees, Extra Trees, LightGBM, Xgboost, Neural Network
eval_metric
Optional Str. The metric to be used to compare models. Possible values:
For binary classification - logloss (default), auc, f1, average_precision, accuracy.
For multiclass classification - logloss (default), f1, accuracy.
For regression - rmse (default), mse, mae, r2, mape, spearman, pearson.
Note - If there are only 2 unique values in the target, binary classification is performed. If the number of unique values in the target is between 2 and 20 (inclusive), multiclass classification is performed. In all other cases, regression is performed on the dataset.
n_jobs
Optional. Int. Number of CPU cores to be used. By default, it is set to 1. Set it to -1 to use all the cores.
kwargs
sensitive_variables
Optional. List of strings. Variables in the feature class/dataframe which are sensitive and prone to model bias. Ex - ['sex', 'race'] or ['nationality']
fairness_metric
Optional. String. Name of the fairness metric based on which fairness optimization should be done on the evaluated models. Available metrics for binary classification are 'demographic_parity_difference', 'demographic_parity_ratio', 'equalized_odds_difference', 'equalized_odds_ratio'. 'demographic_parity_ratio' is the default. Available metrics for regression are 'group_loss_ratio' (default) and 'group_loss_difference'.
fairness_threshold
Optional. Float. Required when the chosen metric is group_loss_difference. The threshold value for the fairness metric. Default values are as follows:
- for demographic_parity_difference, the metric value should be below 0.25
- for demographic_parity_ratio, the metric value should be above 0.8
- for equalized_odds_difference, the metric value should be below 0.25
- for equalized_odds_ratio, the metric value should be above 0.8
- for group_loss_ratio, the metric value should be above 0.8
- for group_loss_difference, the metric value should be below 0.25
privileged_groups
Optional. List. List of privileged groups in the sensitive attribute. For example, in a binary classification task, a privileged group is the one with the highest selection rate. Example value: [{'sex': 'Male'}]
underprivileged_groups
Optional. List. List of underprivileged groups in the sensitive attribute. For example, in a binary classification task, an underprivileged group is the one with the lowest selection rate. Example value: [{'sex': 'Female'}]
AutoML Object
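A minimal sketch of running AutoML on a prepared tabular data object; sdf and the field name are hypothetical placeholders.

from arcgis.learn import prepare_tabulardata, AutoML

data = prepare_tabulardata(input_features=sdf, variable_predict="damage_class")  # placeholders
automl = AutoML(data, total_time_limit=3600, mode="Basic", eval_metric="logloss")
automl.fit()
automl.report()        # report of the evaluated models and their performance
print(automl.score())  # accuracy for classification, R2 for regression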
Shows sample results for the model.
tuple/dataframe
Fits the AutoML model.
Creates an AutoML Model Object from an Esri Model Definition (EMD) file. The model object created can only be used for inference on a new dataset and cannot be retrained.
Parameter
Description
emd_path
Required string. Path to Esri Model Definition file.
AutoML Object
Predicts on data from a feature layer, dataframe, and/or raster data.
Parameter
Description
input_features
Optional FeatureLayer or spatial dataframe. Required if prediction_type='features'. Contains features with location and some or all fields required to infer the dependent variable value.
explanatory_rasters
Optional list. Required if prediction_type='raster'. Contains a list of raster objects containing some or all fields required to infer the dependent variable value.
datefield
Optional string. Field name from the feature layer that contains the date and time for the input features. Same as prepare_tabulardata().
cell_sizes
Size of H3 cells (specified as H3 resolution) for spatially aggregating input features and passing in the cell ids as additional explanatory variables to the model. If a spatial dataframe is passed as input_features, ensure that the spatial reference is 4326, and the geometry type is Point. Not applicable when explanatory_rasters are provided.
distance_features
Optional list of FeatureLayer objects. These layers are used for the calculation of the fields 'NEAR_DIST_1', 'NEAR_DIST_2', etc. in the output dataframe. These fields contain the nearest feature distance from the input_features. Same as prepare_tabulardata().
output_layer_name
Optional string. Used for publishing the output layer.
gis
Optional GIS Object. Used for publishing the item. If not specified, the active GIS user is taken.
prediction_type
Optional String. Set 'features' or 'dataframe' to make output feature layer predictions; with this, the feature layer argument is required.
Set 'raster' to make a prediction raster; with this, rasters must be specified.
output_raster_path
Optional path. Required when prediction_type='raster'; the output raster will be saved to this path.
match_field_names
Optional dictionary. Specify mapping of field names from prediction set to training set. For example:
{
    "Field_Name_1": "Field_1",
    "Field_Name_2": "Field_2"
}
confidence
Optional Bool. Set confidence to True to get the prediction confidence for classification use cases. Default is True.
FeatureLayer if prediction_type='features', dataframe for prediction_type='dataframe', else creates an output raster.
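A hedged sketch of calling predict on the fitted AutoML object from the previous example; the input layer and layer name are placeholders.

# new_features_layer is a placeholder FeatureLayer with the predictor fields
result = automl.predict(
    input_features=new_features_layer,
    prediction_type="features",
    output_layer_name="automl_predictions",  # hypothetical output name
    confidence=True,
)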
output from AutoML's model.predict_proba() with prediction probability for the training data
a report of the different models trained by AutoML along with their performance.
Saves the model in the path specified. Creates an Esri Model and a dlpk. Uses pickle to save the model and transforms.
path
output from AutoML's model.score(): R2 score in case of regression and accuracy in case of classification.
Shows sample results for the model.
dataframe
Automates the process of model selection, training and hyperparameter tuning of arcgis.learn supported deep learning models within a specified time limit.
Parameter
Description
data
Required ImageryDataObject. Returned data object from the prepare_data() function.
total_time_limit
Optional Int. The total time limit in hours for AutoDL training. Default is 2 Hr.
mode
Optional String. Can be 'basic' or 'advanced'.
basic : To be used when the user wants to train all selected networks.
advanced : To be used when the user wants to tune the hyperparameters of the two best performing models from basic mode.
network
Optional list of str. The list of models that will be used in the training. For example, supported Object Detection models: ['SingleShotDetector', 'RetinaNet', 'FasterRCNN', 'YOLOv3', 'MaskRCNN', 'DETReg', 'RTDetrV2', 'ATSS', 'CARAFE', 'CascadeRCNN', 'CascadeRPN', 'DCN', 'Detectors', 'DoubleHeads', 'DynamicRCNN', 'EmpiricalAttention', 'FCOS', 'FoveaBox', 'FSAF', 'GHM', 'LibraRCNN', 'PaFPN', 'PISA', 'RegNet', 'RepPoints', 'Res2Net', 'SABL', 'VFNet']. Supported Pixel Classification models: ['DeepLab', 'UnetClassifier', 'PSPNetClassifier', 'SamLoRA', 'ANN', 'APCNet', 'CCNet', 'CGNet', 'HRNet', 'DeepLabV3Plus', 'Mask2Former', 'DMNet', 'DNLNet', 'FastSCNN', 'FCN', 'GCNet', 'MobileNetV2', 'NonLocalNet', 'PSANet', 'SemFPN', 'UperNet']
verbose
Optional Boolean. To be used to display logs while training the models. Default is True.
AutoDL Object
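A minimal sketch of running AutoDL on an exported imagery dataset; the path and network list are illustrative placeholders.

from arcgis.learn import prepare_data, AutoDL

data = prepare_data(r"/path/to/exported/chips", batch_size=16)  # placeholder path
autodl = AutoDL(
    data,
    total_time_limit=2,                           # hours
    mode="basic",
    network=["SingleShotDetector", "RetinaNet"],  # illustrative subset
)
autodl.fit()
autodl.report()  # HTML comparison of the trained networks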
Calculates the average of the 'average precision score' of all classes for the selected networks.
Trains the selected networks for the specified number of epochs using the specified learning rates.
Calculates the mIOU of all classes for the selected networks.
Returns an HTML report of the different models trained by AutoDL along with their performance.
Returns output from AutoDL's model.score(): 'average precision score' in case of detection and accuracy in case of classification.
Shows sample results for the model.
Parameter
Description
rows
Optional number of rows. By default, 5 rows are displayed.
dataframe
Supported classification models.
Supported detection models.
ImageryModel is used to fine-tune models trained using AutoDL.
List of available metrics that are displayed in the training table. Set the monitor value to one of these while calling the fit method.
Computes average precision on the validation set for each class.
Parameter
Description
detect_thresh
Optional float. The probability above which a detection will be considered for computing average precision.
iou_thresh
Optional float. The intersection over union threshold with the ground truth labels, above which a predicted bounding box will be considered a true positive.
mean
Optional bool. If False returns class-wise average precision otherwise returns mean average precision.
dict if mean is False otherwise float
Trains the model for the specified number of epochs using the specified learning rates.
Parameter
Description
epochs
Optional integer. Number of cycles of training on the data. Increase it if the model is underfitting. Default value is 10.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to True, training will stop if the parameter monitor value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1. The default value is False.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
Loads a compatible saved model for inferencing or fine tuning from the disk, which can be used to further fine tune the models saved using AutoDL.
Parameter
Description
path
Required string. Path to Esri Model Definition(EMD) or DLPK file.
data
Required ImageryDataObject. Returned data object from the prepare_data() function.
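A hedged sketch of loading a model saved by AutoDL into ImageryModel and fine-tuning it; the paths are placeholders and the no-argument constructor is an assumption.

from arcgis.learn import prepare_data, ImageryModel

data = prepare_data(r"/path/to/exported/chips")              # placeholder path
model = ImageryModel()                                       # assumption: no-arg constructor
model.load(r"/path/to/autodl/models/best_model.dlpk", data)  # placeholder DLPK path
model.lr_find()
model.fit(epochs=10, early_stopping=True, checkpoint=True)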
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is True.
Computes mean IOU on the validation set for each class.
Parameter
Description
mean
Optional bool. If False, returns class-wise mean IOU; otherwise returns the mean IOU of all classes combined.
show_progress
Optional bool. Displays the progress bar if True.
dict if mean is False otherwise float
Plot validation and training losses after fitting the model.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector, FeatureClassifier and RetinaNet. The torchscript format is supported by SiamMask. For usage of the SiamMask model in ArcGIS Pro 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional parameters: Boolean overwrite. If True, it will overwrite the item on ArcGIS Online/Enterprise. Default False.
Displays the results of a trained model on a part of the validation set.
rows
Optional int. Number of rows of results to be displayed.
Unfreezes the earlier layers of the model for fine-tuning.
Creates an image classifier to classify the area occupied by a geographical feature based on the imagery it overlaps with.
Parameter
Description
data
Required fastai Databunch. Returned data object from the prepare_data() function.
backbone
Optional string. Backbone convolutional neural network model used for feature extraction, which is resnet34 by default. Supported backbones: ResNet family and specified Timm models (experimental support) from backbones().
pretrained_path
Optional string. Path where pre-trained model is saved.
mixup
Optional boolean. If set to True, it creates new training images by randomly mixing training set images.
The default is set to False.
oversample
Optional boolean. If set to True, it oversamples unbalanced classes of the dataset during training. Not supported with MultiLabel dataset.
backend
Optional string. Controls the backend framework to be used for this model, which is 'pytorch' by default. Valid options are 'pytorch' and 'tensorflow'.
wavelengths
Optional list. A list of central wavelengths corresponding to each data band (in micrometers).
FeatureClassifier Object
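A minimal sketch of training a FeatureClassifier on chips exported in the Labeled_Tiles format; the path and model name are placeholders.

from arcgis.learn import prepare_data, FeatureClassifier

data = prepare_data(r"/path/to/labeled_tiles", batch_size=16)  # placeholder path
fc = FeatureClassifier(data, backbone="resnet34")
fc.lr_find()                       # suggest a learning rate
fc.fit(epochs=10)
fc.save("feature_classifier_v1")   # hypothetical model name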
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Supported list of backbones for this model.
Deprecated since version 1.7.1: Please use classify_objects() instead.
Deprecated since version 1.7.1: Please use classify_objects() instead.
Categorizes each feature by classifying its attachments or an image of its geographical area (using the provided Imagery Layer) and updates the feature layer with the prediction results in the output_label_field. Deprecated: Use the Classify Objects Using Deep Learning tool or classify_objects() instead.
Parameter
Description
feature_layer
Required. Public FeatureLayer or path of local feature class for classification, with read, write, edit permissions.
raster
Optional. ImageryLayer or path of local raster to be used for exporting image chips. (Requires arcpy)
class_value_field
Required string. Output field to be added in the layer, containing class value of predictions.
class_name_field
Required string. Output field to be added in the layer, containing class name of predictions.
confidence_field
Optional string. Output column name to be added in the layer which contains the confidence score.
cell_size
Optional float. Cell size to be used for exporting the image chips.
coordinate_system
Optional. Cartographic Coordinate System to be used for exporting the image chips.
predict_function
Optional list of tuples. Used for calculation of the final prediction result when each feature has more than one attachment. The predict_function takes as input a list of tuples; each tuple has the predicted class as its first element and the confidence score as its second. The function should return the final tuple classifying the feature and its confidence.
batch_size
Optional integer. The number of images or tiles to process in a single batch.
The default value is 64.
overwrite
Optional boolean. If set to True the output fields will be overwritten by new values.
The default value is False.
Boolean : True if operation is successful, False otherwise
Deprecated in ArcGIS version 1.9.1 and later: Use the Classify Objects Using Deep Learning tool or classify_objects()
Classifies the exported images and updates the feature layer with the prediction results in the output_label_field. Works with RGB images only.
Parameter
Description
feature_layer
Required. FeatureLayer for classification.
labeled_tiles_directory
Required. Folder structure containing images and labels folder. The chips should have been generated using the export training data tool in the Labeled Tiles format, and the labels should contain the OBJECTIDs of the features to be classified.
input_label_field
Required. The value field name used to create the labeled tiles. This field should contain the OBJECTIDs of the features to be classified. In the case of attachments, this field is not used.
output_label_field
Required. Output column name to be added in the layer which contains predictions.
confidence_field
Optional. Output column name to be added in the layer which contains the confidence score.
predict_function
Optional. Used for calculation of the final prediction result when each feature has more than one attachment. The predict_function takes as input a list of tuples; each tuple has the predicted class as its first element and the confidence score as its second. The function should return the final tuple classifying the feature and its confidence.
Boolean : True if operation is successful, False otherwise
Trains the model for the specified number of epochs using the specified learning rates.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to True, training will stop if the parameter monitor value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1. The default value is False.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. This feature is experimental. The default value is False.
Supported list of foundation model backbones for this model.
Creates a Feature classifier from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from the prepare_data() function, or None for inferencing.
FeatureClassifier Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is True.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch based models are supported. The default value is False.
Plots a confusion matrix of the model predictions to evaluate accuracy.
kwargs
Parameter
Description
thresh
Confidence score threshold for multilabel predictions. Defaults to 0.5.
Plots the hard examples with their heatmaps.
Parameter
Description
num_examples
Number of hard examples to plot.
Plot validation and training losses after fitting the model.
Runs prediction on an Image. Works with RGB images only.
Parameter
Description
img_path
Required. Path to the image file to make the predictions on.
visualize
Optional: Set this parameter to True to visualize the image being predicted.
gradcam
Optional: Set this parameter to True to get a gradcam visualization to help with explainability of the prediction. If set to True, the visualize parameter must also be set to True.
prediction label and confidence
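A hedged sketch of single-image inference with the classifier from the earlier FeatureClassifier example; the image path is a placeholder.

# Works with RGB images only; returns the predicted label and confidence
label, confidence = fc.predict(
    r"/path/to/image.jpg",  # placeholder image path
    visualize=True,
    gradcam=True,           # requires visualize=True, per the parameter table above
)
print(label, confidence)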
Predicts on images present in the given folder and creates a feature layer. The images stored in the folder contain GPS information as part of EXIF metadata. Works with RGB images only.
Parameter
Description
folder
Required String. Folder containing images to inference on.
feature_layer_name
Required String. The name of the feature layer used to publish.
gis
Optional GIS Object, the GIS on which this tool runs. If not specified, the active GIS is used.
prediction_field
Optional String. The field name to use to add predictions.
confidence_field
Optional String. The field name to use to add confidence.
FeatureCollection Object
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector (tensorflow backend only) and RetinaNet (tensorflow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
gradcam
Optional boolean. Setting this to True for labelled tiles will enable the 'explainability_map' parameter in the Classify Objects Using Deep Learning tool in ArcGIS Pro/Online. The explainability_map parameter can be used to visualize the Grad-CAM from the tool. Setting this to True will also save the Explainability Map in the saved folder. Default is set to False. This feature works only with RGB images.
kwargs
Optional Parameters.
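Example (a minimal sketch of save(); assumes model is an already trained arcgis.learn model, and the GIS connection and path are placeholders):
from arcgis.gis import GIS
gis = GIS('home')
# Save the weights, EMD and DLPK locally, then publish the DLPK as a portal item
model.save(
    r'models/my_model',      # intermediate directories are created if needed
    framework='PyTorch',     # default; only PyTorch-saved models can be reloaded with from_model
    publish=True,
    gis=gis,
    save_optimizer=False,
)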
Displays the results of a trained model on a part of the validation set.
Parameter
Description
rows
Optional int. Number of rows of results to be displayed.
Supported list of backbones for this model.
Supported dataset types for this model.
Supported list of torchgeo backbones for this model.
Supported list of transformer backbones for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Model architecture from https://arxiv.org/abs/1506.01497. Creates a FasterRCNN
object detection model, based on https://github.com/pytorch/vision/blob/master/torchvision/models/detection/faster_rcnn.py.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function.
backbone
Optional string. Backbone convolutional neural network model used for feature extraction, which is resnet50 by default. Supported backbones: ResNet family and specified Timm models (experimental support) from backbones().
pretrained_path
Optional string. Path where pre-trained model is saved.
kwargs
FasterRCNN
Object
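Example (a minimal sketch of creating and training a FasterRCNN model; the chip folder and hyperparameters are placeholders):
from arcgis.learn import prepare_data, FasterRCNN
data = prepare_data(r'/data/exported_chips', batch_size=8)   # chips exported in PASCAL_VOC_rectangles format
model = FasterRCNN(data, backbone='resnet50')
model.fit(epochs=10, lr=None)   # lr=None lets an optimal learning rate be deduced automatically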
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes average precision on the validation set for each class.
Parameter
Description
detect_thresh
Optional float. The probability above which a detection will be considered for computing average precision.
iou_thresh
Optional float. The intersection over union threshold with the ground truth labels, above which a predicted bounding box will be considered a true positive.
mean
Optional bool. If False, returns class-wise average precision; otherwise returns mean average precision.
dict if mean is False otherwise float
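Example (a short sketch; the method is assumed to be exposed as average_precision_score, following arcgis.learn naming, and the thresholds are illustrative):
# Class-wise average precision on the validation set
ap_per_class = model.average_precision_score(detect_thresh=0.2, iou_thresh=0.1, mean=False)   # dict of class -> AP
# Mean average precision as a single float
mAP = model.average_precision_score(mean=True)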
Supported list of backbones for this model.
Train the model for the specified number of epochs using the specified learning rates.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the value of the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save a checkpoint during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing is off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1. The default value is 'False'.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch-based models are supported. This feature is experimental. The default value is 'False'.
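Example (a short sketch of a typical training call with early stopping and checkpointing; the epoch count and monitored metric are illustrative):
lr = model.lr_find()             # suggest a learning rate first
model.fit(
    epochs=25,
    lr=lr,
    one_cycle=True,
    early_stopping=True,         # stop after 5 epochs without improvement in `monitor`
    checkpoint=True,             # keep and reload the best model at the end of training
    monitor='valid_loss',
)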
Creates a FasterRCNN
object from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
FasterRCNN
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the saved file's state dict match the keys returned by the model's Module.state_dict.
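Example (a minimal sketch of reloading a saved model; the paths are placeholders, and the loader methods shown follow the descriptions above, so treat the exact names and signatures as assumptions):
# Continue training or inferencing from a saved Deep Learning Package
model.load(r'models/my_model/my_model.dlpk', strict=True)
# Or recreate the model for inference only from its EMD/DLPK
model = FasterRCNN.from_model(r'models/my_model/my_model.emd', data=None)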
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch-based models are supported. The default value is 'False'.
Plot validation and training losses after fitting the model.
Runs prediction on an Image. This method is only supported for RGB images.
Parameter
Description
image_path
Required. Path to the image file to make the predictions on.
threshold
Optional float. The probability above which a detection will be considered valid.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
return_scores
Optional boolean. Will return the probability scores of the bounding box predictions if True.
visualize
Optional boolean. Displays the image with predicted bounding boxes if True.
resize
Optional boolean. Resizes the image to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the image is resized to that size instead.
By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the image (of the same size as the model was trained on).
Returns a tuple with predictions, labels and optionally confidence scores if return_scores=True. The predicted bounding boxes are returned as a list of lists containing the xmin, ymin, width and height of each predicted object in each image. The labels are returned as a list of class values and the confidence scores are returned as a list of floats indicating the confidence of each prediction.
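Example (a short sketch of single-image detection; the image path and thresholds are placeholders):
boxes, labels, scores = model.predict(
    r'/data/test/scene.tif',
    threshold=0.5,
    nms_overlap=0.1,
    return_scores=True,
    visualize=True,
    resize=False,       # False keeps sliding-window inference on chips of the training size
)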
Runs prediction on a video and appends the output VMTI predictions in the metadata file. This method is only supported for RGB images.
Parameter
Description
input_video_path
Required. Path to the video file to make the predictions on.
metadata_file
Required. Path to the metadata csv file where the predictions will be saved in VMTI format.
threshold
Optional float. The probability above which a detection will be considered.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
track
Optional bool. Set this parameter as True to enable object tracking.
visualize
Optional boolean. If True a video is saved with prediction results.
output_file_path
Optional path. Path of the final video to be saved. If not supplied, video will be saved at path input_video_path appended with _prediction.avi. Supports only AVI and MP4 formats.
multiplex
Optional boolean. Runs Multiplex using the VMTI detections.
multiplex_file_path
Optional path. Path of the multiplexed video to be saved. By default a new file with _multiplex.MOV extension is saved in the same folder.
tracking_options
Optional dictionary. Set different parameters for object tracking. assignment_iou_thrd parameter is used to assign threshold for assignment of trackers, vanish_frames is the number of frames the object should be absent to consider it as vanished, detect_frames is the number of frames an object should be detected to track it.
visual_options
Optional dictionary. Set different parameters for visualization. show_scores boolean, to view scores on predictions, show_labels boolean, to view labels on predictions, thickness integer, to set the thickness level of box, fontface integer, fontface value from opencv values, color tuple (B, G, R), tuple containing values between 0-255.
resize
Optional boolean. Resizes the video frames to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the video frames are resized to that size instead.
By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the frame (of the same size as the model was trained on).
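Example (a sketch of video inference with tracking; the method is assumed to be exposed as predict_video, and the paths are placeholders):
model.predict_video(
    input_video_path=r'/data/videos/survey.mp4',
    metadata_file=r'/data/videos/survey.csv',   # VMTI detections are appended to this file
    threshold=0.5,
    track=True,
    visualize=True,
    output_file_path=r'/data/videos/survey_prediction.avi',
)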
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. The model is stored at the pre-defined location unless a path is passed, in which case it is saved at the specified path, with the model name as the directory name and all intermediate directories created.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro 2.8 or later, load the model saved with the PyTorch framework and export it with the torchscript framework using ArcGIS API for Python v1.8.5 or later. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified, the active GIS user is used.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False.
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of a trained model on a part of the validation set.
Parameter
Description
rows
Optional int. Number of rows of results to be displayed.
thresh
Optional float. The probability above which a detection will be considered valid.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
Supported list of backbones for this model.
Supported dataset types for this model.
Supported list of torchgeo backbones for this model.
Supported list of transformer backbones for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Creates a RetinaNet Object Detector with the specified zoom scales and aspect ratios. Based on the Fast.ai notebook
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function.
scales
Optional list of float values. Zoom scales of anchor boxes.
ratios
Optional list of float values. Aspect ratios of anchor boxes.
backbone
Optional string. Backbone convolutional neural network model used for feature extraction, which is resnet50 by default. Supported backbones: ResNet family and specified Timm models (experimental support) from backbones().
pretrained_path
Optional string. Path where pre-trained model is saved.
RetinaNet
Object
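Example (a minimal sketch of creating a RetinaNet detector with custom anchor settings; the scales, ratios and data path are illustrative):
from arcgis.learn import prepare_data, RetinaNet
data = prepare_data(r'/data/exported_chips', batch_size=8)
model = RetinaNet(data, scales=[1, 0.8, 0.6], ratios=[0.5, 1, 2], backbone='resnet50')
model.fit(epochs=15, lr=None)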
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes average precision on the validation set for each class.
Parameter
Description
detect_thresh
Optional float. The probability above which a detection will be considered for computing average precision.
iou_thresh
Optional float. The intersection over union threshold with the ground truth labels, above which a predicted bounding box will be considered a true positive.
mean
Optional bool. If False, returns class-wise average precision; otherwise returns mean average precision.
dict if mean is False otherwise float
Supported list of backbones for this model.
Train the model for the specified number of epochs using the specified learning rates.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the value of the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save a checkpoint during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing is off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1. The default value is 'False'.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch-based models are supported. This feature is experimental. The default value is 'False'.
Creates a RetinaNet Object Detector from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
RetinaNet
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the saved file's state dict match the keys returned by the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch-based models are supported. The default value is 'False'.
Plot validation and training losses after fitting the model.
Predicts and displays the results of a trained model on a single image. This method is only supported for RGB images.
Parameter
Description
image_path
Required. Path to the image file to make the predictions on.
thresh
Optional float. The probability above which a detection will be considered valid.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
return_scores
Optional boolean. Will return the probability scores of the bounding box predictions if True.
visualize
Optional boolean. Displays the image with predicted bounding boxes if True.
resize
Optional boolean. Resizes the image to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the image is resized to that size instead.
By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the image (of the same size as the model was trained on).
batch_size
Optional int. Batch size to be used during tiled inferencing. Default value 1.
List of xmin, ymin, width and height of predicted bounding boxes on the given image.
Runs prediction on a video and appends the output VMTI predictions in the metadata file. This method is only supported for RGB images.
Parameter
Description
input_video_path
Required. Path to the video file to make the predictions on.
metadata_file
Required. Path to the metadata csv file where the predictions will be saved in VMTI format.
threshold
Optional float. The probability above which a detection will be considered.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
track
Optional bool. Set this parameter as True to enable object tracking.
visualize
Optional boolean. If True a video is saved with prediction results.
output_file_path
Optional path. Path of the final video to be saved. If not supplied, video will be saved at path input_video_path appended with _prediction.avi. Supports only AVI and MP4 formats.
multiplex
Optional boolean. Runs Multiplex using the VMTI detections.
multiplex_file_path
Optional path. Path of the multiplexed video to be saved. By default a new file with _multiplex.MOV extension is saved in the same folder.
tracking_options
Optional dictionary. Set different parameters for object tracking. assignment_iou_thrd parameter is used to assign threshold for assignment of trackers, vanish_frames is the number of frames the object should be absent to consider it as vanished, detect_frames is the number of frames an object should be detected to track it.
visual_options
Optional dictionary. Set different parameters for visualization. show_scores boolean, to view scores on predictions, show_labels boolean, to view labels on predictions, thickness integer, to set the thickness level of box, fontface integer, fontface value from opencv values, color tuple (B, G, R), tuple containing values between 0-255.
resize
Optional boolean. Resizes the video frames to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the video frames are resized to that size instead.
By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the frame (of the same size as the model was trained on).
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. The model is stored at the pre-defined location unless a path is passed, in which case it is saved at the specified path, with the model name as the directory name and all intermediate directories created.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro 2.8 or later, load the model saved with the PyTorch framework and export it with the torchscript framework using ArcGIS API for Python v1.8.5 or later. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified, the active GIS user is used.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False.
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of a trained model on a part of the validation set.
Parameter
Description
rows
Optional int. Number of rows of results to be displayed.
thresh
Optional float. The probability above which a detection will be considered valid.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
Supported list of backbones for this model.
Supported dataset types for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Creates a YOLOv3 object detector.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function. YOLOv3 only supports image sizes in multiples of 32 (e.g. 256, 416, etc.)
pretrained_path
Optional string. Path where pre-trained model is saved.
YOLOv3
Object
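Example (a minimal sketch of creating a YOLOv3 detector; chip_size must be a multiple of 32, and the values shown are placeholders):
from arcgis.learn import prepare_data, YOLOv3
data = prepare_data(r'/data/exported_chips', chip_size=416, batch_size=8)
model = YOLOv3(data)
model.fit(epochs=20, lr=None)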
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes average precision on the validation set for each class.
Parameter
Description
detect_thresh
Optional float. The probability above which a detection will be considered for computing average precision. Defaults to 0.1. To be modified according to the dataset and training.
iou_thresh
Optional float. The intersection over union threshold with the ground truth labels, above which a predicted bounding box will be considered a true positive.
mean
Optional bool. If False, returns class-wise average precision; otherwise returns mean average precision.
dict if mean is False otherwise float
Train the model for the specified number of epochs using the specified learning rates.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the value of the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save a checkpoint during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing is off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1. The default value is 'False'.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch-based models are supported. This feature is experimental. The default value is 'False'.
Creates a YOLOv3 Object Detector from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
YOLOv3
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the saved file's state dict match the keys returned by the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch-based models are supported. The default value is 'False'.
Plot validation and training losses after fitting the model.
Predicts and displays the results of a trained model on a single image. The image size should at least be 416x416px if using COCO pretrained weights. This method is only supported for RGB images.
Parameter
Description
image_path
Required. Path to the image file to make the predictions on.
threshold
Optional float. The probability above which a detection will be considered valid. Defaults to 0.1. To be modified according to the dataset and training.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
return_scores
Optional boolean. Will return the probability scores of the bounding box predictions if True.
visualize
Optional boolean. Displays the image with predicted bounding boxes if True.
resize
Optional boolean. Resizes the image to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the image is resized to that size instead.
By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the image (of the same size as the model was trained on).
batch_size
Optional int. Batch size to be used during tiled inferencing. Default value 1.
List of xmin, ymin, width and height of predicted bounding boxes on the given image.
Runs prediction on a video and appends the output VMTI predictions in the metadata file. This method is only supported for RGB images.
Parameter
Description
input_video_path
Required. Path to the video file to make the predictions on.
metadata_file
Required. Path to the metadata csv file where the predictions will be saved in VMTI format.
threshold
Optional float. The probability above which a detection will be considered. Defaults to 0.1. To be modified according to the dataset and training.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
track
Optional bool. Set this parameter as True to enable object tracking.
visualize
Optional boolean. If True a video is saved with prediction results.
output_file_path
Optional path. Path of the final video to be saved. If not supplied, video will be saved at path input_video_path appended with _prediction.avi. Supports only AVI and MP4 formats.
multiplex
Optional boolean. Runs Multiplex using the VMTI detections.
multiplex_file_path
Optional path. Path of the multiplexed video to be saved. By default a new file with _multiplex.MOV extension is saved in the same folder.
tracking_options
Optional dictionary. Set different parameters for object tracking. assignment_iou_thrd parameter is used to assign threshold for assignment of trackers, vanish_frames is the number of frames the object should be absent to consider it as vanished, detect_frames is the number of frames an object should be detected to track it.
visual_options
Optional dictionary. Set different parameters for visualization. show_scores boolean, to view scores on predictions, show_labels boolean, to view labels on predictions, thickness integer, to set the thickness level of box, fontface integer, fontface value from opencv values, color tuple (B, G, R), tuple containing values between 0-255.
resize
Optional boolean. Resizes the video frames to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the video frames are resized to that size instead.
By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the frame (of the same size as the model was trained on).
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. The model is stored at the pre-defined location unless a path is passed, in which case it is saved at the specified path, with the model name as the directory name and all intermediate directories created.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro 2.8 or later, load the model saved with the PyTorch framework and export it with the torchscript framework using ArcGIS API for Python v1.8.5 or later. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified, the active GIS user is used.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False.
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of a trained model on a part of the validation set.
Parameter
Description
rows
Optional int. Number of rows of results to be displayed.
thresh
Optional float. The probability above which a detection will be considered valid. Defaults to 0.1. To be modified according to the dataset and training.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
Supported backbones for this model.
Supported dataset types for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Creates a Single Shot Detector with the specified grid sizes, zoom scales and aspect ratios. Based on Fast.ai MOOC Version2 Lesson 9.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function.
grids
Required list. Grid sizes used for creating anchor boxes.
zooms
Optional list. Zooms of anchor boxes.
ratios
Optional list of tuples. Aspect ratios of anchor boxes.
backbone
Optional string. Backbone convolutional neural network model used for feature extraction, which is resnet34 by default. Supported backbones: ResNet, DenseNet, VGG families and specified Timm models (experimental support) from backbones().
dropout
Optional float. Dropout probability. Increase it to reduce overfitting.
bias
Optional float. Bias for SSD head.
focal_loss
Optional boolean. Uses Focal Loss if True.
pretrained_path
Optional string. Path where pre-trained model is saved.
location_loss_factor
Optional float. Sets the weight of the bounding box loss. This should be strictly between 0 and 1. The default is None, which gives equal weight to both location and classification loss. This factor adjusts the focus of the model on the location of the bounding box.
ssd_version
Optional int within [1,2]. Use version=1 for arcgis v1.6.2 or earlier.
backend
Optional string. Controls the backend framework to be used for this model, which is 'pytorch' by default. Valid options are 'pytorch' and 'tensorflow'.
wavelengths
Optional list. A list of central wavelengths corresponding to each data band (in micrometers).
SingleShotDetector
Object
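Example (a minimal sketch of creating a Single Shot Detector with explicit anchor settings; the grids, zooms, ratios and data path shown are illustrative):
from arcgis.learn import prepare_data, SingleShotDetector
data = prepare_data(r'/data/exported_chips', batch_size=16)
model = SingleShotDetector(
    data,
    grids=[4],                # a 4x4 grid of anchor cells
    zooms=[1.0],
    ratios=[[1.0, 1.0]],
    backbone='resnet34',
    focal_loss=True,
)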
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes average precision on the validation set for each class.
Parameter
Description
detect_thresh
Optional float. The probability above which a detection will be considered for computing average precision.
iou_thresh
Optional float. The intersection over union threshold with the ground truth labels, above which a predicted bounding box will be considered a true positive.
mean
Optional bool. If False, returns class-wise average precision; otherwise returns mean average precision.
dict if mean is False otherwise float
Supported list of backbones for this model.
Train the model for the specified number of epochs using the specified learning rates.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the value of the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save a checkpoint during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing is off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1. The default value is 'False'.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch-based models are supported. This feature is experimental. The default value is 'False'.
Creates a Single Shot Detector from an Esri Model Definition (EMD) file.
Parameter
Description
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
emd_path
Required string. Path to Esri Model Definition file.
SingleShotDetector
Object
Creates a Single Shot Detector from an Esri Model Definition (EMD) file.
Note: Only supported for Pytorch models.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
SingleShotDetector
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the saved file's state dict match the keys returned by the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch-based models are supported. The default value is 'False'.
Plot validation and training losses after fitting the model.
Runs prediction on an Image. This method is only supported for RGB images.
Parameter
Description
image_path
Required. Path to the image file to make the predictions on.
threshold
Optional float. The probability above which a detection will be considered valid.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
return_scores
Optional boolean. Will return the probability scores of the bounding box predictions if True.
visualize
Optional boolean. Displays the image with predicted bounding boxes if True.
resize
Optional boolean. Resizes the image to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the image is resized to that size instead.
By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the image (of the same size as the model was trained on).
batch_size
Optional int. Batch size to be used during tiled inferencing. Default value 1.
List of xmin, ymin, width and height of predicted bounding boxes on the given image.
Runs prediction on a video and appends the output VMTI predictions in the metadata file. This method is only supported for RGB images.
Parameter
Description
input_video_path
Required. Path to the video file to make the predictions on.
metadata_file
Required. Path to the metadata csv file where the predictions will be saved in VMTI format.
threshold
Optional float. The probability above which a detection will be considered.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
track
Optional bool. Set this parameter as True to enable object tracking.
visualize
Optional boolean. If True a video is saved with prediction results.
output_file_path
Optional path. Path of the final video to be saved. If not supplied, video will be saved at path input_video_path appended with _prediction.avi. Supports only AVI and MP4 formats.
multiplex
Optional boolean. Runs Multiplex using the VMTI detections.
multiplex_file_path
Optional path. Path of the multiplexed video to be saved. By default a new file with _multiplex.MOV extension is saved in the same folder.
tracking_options
Optional dictionary. Set different parameters for object tracking. assignment_iou_thrd parameter is used to assign threshold for assignment of trackers, vanish_frames is the number of frames the object should be absent to consider it as vanished, detect_frames is the number of frames an object should be detected to track it.
visual_options
Optional dictionary. Set different parameters for visualization. show_scores boolean, to view scores on predictions, show_labels boolean, to view labels on predictions, thickness integer, to set the thickness level of box, fontface integer, fontface value from opencv values, color tuple (B, G, R), tuple containing values between 0-255.
resize
Optional boolean. Resizes the image to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the image is resized to that size instead.
By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the image (of the same size as the model was trained on).
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. The model is stored at the pre-defined location unless a path is passed, in which case it is saved at the specified path, with the model name as the directory name and all intermediate directories created.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro 2.8 or later, load the model saved with the PyTorch framework and export it with the torchscript framework using ArcGIS API for Python v1.8.5 or later. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified, the active GIS user is used.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False.
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of a trained model on a part of the validation set.
Parameter
Description
rows
Optional int. Number of rows of results to be displayed.
thresh
Optional float. The probability above which a detection will be considered valid.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
Supported list of backbones for this model.
Supported dataset types for this model.
Supported list of torchgeo backbones for this model.
Supported list of transformer backbones for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Model architecture from https://arxiv.org/pdf/2407.17140. Creates a RTDetrV2
object detection model, based on https://github.com/lyuwenyu/RT-DETR/tree/main.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function.
backbone
Optional string. Backbone convolutional neural network model used for feature extraction, which is resnet50 by default. Supported backbones: ResNet family and specified Timm models (experimental support) from backbones().
pretrained_path
Optional string. Path where pre-trained model is saved.
RTDetrV2
Object
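Example (a minimal sketch; assumes RTDetrV2 is importable from arcgis.learn alongside the other detectors, and the data path is a placeholder):
from arcgis.learn import prepare_data, RTDetrV2
data = prepare_data(r'/data/exported_chips', batch_size=8)
model = RTDetrV2(data, backbone='resnet50')
model.fit(epochs=20, lr=None)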
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes average precision on the validation set for each class.
Parameter
Description
detect_thresh
Optional float. The probability above which a detection will be considered for computing average precision.
iou_thresh
Optional float. The intersection over union threshold with the ground truth labels, above which a predicted bounding box will be considered a true positive.
mean
Optional bool. If False, returns class-wise average precision; otherwise returns mean average precision.
dict if mean is False otherwise float
Supported list of backbones for this model.
Train the model for the specified number of epochs using the specified learning rates.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the value of the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save a checkpoint during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing is off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1. The default value is 'False'.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch-based models are supported. This feature is experimental. The default value is 'False'.
Creates a RTDetrV2
object from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
RTDetrV2
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the saved file's state dict match the keys returned by the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch-based models are supported. The default value is 'False'.
Plot validation and training losses after fitting the model.
Runs prediction on an Image. This method is only supported for RGB images.
Parameter
Description
image_path
Required. Path to the image file to make the predictions on.
threshold
Optional float. The probability above which a detection will be considered valid.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
return_scores
Optional boolean. Will return the probability scores of the bounding box predictions if True.
visualize
Optional boolean. Displays the image with predicted bounding boxes if True.
resize
Optional boolean. Resizes the image to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the image is resized to that size instead.
By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the image (of the same size as the model was trained on).
Returns a tuple with predictions, labels and optionally confidence scores if return_scores=True. The predicted bounding boxes are returned as a list of lists containing the xmin, ymin, width and height of each predicted object in each image. The labels are returned as a list of class values and the confidence scores are returned as a list of floats indicating the confidence of each prediction.
Runs prediction on a video and appends the output VMTI predictions in the metadata file. This method is only supported for RGB images.
Parameter
Description
input_video_path
Required. Path to the video file to make the predictions on.
metadata_file
Required. Path to the metadata csv file where the predictions will be saved in VMTI format.
threshold
Optional float. The probability above which a detection will be considered.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
track
Optional bool. Set this parameter as True to enable object tracking.
visualize
Optional boolean. If True a video is saved with prediction results.
output_file_path
Optional path. Path of the final video to be saved. If not supplied, video will be saved at path input_video_path appended with _prediction.avi. Supports only AVI and MP4 formats.
multiplex
Optional boolean. Runs Multiplex using the VMTI detections.
multiplex_file_path
Optional path. Path of the multiplexed video to be saved. By default a new file with _multiplex.MOV extension is saved in the same folder.
tracking_options
Optional dictionary. Set different parameters for object tracking. assignment_iou_thrd parameter is used to assign threshold for assignment of trackers, vanish_frames is the number of frames the object should be absent to consider it as vanished, detect_frames is the number of frames an object should be detected to track it.
visual_options
Optional dictionary. Set different parameters for visualization. show_scores boolean, to view scores on predictions, show_labels boolean, to view labels on predictions, thickness integer, to set the thickness level of box, fontface integer, fontface value from opencv values, color tuple (B, G, R), tuple containing values between 0-255.
resize
Optional boolean. Resizes the video frames to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the video frames are resized to that size instead.
By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the frame (of the same size as the model was trained on).
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. The model is stored at the pre-defined location unless a path is passed, in which case it is saved at the specified path, with the model name as the directory name and all intermediate directories created.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro 2.8 or later, load the model saved with the PyTorch framework and export it with the torchscript framework using ArcGIS API for Python v1.8.5 or later. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified, the active GIS user is used.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False.
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of a trained model on a part of the validation set.
Parameter
Description
rows
Optional int. Number of rows of results to be displayed.
thresh
Optional float. The probability above which a detection will be considered valid.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
Supported list of backbones for this model.
Supported dataset types for this model.
Unfreezes the earlier layers of the model for fine-tuning.
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes average precision on the validation set for each class.
dict if mean is False otherwise float
Supported list of backbones for this model.
Train the model for the specified number of epochs using the specified learning rates.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the monitored value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in tensorboard. Requires tensorboardx version 2.1.
The default value is 'False'.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only Pytorch based models are supported. This feature is experimental. The default value is 'False'.
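The fit parameters above are shared by the detection and segmentation models on this page. The following is a minimal training sketch based only on those documented parameters; the data path, batch size, epoch count and the use of MaskRCNN are illustrative assumptions, and the trained model object is reused in later sketches as `model`.

```python
# Illustrative training sketch; paths and hyperparameters are placeholders.
from arcgis.learn import prepare_data, MaskRCNN

data = prepare_data(r"/path/to/exported/chips", batch_size=8)  # chips from export_training_data
model = MaskRCNN(data)

lr = model.lr_find()          # suggest a learning rate (lr_find is documented further below)
model.fit(
    epochs=20,
    lr=lr,
    early_stopping=True,      # stop if the monitored metric stalls for 5 epochs
    checkpoint=True,          # keep the best model according to `monitor`
    monitor="valid_loss",
)
```

With checkpoint=True, the best model according to the monitored metric is loaded back at the end of training, so the weights left in memory correspond to the best epoch.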
Creates a MaskRCNN
Instance segmentation object from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
MaskRCNN
Object
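A minimal sketch of re-creating a saved model from its EMD or DLPK file. The file paths are placeholders, and the classmethod name from_model is assumed from the usual arcgis.learn convention for this operation.

```python
from arcgis.learn import MaskRCNN, prepare_data

# Inference only: pass None for data.
model = MaskRCNN.from_model(r"/path/to/model_folder/model.emd", data=None)

# Fine-tuning: pass the databunch the model should continue training on.
data = prepare_data(r"/path/to/exported/chips", batch_size=4)
model_ft = MaskRCNN.from_model(r"/path/to/model_folder/model.emd", data)
```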
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the loaded file's state_dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, optimum learning rate will be derived in mixed precision mode. Only Pytorch based models are supported. The default value is 'False'.
Plot validation and training losses after fitting the model.
Predicts and displays the results of a trained model on a single image. This method is only supported for RGB images.
Parameter
Description
image_path
Required. Path to the image file to make the predictions on.
thresh
Optional float. The probability above which a detection will be considered valid.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
return_scores
Optional boolean. Will return the probability scores of the bounding box predictions if True.
visualize
Optional boolean. Displays the image with predicted bounding boxes if True.
resize
Optional boolean. Resizes the image to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the image is resized to that size instead.
By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the image (of the same size as the model was trained on).
tta_prediction
Optional bool. Perform test time augmentation while predicting
kwargs
Parameter
Description
batch_size
Optional int. Batch size to be used during tiled inferencing
min_obj_size
Optional int. Minimum object size to be detected.
Returns a list of xmin, ymin, width, height, labels and scores of the predicted bounding boxes on the given image.
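A hedged single-image inference sketch using the parameter names documented above and the `model` object from the earlier training sketch; the image path and threshold values are placeholders.

```python
# Detect objects in one RGB image with the trained model.
predictions = model.predict(
    image_path=r"/path/to/scene.tif",
    thresh=0.5,           # keep detections with probability above 0.5
    return_scores=True,   # also return confidence scores
    visualize=True,       # display the image with predicted boxes
    resize=False,         # False = sliding-window inference at the trained chip size
)
```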
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. The model is stored at the pre-defined location. If a path is passed, it is stored at the specified path with the model name as the directory name, and all intermediate directories are created.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the model saved with the PyTorch framework and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified, the active GIS connection is used.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
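A hedged sketch of saving and publishing the trained model using the parameters documented above; the model name and the GIS connection are placeholders, and publishing requires appropriate portal privileges.

```python
from arcgis.gis import GIS

gis = GIS("home")  # placeholder connection used for publishing the DLPK item
model.save(
    "building_footprints_v1",   # stored at the pre-defined location unless a path is given
    framework="PyTorch",        # default; required if the model should load via from_model later
    publish=True,               # upload the DLPK as a portal item
    gis=gis,
    save_inference_file=True,   # keep compatibility with ArcGIS Pro 2.6 and earlier
)
```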
Displays the results of a trained model on a part of the validation set.
Parameter
Description
rows
Optional int. Number of rows of results to be displayed.
mode
bbox - for visualizing only bounding boxes; mask - for visualizing only masks; bbox_mask - for visualizing both masks and bounding boxes.
mask_threshold
Optional float. The probability above which a pixel will be considered part of the mask.
box_threshold
Optional float. The probability above which a detection will be considered valid.
tta_prediction
Optional bool. Perform test time augmentation while predicting
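A brief sketch of visualizing validation results with the MaskRCNN-specific options above, reusing the trained `model`; the row count and thresholds are illustrative.

```python
model.show_results(
    rows=4,
    mode="bbox_mask",       # draw both masks and bounding boxes
    mask_threshold=0.3,     # pixel probability needed to be included in a mask
    box_threshold=0.5,      # detection probability needed to keep a box
)
```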
Supported list of backbones for this model.
Supported dataset types for this model.
Supported list of torchgeo backbones for this model.
Supported list of transformer backbones for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function.
model
Required model name or path to the configuration file from MMDetection
repository. The list of the supported models can be queried using supported_models
.
model_weight
Optional path of the model weight from MMDetection
repository.
pretrained_path
Optional string. Path where pre-trained model is saved.
MMDetection
Object
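A hedged construction sketch for MMDetection; the data path and the choice of 'cascade_rcnn' are assumptions, so query supported_models (as noted above) to see the names available in your install.

```python
from arcgis.learn import prepare_data, MMDetection

data = prepare_data(r"/path/to/exported/chips", batch_size=4)  # placeholder path

# Per this reference, the supported model names can be queried via supported_models.
print(MMDetection.supported_models)

model = MMDetection(data, model="cascade_rcnn")  # illustrative model choice
```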
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes average precision on the validation set for each class.
Parameter
Description
detect_thresh
Optional float. The probability above which a detection will be considered for computing average precision.
iou_thresh
Optional float. The intersection over union threshold with the ground truth labels, above which a predicted bounding box will be considered a true positive.
mean
Optional bool. If False returns class-wise average precision otherwise returns mean average precision.
dict if mean is False otherwise float
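A short sketch of the metric call documented above, assuming the `model` object created earlier; the threshold values are illustrative.

```python
# Class-wise AP as a dict, and the single mean-AP float.
ap_per_class = model.average_precision_score(detect_thresh=0.5, iou_thresh=0.5, mean=False)
mean_ap = model.average_precision_score(detect_thresh=0.5, iou_thresh=0.5, mean=True)
print(ap_per_class, mean_ap)
```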
Trains the model for the specified number of epochs using the specified learning rates.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the monitored value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in tensorboard. Requires tensorboardx version 2.1.
The default value is 'False'.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only Pytorch based models are supported. This feature is experimental. The default value is 'False'.
Creates a MMDetection
object from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
MMDetection
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the loaded file's state_dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, optimum learning rate will be derived in mixed precision mode. Only Pytorch based models are supported. The default value is 'False'.
Plot validation and training losses after fitting the model.
Runs prediction on an image. This method is only supported for RGB images.
Parameter
Description
image_path
Required. Path to the image file to make the predictions on.
threshold
Optional float. The probability above which a detection will be considered valid.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
return_scores
Optional boolean. Will return the probability scores of the bounding box predictions if True.
visualize
Optional boolean. Displays the image with predicted bounding boxes if True.
resize
Optional boolean. Resizes the image to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the image is resized to that size instead.
By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the image (of the same size as the model was trained on).
Returns a tuple with predictions, labels and optionally confidence scores if return_scores=True. The predicted bounding boxes are returned as a list of lists containing the xmin, ymin, width and height of each predicted object in each image. The labels are returned as a list of class values and the confidence scores are returned as a list of floats indicating the confidence of each prediction.
Runs prediction on a video and appends the output VMTI predictions in the metadata file. This method is only supported for RGB images.
Parameter
Description
input_video_path
Required. Path to the video file to make the predictions on.
metadata_file
Required. Path to the metadata csv file where the predictions will be saved in VMTI format.
threshold
Optional float. The probability above which a detection will be considered.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
track
Optional bool. Set this parameter as True to enable object tracking.
visualize
Optional boolean. If True a video is saved with prediction results.
output_file_path
Optional path. Path of the final video to be saved. If not supplied, video will be saved at path input_video_path appended with _prediction.avi. Supports only AVI and MP4 formats.
multiplex
Optional boolean. Runs Multiplex using the VMTI detections.
multiplex_file_path
Optional path. Path of the multiplexed video to be saved. By default a new file with _multiplex.MOV extension is saved in the same folder.
tracking_options
Optional dictionary. Sets parameters for object tracking: assignment_iou_thrd is the IoU threshold used to assign detections to trackers, vanish_frames is the number of frames an object should be absent before it is considered vanished, and detect_frames is the number of frames an object should be detected before it is tracked.
visual_options
Optional dictionary. Sets parameters for visualization: show_scores (boolean) to show scores on predictions, show_labels (boolean) to show labels on predictions, thickness (integer) to set the thickness of the box, fontface (integer) an OpenCV font face value, and color (a B, G, R tuple with values between 0 and 255).
resize
Optional boolean. Resizes the video frames to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the video frames are resized to that size instead.
By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the frame (of the same size as the model was trained on).
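A hedged sketch of video inference with tracking and custom visualization, using the parameters documented above; every path and option value below is a placeholder rather than a recommended setting.

```python
model.predict_video(
    input_video_path=r"/path/to/footage.mp4",
    metadata_file=r"/path/to/footage.csv",     # VMTI predictions are appended here
    threshold=0.5,
    track=True,                                # enable object tracking
    visualize=True,                            # also write an annotated video
    output_file_path=r"/path/to/footage_prediction.avi",
    tracking_options={
        "assignment_iou_thrd": 0.3,   # IoU threshold for assigning detections to trackers
        "vanish_frames": 40,          # frames absent before a track is dropped
        "detect_frames": 10,          # frames detected before a track is created
    },
    visual_options={
        "show_scores": True,
        "show_labels": True,
        "thickness": 2,
        "fontface": 0,                # OpenCV font face value
        "color": (255, 255, 255),     # B, G, R
    },
)
```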
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. The model is stored at the pre-defined location. If a path is passed, it is stored at the specified path with the model name as the directory name, and all intermediate directories are created.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the model saved with the PyTorch framework and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified, the active GIS connection is used.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of a trained model on a part of the validation set.
Parameter
Description
rows
Optional int. Number of rows of results to be displayed.
thresh
Optional float. The probability above which a detection will be considered valid.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
Supported dataset types for this model.
List of models supported by this class.
List of transformer models supported by this class.
Unfreezes the earlier layers of the model for fine-tuning.
Model architecture from https://arxiv.org/abs/2106.04550. Creates a DETReg
object detection model, based on https://github.com/amirbar/DETReg.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function.
backbone
Optional string. Backbone convolutional neural network model used for feature extraction. resnet50 is currently the only supported backbone and is used by default.
pretrained_path
Optional string. Path where pre-trained model is saved.
DETReg
Object
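A minimal construction sketch for DETReg; the data path and batch size are placeholders.

```python
from arcgis.learn import prepare_data, DETReg

data = prepare_data(r"/path/to/exported/chips", batch_size=4)  # placeholder path
model = DETReg(data, backbone="resnet50")  # resnet50 is currently the only supported backbone
```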
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes average precision on the validation set for each class.
Parameter
Description
detect_thresh
Optional float. The probability above which a detection will be considered for computing average precision.
iou_thresh
Optional float. The intersection over union threshold with the ground truth labels, above which a predicted bounding box will be considered a true positive.
mean
Optional bool. If False returns class-wise average precision otherwise returns mean average precision.
dict if mean is False otherwise float
Supported list of backbones for this model.
Trains the model for the specified number of epochs using the specified learning rates.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the monitored value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in tensorboard. Requires tensorboardx version 2.1.
The default value is 'False'.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only Pytorch based models are supported. This feature is experimental. The default value is 'False'.
Creates a DETReg
object from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
DETReg
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the loaded file's state_dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, optimum learning rate will be derived in mixed precision mode. Only Pytorch based models are supported. The default value is 'False'.
Plot validation and training losses after fitting the model.
Runs prediction on an image. This method is only supported for RGB images.
Parameter
Description
image_path
Required. Path to the image file to make the predictions on.
threshold
Optional float. The probability above which a detection will be considered valid.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
return_scores
Optional boolean. Will return the probability scores of the bounding box predictions if True.
visualize
Optional boolean. Displays the image with predicted bounding boxes if True.
resize
Optional boolean. Resizes the image to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the image is resized to that size instead.
By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the image (of the same size as the model was trained on).
Returns a tuple with predictions, labels and optionally confidence scores if return_scores=True. The predicted bounding boxes are returned as a list of lists containing the xmin, ymin, width and height of each predicted object in each image. The labels are returned as a list of class values and the confidence scores are returned as a list of floats indicating the confidence of each prediction.
Runs prediction on a video and appends the output VMTI predictions in the metadata file. This method is only supported for RGB images.
Parameter
Description
input_video_path
Required. Path to the video file to make the predictions on.
metadata_file
Required. Path to the metadata csv file where the predictions will be saved in VMTI format.
threshold
Optional float. The probability above which a detection will be considered.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
track
Optional bool. Set this parameter as True to enable object tracking.
visualize
Optional boolean. If True a video is saved with prediction results.
output_file_path
Optional path. Path of the final video to be saved. If not supplied, the video will be saved at input_video_path appended with _prediction.avi. Supports only AVI and MP4 formats.
multiplex
Optional boolean. Runs Multiplex using the VMTI detections.
multiplex_file_path
Optional path. Path of the multiplexed video to be saved. By default a new file with _multiplex.MOV extension is saved in the same folder.
tracking_options
Optional dictionary. Sets parameters for object tracking: assignment_iou_thrd is the IoU threshold used to assign detections to trackers, vanish_frames is the number of frames an object should be absent before it is considered vanished, and detect_frames is the number of frames an object should be detected before it is tracked.
visual_options
Optional dictionary. Sets parameters for visualization: show_scores (boolean) to show scores on predictions, show_labels (boolean) to show labels on predictions, thickness (integer) to set the thickness of the box, fontface (integer) an OpenCV font face value, and color (a B, G, R tuple with values between 0 and 255).
resize
Optional boolean. Resizes the video frames to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the video frames are resized to that size instead.
By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the frame (of the same size as the model was trained on).
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. The model is stored at the pre-defined location. If a path is passed, it is stored at the specified path with the model name as the directory name, and all intermediate directories are created.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the model saved with the PyTorch framework and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified, the active GIS connection is used.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of a trained model on a part of the validation set.
Parameter
Description
rows
Optional int. Number of rows of results to be displayed.
thresh
Optional float. The probability above which a detection will be considered valid.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
Supported list of backbones for this model.
Supported dataset types for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Creates an EfficientDet model for object detection. Supports RGB (JPEG) imagery. Based on TFLite Model Maker. A construction sketch follows the parameter table below.
Argument
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function. Only (JPEG+PASCAL_VOC_rectangles) format supported.
backbone
Optional String. Backbone convolutional neural network model used for EfficientDet, which is efficientdet_lite0 by default.
pretrained_path
Optional String. Path where a compatible pre-trained model is saved. Accepts a Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
EfficientDet
Object
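A minimal construction sketch for EfficientDet; the data path is a placeholder. Note that, per the data parameter above, the chips must have been exported as JPEG with PASCAL_VOC_rectangles labels.

```python
from arcgis.learn import prepare_data, EfficientDet

data = prepare_data(r"/path/to/exported/chips", batch_size=8)  # JPEG + PASCAL_VOC_rectangles export
model = EfficientDet(data, backbone="efficientdet_lite0")      # default backbone
```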
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes average precision on the validation set for each class.
Argument
Description
mean
Optional bool. If False returns class-wise average precision otherwise returns mean average precision.
dict if mean is False otherwise float
Trains the model for the specified number of epochs using the specified learning rates.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the monitored value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training. Recommended to set to False.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in tensorboard. Requires tensorboardx version 2.1.
The default value is 'False'.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
Creates an EfficientDet
object from an Esri Model Definition (EMD) file.
Argument
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
EfficientDet
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the loaded file's state_dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, optimum learning rate will be derived in mixed precision mode. Only Pytorch based models are supported. The default value is 'False'.
Plot validation and training losses after fitting the model.
Predicts and displays the results of a trained model on a single image. This method is only supported for RGB images.
Argument
Description
image_path
Required. Path to the image file to make the predictions on.
thresh
Optional float. The probability above which a detection will be considered valid.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
return_scores
Optional boolean. Will return the probability scores of the bounding box predictions if True.
visualize
Optional boolean. Displays the image with predicted bounding boxes if True.
resize
Optional boolean. Resizes the image to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the image is resized to that size instead.
By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the image (of the same size as the model was trained on).
Returns a list of xmin, ymin, width, height, labels and scores of the predicted bounding boxes on the given image.
Runs prediction on a video and appends the output VMTI predictions in the metadata file. This method is only supported for RGB images.
Argument
Description
input_video_path
Required. Path to the video file to make the predictions on.
metadata_file
Required. Path to the metadata csv file where the predictions will be saved in VMTI format.
threshold
Optional float. The probability above which a detection will be considered.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
track
Optional bool. Set this parameter as True to enable object tracking.
visualize
Optional boolean. If True a video is saved with prediction results.
output_file_path
Optional path. Path of the final video to be saved. If not supplied, video will be saved at path input_video_path appended with _prediction.avi. Supports only AVI and MP4 formats.
multiplex
Optional boolean. Runs Multiplex using the VMTI detections.
multiplex_file_path
Optional path. Path of the multiplexed video to be saved. By default a new file with _multiplex.MOV extension is saved in the same folder.
tracking_options
Optional dictionary. Sets parameters for object tracking: assignment_iou_thrd is the IoU threshold used to assign detections to trackers, vanish_frames is the number of frames an object should be absent before it is considered vanished, and detect_frames is the number of frames an object should be detected before it is tracked.
visual_options
Optional dictionary. Sets parameters for visualization: show_scores (boolean) to show scores on predictions, show_labels (boolean) to show labels on predictions, thickness (integer) to set the thickness of the box, fontface (integer) an OpenCV font face value, and color (a B, G, R tuple with values between 0 and 255).
resize
Optional boolean. Resizes the image to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the image is resized to that size instead.
By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the image (of the same size as the model was trained on).
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. The model is stored at the pre-defined location. If a path is passed, it is stored at the specified path with the model name as the directory name, and all intermediate directories are created.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the model saved with the PyTorch framework and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified, the active GIS connection is used.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of a trained model on a part of the validation set.
Parameter
Description
rows
Optional int. Number of rows of results to be displayed.
thresh
Optional float. The probability above which a detection will be considered valid.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
Supported torchvision backbones for this model.
Supported dataset types for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Creates a Unet-like classifier based on the given pretrained encoder. A construction sketch follows the parameter table below.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function.
backbone
Optional string. Backbone convolutional neural network model used for feature extraction, which is resnet34 by default. Supported backbones: ResNet family and specified Timm models (experimental support) from backbones()
.
pretrained_path
Optional string. Path where pre-trained model is saved.
backend
Optional string. Controls the backend framework to be used for this model, which is 'pytorch' by default. Valid options are 'pytorch' and 'tensorflow'.
kwargs
Parameter
Description
class_balancing
Optional boolean. If True, it will balance the cross-entropy loss inverse to the frequency of pixels per class. Default: False.
mixup
Optional boolean. If True, it will use mixup augmentation and mixup loss. Default: False
focal_loss
Optional boolean. If True, it will use focal loss Default: False
dice_loss_fraction
Optional float. Min_val=0, Max_val=1. If > 0, the model will use a combination of the default or focal (if focal_loss=True) loss with the specified fraction of dice loss. For example, for dice = 0.3, loss = (1-0.3)*default loss + 0.3*dice loss. Default: 0
dice_loss_average
Optional str. 'micro': the micro dice coefficient will be used for loss calculation. 'macro': the macro dice coefficient will be used for loss calculation. A macro-average computes the metric independently for each class and then takes the average (hence treating all classes equally), whereas a micro-average aggregates the contributions of all classes to compute the average metric. In a multi-class classification setup, micro-average is preferable if you suspect there might be class imbalance (i.e. you may have many more examples of one class than of the other classes). Default: 'micro'
ignore_classes
Optional list. It will contain the list of class values on which model will not incur loss. Default: []
UnetClassifier
Object
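A hedged construction sketch combining some of the loss-related keyword arguments above; the data path and the particular loss settings are illustrative, not recommendations.

```python
from arcgis.learn import prepare_data, UnetClassifier

data = prepare_data(r"/path/to/exported/chips", batch_size=8)  # Classified_Tiles export
model = UnetClassifier(
    data,
    backbone="resnet34",      # default backbone
    class_balancing=True,     # weight cross-entropy by inverse pixel frequency
    focal_loss=True,          # use focal loss as the base loss
    dice_loss_fraction=0.3,   # loss = (1 - 0.3) * focal loss + 0.3 * dice loss
)
```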
Computes per pixel accuracy on validation set.
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Supported list of backbones for this model.
Trains the model for the specified number of epochs using the specified learning rates.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the monitored value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in tensorboard. Requires tensorboardx version 2.1.
The default value is 'False'.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only Pytorch based models are supported. This feature is experimental. The default value is 'False'.
Creates a Unet-like classifier from an Esri Model Definition (EMD) file.
Parameter
Description
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
emd_path
Required string. Path to Esri Model Definition file.
UnetClassifier
Object
Creates a Unet-like classifier from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
UnetClassifier
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the loaded file's state_dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, optimum learning rate will be derived in mixed precision mode. Only Pytorch based models are supported. The default value is 'False'.
Computes mean IOU on the validation set for each class.
Parameter
Description
mean
Optional bool. If False returns class-wise mean IOU, otherwise returns mean iou of all classes combined.
show_progress
Optional bool. Displays the progress bar if True.
dict if mean is False otherwise float
Computes per-class precision, recall and f1-score on the validation set.
Parameter
Description
self
segmentation model object -> [PSPNetClassifier | UnetClassifier | DeepLab]
ignore_classes
Optional list. It will contain the list of class values on which model will not incur loss. Default: []
Returns per class precision, recall and f1 scores
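A short sketch of the two evaluation calls documented above, reusing a trained segmentation model object named `model`. The method names mIOU and per_class_metrics are assumed from the usual arcgis.learn naming, since the extraction dropped the headings.

```python
# Class-wise IoU as a dict, the overall mean IoU as a float,
# and per-class precision / recall / f1 on the validation set.
iou_per_class = model.mIOU(mean=False, show_progress=True)
mean_iou = model.mIOU(mean=True)
scores = model.per_class_metrics()
print(iou_per_class, mean_iou)
print(scores)
```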
Plot validation and training losses after fitting the model.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. The model is stored at the pre-defined location. If a path is passed, it is stored at the specified path with the model name as the directory name, and all intermediate directories are created.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the model saved with the PyTorch framework and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified, the active GIS connection is used.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of a trained model on a part of the validation set.
Parameter
Description
rows
Optional int. Number of rows of results to be displayed.
Supported list of backbones for this model.
Supported dataset types for this model.
Supported list of torchgeo backbones for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Model architecture from https://arxiv.org/abs/1612.01105. Creates a PSPNet Image Segmentation/Pixel Classification model. A construction sketch follows the parameter table below.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function.
backbone
Optional string. Backbone convolutional neural network model used for feature extraction, which is resnet50 by default. Supported backbones: ResNet, DenseNet, VGG families and specified Timm models (experimental support) from backbones()
.
use_unet
Optional bool. Specify whether to use the Unet-Decoder or not. Default True.
pyramid_sizes
Optional List. The sizes at which the feature map is pooled. Currently set to the best set reported in the paper, i.e. (1, 2, 3, 6).
pretrained
Optional Bool. If True, use the pretrained backbone
pretrained_path
Optional string. Path where pre-trained PSPNet model is saved.
unet_aux_loss
Optional bool. If True, will use auxiliary loss for PSUnet. Default set to False. This flag is applicable only when use_unet is True.
pointrend
Optional boolean. If True, it will use PointRend architecture on top of the segmentation head. Default: False. PointRend architecture from https://arxiv.org/pdf/1912.08193.pdf.
kwargs
Parameter
Description
class_balancing
Optional boolean. If True, it will balance the cross-entropy loss inverse to the frequency of pixels per class. Default: False.
mixup
Optional boolean. If True, it will use mixup augmentation and mixup loss. Default: False
focal_loss
Optional boolean. If True, it will use focal loss. Default: False
dice_loss_fraction
Optional float. Min_val=0, Max_val=1. If > 0, the model will use a combination of the default or focal (if focal_loss=True) loss with the specified fraction of dice loss. For example, for dice = 0.3, loss = (1-0.3)*default loss + 0.3*dice loss. Default: 0
dice_loss_average
Optional str. 'micro': the micro dice coefficient will be used for loss calculation. 'macro': the macro dice coefficient will be used for loss calculation. A macro-average computes the metric independently for each class and then takes the average (hence treating all classes equally), whereas a micro-average aggregates the contributions of all classes to compute the average metric. In a multi-class classification setup, micro-average is preferable if you suspect there might be class imbalance (i.e. you may have many more examples of one class than of the other classes). Default: 'micro'
ignore_classes
Optional list. It will contain the list of class values on which model will not incur loss. Default: []
keep_dilation
Optional boolean. When PointRend architecture is used, keep_dilation=True can potentially improve accuracy at the cost of memory consumption. Default: False
PSPNetClassifier
Object
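A hedged construction sketch for PSPNetClassifier; the data path is a placeholder and the options shown simply restate the documented defaults.

```python
from arcgis.learn import prepare_data, PSPNetClassifier

data = prepare_data(r"/path/to/exported/chips", batch_size=4)  # Classified_Tiles export
model = PSPNetClassifier(
    data,
    backbone="resnet50",           # default backbone
    use_unet=True,                 # attach the Unet-style decoder
    pyramid_sizes=(1, 2, 3, 6),    # pooling sizes reported in the PSPNet paper
    pointrend=False,
)
```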
Computes per pixel accuracy.
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Supported list of backbones for this model.
Trains the model for the specified number of epochs using the specified learning rates.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the monitored value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in tensorboard. Requires tensorboardx version 2.1.
The default value is 'False'.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only Pytorch based models are supported. This feature is experimental. The default value is 'False'.
Freezes the pretrained backbone.
Creates a PSPNet classifier from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
PSPNetClassifier
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the loaded file's state_dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, optimum learning rate will be derived in mixed precision mode. Only Pytorch based models are supported. The default value is 'False'.
Computes mean IOU on the validation set for each class.
Parameter
Description
mean
Optional bool. If False returns class-wise mean IOU, otherwise returns mean iou of all classes combined.
show_progress
Optional bool. Displays the progress bar if True.
dict if mean is False otherwise float
Computes per-class precision, recall and f1-score on the validation set.
Parameter
Description
self
segmentation model object -> [PSPNetClassifier | UnetClassifier | DeepLab]
ignore_classes
Optional list. It will contain the list of class values on which model will not incur loss. Default: []
Returns per class precision, recall and f1 scores
Plot validation and training losses after fitting the model.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. The model is stored at the pre-defined location. If a path is passed, it is stored at the specified path with the model name as the directory name, and all intermediate directories are created.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the model saved with the PyTorch framework and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified, the active GIS connection is used.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of a trained model on a part of the validation set.
Parameter
Description
rows
Optional int. Number of rows of results to be displayed.
Supported list of backbones for this model.
Supported dataset types for this model.
Supported list of torchgeo backbones for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Model architecture from https://arxiv.org/abs/1706.05587. Creates a DeepLab
Image Segmentation/Pixel Classification model, based on https://github.com/pytorch/vision/tree/master/torchvision/models/segmentation. A construction sketch follows the parameter table below.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data() function.
backbone
Optional string. Backbone convolutional neural network model used for feature extraction, which is resnet101 by default since it is pretrained in torchvision. Supported backbones: ResNet, DenseNet, VGG family and specified Timm models (experimental support) from backbones()
.
pretrained_path
Optional string. Path where pre-trained model is saved.
pointrend
Optional boolean. If True, it will use PointRend architecture on top of the segmentation head. Default: False. PointRend architecture from https://arxiv.org/pdf/1912.08193.pdf.
kwargs
Parameter
Description
class_balancing
Optional boolean. If True, it will balance the cross-entropy loss inverse to the frequency of pixels per class. Default: False.
mixup
Optional boolean. If True, it will use mixup augmentation and mixup loss. Default: False
focal_loss
Optional boolean. If True, it will use focal loss. Default: False
dice_loss_fraction
Optional float. Min_val=0, Max_val=1. If > 0, the model will use a combination of the default or focal (if focal_loss=True) loss with the specified fraction of dice loss. For example, for dice = 0.3, loss = (1-0.3)*default loss + 0.3*dice loss. Default: 0
dice_loss_average
Optional str. 'micro': the micro dice coefficient will be used for loss calculation. 'macro': the macro dice coefficient will be used for loss calculation. A macro-average computes the metric independently for each class and then takes the average (hence treating all classes equally), whereas a micro-average aggregates the contributions of all classes to compute the average metric. In a multi-class classification setup, micro-average is preferable if you suspect there might be class imbalance (i.e. you may have many more examples of one class than of the other classes). Default: 'micro'
ignore_classes
Optional list. It will contain the list of class values on which model will not incur loss. Default: []
keep_dilation
Optional boolean. When the PointRend architecture is used, keep_dilation=True can potentially improve accuracy at the cost of memory consumption. Default: False
wavelengths
Optional list. A list of central wavelengths corresponding to each data band (in micrometers).
DeepLab
Object
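A minimal usage sketch, assuming the chip folder path and keyword values below (they are illustrative, not defaults)::

    from arcgis.learn import prepare_data, DeepLab

    # chips exported with the Classified_Tiles metadata format
    data = prepare_data(r'/data/land_cover_chips', batch_size=8)

    # optional kwargs such as class_balancing are passed through to the model
    model = DeepLab(data, backbone='resnet101', class_balancing=True)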
Computes per pixel accuracy on validation set.
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Supported list of backbones for this model.
Train the model for the specified number of epochs and using the specified learning rates
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is False.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch-based models are supported. This feature is experimental. The default value is False.
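A hedged example of a training call, continuing from a model object created earlier (the epoch count and monitor value are illustrative)::

    # train for 20 epochs; lr=None lets the API deduce a learning rate
    model.fit(20, lr=None, early_stopping=True, checkpoint=True, monitor='valid_loss')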
Creates a DeepLab
semantic segmentation object from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
DeepLab
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.
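A sketch of reloading a saved model; the EMD path and model name below are assumptions::

    from arcgis.learn import DeepLab

    # recreate the model from an Esri Model Definition for inferencing (data=None)
    model = DeepLab.from_model(r'/models/deeplab_landcover/deeplab_landcover.emd', data=None)

    # or, on an existing model object, load compatible weights for fine-tuning
    model.load('deeplab_landcover')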
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is True.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch-based models are supported. The default value is False.
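For example (treat this as a sketch; whether lr_find() also returns the suggested rate can depend on the API version)::

    # plot losses against learning rates and pick a value from the plot
    suggested_lr = model.lr_find(allow_plot=True)
    model.fit(10, lr=suggested_lr)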
Computes mean IOU on the validation set for each class.
Parameter
Description
mean
Optional bool. If False returns class-wise mean IOU, otherwise returns mean iou of all classes combined.
show_progress
Optional bool. Displays the progress bar if True.
dict if mean is False otherwise float
Computes per-class precision, recall and f1-score on the validation set.
Parameter
Description
self
segmentation model object -> [PSPNetClassifier | UnetClassifier | DeepLab]
ignore_classes
Optional list. It will contain the list of class values on which model will not incur loss. Default: []
Returns per class precision, recall and f1 scores
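A short sketch of querying these metrics on a trained model object::

    # per-class mean IOU as a dict, or the combined mean IOU as a float
    iou_per_class = model.mIOU(mean=False)
    iou_overall = model.mIOU(mean=True)

    # per-class precision, recall and f1-score
    metrics = model.per_class_metrics()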
Plot validation and training losses after fitting the model.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified, the active GIS connection is used.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
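For example (the model name and the GIS connection below are assumptions)::

    from arcgis.gis import GIS

    gis = GIS('home')

    # save a DLPK under the given name and publish it as an item
    model.save('deeplab_landcover', publish=True, gis=gis, compute_metrics=True)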
Displays the results of a trained model on a part of the validation set.
Parameter
Description
rows
Optional int. Number of rows of results to be displayed.
Supported list of backbones for this model.
Supported dataset types for this model.
Supported list of torchgeo backbones for this model.
Supported list of transformer backbones for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Model architecture from https://arxiv.org/pdf/1902.10903.pdf. Creates a BDCNEdgeDetector
model
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function.
backbone
Optional string. Backbone convolutional neural network model used for feature extraction, which is vgg19 by default. Supported backbones: the ResNet and VGG families, and specified Timm models (experimental support) from backbones()
.
pretrained_path
Optional string. Path where pre-trained model is saved.
BDCNEdgeDetector
Object
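A minimal sketch, assuming an illustrative chip path::

    from arcgis.learn import prepare_data, BDCNEdgeDetector

    data = prepare_data(r'/data/parcel_edge_chips', batch_size=4)
    model = BDCNEdgeDetector(data, backbone='vgg19')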
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Supported list of backbones for this model.
Computes precision, recall and f1 score on validation set.
Parameter
Description
thresh
Optional float. The probability threshold above which a detection is considered an edge pixel.
buffer
Optional int. Number of pixels in the neighborhood within which a detection is considered a true detection.
dict
Train the model for the specified number of epochs and using the specified learning rates
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is False.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch-based models are supported. This feature is experimental. The default value is False.
Creates a BDCNEdgeDetector
object from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
BDCNEdgeDetector
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is True.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch-based models are supported. The default value is False.
Plot validation and training losses after fitting the model.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified, the active GIS connection is used.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of a trained model on a part of the validation set.
Supported list of backbones for this model.
Supported dataset types for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Model architecture from https://arxiv.org/pdf/1504.06375.pdf. Creates a HEDEdgeDetector
model
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function.
backbone
Optional string. Backbone convolutional neural network model used for feature extraction, which is vgg19 by default. Supported backbones: the ResNet and VGG families, and specified Timm models (experimental support) from backbones()
.
pretrained_path
Optional string. Path where pre-trained model is saved.
wavelengths
Optional list. A list of central wavelengths corresponding to each data band (in micrometers).
HEDEdgeDetector
Object
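A minimal sketch; the path and the wavelength values for a 4-band image are assumptions::

    from arcgis.learn import prepare_data, HEDEdgeDetector

    data = prepare_data(r'/data/road_edge_chips', batch_size=4)

    # central wavelengths (micrometers) for blue, green, red and near-infrared bands
    model = HEDEdgeDetector(data, backbone='vgg19',
                            wavelengths=[0.48, 0.56, 0.66, 0.83])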
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Supported list of backbones for this model.
Computes precision, recall and f1 score on validation set.
Parameter
Description
thresh
Optional float. The probability threshold above which a detection is considered an edge pixel.
buffer
Optional int. Number of pixels in the neighborhood within which a detection is considered a true detection.
dict
Train the model for the specified number of epochs and using the specified learning rates
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is False.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch-based models are supported. This feature is experimental. The default value is False.
Creates a HEDEdgeDetector
object from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
HEDEdgeDetector
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is True.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch-based models are supported. The default value is False.
Plot validation and training losses after fitting the model.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified, the active GIS connection is used.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of a trained model on a part of the validation set.
Supported list of backbones for this model.
Supported dataset types for this model.
Supported list of torchgeo backbones for this model.
Supported list of transformer backbones for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Creates a Multi-Task Learning model for binary segmentation of roads. Supports RGB and Multispectral Imagery. Implementation based on https://doi.org/10.1109/CVPR.2019.01063 .
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function.
backbone
Optional String. Backbone convolutional neural network model used for feature extraction. If hourglass is chosen as the mtl_model (architecture), this parameter is ignored because hourglass uses a special customised architecture. This parameter is used with the linknet model. Default: 'resnet34'. Supported backbones: the ResNet family and specified Timm models (experimental support) from backbones()
.
pretrained_path
Optional String. Path where a compatible pre-trained model is saved. Accepts a Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
kwargs
Parameter
Description
mtl_model
Optional String. It is used to create the model from linknet or hourglass based neural architectures. Supported: 'linknet', 'hourglass'. Default: 'hourglass'
gaussian_thresh
Optional float. Sets the gaussian threshold, which allows setting the required road width. Range: 0.0 to 1.0. Default: 0.76
orient_bin_size
Optional Int. Sets the bin size for orientation angles. Default: 20
orient_theta
Optional Int. Sets the width of orientation mask. Default: 8
MultiTaskRoadExtractor
Object
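A minimal sketch; the chip path and keyword values are illustrative::

    from arcgis.learn import prepare_data, MultiTaskRoadExtractor

    data = prepare_data(r'/data/road_chips', batch_size=4)

    # hourglass architecture with the documented default gaussian threshold
    model = MultiTaskRoadExtractor(data, mtl_model='hourglass', gaussian_thresh=0.76)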
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Supported list of backbones for this model.
Train the model for the specified number of epochs and using the specified learning rates
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is False.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch-based models are supported. This feature is experimental. The default value is False.
Creates a Multi-Task Learning model for binary segmentation from a Deep Learning Package(DLPK) or Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
MultiTaskRoadExtractor
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is True.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch-based models are supported. The default value is False.
Computes mean IOU on the validation set for each class.
Parameter
Description
mean
Optional bool. If False returns class-wise mean IOU, otherwise returns mean iou of all classes combined.
show_progress
Optional bool. Displays the progress bar if True.
dict if mean is False otherwise float
Plot validation and training losses after fitting the model.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified, the active GIS connection is used.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Shows the ground truth and predictions of model side by side.
kwargs
Parameter
Description
rows
Number of rows of data to be displayed. If the batch size is smaller, the number of rows shown will equal the batch size.
alpha
Optional Float. Opacity parameter for label overlay on image. Float [0.0 - 1.0] Default: 0.6
Supported list of backbones for this model.
Supported dataset types for this model.
Supported list of torchgeo backbones for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Creates a ConnectNet model for binary segmentation of linear features. Supports RGB and Multispectral Imagery. Implementation based on https://doi.org/10.1109/CVPR.2019.01063 .
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function.
backbone
Optional String. Backbone CNN model to be used for creating the base. If hourglass is chosen as the mtl_model (architecture), this parameter is ignored because hourglass uses a special customised architecture. This parameter is to be used with the linknet architecture. Default: 'resnet34'
Use supported_backbones property to get the list of all the supported backbones.
pretrained_path
Optional String. Path where a compatible pre-trained model is saved. Accepts a Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
kwargs
Parameter
Description
mtl_model
Optional String. It is used to create the model from linknet or hourglass based neural architectures. Supported: 'linknet', 'hourglass'. Default: 'hourglass'
gaussian_thresh
Optional float. Sets the gaussian threshold, which allows setting the required width of the linear feature. Range: 0.0 to 1.0. Default: 0.76
orient_bin_size
Optional Int. Sets the bin size for orientation angles. Default: 20
orient_theta
Optional Int. Sets the width of orientation mask. Default: 8
ConnectNet
Object
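A minimal sketch using the linknet architecture; the chip path is an assumption::

    from arcgis.learn import prepare_data, ConnectNet

    data = prepare_data(r'/data/stream_line_chips', batch_size=4)
    model = ConnectNet(data, backbone='resnet34', mtl_model='linknet')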
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Supported list of backbones for this model.
Train the model for the specified number of epochs and using the specified learning rates
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is False.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch-based models are supported. This feature is experimental. The default value is False.
Creates a Multi-Task Learning model for binary segmentation from a Deep Learning Package(DLPK) or Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
ConnectNet
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is True.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch-based models are supported. The default value is False.
Computes mean IOU on the validation set for each class.
Parameter
Description
mean
Optional bool. If False returns class-wise mean IOU, otherwise returns mean iou of all classes combined.
show_progress
Optional bool. Displays the progress bar if True.
dict if mean is False otherwise float
Plot validation and training losses after fitting the model.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified, the active GIS connection is used.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Shows the ground truth and predictions of model side by side.
kwargs
Parameter
Description
rows
Number of rows of data to be displayed. If the batch size is smaller, the number of rows shown will equal the batch size.
alpha
Optional Float. Opacity parameter for label overlay on image. Float [0.0 - 1.0] Default: 0.6
Supported list of backbones for this model.
Supported dataset types for this model.
Supported list of torchgeo backbones for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Creates a Change Detection model.
A Spatial-Temporal Attention-Based Method and a New Dataset for Remote Sensing Image Change Detection - https://www.mdpi.com/2072-4292/12/10/1662
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function.
backbone
Optional function. Backbone CNN model to be used for creating the encoder of the ChangeDetector
, which is resnet18 by default. It supports the ResNet family of backbones.
attention_type
Optional string. Its value can either be 'PAM' (Pyramid Attention Module) or 'BAM' (Basic Attention Module). Defaults to 'PAM'.
pretrained_path
Optional string. Path where pre-trained model is saved.
ChangeDetector
object
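A minimal sketch; the chip path is an assumption and the exported data must contain before/after image pairs::

    from arcgis.learn import prepare_data, ChangeDetector

    # the dataset type is inferred from the exported change-detection chips
    data = prepare_data(r'/data/change_detection_chips', batch_size=4)
    model = ChangeDetector(data, backbone='resnet18', attention_type='PAM')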
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Train the model for the specified number of epochs and using the specified learning rates
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is False.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch-based models are supported. This feature is experimental. The default value is False.
Creates a ChangeDetector model from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Optional fastai Databunch. Returned data object from prepare_data()
function or None for inferencing.
ChangeDetector
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is True.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch-based models are supported. The default value is False.
Plot validation and training losses after fitting the model.
Computes precision, recall and f1 score.
Predict on a pair of images.
Parameter
Description
before_image
Required string. Path to image from before.
after_image
Required string. Path to the image from after.
Kwargs
Parameter
Description
crop_predict
Optional Boolean. If True, it will predict using a sliding window strategy. Typically used when the image size is larger than the chip_size the model was trained on. Default: False.
visualize
Optional Boolean. If True, it will plot the predictions in the notebook. Default: False.
save
Optional Boolean. If True, it will write the prediction file to disk. Default: False.
PyTorch Tensor of the change mask.
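A hedged example of pair-wise prediction; the image paths are assumptions::

    # returns a PyTorch tensor with the predicted change mask
    change_mask = model.predict(r'/data/scenes/before.tif',
                                r'/data/scenes/after.tif',
                                crop_predict=True,
                                visualize=True,
                                save=False)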
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified, the active GIS connection is used.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of a trained model on the validation set.
Supported torchvision backbones for this model.
Supported dataset types for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function.
model
Required model name or path to the configuration file from MMSegmentation
repository. The list of the supported models can be queried using supported_models
model_weight
Optional path of the model weight from MMSegmentation
repository.
pretrained_path
Optional string. Path where pre-trained model is saved.
kwargs
class_balancing
Optional boolean. If True, it will balance the cross-entropy loss inverse to the frequency of pixels per class. Default: False.
ignore_classes
Optional list. It will contain the list of class values on which model will not incur loss. Default: []
seq_len
Optional int. Number of timestamp bands. Applicable for prithvi100m model only. Default: 1
MMSegmentation
Object
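A minimal sketch; the chip path and the chosen model name are illustrative, and the full list can be queried from supported_models::

    from arcgis.learn import prepare_data, MMSegmentation

    data = prepare_data(r'/data/land_cover_chips', batch_size=4)

    # inspect the configurations available from the MMSegmentation repository
    print(MMSegmentation.supported_models)

    model = MMSegmentation(data, model='deeplabv3')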
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Train the model for the specified number of epochs and using the specified learning rates
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is False.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch-based models are supported. This feature is experimental. The default value is False.
Creates a MMSegmentation
object from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
MMSegmentation
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is True.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch-based models are supported. The default value is False.
Plot validation and training losses after fitting the model.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified, the active GIS connection is used.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of a trained model on a part of the validation set.
Supported dataset types for this model.
List of models supported by this class.
List of transformer based models supported by this class.
Unfreezes the earlier layers of the model for fine-tuning.
Creates a MaXDeepLab
panoptic segmentation model.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function. MaXDeepLab only supports image sizes in multiples of 16 (e.g. 256, 416, etc.).
pretrained_path
Optional string. Path where pre-trained model is saved.
MaXDeepLab
Object
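A minimal sketch; the chip path is an assumption, and the chip size must be a multiple of 16::

    from arcgis.learn import prepare_data, MaXDeepLab

    # panoptic segmentation chips exported with a 256 x 256 tile size
    data = prepare_data(r'/data/panoptic_chips', batch_size=4, chip_size=256)
    model = MaXDeepLab(data)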
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes the maximum number of class labels and masks in any chip in the entire dataset. Note: this might take a long time for larger datasets.
Train the model for the specified number of epochs and using the specified learning rates
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is False.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch-based models are supported. This feature is experimental. The default value is False.
Creates a MaXDeepLab Panoptic Segmentation
object from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
MaXDeepLab Panoptic Segmentation Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is True.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch-based models are supported. The default value is False.
Plot validation and training losses after fitting the model.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified, the active GIS connection is used.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Supported backbones for this model.
Supported dataset types for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function.
backbone
Optional string. Backbone model architecture. Default: 'vit_b'. Supported backbones: Vision Transformers (huge, large, and base) pretrained by Meta. Use the supported_backbones property to get the list of all the supported backbones.
pretrained_path
Optional string. Path where pre-trained model is saved.
kwargs
class_balancing
Optional boolean. If True, it will balance the cross-entropy loss inverse to the frequency of pixels per class. Default: False.
ignore_classes
Optional list. It will contain the list of class values on which model will not incur loss. Default: []
SamLoRA
Object
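A minimal sketch; the chip path and keyword values are illustrative::

    from arcgis.learn import prepare_data, SamLoRA

    data = prepare_data(r'/data/building_chips', batch_size=4)
    model = SamLoRA(data, backbone='vit_b', class_balancing=True)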
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Train the model for the specified number of epochs and using the specified learning rates
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is False.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch-based models are supported. This feature is experimental. The default value is False.
Creates a SamLoRA
object from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
SamLoRA
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is âTrueâ.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, optimum learning rate will be derived in mixed precision mode. Only Pytorch based models are supported. The default value is âFalseâ.
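As a sketch, lr_find() is commonly paired with fit(); this assumes the method returns the suggested learning rate when called without plotting:

# Derive a learning rate suggestion, then train with it
lr = model.lr_find(allow_plot=False)
model.fit(10, lr=lr)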
Plot validation and training losses after fitting the model.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet with the TensorFlow backend only. The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
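A hedged sketch of saving the trained model and publishing the resulting DLPK as an item; the model name and portal connection are placeholders:

from arcgis.gis import GIS

gis = GIS("home")  # placeholder; connect to your own organization

# Save locally and publish the DLPK to the connected GIS
model.save("samlora_buildings", publish=True, gis=gis, save_inference_file=True)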
Displays the results of a trained model on a part of the validation set.
Parameter
Description
rows
Optional Integer. Number of rows of results to be displayed.
kwargs
Parameter
Description
alpha
Optional Float. Default value is 0.5. Opacity of the labels for the corresponding images. Values range between 0 and 1, where 1 means opaque.
Supported list of backbones for this model.
Supported dataset types for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Creates a model object which generates images of type A from type B or type B from type A.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function.
pretrained_path
Optional string. Path where pre-trained model is saved.
gen_blocks
Optional integer. Number of ResNet blocks to use in generator.
lsgan
Optional boolean. If True, it will use Mean Squared Error; otherwise it will use Binary Cross Entropy.
CycleGAN
Object
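A minimal construction sketch using the documented keyword arguments (the chip path and values are illustrative):

from arcgis.learn import prepare_data, CycleGAN

# Hypothetical path to exported image-to-image translation chips
data = prepare_data(r"C:\data\cyclegan_chips", batch_size=4)

# Nine ResNet blocks in the generator, least-squares GAN loss
model = CycleGAN(data, gen_blocks=9, lsgan=True)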
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes Frechet Inception Distance (FID) on validation set.
Train the model for the specified number of epochs using the specified learning rates.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is 'False'.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. This feature is experimental. The default value is 'False'.
Creates a CycleGAN
object from an Esri Model Definition (EMD) file.
Parameter
Description
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
CycleGAN
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch based models are supported. The default value is 'False'.
Plot validation and training losses after fitting the model.
Predicts and displays the image.
Parameter
Description
img_path
Required path of an image.
convert_to
'A' if we want to generate an image of type 'A' from type 'B', or 'B' if we want to generate an image of type 'B' from type 'A', where A and B are the domain specifications that were used while training.
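For example, a hedged usage sketch of predict(), where 'A' and 'B' are the domains used during training and the image path is a placeholder:

# Generate a domain 'B' image from a domain 'A' input
model.predict(r"C:\data\test\scene_001.tif", convert_to="B")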
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet with the TensorFlow backend only. The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of a trained model on a part of the validation set.
Parameter
Description
rows
Optional int. Number of rows of results to be displayed.
kwargs
rgb_bands
Optional list of integers (band numbers) to be considered for rgb visualization.
Supported dataset types for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Creates a model object which generates fake images of type B from type A.
Parameter
Description
data
Required fastai Databunch with image chip sizes in multiples of 256. Returned data object from prepare_data()
function.
pretrained_path
Optional string. Path where pre-trained model is saved.
backbone
Optional function. Backbone CNN model to be used for creating the base of the Pix2Pix
, which is UNet with vanilla encoder by default. Compatible backbones as encoder: 'resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152', 'resnext50_32x4d', 'wide_resnet50_2'
perceptual_loss
Optional boolean. True when Perceptual loss is used. Default set to False.
Pix2Pix
Object
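A minimal sketch of constructing a Pix2Pix model with one of the listed encoder backbones (the path and settings are illustrative):

from arcgis.learn import prepare_data, Pix2Pix

# Hypothetical path to paired image chips
data = prepare_data(r"C:\data\pix2pix_chips", batch_size=4)

# ResNet-34 encoder with perceptual loss enabled
model = Pix2Pix(data, backbone="resnet34", perceptual_loss=True)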
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) on validation set. Additionally, computes Frechet Inception Distance (FID) for RGB imagery only.
Train the model for the specified number of epochs using the specified learning rates.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is 'False'.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. This feature is experimental. The default value is 'False'.
Creates a Pix2Pix
object from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
Pix2Pix
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch based models are supported. The default value is 'False'.
Plot validation and training losses after fitting the model.
Predicts and displays the image.
Parameter
Description
img_path
Required path of an image.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet with the TensorFlow backend only. The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of a trained model on a part of the validation set.
Parameter
Description
rows
Optional int. Number of rows of results to be displayed.
kwargs
rgb_bands
Optional list of integers (band numbers) to be considered for rgb visualization.
Supported backbones for this model.
Supported dataset types for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Creates a model object which generates fake images of type B from type A.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function.
pretrained_path
Optional string. Path where pre-trained model is saved.
kwargs
n_gen_filters
Optional int. Number of gen filters in first conv layer. Default: 64
gen_network
Optional string (global/local). Selects the model to use for the generator. Use 'global' if GPU memory is limited. Default: 'local'
n_downsample_global
Optional int. Number of downsampling layers in gen_network. Default: 4
n_blocks_global
Optional int. Number of residual blocks in the global generator network. Default: 9
n_local_enhancers
Optional int. Number of local enhancers to use. Default: 1
n_blocks_local
Optional int. Number of residual blocks in the local enhancer network. Default: 3
norm
Optional string. Instance normalization or batch normalization. Default: 'instance'
lsgan
Optional bool. If True, use least squares GAN; if False, use vanilla GAN. Default: True
n_dscr_filters
Optional int. Number of discriminator filters in the first conv layer. Default: 64
n_layers_dscr
Optional int. Only used if which_model_net_dscr==n_layers. Default: 3
n_dscr
Optional int. Number of discriminators to use. Default: 2
feat_loss
Optional bool. If 'True', use discriminator feature matching loss. Default: True
vgg_loss
Optional bool. If 'True', use VGG feature matching loss. Default: True (supported for 3 band imagery only).
lambda_feat
Optional int. Weight for feature matching loss. Default: 10
lambda_l1
Optional int. Weight for L1 loss. Default: 100 (not supported for 3 band imagery)
Pix2PixHD
Object
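A hedged sketch combining some of the keyword arguments listed above; the values shown are illustrative, not recommendations:

from arcgis.learn import prepare_data, Pix2PixHD

data = prepare_data(r"C:\data\pix2pixhd_chips", batch_size=2)

# Global generator (lower GPU memory), least-squares GAN, feature matching loss on
model = Pix2PixHD(
    data,
    gen_network="global",
    n_gen_filters=64,
    lsgan=True,
    feat_loss=True,
)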
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) on validation set. Additionally, computes Frechet Inception Distance (FID) for RGB imagery only.
Train the model for the specified number of epochs using the specified learning rates.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is 'False'.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. This feature is experimental. The default value is 'False'.
Creates a Pix2PixHD
object from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
Pix2PixHD
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch based models are supported. The default value is 'False'.
Plot validation and training losses after fitting the model.
Predicts and displays the image.
Parameter
Description
img_path
Required path of an image.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet with the TensorFlow backend only. The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of a trained model on a part of the validation set.
Parameter
Description
rows
Optional int. Number of rows of results to be displayed.
kwargs
rgb_bands
Optional list of integers (band numbers) to be considered for rgb visualization.
Supported dataset types for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Creates a model object which generates images of type C from type A and type B.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function.
pretrained_path
Optional string. Path where pre-trained model is saved.
WNet_cGAN
Object
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) on validation set.
Train the model for the specified number of epochs using the specified learning rates.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is 'False'.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. This feature is experimental. The default value is 'False'.
Creates a WNet_cGAN
object from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
WNet_cGAN
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch based models are supported. The default value is 'False'.
Plot validation and training losses after fitting the model.
Predicts and displays the image. This method is only supported for RGB images.
Parameter
Description
img_path1
Required string. Path of the first input image.
img_path2
Required string. Path of the second input image.
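A usage sketch of predict() with the two inputs required by WNet_cGAN; both paths are placeholders:

# Both inputs must be RGB images; the paths are illustrative
model.predict(
    img_path1=r"C:\data\test\input_a.tif",
    img_path2=r"C:\data\test\input_b.tif",
)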
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet with the TensorFlow backend only. The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of a trained model on a part of the validation set.
Parameter
Description
rows
Optional int. Number of rows of results to be displayed.
Supported dataset types for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Creates a model object which increases the resolution and improves the quality of images. Based on Fast.ai MOOC Lesson 7 and https://github.com/Janspiry/Image-Super-Resolution-via-Iterative-Refinement.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function.
backbone
Optional string. Backbone CNN model to be used for creating the base of the SuperResolution
, which is resnet34 by default. Compatible backbones: 'SR3', 'SR3_UViT', 'resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152'.
pretrained_path
Optional string. Path where pre-trained model is saved.
In addition to the explicitly named parameters, the SuperResolution model with the 'SR3' backbone supports the following optional keyword arguments:
kwargs
Parameter
Description
inner_channel
Optional int. Channel dimension. Default: 64.
norm_groups
Optional int. Group normalization. Default: 32
channel_mults
Optional list. Depth or channel multipliers. Default: [1, 2, 4, 4, 8, 8]
attn_res
Optional int. Number of attention in residual blocks. Default: 16
res_blocks
Optional int. Number of resnet block. Default: 3
dropout
Optional float. Dropout. Default: 0
schedule
Optional string. Type of noise schedule. Available types are 'linear', 'warmup10', 'warmup50', 'const', 'jsd', 'cosine'. Default: 'linear'
n_timestep
Optional int. Number of time-steps. Default: 1000
linear_start
Optional float. Schedule start. Default: 1e-06
linear_end
Optional float. Schedule end. Default: 1e-02
With the 'SR3_UViT' backbone, the below optional keyword arguments are supported:
patch_size
Optional int. Patch size for generating patch embeddings. Default: 16
embed_dim
Optional int. Dimension of embeddings. Default: 768
depth
Optional int. Depth of model. Default: 17
num_heads
Optional int. Number of attention heads. Default: 12
mlp_ratio
Optional float. Ratio of MLP. Default: 4.0
qkv_bias
Optional bool. Addition of bias in QK Vector. Default: False
SuperResolution
Object
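A minimal sketch of creating a diffusion-based SuperResolution model with the 'SR3' backbone and a few of the keyword arguments above (the values are illustrative):

from arcgis.learn import prepare_data, SuperResolution

data = prepare_data(r"C:\data\superres_chips", batch_size=4)

# SR3 diffusion backbone with a linear noise schedule
model = SuperResolution(data, backbone="SR3", n_timestep=1000, schedule="linear")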
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) on validation set.
Train the model for the specified number of epochs using the specified learning rates.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is 'False'.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. This feature is experimental. The default value is 'False'.
Creates a SuperResolution object from an Esri Model Definition (EMD) file.
Parameter
Description
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
emd_path
Required string. Path to Esri Model Definition file.
SuperResolution
Object
Creates a SuperResolution
object from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
SuperResolution
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch based models are supported. The default value is 'False'.
Plot validation and training losses after fitting the model.
Predicts and displays the image.
Parameter
Description
img_path
Required path of an image.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet with the TensorFlow backend only. The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of a trained model on a part of the validation set.
Parameter
Description
rows
Optional int. Number of rows of results to be displayed.
kwargs
sampling_type
Optional string. Type of sampling. Default: 'ddim'. These keyword arguments are applicable for the SR3 model type only.
n_timestep
Optional int. Number of time-steps for the sampling process. Default: 200
Supported backbones for this model.
Supported dataset types for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Creates an Image Captioning model.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function.
backbone
Optional function. Backbone CNN model to be used for creating the encoder of the ImageCaptioner
, which is resnet34 by default. It supports the ResNet family of backbones.
pretrained_path
Optional string. Path where pre-trained model is saved.
kwargs
Parameter
Description
decoder_params
Optional dictionary. The keys of the dictionary are embed_size, hidden_size, attention_size, teacher_forcing, dropout and pretrained_embeddings.
Default values:
decoder_params={
'embed_size':100,
'hidden_size':100,
'attention_size':100,
'teacher_forcing':1,
'dropout':0.1,
'pretrained_emb':False
}
Parameter Explanation:
'embed_size': Size of embedding to be used during training.
'hidden_size': Size of hidden layer.
'attention_size': Size of intermediate attention layer.
'teacher_forcing': Probability of teacher forcing.
'dropout': Dropout probability.
'pretrained_emb': If True, it will use fasttext embeddings.
ImageCaptioner
Object
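A hedged sketch of an ImageCaptioner with a custom decoder_params dictionary (keys as documented above; the values are illustrative):

from arcgis.learn import prepare_data, ImageCaptioner

data = prepare_data(r"C:\data\caption_chips", batch_size=8)

model = ImageCaptioner(
    data,
    backbone="resnet34",
    decoder_params={
        "embed_size": 100,
        "hidden_size": 100,
        "attention_size": 100,
        "teacher_forcing": 1,
        "dropout": 0.1,
        "pretrained_emb": False,
    },
)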
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes BLEU score over the validation set.
kwargs
Parameter
Description
beam_width
Optional int. The size of beam to be used during beam search decoding. Default is 5.
max_len
Optional int. The maximum length of the sentence to be decoded. Default is 20.
Train the model for the specified number of epochs using the specified learning rates.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is 'False'.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. This feature is experimental. The default value is 'False'.
Creates an ImageCaptioner model from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Optional fastai Databunch. Returned data object from prepare_data()
function or None for inferencing.
ImageCaptioner
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch based models are supported. The default value is 'False'.
Plot validation and training losses after fitting the model.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet with the TensorFlow backend only. The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Shows the ground truth and predictions of model side by side.
kwargs
Parameter
Description
beam_width
Optional int. The size of beam to be used during beam search decoding. Default is 3.
max_len
Optional int. The maximum length of the sentence to be decoded. Default is 15.
Supported torchvision backbones for this model.
Supported dataset types for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Model architecture from https://arxiv.org/abs/1801.07791. Creates a Point Cloud classification model.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function.
pretrained_path
Optional String. Path where pre-trained model is saved.
kwargs
Parameter
Description
encoder_params
Optional dictionary. The keys of the dictionary are out_channels, P, K, D and m.
Examples:
{'out_channels':[16, 32, 64, 96],
'P':[-1, 768, 384, 128],
'K':[12, 16, 16, 16],
'D':[1, 1, 2, 2],
'm':8
}
Length of out_channels, P, K, D should be the same. The length denotes the number of layers in the encoder.
Parameter Explanation
'out_channels': Number of channels produced by each layer,
'P': Number of points in each layer,
'K': Number of K-nearest neighbors in each layer,
'D': Dilation in each layer,
'm': Multiplier which is multiplied by each element of out_channels.
dropout
Optional float. This parameter will control overfitting. The range of this parameter is [0,1).
sample_point_num
Optional integer. The number of points that the model will actually process.
focal_loss
Optional boolean. If True, it will use focal loss. Default: False
PointCNN
Object
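A construction sketch passing a custom encoder_params dictionary; the dataset path and the dataset_type value are assumptions about how the point cloud data was prepared:

from arcgis.learn import prepare_data, PointCNN

# Hypothetical exported point cloud training data
data = prepare_data(r"C:\data\pointcloud_blocks", dataset_type="PointCloud", batch_size=2)

model = PointCNN(
    data,
    encoder_params={
        "out_channels": [16, 32, 64, 96],
        "P": [-1, 768, 384, 128],
        "K": [12, 16, 16, 16],
        "D": [1, 1, 2, 2],
        "m": 8,
    },
    dropout=0.2,
)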
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes precision, recall and f1-score on the validation sets.
Train the model for the specified number of epochs using the specified learning rates. The precision, recall and f1 scores shown in the training table are macro-averaged over all classes.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is 'False'.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. The default value is 'False'.
kwargs
Parameter
Description
iters_per_epoch
Optional integer. The number of iterations to run during the training phase.
Creates a PointCNN model object from a Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
PointCNN
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch based models are supported. The default value is 'False'.
Plot validation and training losses after fitting the model.
This method is used for inferencing using HDF files.
Parameter
Description
path
Required string. The path to the folder where the HDF files that need to be predicted are present.
output_path
Optional string. The path to the folder where the resulting HDF files will be written. Defaults to a 'results' folder in the input path.
kwargs
Parameter
Description
batch_size
Optional integer. The number of blocks to process in one batch. Default is set to 1.
Path where files are dumped.
Note: This method has been deprecated starting from ArcGIS API for Python version 1.9.0. Use Classify Points Using Trained Model tool available in 3D Analyst extension from ArcGIS Pro 2.8 onwards.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet with the TensorFlow backend only. The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results from your model on the validation set with ground truth on the left and predictions on the right. Visualization of data exported in a geographic coordinate system is not yet supported.
Parameter
Description
rows
Optional integer. Number of rows to show. Default value is 2 and the maximum value is the batch_size passed in prepare_data().
kwargs
Parameter
Description
color_mapping
Optional dictionary. Mapping from class value to RGB values. Default value example: {0:[220,220,220], 2:[255,0,0], 6:[0,255,0]}.
mask_class
Optional list of integers. Array containing class values to mask. Use this parameter to display the classes of interest. Default value is []. Example: if all the classes are [0, 1, 2], to display only class 0 set the mask_class parameter to [1, 2]. The list of all classes can be accessed from the data.classes attribute, where data is the Databunch object returned by the prepare_data() function.
width
Optional integer. Width of the plot. Default value is 750.
height
Optional integer. Height of the plot. Default value is 512.
max_display_point
Optional integer. Maximum number of points to display. Default is 20000. A warning will be raised if the total points to display exceeds this parameter. Setting this parameter will randomly sample the specified number of points and once set, it will be used for future uses.
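For example, a hedged show_results() call using the keyword arguments above; the color mapping and masked classes are illustrative:

# Show 2 rows, color class 2 red and class 6 green, hide class 0 from the display
model.show_results(
    rows=2,
    color_mapping={0: [220, 220, 220], 2: [255, 0, 0], 6: [0, 255, 0]},
    mask_class=[0],
    max_display_point=20000,
)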
Not implemented for this model as none of the layers are frozen by default.
Model architecture from https://arxiv.org/pdf/1911.11236v3.pdf. Creates a RandLANet point cloud segmentation model.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data function.
pretrained_path
Optional String. Path where pre-trained model is saved.
kwargs
Parameter
Description
encoder_params
Optional dictionary. The keys of the dictionary are out_channels, sub_sampling_ratio, k_n.
Examples:
{'out_channels':[16, 64, 128, 256], 'sub_sampling_ratio':[4, 4, 4, 4], 'k_n':16 }
Length of out_channels and sub_sampling_ratio should be the same. The length denotes the number of layers in the encoder.
Parameter Explanation
'out_channels': Number of channels produced by each layer,
'sub_sampling_ratio': Sampling ratio of random sampling at each layer,
'k_n': Number of K-nearest neighbors for a point.
focal_loss
Optional boolean. If True, it will use focal loss. Default: False
RandLANet Object
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes precision, recall and f1-score on the validation sets.
Train the model for the specified number of epochs using the specified learning rates. The precision, recall and f1 scores shown in the training table are macro-averaged over all classes.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is 'False'.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. The default value is 'False'.
kwargs
Parameter
Description
iters_per_epoch
Optional integer. The number of iterations to run during the training phase.
Creates a RandLANet model object from a Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
RandLANet
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch based models are supported. The default value is 'False'.
Plot validation and training losses after fitting the model.
This method is used for inferencing using HDF files.
Parameter
Description
path
Required string. The path to the folder containing the HDF files to be predicted.
output_path
Optional string. The path to the folder where the resulting HDF files are written. Defaults to a 'results' folder inside the input path.
kwargs
Parameter
Description
batch_size
Optional integer. The number of blocks to process in one batch. Default is set to 1.
Path where files are dumped.
Note: This method has been deprecated starting from ArcGIS API for Python version 1.9.0. Use Classify Points Using Trained Model tool available in 3D Analyst extension from ArcGIS Pro 2.8 onwards.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the model saved with the PyTorch framework and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
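Continuing with the model trained in the fit() sketch above, a hedged example of saving and optionally publishing the package; the model name is illustrative.

model.save(
    "randlanet_ground",         # stored at the pre-defined location, or pass a full path
    publish=True,               # publish the DLPK as an item on the active GIS
    compute_metrics=True,
    save_optimizer=False,       # skip the optimizer state to keep the package smaller
)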
Displays the results from your model on the validation set with ground truth on the left and predictions on the right. Visualization of data, exported in a geographic coordinate system is not yet supported.
Parameter
Description
rows
Optional integer. Number of rows to show. Default value is 2 and the maximum value is the batch_size passed in prepare_data().
kwargs
Parameter
Description
color_mapping
Optional dictionary. Mapping from class value to RGB values. Default value example: {0:[220,220,220], 2:[255,0,0], 6:[0,255,0]}.
mask_class
Optional list of integers. Array containing class values to mask. Use this parameter to display the classes of interest. Default value is []. Example: if all the classes are [0, 1, 2], set mask_class to [1, 2] to display only class 0. The list of all classes can be accessed from the data.classes attribute, where data is the Databunch object returned by the prepare_data() function.
width
Optional integer. Width of the plot. Default value is 750.
height
Optional integer. Height of the plot. Default value is 512.
max_display_point
Optional integer. Maximum number of points to display. Default is 20000. A warning will be raised if the total points to display exceeds this parameter. Setting this parameter will randomly sample the specified number of points and once set, it will be used for future uses.
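A hedged sketch of calling show_results with the keyword arguments described above; the class values and colors are illustrative.

model.show_results(
    rows=2,
    color_mapping={0: [220, 220, 220], 2: [255, 0, 0], 6: [0, 255, 0]},
    mask_class=[0],             # hide class 0, show the remaining classes
    max_display_point=20000,
)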
Not implemented for this model as none of the layers are frozen by default.
Model architecture from https://arxiv.org/pdf/2104.04891.pdf. Creates an SQNSeg point cloud segmentation model.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data function.
pretrained_path
Optional String. Path where pre-trained model is saved.
kwargs
Parameter
Description
encoder_params
Optional dictionary. The keys of the dictionary are out_channels, sub_sampling_ratio, k_n.
- Example:
{'out_channels': [16, 64, 128, 256], 'sub_sampling_ratio': [4, 4, 4, 4], 'k_n': 16}
The lengths of out_channels and sub_sampling_ratio should be the same; the length denotes the number of layers in the encoder.
- Parameter explanation:
'out_channels': Number of channels produced by each layer,
'sub_sampling_ratio': Sampling ratio of random sampling at each layer,
'k_n': Number of K-nearest neighbors for a point.
See the constructor sketch below for a usage example.
focal_loss
Optional boolean. If True, it will use focal loss. Default: False
SQNSeg Object
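A minimal sketch of creating an SQNSeg model with the kwargs documented above, assuming point cloud training data prepared with prepare_data; all paths and values are illustrative.

from arcgis.learn import prepare_data, SQNSeg

data = prepare_data(r"/data/pointcloud_chips", dataset_type="PointCloud", batch_size=2)
sqn = SQNSeg(
    data,
    encoder_params={
        "out_channels": [16, 64, 128, 256],
        "sub_sampling_ratio": [4, 4, 4, 4],   # must match the length of out_channels
        "k_n": 16,                            # K-nearest neighbors per point
    },
    focal_loss=True,
)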
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes precision, recall and f1-score on the validation sets.
Train the model for the specified number of epochs and using the specified learning rates. The precision, recall and f1 scores shown in the training table are macro averaged over all classes.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is False.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. The default value is False.
kwargs
Parameter
Description
iters_per_epoch
Optional integer. The number of iterations to run during the training phase.
Creates an SQNSeg model object from a Deep Learning Package(DLPK) or Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
SQNSeg
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is True.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, optimum learning rate will be derived in mixed precision mode. Only PyTorch based models are supported. The default value is False.
Plot validation and training losses after fitting the model.
This method is used for inferencing using HDF files.
Parameter
Description
path
Required string. The path to the folder containing the HDF files to be predicted.
output_path
Optional string. The path to the folder where the resulting HDF files are written. Defaults to a 'results' folder inside the input path.
kwargs
Parameter
Description
batch_size
Optional integer. The number of blocks to process in one batch. Default is set to 1.
Path where files are dumped.
Note: This method has been deprecated starting from ArcGIS API for Python version 1.9.0. Use Classify Points Using Trained Model tool available in 3D Analyst extension from ArcGIS Pro 2.8 onwards.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the model saved with the PyTorch framework and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results from your model on the validation set with ground truth on the left and predictions on the right. Visualization of data, exported in a geographic coordinate system is not yet supported.
Parameter
Description
rows
Optional integer. Number of rows to show. Default value is 2 and the maximum value is the batch_size passed in prepare_data().
kwargs
Parameter
Description
color_mapping
Optional dictionary. Mapping from class value to RGB values. Default value example: {0:[220,220,220], 2:[255,0,0], 6:[0,255,0]}.
mask_class
Optional list of integers. Array containing class values to mask. Use this parameter to display the classes of interest. Default value is []. Example: if all the classes are [0, 1, 2], set mask_class to [1, 2] to display only class 0. The list of all classes can be accessed from the data.classes attribute, where data is the Databunch object returned by the prepare_data() function.
width
Optional integer. Width of the plot. Default value is 750.
height
Optional integer. Height of the plot. Default value is 512.
max_display_point
Optional integer. Maximum number of points to display. Default is 20000. A warning will be raised if the total points to display exceeds this parameter. Setting this parameter will randomly sample the specified number of points and once set, it will be used for future uses.
Not implemented for this model as none of the layers are frozen by default.
Model architecture from https://arxiv.org/pdf/2312.10035. Creates a PTv3Seg point cloud segmentation model.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data function.
pretrained_path
Optional String. Path where pre-trained model is saved.
kwargs
Parameter
Description
sub_sampling_ratio
Optional int. Sampling ratio of points in each layer. Default: 2.
seq_len
Optional int. Sequence length for transformer. Default: 1024.
voxel_size
Optional float. Defines the size of voxels in meters for a block. Default: 0.02.
focal_loss
Optional boolean. If True, it will use focal loss. Default: False.
PTv3Seg Object
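A minimal sketch of creating a PTv3Seg model with the kwargs documented above; the path is illustrative and the values shown mirror the documented defaults.

from arcgis.learn import prepare_data, PTv3Seg

data = prepare_data(r"/data/pointcloud_chips", dataset_type="PointCloud", batch_size=2)
ptv3 = PTv3Seg(
    data,
    sub_sampling_ratio=2,   # sampling ratio of points in each layer
    seq_len=1024,           # transformer sequence length
    voxel_size=0.02,        # voxel size in meters per block
    focal_loss=False,
)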
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes precision, recall and f1-score on the validation sets.
Train the model for the specified number of epochs and using the specified learning rates. The precision, recall and f1 scores shown in the training table are macro averaged over all classes.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is False.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. The default value is False.
kwargs
Parameter
Description
iters_per_epoch
Optional integer. The number of iterations to run during the training phase.
Creates a PTv3Seg model object from a Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
PTv3Seg
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is True.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, optimum learning rate will be derived in mixed precision mode. Only PyTorch based models are supported. The default value is False.
Plot validation and training losses after fitting the model.
This method is used for inferencing using HDF files.
Parameter
Description
path
Required string. The path to the folder containing the HDF files to be predicted.
output_path
Optional string. The path to the folder where the resulting HDF files are written. Defaults to a 'results' folder inside the input path.
kwargs
Parameter
Description
batch_size
Optional integer. The number of blocks to process in one batch. Default is set to 1.
Path where files are dumped.
Note: This method has been deprecated starting from ArcGIS API for Python version 1.9.0. Use Classify Points Using Trained Model tool available in 3D Analyst extension from ArcGIS Pro 2.8 onwards.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the model saved with the PyTorch framework and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results from your model on the validation set with ground truth on the left and predictions on the right. Visualization of data, exported in a geographic coordinate system is not yet supported.
Parameter
Description
rows
Optional integer. Number of rows to show. Default value is 2 and the maximum value is the batch_size passed in prepare_data().
kwargs
Parameter
Description
color_mapping
Optional dictionary. Mapping from class value to RGB values. Default value example: {0:[220,220,220], 2:[255,0,0], 6:[0,255,0]}.
mask_class
Optional list of integers. Array containing class values to mask. Use this parameter to display the classes of interest. Default value is []. Example: if all the classes are [0, 1, 2], set mask_class to [1, 2] to display only class 0. The list of all classes can be accessed from the data.classes attribute, where data is the Databunch object returned by the prepare_data() function.
width
Optional integer. Width of the plot. Default value is 750.
height
Optional integer. Height of the plot. Default value is 512.
max_display_point
Optional integer. Maximum number of points to display. Default is 20000. A warning will be raised if the total points to display exceeds this parameter. Setting this parameter will randomly sample the specified number of points and once set, it will be used for future uses.
Not implemented for this model as none of the layers are frozen by default.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data()
function.
model
Required model name or path to the configuration file from MMDetection3D
repository. The list of the supported models can be queried using supported_models
.
pretrained_path
Optional string. Path where pre-trained model is saved.
kwargs
Parameter
Description
voxel_parms
Optional dictionary. The keys of the dictionary are voxel_size, voxel_points, and max_voxels. The default values of voxel_size, voxel_points, and max_voxels are automatically calculated based on the 'block size', 'object size' and 'average number of points per block' of the exported data. Example (see also the constructor sketch below):
{'voxel_size': [0.05, 0.05, 0.1],
'voxel_points': 10,
'max_voxels': [20000, 40000]}
Parameter explanation:
'voxel_size': List of voxel dimensions in meters [x, y, z],
'voxel_points': An integer that decides the maximum number of points per voxel,
'max_voxels': List of the maximum number of voxels in [training, validation].
Default: None.
MMDetection3D
Object
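A hedged sketch of creating an MMDetection3D model; data is assumed to be the Databunch returned by prepare_data() for exported 3D object detection data, and the voxel values mirror the example above.

from arcgis.learn import MMDetection3D

print(MMDetection3D.supported_models)           # valid values for the `model` argument
mmdet3d = MMDetection3D(
    data,                                       # Databunch from prepare_data()
    model=MMDetection3D.supported_models[0],    # pick one of the supported configurations
    voxel_parms={
        "voxel_size": [0.05, 0.05, 0.1],        # [x, y, z] in meters
        "voxel_points": 10,                     # maximum points per voxel
        "max_voxels": [20000, 40000],           # [training, validation]
    },
)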
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes average precision on the validation/train set for each class.
Parameter
Description
detect_thresh
Optional float. The probability above which a detection will be considered for computing average precision. Default: 0.3.
iou_thresh
Optional float. The intersection over union threshold with the ground truth labels, above which a predicted bounding box will be considered a true positive. Default: 0.1.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive. Default: 0.01.
mean
Optional bool. If False returns class-wise average precision otherwise returns mean average precision. Default: False.
kwargs
Parameter
Description
view_type
Optional string. Dataset type to display the results.
valid
- For validation set.
train
- For training set.
Default: 'valid'.
dict if mean is False otherwise float
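Continuing with the mmdet3d model created above, a hedged sketch of computing this metric, assuming it is exposed as average_precision_score (as it is for other arcgis.learn detection models).

# Class-wise average precision as a dict.
per_class_ap = mmdet3d.average_precision_score(
    detect_thresh=0.3, iou_thresh=0.1, nms_overlap=0.01, mean=False
)

# Mean average precision as a float, computed on the training set instead.
mean_ap = mmdet3d.average_precision_score(mean=True, view_type="train")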
Train the model for the specified number of epochs and using the specified learning rates
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is False.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. This feature is experimental. The default value is False.
Creates a MMDetection3D
object from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
MMDetection3D
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is True.
Plot validation and training losses after fitting the model.
This method is used for inferencing using HDF files.
Parameter
Description
path
Required string. The path to the folder containing the HDF files to be predicted.
output_path
Optional string. The path to the folder where the resulting HDF files are written. Defaults to a 'results' folder inside the input path.
kwargs
Parameter
Description
batch_size
Optional integer. The number of blocks to process in one batch. Default is set to 1.
detect_thresh
Optional float. The probability above which a detection will be considered valid. Default: 0.1.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive. Default: 0.6.
Path where files are dumped.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the model saved with the PyTorch framework and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of the trained model on a part of the validation/train set. Colors of the PointCloud are only used for better visualization and do not depict the actual class code colors. Visualization of data exported in a geographic coordinate system is not yet supported.
Parameter
Description
rows
Optional int. Number of rows of results to be displayed.
detect_thresh
Optional float. The probability above which a detection will be considered valid.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
kwargs
Parameter
Description
color_mapping
Optional dictionary. Mapping from object id to RGB values. Colors of the PointCloud via color_mapping are only used for better visualization, and it does not depict the actual classcode colors. Default value example: {0:[220,220,220], 2:[255,0,0], 6:[0,255,0]}.
max_display_point
Optional integer. Maximum number of points to display. Default is 20000. A warning will be raised if the total points to display exceeds this parameter. Setting this parameter will randomly sample the specified number of points and once set, it will be used for future uses.
view_type
Optional string. Dataset type to display the results.
valid
- For validation set.
train
- For training set.
Default: 'valid'.
List of models supported by this class.
Not implemented for this model as none of the layers are frozen by default.
Model architecture from https://arxiv.org/pdf/2312.10035. Creates a PTv3Det point cloud detection model.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data function.
pretrained_path
Optional String. Path where pre-trained model is saved.
kwargs
Parameter
Description
voxel_parms
Optional dictionary. The keys of the dictionary are voxel_size, voxel_points, and max_voxels. The default values of voxel_size, voxel_points, and max_voxels are automatically calculated based on the 'block size', 'object size' and 'average number of points per block' of the exported data. Example (see also the constructor sketch below):
{'voxel_size': [0.05, 0.05, 0.1],
'voxel_points': 10,
'max_voxels': [20000, 40000]}
Parameter explanation:
'voxel_size': List of voxel dimensions in meters [x, y, z],
'voxel_points': An integer that decides the maximum number of points per voxel,
'max_voxels': List of the maximum number of voxels in [training, validation].
Default: None.
seq_len
Optional int. Sequence length for transformer. Default: 1024.
PTv3Det Object
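A hedged sketch of creating a PTv3Det model with the kwargs documented above; data is assumed to be the Databunch returned by prepare_data() for exported 3D object detection data.

from arcgis.learn import PTv3Det

ptv3det = PTv3Det(
    data,                                   # Databunch from prepare_data()
    voxel_parms={
        "voxel_size": [0.05, 0.05, 0.1],
        "voxel_points": 10,
        "max_voxels": [20000, 40000],
    },
    seq_len=1024,                           # transformer sequence length
)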
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes average precision on the validation/train set for each class.
Parameter
Description
detect_thresh
Optional float. The probability above which a detection will be considered for computing average precision. Default: 0.3.
iou_thresh
Optional float. The intersection over union threshold with the ground truth labels, above which a predicted bounding box will be considered a true positive. Default: 0.1.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive. Default: 0.01.
mean
Optional bool. If False returns class-wise average precision otherwise returns mean average precision. Default: False.
kwargs
Parameter
Description
view_type
Optional string. Dataset type to display the results.
valid
- For validation set.
train
- For training set.
Default: 'valid'.
dict if mean is False otherwise float
Train the model for the specified number of epochs and using the specified learning rates
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is False.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. This feature is experimental. The default value is False.
Creates a PTv3Det
object from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
PTv3Det
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is True.
Plot validation and training losses after fitting the model.
This method is used for inferencing using HDF files.
Parameter
Description
path
Required string. The path to the folder containing the HDF files to be predicted.
output_path
Optional string. The path to the folder where the resulting HDF files are written. Defaults to a 'results' folder inside the input path.
kwargs
Parameter
Description
batch_size
Optional integer. The number of blocks to process in one batch. Default is set to 1.
detect_thresh
Optional float. The probability above which a detection will be considered valid. Default: 0.1.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive. Default: 0.6.
Path where files are dumped.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the model saved with the PyTorch framework and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of the trained model on a part of the validation/train set. Colors of the PointCloud are only used for better visualization and do not depict the actual class code colors. Visualization of data exported in a geographic coordinate system is not yet supported.
Parameter
Description
rows
Optional int. Number of rows of results to be displayed.
detect_thresh
Optional float. The probability above which a detection will be considered valid.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
kwargs
Parameter
Description
color_mapping
Optional dictionary. Mapping from object id to RGB values. Colors of the PointCloud via color_mapping are only used for better visualization, and it does not depict the actual classcode colors. Default value example: {0:[220,220,220], 2:[255,0,0], 6:[0,255,0]}.
max_display_point
Optional integer. Maximum number of points to display. Default is 20000. A warning will be raised if the total points to display exceeds this parameter. Setting this parameter will randomly sample the specified number of points and once set, it will be used for future uses.
view_type
Optional string. Dataset type to display the results.
valid
- For validation set.
train
- For training set.
Default: 'valid'.
Not implemented for this model as none of the layers are frozen by default.
Creates a SiamMask
object.
Parameter
Description
data
Optional fastai Databunch. Returned data object from prepare_data()
function with dataset_type as 'ObjectTracking' and data format as 'YouTube-VOS'. Default value is None.
SiamMask
Object
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes mean IOU and f-measure on validation set.
Parameter
Description
iou_thresh
Optional float. The intersection over union threshold with the ground truth mask, above which a predicted mask will be considered a true positive.
dict with mean IOU and F-Measure
Train the model for the specified number of epochs and using the specified learning rates
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is False.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. This feature is experimental. The default value is False.
Freezes the pretrained backbone.
Creates a SiamMask
Object tracker from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
SiamMask
Object
Initializes the position of the object in the frame/Image using detections.
Parameter
Description
frame
Required numpy array. frame is used to initialize the objects to track.
detections
Required list. A list of bounding boxes.
labels
Optional list. A list of labels corresponding to the bounding boxes.
Track
list
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is True.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, optimum learning rate will be derived in mixed precision mode. Only PyTorch based models are supported. The default value is False.
Plot validation and training losses after fitting the model.
Removes the tracks from the track list using track_ids
Parameter
Description
track_ids
Required List. List of track ids to be removed from the track list.
Updated track list
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the model saved with the PyTorch framework and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of a trained model on a part of the validation set
Parameter
Description
rows
Optional int. Number of rows to display.
Supported torchvision backbones for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Tracks the position of the object in the frame/Image
Parameter
Description
frame
Required numpy array. frame is used to update the object track.
kwargs
Parameter
Description
detections
Optional list. A list of bounding boxes.
labels
Optional list. A list of labels.
Updated track list
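A hedged sketch of a SiamMask tracking loop, assuming the initialize and track operations described above are exposed as init() and update(); the model path, video source and bounding box are illustrative.

import cv2
from arcgis.learn import SiamMask

tracker = SiamMask.from_model(r"/models/siammask/siammask.dlpk")
cap = cv2.VideoCapture(r"/videos/traffic.mp4")

ok, frame = cap.read()
tracks = tracker.init(frame, detections=[[50, 60, 120, 90]], labels=["car"])

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    tracks = tracker.update(frame)      # updated track list for the current frame

cap.release()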
Creates a DeepSort
object.
Parameter
Description
data
Fastai Databunch. Returned data object from prepare_data()
function with dataset_type=Imagenet. Default value is None. DeepSort only supports image size of (3, 128, 64)
DeepSort
Object
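A minimal sketch of preparing re-identification chips and training DeepSort; the path is illustrative and the chips are assumed to already match the required (3, 128, 64) size.

from arcgis.learn import prepare_data, DeepSort

reid_data = prepare_data(r"/data/reid_chips", dataset_type="Imagenet", batch_size=64)
ds_model = DeepSort(reid_data)
ds_model.fit(epochs=10, lr=0.0001)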
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Train the model for the specified number of epochs and using the specified learning rates
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is False.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. This feature is experimental. The default value is False.
Creates a DeepSort Object tracker from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data()
function or None for inferencing.
DeepSort
Object
Initializes the DeepSort
tracker for inference.
Parameter
Description
frame
Required numpy array. Frame is used to initialize the tracker.
detections
Required list. A list of bounding boxes corresponding to the detections.
labels
Optional list. A list of labels corresponding to the detections.
scores
Optional list. A list of scores corresponding to the detections.
Track
list
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is True.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, optimum learning rate will be derived in mixed precision mode. Only PyTorch based models are supported. The default value is False.
Plot validation and training losses after fitting the model.
Removes the tracks from the track list using track_ids.
Parameter
Description
track_ids
Required list. list of track ids to be removed from the track list.
Updated track list
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro >= 2.8, load the model saved with the PyTorch framework and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of a trained model on a part of the validation set.
Parameter
Description
rows
Optional int. Number of rows of results to be displayed.
Supported torchvision backbones for this model.
Supported dataset types for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Updates the DeepSort
tracker.
Parameter
Description
frame
Required numpy array. Frame is used to update the tracker.
detections
Required list. A list of bounding boxes corresponding to the detections. bounding box = [xmin, ymin, width, height]
labels
Optional list. A list of labels corresponding to the detections.
scores
Optional list. A list of scores corresponding to the detections.
Track
list
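A hedged sketch of running a trained DeepSort tracker on a video, assuming the initialize and update operations described above are exposed as init() and update(); the detections are hard-coded for illustration and would normally come from an object detector.

import cv2
from arcgis.learn import DeepSort

ds_model = DeepSort.from_model(r"/models/deepsort/deepsort.dlpk")
cap = cv2.VideoCapture(r"/videos/mall.mp4")

ok, frame = cap.read()
tracks = ds_model.init(frame, detections=[[30, 40, 80, 160]], labels=["person"], scores=[0.9])

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # [xmin, ymin, width, height] boxes for the current frame
    tracks = ds_model.update(frame, detections=[[32, 42, 80, 160]], labels=["person"], scores=[0.88])

cap.release()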
Creates ObjectTracker
Object.
Parameter
Description
tracker
Required. Returned tracker object from from_model API of object tracking models.
detector
Optional. Returned detector object from from_model API of object detection models.
tracker_options
Optional dictionary. A dictionary with keys as parameter names and values as parameter values.
'enable_post_processing' - refers to the flag which enables/disables post-processing of tracks internal to the ObjectTracker module. For DeepSort, it's recommended to keep this flag as False. Default: True.
'detection_interval' - refers to the interval in frames at which the detector is invoked. It should be >= 1.
'detection_threshold' - refers to the lower threshold for selecting the detections.
'detect_track_failure' - refers to the flag which enables/disables the logic to detect whether the object appearance has changed.
'recover_track' - refers to the flag which enables/disables track recovery post failure.
'stab_period' - refers to the number of frames after which post-processing starts.
'detect_fail_interval' - refers to the number of frames after which to detect track failure.
'min_obj_size' - refers to the size in pixels below which tracking is assumed to have failed.
'template_history' - refers to the number of frames before the current frame at which the template image is fetched.
'status_history' - refers to the number of frames over which the status of the track is used to detect track failure.
'status_fail_threshold' - refers to the threshold for the ratio between the number of frames for which the object is searched for and the total number of frames, which needs to be crossed for track failure detection.
'search_period' - refers to the number of frames for which the object is searched for before declaring it lost.
'knn_distance_ratio' - refers to the threshold for the ratio of the distances between the template descriptor and the two best matched detection descriptors, used for filtering best matches.
'recover_conf_threshold' - refers to the minimum confidence value over which recovery logic is enabled.
'recover_iou_threshold' - refers to the minimum overlap between template and detection for successful recovery.
ObjectTracker
Object
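A hedged sketch of combining a detector and a tracker into an ObjectTracker; the model paths and option values are illustrative.

from arcgis.learn import ObjectTracker, SiamMask, YOLOv3

tracker = SiamMask.from_model(r"/models/siammask/siammask.dlpk")
detector = YOLOv3.from_model(r"/models/yolov3/yolov3.dlpk")

obj_tracker = ObjectTracker(
    tracker,
    detector=detector,
    tracker_options={
        "enable_post_processing": True,
        "detection_interval": 5,        # run the detector every 5 frames
        "detection_threshold": 0.5,     # lower threshold for selecting detections
    },
)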
Initializes tracks based on the detections returned by detector/ manually fed to the function.
Parameter
Description
frame
Required numpy array. frame is used to initialize the objects to track.
detections
Optional list. A list of bounding boxes to initialize the tracks.
labels
Optional list. A list of labels corresponding to the detections.
reset
Optional flag. Indicates whether to reset the tracker and remove all existing tracks before initialization.
list of active track objects
Removes the tracks corresponding to track_ids parameter.
Parameter
Description
tracks_ids
Required list. List of track ids to be removed.
Tracks the position of the object in the frame/Image.
Parameter
Description
frame
Required numpy array. frame is the current frame to be used to track the objects.
list of active track objects
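A hedged sketch of the frame-by-frame loop, continuing with the obj_tracker constructed above and assuming the operations described here are exposed as init() and update(); the video path is illustrative and the printed attributes follow the Track description below.

import cv2

cap = cv2.VideoCapture(r"/videos/parking_lot.mp4")

ok, frame = cap.read()
tracks = obj_tracker.init(frame, reset=True)    # detections come from the attached detector

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    tracks = obj_tracker.update(frame)
    for t in tracks:
        print(t.id, t.label, t.bbox)            # attributes per the Track description below

cap.release()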
Creates a Track object, used to maintain the state of a track
Parameter
Description
id
Required int. ID for each track initialized
label
Required String. label/class name of the track
bbox
Required list. Bounding box of the track
mask
Required numpy array. Mask for the track
Track
Object
Creates the object for ScannedMapDigitizer
class
Parameter
Description
input_folder
Path to the folder that contains extracted maps
output_folder
Path to the folder where intermediate results should get generated
Generates the binary masked images
Parameter
Description
color_list
A list containing different color inputs in list/tuple format [(r, g, b)]. For example: [[110, 10, 200], [210, 108, 11]].
color_delta
A value which defines the range around the threshold value for a specific color used for creating the mask images. Default value is 60.
kernel_size
A list of 2 integers corresponding to size of the morphological filter operations closing and opening respectively.
kernel_type
A string value defining the type/shape of the kernel. The kernel type can be 'rect', 'elliptical' or 'cross'. Default value is 'rect'.
show_result
A boolean value. Set to True to visualize results and False otherwise.
This method generates templates and color masks from scanned maps which are used in the subsequent step of template matching.
Parameter
Description
color
A list containing r, g, b value representing land color. The color parameter is required for extracting the land region and generating the binary mask.
color_delta
A value which defines the range around the threshold value for a specific color used for creating the mask images. Default value is 60.
kernel_size
An integer corresponding to size of kernel used for dilation(morphological operation).
show_result
A Boolean value. Set to 'True' to visualize results and set to 'False' otherwise.
This method is the final step in the pipeline. It maps the species regions onto the search image using the computed transformations and generates shapefiles for the species regions that can be visualized and further edited in ArcGIS Pro.
Parameter
Description
show_result
A Boolean value. Set to 'True' to visualize results and set to 'False' otherwise.
This method estimates the control point pairs by traversing the contours of the template image and finding the corresponding matches on the search region ROI image.
Parameter
Description
padding_param
A tuple that contains x-padding and y-padding at 0th and 1st index respectively.
show_result
A Boolean value. Set to 'True' to visualize results and set to 'False' otherwise.
Getter function for search region extent
This method finds the location of the best match of a smaller image (template) in a larger image (search image), assuming it exists in the larger image.
Parameter
Description
min_scale
An integer representing the minimum scale at which template matching is performed.
max_scale
An integer representing maximum scale at which template matching is performed.
num_scales
An integer representing the number of scales at which template matching is performed.
show_result
A Boolean value. Set to 'True' to visualize results and set to 'False' otherwise.
This method prepares the search region in which the prepared templates are to be searched.
Parameter
Description
search_image
Path to the bigger image/shapefile.
color
A list containing the r, g, b values representing the water color, e.g. [173, 217, 219].
extent
Extent defines the extreme longitude/latitude of the search region.
image_height
Height of the search region.
image_width
Width of the search region.
show_result
A boolean value. Set to 'True' to visualize results and set to 'False' otherwise.
Creates the object for ScannedMapDigitizer
class
Parameter
Description
extent
Extent defines the extreme longitude/latitude of the search region.
Creates a FullyConnectedNetwork
Object, based on Fast.ai's Tabular Learner.
Parameter
Description
data
Required TabularDataObject. Returned data object from prepare_tabulardata
function.
layers
Optional list specifying the number of nodes in each layer. Default: [500, 100], i.e. 2 layers with 500 and 100 nodes respectively.
emb_szs
Optional dict, variable name with embedding size for categorical variables. If not specified, then calculated using fastai.
FullyConnectedNetwork
Object
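A minimal sketch of constructing the network from prepared tabular data. The feature layer, field names, and prepare_tabulardata keyword arguments shown are assumptions for illustration.
# Hypothetical sketch; the layer and field names are assumptions.
>>> from arcgis.learn import prepare_tabulardata, FullyConnectedNetwork
>>> data = prepare_tabulardata(input_features=parcels_layer,
...                            variable_predict="sale_price",
...                            explanatory_variables=["sqft", "age"])
>>> fcn = FullyConnectedNetwork(data, layers=[500, 100])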
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
The global feature importance summary plot from SHAP. This feature is temporarily disabled.
Train the model for the specified number of epochs, using the specified learning rates.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select the 1cycle learning rate schedule. If set to False, no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the monitored value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save a checkpoint during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is 'False'.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. This feature is experimental. The default value is 'False'.
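A minimal sketch of training with the fit method described above; the epoch count and learning rate are arbitrary.
# Hypothetical sketch; hyperparameter values are arbitrary.
>>> fcn.lr_find()                     # optional: inspect the loss vs. learning rate plot first
>>> fcn.fit(epochs=25,
...         lr=0.001,                 # or lr=None to let the optimal rate be deduced
...         early_stopping=True,
...         checkpoint=True,
...         monitor='valid_loss')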
Creates a FullyConnectedNetwork
Object from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_tabulardata
function or None for inferencing.
FullyConnectedNetwork
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch based models are supported. The default value is 'False'.
Plot validation and training losses after fitting the model.
Predict on data from a feature layer, dataframe, and/or raster data. A usage sketch follows the parameter table.
Parameter
Description
input_features
Optional FeatureLayer
or spatially enabled dataframe. Required if prediction_type='features'. Contains features with location and some or all fields required to infer the dependent variable value.
explanatory_rasters
Optional list of Raster Objects. If prediction_type=ârasterâ, must contain all rasters required to make predictions.
datefield
Optional string. Field name from the feature layer that contains the date/time for the input features. Same as prepare_tabulardata()
.
distance_features
Optional List of FeatureLayer
objects. These layers are used for calculation of the fields 'NEAR_DIST_1', 'NEAR_DIST_2', etc. in the output dataframe. These fields contain the nearest feature distance from the input_features. Same as prepare_tabulardata()
.
output_layer_name
Optional string. Used for publishing the output layer.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
prediction_type
Optional String. Set to 'features' or 'dataframe' to make output feature layer predictions; the input_features argument is required in this case.
Set to 'raster' to make a prediction raster; explanatory_rasters must be specified in this case.
output_raster_path
Optional path. Required when prediction_type='raster'; saves the output raster to this path.
match_field_names
Optional dictionary. Specify the mapping of field names from the prediction set to the training set. For example:
{
    'Field_Name_1': 'Field_1',
    'Field_Name_2': 'Field_2'
}
explain
Optional Bool. Setting this parameter to true generates a prediction explanation plot. The plot is generated using the model interpretability library SHAP (https://github.com/slundberg/shap). This feature is temporarily disabled.
explain_index
Optional Int. The index of the dataframe passed to the predict function for which model interpretability is desired. If the parameter is not passed and if the explain parameter is set to true, the SHAP plot will be generated for a random index of the dataframe.
Feature Layer if prediction_type='features', dataframe for prediction_type='dataframe', else creates an output raster.
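A minimal sketch of the predict call described above; the input layer, output layer name, and field mapping are assumptions.
# Hypothetical sketch; the layer and field mapping are assumptions.
>>> results = fcn.predict(input_features=new_parcels_layer,
...                       prediction_type='features',
...                       output_layer_name='predicted_sale_price',
...                       match_field_names={'SQFT': 'sqft', 'AGE': 'age'})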
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Folder path to save the model.
framework
Optional string. Defines the framework of the model (only supported by SingleShotDetector, currently). If the framework used is TF-ONNX, batch_size can be passed as an optional keyword argument. Framework choice: 'PyTorch' and 'TF-ONNX'.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
kwargs
Optional parameters: Boolean overwrite; if True, it will overwrite the item on ArcGIS Online/Enterprise. Default is False.
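A minimal sketch of saving the trained model with the save method described above; the folder path is arbitrary.
# Hypothetical sketch; the output folder is arbitrary.
>>> fcn.save(r'C:\models\sale_price_fcn', publish=True, gis=gis)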
R2 score for regression model and Accuracy for classification model.
Prints the rows of the dataframe with target and prediction columns.
Parameter
Description
rows
Optional Integer. Number of rows to print.
dataframe
Unfreezes the earlier layers of the model for fine-tuning.
Creates a machine learning model based on its implementation from scikit-learn, xgboost, lightgbm, or catboost. For supervised learning, refer to the scikit-learn, xgboost, lightgbm, and catboost documentation.
For unsupervised learning (1. Clustering Models, 2. Gaussian Mixture Models, 3. Novelty and outlier detection), refer to https://scikit-learn.org/stable/unsupervised_learning.html
MLModel
Object
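A minimal sketch of wrapping a scikit-learn estimator in an MLModel; the estimator string, its keyword arguments, and the prepared data are assumptions for illustration.
# Hypothetical sketch; the estimator choice and its parameters are illustrative.
>>> from arcgis.learn import prepare_tabulardata, MLModel
>>> data = prepare_tabulardata(input_features=parcels_layer,
...                            variable_predict="sale_price",
...                            explanatory_variables=["sqft", "age"])
>>> model = MLModel(data, 'sklearn.ensemble.RandomForestRegressor',
...                 n_estimators=100, random_state=42)
>>> model.fit()
>>> model.score()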
output from scikit-learn's model.decision_function()
Shows sample fairness score and plots for the model.
dataframe
the global feature importance summary plot from SHAP. Most of the sklearn models are supported by this method.
Creates a MLModel
Object from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Esri Model Definition file.
data
Required TabularDataObject or None. Returned data object from prepare_tabulardata
function or None for inferencing.
MLModel
Object
output from scikit-learn's model.kneighbors()
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Esri Model Definition(EMD) file.
output from scikit-learn's model.mahalanobis()
Predict on data from feature layer, dataframe and or raster data.
Parameter
Description
input_features
Optional FeatureLayer
or spatial dataframe. Required if prediction_type='features'. Contains features with location and some or all fields required to infer the dependent variable value.
explanatory_rasters
Optional list. Required if prediction_type=ârasterâ. Contains a list of raster objects containing some or all fields required to infer the dependent variable value.
datefield
Optional string. Field name from the feature layer that contains the date/time for the input features. Same as prepare_tabulardata()
.
distance_features
Optional List of FeatureLayer
objects. These layers are used for calculation of the fields 'NEAR_DIST_1', 'NEAR_DIST_2', etc. in the output dataframe. These fields contain the nearest feature distance from the input_features. Same as prepare_tabulardata()
.
output_layer_name
Optional string. Used for publishing the output layer.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
prediction_type
Optional String. Set to 'features' or 'dataframe' to make output feature layer predictions; the input_features argument is required in this case.
Set to 'raster' to make a prediction raster; explanatory_rasters must be specified in this case.
output_raster_path
Optional path. Required when prediction_type='raster'; saves the output raster to this path.
match_field_names
Optional dictionary. Specify the mapping of field names from the prediction set to the training set. For example:
{
    'Field_Name_1': 'Field_1',
    'Field_Name_2': 'Field_2'
}
explain
Optional Bool. Setting this parameter to true generates a prediction explanation plot. The plot is generated using the model interpretability library SHAP (https://github.com/slundberg/shap).
explain_index
Optional Int. The index of the dataframe passed to the predict function for which model interpretability is desired. If the parameter is not passed and if the explain parameter is set to true, the SHAP plot will be generated for a random index of the dataframe.
FeatureLayer
if prediction_type='features', dataframe for prediction_type='dataframe', else creates an output raster.
output from scikit-learn's model.predict_proba()
Saves the model and creates an Esri Model Definition. Uses pickle to save the model, with protocol level 2, which is backward compatible.
dataframe
output from scikit-learn's model.score(): R2 score in case of regression and Accuracy in case of classification.
For KMeans, returns the opposite of the value of X on the K-means objective.
Shows sample results for the model.
dataframe
Creates a TimeSeriesModel
Object, based on Fast.ai's https://github.com/timeseriesAI/timeseriesAI
Parameter
Description
data
Required TabularDataObject. Returned data object from prepare_tabulardata
function.
seq_len
Required Integer. Sequence Length for the series. In case of raster only, seq_len = number of rasters, any other passed value will be ignored.
model_arch
Optional string. Model architecture. Allowed values: 'InceptionTime', 'ResCNN', 'Resnet', 'FCN', 'TimeSeriesTransformer', 'LSTM'. 'LSTM' supports both 'LSTM' and 'Bi-LSTM'; 'Bi-LSTM' is enabled by passing bidirectional=True in kwargs.
location_var
Optional string. Location variable in case of NetCDF dataset.
multistep
Optional boolean. Sets the model to generate more than one time-step as output in the multivariate scenario. Compared to the current auto-regressive fashion, it will generate multi-step output in a single pass. This option is only applicable in the multivariate scenario; the univariate implementation will ignore this flag. Default value is False
**kwargs
Optional kwargs.
TimeSeriesModel
Object
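A minimal sketch of creating and training a TimeSeriesModel from prepared tabular data; the layer, variable name, sequence length, and epoch count are assumptions.
# Hypothetical sketch; the layer, field name and hyperparameters are assumptions.
>>> from arcgis.learn import prepare_tabulardata, TimeSeriesModel
>>> data = prepare_tabulardata(input_features=gauge_layer,
...                            variable_predict="water_level")
>>> tsmodel = TimeSeriesModel(data, seq_len=24, model_arch='InceptionTime')
>>> tsmodel.fit(epochs=30, lr=None)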
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Train the model for the specified number of epochs, using the specified learning rates.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select the 1cycle learning rate schedule. If set to False, no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the monitored value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save a checkpoint during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is 'False'.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. This feature is experimental. The default value is 'False'.
Creates a TimeSeriesModel
Object from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_tabulardata
function or None for inferencing.
TimeSeriesModel
Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch based models are supported. The default value is 'False'.
Plot validation and training losses after fitting the model.
Predict on data from feature layer and or raster data.
Parameter
Description
input_features
Optional FeatureLayer
or spatially enabled dataframe. Contains features with the location of the input data. Required if prediction_type is 'features' or 'dataframe'.
explanatory_rasters
Optional list of Raster Objects. Required if prediction_type is 'raster'.
datefield
Optional field_name. This field contains the date in the input_features. The field type can be a string or date time field. If specified, the field will be split into Year, month, week, day, dayofweek, dayofyear, is_month_end, is_month_start, is_quarter_end, is_quarter_start, is_year_end, is_year_start, hour, minute, second, elapsed and these will be added to the prepared data as columns. All fields other than elapsed and dayofyear are treated as categorical.
distance_features
Optional List of FeatureLayer
objects. These layers are used for calculation of the fields 'NEAR_DIST_1', 'NEAR_DIST_2', etc. in the output dataframe. These fields contain the nearest feature distance from the input_features. Same as prepare_tabulardata()
.
output_layer_name
Optional string. Used for publishing the output layer.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
prediction_type
Optional String. Set 'features' or 'dataframe' to make output predictions.
output_raster_path
Optional path. Required when prediction_type='raster'; saves the output raster to this path.
match_field_names
Optional string. Specify the mapping of the original training set with the prediction set.
number_of_predictions
Optional int for univariate time series. Specify the number of predictions to make; this adds new rows to the dataframe. For multivariate series, or if None, it expects the dataframe to have empty rows. If multistep was set to True during training, empty rows are not needed. If multistep was set to False, the dataframe needs rows with NA values in the variable to predict and non-NA values in the explanatory_variables. For prediction_type='raster', a new raster is created.
FeatureLayer
/dataframe if prediction_type='features'/'dataframe', else returns True and saves the output
raster at the specified path.
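A minimal sketch of forecasting with the predict method described above, continuing the univariate example; the number of predictions is arbitrary.
# Hypothetical sketch; continues the univariate example above.
>>> forecast = tsmodel.predict(input_features=gauge_layer,
...                            prediction_type='dataframe',
...                            number_of_predictions=12)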
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Folder path to save the model.
framework
Optional string. Defines the framework of the model (only supported by SingleShotDetector, currently). If the framework used is TF-ONNX, batch_size can be passed as an optional keyword argument. Framework choice: 'PyTorch' and 'TF-ONNX'.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
kwargs
Optional parameters: Boolean overwrite; if True, it will overwrite the item on ArcGIS Online/Enterprise. Default is False.
R2 score for regression model and Accuracy for classification model.
Prints the graph with predictions.
Experimental support for multivariate timeseries.
Parameter
Description
rows
Optional Integer. Number of rows to print.
Unfreezes the earlier layers of the model for fine-tuning.
Creates a Pixel-Set encoder + Temporal Attention Encoder sequence classifier.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data function.
pretrained_path
Optional string. Path where pre-trained model is saved.
Keyword Arguments
Parameter
Description
mlp1
Optional list. Dimensions of the successive feature spaces of MLP1. Default set to [32, 64]
pooling
Optional string. Pixel-embedding pooling strategy, can be chosen in ('mean', 'std', 'max', 'min'). Default set to 'mean'
mlp2
Optional list. Dimensions of the successive feature spaces of MLP2. Default set to [128, 128]
n_head
Optional integer. Number of attention heads. Default set to 4
d_k
Optional integer. Dimension of the key and query vectors. Default set to 32
dropout
Optional float. Dropout. Default set to 0.2
T
Optional integer. Period to use for the positional encoding. Default set to 1000
mlp4
Optional list. Dimensions of the decoder MLP. Default set to [64, 32]
PSETAE Object
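A minimal sketch of creating a PSETAE model; the prepare_data path and arguments are assumptions for illustration.
# Hypothetical sketch; the data path and prepare_data arguments are assumptions.
>>> from arcgis.learn import prepare_data, PSETAE
>>> data = prepare_data(r'C:\data\crop_timeseries', batch_size=4)
>>> model = PSETAE(data, n_head=4, d_k=32)
>>> model.fit(epochs=20)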
Computes overall accuracy (OA) on validation set.
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes mean intersection over union (mIOU) and overall accuracy (OA) on validation set.
Train the model for the specified number of epochs, using the specified learning rates.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select the 1cycle learning rate schedule. If set to False, no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the monitored value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save a checkpoint during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is 'False'.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. This feature is experimental. The default value is 'False'.
Creates a PSETAE object from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.
PSETAE Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch based models are supported. The default value is 'False'.
Computes mean intersection over union (mIOU) on validation set.
Computes IoU, Precision, Recall, F1-score for all classes.
Plot validation and training losses after fitting the model.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It is stored at the pre-defined location. If a path is passed, it is stored at the specified path with the model name as the directory name, and all intermediate directories are created.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector (tensorflow backend only) and RetinaNet (tensorflow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of a trained model on a part of the validation set.
kwargs
Supported dataset types for this model.
Unfreezes the earlier layers of the model for fine-tuning.
Creates ClimaX model object: a foundational model for weather and climate forecasting tasks.
Parameter
Description
data
Required fastai Databunch. Returned data object from prepare_data function.
backbone
Optional string. Pretrained foundational model to use as the backbone. Compatible backbones: '5.625deg', '1.40625deg'. Default set to '5.625deg'.
pretrained_path
Optional string. Path where pre-trained model is saved.
Keyword Arguments
Parameter
Description
patch_size
Optional int. Patch size for generating patch embeddings. Default: 4
embed_dim
Optional int. Dimension of embeddings. Default: 1024
depth
Optional int. Depth of model. Default: 8
num_heads
Optional int. Number of attention heads. Default: 16
mlp_ratio
Optional float. Ratio of MLP. Default: 4.0
decoder_depth
Optional int. Depth of decoder. Default: 2
drop_path
Optional float. Stochastic depth; randomly drops entire layers. Default: 0.1
drop_rate
Optional float. Randomly drops neurons. Default: 0.1
parallel_patch_embed
Optional bool. Parallel embedding of patches. Default: True
ClimaX Object
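A minimal sketch of creating a ClimaX model; the prepare_data path and arguments are assumptions for illustration.
# Hypothetical sketch; the data path and prepare_data arguments are assumptions.
>>> from arcgis.learn import prepare_data, ClimaX
>>> data = prepare_data(r'C:\data\weather_cubes', batch_size=8)
>>> model = ClimaX(data, backbone='5.625deg')
>>> model.fit(epochs=10)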
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Computes latitude weighted root mean squared error on validation set.
Train the model for the specified number of epochs, using the specified learning rates.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select the 1cycle learning rate schedule. If set to False, no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the monitored value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save a checkpoint during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.
The default value is 'False'.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. This feature is experimental. The default value is 'False'.
Creates a ClimaX object from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
data
Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.
ClimaX Object
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch based models are supported. The default value is 'False'.
Plot validation and training losses after fitting the model.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. It is stored at the pre-defined location. If a path is passed, it is stored at the specified path with the model name as the directory name, and all intermediate directories are created.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector (tensorflow backend only) and RetinaNet (tensorflow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS
Object. Used for publishing the item. If not specified then active gis user is taken.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
Displays the results of a trained model on a part of the validation set.
Parameter
Description
rows
Optional int. Number of rows of results to be displayed.
total_sample_size
Optional int. Number of rows of results to be displayed.
variable_no
Optional int. Variable count to be displayed.
Supported dataset types for this model.
Unfreezes the earlier layers of the model for fine-tuning.
This function can be used to generate a feature service that contains polygons on detected objects found in the imagery data using the designated deep learning model. Note that the deep learning library needs to be installed separately, in addition to the server's built-in Python 3.x library. A usage sketch follows the parameter table.
Note
This function is supported with ArcGIS Enterprise (Image Server) and ArcGIS Image for ArcGIS Online.
Parameter
Description
input_raster
Required. Raster layer that contains the objects that need to be detected.
model
Required Model
object.
model_arguments
Optional dictionary. Name-value pairs of arguments and their values that can be customized by the clients.
eg: {'name1': 'value1', 'name2': 'value2'}
output_name
Optional. If not provided, a FeatureLayer
is created by the method and used as the output. You can pass in an existing Feature Service Item from your GIS to use that instead. Alternatively, you can pass in the name of the output Feature Service that should be created by this method to be used as the output for the tool. A RuntimeError is raised if a service by that name already exists.
run_nms
Optional bool. Default value is False. If set to True, runs the Non Maximum Suppression tool.
confidence_score_field
Optional string. The field in the feature class that contains the confidence scores as output by the object detection method. This parameter is required when you set the run_nms to True
class_value_field
Optional string. The class value field in the input feature class. If not specified, the function will use the standard class value fields Classvalue and Value. If these fields do not exist, all features will be treated as the same object class. Set only if run_nms is set to True
max_overlap_ratio
Optional integer. The maximum overlap ratio for two overlapping features. Defined as the ratio of intersection area over union area. Set only if run_nms is set to True
context
Optional dictionary. Context contains additional settings that affect task execution. The dictionary can contain values for the following keys:
cellSize - Set the output raster cell size, or resolution
extent - Sets the processing extent used by the function
parallelProcessingFactor - Sets the parallel processing factor. Default is '80%'
mask - Only cells that fall within the analysis mask will be considered in the operation.
Eg: {'mask': {'url': '<feature_service_url>'}}
processorType - Sets the processor type. 'CPU' or 'GPU'
Eg: {'processorType': 'CPU'}
Setting the context parameter will override the values set using the arcgis.env variable for this particular function.
process_all_raster_items
Optional bool. Specifies how all raster items in an image service will be processed.
False : all raster items in the image service will be mosaicked together and processed. This is the default.
True : all raster items in the image service will be processed as separate images.
gis
Optional GIS
. The GIS on which this tool runs. If not specified, the active GIS is used.
future
Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.
estimate
Keyword only parameter. Optional Boolean. If True, the number of credits needed to run the operation will be returned as a float. Available only on ArcGIS Online.
The output feature layer item containing the detected objects
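A minimal sketch of calling detect_objects; the portal searches and the layer/model names are assumptions for illustration.
# Hypothetical sketch; the item searches and names are assumptions.
>>> from arcgis.learn import detect_objects, Model
>>> input_raster = gis.content.search("aerial_imagery", item_type="Imagery Layer")[0].layers[0]
>>> model_item = gis.content.search("tree_detection_model", item_type="Deep Learning Package")[0]
>>> detected = detect_objects(input_raster=input_raster,
...                           model=Model(model_item),
...                           output_name="detected_trees",
...                           run_nms=True,
...                           confidence_score_field="Confidence",
...                           gis=gis)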
This function can be used to output a feature service with an assigned class label for each feature, based on information from overlapped imagery data, using the designated deep learning model. A usage sketch follows the parameter table.
Note
This function is supported with ArcGIS Enterprise (Image Server) and ArcGIS Image for ArcGIS Online.
Parameter
Description
input_raster
Required. Raster layer that contains the objects that need to be classified.
model
Required Model
object.
model_arguments
Optional dictionary. Name-value pairs of arguments and their values that can be customized by the clients.
eg: {'name1': 'value1', 'name2': 'value2'}
input_features
Optional FeatureLayer
. The point, line, or polygon input feature layer that identifies the location of each object to be classified and labelled. Each row in the input feature layer represents a single object.
If no input feature layer is specified, the function assumes that each input image contains a single object to be classified. If the input image or images use a spatial reference, the output from the function is a feature layer, where the extent of each image is used as the bounding geometry for each labelled feature layer. If the input image or images are not spatially referenced, the output from the function is a table containing the image ID values and the class labels for each image.
class_label_field
Optional str. The name of the field that will contain the classification label in the output feature layer.
If no field name is specified, a new field called ClassLabel will be generated in the output feature layer.
'ClassLabel'
process_all_raster_items
Optional bool.
If set to False, all raster items in the image service will be mosaicked together and processed. This is the default.
If set to True, all raster items in the image service will be processed as separate images.
output_name
Optional. If not provided, a FeatureLayer
is created by the method and used as the output. You can pass in an existing Feature Service Item from your GIS to use that instead. Alternatively, you can pass in the name of the output Feature Service that should be created by this method to be used as the output for the tool. A RuntimeError is raised if a service by that name already exists.
context
Optional dictionary. Context contains additional settings that affect task execution. The dictionary can contain values for the following keys:
cellSize - Set the output raster cell size, or resolution
extent - Sets the processing extent used by the function
parallelProcessingFactor - Sets the parallel processing factor. Default is '80%'
processorType - Sets the processor type. 'CPU' or 'GPU'
Eg: {'processorType': 'CPU'}
Setting the context parameter will override the values set using the arcgis.env variable for this particular function.
gis
Optional GIS
. The GIS on which this tool runs. If not specified, the active GIS is used.
estimate
Keyword only parameter. Optional Boolean. If True, the number of credits needed to run the operation will be returned as a float. Available only on ArcGIS Online
The output feature layer item containing the classified objects
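A minimal sketch of calling classify_objects; the input layers, model item, and field names are assumptions for illustration.
# Hypothetical sketch; the layers, model and field names are assumptions.
>>> from arcgis.learn import classify_objects, Model
>>> classified = classify_objects(input_raster=input_raster,
...                               model=Model(model_item),
...                               input_features=building_footprints,
...                               class_label_field="DamageClass",
...                               output_name="classified_buildings",
...                               gis=gis)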
Function to classify input imagery data using a deep learning model. Note that the deep learning library needs to be installed separately, in addition to the server's built-in Python 3.x library. A usage sketch follows the parameter table.
Note
This function is supported with ArcGIS Enterprise (Image Server) and ArcGIS Image for ArcGIS Online.
Parameter
Description
input_raster
Required. Raster layer that needs to be classified.
model
Required Model
object.
model_arguments
Optional dictionary. Name-value pairs of arguments and their values that can be customized by the clients.
eg: {'name1': 'value1', 'name2': 'value2'}
output_name
Optional. If not provided, an imagery layer is created by the method and used as the output. You can pass in an existing Image Service Item from your GIS to use that instead. Alternatively, you can pass in the name of the output Image Service that should be created by this method to be used as the output for the tool. A RuntimeError is raised if a service by that name already exists.
context
Optional dictionary. Context contains additional settings that affect task execution. The dictionary can contain values for the following keys:
outSR - (Output Spatial Reference) Saves the result in the specified spatial reference
snapRaster - The function will adjust the extent of output rasters so that they match the cell alignment of the specified snap raster.
cellSize - Set the output raster cell size, or resolution
extent - Sets the processing extent used by the function
parallelProcessingFactor - Sets the parallel processing factor. Default is '80%'
processorType - Sets the processor type. 'CPU' or 'GPU'
Eg: {'outSR': {spatial reference}}
Setting the context parameter will override the values set using the arcgis.env variable for this particular function.
process_all_raster_items
Optional bool. Specifies how all raster items in an image service will be processed.
False : all raster items in the image service will be mosaicked together and processed. This is the default.
True : all raster items in the image service will be processed as separate images.
gis
Optional GIS
. The GIS on which this tool runs. If not specified, the active GIS is used.
future
Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.
estimate
Keyword only parameter. Optional Boolean. If True, the number of credits needed to run the operation will be returned as a float. Available only on ArcGIS Online.
tiles_only
Keyword only parameter. Optional boolean. In ArcGIS Online, the default output image service for this function would be a Tiled Imagery Layer. To create Dynamic Imagery Layer as output in ArcGIS Online, set tiles_only parameter to False.
Function will not honor tiles_only parameter in ArcGIS Enterprise and will generate Dynamic Imagery Layer by default.
The classified imagery layer item
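A minimal sketch of calling classify_pixels; the input raster, model item, and context values are assumptions for illustration.
# Hypothetical sketch; the inputs and context values are assumptions.
>>> from arcgis.learn import classify_pixels, Model
>>> land_cover = classify_pixels(input_raster=input_raster,
...                              model=Model(model_item),
...                              output_name="land_cover_classification",
...                              context={"processorType": "GPU"},
...                              gis=gis)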
Function can be used to calculate the accuracy of a deep learning model by comparing the detected objects from the detect_objects function to ground truth data. Function available in ArcGIS Image Server 10.9 and higher (not available in ArcGIS Online).
Parameter
Description
detected_features
Required. The input polygon feature layer containing the objects detected from the detect_objects function.
ground_truth_features
Required. The polygon feature layer containing ground truth data.
detected_class_value_field
Optional string. The field in the detected objects feature class that contains the class names or class values.
If a field name is not specified, a Classvalue or Value field will be used. If these fields do not exist, all records will be identified as belonging to one class.
The class values or class names must match those in the ground truth feature class exactly.
Syntax: A string describing the detected class value field.
Example: 'class'
ground_truth_class_value_field
The field in the ground truth feature class that contains the class names or class values.
If a field name is not specified, a Classvalue or Value field will be used. If these fields do not exist, all records will be identified as belonging to one class.
The class values or class names must match those in the detected objects feature class exactly.
Example: 'class'
min_iou
The Intersection over Union (IoU) ratio to use as a threshold to evaluate the accuracy of the object-detection model. The numerator is the area of overlap between the predicted bounding box and the ground truth bounding box. The denominator is the area of union or the area encompassed by both bounding boxes.
The min_iou value should be in the range 0 to 1, i.e. [0, 1]. Example:
0.5
mask_features
Optional FeatureLayer
. A polygon feature service layer that delineates the area where accuracy will be computed. Only the image area that falls completely within the polygons will be assessed for accuracy.
out_accuracy_table_name
Optional. Name of the output accuracy table item to be created. If not provided, a random name is generated by the method and used as the output name.
out_accuracy_report_name
Optional. The accuracy report can either be added as an item to the portal or written to a datastore. To add it as an item, specify the name of the output report item (PDF item) to be created. Example:
'accuracyReport'
In order to write the accuracy report to a datastore, specify the datastore path as the value of the uri key. Example:
'/fileShares/yourFileShareFolderName/accuracyReport'
context
Optional dictionary. Context contains additional settings that affect task execution. The dictionary can contain values for the following keys:
cellSize - Set the output raster cell size, or resolution
extent - Sets the processing extent used by the function
parallelProcessingFactor - Sets the parallel processing factor. Default is '80%'
processorType - Sets the processor type. 'CPU' or 'GPU'
Eg: {'processorType': 'CPU'}
Setting the context parameter will override the values set using the arcgis.env variable for this particular function.
gis
Optional GIS
. The GIS on which this tool runs. If not specified, the active GIS is used.
estimate
Keyword only parameter. Optional Boolean. If True, the number of credits needed to run the operation will be returned as a float. Available only on ArcGIS Online
The output accuracy table item or/and accuracy report item (or datastore path to accuracy report)
# Usage Example: This example generates an accuracy table for a specified minimum IoU value.
compute_accuracy_op = compute_accuracy_for_object_detection(detected_features=detected_features,
                                                            ground_truth_features=ground_truth_features,
                                                            detected_class_value_field="ClassValue",
                                                            ground_truth_class_value_field="Class",
                                                            min_iou=0.5,
                                                            mask_features=None,
                                                            out_accuracy_table_name="accuracy_table",
                                                            out_accuracy_report_name="accuracy_report",
                                                            gis=gis)
Runs a trained deep learning model to detect change between two rasters. Function available in ArcGIS Image Server 11.1 and higher.
Argument
Description
from_raster
Required ImageryLayer object. The previous raster to use for change detection.
to_raster
Required ImageryLayer object. The recent raster to use for change detection.
model
Required. The deep learning model to be used for the change detection. It can be passed as a dlpk portal item, datastore path to the Esri Model Definition (EMD) file or the EMD JSON string.
output_classified_raster
Optional String. If not provided, an Image Service is created by the method and used as the output raster. You can pass in an existing Image Service Item from your GIS to use that instead.
Alternatively, you can pass in the name of the output Image Service that should be created by this method to be used as the output for the tool.
A RuntimeError is raised if a service by that name already exists.
model_arguments
Optional dictionary. Name-value pairs of arguments and their values that can be customized by the clients.
eg: {'name1': 'value1', 'name2': 'value2'}
context
Context contains additional settings that affect task execution.
context parameter overwrites values set through arcgis.env parameter
This function has the following settings:
Cell size (cellSize) - Set the output raster cell size, or resolution
Output Spatial Reference (outSR) - The output raster will be projected into the output spatial reference.
Example: {'outSR': {spatial reference}}
Extent (extent) - A bounding box that defines the analysis area.
Example: {'extent': {'xmin': -122.68, 'ymin': 45.53, 'xmax': -122.45, 'ymax': 45.6, 'spatialReference': {'wkid': 4326}}}
Parallel Processing Factor (parallelProcessingFactor) - Controls the number of Raster Processing (CPU) service instances.
Example with a specified number of processing instances: {'parallelProcessingFactor': '2'}
Example with a specified percentage of total processing instances: {'parallelProcessingFactor': '60%'}
gis
Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.
future
Keyword only parameter. Optional Boolean. If True, the result will be a GPJob object and results will be returned asynchronously.
estimate
Keyword only parameter. Optional Boolean. If True, the number of credits needed to run the operation will be returned as a float. Available only on ArcGIS Online
folder
Keyword only parameter. Optional str or dict. Creates a folder in the portal, if it does not exist, with the given folder name and persists the output in this folder. The dictionary returned by the gis.content.create_folder() can also be passed in as input.
{'username': 'user1', 'id': '6a3b77c187514ef7873ba73338cf1af8', 'title': 'trial'}
The output imagery layer item
# Usage Example 1:
from_raster = gis.content.search("from_raster", item_type="Imagery Layer")[0].layers[0]
to_raster = gis.content.search("to_raster", item_type="Imagery Layer")[0].layers[0]
change_detection_model = gis.content.search("my_detection_model")[0]
detect_change_op = detect_change_using_deep_learning(from_raster=from_raster,
                                                     to_raster=to_raster,
                                                     model=change_detection_model,
                                                     gis=gis)
Creates an Embeddings
Object. This object is capable of giving embeddings for text as well as images. The image embeddings are currently supported for RGB images only.
Parameter
Description
dataset_type
Required string. The type of data for which we would like to get the embedding vectors. Valid values are text & image. Default is set to image.
Note
The image embeddings are currently supported for RGB images only.
backbone
Optional string. Specify the backbone/model-name to be used to get the embedding vectors. Default backbone for image dataset-type is resnet34 and for text dataset-type is sentence-transformers/distilbert-base-nli-stsb-mean-tokens
To learn more about the available models for getting text embeddings, visit https://huggingface.co/sentence-transformers
kwargs
Parameter
Description
working_dir
Optional str. Path to a directory on the local filesystem. If the directory is not present, it will be created. This directory is used as the location to save the model.
Embeddings
Object
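A minimal sketch of creating an Embeddings object as described above; the backbone choice follows this reference, and the import location is otherwise an assumption.
# Hypothetical sketch.
>>> from arcgis.learn import Embeddings
>>> emb = Embeddings(dataset_type='image', backbone='resnet34')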
Method to get the embedding vectors for the image/text items.
Parameter
Description
text_or_list
Required string or List. String containing directory path or list of directory paths where image/text files are present for which the user wants to get the embedding vectors.
batch_size
Optional integer. The number of items to process in one batch. Default is set to 32.
show_progress
Optional boolean. If set to True, will display a progress bar depicting the items processed so far. Default is set to True.
return_embeddings
Optional boolean. If set to True, a dataframe containing the embeddings will be returned. If set to False, they will be saved in a h5 file. Default is set to False.
kwargs
Parameter
Description
normalize
Optional boolean. If set to true, will normalize the image with imagenet-stats (mean and std-deviation for each color channel in RGB image). This argument is valid only for dataset-type image. Default is set to True.
file_extensions
Optional String or List. The file extension(s) for which the user wishes to get embedding vectors. Allowed values for dataset-type image are ['png', 'jpg', 'jpeg', 'tiff', 'tif', 'bmp']. Allowed values for dataset-type text are ['csv', 'txt', 'json'].
Note
For json files, if we have nested json structures, then text will be extracted only from the 1st level.
chip_size
Optional integer. Resize the image to chip_size X chip_size pixels. This argument is valid only for dataset-type image. Default is set to 224
encoding
Optional string. The encoding to read the text/csv/ json file. Applicable only for dataset-type text. Default is UTF-8
text_column
Optional string. The column that will be used to get the text content from csv or json file types. This argument is valid only for dataset-type text. Default is set to text
remove_urls
Optional boolean. If true, remove urls from text. This argument is valid only for dataset-type text. Default value is False.
remove_html_tags
Optional boolean. If true, remove html tags from text. This argument is valid only for dataset-type text. Default value is False.
pooling_strategy
Optional string. The transformer model gives embeddings for each word/token present in the text. This is the type of pooling to be done on those word/token vectors in order to form the text embeddings. Allowed values are ['mean', 'max', 'first']. This argument is valid only for dataset-type text. Default value is mean.
The path of the H5 file where items & corresponding embeddings are saved.
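A minimal sketch of the get call described above; the folder path is an assumption.
# Hypothetical sketch; the folder path is an assumption.
>>> h5_path = emb.get(r'C:\data\images',
...                   batch_size=32,
...                   chip_size=224,
...                   show_progress=True)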
Load the extracted embeddings from the H5 file
Parameter
Description
file_path
Required string. The path to the H5 file which gets auto generated after the call to the get method of the Embeddings
class
load_to_memory
Optional Bool. Whether or not to load the entire content of the H5 file into memory. Loading very large H5 files into memory takes up a lot of RAM space; use this parameter with caution for large H5 files. Default is set to True.
When the load_to_memory param is True: a 2-item tuple containing the numpy arrays of extracted embeddings and items. When the load_to_memory param is False: a 3-item tuple containing the H5 file handler and 2 H5 dataset objects of extracted embeddings and items.
Get available backbones/model-name for the given dataset-type
Parameter
Description
dataset_type
Required string. The type of data for which we would like to get the embedding vectors. Valid values are text & image. Default is set to image
a list containing the available models for the given dataset-type
Method to visualize the embedding vectors for the image/text items. This method uses the K-Means clustering algorithm to partition the embedding vectors into n clusters. This requires loading the entire content of the H5 file into RAM; loading very large H5 files takes up a lot of RAM space. Use this method with caution for large H5 files.
Parameter
Description
file_path
Required string. The path to the H5 file which gets auto generated after the call to the get method of the Embeddings
class.
visualize_with_items
Optional Bool. Whether or not to visualize the embeddings with items. Default is set to True.
n_clusters
Optional integer. The number of clusters to create for the embedding vectors. This value will be passed to the KMeans algorithm to generate the clusters. Default is set to 5.
dimensions
Optional integer. The number of dimensions to project the embedding vectors into for visualization purposes. Allowed values are 2 and 3. Default is set to 3.
Function is used to initialize Model object from model definition JSON
# Usage example
>>> model = Model()
>>> model.from_json({"Framework": "TensorFlow",
...                  "ModelConfiguration": "DeepLab",
...                  "InferenceFunction": "[functions]System\DeepLearning\ImageClassifier.py",
...                  "ModelFile": "\\folder_path_of_pb_file\frozen_inference_graph.pb",
...                  "ExtractBands": [0, 1, 2],
...                  "ImageWidth": 513,
...                  "ImageHeight": 513,
...                  "Classes": [{"Value": 0, "Name": "Evergreen Forest", "Color": [0, 51, 0]},
...                              {"Value": 1, "Name": "Grassland/Herbaceous", "Color": [241, 185, 137]},
...                              {"Value": 2, "Name": "Bare Land", "Color": [236, 236, 0]},
...                              {"Value": 3, "Name": "Open Water", "Color": [0, 0, 117]},
...                              {"Value": 4, "Name": "Scrub/Shrub", "Color": [102, 102, 0]},
...                              {"Value": 5, "Name": "Impervious Surface", "Color": [236, 236, 236]}]})
Function is used to initialize Model object from url of model package or path of model definition file
# Usage Example #1:
>>> model = Model()
>>> model.from_model_path("https://xxxportal.esri.com/sharing/rest/content/items/<itemId>")
# Usage Example #2:
>>> model = Model()
>>> model.from_model_path("\\sharedstorage\sharefolder\findtrees.emd")
Function is used to install the uploaded model package (*.dlpk). After inferencing the necessary information using the model, the model can optionally be uninstalled with uninstall_model().
Parameter
Description
gis
Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.
future
Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.
Path where model is installed
Function is used to extract the deep learning model specific settings from the model package item or model definition file.
Parameter
Description
gis
Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.
future
Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.
The key model information in dictionary format that describes what the settings are essential for this type of deep learning model.
Function is used to uninstall the uploaded model package that was installed using install_model(). This function deletes the named deep learning model from the server but not the portal item.
Parameter
Description
gis
Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.
future
Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.
itemId of the uninstalled model package item
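A hedged sketch tying the install, settings-extraction and uninstall calls together on a Model object. The method names follow the wording above, and 'query_info' is an assumed name for the settings-extraction call; verify all of them against your API version.
# Hedged usage sketch (method names may differ in your API version)
>>> model = Model(dlpk_item)                     # 'dlpk_item' is a placeholder for the uploaded *.dlpk portal item
>>> install_path = model.install_model(gis=gis)  # installs the package on the server
>>> settings = model.query_info(gis=gis)         # assumed name; extracts the model-specific settings
>>> model.uninstall_model(gis=gis)               # removes the model from the server, not the portal item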
Creates a ModelExtension object to train the model for object detection, semantic segmentation, and edge detection.
Parameter
Description
data
Required fastai Databunch. Returned data object from the prepare_data() function.
model_conf
A class definition that contains the following methods (a skeleton sketch is shown after this parameter table):
get_model(self, data, backbone=None, **kwargs)
: for model definition,
on_batch_begin(self, learn, model_input_batch, model_target_batch, **kwargs)
: for feeding input to the model during training,
transform_input(self, xb)
: for feeding input to the model during inferencing/validation,
transform_input_multispectral(self, xb)
: for feeding input to the model during inferencing/validation in case of multispectral data,
loss(self, model_output, *model_target)
: to return loss value of the model
post_process(self, pred, nms_overlap, thres, chip_size, device)
: to post-process the output of the object-detection model.
post_process(self, pred, thres)
: to post-process the output of the segmentation model.
backbone
Optional function. The backbone to use if the custom model requires one.
pretrained_path
Optional string. Path where pre-trained model is saved.
ModelExtension object
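Below is a minimal skeleton of such a model_conf class, built only from the method signatures listed in the table above; the method bodies are placeholders to be replaced with your own model-specific logic, and the final line shows the assumed way the class is handed to ModelExtension.
# Skeleton of a custom model_conf class (signatures from the table above; bodies are placeholders)
class MyModelConf:

    def get_model(self, data, backbone=None, **kwargs):
        # build and return the custom model, e.g. a torch.nn.Module
        raise NotImplementedError

    def on_batch_begin(self, learn, model_input_batch, model_target_batch, **kwargs):
        # adapt the input and target batch before they are fed to the model during training
        return model_input_batch, model_target_batch

    def transform_input(self, xb):
        # prepare the input batch for inferencing/validation
        return xb

    def transform_input_multispectral(self, xb):
        # prepare multispectral input for inferencing/validation
        return xb

    def loss(self, model_output, *model_target):
        # compute and return the loss value of the model
        raise NotImplementedError

    def post_process(self, pred, nms_overlap, thres, chip_size, device):
        # post-process object-detection output
        # (segmentation models use the post_process(self, pred, thres) form instead)
        return pred

# Hedged usage: pass the class definition to ModelExtension along with the prepared data
model = ModelExtension(data, MyModelConf)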
List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.
Train the model for the specified number of epochs using the specified learning rates.
Parameter
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select the 1cycle learning rate schedule. If set to False, no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.
checkpoint
Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing is turned off. Setting this parameter loads the best model at the end of training.
tensorboard
Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1. The default value is False.
Note
Not applicable for Text Models
monitor
Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch-based models are supported. This feature is experimental. The default value is False.
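A short usage sketch of fit() with the parameters described above; the values are illustrative only.
# Usage example (values are illustrative)
>>> model.fit(epochs=10,
              lr=0.001,                 # or lr=None to deduce an optimal learning rate automatically
              one_cycle=True,           # use the 1cycle learning rate schedule
              early_stopping=True,      # stop when the monitored metric stops improving for 5 epochs
              checkpoint=True,          # save the best model based on 'monitor' during training
              monitor='valid_loss')     # any metric listed by {model_name}.available_metrics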
Creates a ModelExtension object from an Esri Model Definition (EMD) file.
Parameter
Description
emd_path
Required string. Path to the Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.
data
Required fastai Databunch or None. Returned data object from the prepare_data() function, or None for inferencing.
ModelExtension object
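A short usage sketch of from_emd(); keyword arguments are used here to avoid assuming the parameter order, and the file path is a placeholder.
# Usage example (keyword arguments avoid assuming parameter order)
>>> model = ModelExtension.from_emd(data=None,                     # None when loading only for inferencing
                                    emd_path="path/to/model.emd")  # path to the EMD or DLPK file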
Loads a compatible saved model for inferencing or fine tuning from the disk.
Parameter
Description
name_or_path
Required string. Name or path to the Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.
Keyword Arguments
Parameter
Description
strict
Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.
Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.
Parameter
Description
allow_plot
Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is True.
mixed_precision
Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch-based models are supported. The default value is False.
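A short usage sketch; it assumes lr_find() returns the suggested learning rate, which can then be passed to fit().
# Usage example (assumes the suggested learning rate is returned)
>>> suggested_lr = model.lr_find(allow_plot=True)   # plots losses vs. learning rates and marks the optimum
>>> model.fit(epochs=10, lr=suggested_lr)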
Plot validation and training losses after fitting the model.
Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Parameter
Description
name_or_path
Required string. Name of the model to save. The model is stored at the pre-defined location. If a path is passed, the model is stored at the specified path with the model name as the directory name, and all intermediate directories are created.
framework
Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', or 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector (TensorFlow backend only) and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. To use a SiamMask model in ArcGIS Pro 2.8 or later, load the model saved with the PyTorch framework and export it with the torchscript framework using ArcGIS API for Python v1.8.5 or later. To use a SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS object. Used for publishing the item. If not specified, the active GIS user is taken.
compute_metrics
Optional boolean. Used for computing model metrics.
save_optimizer
Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False.
save_inference_file
Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.
kwargs
Optional Parameters.
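A short usage sketch of save() with the parameters described above; the model name is a placeholder.
# Usage example (the model name is a placeholder)
>>> model.save('my_trained_model',
               framework='PyTorch',          # default framework; required for reloading with from_model
               publish=True,                 # publish the DLPK as a portal item
               gis=gis,                      # GIS used for publishing; active GIS if omitted
               save_optimizer=False,         # optionally save the optimizer state with the model
               save_inference_file=True)     # keep compatibility with ArcGIS Pro 2.6 and earlier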
Unfreezes the earlier layers of the model for fine-tuning.
Function is used to list all the installed deep learning models.
Note
This function is supported with ArcGIS Enterprise (Image Server)
Parameter
Description
gis
Optional GIS
. The GIS on which this tool runs. If not specified, the active GIS is used.
future
Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.
list of deep learning models installed
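A short usage sketch, assuming list_models is imported from arcgis.learn and that the active or supplied GIS points at an Image Server.
# Usage example
>>> from arcgis.learn import list_models
>>> installed_models = list_models(gis=gis)      # deep learning models installed on the Image Server
>>> for dl_model in installed_models:
        print(dl_model)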
Function can be used to train a deep learning model using the output from the export_training_data function. It generates the deep learning model package (*.dlpk) and adds it to your enterprise portal. The train_model function performs the training using the Raster Analytics server.
Note
This function is supported with ArcGIS Enterprise (Image Server)
Parameter
Description
input_folder
Required string or list. This is the input location for the training sample data. It can be the path of the output location on the file share raster data store or a shared file system path. The training sample data folder needs to be the output of the export_training_data function, containing 'images' and 'labels' folders, as well as the JSON model definition file written out together by the function.
/rasterStores/yourRasterStoreFolderName/trainingSampleData
/fileShares/yourFileShareFolderName/trainingSampleData
\\serverName\deepLearning\trainingSampleData
The function also supports multiple input folders. In this case, specify the list of input folders:
['/rasterStores/yourRasterStoreFolderName/trainingSampleDataA', '/rasterStores/yourRasterStoreFolderName/trainingSampleDataB']
['/fileShares/yourFileShareFolderName/trainingSampleDataA', '/fileShares/yourFileShareFolderName/trainingSampleDataB']
['\\serverName\deepLearning\trainingSampleDataA', '\\serverName\deepLearning\trainingSampleDataB']
Multiple input folders are supported when all the following conditions are met:
The metadata format must be one of the following types: Classified_Tiles, Labeled_Tiles, Multi-labeled Tiles, PASCAL_VOC_rectangles, or RCNN_Masks.
All training data must have the same metadata format.
All training data must have the same number of bands.
All training data must have the same tile size.
model_type
Required string. The model type to use for training the deep learning model. Possible values:
SSD - The Single Shot Detector (SSD) is used for object detection.
UNET - U-Net is used for pixel classification.
FEATURE_CLASSIFIER - The Feature Classifier is used for object classification.
PSPNET - The Pyramid Scene Parsing Network (PSPNET) is used for pixel classification.
RETINANET - The RetinaNet is used for object detection.
MASKRCNN - The MaskRCNN is used for object detection.
YOLOV3 - The YOLOv3 approach will be used to train the model. YOLOv3 is used for object detection.
DeepLabV3 - The DeepLabV3 approach will be used to train the model. DeepLab is used for pixel classification.
FASTERRCNN - The FasterRCNN approach will be used to train the model. FasterRCNN is used for object detection.
BDCN_EDGEDETECTOR - The Bi-Directional Cascade Network (BDCN) architecture will be used to train the model. The BDCN Edge Detector is used for pixel classification. This approach is useful to improve edge detection for objects at different scales.
HED_EDGEDETECTOR - The Holistically-Nested Edge Detection (HED) architecture will be used to train the model. The HED Edge Detector is used for pixel classification. This approach is useful in edge and object boundary detection.
MULTITASK_ROADEXTRACTOR - The Multi Task Road Extractor architecture will be used to train the model. The Multi Task Road Extractor is used for pixel classification. This approach is useful for road network extraction from satellite imagery.
CONNECTNET - The ConnectNet architecture will be used to train the model. ConnectNet is used for pixel classification. This approach is useful for road network extraction from satellite imagery.
PIX2PIX - The Pix2Pix approach will be used to train the model. Pix2Pix is used for image-to-image translation. This approach creates a model object that generates images of one type to another. The input training data for this model type uses the Export Tiles metadata format.
CYCLEGAN - The CycleGAN approach will be used to train the model. CycleGAN is used for image-to-image translation. This approach creates a model object that generates images of one type to another. This approach is unique in that the images to be trained do not need to overlap. The input training data for this model type uses the CycleGAN metadata format.
SUPERRESOLUTION - The Super-resolution approach will be used to train the model. Super-resolution is used for image-to-image translation. This approach creates a model object that increases the resolution and improves the quality of images. The input training data for this model type uses the Export Tiles metadata format.
CHANGEDETECTOR - The Change detector approach will be used to train the model. Change detector is used for pixel classification. This approach creates a model object that uses two spatial-temporal images to create a classified raster of the change. The input training data for this model type uses the Classified Tiles metadata format.
IMAGECAPTIONER - The Image captioner approach will be used to train the model. Image captioner is used for image-to-text translation. This approach creates a model that generates text captions for an image.
SIAMMASK - The Siam Mask approach will be used to train the model. Siam Mask is used for object detection in videos. The model is trained using frames of the video and detects the classes and bounding boxes of the objects in each frame. The input training data for this model type uses the MaskRCNN metadata format.
MMDETECTION - The MMDetection approach will be used to train the model. MMDetection is used for object detection. The supported metadata formats are PASCAL Visual Object Class rectangles and KITTI rectangles.
MMSEGMENTATION - The MMSegmentation approach will be used to train the model. MMSegmentation is used for pixel classification. The supported metadata format is Classified Tiles.
DEEPSORT - The Deep Sort approach will be used to train the model. Deep Sort is used for object detection in videos. The model is trained using frames of the video and detects the classes and bounding boxes of the objects in each frame. The input training data for this model type uses the Imagenet metadata format. Where Siam Mask is useful while tracking an object, Deep Sort is useful in training a model to track multiple objects.
PIX2PIXHD - The Pix2PixHD approach will be used to train the model. Pix2PixHD is used for image-to-image translation. This approach creates a model object that generates images of one type to another. The input training data for this model type uses the Export Tiles metadata format.
MAXDEEPLAB - The MaX-DeepLab approach will be used to train the model. It is used for panoptic segmentation.
model_arguments
Optional dictionary. Name-value pairs of arguments and their values that can be customized by the clients.
{'name1': 'value1', 'name2': 'value2'}
batch_size
Optional int. The number of training samples to be processed for training at one time. If the server has a powerful GPU, this number can be increased to 16, 36, 64, and so on.
4
max_epochs
Optional int. The maximum number of epochs that the model should be trained. One epoch means the whole training dataset will be passed forward and backward through the deep neural network once.
20
learning_rate
Optional float. The rate at which the weights are updated during the training. It is a small positive value in the range between 0.0 and 1.0. If learning rate is set to 0, it will extract the optimal learning rate from the learning curve during the training process.
0.0
backbone_model
Optional string. Specifies the preconfigured neural network to be used as an architecture for training the new model. Possible values: DENSENET121 , DENSENET161 , DENSENET169 , DENSENET201 , MOBILENET_V2 , RESNET18 , RESNET34 , RESNET50 , RESNET101 , RESNET152 , VGG11 , VGG11_BN , VGG13 , VGG13_BN , VGG16 , VGG16_BN , VGG19 , VGG19_BN , DARKNET53 , REID_V1 , REID_V2
RESNET34
validation_percent
Optional float. The percentage (in %) of training sample data that will be used for validating the model.
10
pretrained_model
Optional dlpk portal item.
The pretrained model to be used for fine tuning the new model. It is a deep learning model package (dlpk) portal item.
stop_training
Optional bool. Specifies whether early stopping will be implemented.
True - The model training will stop when the model is no longer improving, regardless of the maximum epochs specified. This is the default.
False - The model training will continue until the maximum epochs is reached.
freeze_model
Optional bool. Specifies whether to freeze the backbone layers in the pretrained model, so that the weights and biases in the backbone layers remain unchanged.
True - The predefined weights and biases will not be altered in the backboneModel. This is the default.
False - The weights and biases of the backboneModel may be altered to better fit your training samples. This may take more time to process, but usually produces better results.
overwrite_model
Optional bool. Overwrites an existing deep learning model package (.dlpk) portal item with the same name.
If the output_name parameter uses the file share data store path, this overwriteModel parameter is not applied.
True - The portal .dlpk item will be overwritten.
False - The portal .dlpk item will not be overwritten. This is the default.
output_name
Optional. The trained deep learning model package can either be added as an item to the portal or written to a datastore.
To add as an item, specify the name of the output deep learning model package (item) to be created.
'trainedModel'
To write the dlpk to a file share datastore, specify the datastore path.
'/fileShares/filesharename/folder'
context
Optional dictionary. Context contains additional settings that affect task execution. The dictionary can contain values for the following keys:
cellSize - Set the output raster cell size, or resolution
extent - Sets the processing extent used by the function
parallelProcessingFactor - Sets the parallel processing factor. Default is '80%'
processorType - Sets the processor type. 'CPU' or 'GPU'
{'processorType': 'CPU'}
Setting the context parameter will override the values set using the arcgis.env variable for this particular function.
gis
Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.
Returns the dlpk portal item that has properties for title, type, filename, file, id and folderId.
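A hedged end-to-end sketch of a train_model call using the parameters described above; the folder path, the model_arguments values and the output name are placeholders.
# Usage example (paths, argument values and names are placeholders)
>>> from arcgis.learn import train_model
>>> model_item = train_model(input_folder="/rasterStores/yourRasterStoreFolderName/trainingSampleData",
                             model_type="SSD",                       # see the model_type values listed above
                             model_arguments={"name1": "value1"},    # optional name-value pairs for the model
                             batch_size=4,
                             max_epochs=20,
                             learning_rate=0,                        # 0 derives an optimal rate automatically
                             backbone_model="RESNET34",
                             validation_percent=10,
                             stop_training=True,                     # early stopping
                             output_name="trainedModel",             # portal item name for the output dlpk
                             context={"processorType": "GPU"},
                             gis=gis)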
Provides helper methods to read and access AI Service Connection Files.
Parameter
Description
connection_file_path
Required String. Path to the AI Service Connection File.
AIServiceConnection object
Returns a dictionary representation of the object with all the connection properties.
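A hedged sketch of working with a connection file; the file path is a placeholder and the name of the dictionary-returning method is an assumption.
# Hedged usage sketch (the dictionary method name is an assumption)
>>> conn = AIServiceConnection("path/to/connection_file")   # placeholder path to the AI Service Connection File
>>> properties = conn.to_dict()                             # assumed name for the dictionary representation
>>> print(properties)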