torchvision.ops
torchvision.ops implements operators that are specific to Computer Vision.
Note
All operators have native support for TorchScript.
torchvision.ops.nms(boxes, scores, iou_threshold)[source]
Performs non-maximum suppression (NMS) on the boxes according to their intersection-over-union (IoU).
NMS iteratively removes lower scoring boxes which have an IoU greater than iou_threshold with another (higher scoring) box.
If multiple boxes have the exact same score and satisfy the IoU criterion with respect to a reference box, the selected box is not guaranteed to be the same between CPU and GPU. This is similar to the behavior of argsort in PyTorch when repeated values are present.
- Parameters
boxes (Tensor[N, 4]) – boxes to perform NMS on. They are expected to be in (x1, y1, x2, y2) format
scores (Tensor[N]) – scores for each one of the boxes
iou_threshold (float) – discards all overlapping boxes with IoU > iou_threshold
- Returns
keep – int64 tensor with the indices of the elements that have been kept by NMS, sorted in decreasing order of scores
- Return type
Tensor
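A minimal usage sketch (the box coordinates and scores below are arbitrary values chosen for illustration, not taken from the library documentation):
>>> import torch
>>> from torchvision.ops import nms
>>> # the first two boxes overlap heavily, the third is disjoint
>>> boxes = torch.tensor([[0., 0., 10., 10.], [1., 1., 11., 11.], [20., 20., 30., 30.]])
>>> scores = torch.tensor([0.9, 0.8, 0.7])
>>> keep = nms(boxes, scores, iou_threshold=0.5)
>>> print(keep)
>>> # returns (the lower-scoring overlapping box is suppressed)
>>> tensor([0, 2])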
torchvision.ops.roi_align(input, boxes, output_size, spatial_scale=1.0, sampling_ratio=-1, aligned=False)[source]
Performs Region of Interest (RoI) Align operator described in Mask R-CNN.
- Parameters
input (Tensor[N, C, H, W]) – input tensor
boxes (Tensor[K, 5] or List[Tensor[L, 4]]) – the box coordinates in (x1, y1, x2, y2) format where the regions will be taken from. If a single Tensor is passed, then the first column should contain the batch index. If a list of Tensors is passed, then each Tensor will correspond to the boxes for an element i in a batch
output_size (int or Tuple[int, int]) – the size of the output after the cropping is performed, as (height, width)
spatial_scale (float) – a scaling factor that maps the input coordinates to the box coordinates. Default: 1.0
sampling_ratio (int) – number of sampling points in the interpolation grid used to compute the output value of each pooled output bin. If > 0, then exactly sampling_ratio x sampling_ratio grid points are used. If <= 0, then an adaptive number of grid points are used (computed as ceil(roi_width / pooled_w), and likewise for height). Default: -1
aligned (bool) – If False, use the legacy implementation. If True, pixel-shift the box coordinates by -0.5 for a better alignment with the two neighboring pixel indices. This version is used in Detectron2. Default: False
- Returns
output (Tensor[K, C, output_size[0], output_size[1]])
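A minimal usage sketch (the shapes and box coordinates are illustrative assumptions; a single Tensor of boxes is passed, so each row is [batch_index, x1, y1, x2, y2]):
>>> import torch
>>> from torchvision.ops import roi_align
>>> x = torch.rand(1, 3, 32, 32)
>>> boxes = torch.tensor([[0., 4., 4., 20., 20.]])
>>> out = roi_align(x, boxes, output_size=(7, 7), spatial_scale=1.0, sampling_ratio=2)
>>> print(out.shape)
>>> # returns
>>> torch.Size([1, 3, 7, 7])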
torchvision.ops.ps_roi_align(input, boxes, output_size, spatial_scale=1.0, sampling_ratio=-1)[source]
Performs Position-Sensitive Region of Interest (RoI) Align operator mentioned in Light-Head R-CNN.
- Parameters
input (Tensor[N, C, H, W]) – input tensor
boxes (Tensor[K, 5] or List[Tensor[L, 4]]) – the box coordinates in (x1, y1, x2, y2) format where the regions will be taken from. If a single Tensor is passed, then the first column should contain the batch index. If a list of Tensors is passed, then each Tensor will correspond to the boxes for an element i in a batch
output_size (int or Tuple[int, int]) – the size of the output after the cropping is performed, as (height, width)
spatial_scale (float) – a scaling factor that maps the input coordinates to the box coordinates. Default: 1.0
sampling_ratio (int) – number of sampling points in the interpolation grid used to compute the output value of each pooled output bin. If > 0 then exactly sampling_ratio x sampling_ratio grid points are used. If <= 0, then an adaptive number of grid points are used (computed as ceil(roi_width / pooled_w), and likewise for height). Default: -1
- Returns
output (Tensor[K, C / (output_size[0] * output_size[1]), output_size[0], output_size[1]])
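A minimal usage sketch (illustrative values; since the op is position-sensitive, the number of input channels is assumed to be a multiple of output_size[0] * output_size[1]):
>>> import torch
>>> from torchvision.ops import ps_roi_align
>>> x = torch.rand(1, 4 * 2 * 2, 16, 16)  # 16 channels for a (2, 2) output
>>> boxes = torch.tensor([[0., 1., 1., 9., 9.]])  # [batch_index, x1, y1, x2, y2]
>>> out = ps_roi_align(x, boxes, output_size=(2, 2), spatial_scale=1.0, sampling_ratio=2)
>>> print(out.shape)
>>> # returns
>>> torch.Size([1, 4, 2, 2])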
torchvision.ops.roi_pool(input, boxes, output_size, spatial_scale=1.0)[source]
Performs Region of Interest (RoI) Pool operator described in Fast R-CNN.
- Parameters
input (Tensor[N, C, H, W]) – input tensor
boxes (Tensor[K, 5] or List[Tensor[L, 4]]) – the box coordinates in (x1, y1, x2, y2) format where the regions will be taken from. If a single Tensor is passed, then the first column should contain the batch index. If a list of Tensors is passed, then each Tensor will correspond to the boxes for an element i in a batch
output_size (int or Tuple[int, int]) – the size of the output after the cropping is performed, as (height, width)
spatial_scale (float) – a scaling factor that maps the input coordinates to the box coordinates. Default: 1.0
- Returns
output (Tensor[K, C, output_size[0], output_size[1]])
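A minimal usage sketch (illustrative shapes and box coordinates):
>>> import torch
>>> from torchvision.ops import roi_pool
>>> x = torch.rand(1, 3, 32, 32)
>>> boxes = torch.tensor([[0., 4., 4., 20., 20.]])  # [batch_index, x1, y1, x2, y2]
>>> out = roi_pool(x, boxes, output_size=(7, 7), spatial_scale=1.0)
>>> print(out.shape)
>>> # returns
>>> torch.Size([1, 3, 7, 7])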
torchvision.ops.ps_roi_pool(input, boxes, output_size, spatial_scale=1.0)[source]
Performs Position-Sensitive Region of Interest (RoI) Pool operator described in R-FCN.
- Parameters
input (Tensor[N, C, H, W]) – input tensor
boxes (Tensor[K, 5] or List[Tensor[L, 4]]) – the box coordinates in (x1, y1, x2, y2) format where the regions will be taken from. If a single Tensor is passed, then the first column should contain the batch index. If a list of Tensors is passed, then each Tensor will correspond to the boxes for an element i in a batch
output_size (int or Tuple[int, int]) – the size of the output after the cropping is performed, as (height, width)
spatial_scale (float) – a scaling factor that maps the input coordinates to the box coordinates. Default: 1.0
- Returns
output (Tensor[K, C / (output_size[0] * output_size[1]), output_size[0], output_size[1]])
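A minimal usage sketch (illustrative values; as with ps_roi_align, the number of input channels is assumed to be a multiple of output_size[0] * output_size[1]):
>>> import torch
>>> from torchvision.ops import ps_roi_pool
>>> x = torch.rand(1, 4 * 3 * 3, 16, 16)  # 36 channels for a (3, 3) output
>>> boxes = torch.tensor([[0., 1., 1., 9., 9.]])
>>> out = ps_roi_pool(x, boxes, output_size=(3, 3), spatial_scale=1.0)
>>> print(out.shape)
>>> # returns
>>> torch.Size([1, 4, 3, 3])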
torchvision.ops.deform_conv2d(input, offset, weight, bias=None, stride=(1, 1), padding=(0, 0), dilation=(1, 1))[source]
Performs Deformable Convolution, described in Deformable Convolutional Networks.
- Parameters
input (Tensor[batch_size, in_channels, in_height, in_width]) – input tensor
offset (Tensor[batch_size, 2 * offset_groups * kernel_height * kernel_width, out_height, out_width]) – offsets to be applied for each position in the convolution kernel.
weight (Tensor[out_channels, in_channels // groups, kernel_height, kernel_width]) – convolution weights, split into groups of size (in_channels // groups)
bias (Tensor[out_channels]) – optional bias of shape (out_channels,). Default: None
stride (int or Tuple[int, int]) – distance between convolution centers. Default: 1
padding (int or Tuple[int, int]) – height/width of padding of zeroes around each image. Default: 0
dilation (int or Tuple[int, int]) – the spacing between kernel elements. Default: 1
- Returns
result of convolution
- Return type
output (Tensor[batch_sz, out_channels, out_h, out_w])
- Examples::
>>> input = torch.rand(1, 3, 10, 10)
>>> kh, kw = 3, 3
>>> weight = torch.rand(5, 3, kh, kw)
>>> # offset should have the same batch size as the input and the same spatial
>>> # size as the output of the convolution. In this case, for an input of 10,
>>> # stride of 1 and kernel size of 3, without padding, the output size is 8
>>> offset = torch.rand(1, 2 * kh * kw, 8, 8)
>>> out = deform_conv2d(input, offset, weight)
>>> print(out.shape)
>>> # returns
>>> torch.Size([1, 5, 8, 8])
class torchvision.ops.RoIAlign(output_size, spatial_scale, sampling_ratio, aligned=False)[source]
See roi_align
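The module form can be used like any other nn.Module; a minimal sketch with illustrative shapes and box coordinates:
>>> import torch
>>> from torchvision.ops import RoIAlign
>>> pool = RoIAlign(output_size=(7, 7), spatial_scale=1.0, sampling_ratio=2, aligned=True)
>>> x = torch.rand(1, 3, 32, 32)
>>> boxes = torch.tensor([[0., 4., 4., 20., 20.]])  # [batch_index, x1, y1, x2, y2]
>>> out = pool(x, boxes)
>>> print(out.shape)
>>> # returns
>>> torch.Size([1, 3, 7, 7])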
class torchvision.ops.PSRoIAlign(output_size, spatial_scale, sampling_ratio)[source]
See ps_roi_align
class torchvision.ops.DeformConv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)[source]
See deform_conv2d
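A minimal sketch of the module form (illustrative shapes; the offsets are random here, whereas in practice they are typically predicted by a separate convolution and passed to forward()):
>>> import torch
>>> from torchvision.ops import DeformConv2d
>>> conv = DeformConv2d(in_channels=3, out_channels=5, kernel_size=3)
>>> x = torch.rand(1, 3, 10, 10)
>>> kh, kw = 3, 3
>>> offset = torch.rand(1, 2 * kh * kw, 8, 8)  # same spatial size as the conv output
>>> out = conv(x, offset)
>>> print(out.shape)
>>> # returns
>>> torch.Size([1, 5, 8, 8])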
class torchvision.ops.MultiScaleRoIAlign(featmap_names, output_size, sampling_ratio)[source]
Multi-scale RoIAlign pooling, which is useful for detection with or without FPN.
It infers the scale of the pooling via the heuristics present in the FPN paper.
- Parameters
featmap_names (List[str]) – the names of the feature maps that will be used for the pooling
output_size (List[Tuple[int, int]] or List[int]) – output size for the pooled region
sampling_ratio (int) – sampling ratio for ROIAlign
Examples:
>>> m = torchvision.ops.MultiScaleRoIAlign(['feat1', 'feat3'], 3, 2)
>>> i = OrderedDict()
>>> i['feat1'] = torch.rand(1, 5, 64, 64)
>>> i['feat2'] = torch.rand(1, 5, 32, 32)  # this feature won't be used in the pooling
>>> i['feat3'] = torch.rand(1, 5, 16, 16)
>>> # create some random bounding boxes
>>> boxes = torch.rand(6, 4) * 256; boxes[:, 2:] += boxes[:, :2]
>>> # original image size, before computing the feature maps
>>> image_sizes = [(512, 512)]
>>> output = m(i, [boxes], image_sizes)
>>> print(output.shape)
>>> torch.Size([6, 5, 3, 3])
class torchvision.ops.FeaturePyramidNetwork(in_channels_list, out_channels, extra_blocks=None)[source]
Module that adds a FPN on top of a set of feature maps. This is based on “Feature Pyramid Networks for Object Detection”.
The feature maps are currently supposed to be in increasing depth order.
The input to the model is expected to be an OrderedDict[Tensor], containing the feature maps on top of which the FPN will be added.
- Parameters
in_channels_list (list[int]) – number of channels for each feature map that is passed to the module
out_channels (int) – number of channels of the FPN representation
extra_blocks (ExtraFPNBlock or None) – if provided, extra operations will be performed. It is expected to take the fpn features, the original features and the names of the original features as input, and return a new list of feature maps and their corresponding names
Examples:
>>> m = torchvision.ops.FeaturePyramidNetwork([10, 20, 30], 5)
>>> # get some dummy data
>>> x = OrderedDict()
>>> x['feat0'] = torch.rand(1, 10, 64, 64)
>>> x['feat2'] = torch.rand(1, 20, 16, 16)
>>> x['feat3'] = torch.rand(1, 30, 8, 8)
>>> # compute the FPN on top of x
>>> output = m(x)
>>> print([(k, v.shape) for k, v in output.items()])
>>> # returns
>>> [('feat0', torch.Size([1, 5, 64, 64])),
>>>  ('feat2', torch.Size([1, 5, 16, 16])),
>>>  ('feat3', torch.Size([1, 5, 8, 8]))]