Computer Vision System Toolbox

Feature Detection, Extraction, and Matching

Computer Vision System Toolbox provides a suite of feature detectors and descriptors. Additionally, the system toolbox provides functionality to match two sets of feature vectors and visualize the results.

When combined into a single workflow, feature detection, extraction, and matching can be used to solve many computer vision design challenges, such as image registration, stereo vision, object detection, and tracking.

Feature Detection and Extraction

A feature is an interesting part of an image, such as a corner, blob, edge, or line. Feature extraction enables you to derive a set of feature vectors, also called descriptors, from a set of detected features. Computer Vision System Toolbox offers capabilities for feature detection and extraction that include:

  • Corner detection, including Shi & Tomasi, Harris, and FAST methods
  • BRISK, MSER, and SURF detection for blobs and regions
  • Extraction of BRISK, FREAK, SURF, and simple pixel neighborhood descriptors
  • Histogram of Oriented Gradients (HOG) feature extraction
  • Visualization of feature location, scale, and orientation
SURF (left), MSER (center), and corner detection (right) with Computer Vision System Toolbox. Using the same image, the three different feature types are detected and results are plotted over the original image.
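To make the corner-detection idea concrete, here is a minimal pure-Python sketch of a Shi & Tomasi-style minimum-eigenvalue corner score on a tiny synthetic image. This is an illustrative stand-in, not the toolbox implementation (the toolbox provides functions such as detectMinEigenFeatures and detectHarrisFeatures for this):

```python
import math

def min_eig_corner_score(img, y, x):
    """Shi & Tomasi response: the smaller eigenvalue of the structure
    tensor accumulated over a 3x3 window around pixel (y, x). A large
    value means intensity varies strongly in two directions (a corner)."""
    sxx = sxy = syy = 0.0
    for j in range(y - 1, y + 2):
        for i in range(x - 1, x + 2):
            gx = (img[j][i + 1] - img[j][i - 1]) / 2.0  # central differences
            gy = (img[j + 1][i] - img[j - 1][i]) / 2.0
            sxx += gx * gx
            sxy += gx * gy
            syy += gy * gy
    # smaller eigenvalue of [[sxx, sxy], [sxy, syy]]
    return ((sxx + syy) - math.sqrt((sxx - syy) ** 2 + 4 * sxy ** 2)) / 2.0

# Synthetic image: a bright square on a dark background
img = [[0.0] * 12 for _ in range(12)]
for j in range(4, 9):
    for i in range(4, 9):
        img[j][i] = 1.0

corner = min_eig_corner_score(img, 4, 4)  # square's top-left corner
edge = min_eig_corner_score(img, 4, 6)    # midpoint of the top edge
flat = min_eig_corner_score(img, 2, 2)    # uniform background
```

The score is high only at the corner: an edge has gradients in one direction (one large eigenvalue, one near zero), and a flat region has none.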
Histogram of Oriented Gradients (HOG) feature extraction of an image (top). By varying the cell size, feature vectors of different lengths are created to represent the image.
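The caption's point about cell size can be sketched numerically. The following pure-Python sketch accumulates a per-cell histogram of gradient orientations; it is a simplified stand-in for the toolbox's extractHOGFeatures (block normalization is omitted for brevity). Halving the number of cells shrinks the feature vector accordingly:

```python
import math

def hog_features(img, cell=4, nbins=9):
    """Simplified HOG: accumulate gradient magnitudes into per-cell
    orientation histograms (angles folded into [0, 180) degrees)."""
    h, w = len(img), len(img[0])
    cy, cx = h // cell, w // cell
    hist = [[0.0] * nbins for _ in range(cy * cx)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            b = int(ang / (180.0 / nbins)) % nbins
            cyi, cxi = y // cell, x // cell
            if cyi < cy and cxi < cx:
                hist[cyi * cx + cxi][b] += mag
    return [v for cell_hist in hist for v in cell_hist]

# A 16x16 horizontal intensity ramp
img = [[float(x) for x in range(16)] for _ in range(16)]
f_small = hog_features(img, cell=4)  # 4x4 = 16 cells, 9 bins each
f_large = hog_features(img, cell=8)  # 2x2 = 4 cells, 9 bins each
```

With a cell size of 4 the descriptor has 16 x 9 = 144 values; with a cell size of 8 it has only 4 x 9 = 36, illustrating the size/detail trade-off shown in the figure.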

Feature Matching

Feature matching is the comparison of two sets of feature descriptors obtained from different images to provide point correspondences between images. Computer Vision System Toolbox offers functionality for feature matching that includes:

  • Configurable matching metrics, including SAD, SSD, and normalized cross-correlation
  • Hamming distance for binary features
  • Matching methods including Nearest Neighbor Ratio, Nearest Neighbor, and Threshold
  • Multicore support for faster execution on large feature sets
Detected features indicated by red circles (left) and green crosses (right). The yellow lines indicate the corresponding matched features between the two images.
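The metrics and the nearest-neighbor-ratio strategy listed above can be sketched in a few lines of pure Python. These are illustrative stand-ins for the toolbox's matchFeatures function, and the helper names are mine:

```python
def sad(a, b):
    """Sum of absolute differences between two descriptors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def ssd(a, b):
    """Sum of squared differences between two descriptors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def hamming(a, b):
    """Hamming distance between binary descriptors packed as ints."""
    return bin(a ^ b).count("1")

def match_nn_ratio(desc1, desc2, metric=ssd, max_ratio=0.6):
    """Nearest-neighbor-ratio matching: accept a match only when the
    best distance is clearly smaller than the second-best distance."""
    matches = []
    for i, d1 in enumerate(desc1):
        dists = sorted((metric(d1, d2), j) for j, d2 in enumerate(desc2))
        (best, j), (second, _) = dists[0], dists[1]
        if best < max_ratio * second:
            matches.append((i, j))
    return matches

matches = match_nn_ratio([[0, 0], [5, 5]], [[0.1, 0], [5, 5.1], [9, 9]])
```

The ratio test rejects ambiguous matches: a descriptor that is almost equally close to two candidates is more likely a repeated pattern than a true correspondence.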

Statistically robust methods such as RANSAC can be used to filter outliers from matched feature sets while estimating the geometric transformation or fundamental matrix. This is useful when applying feature matching to image registration, object detection, or stereo vision.
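As a minimal illustration of the RANSAC idea, the sketch below fits a pure 2-D translation, a far simpler geometric model than the projective transforms or fundamental matrix mentioned above, but the hypothesize-and-verify loop is the same:

```python
import random

def ransac_translation(pts1, pts2, iters=200, tol=1.0, seed=0):
    """Fit a 2-D translation from point correspondences while tolerating
    outliers: repeatedly hypothesize a shift from one random pair and
    keep the hypothesis that the most correspondences agree with."""
    rng = random.Random(seed)
    best_t, best_inliers = (0.0, 0.0), []
    for _ in range(iters):
        k = rng.randrange(len(pts1))
        tx = pts2[k][0] - pts1[k][0]
        ty = pts2[k][1] - pts1[k][1]
        inliers = [
            j for j in range(len(pts1))
            if abs(pts1[j][0] + tx - pts2[j][0]) < tol
            and abs(pts1[j][1] + ty - pts2[j][1]) < tol
        ]
        if len(inliers) > len(best_inliers):
            best_t, best_inliers = (tx, ty), inliers
    return best_t, best_inliers

# Four correspondences follow a (10, -2) shift; the last is an outlier.
pts1 = [(0, 0), (1, 0), (2, 3), (4, 4), (5, 1)]
pts2 = [(10, -2), (11, -2), (12, 1), (14, 2), (40, 40)]
t, inliers = ransac_translation(pts1, pts2)
```

A single gross outlier would badly skew a least-squares fit of the shift, but RANSAC simply leaves it out of the consensus set.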

Feature-Based Image Registration

Image registration is the transformation of images from different camera views into a unified coordinate system. Computer Vision System Toolbox supports an automatic, feature-based approach to image registration. Typical uses include video mosaicking, video stabilization, and image fusion.

Feature detection, extraction, and matching are the first steps in the feature-based automatic image registration workflow. You can use RANSAC to remove outliers from the matched feature sets while computing the geometric transformation between images, and then apply that transformation to align the two images.
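The final "apply the transformation" step can be sketched as a simple inverse-mapping warp with nearest-neighbor sampling. This pure-Python sketch only shows the mechanics; the toolbox applies estimated transforms with far more capable image-warping functionality:

```python
def warp_affine(img, a, b, c, d, tx, ty, out_h, out_w):
    """Apply the affine map (x, y) -> (a*x + b*y + tx, c*x + d*y + ty)
    to an image by inverse mapping with nearest-neighbor sampling.
    Output pixels with no source pixel are left as 0."""
    det = a * d - b * c
    ia, ib = d / det, -b / det    # inverse of the 2x2 linear part
    ic, id_ = -c / det, a / det
    out = [[0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            sx = ia * (x - tx) + ib * (y - ty)  # source coordinates
            sy = ic * (x - tx) + id_ * (y - ty)
            sxi, syi = int(round(sx)), int(round(sy))
            if 0 <= syi < len(img) and 0 <= sxi < len(img[0]):
                out[y][x] = img[syi][sxi]
    return out

# Shift a single bright pixel right by 2 and down by 1
img = [[0] * 5 for _ in range(5)]
img[1][1] = 9
out = warp_affine(img, 1, 0, 0, 1, 2, 1, 5, 5)
```

Inverse mapping (looping over output pixels and sampling the source) avoids the holes that forward mapping would leave in the aligned image.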

Feature-based registration, used for video stabilization. The system toolbox detects interest points in two sequential video frames using corner features (top); putative matches include numerous outliers (bottom left), which are removed using the RANSAC method (bottom right).
