Tutorial: Feature Detection Using OpenCV in Python

Feature detection is a crucial technique in computer vision that helps identify unique points, edges, or regions in an image.

These features are useful for tasks like object detection, image stitching, and motion tracking. OpenCV provides several feature detection methods.

What You’ll Learn

  • What image features are and why they matter
  • Harris and Shi-Tomasi corner detection
  • ORB and SIFT keypoint detection and description
  • Feature matching between two images and feature tracking in video

1. Introduction to Feature Detection

Features are distinct and meaningful points or regions in an image, such as edges, corners, or blobs. OpenCV provides various methods to detect features:

  • Harris Corner Detector
  • Shi-Tomasi Corner Detector
  • ORB (Oriented FAST and Rotated BRIEF)
  • SIFT (Scale-Invariant Feature Transform)
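
The examples below assume a recent opencv-python build (4.4.0 or newer, so SIFT is available in the main module) and NumPy. A quick sanity check of your environment, as a minimal sketch:

import cv2

print(cv2.__version__)        # should report 4.4.0 or later for the SIFT examples
orb = cv2.ORB_create()        # available in every build
sift = cv2.SIFT_create()      # raises AttributeError on builds older than 4.4.0
print(type(orb).__name__, type(sift).__name__, "detectors created")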

2. Harris Corner Detection

The Harris Corner Detector identifies corners in an image by analyzing changes in pixel intensity in all directions.

Example: Harris Corner Detection

import cv2
import numpy as np

# Load the image in grayscale
image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# Convert to float32
gray = np.float32(image)

# Apply Harris corner detection
corners = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

# Dilate the result for better visibility
corners = cv2.dilate(corners, None)

# Mark the corners in the original image
result = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
result[corners > 0.01 * corners.max()] = [0, 0, 255]

# Display the result
cv2.imshow("Harris Corners", result)
cv2.waitKey(0)
cv2.destroyAllWindows()

Parameters

  • blockSize: Size of the neighborhood considered around each pixel.
  • ksize: Aperture size of the Sobel operator used to compute the gradients.
  • k: Harris detector free parameter in the corner response formula (typically 0.04–0.06).
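
Note that cv2.cornerHarris() returns a response map the same size as the image rather than a list of points; to get explicit coordinates you can threshold the map yourself. A minimal sketch (the 1% threshold is an arbitrary choice):

import cv2
import numpy as np

gray = np.float32(cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE))
response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

# Keep pixels whose response exceeds 1% of the strongest response
ys, xs = np.where(response > 0.01 * response.max())
print(len(xs), "corner pixels; first few (x, y):", list(zip(xs[:5], ys[:5])))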

3. Shi-Tomasi Corner Detection (Good Features to Track)

The Shi-Tomasi detector refines Harris by scoring each candidate with the smaller eigenvalue of the local structure matrix and returning only the strongest corners, which is why OpenCV exposes it as goodFeaturesToTrack().

Example: Shi-Tomasi Corner Detection

import cv2
import numpy as np

# Load the image in grayscale
image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# Detect corners using Shi-Tomasi method
corners = cv2.goodFeaturesToTrack(image, maxCorners=100, qualityLevel=0.01, minDistance=10)
corners = corners.astype(int)  # integer pixel coordinates (np.int0 was removed in NumPy 2.0)

# Mark the corners on the original image
result = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
for corner in corners:
    x, y = corner.ravel()
    cv2.circle(result, (x, y), 5, (0, 255, 0), -1)

# Display the result
cv2.imshow("Shi-Tomasi Corners", result)
cv2.waitKey(0)
cv2.destroyAllWindows()

Parameters

  • maxCorners: Maximum number of corners to return.
  • qualityLevel: Minimum accepted corner quality, as a fraction of the best corner’s score.
  • minDistance: Minimum Euclidean distance allowed between detected corners.
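
cv2.goodFeaturesToTrack() can also rank corners with the Harris response instead of the minimum eigenvalue through its useHarrisDetector flag; a minimal sketch:

import cv2

image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# Same interface, but corners are scored with the Harris measure
corners = cv2.goodFeaturesToTrack(image, maxCorners=100, qualityLevel=0.01,
                                  minDistance=10, useHarrisDetector=True, k=0.04)
print(0 if corners is None else len(corners), "corners found")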

4. ORB (Oriented FAST and Rotated BRIEF)

ORB is a fast, patent-free detector and descriptor extractor that combines the FAST keypoint detector with a rotation-aware variant of the BRIEF binary descriptor, which makes it well suited to real-time applications.

Example: ORB Feature Detection

import cv2

# Load the image
image = cv2.imread("input.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Initialize ORB detector
orb = cv2.ORB_create()

# Detect keypoints
keypoints = orb.detect(gray, None)

# Compute descriptors
keypoints, descriptors = orb.compute(gray, keypoints)

# Draw keypoints
result = cv2.drawKeypoints(image, keypoints, None, color=(0, 255, 0), flags=0)

# Display the result
cv2.imshow("ORB Features", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
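
The separate detect() and compute() steps above can be collapsed into a single detectAndCompute() call, which is what the matching example in Section 6.1 uses; a minimal sketch:

import cv2

image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=500)  # 500 is the default keypoint budget

# Detect keypoints and compute their binary descriptors in one call
keypoints, descriptors = orb.detectAndCompute(image, None)
print(len(keypoints), "keypoints; descriptor array shape:",
      None if descriptors is None else descriptors.shape)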

5. SIFT (Scale-Invariant Feature Transform)

SIFT detects keypoints and computes 128-dimensional descriptors that are invariant to image scale and rotation. Since OpenCV 4.4.0 it ships in the main module, so cv2.SIFT_create() works without the contrib package.

Example: SIFT Feature Detection

import cv2

# Load the image
image = cv2.imread("input.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Initialize SIFT detector
sift = cv2.SIFT_create()

# Detect keypoints and compute descriptors
keypoints, descriptors = sift.detectAndCompute(gray, None)

# Draw keypoints
result = cv2.drawKeypoints(image, keypoints, None, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

# Display the result
cv2.imshow("SIFT Features", result)
cv2.waitKey(0)
cv2.destroyAllWindows()

6. Practical Examples

6.1 Feature Matching Between Two Images

import cv2

# Load two images
image1 = cv2.imread("image1.jpg", cv2.IMREAD_GRAYSCALE)
image2 = cv2.imread("image2.jpg", cv2.IMREAD_GRAYSCALE)

# Initialize ORB detector
orb = cv2.ORB_create()

# Detect keypoints and compute descriptors
kp1, des1 = orb.detectAndCompute(image1, None)
kp2, des2 = orb.detectAndCompute(image2, None)

# Match features using Brute-Force matcher
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)
matches = sorted(matches, key=lambda x: x.distance)

# Draw matches
result = cv2.drawMatches(image1, kp1, image2, kp2, matches[:10], None, flags=cv2.DRAW_MATCHES_FLAGS_NOT_DRAW_SINGLE_POINTS)

# Display the result
cv2.imshow("Feature Matching", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
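
The Hamming norm above is specific to binary descriptors such as ORB’s. With SIFT’s floating-point descriptors you would switch to cv2.NORM_L2, and crossCheck is commonly replaced by Lowe’s ratio test via knnMatch(). A minimal sketch (the 0.75 ratio is a typical but arbitrary choice):

import cv2

image1 = cv2.imread("image1.jpg", cv2.IMREAD_GRAYSCALE)
image2 = cv2.imread("image2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(image1, None)
kp2, des2 = sift.detectAndCompute(image2, None)

# Keep the two closest candidates for each descriptor, then accept a match only
# if the best one is clearly better than the runner-up (Lowe's ratio test)
bf = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in bf.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
print(len(good), "matches passed the ratio test")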

6.2 Tracking Features in Video

import cv2
import numpy as np

# Load the video
cap = cv2.VideoCapture("video.mp4")

# Read the first frame
ret, frame = cap.read()
gray_prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect Shi-Tomasi corners (kept as float32, which calcOpticalFlowPyrLK expects)
corners = cv2.goodFeaturesToTrack(gray_prev, maxCorners=100, qualityLevel=0.01, minDistance=10)

# Create a mask for drawing
mask = np.zeros_like(frame)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    gray_next = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Calculate optical flow from the previous frame to the current one
    corners_next, status, _ = cv2.calcOpticalFlowPyrLK(gray_prev, gray_next, corners, None)

    # Keep only the points that were tracked successfully
    good_new = corners_next[status.ravel() == 1]
    good_old = corners[status.ravel() == 1]

    # Draw the tracks
    for new, old in zip(good_new, good_old):
        a, b = new.ravel()
        c, d = old.ravel()
        mask = cv2.line(mask, (int(a), int(b)), (int(c), int(d)), (0, 255, 0), 2)
        frame = cv2.circle(frame, (int(a), int(b)), 5, (0, 0, 255), -1)

    output = cv2.add(frame, mask)
    cv2.imshow("Feature Tracking", output)

    if cv2.waitKey(30) & 0xFF == ord('q'):
        break

    gray_prev = gray_next.copy()
    corners = good_new.reshape(-1, 1, 2)

cap.release()
cv2.destroyAllWindows()
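
On longer clips, points drift out of view or fail to track, so the tracked set shrinks over time. A common refinement, not part of the loop above, is to re-detect corners whenever too few survive. The fragment below (with a hypothetical threshold of 20 points) would go at the end of the loop body:

    # Re-seed the tracker when fewer than 20 points remain
    if corners is None or len(corners) < 20:
        corners = cv2.goodFeaturesToTrack(gray_prev, maxCorners=100,
                                          qualityLevel=0.01, minDistance=10)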

7. Summary

Key Methods

  • cv2.cornerHarris(): Harris corner detection.
  • cv2.goodFeaturesToTrack(): Shi-Tomasi corner detection.
  • cv2.ORB_create(): ORB feature detection.
  • cv2.SIFT_create(): SIFT feature detection.

Best Practices

  1. Preprocess images (grayscale conversion, resizing) before detection; see the sketch after this list.
  2. Choose a detection method based on task requirements (e.g., ORB for speed, SIFT for accuracy).
  3. Use feature matching for tasks like image alignment and object tracking.
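
A minimal preprocessing sketch for the first point, assuming a width cap of 800 px (an arbitrary choice) is enough to keep detection fast on large photos:

import cv2

image = cv2.imread("input.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Downscale very large images before running a detector
max_width = 800
if gray.shape[1] > max_width:
    scale = max_width / gray.shape[1]
    gray = cv2.resize(gray, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)

print("Preprocessed size:", gray.shape)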

By mastering these feature detection techniques, you can efficiently analyze, match, and compare images.
