Camera Calibration and Geometric Vision for Robotics¶
Camera calibration is one of the most fundamental steps in any vision-based robotics system. Without it, measurements are unreliable, 3D reconstruction fails, and robot-camera coordination breaks down.
Learning Objectives¶
- Understand the pinhole camera model and intrinsic/extrinsic parameters
- Perform intrinsic calibration using checkerboard or circle grid patterns
- Calibrate stereo camera systems and compute depth maps
- Solve the hand-eye calibration problem for robot-camera systems
- Undistort images and work with multi-camera rigs
1. Overview — Why Camera Calibration Matters¶
A calibrated camera lets you:
- Measure real-world distances from pixel coordinates
- Reconstruct 3D scenes using stereo or multi-view geometry
- Fuse vision with kinematics via hand-eye calibration
- Remove lens distortion so straight edges appear straight in images
For stereo systems: two calibrated cameras plus their relative pose turn pixel disparity into metric depth (Section 4).
For robot manipulators with mounted cameras:
Robot Forward Kinematics ─┐
                          ├→ Hand-Eye Calibration → Camera-to-Gripper Transform
Camera Detection ─────────┘
2. Camera Models¶
2.1 The Pinhole Camera Model¶
Light from a 3D world point \((X, Y, Z)\) passes through the optical center and lands on the image sensor at pixel \((u, v)\):
Image Plane
┌──────────┐
│ │
│ (u,v) ● │
│ ╱ │
│ ╱ │
│ ╱ │ f (focal length)
│ ╱ │
●─────────────●──────────│── Optical Axis (Z)
(X,Y,Z) │ │
│ Pinhole │
└──────────┘
(u,v) = (fx * X/Z + cx, fy * Y/Z + cy)
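The projection equation can be checked with a quick numeric sketch (the intrinsics below are made-up illustrative values, not from a real calibration):

```python
# Hypothetical intrinsics for a 640x480 camera (illustrative values only)
fx, fy = 500.0, 500.0
cx, cy = 320.0, 240.0

def project(X, Y, Z):
    """Project a 3D camera-frame point to pixel coordinates (pinhole, no distortion)."""
    return fx * X / Z + cx, fy * Y / Z + cy

# A point 1 m in front of the camera, slightly right and above the optical axis
u, v = project(0.1, -0.05, 1.0)
print(u, v)  # -> 370.0 215.0
```

Note that doubling Z halves the pixel offset from the principal point: depth and image scale are inversely coupled, which is exactly why a single image cannot recover absolute scale.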
2.2 Intrinsic Matrix K¶
\(K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}\)
- \(f_x, f_y\) — focal lengths in pixel units
- \(c_x, c_y\) — principal point (usually near image center)
Full projection: \(\mathbf{p} = K \cdot [R \mid t] \cdot \mathbf{P}\), where \([R \mid t]\) are the extrinsics mapping homogeneous world coordinates \(\mathbf{P}\) into the camera frame.
2.3 Lens Distortion¶
Radial distortion (barrel/pincushion): \(x_d = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)\), \(y_d = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)\)
Tangential distortion (lens misalignment): \(x_d = x + 2p_1 xy + p_2(r^2 + 2x^2)\), \(y_d = y + p_1(r^2 + 2y^2) + 2p_2 xy\)
OpenCV distortion vector order: \([k_1, k_2, p_1, p_2, k_3]\)
| Coefficient | Effect | Typical Range |
|---|---|---|
| k1, k2, k3 | Radial | -0.5 to 0.5 |
| p1, p2 | Tangential | -0.01 to 0.01 |
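The combined model above can be applied directly to normalized image coordinates. A minimal sketch (coefficients are illustrative, not from a real lens):

```python
def distort(x, y, k1, k2, k3, p1, p2):
    """Apply the radial + tangential distortion model to normalized coordinates.
    Coefficient roles match the OpenCV [k1, k2, p1, p2, k3] vector described above."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# Mild barrel distortion (negative k1): an off-axis point is pulled toward center
print(distort(0.5, 0.0, -0.2, 0.0, 0.0, 0.0, 0.0))  # x shrinks from 0.5 toward 0.475
```

Because the effect grows with \(r^2\), distortion is strongest in the image corners; that is why calibration images must cover the corners, not just the center.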
3. Intrinsic Calibration¶
3.1 Calibration Patterns¶
Checkerboard — high contrast, easy sub-pixel corner detection:
┌───┬───┬───┬───┬───┬───┐
│ │ │ │ │ │ │ Internal corners: (cols-1) × (rows-1)
├───┼───┼───┼───┼───┼───┤ e.g., 7×5 board → 6×4 corners
│ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │
└───┴───┴───┴───┴───┴───┘
ChArUco board — combines checkerboard with ArUco markers for robust detection even with partial occlusion.
Circle grid — useful when corners are hard to detect due to perspective.
3.2 Complete Intrinsic Calibration Code¶
```python
import cv2
import numpy as np
import glob

# Pattern definition
PATTERN_COLS, PATTERN_ROWS = 9, 6
SQUARE_SIZE = 0.025  # meters

# 3D object points (the board defines the Z = 0 plane)
objp = np.zeros((PATTERN_ROWS * PATTERN_COLS, 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN_COLS, 0:PATTERN_ROWS].T.reshape(-1, 2)
objp *= SQUARE_SIZE

obj_points, img_points = [], []
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# Load images and detect corners
for path in sorted(glob.glob("calibration_images/*.png")):
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ret, corners = cv2.findChessboardCorners(gray, (PATTERN_COLS, PATTERN_ROWS))
    if ret:
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        obj_points.append(objp.copy())
        img_points.append(corners)

print(f"Detected corners in {len(obj_points)} images")

# Calibrate
img_size = gray.shape[::-1]  # (width, height)
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, img_size, None, None
)
print(f"RMS error: {ret:.4f} px")
print(f"K:\n{K}")
print(f"dist: {dist.flatten()}")
np.savez("calibration.npz", camera_matrix=K, dist_coeffs=dist, image_size=img_size)
```
3.3 Reprojection Error Analysis¶
A good calibration typically achieves < 0.5 px RMS error.
```python
for i in range(len(obj_points)):
    projected, _ = cv2.projectPoints(obj_points[i], rvecs[i], tvecs[i], K, dist)
    error = cv2.norm(img_points[i], projected, cv2.NORM_L2) / len(projected)
    status = "OK" if error < 0.5 else "CHECK"
    print(f"  Image {i:2d}: {error:.4f} px [{status}]")
```
4. Stereo Calibration¶
4.1 Concept¶
A stereo pair enables depth estimation from the horizontal shift (disparity) between matched points in the left and right images:
Depth \(Z = f \cdot b / d\), where \(f\) = focal length, \(b\) = baseline, \(d\) = disparity.
4.2 Stereo Calibration Code¶
```python
import cv2
import numpy as np
import glob

PATTERN_COLS, PATTERN_ROWS = 9, 6
SQUARE_SIZE = 0.025

objp = np.zeros((PATTERN_ROWS * PATTERN_COLS, 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN_COLS, 0:PATTERN_ROWS].T.reshape(-1, 2)
objp *= SQUARE_SIZE

obj_points = []
img_points_left = []
img_points_right = []
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

for left_path, right_path in zip(
    sorted(glob.glob("stereo/left/*.png")),
    sorted(glob.glob("stereo/right/*.png"))
):
    gray_l = cv2.cvtColor(cv2.imread(left_path), cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(cv2.imread(right_path), cv2.COLOR_BGR2GRAY)
    ret_l, corners_l = cv2.findChessboardCorners(gray_l, (PATTERN_COLS, PATTERN_ROWS))
    ret_r, corners_r = cv2.findChessboardCorners(gray_r, (PATTERN_COLS, PATTERN_ROWS))
    if ret_l and ret_r:
        corners_l = cv2.cornerSubPix(gray_l, corners_l, (11, 11), (-1, -1), criteria)
        corners_r = cv2.cornerSubPix(gray_r, corners_r, (11, 11), (-1, -1), criteria)
        obj_points.append(objp.copy())
        img_points_left.append(corners_l)
        img_points_right.append(corners_r)

img_size = gray_l.shape[::-1]

# Calibrate each camera individually
_, K_l, dist_l, _, _ = cv2.calibrateCamera(obj_points, img_points_left, img_size, None, None)
_, K_r, dist_r, _, _ = cv2.calibrateCamera(obj_points, img_points_right, img_size, None, None)

# Stereo calibration (find relative pose between cameras)
ret, K_l, dist_l, K_r, dist_r, R, T, E, F = cv2.stereoCalibrate(
    obj_points, img_points_left, img_points_right,
    K_l, dist_l, K_r, dist_r, img_size, criteria=criteria,
    flags=cv2.CALIB_FIX_INTRINSIC
)
print(f"Stereo RMS error: {ret:.4f}  Baseline: {abs(T[0][0])*1000:.1f} mm")

# Stereo rectification
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    K_l, dist_l, K_r, dist_r, img_size, R, T, alpha=0
)

# Compute rectification maps (do once, reuse for every frame)
map_l1, map_l2 = cv2.initUndistortRectifyMap(K_l, dist_l, R1, P1, img_size, cv2.CV_32FC1)
map_r1, map_r2 = cv2.initUndistortRectifyMap(K_r, dist_r, R2, P2, img_size, cv2.CV_32FC1)
```
4.3 Disparity and Depth Computation¶
```python
def compute_depth_map(img_l_rect, img_r_rect, Q, num_disparities=128):
    """Compute depth map from a rectified stereo pair."""
    gray_l = cv2.cvtColor(img_l_rect, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(img_r_rect, cv2.COLOR_BGR2GRAY)
    stereo = cv2.StereoSGBM_create(
        minDisparity=0, numDisparities=num_disparities, blockSize=5,
        P1=8 * 3 * 5**2, P2=32 * 3 * 5**2, disp12MaxDiff=1,
        uniquenessRatio=10, speckleWindowSize=100, speckleRange=32,
        mode=cv2.STEREO_SGBM_MODE_SGBM_3WAY
    )
    # SGBM returns fixed-point disparities scaled by 16
    disparity = stereo.compute(gray_l, gray_r).astype(np.float32) / 16.0
    points_3d = cv2.reprojectImageTo3D(disparity, Q)
    return disparity, points_3d[:, :, 2]

# Rectify and compute
img_l_rect = cv2.remap(img_l, map_l1, map_l2, cv2.INTER_LINEAR)
img_r_rect = cv2.remap(img_r, map_r1, map_r2, cv2.INTER_LINEAR)
disparity, depth = compute_depth_map(img_l_rect, img_r_rect, Q)
```
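The \(Z = f \cdot b / d\) relation from Section 4.1 is worth a numeric sanity check before trusting a depth map (the rig parameters below are illustrative, not measured):

```python
# Hypothetical stereo rig: focal length 700 px, baseline 0.06 m
fx, baseline = 700.0, 0.06

def disparity_to_depth(d_px):
    """Depth in meters from disparity in pixels, for a rectified pair."""
    return fx * baseline / d_px

print(disparity_to_depth(42.0))   # about 1 m
print(disparity_to_depth(4.2))    # about 10 m: a 10x depth for a 10x smaller disparity
```

The inverse relation means depth resolution degrades roughly quadratically with distance (a one-pixel disparity error costs about \(Z^2 / (f b)\) meters), which is why wide baselines are used for long-range stereo.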
4.4 3D Triangulation¶
```python
def triangulate_point(p_l, p_r, P1, P2):
    """Triangulate a 3D point from two views using the DLT (SVD of the stacked system)."""
    A = np.array([
        p_l[0] * P1[2] - P1[0], p_l[1] * P1[2] - P1[1],
        p_r[0] * P2[2] - P2[0], p_r[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X / X[3]  # dehomogenize

# Projection matrices in normalized coordinates (left camera at the origin)
P1_full = np.hstack([np.eye(3), np.zeros((3, 1))])
P2_full = np.hstack([R, T])

# Triangulate a matched point pair (pixel coords -> normalized coords)
p_l = np.linalg.inv(K_l) @ np.array([320, 240, 1.0])
p_r = np.linalg.inv(K_r) @ np.array([280, 240, 1.0])
X_3d = triangulate_point(p_l, p_r, P1_full, P2_full)
print(f"3D Point: X={X_3d[0]:.4f}, Y={X_3d[1]:.4f}, Z={X_3d[2]:.4f}")
```
4.5 Epipolar Geometry Verification¶
After rectification, corresponding points lie on the same scanline:
```python
def draw_epipolar_lines(img_l, img_r, num_lines=20):
    """Overlay matching horizontal scanlines on a rectified stereo pair."""
    h, w = img_l.shape[:2]
    spacing = h // num_lines
    vis_l, vis_r = img_l.copy(), img_r.copy()
    for i in range(num_lines):
        y = i * spacing + spacing // 2
        color = tuple(int(c) for c in np.random.randint(50, 255, 3))
        cv2.line(vis_l, (0, y), (w, y), color, 1)
        cv2.line(vis_r, (0, y), (w, y), color, 1)
    cv2.imshow("Epipolar Lines (should be horizontal)", np.hstack([vis_l, vis_r]))
    cv2.waitKey(0)
    cv2.destroyAllWindows()
```
5. Extrinsic Calibration (Hand-Eye)¶
5.1 The Hand-Eye Problem¶
When a camera is mounted on a robot end-effector, find the tool-to-camera transformation \(X\) by solving \(AX = XB\):
- \(A\) = relative end-effector motions (from robot kinematics)
- \(B\) = relative target observations (from camera)
- \(X\) = unknown end-effector-to-camera transform
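To make \(A\) and \(B\) concrete: each is a relative motion formed from two consecutive absolute poses. A minimal sketch (the pose matrices below are placeholders, not real robot data):

```python
import numpy as np

def relative_motions(poses):
    """A_i = T_i^-1 @ T_{i+1}: relative transform between consecutive 4x4 poses."""
    return [np.linalg.inv(poses[i]) @ poses[i + 1] for i in range(len(poses) - 1)]

# Two placeholder end-effector poses: identity, then a 0.1 m translation along x
T0 = np.eye(4)
T1 = np.eye(4)
T1[0, 3] = 0.1
A = relative_motions([T0, T1])
print(A[0][:3, 3])  # translation component of the single relative motion
```

Note that `cv2.calibrateHandEye` consumes the absolute pose lists directly and forms these relative motions internally, so this helper is only to illustrate what \(A\) and \(B\) represent.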
5.2 Eye-in-Hand vs Eye-to-Hand¶
Eye-in-Hand:                       Eye-to-Hand:
Base → Joints → EE → Camera        Base → Joints → EE → Gripper
                                   Fixed camera observes the workspace
| Config | Pros | Cons |
|---|---|---|
| Eye-in-hand | Close to target, moves with robot | Camera pose inherits kinematic error |
| Eye-to-hand | Fixed calibration, whole workspace | Occlusion by arm |
5.3 Calibration with ChArUco Boards¶
```python
import cv2
import numpy as np

# Generate a ChArUco board (print this for calibration)
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
board = cv2.aruco.CharucoBoard((5, 7), 0.04, 0.03, dictionary)  # squares, square size, marker size (m)
cv2.imwrite("charuco_board.png", board.generateImage((500, 700)))
```
5.4 Solving AX = XB with OpenCV¶
```python
def hand_eye_calibrate(robot_poses, target_poses):
    """Solve hand-eye calibration using the Tsai-Lenz method.

    Args:
        robot_poses: list of 4x4 gripper-to-base pose matrices
        target_poses: list of 4x4 target-to-camera pose matrices
    Returns:
        X: 4x4 camera-to-gripper transform
    """
    R_gripper2base = [T[:3, :3] for T in robot_poses]
    t_gripper2base = [T[:3, 3].reshape(3, 1) for T in robot_poses]
    R_target2cam = [T[:3, :3] for T in target_poses]
    t_target2cam = [T[:3, 3].reshape(3, 1) for T in target_poses]
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI
    )
    X = np.eye(4)
    X[:3, :3] = R_cam2gripper
    X[:3, 3] = t_cam2gripper.flatten()
    return X

# Usage (replace with your actual robot/camera data):
# robot_poses = [...]   # Read from robot controller
# target_poses = [...]  # Detected via solvePnP on ArUco/ChArUco markers
# T_cam_gripper = hand_eye_calibrate(robot_poses, target_poses)
```
5.5 ArUco Marker Detection for Target Poses¶
```python
def detect_aruco_poses(frame, camera_matrix, dist_coeffs, marker_size=0.05):
    """Detect ArUco markers and return their 4x4 poses in the camera frame."""
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
    params = cv2.aruco.DetectorParameters()
    detector = cv2.aruco.ArucoDetector(dictionary, params)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    poses = {}
    if ids is not None:
        for i, mid in enumerate(ids.flatten()):
            rvec, tvec, _ = cv2.aruco.estimatePoseSingleMarkers(
                corners[i], marker_size, camera_matrix, dist_coeffs
            )
            R, _ = cv2.Rodrigues(rvec[0])
            T = np.eye(4)
            T[:3, :3] = R
            T[:3, 3] = tvec[0].flatten()
            poses[mid] = T
            cv2.drawFrameAxes(frame, camera_matrix, dist_coeffs, rvec[0], tvec[0], marker_size * 0.5)
    return poses, frame
```
6. Undistortion¶
6.1 Basic Undistortion (cv2.undistort)¶
```python
import cv2
import numpy as np

calib = np.load("calibration.npz")
K, dist = calib["camera_matrix"], calib["dist_coeffs"]
img = cv2.imread("test_image.jpg")
undistorted = cv2.undistort(img, K, dist)
```
6.2 Efficient Remapping (cv2.remap)¶
Precompute maps once, reuse for every frame — essential for real-time:
```python
h, w = img.shape[:2]
new_K, roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), 1, (w, h))
map1, map2 = cv2.initUndistortRectifyMap(K, dist, None, new_K, (w, h), cv2.CV_32FC1)

# Now undistortion is just a fast remap
undistorted = cv2.remap(frame, map1, map2, cv2.INTER_LINEAR)
```
6.3 Real-Time Undistortion Class¶
```python
class RealTimeUndistorter:
    def __init__(self, K, dist, target_size=(640, 480)):
        w, h = target_size
        new_K, roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), 1, (w, h))
        self.map1, self.map2 = cv2.initUndistortRectifyMap(
            K, dist, None, new_K, (w, h), cv2.CV_32FC1
        )
        x, y, rw, rh = roi
        self.crop = (slice(y, y + rh), slice(x, x + rw))

    def undistort(self, frame):
        return cv2.remap(frame, self.map1, self.map2, cv2.INTER_LINEAR)[self.crop]

# Usage
undistorter = RealTimeUndistorter(K, dist)
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow("Undistorted", undistorter.undistort(frame))
    if cv2.waitKey(1) & 0xFF == 27:  # ESC
        break
cap.release()
cv2.destroyAllWindows()
```
7. Multi-Camera Systems¶
7.1 Camera Rig Calibration¶
For N cameras, calibrate each intrinsically, then pairwise:
```python
class MultiCameraCalibrator:
    def __init__(self, camera_ids):
        self.camera_ids = camera_ids
        self.intrinsics = {}
        self.extrinsics = {}

    def calibrate_pairwise(self, cam_i, cam_j, obj_pts, img_i, img_j,
                           Ki, di, Kj, dj, img_size):
        """Find the relative pose between two cameras (intrinsics held fixed)."""
        ret, Ki, di, Kj, dj, R, T, E, F = cv2.stereoCalibrate(
            obj_pts, img_i, img_j, Ki, di, Kj, dj, img_size,
            flags=cv2.CALIB_FIX_INTRINSIC
        )
        T_mat = np.eye(4)
        T_mat[:3, :3] = R
        T_mat[:3, 3] = T.flatten()
        self.extrinsics[(cam_i, cam_j)] = T_mat
        return T_mat

    def get_transform(self, cam_i, cam_j):
        """Get the 4x4 transform between any two cameras via chaining.
        Assumes cam_i precedes cam_j in camera_ids and that every
        consecutive pair in between has been calibrated."""
        if (cam_i, cam_j) in self.extrinsics:
            return self.extrinsics[(cam_i, cam_j)]
        # Chain through intermediate cameras
        T = np.eye(4)
        for k in range(self.camera_ids.index(cam_i), self.camera_ids.index(cam_j)):
            T = T @ self.extrinsics[(self.camera_ids[k], self.camera_ids[k + 1])]
        return T
```
7.2 Overlapping FOV Stitching¶
```python
def compute_stitch_homography(K_i, R_i, K_j, R_j):
    """Compute the homography mapping view i into view j.
    Valid for a (nearly) pure-rotation rig, where the translation between
    cameras is negligible relative to scene depth; distortion is assumed
    to have been removed beforehand."""
    R_rel = R_j @ R_i.T
    return K_j @ R_rel @ np.linalg.inv(K_i)

def stitch_two_views(img_i, img_j, H):
    h, w = img_j.shape[:2]
    warped = cv2.warpPerspective(img_i, H, (w * 2, h))
    warped[0:h, 0:w] = img_j
    # Crop to the bounding box of the non-black region
    gray = cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 1, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    x, y, cw, ch = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return warped[y:y + ch, x:x + cw]
```
8. Practical Tips¶
8.1 Lighting¶
| Condition | Effect | Recommendation |
|---|---|---|
| Too dark | Corner detection fails | Use bright, even lighting |
| Too bright | Overexposed pattern | Use diffuse light |
| Shadows | Uneven contrast | Multiple light sources |
| Flickering | Inconsistent brightness | LED or natural light |
8.2 Number of Images¶
| Scenario | Minimum | Recommended |
|---|---|---|
| Basic intrinsic | 10 | 15-25 |
| High-precision intrinsic | 25 | 30-50 |
| Stereo calibration | 10 pairs | 20-30 pairs |
| Hand-eye calibration | 10 | 15-20 |
| Wide-angle lens | 30 | 50-80 |
Key principle: Vary position, orientation, distance, and tilt of the pattern. Cover image center, edges, and corners.
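One way to act on this principle is to check spatial coverage after detection. The helper below is a hypothetical utility (not part of OpenCV) that bins detected corner pixel coordinates into a coarse grid; cells left at zero reveal image regions the pattern never visited:

```python
import numpy as np

def coverage_grid(corner_sets, img_size, bins=(4, 3)):
    """Count detected calibration corners per image-region cell.
    corner_sets: list of (N, 2) pixel-coordinate arrays, one per image.
    Returns a (rows, cols) count array; zero cells mark uncovered regions."""
    w, h = img_size
    grid = np.zeros((bins[1], bins[0]), dtype=int)
    for corners in corner_sets:
        cols = np.clip((corners[:, 0] / w * bins[0]).astype(int), 0, bins[0] - 1)
        rows = np.clip((corners[:, 1] / h * bins[1]).astype(int), 0, bins[1] - 1)
        for r, c in zip(rows, cols):
            grid[r, c] += 1
    return grid

# Example: corners clustered near the image center leave all edge cells at zero
centered = [np.random.uniform([250, 180], [390, 300], size=(54, 2))]
print(coverage_grid(centered, (640, 480)))
```

If the corners come from `findChessboardCorners`, reshape each result to `(N, 2)` before passing it in; any persistent zero cells are a hint to re-shoot with the board in those regions.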
8.3 Common Errors¶
| Problem | Likely Cause | Solution |
|---|---|---|
| `findChessboardCorners` returns False | Poor lighting, pattern not flat | Use rigid board, better lighting |
| High reprojection error (> 1.0 px) | Wrong pattern dimensions, inconsistent detection | Verify PATTERN_COLS/ROWS, use rigid board |
| Large distortion coefficients | Insufficient image variety | Tilt board, add more angles |
| Stereo rectification fails | Too little overlap, misaligned cameras | Ensure physical alignment |
| Inconsistent hand-eye results | Not enough diverse poses | Spread poses across workspace |
8.4 Calibration Board Quality¶
| Board | Precision | Durability | Cost |
|---|---|---|---|
| Printed paper | ±2-5 mm | Low | Free |
| Laminated print | ±1-2 mm | Medium | Low |
| Acrylic board | ±0.1 mm | High | Medium |
| Glass board | ±0.05 mm | Very High | High |
9. Complete Calibration Pipeline¶
```python
#!/usr/bin/env python3
"""End-to-end camera calibration script for robotics."""
import argparse
import glob
import os

import cv2
import numpy as np

class CameraCalibrator:
    def __init__(self, pattern_size=(9, 6), square_size=0.025):
        self.pattern_size = pattern_size
        self.square_size = square_size
        self.objp = np.zeros((pattern_size[1] * pattern_size[0], 3), np.float32)
        self.objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
        self.objp *= square_size
        self.obj_points, self.img_points = [], []
        self.criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

    def collect_images(self, source=0, output_dir="calibration_images", num=25):
        os.makedirs(output_dir, exist_ok=True)
        cap = cv2.VideoCapture(source)
        count = 0
        print(f"SPACE=capture, ESC=done. Collecting {num} images.")
        while count < num:
            ret, frame = cap.read()
            if not ret:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            found, corners = cv2.findChessboardCorners(gray, self.pattern_size)
            disp = frame.copy()
            color = (0, 255, 0) if found else (0, 0, 255)
            if found:
                cv2.drawChessboardCorners(disp, self.pattern_size, corners, found)
            cv2.putText(disp, f"{count}/{num}", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, color, 2)
            cv2.imshow("Capture", disp)
            key = cv2.waitKey(30) & 0xFF
            if key == 32 and found:  # SPACE
                cv2.imwrite(f"{output_dir}/img_{count:03d}.png", frame)
                count += 1
            elif key == 27:  # ESC
                break
        cap.release()
        cv2.destroyAllWindows()

    def load_images(self, image_dir):
        paths = sorted(glob.glob(os.path.join(image_dir, "*.png")) +
                       glob.glob(os.path.join(image_dir, "*.jpg")))
        self.obj_points, self.img_points = [], []
        self.image_size = None
        for path in paths:
            gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
            if self.image_size is None:
                self.image_size = gray.shape[::-1]
            ret, corners = cv2.findChessboardCorners(gray, self.pattern_size)
            if ret:
                corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), self.criteria)
                self.obj_points.append(self.objp.copy())
                self.img_points.append(corners)
        print(f"Loaded {len(self.img_points)}/{len(paths)} valid images")

    def calibrate(self):
        ret, self.K, self.dist, self.rvecs, self.tvecs = cv2.calibrateCamera(
            self.obj_points, self.img_points, self.image_size, None, None
        )
        print(f"RMS error: {ret:.4f} px")
        print(f"K:\n{self.K}")
        print(f"dist: {self.dist.flatten()}")
        # Per-image analysis
        for i in range(len(self.rvecs)):
            proj, _ = cv2.projectPoints(self.obj_points[i], self.rvecs[i], self.tvecs[i], self.K, self.dist)
            err = cv2.norm(self.img_points[i], proj, cv2.NORM_L2) / len(proj)
            print(f"  Image {i:2d}: {err:.4f} px {'OK' if err < 0.5 else 'CHECK'}")
        return ret

    def save(self, filepath="calibration.npz"):
        np.savez(filepath, camera_matrix=self.K, dist_coeffs=self.dist, image_size=self.image_size)
        print(f"Saved to {filepath}")

    def create_undistorter(self):
        w, h = self.image_size
        new_K, roi = cv2.getOptimalNewCameraMatrix(self.K, self.dist, (w, h), 1, (w, h))
        m1, m2 = cv2.initUndistortRectifyMap(self.K, self.dist, None, new_K, (w, h), cv2.CV_32FC1)
        x, y, rw, rh = roi
        return lambda f: cv2.remap(f, m1, m2, cv2.INTER_LINEAR)[y:y + rh, x:x + rw]

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Camera Calibration")
    parser.add_argument("--collect", action="store_true")
    parser.add_argument("--images", default="calibration_images")
    parser.add_argument("--pattern", type=int, nargs=2, default=[9, 6])
    parser.add_argument("--square", type=float, default=0.025)
    args = parser.parse_args()

    cal = CameraCalibrator(tuple(args.pattern), args.square)
    if args.collect:
        cal.collect_images(output_dir=args.images)
    else:
        cal.load_images(args.images)
        if len(cal.img_points) >= 5:
            cal.calibrate()
            cal.save()
        else:
            print("Need at least 5 valid images")
```
10. References¶
- OpenCV Camera Calibration Tutorial — Official OpenCV guide
- OpenCV Stereo Calibration — Stereo vision tutorial
- Tsai & Lenz (1989) — Hand-eye calibration method
- Zhang (2000) — Flexible camera calibration technique
- Marchand et al. (2016) — Pose estimation for augmented reality
- OpenCV ArUco Module — ArUco/ChArUco calibration
- Szeliski, Computer Vision: Algorithms and Applications — Multi-view geometry (Ch. 6-8)
- AprilTag 3 — Fiducial marker system for robotics