Pitch2D turns broadcast video into player and ball tracking data. Calibrate four points, pick two team colours, and let computer vision do the rest.
No setup ceremony. No exhaustive labelling. Just calibrate, configure, and process.
MP4, MOV or WebM up to 500 MB. Broadcast TV, drone, or fixed-tripod — all supported.
Click any four known positions on the pitch (corners, box intersections, halfway markers). Pitch2D solves the homography from there.
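Four point correspondences are exactly enough to pin down a planar homography. A minimal numpy sketch of the standard DLT (direct linear transform) solve, with hypothetical clicked pixels mapped to known pitch positions in metres; this illustrates the math, not Pitch2D's actual solver:

```python
import numpy as np

def solve_homography(src, dst):
    """DLT: find H such that dst ~ H @ src from >= 4 point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null space of A (smallest singular vector) holds the 9 entries of H.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalise so H[2, 2] == 1

def to_pitch(H, x, y):
    """Project a pixel (x, y) into pitch coordinates via H."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# Hypothetical calibration clicks: pitch corners of a 105 x 68 m field.
pixels = [(120, 80), (1800, 95), (1700, 950), (210, 970)]
pitch  = [(0, 0), (105, 0), (105, 68), (0, 68)]
H = solve_homography(pixels, pitch)
```

With exactly four correspondences the solve is exact, so each clicked pixel maps back onto its known pitch position.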
Annotated overlay video, 2D minimap, and frame-by-frame tracking JSON. Ready to drop into Python, R or your own dashboards.
The Pitch2D engine runs on your local hardware — never on someone else's cloud. Your unreleased match analysis is never uploaded, logged or shared.
Everything you need to turn a video file into structured tactical data.
YOLOv8 pre-trained on COCO, paired with ByteTrack to keep IDs stable across cuts and occlusions.
Click-to-calibrate transform from camera view to top-down pitch coordinates. Survives camera pans.
HSV clustering on the torso region with per-track-ID lock-in. Referees auto-detected and labelled.
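The team-assignment idea can be sketched in a few lines: take each track's torso pixels, reduce them to a dominant hue, then split the tracks into two clusters. Everything below is illustrative (the sampling, the field names, the tiny 1-D k-means); it is not Pitch2D's implementation, and hue wraparound at red is ignored for brevity:

```python
import colorsys
import numpy as np

def torso_hue(rgb_pixels):
    """Median hue (0..1) of a track's sampled torso pixels (hypothetical sampling)."""
    hues = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0]
            for r, g, b in rgb_pixels]
    return float(np.median(hues))

def assign_teams(track_hues, iters=10):
    """Tiny 1-D k-means (k=2) over per-track hues -> team 0 / team 1."""
    hues = np.asarray(track_hues)
    centroids = np.array([hues.min(), hues.max()])  # spread the initial guesses
    for _ in range(iters):
        labels = np.abs(hues[:, None] - centroids[None, :]).argmin(axis=1)
        for k in (0, 1):
            if (labels == k).any():
                centroids[k] = hues[labels == k].mean()
    return labels

# Hypothetical torso samples for three tracks: two reddish shirts, one bluish.
tracks = {
    1: [(200, 30, 40), (190, 25, 35)],
    2: [(30, 40, 200), (25, 35, 190)],
    3: [(210, 20, 30)],
}
teams = assign_teams([torso_hue(px) for px in tracks.values()])
```

Locking the result in per track ID (rather than re-clustering every frame) is what keeps a player's team label stable through occlusions.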
Replays, close-ups and studio frames are detected and skipped — your tracking data stays clean.
Per-frame JSON: pixel coordinates, pitch coordinates, team IDs, track IDs. Pandas-ready.
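"Pandas-ready" means a flat record per detection per frame. A sketch of loading such an export, using an illustrative schema (the field names `frame`, `track_id`, `team`, `px`, `pitch` are assumptions, not Pitch2D's documented format):

```python
import json
import pandas as pd

# Hypothetical tracking export: one record per detection per frame.
raw = json.loads("""
[
  {"frame": 0, "track_id": 7, "team": 0, "px": [912, 440], "pitch": [52.1, 30.4]},
  {"frame": 0, "track_id": 9, "team": 1, "px": [530, 610], "pitch": [31.0, 44.2]},
  {"frame": 1, "track_id": 7, "team": 0, "px": [915, 442], "pitch": [52.3, 30.5]}
]
""")

df = pd.json_normalize(raw)
# Split the coordinate pairs into flat columns for analysis.
df[["pitch_x", "pitch_y"]] = pd.DataFrame(df["pitch"].tolist(), index=df.index)

# e.g. average position per track -- the basis of an average-position plot.
avg_pos = df.groupby("track_id")[["pitch_x", "pitch_y"]].mean()
```

From `avg_pos` it is one more step to heat maps or distance-covered metrics.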
Auto-detects CUDA. ~3-5× real-time on a modern NVIDIA card; CPU fallback for everything else.
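The auto-detect pattern is a one-liner worth knowing: try the GPU path, fall back to CPU if it is unavailable. A generic sketch of the idea (assuming a PyTorch-based pipeline; this is not Pitch2D's code):

```python
def pick_device():
    """Prefer CUDA when a GPU-enabled PyTorch is present; else fall back to CPU."""
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"

device = pick_device()
```

Wrapping the import in `try`/`except` means the same script runs unchanged on machines without PyTorch's CUDA build installed.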
Skip manual annotation. Get pitch-coordinate data into your notebook in minutes, not days.
Heat maps, average positions, off-ball runs — measured straight from any clip you can find.
No video leaves your facility. Run it on a single workstation behind your firewall.
One video file is all it takes.
Open the app →