Intuitive, high-performance YouTube data extraction library for Python.
StreamSnapper provides a clean, pythonic interface for extracting YouTube metadata and streams. It wraps yt-dlp with an intuitive object-oriented API, automatic quality enhancements, and robust type safety.
- Intuitive API: Access data via properties (`yt.streams.video.best`).
- Smart Categorization: Videos, audios, and subtitles are automatically sorted and filtered.
- Modern Tech: Built with `pydantic` v2 and `orjson` for high performance.
- AI Detection: Automatically detects AI-upscaled content.
- Type Safe: Fully typed for excellent IDE support.
```shell
uv add streamsnapper
```

Requirements: Python 3.10+
```python
from streamsnapper import YouTube

# Automatic extraction on initialization
yt = YouTube("https://www.youtube.com/watch?v=dQw4w9WgXcQ")

# Video metadata
print(f"Title: {yt.metadata.title}")
print(f"Views: {yt.metadata.view_count}")
print(f"Duration: {yt.metadata.duration_formatted}")

# Stream access (automatically sorted by quality)
print(f"Best Video: {yt.streams.video.best.url}")
print(f"Best Audio: {yt.streams.audio.best.url}")
```

StreamSnapper offers a readable, fluent interface for filtering:
```python
# Specific resolution
stream = yt.streams.video.filter(resolution="1080p").best

# Specific codec and frame rate
stream = yt.streams.video.filter(codec="vp9", fps=60).best

# High-quality audio
stream = yt.streams.audio.filter(min_bitrate=128).best
```

We automatically filter and deduplicate thumbnails to give you exactly what you need:
- `yt.metadata.thumbnails`: A curated list of high-quality thumbnails (maxres, sd, hq).
- `yt.metadata.all_thumbnails`: The complete raw list of all available thumbnails.
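The curated list can be pictured as a quality-tier filter plus URL deduplication. The following stand-alone sketch illustrates that idea; the dict shape, the `id` key, and the tier names are assumptions for illustration, not StreamSnapper internals:

```python
# Illustrative sketch of thumbnail curation: keep preferred quality tiers,
# drop duplicates by URL. Not StreamSnapper's actual implementation.
PREFERRED_TIERS = ("maxres", "sd", "hq")

def curate_thumbnails(all_thumbnails: list[dict]) -> list[dict]:
    seen_urls = set()
    curated = []
    for tier in PREFERRED_TIERS:
        for thumb in all_thumbnails:
            if thumb.get("id") == tier and thumb["url"] not in seen_urls:
                seen_urls.add(thumb["url"])
                curated.append(thumb)
    return curated

raw = [
    {"id": "hq", "url": "https://i.ytimg.com/vi/x/hqdefault.jpg"},
    {"id": "maxres", "url": "https://i.ytimg.com/vi/x/maxresdefault.jpg"},
    {"id": "maxres", "url": "https://i.ytimg.com/vi/x/maxresdefault.jpg"},  # duplicate, dropped
    {"id": "default", "url": "https://i.ytimg.com/vi/x/default.jpg"},       # low tier, dropped
]
curated = curate_thumbnails(raw)
print([t["id"] for t in curated])
```

Under this reading, `all_thumbnails` would hold the raw input list and `thumbnails` the curated output.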
Detects if a stream has been AI-upscaled (e.g., "1080p Premium" or similar enhancements):
```python
if stream.is_ai_upscaled:
    print("This stream is AI upscaled!")
```

All models support high-performance JSON serialization using `orjson`:
```python
json_data = yt.metadata.to_json()
```

The main entry point.
```python
yt = YouTube(
    url="https://...",
    cookies=None,   # Optional: CookieFile or CookieBrowser
    logging=False,  # Optional: enable verbose logging
)
```

Properties:
- `.metadata`: `VideoInformation` object (title, id, description, stats, etc.)
- `.streams`: `Streams` object containing:
  - `.video`: `VideoStreamCollection`
  - `.audio`: `AudioStreamCollection`
  - `.subtitle`: `SubtitleStreamCollection`
Represents a single video format. Key attributes:
- `url`: Direct download URL.
- `resolution`: String (e.g., "1080p").
- `codec`: String (e.g., "vp9").
- `bitrate`: Float (Mbps/Kbps).
- `is_hdr`: Boolean.
- `is_ai_upscaled`: Boolean.
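As a mental model, the fluent `.filter(...)` calls shown earlier can be read as simple attribute matching against these fields. The sketch below is hypothetical: StreamSnapper's real stream class is a `pydantic` v2 model, and its filter logic may differ.

```python
from dataclasses import dataclass

# Hypothetical mirror of the documented attributes, for illustration only.
@dataclass
class StreamSketch:
    url: str
    resolution: str   # e.g. "1080p"
    codec: str        # e.g. "vp9"
    bitrate: float
    is_hdr: bool = False
    is_ai_upscaled: bool = False

def filter_streams(streams: list[StreamSketch], **criteria) -> list[StreamSketch]:
    # The simplest reading of .filter(resolution=..., codec=...):
    # keep streams whose attributes equal every given criterion.
    return [s for s in streams if all(getattr(s, k) == v for k, v in criteria.items())]

catalog = [
    StreamSketch("u1", "1080p", "vp9", 2.5),
    StreamSketch("u2", "720p", "avc1", 1.2),
]
matches = filter_streams(catalog, resolution="1080p", codec="vp9")
print(matches[0].url)
```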
Access private or age-restricted content using cookies. Cookies are extracted a single time and automatically reused across the initial extraction and every subsequent download, so you never hit unnecessary re-authentication.
```python
from streamsnapper import YouTube, CookieBrowser, CookieFile

# Use cookies from the local Chrome browser
yt = YouTube(url, cookies=CookieBrowser.CHROME)

# Use a Netscape-formatted cookie file
yt = YouTube(url, cookies=CookieFile("cookies.txt"))
```

```python
# Download the best video-only stream
path = yt.streams.video.best.download(output_path="downloads/")

# Download the best audio-only stream
path = yt.streams.audio.best.download(output_path="downloads/")
```

Default filenames follow the convention:
- Video-only: `<Title> (video-only) [yt-<ID>].ext`
- Audio-only: `<Title> (audio-only) [yt-<ID>].ext`
YouTube separates high-quality video and audio into distinct streams. Use `download_with_audio` to merge them automatically with `ffmpeg`:
```python
video = yt.streams.video.best
audio = yt.streams.audio.best

# Merge and save (requires ffmpeg in PATH)
path = video.download_with_audio(audio=audio, output_path="downloads/")

# Result filename: <Title> [yt-<ID>].ext
```

Note: `ffmpeg` must be installed and available in your system PATH for merging to work.
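For context, merging separate video and audio files without re-encoding is typically a stream-copy invocation of `ffmpeg` like the one sketched below. This shows the general pattern, not StreamSnapper's exact command line:

```python
def build_merge_command(video_path: str, audio_path: str, output_path: str) -> list[str]:
    # Stream-copy both inputs into one container: fast and lossless,
    # since no codec is re-encoded. Run with subprocess.run(cmd, check=True).
    return [
        "ffmpeg",
        "-i", video_path,
        "-i", audio_path,
        "-c", "copy",   # copy video and audio codecs as-is
        output_path,
    ]

cmd = build_merge_command("video.webm", "audio.webm", "out.webm")
print(" ".join(cmd))
```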
MIT License - see LICENSE file.