Context
Stable Diffusion and SDXL generate images that get shared and distributed. Under EU AI Act Article 50 (applicable from August 2, 2026), AI-generated images must carry transparency metadata, and deepfake disclosure becomes mandatory.
Currently, generated images carry no provenance — no indication they're AI-generated, no model info, no generation parameters.
Possible approach
Embed provenance in image metadata (EXIF/PNG tEXt) during generation:
from PIL.PngImagePlugin import PngInfo

# After generating (`pipe` is a loaded diffusers pipeline, e.g. SDXL)
image = pipe("a sunset").images[0]

# Embed provenance as PNG tEXt chunks
pnginfo = PngInfo()
pnginfo.add_text("ai_generated", "true")
pnginfo.add_text("model", "stabilityai/stable-diffusion-xl-base-1.0")
pnginfo.add_text("generated_at", "2026-03-31T10:00:00Z")
image.save("output.png", pnginfo=pnginfo)
This uses only Pillow, which is already a dependency, so no additional packages are needed.
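A quick round-trip check of the approach above (a self-contained sketch: a blank PIL image stands in for a generated one, and the saved file is re-opened to read the tEXt chunks back via Pillow's `PngImageFile.text` mapping):

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for a generated image; any PIL Image behaves the same
image = Image.new("RGB", (64, 64))

# Embed provenance as PNG tEXt chunks
pnginfo = PngInfo()
pnginfo.add_text("ai_generated", "true")
pnginfo.add_text("model", "stabilityai/stable-diffusion-xl-base-1.0")
image.save("output.png", pnginfo=pnginfo)

# Re-open and read the metadata back to confirm it survived the save
with Image.open("output.png") as reopened:
    meta = dict(reopened.text)  # tEXt/zTXt/iTXt chunks as a dict

print(meta["ai_generated"])  # → true
```

Note that tEXt chunks survive a Pillow save/load round trip, but many platforms strip metadata on upload, which is a known limitation of any metadata-only provenance scheme.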
Why
- EU AI Act deepfake provisions target AI-generated images specifically
- Photos carry EXIF (camera, GPS). AI images carry nothing.
- Stability AI's open models are widely deployed — setting a provenance default would influence the ecosystem
Reference
- AKF embeds provenance into images via EXIF/XMP
- C2PA handles media provenance but requires PKI infrastructure
- A lightweight EXIF-based approach has broader adoption potential
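For JPEG output, the EXIF-based variant mentioned above can be sketched with Pillow's `Image.Exif` class alone. This writes provenance into standard IFD0 tags (Software and ImageDescription); the exact tags and the `ai_generated=true` string are illustrative choices, not a settled convention:

```python
from PIL import Image

# Stand-in for a generated image; any PIL Image behaves the same
image = Image.new("RGB", (64, 64))

# Standard IFD0 EXIF tags: 0x0131 = Software, 0x010E = ImageDescription
exif = Image.Exif()
exif[0x0131] = "stabilityai/stable-diffusion-xl-base-1.0"
exif[0x010E] = "ai_generated=true"
image.save("output.jpg", exif=exif)

# Re-open and read the tags back to verify they survived
with Image.open("output.jpg") as reopened:
    back = reopened.getexif()
    software, description = back[0x0131], back[0x010E]
```

Unlike C2PA, this carries no cryptographic binding, so the metadata is trivially editable; the trade-off is zero infrastructure and compatibility with every EXIF reader.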