Moderators provides normalized, consistent output regardless of the underlying model or framework.
Results are returned as a list of PredictionResult dataclass instances:
```python
[
    PredictionResult(
        source_path='',
        classifications={'porn': 0.9821},
        detections=[],
        raw_output={'label': 'porn', 'score': 0.9821}
    ),
    ...
]
```

The CLI outputs the same structure as JSON:
```json
[
  {
    "source_path": "",
    "classifications": { "porn": 0.9821 },
    "detections": [],
    "raw_output": { "label": "porn", "score": 0.9821 }
  }
]
```

Use `dataclasses.asdict()` to convert Python results to JSON-ready dictionaries:
```python
from dataclasses import asdict

from moderators import AutoModerator

moderator = AutoModerator.from_pretrained("viddexa/nsfw-detection-2-mini")
result = moderator("/path/to/image.jpg")
json_ready = [asdict(r) for r in result]
print(json_ready)
```

Each `PredictionResult` has the following fields:

- `source_path` (str): Path to the input file, or empty string for text/direct input
- `classifications` (dict): Normalized classification results as `{label: score}` pairs
- `detections` (list): Object detection results (empty for classification tasks)
- `raw_output` (dict): Original model output for reference
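Building on the `asdict()` pattern above, results can be written out as a JSON string with the standard `json` module. The `PredictionResult` dataclass below is a minimal local stand-in mirroring the documented fields, not an import from the library:

```python
import json
from dataclasses import asdict, dataclass, field

# Minimal stand-in mirroring the documented PredictionResult fields;
# the real class ships with the moderators library.
@dataclass
class PredictionResult:
    source_path: str = ""
    classifications: dict = field(default_factory=dict)
    detections: list = field(default_factory=list)
    raw_output: dict = field(default_factory=dict)

results = [
    PredictionResult(
        classifications={"porn": 0.9821},
        raw_output={"label": "porn", "score": 0.9821},
    )
]

# asdict() converts each dataclass (recursively) into plain dicts,
# which json.dumps can serialize directly.
payload = json.dumps([asdict(r) for r in results], indent=2)
print(payload)
```

Because every field is a plain string, dict, or list, no custom JSON encoder is needed.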
From the Hugging Face Hub:

```python
from moderators import AutoModerator

moderator = AutoModerator.from_pretrained("viddexa/nsfw-detection-2-mini")
```

From a local directory:

```python
moderator = AutoModerator.from_pretrained("/path/to/model")
```

With offline mode:

```python
moderator = AutoModerator.from_pretrained("model-id", local_files_only=True)
```

Image input:

```python
result = moderator("/path/to/image.jpg")
```

Text input:

```python
result = moderator("Text to classify")
```

Moderators automatically detects the task type from the model's `config.json` when possible, so you don't need to specify the task manually.
Supported tasks:
- Image classification (e.g., NSFW detection)
- Text classification (e.g., sentiment analysis, toxicity detection)
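Since classification scores are plain `{label: score}` pairs, post-processing such as thresholding needs no library support. A minimal sketch (the `flag_labels` helper and the 0.5 default threshold are illustrative, not part of the library):

```python
def flag_labels(classifications: dict[str, float], threshold: float = 0.5) -> list[str]:
    """Return labels whose score meets the threshold, highest score first."""
    flagged = [label for label, score in classifications.items() if score >= threshold]
    return sorted(flagged, key=lambda label: classifications[label], reverse=True)

# Example using the documented output shape:
scores = {"porn": 0.9821, "neutral": 0.0179}
print(flag_labels(scores))  # only 'porn' clears the default threshold
```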
- From the Hub: Pass a model ID like `viddexa/nsfw-detection-2-mini` or any compatible Transformers model
- From disk: Pass a local folder that contains a `config.json` next to your model weights
The system automatically infers the task and integration from the config when possible.