doc/code/targets/4_openai_video_target.ipynb (110 changes: 107 additions, 3 deletions)
@@ -7,11 +7,24 @@
"source": [
"# 4. OpenAI Video Target\n",
"\n",
"This example shows how to use the video target to create a video from a text prompt.\n",
"`OpenAIVideoTarget` supports three modes:\n",
"- **Text-to-video**: Generate a video from a text prompt.\n",
"- **Remix**: Create a variation of an existing video (using `video_id` from a prior generation).\n",
"- **Text+Image-to-video**: Use an image as the first frame of the generated video.\n",
"\n",
"Note that the video scorer requires `opencv`, which is not a default PyRIT dependency. You need to install it manually or using `pip install pyrit[opencv]`."
]
},
{
"cell_type": "markdown",
"id": "2177d3e7",
"metadata": {},
"source": [
"## Text-to-Video\n",
"\n",
"This example shows the simplest mode: generating video from text prompts, with scoring."
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -58,7 +71,7 @@
"source": [
"## Generating and scoring a video:\n",
"\n",
"Using the video target you can send prompts to generate a video. The video scorer can evaluate the video content itself. Note this section is simply scoring the **video** not the audio. "
"Using the video target you can send prompts to generate a video. The video scorer can evaluate the video content itself. Note this section is simply scoring the **video** not the audio."
]
},
{
@@ -661,11 +674,102 @@
")\n",
"\n",
"for result in results:\n",
" await ConsoleAttackResultPrinter().print_result_async(result=result, include_auxiliary_scores=True) # type: ignore"
" await ConsoleAttackResultPrinter().print_result_async(result=result, include_auxiliary_scores=True) # type: ignore\n",
"\n",
"# Capture video_id from the first result for use in the remix section below\n",
"video_id = results[0].last_response.prompt_metadata[\"video_id\"]\n",
"print(f\"Video ID for remix: {video_id}\")"
]
},
{
"cell_type": "markdown",
"id": "d53a3e8e",
"metadata": {},
"source": [
"## Remix (Video Variation)\n",
"\n",
"Remix creates a variation of an existing video. After any successful generation, the response\n",
"includes a `video_id` in `prompt_metadata`. Pass this back via `prompt_metadata={\"video_id\": \"<id>\"}` to remix."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "16465ce0",
"metadata": {},
"outputs": [],
"source": [
"from pyrit.models import Message, MessagePiece\n",
"\n",
"# Remix using the video_id captured from the text-to-video section above\n",
"remix_piece = MessagePiece(\n",
" role=\"user\",\n",
" original_value=\"Make it a watercolor painting style\",\n",
" prompt_metadata={\"video_id\": video_id},\n",
")\n",
"remix_result = await video_target.send_prompt_async(message=Message([remix_piece])) # type: ignore\n",
"print(f\"Remixed video: {remix_result[0].message_pieces[0].converted_value}\")"
]
},
{
"cell_type": "markdown",
"id": "3c632d6e",
"metadata": {},
"source": [
"## Text+Image-to-Video\n",
"\n",
"Use an image as the first frame of the generated video. The input image dimensions must match\n",
"the video resolution (e.g. 1280x720). Pass both a text piece and an `image_path` piece in the same message."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "66bd1d72",
"metadata": {},
"outputs": [],
"source": [
"import uuid\n",
"\n",
"# Create a simple test image matching the video resolution (1280x720)\n",
"from PIL import Image\n",
"\n",
"from pyrit.common.path import HOME_PATH\n",
"\n",
"sample_image = HOME_PATH / \"assets\" / \"pyrit_architecture.png\"\n",
"resized = Image.open(sample_image).resize((1280, 720)).convert(\"RGB\")\n",
"\n",
"import tempfile\n",
"\n",
"tmp = tempfile.NamedTemporaryFile(suffix=\".jpg\", delete=False)\n",
"resized.save(tmp, format=\"JPEG\")\n",
"tmp.close()\n",
"image_path = tmp.name\n",
"\n",
"# Send text + image to the video target\n",
"i2v_target = OpenAIVideoTarget()\n",
"conversation_id = str(uuid.uuid4())\n",
"\n",
"text_piece = MessagePiece(\n",
" role=\"user\",\n",
" original_value=\"Animate this image with gentle camera motion\",\n",
" conversation_id=conversation_id,\n",
")\n",
"image_piece = MessagePiece(\n",
" role=\"user\",\n",
" original_value=image_path,\n",
" converted_value_data_type=\"image_path\",\n",
" conversation_id=conversation_id,\n",
")\n",
"result = await i2v_target.send_prompt_async(message=Message([text_piece, image_piece])) # type: ignore\n",
"print(f\"Text+Image-to-video result: {result[0].message_pieces[0].converted_value}\")"
]
}
],
"metadata": {
"jupytext": {
"main_language": "python"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
doc/code/targets/4_openai_video_target.py (74 changes: 73 additions, 1 deletion)
@@ -11,10 +11,18 @@
# %% [markdown]
# # 4. OpenAI Video Target
#
# This example shows how to use the video target to create a video from a text prompt.
# `OpenAIVideoTarget` supports three modes:
# - **Text-to-video**: Generate a video from a text prompt.
# - **Remix**: Create a variation of an existing video (using `video_id` from a prior generation).
# - **Text+Image-to-video**: Use an image as the first frame of the generated video.
#
# Note that the video scorer requires `opencv`, which is not a default PyRIT dependency. You can install it manually or with `pip install pyrit[opencv]`.

# %% [markdown]
# ## Text-to-Video
#
# This example shows the simplest mode: generating video from text prompts, with scoring.

# %%
from pyrit.executor.attack import (
AttackExecutor,
@@ -123,3 +131,67 @@

for result in results:
await ConsoleAttackResultPrinter().print_result_async(result=result, include_auxiliary_scores=True) # type: ignore

# Capture video_id from the first result for use in the remix section below
video_id = results[0].last_response.prompt_metadata["video_id"]
print(f"Video ID for remix: {video_id}")

# %% [markdown]
# ## Remix (Video Variation)
#
# Remix creates a variation of an existing video. After any successful generation, the response
# includes a `video_id` in `prompt_metadata`. Pass this back via `prompt_metadata={"video_id": "<id>"}` to remix.

# %%
from pyrit.models import Message, MessagePiece

# Remix using the video_id captured from the text-to-video section above
remix_piece = MessagePiece(
role="user",
original_value="Make it a watercolor painting style",
prompt_metadata={"video_id": video_id},
)
remix_result = await video_target.send_prompt_async(message=Message([remix_piece])) # type: ignore
print(f"Remixed video: {remix_result[0].message_pieces[0].converted_value}")

# %% [markdown]
# ## Text+Image-to-Video
#
# Use an image as the first frame of the generated video. The input image dimensions must match
# the video resolution (e.g. 1280x720). Pass both a text piece and an `image_path` piece in the same message.

# %%
import tempfile
import uuid

from PIL import Image

from pyrit.common.path import HOME_PATH

# Create a simple test image matching the video resolution (1280x720)
sample_image = HOME_PATH / "assets" / "pyrit_architecture.png"
resized = Image.open(sample_image).resize((1280, 720)).convert("RGB")

tmp = tempfile.NamedTemporaryFile(suffix=".jpg", delete=False)
resized.save(tmp, format="JPEG")
tmp.close()
image_path = tmp.name

# Send text + image to the video target
i2v_target = OpenAIVideoTarget()
conversation_id = str(uuid.uuid4())

text_piece = MessagePiece(
role="user",
original_value="Animate this image with gentle camera motion",
conversation_id=conversation_id,
)
image_piece = MessagePiece(
role="user",
original_value=image_path,
converted_value_data_type="image_path",
conversation_id=conversation_id,
)
result = await i2v_target.send_prompt_async(message=Message([text_piece, image_piece])) # type: ignore
print(f"Text+Image-to-video result: {result[0].message_pieces[0].converted_value}")
pyrit/models/message.py (51 changes: 51 additions, 0 deletions)
@@ -51,6 +51,57 @@ def get_piece(self, n: int = 0) -> MessagePiece:

return self.message_pieces[n]

def get_pieces_by_type(
self,
*,
data_type: Optional[PromptDataType] = None,
original_value_data_type: Optional[PromptDataType] = None,
converted_value_data_type: Optional[PromptDataType] = None,
) -> list[MessagePiece]:
"""
Return all message pieces matching the given data type; if no filter is provided, all pieces are returned.

Args:
data_type: Alias for converted_value_data_type (for convenience).
original_value_data_type: The original_value_data_type to filter by.
converted_value_data_type: The converted_value_data_type to filter by.

Returns:
A list of matching MessagePiece objects (may be empty).
"""
effective_converted = converted_value_data_type or data_type
results = self.message_pieces
if effective_converted:
results = [p for p in results if p.converted_value_data_type == effective_converted]
if original_value_data_type:
results = [p for p in results if p.original_value_data_type == original_value_data_type]
return list(results)

def get_piece_by_type(
self,
*,
data_type: Optional[PromptDataType] = None,
original_value_data_type: Optional[PromptDataType] = None,
converted_value_data_type: Optional[PromptDataType] = None,
) -> Optional[MessagePiece]:
"""
Return the first message piece matching the given data type, or None.

Args:
data_type: Alias for converted_value_data_type (for convenience).
original_value_data_type: The original_value_data_type to filter by.
converted_value_data_type: The converted_value_data_type to filter by.

Returns:
The first matching MessagePiece, or None if no match is found.
"""
pieces = self.get_pieces_by_type(
data_type=data_type,
original_value_data_type=original_value_data_type,
converted_value_data_type=converted_value_data_type,
)
return pieces[0] if pieces else None

@property
def api_role(self) -> ChatMessageRole:
"""
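For reference, a minimal usage sketch of the two new helpers. The values are hypothetical, and it assumes plain-text pieces default to a `converted_value_data_type` of `"text"`:

```python
from pyrit.models import Message, MessagePiece

# Hypothetical message with one text piece and one image piece.
message = Message(
    [
        MessagePiece(role="user", original_value="Animate this image"),
        MessagePiece(
            role="user",
            original_value="sample.png",
            converted_value_data_type="image_path",
        ),
    ]
)

# data_type is a convenience alias for converted_value_data_type.
image_piece = message.get_piece_by_type(data_type="image_path")

# get_pieces_by_type returns a (possibly empty) list; with no match,
# get_piece_by_type returns None instead of raising.
text_pieces = message.get_pieces_by_type(converted_value_data_type="text")
```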