diff --git a/.DS_Store b/.DS_Store
new file mode 100644
index 0000000..eb9caab
Binary files /dev/null and b/.DS_Store differ
diff --git a/video-generation/.DS_Store b/video-generation/.DS_Store
new file mode 100644
index 0000000..ee08d8a
Binary files /dev/null and b/video-generation/.DS_Store differ
diff --git a/video-generation/README.MD b/video-generation/README.MD
new file mode 100644
index 0000000..54a4e3e
--- /dev/null
+++ b/video-generation/README.MD
@@ -0,0 +1,271 @@
+## 🧠 Qdrant-Powered AI Video Idea Generation
+
+ Qdrant + OpenAI: Making Your YouTube Channel Smarter
+
+## ✨ How Qdrant is Used
+This application intelligently generates video ideas and finds related content by combining:
+
+- Qdrant for vector search (semantic memory of past transcripts)
+
+- OpenAI for embedding generation and creative idea generation
+
+## 🚀 Step-by-Step Process
+### 1. Create Embeddings
+When a user uploads or processes a video transcript, we:
+
+- Use OpenAI's `text-embedding-ada-002` model to convert the text into a vector.
+
+- Store that vector + metadata (like `user_id`, `video_id`, etc.) inside Qdrant under the collection `video_transcripts`.
+
+```
+📌 Function: embed_and_store(user, text, metadata)
+```
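The README only names `embed_and_store`; as a rough, stdlib-only sketch of the data shape it stores (field names beyond `user_id`, `video_id`, and `transcript` are illustrative assumptions), a point in the `video_transcripts` collection pairs the 1536-dimensional vector with its metadata payload:

```python
import uuid

def build_point(vector, user_id, video_id, transcript):
    # Shape of a point as it would be upserted into `video_transcripts`:
    # an id, the embedding vector, and a payload used later for
    # per-user filtering and transcript retrieval.
    return {
        "id": str(uuid.uuid4()),
        "vector": vector,
        "payload": {
            "user_id": user_id,
            "video_id": video_id,
            "transcript": transcript,
        },
    }

point = build_point([0.0] * 1536, "user-1", "vid-1", "transcript text")
print(len(point["vector"]))  # 1536, matching the collection's vector size
```

The real implementation hands this structure to the Qdrant client rather than a plain dict, but the fields are the same.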
+### 2. Semantic Search
+When a user searches for a new video idea:
+
+- We embed the search query using OpenAI again.
+
+- Then search Qdrant for the most similar past transcripts, filtered to that user's own content.
+```
+📌 Function: search_similar_transcripts(query_text, user, top_k=10)
+```
+Flow:
+```
+[Transcript Text]
+    ⬇ (embedding)
+[Vector Embedding + Payload (including transcript)]
+    ➡ (store in Qdrant)
+
+Later...
+
+[Query Text]
+    ⬇ (embedding)
+[Query Embedding]
+    ➡ (search Qdrant for similar vectors)
+    ➡ (retrieve payloads including transcripts)
+    ➡ (use transcripts + query to ask OpenAI to propose a new idea)
+```
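Under the hood, "most similar" means the collection's COSINE distance between the query vector and each stored vector. A toy, stdlib-only illustration of that ranking, using made-up 3-dimensional vectors in place of real 1536-dimensional embeddings:

```python
import math

def cosine_similarity(a, b):
    # The COSINE metric Qdrant uses: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up "embeddings" (real ones come from text-embedding-ada-002)
query = [0.9, 0.1, 0.0]
transcripts = {
    "gardening tips": [0.8, 0.2, 0.1],
    "crypto news": [0.0, 0.1, 0.9],
}

# Rank stored vectors by similarity to the query, best first
ranked = sorted(transcripts,
                key=lambda k: cosine_similarity(query, transcripts[k]),
                reverse=True)
print(ranked[0])  # the semantically closest transcript
```

Qdrant does this ranking server-side over the whole collection (plus the `user_id` filter), so the application never touches raw vectors.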
+## 📂 Project Structure
+
+```markdown
+root/
+├── backend/          # Django REST Framework (DRF) project
+│   ├── Dockerfile
+│   ├── docker-compose.yml
+│   └── (other backend code)
+│
+├── frontend/         # Vue.js frontend project
+│   └── (Vue.js code)
+└── README.md         # (this file)
+```
+### 3. Creative Generation
+With the top similar transcripts found:
+
+- We prompt OpenAI to brainstorm a fresh video idea, blending the user's past content with the current query trend.
+
+- OpenAI returns a ready-to-use idea + title.
+```
+✍️ Function: generate_video_idea(user, query_text, similar_payloads)
+```
+## 🗂️ Qdrant Collection
+We ensure the collection exists before use:
+```
+Collection name: video_transcripts
+Vector size: 1536
+Distance metric: COSINE
+```
+
+```
+🛠️ Function: ensure_qdrant_collection()
+```
+ 🎯 Thanks to Qdrant, users don't just get random content ideas. They get highly relevant, fresh, and personalized ideas based on their real content history, boosting channel growth.
+
+## ⚙️ Prerequisites
+Before starting, ensure you have installed:
+
+- Docker + Docker Compose
+
+- Node.js
+
+- npm
+
+## 🛠️ Backend Setup (`backend/`)
+## Step 1: Environment Variables
+Create a `.env` file inside the `backend/` folder with your environment variables.
+
+Example `.env`:
+```
+DEBUG=True
+SECRET_KEY=your_secret_key_here
+ALLOWED_HOSTS=*
+DATABASE_URL=postgres://postgres:postgres@db:5432/yourdb
+```
+## Step 2: Run Backend Services
+From inside the `backend/` folder:
+```bash
+docker-compose up --build
+```
+This will:
+
+- Build the Django app image and start the Postgres database container
+
+- Start the backend server with Gunicorn at http://localhost:8000
+
+- Automatically create a volume for persistent Postgres data
+
+## Step 3: Collect Static Files (if needed)
+If you need static files immediately:
+```bash
+docker-compose exec web python manage.py collectstatic --noinput
+```
+## 🎨 Frontend Setup (`frontend/`)
+## Step 1: Install Dependencies
+From inside the `frontend/` folder:
+```bash
+npm install
+```
+
+## Step 2: Run the Frontend
+```bash
+npm run dev
+```
+This will start the Vue.js frontend, typically at http://localhost:5173/ (Vite default).
+
+## 🔌 Connect Frontend to Backend
+Update the API base URL in your Vue.js app to point to the backend:
+
+For example, in `frontend/src/config.js` (or wherever you configure the API base URL):
+```js
+export const API_BASE_URL = "http://localhost:8000";
+```
+Or use `.env` variables if your Vue project is set up for them.
+
+## 🚀 Production Notes
+- In production, properly configure CORS and HTTPS, harden your Gunicorn settings, and serve the frontend with Nginx or another web server.
+
+- Backend static files should be collected into the `/staticfiles` volume and served separately.
+
+- Set `DEBUG=False` and manage your `.env` secrets securely.
+## 📋 Useful Commands
+| Action | Command |
+|:------|:---------|
+| Run Backend | `docker-compose up --build` |
+| Stop Backend | `docker-compose down` |
+| Enter Backend Container | `docker-compose exec web bash` |
+| Run Migrations | `docker-compose exec web python manage.py migrate` |
+| Create Superuser | `docker-compose exec web python manage.py createsuperuser` |
+
+## 🎯 Quick URLs
+- Backend API: http://localhost:8000
+
+- Frontend App: http://localhost:5173
+
+## 📺 YouTube API Integration
+
+ Upload Videos Directly to YouTube – Without Leaving the App!
+
+## ✨ How YouTube Upload Works
+This application uses the YouTube Data API v3 to:
+
+- Authenticate users securely via OAuth2
+
+- Upload generated videos directly to their YouTube channels
+
+- Set titles, descriptions, tags, privacy settings, and more
+
+## 🔑 Authentication Flow
+When a user connects their YouTube account:
+
+- We save their OAuth2 tokens: access token, refresh token, client ID, client secret, and scopes.
+
+- We use these credentials to build a YouTube client dynamically whenever uploads are needed.
+
+📌 Code Example: Building an authenticated YouTube client
+```python
+from google.oauth2.credentials import Credentials
+from googleapiclient.discovery import build
+
+creds = Credentials(
+    token=token_obj.access_token,
+    refresh_token=token_obj.refresh_token,
+    token_uri=token_obj.token_uri,
+    client_id=token_obj.client_id,
+    client_secret=token_obj.client_secret,
+    scopes=token_obj.scopes.split(","),
+)
+
+youtube = build("youtube", "v3", credentials=creds)
+```
+## ⬆️ Uploading a Video
+Once authenticated:
+- We prepare a metadata body (title, description, tags, etc.)
+
+- We upload the video file using YouTube's `videos.insert` endpoint.
+
+- After a successful upload, the video URL is returned and emailed to the user.
+
+📌 Code Example: Uploading to YouTube
+```python
+from googleapiclient.http import MediaFileUpload
+
+body = {
+    "snippet": {
+        "title": "Your Video Title",
+        "description": "Video description here",
+        "tags": ["tag1", "tag2"],
+        "categoryId": "28"  # e.g., Science & Technology
+    },
+    "status": {
+        "privacyStatus": "public",
+        "madeForKids": False
+    }
+}
+media = MediaFileUpload(video_path, mimetype="video/mp4")
+request = youtube.videos().insert(part="snippet,status", body=body, media_body=media)
+response = request.execute()
+youtube_url = f"https://youtube.com/watch?v={response['id']}"
+```
+## ⚙️ Setting up YouTube API Credentials
+Before users can connect their YouTube accounts, you must create your own YouTube OAuth2 credentials.
+
+📌 Steps to create credentials:
+1. Go to the Google Cloud Console.
+
+2. Create a new project or select an existing one.
+
+3. Enable the YouTube Data API v3 for your project.
+
+4. Create OAuth2 Client ID credentials:
+
+- Application type: Web Application
+
+- Add your app's authorized redirect URIs.
+
+5. Save the Client ID and Client Secret into your backend.
+## 📥 Required OAuth2 Scopes
+```
+https://www.googleapis.com/auth/youtube.upload
+https://www.googleapis.com/auth/youtube.readonly
+
+```
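In this codebase the granted scopes travel as one comma-separated string on the stored token (see the `token_obj.scopes.split(",")` call in the client-building example above); a minimal round-trip sketch:

```python
# The two scopes above, stored as a single comma-separated string,
# mirroring how the token model keeps them in this project.
SCOPES = [
    "https://www.googleapis.com/auth/youtube.upload",
    "https://www.googleapis.com/auth/youtube.readonly",
]

stored = ",".join(SCOPES)     # what gets persisted with the OAuth token
restored = stored.split(",")  # what is passed back into Credentials(scopes=...)
print(restored == SCOPES)
```

This works because Google scope URLs never contain commas, so the join/split round-trip is lossless.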
+ ⚠️ Important: You must set up the OAuth consent screen, configure branding, and request the necessary scopes for production approval if you plan to allow external users (outside your Google Cloud organization).
+
+## 🎯 Summary
+- Users authenticate once.
+
+- Uploads happen securely in the background.
+
+- Uploaded videos are public, tagged, and ready to grow your channel!
+## 🎥 Flow Overview
+ User logs in → Grants YouTube access → App uploads videos to their YouTube channel → App returns published video URL.
+
+ ✨ Thank you for using this application!
+Built with Qdrant. ✨
\ No newline at end of file
diff --git a/video-generation/backend/.gitignore b/video-generation/backend/.gitignore
new file mode 100644
index 0000000..b3916e2
--- /dev/null
+++ b/video-generation/backend/.gitignore
@@ -0,0 +1,176 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+cover/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+cookies.txt
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+.pybuilder/
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+# For a library or package, you might want to ignore these files since the code is
+# intended to run in multiple environments; otherwise, check them in:
+# .python-version
+
+# pipenv
+# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+# However, in case of collaboration, if having platform-specific dependencies or dependencies
+# having no cross-platform support, pipenv may install dependencies that don't work, or not
+# install all needed dependencies.
+#Pipfile.lock
+
+# UV
+# Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
+# This is especially recommended for binary packages to ensure reproducibility, and is more
+# commonly ignored for libraries.
+#uv.lock
+
+# poetry
+# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
+# This is especially recommended for binary packages to ensure reproducibility, and is more
+# commonly ignored for libraries.
+# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
+#poetry.lock
+
+# pdm
+# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
+#pdm.lock
+# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
+# in version control.
+# https://pdm.fming.dev/latest/usage/project/#working-with-version-control
+.pdm.toml
+.pdm-python
+.pdm-build/
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
+
+# pytype static type analyzer
+.pytype/
+
+# Cython debug symbols
+cython_debug/
+
+# PyCharm
+# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
+# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
+# and can be added to the global gitignore or merged into this file. For a more nuclear
+# option (not recommended) you can uncomment the following to ignore the entire idea folder.
+#.idea/
+
+# Ruff stuff:
+.ruff_cache/
+
+# PyPI configuration file
+.pypirc
+
+shorts/
\ No newline at end of file
diff --git a/video-generation/backend/Dockerfile b/video-generation/backend/Dockerfile
new file mode 100644
index 0000000..b04fc2a
--- /dev/null
+++ b/video-generation/backend/Dockerfile
@@ -0,0 +1,26 @@
+# Dockerfile
+FROM python:3.10-slim
+
+ENV PYTHONDONTWRITEBYTECODE=1
+ENV PYTHONUNBUFFERED=1
+
+WORKDIR /app
+
+# System dependencies
+RUN apt-get update && apt-get install -y --no-install-recommends ffmpeg gcc libffi-dev \
+    && rm -rf /var/lib/apt/lists/*
+
+# Python dependencies
+COPY requirements.txt .
+RUN pip install --upgrade pip && pip install -r requirements.txt
+
+# Copy app and script
+COPY . .
+COPY django.sh /app/django.sh
+RUN chmod +x /app/django.sh
+
+# Collect static files
+RUN python manage.py collectstatic --noinput
+
+EXPOSE 8000
+
+CMD ["/app/django.sh"]
diff --git a/video-generation/backend/api/__init__.py b/video-generation/backend/api/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/video-generation/backend/api/admin.py b/video-generation/backend/api/admin.py
new file mode 100644
index 0000000..e69de29
diff --git a/video-generation/backend/api/apps.py b/video-generation/backend/api/apps.py
new file mode 100644
index 0000000..66656fd
--- /dev/null
+++ b/video-generation/backend/api/apps.py
@@ -0,0 +1,6 @@
+from django.apps import AppConfig
+
+
+class ApiConfig(AppConfig):
+    default_auto_field = 'django.db.models.BigAutoField'
+    name = 'api'
diff --git a/video-generation/backend/api/management/__init__.py b/video-generation/backend/api/management/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/video-generation/backend/api/management/commands/__init__.py b/video-generation/backend/api/management/commands/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/video-generation/backend/api/management/commands/create_qdrant_collection.py b/video-generation/backend/api/management/commands/create_qdrant_collection.py
new file mode 100644
index 0000000..8866b47
--- /dev/null
+++ b/video-generation/backend/api/management/commands/create_qdrant_collection.py
@@ -0,0 +1,9 @@
+from django.core.management.base import BaseCommand
+from api.youtube_utils import ensure_qdrant_collection
+
+class Command(BaseCommand):
+    help = "Create Qdrant collection if it doesn't exist"
+
+    def handle(self, *args, **kwargs):
+        ensure_qdrant_collection()
+        self.stdout.write(self.style.SUCCESS("✅ Qdrant collection checked/created."))
diff --git a/video-generation/backend/api/management/commands/run_youtube_process.py b/video-generation/backend/api/management/commands/run_youtube_process.py
new file mode 100644
index 0000000..0bdcfbc
--- /dev/null
+++ b/video-generation/backend/api/management/commands/run_youtube_process.py
@@ -0,0 +1,15 @@
+from django.core.management.base import BaseCommand
+from users.models import User
+from api.tasks import process_youtube_channel
+
+class Command(BaseCommand):
+    help = 'Manually trigger process_youtube_channel task for a given user ID'
+
+    def add_arguments(self, parser):
+        parser.add_argument('user_id', type=str)
+
+    def handle(self, *args, **options):
+        user = User.objects.get(id=options['user_id'])
+        channel_id = user.youtube_token.channel_id
+        process_youtube_channel.delay(channel_id, str(user.id))
+        self.stdout.write(self.style.SUCCESS(f"✅ Task triggered for user {user.email}"))
diff --git a/video-generation/backend/api/management/commands/schedule_tasks.py b/video-generation/backend/api/management/commands/schedule_tasks.py
new file mode 100644
index 0000000..2493ad3
--- /dev/null
+++ b/video-generation/backend/api/management/commands/schedule_tasks.py
@@ -0,0 +1,35 @@
+from django.core.management.base import BaseCommand
+from django_celery_beat.models import PeriodicTask, CrontabSchedule
+from users.models import User
+import json
+import uuid
+
+class Command(BaseCommand):
+    help = 'Schedule periodic create_and_upload_video and vector DB update tasks for all users'
+
+    def handle(self, *args, **kwargs):
+        schedule, _ = CrontabSchedule.objects.get_or_create(hour='*/8', minute='0')
+        task_id = str(uuid.uuid4())
+
+        for user in User.objects.all():
+            # Schedule video creation + upload
+            PeriodicTask.objects.update_or_create(
+                name=f"Create Upload Video {user.id}",
+                defaults={
+                    "task": "api.tasks.generate_and_upload_youtube_short_task",
+                    "crontab": schedule,
+                    "args": json.dumps([str(user.id), task_id])
+                }
+            )
+
+            # Schedule vector DB update
+            PeriodicTask.objects.update_or_create(
+                name=f"Update VectorDB {user.id}",
+                defaults={
+                    "task": "api.tasks.update_vectordb_from_youtube",
+                    "crontab": schedule,
+                    "args": json.dumps([str(user.id)])
+                }
+            )
+
+        self.stdout.write(self.style.SUCCESS("✅ Scheduled all periodic tasks for all users!"))
diff --git a/video-generation/backend/api/migrations/__init__.py b/video-generation/backend/api/migrations/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/video-generation/backend/api/models.py b/video-generation/backend/api/models.py
new file mode 100644
index 0000000..71a8362
--- /dev/null
+++ b/video-generation/backend/api/models.py
@@ -0,0 +1,3 @@
+from django.db import models
+
+# Create your models here.
diff --git a/video-generation/backend/api/qdrant_utils.py b/video-generation/backend/api/qdrant_utils.py
new file mode 100644
index 0000000..d98e576
--- /dev/null
+++ b/video-generation/backend/api/qdrant_utils.py
@@ -0,0 +1,63 @@
+from qdrant_client import QdrantClient
+import openai
+import os
+from qdrant_client.http import models as rest
+from openai import OpenAI
+
+qdrant_client = QdrantClient(url=os.getenv("QDRANT_HOST"))
+
+def search_similar_transcripts(query_text, user, top_k=10):
+    # 1️⃣ Generate the embedding for the query
+    openai_client = OpenAI(api_key=user.openai_api_key_decrypted)
+    emb_response = openai_client.embeddings.create(
+        input=[query_text],
+        model="text-embedding-ada-002"
+    )
+    query_embedding = emb_response.data[0].embedding
+
+    # 2️⃣ Search Qdrant using that vector
+    hits = qdrant_client.search(
+        collection_name="video_transcripts",
+        query_vector=query_embedding,
+        limit=top_k,
+        query_filter=rest.Filter(
+            must=[
+                rest.FieldCondition(
+                    key="user_id",
+                    match=rest.MatchValue(value=str(user.id))
+                )
+            ]
+        )
+    )
+
+    # 3️⃣ Only keep hits whose payload actually has a non-empty "transcript"
+    return [hit.payload for hit in hits if hit.payload.get("transcript")]
+
+
+def generate_video_idea(user, query_text, similar_payloads):
+    existing_transcripts = "\n---\n".join(
+        p["transcript"][:1000] for p in similar_payloads
+    )
+    prompt = f"""
+You are a world-class YouTube strategist.
+
+Your job is to propose a brand-new video idea that is:
+- Similar to existing transcript topics.
+- Visually engaging.
+- Potentially viral.
+- Aligned with the query: "{query_text}"
+
+Here are related past transcripts:
+{existing_transcripts}
+
+Return your idea in 1 short paragraph, with a unique title suggestion in quotes.
+"""
+
+    response = openai.chat.completions.create(
+        model=user.openai_model,
+        messages=[{"role": "user", "content": prompt}]
+    )
+
+    return response.choices[0].message.content
diff --git a/video-generation/backend/api/redis_client.py b/video-generation/backend/api/redis_client.py
new file mode 100644
index 0000000..f6dfd00
--- /dev/null
+++ b/video-generation/backend/api/redis_client.py
@@ -0,0 +1,5 @@
+import os
+import redis
+
+redis_url = os.environ.get("REDIS_URL")
+r = redis.Redis.from_url(redis_url, decode_responses=True)
\ No newline at end of file
diff --git a/video-generation/backend/api/tasks.py b/video-generation/backend/api/tasks.py
new file mode 100644
index 0000000..05ec7df
--- /dev/null
+++ b/video-generation/backend/api/tasks.py
@@ -0,0 +1,587 @@
+from celery import shared_task
+from api.youtube_utils import get_top_video_ids, download_audio
+from api.youtube_utils import embed_and_store, ensure_qdrant_collection
+from api.transcription import transcribe_audio
+import logging
+from api.qdrant_utils import qdrant_client
+from users.models import User
+from users.models import Video
+from qdrant_client.http import models as rest
+import os, subprocess
+import openai
+from django.core.mail import EmailMultiAlternatives
+from googleapiclient.http import MediaFileUpload
+from googleapiclient.discovery import build
+from .qdrant_utils import search_similar_transcripts, generate_video_idea
+from api.redis_client import r
+from elevenlabs import ElevenLabs
+from replicate import Client as ReplicateClient
+from google.oauth2.credentials import Credentials
+import json
+
+logger = logging.getLogger(__name__)
+
+def send_video_upload_email(user_email, youtube_url):
+    subject = "🎬 New Video Uploaded to Your Channel!"
+    from_email = "Taledy Team "
+    to = [user_email]
+
+    text_content = f"New video uploaded: {youtube_url}"
+
+    html_content = f"""
+    <html>
+      <body style="font-family: Arial, sans-serif;">
+        <h2>🎉 Your New Video is Live!</h2>
+        <p>Hey there, your new video has just been uploaded and is ready for viewers.</p>
+        <p><a href="{youtube_url}">🎥 Watch on YouTube</a></p>
+        <p>If you didn't expect this email, you can safely ignore it.</p>
+      </body>
+    </html>
+    """
+
+    msg = EmailMultiAlternatives(subject, text_content, from_email, to)
+    msg.attach_alternative(html_content, "text/html")
+    msg.send()
+
+@shared_task
+def process_youtube_channel(channel_id, user_id):
+    user = User.objects.get(id=user_id)
+    ensure_qdrant_collection()
+    video_ids = get_top_video_ids(channel_id)
+
+    for vid in video_ids:
+        try:
+            path = download_audio(vid)
+            text = transcribe_audio(user, path)
+            embed_and_store(user, text, {
+                "video_id": vid,
+                "user_id": user_id,
+                "channel_id": channel_id,
+                "transcript": text,
+            })
+            logger.info(f"Processed and stored video {vid}")
+        except Exception as e:
+            logger.error(f"Failed to process {vid}: {e}")
+    logger.info(f"Processed and stored all videos for user {user_id}")
+
+
+@shared_task
+def generate_and_upload_youtube_short_task(user_id, task_id):
+    log_path = os.path.join("shorts", f"make_short_{task_id}.log")
+    logger = logging.getLogger(f"make_short_{task_id}")
+    handler = logging.FileHandler(log_path)
+    handler.setFormatter(logging.Formatter('%(asctime)s - %(message)s'))
+    logger.setLevel(logging.INFO)
+    logger.addHandler(handler)
+    try:
+        logger.info("Received user request for short video generation")
+        logger.info(f"Task ID: {task_id}")
+        user = User.objects.get(id=user_id)
+        print(f"[👤] User: {user.id}, [📝] Task ID: {task_id} landed in celery")
+        os.makedirs("shorts", exist_ok=True)
+        r.hset(f"task:{task_id}", mapping={
+            "status": "starting",
+            "type": "transcription"
+        })
+
+        openai.api_key = user.openai_api_key_decrypted
+        replicate_key = user.replicate_api_key_decrypted
+        elevenlabs_key = user.elevenlabs_api_key_decrypted
+        voice_id = user.elevenlabs_voice_id
+        model = getattr(user, 'openai_model', None) or "gpt-4o"
+
+        seed_prompt = f"new YouTube video idea for {user.audience}"
+        similar_transcripts = search_similar_transcripts(seed_prompt, user=user)
+        logger.info("Obtained similar transcripts")
+
+        # 🧠 Generate a unique idea based on what's already covered
+        idea = generate_video_idea(user, seed_prompt, similar_transcripts)
+        logger.info(f"Generated video idea: {idea}")
+
+        # 1️⃣ Research with OpenAI
+        research_prompt = f"""
+        Research the following topic and provide a summary of key points, insights, and examples:
+        {idea}
+        Summarize the key points and insights in a concise format.
+        """
+        research_response = openai.chat.completions.create(
+            model="gpt-4o-search-preview",
+            web_search_options={
+                "search_context_size": "medium",
+                "user_location": {
+                    "type": "approximate",
+                    "approximate": {"country": "US"}
+                }
+            },
+            messages=[{"role": "user", "content": research_prompt}]
+        )
+        research_output = research_response.choices[0].message.content.strip()
+        logger.info("Obtained research output")
+
+        # 2️⃣ Script Generation
+        script_prompt = f"""
+        You are a viral short-form content writer for TikTok and YouTube Shorts.
+
+        Your job is to create a fast-paced, high-retention, 1-minute video script about the following topic:
+        {idea}
+
+        Based on the research below:
+        {research_output}
+
+        Target audience: {user.audience}
+        Tone: energetic, concise, and curiosity-driven.
+        Format: Use simple language, analogies if needed, and build toward an "aha!" moment.
+        Make sure the script includes:
+        1. A strong hook in the first 3 seconds
+        2. Fast transitions between key points (no fluff)
+        3. A payoff (surprising insight, use case, or why it matters)
+        4. A call to action (e.g., Subscribe for more)
+        Keep it under 300 words.
+        Output the script only, no scene directions, no emojis or headings, just the script.
+        """
+        script_response = openai.chat.completions.create(
+            model=model,
+            max_tokens=600,
+            messages=[{"role": "user", "content": script_prompt}]
+        )
+        logger.info("Obtained script")
+        transcript = script_response.choices[0].message.content.strip()
+
+        # 3️⃣ Generate Voiceover
+        audio_path = os.path.abspath(f"shorts/{task_id}_voiceover.mp3")
+        logger.info("Generating voiceover")
+        client = ElevenLabs(api_key=elevenlabs_key, timeout=1000)
+        temp_audio = client.text_to_speech.convert(
+            text=transcript,
+            voice_settings={"speed": 1.2},
+            voice_id=voice_id,
+            model_id="eleven_multilingual_v2",
+            output_format="mp3_44100_128",
+        )
+        logger.info("Obtained voiceover")
+        with open(audio_path, "wb") as f:
+            for chunk_data in temp_audio:
+                f.write(chunk_data)
+
+        # 4️⃣ Generate Images with Replicate (async)
+        import asyncio, aiohttp
+        import math
+        # Duration
+        logger.info("Obtaining audio duration")
+        result = subprocess.run([
+            "ffprobe", "-v", "error", "-show_entries",
+            "format=duration", "-of",
+            "default=noprint_wrappers=1:nokey=1", audio_path
+        ], capture_output=True, text=True)
+        try:
+            duration = float(result.stdout.strip())
+        except ValueError:
+            duration = 30.0
+        if duration < 5:
+            duration = 30.0
+        logger.info(f"Audio duration: {duration:.2f}s")
+
+        num_images = int(duration // 5) + 1
+        logger.info(f"Using {num_images} image(s)")
+        words = transcript.split()
+        words_per_chunk = math.ceil(len(words) / num_images)
+        logger.info(f"Approx {words_per_chunk} words per chunk.")
+        transcript_chunks = [
+            ' '.join(words[i:i + words_per_chunk])
+            for i in range(0, len(words), words_per_chunk)
+        ]
+        prompts = [f"Generate a descriptive image prompt for this transcript chunk:\n\n{chunk}" for chunk in transcript_chunks]
+        logger.info(f"Starting image generation for {len(prompts)} prompts...")
+
+        async def generate_images(prompts, replicate_key, task_id):
+            replicate_api = ReplicateClient(api_token=replicate_key)
+            image_paths = []
+            for idx, image_prompt in enumerate(prompts):
+                logger.info(f"Processing image {idx + 1}/{len(prompts)}")
+                logger.info(f"Prompt: {image_prompt}")
+                image_path = os.path.abspath(f"shorts/{task_id}_{idx}.jpg")
+                for attempt in range(3):
+                    try:
+                        if not os.path.exists(image_path):
+                            logger.info(f"Attempt {attempt + 1}: Generating image {idx + 1} using FLUX")
+                            prediction = replicate_api.predictions.create(
+                                model=user.flux_model,
+                                input={
+                                    "prompt": image_prompt,
+                                    "prompt_upsampling": True,
+                                    "aspect_ratio": "9:16",
+                                    "width": 1440,
+                                    "height": 1440,
+                                    "output_format": "jpg"
+                                }
+                            )
+                            logger.info(f"Waiting for prediction {prediction.id}...")
+                            # Poll with its own loop variable so the retry counter
+                            # `attempt` is not shadowed.
+                            for _poll in range(600):
+                                prediction = replicate_api.predictions.get(prediction.id)
+                                if prediction.status == "succeeded" and prediction.output:
+                                    image_url = prediction.output[0] if isinstance(prediction.output, list) else prediction.output
+                                    async with aiohttp.ClientSession() as session:
+                                        async with session.get(image_url) as resp:
+                                            if resp.status == 200:
+                                                with open(image_path, "wb") as f:
+                                                    f.write(await resp.read())
+                                            else:
+                                                raise Exception(f"Failed to download image: {resp.status}")
+                                    break
+                                elif prediction.status in ["failed", "canceled"]:
+                                    logger.error(f"Prediction failed: {prediction}")
+                                    raise RuntimeError(f"Prediction failed: {prediction}")
+                                await asyncio.sleep(1)
+                        else:
+                            logger.info(f"Using cached image for chunk {idx + 1}")
+                            break
+                    except Exception as e:
+                        logger.error(f"Image generation failed at chunk {idx + 1}, attempt {attempt + 1}: {e}")
+                        await asyncio.sleep(2)
+                image_paths.append(image_path)
+            return image_paths
+
+        image_paths = asyncio.run(generate_images(prompts, replicate_key, task_id))
+        logger.info(f"Generated {len(image_paths)} images")
+
+        import re
+        # --- SRT and chunking helpers ---
+        def srt_time_to_seconds(t):
+            h, m, s_ms = t.split(':')
+            s, ms = s_ms.split(',')
+            return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000.0
+
+        def seconds_to_srt_time(sec):
+            hrs = int(sec // 3600)
+            sec -= hrs * 3600
+            mins = int(sec // 60)
+            sec -= mins * 60
+            secs = int(sec)
+            ms = int(round((sec - secs) * 1000))
+            return f"{hrs:02}:{mins:02}:{secs:02},{ms:03d}"
+
+        def parse_srt_blocks(raw_srt):
+            blocks = []
+            raw_blocks = re.split(r'\n\s*\n', raw_srt.strip(), flags=re.MULTILINE)
+            index_pattern = re.compile(r'^(\d+)$')
+            time_pattern = re.compile(r'^(\d{2}:\d{2}:\d{2},\d{3}) --> (\d{2}:\d{2}:\d{2},\d{3})$')
+            for rb in raw_blocks:
+                lines = rb.strip().split('\n')
+                if len(lines) < 2:
+                    continue
+                idx_match = index_pattern.match(lines[0].strip())
+                time_match = time_pattern.match(lines[1].strip())
+                if not (idx_match and time_match):
+                    continue
+                block_index = int(idx_match.group(1))
+                start_sec = srt_time_to_seconds(time_match.group(1))
+                end_sec = srt_time_to_seconds(time_match.group(2))
+                text_lines = lines[2:]
+                blocks.append({
+                    'index': block_index,
+                    'start': start_sec,
+                    'end': end_sec,
+                    'lines': text_lines
+                })
+            return blocks
+
+        def build_srt_block_str(block_index, start_sec, end_sec, text_lines):
+            start_str = seconds_to_srt_time(start_sec)
+            end_str = seconds_to_srt_time(end_sec)
+            text_part = "\n".join(text_lines)
+            return f"{block_index}\n{start_str} --> {end_str}\n{text_part}\n"
+
+        def chunk_into_few_words(line, chunk_size=2):
+            words = line.split()
+            lines = []
+            for i in range(0, len(words), chunk_size):
+                lines.append(" ".join(words[i:i + chunk_size]))
+            return lines
+
+        def transform_srt_for_few_words_timed(raw_srt, chunk_size=2):
+            blocks = parse_srt_blocks(raw_srt)
+            new_blocks = []
+            new_index = 1
+            for b in blocks:
+                start = b['start']
+                end = b['end']
+                duration = end - start if end > start else 1
+                original_text = " ".join(b['lines'])
+                chunked = chunk_into_few_words(original_text, chunk_size)
+                if len(chunked) <= 1:
+                    new_blocks.append({
+                        'index': new_index,
+                        'start': start,
+                        'end': end,
+                        'lines': [original_text],
+                    })
+                    new_index += 1
+                else:
+                    block_duration = duration / len(chunked)
+                    cur_start = start
+                    for chunk_text in chunked:
+                        cur_end = cur_start + block_duration
+                        new_blocks.append({
+                            'index': new_index,
+                            'start': cur_start,
+                            'end': cur_end,
+                            'lines': [chunk_text],
+                        })
+                        new_index += 1
+                        cur_start = cur_end
+            new_srt = []
+            for nb in new_blocks:
+                new_srt.append(build_srt_block_str(nb['index'], nb['start'], nb['end'], nb['lines']))
+            return "\n".join(new_srt).strip() + "\n"
+
+ # --- Animate each image with zoompan, then concatenate ---
+ logger.info("\nStarting video clip generation...")
+ video_clips = []
+ for i, path in enumerate(image_paths):
+ out_path = os.path.abspath(f"shorts/{task_id}_clip_{i}.mp4")
+ logger.info(f"Processing clip {i+1}/{len(image_paths)}")
+ if not os.path.exists(path):
+ print(f"Image file does not exist: {path}")
+ continue
+ cmd = [
+ "ffmpeg", "-y", "-loop", "1", "-i", path,
+ "-vf", "zoompan=z='min(zoom+0.0015,1.15)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=125:s=576x1024,format=yuv420p",
+ "-frames:v", "125", "-r", "25",
+ "-c:v", "libx264", "-pix_fmt", "yuv420p",
+ out_path
+ ]
+ logger.info(f"Running ffmpeg command with zoompan effect...")
+ subprocess.run(cmd, check=True)
+ logger.info(f"Generated clip {i+1}")
+ video_clips.append(out_path)
+ logger.info(f"\nCreated {len(video_clips)} video clips")
+
+ # Concatenate all video clips
+ concat_list = os.path.abspath(f"shorts/{task_id}_concat.txt")
+ with open(concat_list, "w") as f:
+ for clip in video_clips:
+ f.write(f"file '{os.path.abspath(clip)}'\n")
+ concat_path = os.path.abspath(f"shorts/{task_id}_final_video.mp4")
+ subprocess.run([
+ "ffmpeg", "-y", "-f", "concat", "-safe", "0",
+ "-i", concat_list,
+ "-c:v", "libx264", "-pix_fmt", "yuv420p",
+ concat_path
+ ], check=True)
+ if not os.path.exists(concat_path):
+ raise Exception("Concatenated video not created")
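The concat step relies on ffmpeg's concat demuxer, which reads a plain-text listing of `file '<path>'` lines. A small sketch of how that listing is produced (the clip paths here are illustrative):

```python
import os

def build_concat_list(clip_paths):
    """Render the concat-demuxer input: one `file '<absolute path>'` line per clip."""
    return "".join(f"file '{os.path.abspath(p)}'\n" for p in clip_paths)

listing = build_concat_list(["shorts/clip_0.mp4", "shorts/clip_1.mp4"])
```

The `-safe 0` flag in the ffmpeg invocation above is what permits absolute paths in this listing.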
+
+ # Merge audio with concatenated video
+ logger.info("\nMerging audio with video...")
+ final_audio_path = os.path.abspath(f"shorts/{task_id}_with_audio.mp4")
+ subprocess.run([
+ "ffmpeg", "-y",
+ "-i", concat_path, "-i", audio_path,
+ "-c:v", "libx264", "-pix_fmt", "yuv420p",
+ "-c:a", "aac",
+ final_audio_path
+ ], check=True)
+ if not os.path.exists(final_audio_path):
+ logger.error("Audio merge failed")
+ raise Exception("Audio merge failed")
+
+ # Generate SRT subtitles with OpenAI Whisper
+ logger.info("\nGenerating SRT subtitles...")
+ srt_path = f"shorts/{task_id}.srt"
+ with open(audio_path, "rb") as audio_file:
+ transcript_res = openai.audio.transcriptions.create(
+ model="whisper-1",
+ file=audio_file,
+ response_format="srt"
+ )
+ few_words_srt = transform_srt_for_few_words_timed(transcript_res, chunk_size=2)
+ logger.info("Generated SRT subtitles")
+ with open(srt_path, "w", encoding="utf-8") as srt_out:
+ srt_out.write(few_words_srt)
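`build_srt_block_str` (defined earlier in this module, outside this hunk) has to serialize the float start/end times into SRT timestamps. A plausible sketch of that conversion, shown here for reference only:

```python
def srt_timestamp(seconds):
    """Format float seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

stamp = srt_timestamp(3661.5)
```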
+
+ # Burn captions onto video
+ logger.info("\nBurning captions onto video...")
+ final_captioned_path = os.path.abspath(f"shorts/{task_id}_animated_video.mp4")
+ srt_abs = os.path.abspath(srt_path)
+ style_str = (
+ "Fontname=Arial,"
+ "Bold=1,"
+ "Fontsize=15,"
+ "BorderStyle=1,"
+ "Outline=2,"
+ "Shadow=0,"
+ "PrimaryColour=&H00FFFFFF,"
+ "OutlineColour=&H00000000,"
+ "Alignment=6,"
+ "MarginV=120,"
+ "MarginL=60,"
+ "MarginR=60,"
+ "WrapStyle=2"
+ )
+ subtitles_filter = f"subtitles={srt_abs}:force_style='{style_str}'"
+ full_filter = f"{subtitles_filter},format=yuv420p"
+ subprocess.run([
+ "ffmpeg", "-y",
+ "-i", final_audio_path,
+ "-vf", full_filter,
+ "-c:v", "libx264",
+ "-pix_fmt", "yuv420p",
+ "-c:a", "copy",
+ final_captioned_path
+ ], check=True)
+ logger.info("Burned captions onto video")
+ video_path = final_captioned_path
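The `force_style` value is just a comma-joined list of ASS style overrides. A sketch of composing the subtitles filter string (the style keys mirror those used above; the path is illustrative):

```python
def subtitles_filter(srt_path, **style):
    """Build an ffmpeg subtitles filter with ASS force_style overrides."""
    style_str = ",".join(f"{key}={value}" for key, value in style.items())
    return f"subtitles={srt_path}:force_style='{style_str}'"

filt = subtitles_filter("captions.srt", Fontname="Arial", Bold=1, Fontsize=15)
```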
+
+ # 6οΈβ£ Generate YouTube Metadata
+ logger.info("\nGenerating YouTube metadata...")
+ title_prompt = f"Generate a viral YouTube Shorts title based on this script:\n\nScript:\n{transcript}\nRespond only with the title text."
+ title_response = openai.chat.completions.create(
+ model=model,
+ messages=[{"role": "user", "content": title_prompt}]
+ ).choices[0].message.content.strip()
+ title = title_response if title_response else "AI Video"
+ logger.info(f"Generated title: {title}")
+
+ description_prompt = f"Generate a compelling YouTube Shorts description for this script:\n\nScript:\n{transcript}\nKeep it concise and engaging. Respond only with the description."
+ description_response = openai.chat.completions.create(
+ model=model,
+ messages=[{"role": "user", "content": description_prompt}]
+ ).choices[0].message.content.strip()
+ description = description_response if description_response else ""
+
+ tags_prompt = f"Suggest 10 trending and relevant tags (comma-separated) for a YouTube Shorts video based on this script:\n\nScript:\n{transcript}\nRespond as: tag1, tag2, tag3, ..., tag10. Respond with the tags only"
+ tags_response = openai.chat.completions.create(
+ model=model,
+ messages=[{"role": "user", "content": tags_prompt}]
+ ).choices[0].message.content.strip()
+ tags = [tag.strip() for tag in tags_response.split(",") if tag.strip()]
+ logger.info(f"Generated tags: {tags}")
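The tag parsing is deliberately forgiving: it trims whitespace and drops empty entries, so stray or trailing commas in the model's reply are harmless. In isolation:

```python
def parse_tags(raw):
    """Split a comma-separated model response into clean, non-empty tags."""
    return [tag.strip() for tag in raw.split(",") if tag.strip()]

tags = parse_tags("ai, shorts ,  automation,, python ")
```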
+
+ # 7οΈβ£ Upload to YouTube
+ logger.info("\nUploading to YouTube...")
+ creds = Credentials(
+ token=user.youtube_token.access_token,
+ refresh_token=user.youtube_token.refresh_token,
+ token_uri=user.youtube_token.token_uri,
+ client_id=user.youtube_token.client_id,
+ client_secret=user.youtube_token.client_secret,
+ scopes=user.youtube_token.scopes.split(",")
+ )
+ logger.info("YouTube credentials loaded")
+ youtube = build("youtube", "v3", credentials=creds)
+ body = {
+ "snippet": {
+ "title": title.replace('"', ''),
+ "description": description,
+ "tags": tags,
+ "categoryId": "28"
+ },
+ "status": {
+ "privacyStatus": "public",
+ "madeForKids": False
+ }
+ }
+ logger.info("YouTube body created")
+ media = MediaFileUpload(video_path, mimetype="video/mp4")
+ logger.info("YouTube media created")
+ request = youtube.videos().insert(part="snippet,status", body=body, media_body=media)
+ logger.info("YouTube upload request created")
+ response = request.execute()
+ youtube_url = f"https://youtube.com/watch?v={response['id']}"
+ logger.info(f"Uploaded to YouTube: {youtube_url}")
+ # save to db
+ logger.info("Saving to database...")
+ Video.objects.create(
+ user=user,
+ video_url=youtube_url,
+ title=title,
+ description=description
+ )
+ # send email
+ logger.info("Sending email...")
+ send_video_upload_email(user.email, youtube_url)
+ logger.info("Email sent")
+ result_json = {
+ "task_id": task_id,
+ "youtube_url": youtube_url,
+ "title": title,
+ "description": description,
+ "tags": tags,
+ "video_path": video_path,
+ "audio_path": audio_path,
+ "image_paths": image_paths,
+ "transcript": transcript,
+ "research_output": research_output,
+ }
+ logger.info("Task completed, saving to Redis")
+ r.hset(f"task:{task_id}", mapping={
+ "status": "completed",
+ "result": json.dumps(result_json)
+ })
+ logger.info(f"Task {task_id} completed")
+ logger.info("Removing logger handlers")
+ logger.removeHandler(handler)
+ handler.close()
+ return result_json
+
+ except Exception as e:
+ logger.error(f"Failed to process: {e}")
+ r.hset(f"task:{task_id}", mapping={
+ "status": "failed",
+ "type": "generate_and_upload_youtube_short",
+ "error": str(e)
+ })
+ return {"error": str(e)}
+
+@shared_task
+def update_vectordb_from_youtube(user_id):
+ user = User.objects.get(id=user_id)
+
+ ensure_qdrant_collection()
+
+ channel_id = user.youtube_token.channel_id # Make sure you're storing this!
+ top_video_ids = get_top_video_ids(channel_id)
+
+ # Fetch existing video IDs from Qdrant
+ scroll_result = qdrant_client.scroll(
+ collection_name="video_transcripts",
+ scroll_filter=rest.Filter(
+ must=[
+ rest.FieldCondition(
+ key="user_id",
+ match=rest.MatchValue(value=str(user.id))
+ )
+ ]
+ ),
+ with_payload=True,
+ limit=1000
+ )
+
+ stored_video_ids = set(point.payload.get("video_id") for point in scroll_result[0])
+ new_video_ids = [vid for vid in top_video_ids if vid not in stored_video_ids]
+
+ logger.info(f"Found {len(new_video_ids)} new videos for user {user.email}")
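The incremental sync boils down to a set difference that preserves the view-count ordering of `top_video_ids`. In isolation:

```python
def pick_new_videos(top_video_ids, stored_video_ids):
    """Return the ranked video IDs that are not yet stored, preserving order."""
    stored = set(stored_video_ids)
    return [vid for vid in top_video_ids if vid not in stored]

new_ids = pick_new_videos(["a", "b", "c", "d"], {"b", "d"})
```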
+
+ for vid in new_video_ids:
+ try:
+ path = download_audio(vid)
+ text = transcribe_audio(user, path)
+ embed_and_store(user, text, {
+ "user_id": str(user.id),
+ "video_id": vid,
+ "channel_id": channel_id,
+ "transcript": text,
+ })
+            logger.info(f"Processed and stored video {vid} for user {user.email}")
+        except Exception as e:
+            logger.error(f"Failed to process video {vid} for user {user.email}: {e}")
diff --git a/video-generation/backend/api/tests.py b/video-generation/backend/api/tests.py
new file mode 100644
index 0000000..7ce503c
--- /dev/null
+++ b/video-generation/backend/api/tests.py
@@ -0,0 +1,3 @@
+from django.test import TestCase
+
+# Create your tests here.
diff --git a/video-generation/backend/api/transcription.py b/video-generation/backend/api/transcription.py
new file mode 100644
index 0000000..f69c483
--- /dev/null
+++ b/video-generation/backend/api/transcription.py
@@ -0,0 +1,27 @@
+import openai
+import os
+import logging
+logger = logging.getLogger(__name__)
+
+def transcribe_audio(user, file_path):
+ """
+ Transcribe an audio file using OpenAI Whisper API.
+
+ Parameters:
+ - user: The user object, optionally used for custom OpenAI API keys.
+ - file_path: Path to the audio file (.mp3, .wav, etc.)
+
+ Returns:
+ - The transcribed text.
+ """
+ # Optionally support per-user OpenAI key
+ openai.api_key = user.openai_api_key_decrypted
+
+ with open(file_path, "rb") as audio_file:
+ logger.info(f"Transcribing {file_path} for user {user.id}")
+ response = openai.audio.transcriptions.create(model="whisper-1", file=audio_file)
+ transcript = response.text # Access the text attribute
+ logger.info("Transcription succeeded, %d characters", len(transcript))
+    return transcript
diff --git a/video-generation/backend/api/urls.py b/video-generation/backend/api/urls.py
new file mode 100644
index 0000000..9ddfc28
--- /dev/null
+++ b/video-generation/backend/api/urls.py
@@ -0,0 +1,9 @@
+from django.urls import path
+from .views import TaskStatusView, TestTaskView, GenerateAndUploadShortView
+
+urlpatterns = [
+ path("test-task/", TestTaskView.as_view()),
+    path("task-status/<str:task_id>/", TaskStatusView.as_view()),
+ path("generate-and-upload-short/", GenerateAndUploadShortView.as_view()),
+]
diff --git a/video-generation/backend/api/views.py b/video-generation/backend/api/views.py
new file mode 100644
index 0000000..6d8e898
--- /dev/null
+++ b/video-generation/backend/api/views.py
@@ -0,0 +1,71 @@
+from rest_framework.views import APIView
+from rest_framework.response import Response
+from celery.result import AsyncResult
+from rest_framework.permissions import IsAuthenticated
+from rest_framework import status
+import uuid
+import os
+from .tasks import generate_and_upload_youtube_short_task, test_celery_task
+import logging
+from api.redis_client import r
+
+class UserAPIKeysView(APIView):
+ permission_classes = [IsAuthenticated]
+
+ def get(self, request):
+ user = request.user
+ return Response({
+ "openai_api_key": user.openai_api_key_decrypted,
+ "replicate_api_key": user.replicate_api_key_decrypted,
+ "elevenlabs_api_key": user.elevenlabs_api_key_decrypted,
+ })
+
+class TestTaskView(APIView):
+ def post(self, request):
+ task = test_celery_task.delay(2, 3)
+ return Response({"task_id": task.id})
+
+class TaskStatusView(APIView):
+ def get(self, request, task_id):
+ result = AsyncResult(task_id)
+ return Response({
+ "state": result.state,
+ "result": str(result.result) if result.ready() else None,
+ }, status=status.HTTP_200_OK)
+
+class GenerateAndUploadShortView(APIView):
+ permission_classes = [IsAuthenticated]
+
+ def post(self, request):
+ user = request.user
+ try:
+ result = generate_and_upload_youtube_short(user.id)
+ return Response(result, status=status.HTTP_201_CREATED)
+ except Exception as e:
+ return Response({"error": str(e)}, status=status.HTTP_500_INTERNAL_SERVER_ERROR)
+
+def generate_and_upload_youtube_short(user_id):
+ task_id = str(uuid.uuid4())
+ log_path = os.path.join("shorts", f"make_short_{task_id}.log")
+ os.makedirs("shorts", exist_ok=True)
+ logger = logging.getLogger(f"make_short_{task_id}")
+ handler = logging.FileHandler(log_path)
+ handler.setFormatter(logging.Formatter('%(asctime)s - %(message)s'))
+ logger.setLevel(logging.INFO)
+ logger.addHandler(handler)
+
+ logger.info("Received user request for short video generation")
+ logger.info(f"Task ID: {task_id}")
+ r.hset(f"task:{task_id}", mapping={
+ "status": "queued",
+ "type": "generate_and_upload_youtube_short"
+ })
+
+ task = generate_and_upload_youtube_short_task.delay(user_id, task_id)
+ logger.info(f"user_id = {user_id}, type = {type(user_id)}")
+ logger.info(f"task_id = {task_id}, type = {type(task_id)}")
+
+ logger.info(f"Dispatched Celery task {task.id}")
+ logger.removeHandler(handler)
+ handler.close()
+ return {"status": "queued", "task_id": task.id}
diff --git a/video-generation/backend/api/youtube_utils.py b/video-generation/backend/api/youtube_utils.py
new file mode 100644
index 0000000..74e1437
--- /dev/null
+++ b/video-generation/backend/api/youtube_utils.py
@@ -0,0 +1,138 @@
+from googleapiclient.discovery import build
+import os
+import yt_dlp
+from qdrant_client import QdrantClient
+from qdrant_client.http.models import VectorParams, Distance, PointStruct
+import uuid
+from openai import OpenAI
+from google.oauth2.credentials import Credentials
+import logging
+from django.conf import settings
+
+logger = logging.getLogger(__name__)
+# Set up basic logging configuration
+logging.basicConfig(
+ level=logging.INFO, # or logging.DEBUG for more verbose output
+ format='[%(asctime)s] %(levelname)s in %(module)s: %(message)s',
+)
+
+def get_authenticated_channel_id(token_obj):
+
+ creds = Credentials(
+ token=token_obj.access_token,
+ refresh_token=token_obj.refresh_token,
+ token_uri=token_obj.token_uri,
+ client_id=token_obj.client_id,
+ client_secret=token_obj.client_secret,
+ scopes=token_obj.scopes.split(","),
+ )
+
+ youtube = build("youtube", "v3", credentials=creds)
+
+ response = youtube.channels().list(
+ part="id",
+ mine=True
+ ).execute()
+
+ return response["items"][0]["id"]
+
+
+def get_top_video_ids(channel_id, max_results=50):
+ youtube = build('youtube', 'v3', developerKey=os.getenv("YOUTUBE_API_KEY"))
+ res = youtube.search().list(
+ part="id", channelId=channel_id, order="viewCount", maxResults=max_results
+ ).execute()
+
+ return [item["id"]["videoId"] for item in res["items"] if item["id"]["kind"] == "youtube#video"]
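`search.list` can return channels and playlists alongside videos, which is why the `kind` check matters. A sketch of the extraction against a stubbed API response (the stub's shape matches the Data API's; the IDs are invented):

```python
def extract_video_ids(search_response):
    """Keep only youtube#video results and pull out their video IDs."""
    return [
        item["id"]["videoId"]
        for item in search_response["items"]
        if item["id"]["kind"] == "youtube#video"
    ]

stub = {"items": [
    {"id": {"kind": "youtube#video", "videoId": "abc123"}},
    {"id": {"kind": "youtube#playlist", "playlistId": "pl1"}},
    {"id": {"kind": "youtube#video", "videoId": "def456"}},
]}
video_ids = extract_video_ids(stub)
```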
+
+
+def download_audio(video_id):
+ url = f"https://www.youtube.com/watch?v={video_id}"
+ output_dir = os.path.abspath("shorts")
+ os.makedirs(output_dir, exist_ok=True)
+
+ # We set output as a template, yt-dlp will append correct extension
+ output_template = os.path.join(output_dir, f"{video_id}.%(ext)s")
+ expected_output = os.path.join(output_dir, f"{video_id}.mp3")
+
+    logger.info(f"Downloading audio to: {expected_output}")
+
+ cookiefile_path = os.path.join(settings.BASE_DIR, 'cookies.txt')
+
+ ydl_opts = {
+ 'cookiefile': cookiefile_path,
+ 'format': 'bestaudio/best',
+ 'outtmpl': output_template,
+ 'quiet': True,
+ 'postprocessors': [{
+ 'key': 'FFmpegExtractAudio',
+ 'preferredcodec': 'mp3',
+ 'preferredquality': '0', # Highest quality
+ }],
+ }
+
+ with yt_dlp.YoutubeDL(ydl_opts) as ydl:
+ ydl.download([url])
+
+    if not os.path.exists(expected_output):
+        raise FileNotFoundError(f"Audio file not found after download: {expected_output}")
+
+    logger.info(f"Audio downloaded: {expected_output}")
+    return expected_output
+
+
+
+qdrant = QdrantClient(url=os.getenv("QDRANT_HOST"), prefer_grpc=False)
+
+def ensure_qdrant_collection():
+ if not qdrant.collection_exists("video_transcripts"):
+ qdrant.create_collection(
+ collection_name="video_transcripts",
+ vectors_config=VectorParams(
+ size=1536,
+ distance=Distance.COSINE
+ )
+ )
+
+
+def embed_and_store(user, text, metadata):
+    logger.info(f"Starting embed_and_store for user {user.id} with metadata: {metadata}")
+
+    try:
+        client = OpenAI(api_key=user.openai_api_key_decrypted)
+        logger.info("Initialized OpenAI client.")
+    except Exception:
+        logger.exception("Failed to initialize OpenAI client.")
+        raise
+
+    try:
+        response = client.embeddings.create(
+            input=[text],
+            model="text-embedding-ada-002"
+        )
+        embedding = response.data[0].embedding
+        logger.info("Embedding successfully created.")
+    except Exception:
+        logger.exception("Failed to generate embedding.")
+        raise
+
+    try:
+        point_id = str(uuid.uuid4())
+        logger.info(f"Generated UUID: {point_id}")
+
+        point = PointStruct(id=point_id, vector=embedding, payload=metadata)
+        logger.info("PointStruct created.")
+
+        qdrant.upsert("video_transcripts", [point])
+        logger.info(f"Upserted into Qdrant with point ID {point_id}")
+
+        return point_id
+
+    except Exception:
+        logger.exception("Failed to upsert into Qdrant.")
+        raise
diff --git a/video-generation/backend/app/__init__.py b/video-generation/backend/app/__init__.py
new file mode 100644
index 0000000..cd04264
--- /dev/null
+++ b/video-generation/backend/app/__init__.py
@@ -0,0 +1,3 @@
+from .celery import app as celery_app
+
+__all__ = ['celery_app']
diff --git a/video-generation/backend/app/asgi.py b/video-generation/backend/app/asgi.py
new file mode 100644
index 0000000..b6bc9f6
--- /dev/null
+++ b/video-generation/backend/app/asgi.py
@@ -0,0 +1,16 @@
+"""
+ASGI config for app project.
+
+It exposes the ASGI callable as a module-level variable named ``application``.
+
+For more information on this file, see
+https://docs.djangoproject.com/en/5.2/howto/deployment/asgi/
+"""
+
+import os
+
+from django.core.asgi import get_asgi_application
+
+os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'app.settings')
+
+application = get_asgi_application()
diff --git a/video-generation/backend/app/celery.py b/video-generation/backend/app/celery.py
new file mode 100644
index 0000000..745e316
--- /dev/null
+++ b/video-generation/backend/app/celery.py
@@ -0,0 +1,16 @@
+import os
+from celery import Celery
+
+os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'app.settings')
+app = Celery('app')
+app.config_from_object('django.conf:settings', namespace='CELERY')
+app.autodiscover_tasks()
+
+
+@app.task(bind=True)
+def debug_task(self):
+ print(f'Request: {self.request!r}')
\ No newline at end of file
diff --git a/video-generation/backend/app/middleware.py b/video-generation/backend/app/middleware.py
new file mode 100644
index 0000000..c770ed0
--- /dev/null
+++ b/video-generation/backend/app/middleware.py
@@ -0,0 +1,31 @@
+import os
+from django.http import JsonResponse
+from dotenv import load_dotenv
+
+load_dotenv()
+
+class APIKeyMiddleware:
+ def __init__(self, get_response):
+ self.get_response = get_response
+ self.api_key = os.getenv("X-API-KEY")
+ self.exempt_paths = [
+ "/api/users/register/",
+ "/api/users/login/",
+ "/api/users/",
+ "/admin/",
+ "/admin/login/",
+ "/admin/logout/",
+ "/favicon.ico",
+ ]
+
+ def __call__(self, request):
+ # Skip API key check for exempt paths
+ if any(request.path.startswith(path) for path in self.exempt_paths):
+ return self.get_response(request)
+
+ # Check API key
+ key = request.headers.get("X-API-KEY")
+ if not key or key != self.api_key:
+ return JsonResponse({"detail": "Unauthorized: Invalid or missing API Key."}, status=401)
+
+ return self.get_response(request)
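The exempt check is a plain prefix match, so `/api/users/register/` and anything nested under an exempt path skip the key validation. In isolation:

```python
EXEMPT_PATHS = ["/api/users/register/", "/admin/"]

def is_exempt(path, exempt_paths=EXEMPT_PATHS):
    """True when the request path starts with any exempt prefix."""
    return any(path.startswith(prefix) for prefix in exempt_paths)

results = [is_exempt("/admin/login/"), is_exempt("/api/task-status/abc/")]
```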
diff --git a/video-generation/backend/app/settings.py b/video-generation/backend/app/settings.py
new file mode 100644
index 0000000..25314c9
--- /dev/null
+++ b/video-generation/backend/app/settings.py
@@ -0,0 +1,193 @@
+"""
+Django settings for app project.
+
+Generated by 'django-admin startproject' using Django 5.2.
+
+For more information on this file, see
+https://docs.djangoproject.com/en/5.2/topics/settings/
+
+For the full list of settings and their values, see
+https://docs.djangoproject.com/en/5.2/ref/settings/
+"""
+
+from dotenv import load_dotenv
+load_dotenv()
+import os
+
+from datetime import timedelta
+
+from pathlib import Path
+
+# Build paths inside the project like this: BASE_DIR / 'subdir'.
+BASE_DIR = Path(__file__).resolve().parent.parent
+
+EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
+EMAIL_HOST = ""  # your SMTP provider's host, e.g. smtp.example.com
+EMAIL_PORT = ""
+EMAIL_HOST_USER = ""
+EMAIL_HOST_PASSWORD = ""
+DEFAULT_FROM_EMAIL = ""
+
+
+# Quick-start development settings - unsuitable for production
+# See https://docs.djangoproject.com/en/5.2/howto/deployment/checklist/
+
+# SECURITY WARNING: keep the secret key used in production secret!
+SECRET_KEY = 'django-insecure-c5&_2kc4emvz3%-)#+fuuu++2_%a)7f&zdy725-fb44v31_-k1'
+
+# SECURITY WARNING: don't run with debug turned on in production!
+DEBUG = True
+
+SIMPLE_JWT = {
+ "ACCESS_TOKEN_LIFETIME": timedelta(days=7), # default is 5 minutes
+ "REFRESH_TOKEN_LIFETIME": timedelta(days=30), # default is 1 day
+ "ROTATE_REFRESH_TOKENS": False,
+ "BLACKLIST_AFTER_ROTATION": True,
+ "AUTH_HEADER_TYPES": ("Bearer",),
+}
+
+
+ALLOWED_HOSTS = ["*"]
+
+CSRF_TRUSTED_ORIGINS = ["http://localhost:5173"]
+CORS_ALLOW_CREDENTIALS = True
+
+CORS_ALLOW_HEADERS = [
+ 'accept',
+ 'accept-encoding',
+ 'authorization',
+ 'content-type',
+ 'dnt',
+ 'origin',
+ 'user-agent',
+ 'x-csrftoken',
+ 'x-requested-with',
+ 'x-api-key', # Custom header for API key
+]
+
+STATIC_URL = "/static/"
+STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")
+
+CELERY_BROKER_URL = os.getenv("CELERY_BROKER_URL")
+CELERY_RESULT_BACKEND = os.getenv("CELERY_RESULT_BACKEND")
+
+CORS_ALLOWED_ORIGINS = [
+ "http://localhost:5173",
+
+]
+
+EMAIL_USE_TLS = False
+EMAIL_USE_SSL = False
+
+CELERY_ACCEPT_CONTENT = ['json']
+CELERY_TASK_SERIALIZER = 'json'
+CELERY_RESULT_SERIALIZER = 'json'
+
+# Application definition
+
+INSTALLED_APPS = [
+ 'corsheaders',
+ 'django.contrib.admin',
+ 'django.contrib.auth',
+ 'django.contrib.contenttypes',
+ 'django.contrib.sessions',
+ 'django.contrib.messages',
+ 'django.contrib.staticfiles',
+ 'rest_framework',
+ 'rest_framework_simplejwt',
+ 'django_celery_beat',
+ 'users',
+ 'api',
+
+]
+AUTH_USER_MODEL = 'users.User'
+REST_FRAMEWORK = {
+ 'DEFAULT_AUTHENTICATION_CLASSES': (
+ 'rest_framework_simplejwt.authentication.JWTAuthentication',
+ ),
+}
+MIDDLEWARE = [
+ 'corsheaders.middleware.CorsMiddleware',
+ 'django.middleware.common.CommonMiddleware',
+ 'django.middleware.security.SecurityMiddleware',
+ 'django.contrib.sessions.middleware.SessionMiddleware',
+ 'django.middleware.csrf.CsrfViewMiddleware',
+ 'django.contrib.auth.middleware.AuthenticationMiddleware',
+ 'django.contrib.messages.middleware.MessageMiddleware',
+ 'django.middleware.clickjacking.XFrameOptionsMiddleware',
+ 'app.middleware.APIKeyMiddleware', # Custom middleware for API key validation
+]
+
+ROOT_URLCONF = 'app.urls'
+
+TEMPLATES = [
+ {
+ 'BACKEND': 'django.template.backends.django.DjangoTemplates',
+ 'DIRS': [],
+ 'APP_DIRS': True,
+ 'OPTIONS': {
+ 'context_processors': [
+ 'django.template.context_processors.request',
+ 'django.contrib.auth.context_processors.auth',
+ 'django.contrib.messages.context_processors.messages',
+ ],
+ },
+ },
+]
+
+WSGI_APPLICATION = 'app.wsgi.application'
+
+
+# Database
+# https://docs.djangoproject.com/en/5.2/ref/settings/#databases
+
+DATABASES = {
+ 'default': {
+ 'ENGINE': 'django.db.backends.postgresql',
+ 'NAME': os.getenv('POSTGRES_DB'),
+ 'USER': os.getenv('POSTGRES_USER'),
+ 'PASSWORD': os.getenv('POSTGRES_PASSWORD'),
+ 'HOST': 'db', # service name in docker-compose
+ 'PORT': '5432',
+ }
+}
+
+
+
+# Password validation
+# https://docs.djangoproject.com/en/5.2/ref/settings/#auth-password-validators
+
+AUTH_PASSWORD_VALIDATORS = [
+ {
+ 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
+ },
+ {
+ 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
+ },
+ {
+ 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
+ },
+ {
+ 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
+ },
+]
+
+
+# Internationalization
+# https://docs.djangoproject.com/en/5.2/topics/i18n/
+
+LANGUAGE_CODE = 'en-us'
+
+TIME_ZONE = 'UTC'
+
+USE_I18N = True
+
+USE_TZ = True
+
+
+# Default primary key field type
+# https://docs.djangoproject.com/en/5.2/ref/settings/#default-auto-field
+
+DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
+
diff --git a/video-generation/backend/app/urls.py b/video-generation/backend/app/urls.py
new file mode 100644
index 0000000..922900b
--- /dev/null
+++ b/video-generation/backend/app/urls.py
@@ -0,0 +1,25 @@
+"""
+URL configuration for app project.
+
+The `urlpatterns` list routes URLs to views. For more information please see:
+ https://docs.djangoproject.com/en/5.2/topics/http/urls/
+Examples:
+Function views
+ 1. Add an import: from my_app import views
+ 2. Add a URL to urlpatterns: path('', views.home, name='home')
+Class-based views
+ 1. Add an import: from other_app.views import Home
+ 2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
+Including another URLconf
+ 1. Import the include() function: from django.urls import include, path
+ 2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
+"""
+from django.urls import path, include
+from django.contrib import admin
+
+urlpatterns = [
+ path('api/users/', include('users.urls')),
+ path("api/", include("api.urls")),
+ path('admin/', admin.site.urls),
+]
+
diff --git a/video-generation/backend/app/wsgi.py b/video-generation/backend/app/wsgi.py
new file mode 100644
index 0000000..121dd78
--- /dev/null
+++ b/video-generation/backend/app/wsgi.py
@@ -0,0 +1,16 @@
+"""
+WSGI config for app project.
+
+It exposes the WSGI callable as a module-level variable named ``application``.
+
+For more information on this file, see
+https://docs.djangoproject.com/en/5.2/howto/deployment/wsgi/
+"""
+
+import os
+
+from django.core.wsgi import get_wsgi_application
+
+os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'app.settings')
+
+application = get_wsgi_application()
diff --git a/video-generation/backend/compose-example.yml b/video-generation/backend/compose-example.yml
new file mode 100644
index 0000000..84e9da8
--- /dev/null
+++ b/video-generation/backend/compose-example.yml
@@ -0,0 +1,26 @@
+version: '3.9'
+
+services:
+ web:
+ build: .
+    command: gunicorn app.wsgi:application --bind 0.0.0.0:8000
+ env_file:
+ - .env
+ ports:
+ - "8000:8000"
+ depends_on:
+ - db
+ volumes:
+ - ./staticfiles:/app/staticfiles
+
+ db:
+ image: postgres:15
+ environment:
+ POSTGRES_DB: automato
+ POSTGRES_USER: postgres
+ POSTGRES_PASSWORD: postgres
+ volumes:
+ - postgres_data:/var/lib/postgresql/data/
+
+volumes:
+ postgres_data:
\ No newline at end of file
diff --git a/video-generation/backend/django.sh b/video-generation/backend/django.sh
new file mode 100644
index 0000000..573d145
--- /dev/null
+++ b/video-generation/backend/django.sh
@@ -0,0 +1,14 @@
+#!/bin/bash
+echo "Starting Migrations..."
+python manage.py migrate
+python manage.py schedule_tasks
+echo ====================================
+
+echo "Starting Server..."
+python manage.py runserver 0.0.0.0:8000
\ No newline at end of file
diff --git a/video-generation/backend/docker-compose.yml b/video-generation/backend/docker-compose.yml
new file mode 100644
index 0000000..09e6035
--- /dev/null
+++ b/video-generation/backend/docker-compose.yml
@@ -0,0 +1,80 @@
+version: '3.9'
+
+services:
+ web:
+ build: .
+ command: gunicorn app.wsgi:application --bind 0.0.0.0:8000
+ volumes:
+ - .:/app
+ - ./shorts:/app/shorts
+ - ./temp_uploads:/app/temp_uploads
+ - ./uploads:/app/uploads
+ - ./staticfiles:/app/staticfiles
+ ports:
+ - "6182:8000"
+ env_file:
+ - .env
+ environment:
+ - PYTHONPATH=/app
+ depends_on:
+ - redis
+ - qdrant
+ - db
+ restart: always
+
+ celery:
+ build: .
+ command: celery -A app worker --loglevel=info
+ volumes:
+ - ./shorts:/app/shorts
+ - ./temp_uploads:/app/temp_uploads
+ - ./uploads:/app/uploads
+ env_file:
+ - .env
+ environment:
+ - PYTHONPATH=/app
+ depends_on:
+ - redis
+ - db
+ restart: always
+
+ beat:
+ build: .
+ command: celery -A app beat --loglevel=info --scheduler django_celery_beat.schedulers:DatabaseScheduler
+ volumes:
+ - .:/app
+ env_file:
+ - .env
+ environment:
+ - PYTHONPATH=/app
+ depends_on:
+ - redis
+ - db
+ restart: always
+
+ redis:
+ image: redis:7
+ ports:
+ - "6380:6379"
+ restart: always
+
+ qdrant:
+ image: qdrant/qdrant
+ ports:
+ - "6333:6333"
+ volumes:
+ - qdrant_storage:/qdrant/storage
+ restart: always
+
+ db:
+ image: postgres:15
+ ports:
+ - "5433:5432"
+ env_file:
+ - .env
+ volumes:
+ - postgres_data:/var/lib/postgresql/data
+
+volumes:
+ postgres_data:
+ qdrant_storage:
diff --git a/video-generation/backend/env_example b/video-generation/backend/env_example
new file mode 100644
index 0000000..393cd23
--- /dev/null
+++ b/video-generation/backend/env_example
@@ -0,0 +1,14 @@
+X-API-KEY=
+DJANGO_ENCRYPTION_KEY=
+QDRANT_HOST=http://qdrant:6333
+REDIS_URL=redis://redis:6379/0
+YOUTUBE_API_KEY=
+CELERY_BROKER_URL=redis://redis:6379/0
+CELERY_RESULT_BACKEND=redis://redis:6379/0
+POSTGRES_DB=
+POSTGRES_USER=
+POSTGRES_PASSWORD=
+GOOGLE_CLIENT_ID=
+GOOGLE_CLIENT_SECRET=
+REDIRECT_URI=
+
diff --git a/video-generation/backend/manage.py b/video-generation/backend/manage.py
new file mode 100755
index 0000000..4931389
--- /dev/null
+++ b/video-generation/backend/manage.py
@@ -0,0 +1,22 @@
+#!/usr/bin/env python
+"""Django's command-line utility for administrative tasks."""
+import os
+import sys
+
+
+def main():
+ """Run administrative tasks."""
+ os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'app.settings')
+ try:
+ from django.core.management import execute_from_command_line
+ except ImportError as exc:
+ raise ImportError(
+ "Couldn't import Django. Are you sure it's installed and "
+ "available on your PYTHONPATH environment variable? Did you "
+ "forget to activate a virtual environment?"
+ ) from exc
+ execute_from_command_line(sys.argv)
+
+
+if __name__ == '__main__':
+ main()
diff --git a/video-generation/backend/requirements.txt b/video-generation/backend/requirements.txt
new file mode 100644
index 0000000..a51fdfd
--- /dev/null
+++ b/video-generation/backend/requirements.txt
@@ -0,0 +1,24 @@
+celery==5.5.1
+redis==5.2.1
+python-dotenv==1.1.0
+qdrant-client==1.14.1
+openai==1.75.0
+google-api-python-client==2.167.0
+google-auth==2.39.0
+google-auth-oauthlib==1.2.2
+requests==2.32.3
+djangorestframework==3.16.0
+django-cors-headers==3.14.0
+Django==5.2
+django-celery-beat==2.8.0
+yt-dlp==2025.3.31
+djangorestframework-simplejwt==5.5.0
+cryptography==44.0.2
+elevenlabs==1.56.0
+replicate==1.0.4
+aiohttp==3.11.18
+psycopg2-binary==2.9.10
+gunicorn==23.0.0
diff --git a/video-generation/backend/users/__init__.py b/video-generation/backend/users/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/video-generation/backend/users/admin.py b/video-generation/backend/users/admin.py
new file mode 100644
index 0000000..255e5e7
--- /dev/null
+++ b/video-generation/backend/users/admin.py
@@ -0,0 +1,10 @@
+from django.contrib import admin
+from .models import User, YouTubeToken, Video
+
+admin.site.register(User)
+admin.site.register(Video)
+
+class YouTubeTokenAdmin(admin.ModelAdmin):
+ list_display = ['user', 'channel_id', 'expiry']
+
+admin.site.register(YouTubeToken, YouTubeTokenAdmin)
diff --git a/video-generation/backend/users/apps.py b/video-generation/backend/users/apps.py
new file mode 100644
index 0000000..72b1401
--- /dev/null
+++ b/video-generation/backend/users/apps.py
@@ -0,0 +1,6 @@
+from django.apps import AppConfig
+
+
+class UsersConfig(AppConfig):
+ default_auto_field = 'django.db.models.BigAutoField'
+ name = 'users'
diff --git a/video-generation/backend/users/migrations/0001_initial.py b/video-generation/backend/users/migrations/0001_initial.py
new file mode 100644
index 0000000..c9f9cb4
--- /dev/null
+++ b/video-generation/backend/users/migrations/0001_initial.py
@@ -0,0 +1,67 @@
+# Generated by Django 5.2 on 2025-04-24 12:19
+
+import django.db.models.deletion
+import uuid
+from django.conf import settings
+from django.db import migrations, models
+
+
+class Migration(migrations.Migration):
+
+ initial = True
+
+ dependencies = [
+ ('auth', '0012_alter_user_first_name_max_length'),
+ ]
+
+ operations = [
+ migrations.CreateModel(
+ name='User',
+ fields=[
+ ('password', models.CharField(max_length=128, verbose_name='password')),
+ ('last_login', models.DateTimeField(blank=True, null=True, verbose_name='last login')),
+ ('is_superuser', models.BooleanField(default=False, help_text='Designates that this user has all permissions without explicitly assigning them.', verbose_name='superuser status')),
+ ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
+ ('email', models.EmailField(max_length=254, unique=True)),
+ ('is_active', models.BooleanField(default=True)),
+ ('is_staff', models.BooleanField(default=False)),
+ ('openai_api_key', models.TextField(blank=True, null=True)),
+ ('replicate_api_key', models.TextField(blank=True, null=True)),
+ ('elevenlabs_api_key', models.TextField(blank=True, null=True)),
+ ('openai_model', models.CharField(default='gpt-4o', max_length=255)),
+ ('elevenlabs_voice_id', models.CharField(blank=True, max_length=255, null=True)),
+ ('audience', models.CharField(blank=True, max_length=255, null=True)),
+ ('groups', models.ManyToManyField(blank=True, help_text='The groups this user belongs to. A user will get all permissions granted to each of their groups.', related_name='user_set', related_query_name='user', to='auth.group', verbose_name='groups')),
+ ('user_permissions', models.ManyToManyField(blank=True, help_text='Specific permissions for this user.', related_name='user_set', related_query_name='user', to='auth.permission', verbose_name='user permissions')),
+ ],
+ options={
+ 'abstract': False,
+ },
+ ),
+ migrations.CreateModel(
+ name='Video',
+ fields=[
+ ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
+ ('title', models.CharField(max_length=255)),
+ ('description', models.TextField()),
+ ('video_url', models.URLField()),
+ ('created_at', models.DateTimeField(auto_now_add=True)),
+ ('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='videos', to=settings.AUTH_USER_MODEL)),
+ ],
+ ),
+ migrations.CreateModel(
+ name='YouTubeToken',
+ fields=[
+ ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
+ ('access_token', models.TextField()),
+ ('refresh_token', models.TextField()),
+ ('token_uri', models.CharField(max_length=255)),
+ ('client_id', models.CharField(max_length=255)),
+ ('client_secret', models.CharField(max_length=255)),
+ ('scopes', models.TextField()),
+ ('channel_id', models.CharField(blank=True, max_length=255, null=True)),
+ ('expiry', models.DateTimeField()),
+ ('user', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, related_name='youtube_token', to=settings.AUTH_USER_MODEL)),
+ ],
+ ),
+ ]
diff --git a/video-generation/backend/users/migrations/0002_user_flux_model.py b/video-generation/backend/users/migrations/0002_user_flux_model.py
new file mode 100644
index 0000000..d3ba67d
--- /dev/null
+++ b/video-generation/backend/users/migrations/0002_user_flux_model.py
@@ -0,0 +1,18 @@
+# Generated by Django 5.2 on 2025-04-24 14:51
+
+from django.db import migrations, models
+
+
+class Migration(migrations.Migration):
+
+ dependencies = [
+ ('users', '0001_initial'),
+ ]
+
+ operations = [
+ migrations.AddField(
+ model_name='user',
+ name='flux_model',
+ field=models.CharField(default='black-forest-labs/flux-schnell', max_length=255),
+ ),
+ ]
diff --git a/video-generation/backend/users/migrations/__init__.py b/video-generation/backend/users/migrations/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/video-generation/backend/users/models.py b/video-generation/backend/users/models.py
new file mode 100644
index 0000000..7b0ff13
--- /dev/null
+++ b/video-generation/backend/users/models.py
@@ -0,0 +1,90 @@
+from django.contrib.auth.models import AbstractBaseUser, PermissionsMixin, BaseUserManager
+from django.db import models
+import uuid
+
+from .utils import encrypt_value, decrypt_value
+
+class UserManager(BaseUserManager):
+ def create_user(self, email, password=None, **extra_fields):
+ if not email:
+ raise ValueError("The Email must be set")
+ email = self.normalize_email(email)
+ user = self.model(email=email, **extra_fields)
+ user.set_password(password)
+ user.save()
+ return user
+
+ def create_superuser(self, email, password=None, **extra_fields):
+ extra_fields.setdefault("is_staff", True)
+ extra_fields.setdefault("is_superuser", True)
+ return self.create_user(email, password, **extra_fields)
+
+class User(AbstractBaseUser, PermissionsMixin):
+ id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
+ email = models.EmailField(unique=True)
+ is_active = models.BooleanField(default=True)
+ is_staff = models.BooleanField(default=False)
+
+ # Optional: storing OpenAI, Replicate, ElevenLabs keys per user
+ openai_api_key = models.TextField(null=True, blank=True)
+ replicate_api_key = models.TextField(null=True, blank=True)
+ elevenlabs_api_key = models.TextField(null=True, blank=True)
+ openai_model = models.CharField(max_length=255, default='gpt-4o')
+ elevenlabs_voice_id = models.CharField(max_length=255, null=True, blank=True)
+    audience = models.CharField(max_length=255, null=True, blank=True)
+ flux_model = models.CharField(max_length=255, default="black-forest-labs/flux-schnell")
+
+ objects = UserManager()
+
+ USERNAME_FIELD = "email"
+ REQUIRED_FIELDS = []
+
+ def save(self, *args, **kwargs):
+ # Encrypt keys before saving
+ if self.openai_api_key and not self.openai_api_key.startswith('gAAAA'):
+ self.openai_api_key = encrypt_value(self.openai_api_key)
+ if self.replicate_api_key and not self.replicate_api_key.startswith('gAAAA'):
+ self.replicate_api_key = encrypt_value(self.replicate_api_key)
+ if self.elevenlabs_api_key and not self.elevenlabs_api_key.startswith('gAAAA'):
+ self.elevenlabs_api_key = encrypt_value(self.elevenlabs_api_key)
+ super().save(*args, **kwargs)
+
+ @property
+ def openai_api_key_decrypted(self):
+ return decrypt_value(self.openai_api_key)
+
+ @property
+ def replicate_api_key_decrypted(self):
+ return decrypt_value(self.replicate_api_key)
+
+ @property
+ def elevenlabs_api_key_decrypted(self):
+ return decrypt_value(self.elevenlabs_api_key)
+
+ def __str__(self):
+ return self.email
+
+class YouTubeToken(models.Model):
+ id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
+ user = models.OneToOneField(User, on_delete=models.CASCADE, related_name="youtube_token")
+ access_token = models.TextField()
+ refresh_token = models.TextField()
+ token_uri = models.CharField(max_length=255)
+ client_id = models.CharField(max_length=255)
+ client_secret = models.CharField(max_length=255)
+ scopes = models.TextField()
+ channel_id = models.CharField(max_length=255, null=True, blank=True)
+ expiry = models.DateTimeField()
+
+
+class Video(models.Model):
+ id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
+ user = models.ForeignKey(User, on_delete=models.CASCADE, related_name="videos")
+ title = models.CharField(max_length=255)
+ description = models.TextField()
+ video_url = models.URLField()
+ created_at = models.DateTimeField(auto_now_add=True)
+
+ def __str__(self):
+ return self.title
\ No newline at end of file
diff --git a/video-generation/backend/users/serializers.py b/video-generation/backend/users/serializers.py
new file mode 100644
index 0000000..d311462
--- /dev/null
+++ b/video-generation/backend/users/serializers.py
@@ -0,0 +1,86 @@
+from rest_framework import serializers
+from django.contrib.auth import authenticate
+from .models import User, YouTubeToken, Video
+
+
+class VideoSerializer(serializers.ModelSerializer):
+    class Meta:
+        model = Video
+        fields = ['id', 'title', 'description', 'video_url', 'created_at']
+        read_only_fields = ['id', 'created_at']
+
+class RegisterSerializer(serializers.ModelSerializer):
+ class Meta:
+ model = User
+ fields = ['email', 'password']
+ extra_kwargs = {'password': {'write_only': True}}
+
+ def create(self, validated_data):
+ return User.objects.create_user(**validated_data)
+
+class LoginSerializer(serializers.Serializer):
+ email = serializers.EmailField()
+ password = serializers.CharField(write_only=True)
+
+ def validate(self, data):
+ user = authenticate(**data)
+ if user and user.is_active:
+ return user
+ raise serializers.ValidationError("Invalid credentials")
+
+class YouTubeTokenSerializer(serializers.ModelSerializer):
+ class Meta:
+ model = YouTubeToken
+ fields = "__all__"
+ read_only_fields = ['user']
+
+class UpdateAPIKeysSerializer(serializers.ModelSerializer):
+ class Meta:
+ model = User
+ fields = [
+ 'openai_api_key',
+ 'replicate_api_key',
+ 'elevenlabs_api_key',
+ 'openai_model',
+ 'elevenlabs_voice_id',
+ 'audience',
+ 'flux_model',
+ ]
+ extra_kwargs = {
+ 'openai_api_key': {'write_only': True},
+ 'replicate_api_key': {'write_only': True},
+ 'elevenlabs_api_key': {'write_only': True},
+ }
+
+
+class UserAPIKeysSerializer(serializers.ModelSerializer):
+ class Meta:
+ model = User
+ fields = [
+ "openai_api_key",
+ "replicate_api_key",
+ "elevenlabs_api_key",
+ "openai_model",
+ "elevenlabs_voice_id",
+ "audience",
+ "flux_model",
+ ]
+ extra_kwargs = {
+ "openai_api_key": {"read_only": True},
+ "replicate_api_key": {"read_only": True},
+ "elevenlabs_api_key": {"read_only": True},
+ }
\ No newline at end of file
diff --git a/video-generation/backend/users/tests.py b/video-generation/backend/users/tests.py
new file mode 100644
index 0000000..7ce503c
--- /dev/null
+++ b/video-generation/backend/users/tests.py
@@ -0,0 +1,3 @@
+from django.test import TestCase
+
+# Create your tests here.
diff --git a/video-generation/backend/users/urls.py b/video-generation/backend/users/urls.py
new file mode 100644
index 0000000..7cdd49b
--- /dev/null
+++ b/video-generation/backend/users/urls.py
@@ -0,0 +1,21 @@
+from django.urls import path
+from .views import RegisterView, LoginView
+from .views import SaveYouTubeTokenView
+from .views import VideoListCreateView, VideoRetrieveUpdateDestroyView
+from .views import UpdateAPIKeysView
+from .views import UserAPIKeysView
+from .views import YouTubeOAuthCallbackView
+
+urlpatterns = [
+ path('register/', RegisterView.as_view(), name='register'),
+ path('login/', LoginView.as_view(), name='login'),
+ path("youtube/token/", SaveYouTubeTokenView.as_view(), name="save-youtube-token"),
+ path("youtube/callback/", YouTubeOAuthCallbackView.as_view(), name="youtube-oauth-callback"),
+
+ path("videos/", VideoListCreateView.as_view(), name="video-list-create"),
+    path('videos/<uuid:pk>/', VideoRetrieveUpdateDestroyView.as_view(), name='video-detail'),
+ path('update-keys/', UpdateAPIKeysView.as_view(), name='update-api-keys'),
+ path("api-keys/", UserAPIKeysView.as_view(), name="get-api-keys"),
+
+]
+
diff --git a/video-generation/backend/users/utils.py b/video-generation/backend/users/utils.py
new file mode 100644
index 0000000..8ff9d63
--- /dev/null
+++ b/video-generation/backend/users/utils.py
@@ -0,0 +1,24 @@
+import os
+from cryptography.fernet import Fernet, InvalidToken
+
+def get_fernet():
+    key = os.getenv('DJANGO_ENCRYPTION_KEY')
+    if not key:
+        raise ValueError("Missing DJANGO_ENCRYPTION_KEY in environment.")
+    return Fernet(key)
+
+def encrypt_value(value):
+ if not value:
+ return None
+ f = get_fernet()
+ return f.encrypt(value.encode()).decode()
+
+def decrypt_value(value):
+ if not value:
+ return None
+ try:
+ f = get_fernet()
+ return f.decrypt(value.encode()).decode()
+ except InvalidToken:
+ return "[DECRYPTION_FAILED]"
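+
+# A minimal round-trip sketch (assumes DJANGO_ENCRYPTION_KEY holds a key
+# produced by Fernet.generate_key()):
+#
+#   token = encrypt_value("sk-test")      # Fernet token, e.g. "gAAAA..."
+#   assert decrypt_value(token) == "sk-test"
+#
+# Fernet tokens always begin with "gAAAA", which is the prefix that
+# User.save() checks to avoid encrypting an already-encrypted value twice.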
diff --git a/video-generation/backend/users/views.py b/video-generation/backend/users/views.py
new file mode 100644
index 0000000..48d60fc
--- /dev/null
+++ b/video-generation/backend/users/views.py
@@ -0,0 +1,183 @@
+from rest_framework.views import APIView
+from rest_framework.response import Response
+from rest_framework import status
+from .serializers import (
+    RegisterSerializer,
+    LoginSerializer,
+    VideoSerializer,
+    YouTubeTokenSerializer,
+    UpdateAPIKeysSerializer,
+    UserAPIKeysSerializer,
+)
+from .models import Video, YouTubeToken
+from rest_framework_simplejwt.tokens import RefreshToken
+from rest_framework import generics, permissions
+from rest_framework.permissions import IsAuthenticated
+from api.youtube_utils import get_authenticated_channel_id
+from api.tasks import process_youtube_channel
+
+
+import requests
+from django.utils.dateparse import parse_datetime
+from django.utils import timezone
+import os
+
+from google.auth.transport.requests import Request
+from google.oauth2.credentials import Credentials
+from datetime import datetime, timedelta
+
+GOOGLE_CLIENT_ID = os.getenv("GOOGLE_CLIENT_ID")
+GOOGLE_CLIENT_SECRET = os.getenv("GOOGLE_CLIENT_SECRET")
+REDIRECT_URI = os.getenv("REDIRECT_URI")
+
+class YouTubeOAuthCallbackView(APIView):
+ permission_classes = [IsAuthenticated]
+
+ def post(self, request):
+ code = request.data.get("code")
+ if not code:
+ return Response({"error": "Missing code"}, status=400)
+
+ token_url = "https://oauth2.googleapis.com/token"
+ data = {
+ "code": code,
+ "client_id": request.data.get("client_id"),
+ "client_secret": request.data.get("client_secret"),
+ "redirect_uri": request.data.get("redirect_uri"),
+ "grant_type": "authorization_code"
+ }
+
+ token_response = requests.post(token_url, data=data)
+ if not token_response.ok:
+ return Response({"error": "Failed to fetch tokens", "details": token_response.json()}, status=400)
+
+ token_data = token_response.json()
+
+ # Store in DB
+        token_obj, _ = YouTubeToken.objects.update_or_create(
+            user=request.user,
+            defaults={
+                "access_token": token_data["access_token"],
+                "refresh_token": token_data.get("refresh_token"),
+                "token_uri": "https://oauth2.googleapis.com/token",
+                "client_id": data["client_id"],
+                "client_secret": data["client_secret"],
+                "scopes": "https://www.googleapis.com/auth/youtube.upload https://www.googleapis.com/auth/youtube.readonly",
+                "expiry": timezone.now() + timedelta(seconds=token_data["expires_in"])
+            }
+        )
+
+ return Response({"message": "Token saved successfully"})
+
+class RegisterView(APIView):
+ def post(self, request):
+ serializer = RegisterSerializer(data=request.data)
+ if serializer.is_valid():
+ user = serializer.save()
+ refresh = RefreshToken.for_user(user)
+ return Response({
+ "refresh": str(refresh),
+ "access": str(refresh.access_token),
+ }, status=status.HTTP_201_CREATED)
+ return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
+
+class LoginView(APIView):
+ def post(self, request):
+ serializer = LoginSerializer(data=request.data)
+ if serializer.is_valid():
+ user = serializer.validated_data
+ refresh = RefreshToken.for_user(user)
+ return Response({
+ "refresh": str(refresh),
+ "access": str(refresh.access_token),
+ })
+ return Response(serializer.errors, status=status.HTTP_401_UNAUTHORIZED)
+
+class SaveYouTubeTokenView(generics.GenericAPIView):
+ permission_classes = [IsAuthenticated]
+
+ def post(self, request, *args, **kwargs):
+ code = request.data.get('code')
+ if not code:
+ return Response({"detail": "Missing authorization code."}, status=status.HTTP_400_BAD_REQUEST)
+
+ try:
+ # Exchange code for tokens
+ token_url = "https://oauth2.googleapis.com/token"
+ data = {
+ "code": code,
+ "client_id": GOOGLE_CLIENT_ID,
+ "client_secret": GOOGLE_CLIENT_SECRET,
+ "redirect_uri": REDIRECT_URI,
+ "grant_type": "authorization_code",
+ }
+ token_response = requests.post(token_url, data=data)
+ token_response.raise_for_status()
+ token_data = token_response.json()
+
+ access_token = token_data.get("access_token")
+ refresh_token = token_data.get("refresh_token")
+ token_uri = token_url
+ scopes = token_data.get("scope")
+ expires_in = token_data.get("expires_in") # seconds
+
+            expiry = timezone.now() + timedelta(seconds=expires_in)
+
+ # Save to database
+ youtube_token, created = YouTubeToken.objects.update_or_create(
+ user=request.user,
+ defaults={
+ "access_token": access_token,
+ "refresh_token": refresh_token,
+ "token_uri": token_uri,
+ "client_id": GOOGLE_CLIENT_ID,
+ "client_secret": GOOGLE_CLIENT_SECRET,
+ "scopes": scopes,
+ "expiry": expiry,
+ }
+ )
+
+ # Fetch channel ID and update
+ try:
+ channel_id = get_authenticated_channel_id(youtube_token)
+ youtube_token.channel_id = channel_id
+ youtube_token.save()
+
+ process_youtube_channel.delay(channel_id, str(request.user.id))
+ except Exception as e:
+            print(f"Warning: failed to fetch channel ID: {e}")
+
+ return Response({"detail": "YouTube token saved successfully."}, status=status.HTTP_200_OK)
+
+ except requests.exceptions.RequestException as e:
+            print(f"Warning: failed to exchange code: {e}")
+ return Response({"detail": "Failed to exchange code for tokens."}, status=status.HTTP_400_BAD_REQUEST)
+
+class VideoRetrieveUpdateDestroyView(generics.RetrieveUpdateDestroyAPIView):
+ serializer_class = VideoSerializer
+ permission_classes = [permissions.IsAuthenticated]
+
+ def get_queryset(self):
+ return Video.objects.filter(user=self.request.user)
+
+class VideoListCreateView(generics.ListCreateAPIView):
+ serializer_class = VideoSerializer
+ permission_classes = [permissions.IsAuthenticated]
+
+ def get_queryset(self):
+ return Video.objects.filter(user=self.request.user).order_by('-created_at')
+
+ def perform_create(self, serializer):
+ serializer.save(user=self.request.user)
+
+class UpdateAPIKeysView(generics.UpdateAPIView):
+ serializer_class = UpdateAPIKeysSerializer
+ permission_classes = [permissions.IsAuthenticated]
+
+ def get_object(self):
+ return self.request.user
+
+class UserAPIKeysView(generics.RetrieveAPIView):
+ serializer_class = UserAPIKeysSerializer
+ permission_classes = [permissions.IsAuthenticated]
+
+ def get_object(self):
+ return self.request.user
\ No newline at end of file
diff --git a/video-generation/front-end/.gitignore b/video-generation/front-end/.gitignore
new file mode 100644
index 0000000..5075aa6
--- /dev/null
+++ b/video-generation/front-end/.gitignore
@@ -0,0 +1,25 @@
+# Logs
+logs
+*.log
+npm-debug.log*
+yarn-debug.log*
+yarn-error.log*
+pnpm-debug.log*
+lerna-debug.log*
+dist
+dist.zip
+node_modules
+dist-ssr
+*.local
+
+# Editor directories and files
+.vscode/*
+!.vscode/extensions.json
+.idea
+.DS_Store
+*.suo
+*.ntvs*
+*.njsproj
+*.sln
+*.sw?
diff --git a/video-generation/front-end/.vscode/extensions.json b/video-generation/front-end/.vscode/extensions.json
new file mode 100644
index 0000000..a7cea0b
--- /dev/null
+++ b/video-generation/front-end/.vscode/extensions.json
@@ -0,0 +1,3 @@
+{
+ "recommendations": ["Vue.volar"]
+}
diff --git a/video-generation/front-end/README.md b/video-generation/front-end/README.md
new file mode 100644
index 0000000..33895ab
--- /dev/null
+++ b/video-generation/front-end/README.md
@@ -0,0 +1,5 @@
+# Vue 3 + TypeScript + Vite
+
+This template should help get you started developing with Vue 3 and TypeScript in Vite. The template uses Vue 3 `<script setup>` SFCs; check out the [script setup docs](https://vuejs.org/api/sfc-script-setup.html) to learn more.
+