8 changes: 8 additions & 0 deletions README.md
@@ -23,6 +23,14 @@ As mentioned above, this is a research demonstration prototype and should not be

## Getting Started

### Nebius AI Studio quickstart (OpenAI-compatible)

- Set `NEBIUS_API_KEY` in `backend/.env` and choose `DEFAULT_MODEL=nebius-kimi-k2`.
- Use `FRAME_BASE_URL=https://api.studio.nebius.com/v1/` and `FRAME_MODEL=moonshotai/Kimi-K2-Instruct` (a quick endpoint check is sketched below).
- Frontend: set `NEXT_PUBLIC_API_VERSION=v1`, `NEXT_PUBLIC_ENABLE_V2_API=false`, and `NEXT_PUBLIC_DRY_RUN=true` (no Tavily) or `false` (with Tavily).
- See the Nebius AI Studio Cookbook for more examples: https://github.com/nebius/ai-studio-cookbook
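
Since Nebius AI Studio exposes an OpenAI-compatible API, you can verify your key and model name before starting the services. A minimal sketch using `curl`, assuming the standard OpenAI chat-completions route under the base URL above and that `NEBIUS_API_KEY` is exported in your shell:

```bash
# Sanity-check the Nebius AI Studio endpoint (OpenAI-compatible).
curl -s https://api.studio.nebius.com/v1/chat/completions \
  -H "Authorization: Bearer $NEBIUS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "moonshotai/Kimi-K2-Instruct",
        "messages": [{"role": "user", "content": "Reply with one word: ready"}]
      }'
```

A JSON response containing a `choices` array confirms the key and model name are valid.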


To run the prototype, you need to start both the backend and frontend services:

### 1. Backend Setup
28 changes: 28 additions & 0 deletions backend/README.md
@@ -404,3 +404,31 @@ For issues and questions:
- Create an issue in the repository
- Check the logs in the `logs/` directory
- Review the configuration settings

### Nebius AI Studio (OpenAI-compatible)

To run the v1 endpoint with Nebius as the default LLM:

1) Copy the example environment file and point CORS at your frontend port (e.g., 3004). Note that `sed -i ''` is the macOS/BSD syntax; on Linux, drop the empty string and use `sed -i`:

```bash
cp env.example .env
sed -i '' 's#^FRONTEND_URL=.*#FRONTEND_URL=http://localhost:3004#' .env
```

2) Set Nebius as the default model and provide the API key (no quotes around values):

```bash
sed -i '' 's#^DEFAULT_MODEL=.*#DEFAULT_MODEL=nebius-kimi-k2#' .env
# NEBIUS_API_KEY=your-nebius-api-key
# FRAME_BASE_URL=https://api.studio.nebius.com/v1/
# FRAME_MODEL=moonshotai/Kimi-K2-Instruct
```
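
The three commented lines above are values to set in `.env` itself, not shell commands. If those keys are not already present in your `.env`, a sketch of appending them (replace `your-nebius-api-key` with your actual key):

```bash
cat >> .env <<'EOF'
NEBIUS_API_KEY=your-nebius-api-key
FRAME_BASE_URL=https://api.studio.nebius.com/v1/
FRAME_MODEL=moonshotai/Kimi-K2-Instruct
EOF
```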

3) Start the server:

```bash
./launch_server.sh
```
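
Once the script is running, you can confirm the server is listening (assuming the default `PORT=8000` from `env.example`; exact routes depend on the backend API):

```bash
# Prints the HTTP status code of the root route; any response
# (even a 404) confirms the server is up on port 8000.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8000/
```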

Note: Tavily search is optional; to enable it, place your API key in `tavily_api.txt` (see `TAVILY_API_KEY_FILE` in `env.example`).
53 changes: 52 additions & 1 deletion backend/env.example
@@ -1,47 +1,98 @@
# Universal Deep Research Backend (UDR-B) - Environment Configuration
# Copy this file to .env and customize the values for your deployment

# Server Configuration
HOST=0.0.0.0
PORT=8000
LOG_LEVEL=info

# CORS Configuration
FRONTEND_URL=http://localhost:3000
ALLOW_CREDENTIALS=true

# Model Configuration
DEFAULT_MODEL=llama-3.1-nemotron-253b
LLM_BASE_URL=https://integrate.api.nvidia.com/v1
LLM_API_KEY_FILE=nvdev_api.txt
LLM_TEMPERATURE=0.2
LLM_TOP_P=0.7
LLM_MAX_TOKENS=2048

# Search Configuration
TAVILY_API_KEY_FILE=tavily_api.txt
MAX_SEARCH_RESULTS=10

# Research Configuration
MAX_TOPICS=1
MAX_SEARCH_PHRASES=1
MOCK_DIRECTORY=mock_instances/stocks_24th_3_sections
RANDOM_SEED=42

# Logging Configuration
LOG_DIR=logs
TRACE_ENABLED=true
COPY_INTO_STDOUT=false

# FrameV4 Configuration
LONG_CONTEXT_CUTOFF=8192
FORCE_LONG_CONTEXT=false
MAX_ITERATIONS=1024
INTERACTION_LEVEL=none

# Model-specific overrides (optional)
# LLAMA_3_1_8B_BASE_URL=https://integrate.api.nvidia.com/v1
# LLAMA_3_1_8B_MODEL=nvdev/meta/llama-3.1-8b-instruct
# LLAMA_3_1_8B_TEMPERATURE=0.2
# LLAMA_3_1_8B_TOP_P=0.7
# LLAMA_3_1_8B_MAX_TOKENS=2048

# Nebius AI Studio (OpenAI-compatible) configuration (uncomment and set to enable)
# DEFAULT_MODEL=nebius-kimi-k2
# FRAME_BASE_URL=https://api.studio.nebius.com/v1/
# FRAME_MODEL=moonshotai/Kimi-K2-Instruct
# NEBIUS_API_KEY=your-nebius-api-key
14 changes: 14 additions & 0 deletions frontend/README.md
@@ -169,3 +169,17 @@ The application can be deployed to any platform that supports Next.js:
This software is provided for research and demonstration purposes only. Please refer to the [DISCLAIMER](DISCLAIMER.txt) file for complete terms and conditions regarding the use of this software. You can find the license in [LICENSE](LICENSE.txt).

**Do not use this code in production.**

### Using Nebius AI Studio (OpenAI-compatible)

To run local development against the v1 API with Nebius:

```bash
cp env.example .env.local
sed -i '' 's#^NEXT_PUBLIC_API_VERSION=.*#NEXT_PUBLIC_API_VERSION=v1#' .env.local
sed -i '' 's#^NEXT_PUBLIC_ENABLE_V2_API=.*#NEXT_PUBLIC_ENABLE_V2_API=false#' .env.local
sed -i '' 's#^NEXT_PUBLIC_DRY_RUN=.*#NEXT_PUBLIC_DRY_RUN=false#' .env.local
npm run dev -- -p 3004
```
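
For reference, a sketch of the relevant `.env.local` values after the edits above (assuming these keys already exist in `env.example`):

```bash
# frontend/.env.local
NEXT_PUBLIC_API_VERSION=v1
NEXT_PUBLIC_ENABLE_V2_API=false
NEXT_PUBLIC_DRY_RUN=false
```

The app will then be available at http://localhost:3004, matching the `FRONTEND_URL` configured in the backend's CORS settings.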

The backend must be configured with `DEFAULT_MODEL=nebius-kimi-k2` and `NEBIUS_API_KEY` in `backend/.env` (see the backend README).