5 changes: 5 additions & 0 deletions .gitignore
@@ -278,6 +278,11 @@ MSG*.bin

# python
*.pyc
__pycache__/
*.pytest_cache/

# Frontend configuration files (contain API keys)
src/frontend/config/settings.json

**/Generated Files/
**/Merged/*
25 changes: 21 additions & 4 deletions README.md
@@ -99,8 +99,17 @@ Documentation](https://github.com/SoftwareDevLabs).
📁 notebooks/ → Quick experiments and prototyping
📁 tests/ → Unit, integration, and end-to-end tests
📁 src/ → The core engine — all logic lives here (./src/README.md)
└── frontend/ → Web-based GUI for LLM backend configuration

```

### Frontend GUI Features

The new frontend provides a web-based interface for:
- **Backend Selection**: Choose between OpenAI, Anthropic, and other LLM providers
- **Configuration Management**: Set API keys, models, and provider-specific settings
- **Real-time Switching**: Switch between backends with live updates
- **Settings Persistence**: All configurations are saved and persist across sessions

---

## ⚡ Best Practices
@@ -120,10 +129,18 @@ Documentation](https://github.com/SoftwareDevLabs).
## 🧭 Getting Started

1. Clone the repo
2. Install via `requirements.txt`
3. Set up model configs
4. Check sample code
5. Begin in notebooks
2. Install via `requirements.txt`:
```bash
pip install -r requirements.txt
```
3. Launch the frontend interface:
```bash
python run_frontend.py
```
4. Access the web interface at `http://localhost:5000`
5. Configure your LLM backends in the Settings page
6. Check sample code in the `examples/` directory
7. Begin experimenting in notebooks

---

6 changes: 6 additions & 0 deletions requirements.txt
@@ -0,0 +1,6 @@
# Core SDLC dependencies
Flask==3.0.0
Werkzeug==3.0.1

# Development and testing
pytest==8.4.1
21 changes: 21 additions & 0 deletions run_frontend.py
@@ -0,0 +1,21 @@
#!/usr/bin/env python3
"""
Launch script for the SDLC Core frontend application.
"""

import os
import sys

# Add the src directory to the Python path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'src'))

from src.frontend.app import app

if __name__ == '__main__':
    print("Starting SDLC Core Frontend...")
    print("Access the application at: http://localhost:5000")
    print("Dashboard: http://localhost:5000/")
    print("Settings: http://localhost:5000/settings")
    print("\nPress Ctrl+C to stop the server")

    app.run(debug=True, host='0.0.0.0', port=5000)

Check failure — Code scanning / CodeQL
Flask app is run in debug mode (High)

A Flask app appears to be run in debug mode. This may allow an attacker to run arbitrary code through the debugger.

Copilot Autofix (AI, 4 months ago)

To fix the problem, the Flask application should not be run with debug=True unconditionally. The best practice is to enable debug mode only in a development environment, and ensure it is disabled in production. This can be achieved by checking an environment variable (such as FLASK_ENV or a custom variable like SDLC_ENV) or by defaulting to debug=False. The most robust fix is to read an environment variable (e.g., SDLC_ENV), and set debug=True only if it is set to 'development'. This change should be made in the if __name__ == '__main__': block in run_frontend.py. No new imports are needed, as os is already imported.

Suggested changeset 1: run_frontend.py

Run the following command in your local git repository to apply this patch:
cat << 'EOF' | git apply
diff --git a/run_frontend.py b/run_frontend.py
--- a/run_frontend.py
+++ b/run_frontend.py
@@ -18,4 +18,5 @@
     print("Settings: http://localhost:5000/settings")
     print("\nPress Ctrl+C to stop the server")
     
-    app.run(debug=True, host='0.0.0.0', port=5000)
\ No newline at end of file
+    debug_mode = os.environ.get('SDLC_ENV', 'production').lower() == 'development'
+    app.run(debug=debug_mode, host='0.0.0.0', port=5000)
\ No newline at end of file
EOF
64 changes: 64 additions & 0 deletions src/frontend/README.md
@@ -0,0 +1,64 @@
# Frontend for SDLC Core LLM Infrastructure

This frontend application provides a web-based GUI for managing LLM backend settings in the SDLC Core system.

## Features

- **Backend Selection**: Choose between different LLM providers (OpenAI, Anthropic, etc.)
- **Configuration Management**: Set API keys, models, and other backend-specific settings
- **Visual Interface**: Clean, responsive web interface with real-time updates
- **Settings Persistence**: Configuration is saved locally and persists across sessions

## Installation

1. Install dependencies:
```bash
pip install -r requirements.txt
```

2. Run the application:
```bash
python app.py
```

3. Open your browser and navigate to `http://localhost:5000`

## Usage

### Dashboard
- View the currently active LLM backend
- See all available backends
- Quick-switch between backends

### Settings Page
- Configure API keys for each backend
- Set preferred models
- Enable/disable specific backends
- Save and persist configuration changes

## Configuration

The application stores settings in `config/settings.json`. This file is automatically created with default settings on first run.
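On first run the file is seeded from `DEFAULT_CONFIG` in `app.py`, so the generated file has this shape (values shown are the shipped defaults, with `api_key` left empty until the user fills it in):

```json
{
  "selected_backend": "openai",
  "backends": {
    "openai": {
      "name": "OpenAI",
      "description": "OpenAI's GPT models (GPT-3.5, GPT-4, etc.)",
      "api_key": "",
      "model": "gpt-3.5-turbo",
      "enabled": true
    },
    "anthropic": {
      "name": "Anthropic",
      "description": "Anthropic's Claude models",
      "api_key": "",
      "model": "claude-3-sonnet-20240229",
      "enabled": true
    }
  }
}
```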

Default backends supported:
- **OpenAI**: GPT models (GPT-3.5, GPT-4, etc.)

- **Anthropic**: Claude models

## API Endpoints

- `GET /api/config` - Get current configuration
- `POST /api/config` - Update configuration
- `POST /api/backend/select` - Switch active backend
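As a sketch of how a client might build the body for the backend-switch endpoint (the helper name is illustrative, not part of the application; it mirrors the server-side check in `app.py` that the backend must already exist under `config['backends']`):

```python
import json

def build_select_request(backend: str, known_backends: set) -> bytes:
    """Build the JSON body for POST /api/backend/select.

    Mirrors the server-side validation: an unknown backend
    would be rejected by the server with a 400 response.
    """
    if backend not in known_backends:
        raise ValueError(f"Invalid backend: {backend}")
    return json.dumps({"backend": backend}).encode("utf-8")

# Body a client would POST (e.g. via urllib.request) to switch backends:
body = build_select_request("anthropic", {"openai", "anthropic"})
```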

## File Structure

```
frontend/
├── app.py               # Main Flask application
├── requirements.txt     # Python dependencies
├── templates/
│   ├── index.html       # Dashboard page
│   └── settings.html    # Settings configuration page
└── config/
    └── settings.json    # Persistent configuration (auto-generated)
```
106 changes: 106 additions & 0 deletions src/frontend/app.py
@@ -0,0 +1,106 @@
"""
Frontend web application for SDLC_core LLM infrastructure settings.
Provides a GUI interface for selecting and configuring LLM backends.
"""

from flask import Flask, render_template, request, jsonify, redirect, url_for
import json
import os
from typing import Dict, Any

app = Flask(__name__)

# Configuration file path
CONFIG_FILE = os.path.join(os.path.dirname(__file__), 'config', 'settings.json')

# Default configuration
DEFAULT_CONFIG = {
    "selected_backend": "openai",
    "backends": {
        "openai": {
            "name": "OpenAI",
            "description": "OpenAI's GPT models (GPT-3.5, GPT-4, etc.)",
            "api_key": "",
            "model": "gpt-3.5-turbo",
            "enabled": True
        },
        "anthropic": {
            "name": "Anthropic",
            "description": "Anthropic's Claude models",
            "api_key": "",
            "model": "claude-3-sonnet-20240229",
            "enabled": True
        }
    }
}

def load_config() -> Dict[str, Any]:
    """Load configuration from file or return default."""
    try:
        if os.path.exists(CONFIG_FILE):
            with open(CONFIG_FILE, 'r') as f:
                return json.load(f)
    except Exception as e:
        print(f"Error loading config: {e}")
    return DEFAULT_CONFIG.copy()

def save_config(config: Dict[str, Any]) -> bool:
    """Save configuration to file."""
    try:
        os.makedirs(os.path.dirname(CONFIG_FILE), exist_ok=True)
        with open(CONFIG_FILE, 'w') as f:
            json.dump(config, f, indent=2)
        return True
    except Exception as e:
        print(f"Error saving config: {e}")
        return False

@app.route('/')
def index():
    """Main dashboard."""
    config = load_config()
    return render_template('index.html', config=config)

@app.route('/settings')
def settings():
    """Settings page for backend configuration."""
    config = load_config()
    return render_template('settings.html', config=config)

@app.route('/api/config', methods=['GET'])
def get_config():
    """API endpoint to get current configuration."""
    config = load_config()
    return jsonify(config)

@app.route('/api/config', methods=['POST'])
def update_config():
    """API endpoint to update configuration."""
    try:
        new_config = request.json
        if save_config(new_config):
            return jsonify({"success": True, "message": "Configuration updated successfully"})
        else:
            return jsonify({"success": False, "message": "Failed to save configuration"}), 500
    except Exception as e:
        return jsonify({"success": False, "message": str(e)}), 400

Check warning — Code scanning / CodeQL
Information exposure through an exception (Medium)

Stack trace information flows to this location and may be exposed to an external user.

Copilot Autofix (AI, 4 months ago)

To fix the problem, we should avoid returning the raw exception message (str(e)) to the user in the API response. Instead, we should return a generic error message such as "An internal error has occurred" or "Failed to update configuration". The detailed exception information should be logged on the server for debugging. This can be done using Python's built-in logging module, which is a well-known and standard way to log errors. The changes should be made in the update_config function (lines 85-86) in src/frontend/app.py. We need to:

- Import the logging module at the top of the file.
- Configure logging (if not already done).
- Replace the response in the exception handler to return a generic error message.
- Log the exception details on the server.

Suggested changeset 1: src/frontend/app.py

Run the following command in your local git repository to apply this patch:
cat << 'EOF' | git apply
diff --git a/src/frontend/app.py b/src/frontend/app.py
--- a/src/frontend/app.py
+++ b/src/frontend/app.py
@@ -7,9 +7,13 @@
 import json
 import os
 from typing import Dict, Any
+import logging
 
 app = Flask(__name__)
 
+# Configure logging
+logging.basicConfig(level=logging.INFO)
+
 # Configuration file path
 CONFIG_FILE = os.path.join(os.path.dirname(__file__), 'config', 'settings.json')
 
@@ -83,7 +84,8 @@
         else:
             return jsonify({"success": False, "message": "Failed to save configuration"}), 500
     except Exception as e:
-        return jsonify({"success": False, "message": str(e)}), 400
+        logging.exception("Exception occurred while updating configuration")
+        return jsonify({"success": False, "message": "An internal error has occurred"}), 400
 
 @app.route('/api/backend/select', methods=['POST'])
 def select_backend():
EOF

@app.route('/api/backend/select', methods=['POST'])
def select_backend():
    """API endpoint to select a backend."""
    try:
        data = request.json
        backend = data.get('backend')

        config = load_config()
        if backend in config['backends']:
            config['selected_backend'] = backend
            if save_config(config):
                return jsonify({"success": True, "message": f"Backend switched to {backend}"})

        return jsonify({"success": False, "message": "Invalid backend"}), 400
    except Exception as e:
        return jsonify({"success": False, "message": str(e)}), 400

Check warning — Code scanning / CodeQL
Information exposure through an exception (Medium)

Stack trace information flows to this location and may be exposed to an external user.

Copilot Autofix (AI, 4 months ago)

To fix the problem, we should avoid returning the raw exception message to the client. Instead, we should log the exception details on the server (for debugging and auditing purposes) and return a generic error message to the client. This ensures that sensitive information is not exposed to users, while still allowing developers to diagnose issues. The fix should be applied to the exception handler in the /api/backend/select endpoint (lines 102-103). We should also ensure that the logging is done using Python's standard logging library, which is more appropriate than print for production code. This will require importing the logging module and configuring it if not already done.

Suggested changeset 1: src/frontend/app.py

Run the following command in your local git repository to apply this patch:
cat << 'EOF' | git apply
diff --git a/src/frontend/app.py b/src/frontend/app.py
--- a/src/frontend/app.py
+++ b/src/frontend/app.py
@@ -7,9 +7,11 @@
 import json
 import os
 from typing import Dict, Any
-
+import logging
 app = Flask(__name__)
 
+# Configure logging
+logging.basicConfig(level=logging.INFO)
 # Configuration file path
 CONFIG_FILE = os.path.join(os.path.dirname(__file__), 'config', 'settings.json')
 
@@ -100,7 +100,8 @@
         
         return jsonify({"success": False, "message": "Invalid backend"}), 400
     except Exception as e:
-        return jsonify({"success": False, "message": str(e)}), 400
+        logging.exception("Exception occurred in select_backend endpoint")
+        return jsonify({"success": False, "message": "An internal error has occurred."}), 500
 
 if __name__ == '__main__':
     app.run(debug=True, host='0.0.0.0', port=5000)
\ No newline at end of file
EOF

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=5000)

Check failure — Code scanning / CodeQL
Flask app is run in debug mode (High)

A Flask app appears to be run in debug mode. This may allow an attacker to run arbitrary code through the debugger.

Copilot Autofix (AI, 4 months ago)

To fix this problem, the Flask application should not be run with debug=True unconditionally. The best practice is to control the debug mode using an environment variable (such as FLASK_DEBUG or a custom variable), or to default to debug=False unless explicitly set otherwise. This ensures that in production environments, debug mode is not enabled, reducing the risk of exposing the interactive debugger. The change should be made in the if __name__ == '__main__': block, replacing the hardcoded debug=True with logic that checks an environment variable (e.g., os.environ.get('FLASK_DEBUG', '0') == '1'). No new imports are needed, as os is already imported.

Suggested changeset 1: src/frontend/app.py

Run the following command in your local git repository to apply this patch:
cat << 'EOF' | git apply
diff --git a/src/frontend/app.py b/src/frontend/app.py
--- a/src/frontend/app.py
+++ b/src/frontend/app.py
@@ -103,4 +103,5 @@
         return jsonify({"success": False, "message": str(e)}), 400
 
 if __name__ == '__main__':
-    app.run(debug=True, host='0.0.0.0', port=5000)
\ No newline at end of file
+    debug_mode = os.environ.get('FLASK_DEBUG', '0') == '1'
+    app.run(debug=debug_mode, host='0.0.0.0', port=5000)
\ No newline at end of file
EOF
2 changes: 2 additions & 0 deletions src/frontend/requirements.txt
@@ -0,0 +1,2 @@
Flask==3.0.0
Werkzeug==3.0.1