
Feat/hint #7

Merged
Asuniia merged 4 commits into main from feat/hint
Mar 25, 2026

Conversation

@Asuniia (Contributor) commented Mar 24, 2026

Summary by CodeRabbit

  • New Features
    • Agent setup now accepts timezone, polling frequency, environment, and data path configuration options
    • Extended database support: SQLite, MySQL, and MongoDB with optional authentication variants
    • Configuration preview table displayed before applying changes
    • EDGE_KEY validation on startup
    • Automatic Docker daemon startup attempt when not running
    • Random helpful hints displayed during operations
    • Confirmation prompt before file generation

coderabbitai bot commented Mar 24, 2026

📝 Walkthrough

This PR expands CLI configuration options for the agent command, refactors database setup to support SQL/NoSQL categories with SQLite, adds random hints throughout status messages, introduces edge key validation, implements Docker daemon auto-start detection, and adds confirmation prompts before file generation with configuration preview tables.

Changes

Cohort / File(s) / Summary
Hint System
core/utils.py, commands/common.py, commands/dashboard.py, core/network.py, core/updater.py
Added get_random_hint() function and integrated random hint display into status messages, progress descriptions, and banner output across multiple commands and utilities.
Database Configuration Refactor
commands/agent.py, commands/dashboard.py, commands/db.py
Refactored database setup to support SQL/NoSQL category selection, added SQLite support with volume mappings, replaced MongoDB variants with standard/auth branching, and added configuration preview tables with explicit confirmation prompts before file generation.
System Utilities & Validation
core/utils.py
Added start_docker() for cross-platform Docker daemon management with retry logic, validate_edge_key() for Base64/JSON EDGE_KEY validation, and enhanced check_system() to prompt and attempt Docker daemon startup.
Agent CLI Enhancement
commands/agent.py
Added new CLI options (--tz, --polling, --env, --data-path), integrated edge key validation with abort on failure, and added configuration preview generation with confirmation gate.
Dashboard & Database Display
commands/dashboard.py, commands/db.py
Refactored database configuration flow to support internal/external modes with dynamic port generation and Postgres password handling; updated list output formatting based on database type (e.g., "Local File" for SQLite).
CLI & Import Organization
commands/config.py, main.py
Consolidated and reordered imports for consistency; added trailing comma formatting to command signatures; no functional behavior changes.
Minor Fixes
release
Added trailing newline to final status message output.
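The hint helper itself isn't shown in this summary; a minimal sketch of what a get_random_hint() built on random.choice might look like (the HINTS list contents here are invented for illustration):

```python
import random

# Hypothetical hint pool; the real list lives in core/utils.py.
HINTS = [
    "Use --tz to set the agent timezone explicitly.",
    "Run with --env to point at a custom environment.",
    "SQLite databases are stored as local files under the data path.",
]


def get_random_hint() -> str:
    """Return one hint at random for display in status messages."""
    return random.choice(HINTS)
```

Callers can append the result to any status line or progress description without further coordination, since each call is independent.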

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🐰 Whiskers twitch with database delight,
SQLite and hints make configs bright,
Docker starts with a hoppy command,
Edge keys validated across the land,
Confirmations prompt before we run—
Configuration made fun, hop-hop-fun! 🌟

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage (⚠️ Warning): docstring coverage is 11.76%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.
  • Title check (❓ Inconclusive): the title "Feat/hint" is vague and does not clearly convey what the changeset accomplishes; it references a feature branch name but gives no meaningful information about the actual changes. Resolution: use a descriptive title that summarizes the main change, such as "Add random hint display to CLI commands" or "Enhance CLI with configuration options and hints".
✅ Passed checks (1 passed)
  • Description Check (✅ Passed): check skipped because CodeRabbit's high-level summary is enabled.


coderabbitai bot left a comment

Actionable comments posted: 9

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@commands/agent.py`:
- Around line 147-156: The code always prompts for Username and Password
regardless of db_type, preventing configuring a no-auth MongoDB; change the flow
around the existing Prompt.ask calls so credentials are requested only when
needed: for non-MongoDB types (postgresql/mysql/mariadb) always prompt for
user/password, and for "mongodb" prompt for an additional Confirm/Prompt (e.g.,
"Require authentication?") and only ask Prompt.ask("Username") and
Prompt.ask("Password", password=True) when that confirm is true; update the
block that defines db_name, host, port, user, password and use db_type and the
new auth-confirm to gate the credential prompts.
- Around line 137-145: The code currently calls add_db_to_json(...) immediately
when building each DB entry; change this so you accumulate the DB dicts
in-memory (e.g., a list named pending_dbs) instead of calling add_db_to_json in
the discovery/collection sites (the current immediate calls to add_db_to_json in
the blocks that create the {"name": friendly_name, "database": container_path,
"type": db_type, "generated_id": ...} dicts). Remove those immediate writes and
append the same dicts to pending_dbs; then, after the final preview/confirmation
step where the user approves (the confirmation/approval branch), iterate
pending_dbs and call add_db_to_json(...) for each entry (preserving the same
payload structure and uuid generation if not already set). Ensure all the other
locations that currently call add_db_to_json are converted to this pattern so
entries are only persisted after explicit approval.

In `@commands/dashboard.py`:
- Around line 77-85: The DATABASE_URL construction in env_vars.update currently
interpolates raw db_user and db_pass (used in the "DATABASE_URL" value) which
can break DSN parsing for special characters; import urllib.parse.quote and
percent-encode db_user and db_pass (e.g., use quote(db_user, safe='') and
quote(db_pass, safe='')) before composing the f-string so the DATABASE_URL uses
the encoded credentials while leaving db_host, db_port, and db_name unchanged;
update the env_vars.update block that sets "DATABASE_URL" to use the
encoded_user and encoded_pass variables.

In `@commands/db.py`:
- Around line 139-175: The SQLite branch currently continues execution and falls
through into the shared compose/env handling, producing a second malformed DB
entry and invalid volume stanza; modify the code around the db_engine ==
"sqlite" branch (where compose_path is read/written and add_db_to_json is
called) to either return immediately after completing the sqlite flow or wrap
the later shared compose/env block (the code that uses service_name and
credentials) with an explicit if db_engine != "sqlite" guard so the sqlite path
does not execute the shared logic (refer to symbols db_engine, compose_path,
add_db_to_json, service_name).
- Around line 260-272: The current logic inserts vol_snippet by finding the
first occurrence of "volumes:" in new_content which may match an indented
service-level bind-mount and corrupt the YAML; update the code that computes
vol_pos/end_of_volumes (the block using new_content.find("volumes:"),
vol_snippet, end_of_volumes, and service_name) to instead locate the top-level
(unindented) "volumes:" anchor (for example by using a regex that matches
^volumes: or by parsing new_content with a YAML parser, adding service_name +
"-data" under the root volumes mapping, and serializing back). Ensure you only
modify the root-level volumes map and not any indented service-level volumes so
the inserted "  {service_name}-data:" goes into the top-level volumes section.
- Around line 95-104: The current prompts always require Username and Password,
preventing configuration of no-auth MongoDB in existing mode; modify the logic
around db_type and the Prompt.ask calls so that when db_type == "mongodb" (and
in the existing flow) Username and Password are optional: either skip prompting
or allow empty defaults (e.g., Prompt.ask("Username", default="") and
Prompt.ask("Password", password=True, default="")), and ensure subsequent code
that builds the connection string or client only includes credentials when
user/password are non-empty.

In `@core/updater.py`:
- Around line 145-152: Replace the lexicographic comparison of version strings
with semantic version comparison: import and use packaging.version.parse (or
packaging.version.Version) to parse latest_tag and current into Version objects
and change the condition to check if parse(latest_tag) > parse(current)
(preserving the existing prerelease guard with is_prerelease). Update the if at
the top of the block (the one referencing latest_tag, current, and
is_prerelease) to use parsed versions, and keep the console.print and
typer.confirm behavior unchanged.

In `@core/utils.py`:
- Around line 161-162: The validator currently checks required_fields =
["serverUrl", "agentId", "masterKeyB64"] with return all(field in data for field
in required_fields), which wrongly passes for lists/strings; update the check
(in the function that performs this validation in core/utils.py) to first ensure
data is a dict via isinstance(data, dict) and return False if not, then perform
the all(field in data for field in required_fields) check so only mappings are
accepted.
- Around line 90-91: The Windows branch that currently runs
subprocess.run(["start", "docker"], shell=True, check=True) should be changed to
invoke Docker Desktop properly: call subprocess.run(["docker", "desktop",
"start"], check=True) for newer Docker CLI versions (or fallback to launching
the executable at "C:\\Program Files\\Docker\\Docker\\Docker Desktop.exe" if the
CLI command fails), and remove shell=True when passing an argument list; update
the code in the block handling os_type == "Windows" (the branch using os_type
and subprocess.run) to try the CLI command first and fall back to the executable
path with proper error handling and retries.
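The core/utils.py validation fix described above can be sketched as follows; the Base64/JSON decoding step and the function signature are assumptions based on the comment's description:

```python
import base64
import binascii
import json

REQUIRED_FIELDS = ["serverUrl", "agentId", "masterKeyB64"]


def validate_edge_key(edge_key: str) -> bool:
    """Decode a Base64-encoded JSON EDGE_KEY and check required fields.

    Rejects payloads that decode to lists or strings: a bare
    `field in data` membership check wrongly passes for a list that
    happens to contain the field names, so a dict is required first.
    """
    try:
        data = json.loads(base64.b64decode(edge_key))
    except (ValueError, binascii.Error):
        return False
    if not isinstance(data, dict):
        return False
    return all(field in data for field in REQUIRED_FIELDS)
```

The isinstance guard is the essential part: without it, a JSON array of the field names would validate successfully.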

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 8b96a925-1d81-4066-89ed-60d390c2975c

📥 Commits

Reviewing files that changed from the base of the PR and between 01b59a9 and 1219f42.

📒 Files selected for processing (10)
  • commands/agent.py
  • commands/common.py
  • commands/config.py
  • commands/dashboard.py
  • commands/db.py
  • core/network.py
  • core/updater.py
  • core/utils.py
  • main.py
  • release
📜 Review details
🧰 Additional context used
🪛 Ruff (0.15.6)
commands/config.py

[error] 21-21: Possible SQL injection vector through string-based query construction

(S608)

commands/common.py

[warning] 9-9: Do not perform function call typer.Argument in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


[warning] 17-17: Do not perform function call typer.Argument in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


[warning] 25-25: Do not perform function call typer.Argument in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)

main.py

[warning] 12-12: Boolean-typed positional argument in function definition

(FBT001)

core/updater.py

[warning] 57-57: Boolean default positional argument in function definition

(FBT002)


[warning] 70-73: Use ternary operator include_pre = channel == "beta" if channel else is_prerelease(current) instead of if-else-block

Replace if-else-block with include_pre = channel == "beta" if channel else is_prerelease(current)

(SIM108)


[error] 96-97: try-except-pass detected, consider logging the exception

(S110)


[warning] 96-96: Do not catch blind exception: Exception

(BLE001)


[warning] 111-111: Too many branches (34 > 12)

(PLR0912)


[warning] 111-111: Too many statements (100 > 50)

(PLR0915)


[error] 153-154: try-except-pass detected, consider logging the exception

(S110)


[warning] 153-153: Do not catch blind exception: Exception

(BLE001)


[warning] 186-187: Use elif instead of else then if, to reduce indentation

Convert to elif

(PLR5501)


[warning] 241-241: Use raise without specifying exception name

Remove exception name

(TRY201)


[error] 263-263: subprocess call: check for execution of untrusted input

(S603)


[error] 264-264: Starting a process with a partial executable path

(S607)


[error] 267-267: subprocess call: check for execution of untrusted input

(S603)


[error] 268-268: Starting a process with a partial executable path

(S607)


[error] 270-270: subprocess call: check for execution of untrusted input

(S603)


[error] 270-270: Starting a process with a partial executable path

(S607)


[warning] 282-282: Do not catch blind exception: Exception

(BLE001)


[warning] 285-285: Do not catch blind exception: Exception

(BLE001)


[warning] 288-288: Do not catch blind exception: Exception

(BLE001)

commands/db.py

[warning] 63-63: Too many branches (25 > 12)

(PLR0912)


[warning] 63-63: Too many statements (125 > 50)

(PLR0915)


[warning] 146-146: Unnecessary mode argument

Remove mode argument

(UP015)


[warning] 251-251: Unnecessary mode argument

Remove mode argument

(UP015)


[warning] 294-295: Explicitly concatenated string should be implicitly concatenated

Remove redundant '+' operator to implicitly concatenate

(ISC003)

commands/agent.py

[warning] 31-31: Too many branches (38 > 12)

(PLR0912)


[warning] 31-31: Too many statements (186 > 50)

(PLR0915)


[warning] 38-38: Boolean-typed positional argument in function definition

(FBT001)


[warning] 38-38: Boolean positional value in function call

(FBT003)

commands/dashboard.py

[warning] 22-22: Too many statements (63 > 50)

(PLR0915)


[warning] 25-25: Boolean-typed positional argument in function definition

(FBT001)


[warning] 25-25: Boolean positional value in function call

(FBT003)

core/utils.py

[error] 58-58: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)


[error] 87-87: Starting a process with a partial executable path

(S607)


[error] 89-89: Starting a process with a partial executable path

(S607)


[error] 91-91: subprocess call with shell=True identified, security issue

(S602)


[error] 91-91: Starting a process with a partial executable path

(S607)


[error] 97-97: Starting a process with a partial executable path

(S607)


[warning] 103-103: Consider moving this statement to an else block

(TRY300)


[error] 104-104: Do not use bare except

(E722)


[warning] 106-106: Do not catch blind exception: Exception

(BLE001)


[error] 120-120: subprocess call: check for execution of untrusted input

(S603)


[warning] 130-131: Use a single if statement instead of nested if statements

(SIM102)


[warning] 135-135: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


[warning] 136-136: Do not catch blind exception: Exception

(BLE001)


[warning] 138-138: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


[warning] 163-163: Do not catch blind exception: Exception

(BLE001)

🔇 Additional comments (3)

main.py (2)

3-7: Import reorder is clean and behavior-preserving.

No functional concerns here; this keeps module structure tidy without changing CLI behavior.


11-38: Spacing and top-level separation look good.

The added blank-line separation improves readability and does not alter command/callback execution.

release (1)

96-96: Release success output is fine.

Message is clear and consistent; no issues in this updated segment.

Comment on lines +137 to +145
add_db_to_json(
    path,
    {
        "name": friendly_name,
        "database": container_path,
        "type": db_type,
        "generated_id": str(uuid.uuid4()),
    },
)

⚠️ Potential issue | 🟠 Major

Defer add_db_to_json(...) until after the final confirmation.

These writes happen before Lines 431-435. If the user cancels at the preview step, the command still leaves persisted DB entries behind, so the new confirmation gate is not actually non-destructive. Keep the collected database definitions in memory and flush them only after approval.

Also applies to: 158-169, 201-209, 237-249, 279-291, 324-336, 358-370, 431-435

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@commands/agent.py` around lines 137 - 145, The code currently calls
add_db_to_json(...) immediately when building each DB entry; change this so you
accumulate the DB dicts in-memory (e.g., a list named pending_dbs) instead of
calling add_db_to_json in the discovery/collection sites (the current immediate
calls to add_db_to_json in the blocks that create the {"name": friendly_name,
"database": container_path, "type": db_type, "generated_id": ...} dicts). Remove
those immediate writes and append the same dicts to pending_dbs; then, after the
final preview/confirmation step where the user approves (the
confirmation/approval branch), iterate pending_dbs and call add_db_to_json(...)
for each entry (preserving the same payload structure and uuid generation if not
already set). Ensure all the other locations that currently call add_db_to_json
are converted to this pattern so entries are only persisted after explicit
approval.
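A minimal sketch of the deferred-write pattern the prompt describes, assuming add_db_to_json keeps its current signature (collect_db and flush_pending are hypothetical helper names):

```python
import uuid

# Entries are staged in memory instead of being written immediately.
pending_dbs: list[dict] = []


def collect_db(friendly_name: str, container_path: str, db_type: str) -> None:
    """Stage a database entry; nothing touches disk yet."""
    pending_dbs.append(
        {
            "name": friendly_name,
            "database": container_path,
            "type": db_type,
            "generated_id": str(uuid.uuid4()),
        }
    )


def flush_pending(path, add_db_to_json, confirmed: bool) -> int:
    """Persist staged entries only after the user confirmed the preview."""
    if not confirmed:
        pending_dbs.clear()  # cancelling leaves no entries behind
        return 0
    for entry in pending_dbs:
        add_db_to_json(path, entry)
    written = len(pending_dbs)
    pending_dbs.clear()
    return written
```

This keeps the confirmation gate genuinely non-destructive: aborting at the preview step discards the staged list instead of leaving orphaned entries in databases.json.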

Comment on lines +147 to +156
db_name = Prompt.ask("Database Name")
host = Prompt.ask("Host", default="localhost")
port = IntPrompt.ask(
    "Port",
    default=5432
    if db_type == "postgresql"
    else (3306 if db_type in ["mysql", "mariadb"] else 27017),
)
user = Prompt.ask("Username")
password = Prompt.ask("Password", password=True)

⚠️ Potential issue | 🟠 Major

Existing MongoDB should not require credentials unconditionally.

This branch always asks for Username and Password, but the local-container path already supports a standard MongoDB variant with empty credentials at Lines 358-366. Right now an existing no-auth Mongo instance cannot be configured here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@commands/agent.py` around lines 147 - 156, The code always prompts for
Username and Password regardless of db_type, preventing configuring a no-auth
MongoDB; change the flow around the existing Prompt.ask calls so credentials are
requested only when needed: for non-MongoDB types (postgresql/mysql/mariadb)
always prompt for user/password, and for "mongodb" prompt for an additional
Confirm/Prompt (e.g., "Require authentication?") and only ask
Prompt.ask("Username") and Prompt.ask("Password", password=True) when that
confirm is true; update the block that defines db_name, host, port, user,
password and use db_type and the new auth-confirm to gate the credential
prompts.
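The gating logic can be reduced to a small, testable predicate; the helper name and the surrounding prompt flow shown in the comments are assumptions:

```python
def needs_credentials(db_type: str, mongo_auth: bool = True) -> bool:
    """Decide whether the CLI should prompt for Username/Password.

    SQL engines (postgresql/mysql/mariadb) always require credentials;
    MongoDB only when the user answered yes to an explicit
    "Require authentication?" confirm.
    """
    if db_type == "mongodb":
        return mongo_auth
    return True


# In the command itself the gate would wrap the existing rich prompts,
# roughly (sketch, not the real code):
#
#     mongo_auth = Confirm.ask("Require authentication?", default=True)
#     if needs_credentials(db_type, mongo_auth):
#         user = Prompt.ask("Username")
#         password = Prompt.ask("Password", password=True)
#     else:
#         user, password = "", ""
```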

Comment on lines +77 to +85
env_vars.update(
    {
        "POSTGRES_DB": db_name,
        "POSTGRES_USER": db_user,
        "POSTGRES_PASSWORD": db_pass,
        "POSTGRES_HOST": db_host,
        "DATABASE_URL": f"postgresql://{db_user}:{db_pass}@{db_host}:{db_port}/{db_name}?schema=public",
        "PG_PORT": str(db_port),
    }

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, find and examine the file
find . -name "dashboard.py" -path "*/commands/*" -type f

Repository: Portabase/cli

Length of output: 80


🏁 Script executed:

# Read the dashboard.py file around lines 77-85
if [ -f "commands/dashboard.py" ]; then
  wc -l commands/dashboard.py
  sed -n '70,95p' commands/dashboard.py
fi

Repository: Portabase/cli

Length of output: 1221


🏁 Script executed:

# Check for any existing imports related to URL encoding
rg "urllib|quote" commands/dashboard.py -A 2 -B 2

Repository: Portabase/cli

Length of output: 39


🏁 Script executed:

# Search for how DATABASE_URL is used elsewhere in the codebase
rg "DATABASE_URL" . --type py -B 2 -A 2

Repository: Portabase/cli

Length of output: 1186


🏁 Script executed:

# Check the context around the hardcoded DATABASE_URL to see if it's also part of this change
sed -n '50,70p' commands/dashboard.py

Repository: Portabase/cli

Length of output: 805


🏁 Script executed:

# Verify PostgreSQL URI encoding requirements through standard documentation
# PostgreSQL connection strings follow RFC 3986, which requires special characters to be percent-encoded
python3 <<'EOF'
# Demonstrate the issue with special characters in PostgreSQL DSN
import re
from urllib.parse import quote, urlparse, parse_qs

# Test DSN with special characters that break parsing
test_password = "p@ss:word#123"
broken_dsn = f"postgresql://user:{test_password}@localhost:5432/db?schema=public"

print("=== Testing PostgreSQL DSN Parsing ===")
print(f"Password with special chars: {test_password}")
print(f"Broken DSN: {broken_dsn}")

# Try to parse it
try:
    parsed = urlparse(broken_dsn)
    print(f"\nParsed netloc: {parsed.netloc}")
    print(f"Parsed hostname: {parsed.hostname}")
    print(f"Parsed port: {parsed.port}")
    print(f"Parsed password: {parsed.password}")
    print("\nNote: The @ in password breaks the parsing - password includes everything before the last @")
except Exception as e:
    print(f"Error: {e}")

print("\n=== Correct approach with percent-encoding ===")
encoded_user = quote("user", safe='')
encoded_pass = quote(test_password, safe='')
correct_dsn = f"postgresql://{encoded_user}:{encoded_pass}@localhost:5432/db?schema=public"
print(f"Encoded DSN: {correct_dsn}")

try:
    parsed = urlparse(correct_dsn)
    print(f"Parsed hostname: {parsed.hostname}")
    print(f"Parsed port: {parsed.port}")
    print(f"Parsed password: {parsed.password}")
    print("✓ Parsing works correctly with encoding")
except Exception as e:
    print(f"Error: {e}")
EOF

Repository: Portabase/cli

Length of output: 575


Percent-encode external DB credentials in DATABASE_URL.

Line 83 interpolates raw db_user and db_pass into the URI. Passwords containing @, :, /, or # will produce an invalid DSN even though the prompt accepted them. The special characters break URI parsing (e.g., @ is interpreted as the user/host delimiter). Percent-encode the credentials before composing the URL using urllib.parse.quote().

🔐 Minimal fix
+from urllib.parse import quote
...
-                "DATABASE_URL": f"postgresql://{db_user}:{db_pass}@{db_host}:{db_port}/{db_name}?schema=public",
+                "DATABASE_URL": (
+                    f"postgresql://{quote(db_user, safe='')}:{quote(db_pass, safe='')}@"
+                    f"{db_host}:{db_port}/{db_name}?schema=public"
+                ),
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
env_vars.update(
    {
        "POSTGRES_DB": db_name,
        "POSTGRES_USER": db_user,
        "POSTGRES_PASSWORD": db_pass,
        "POSTGRES_HOST": db_host,
        "DATABASE_URL": f"postgresql://{db_user}:{db_pass}@{db_host}:{db_port}/{db_name}?schema=public",
        "PG_PORT": str(db_port),
    }
env_vars.update(
    {
        "POSTGRES_DB": db_name,
        "POSTGRES_USER": db_user,
        "POSTGRES_PASSWORD": db_pass,
        "POSTGRES_HOST": db_host,
        "DATABASE_URL": (
            f"postgresql://{quote(db_user, safe='')}:{quote(db_pass, safe='')}@"
            f"{db_host}:{db_port}/{db_name}?schema=public"
        ),
        "PG_PORT": str(db_port),
    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@commands/dashboard.py` around lines 77 - 85, The DATABASE_URL construction in
env_vars.update currently interpolates raw db_user and db_pass (used in the
"DATABASE_URL" value) which can break DSN parsing for special characters; import
urllib.parse.quote and percent-encode db_user and db_pass (e.g., use
quote(db_user, safe='') and quote(db_pass, safe='')) before composing the
f-string so the DATABASE_URL uses the encoded credentials while leaving db_host,
db_port, and db_name unchanged; update the env_vars.update block that sets
"DATABASE_URL" to use the encoded_user and encoded_pass variables.

Comment on lines +95 to +104
db_name = Prompt.ask("Database Name")
host = Prompt.ask("Host", default="localhost")
port = IntPrompt.ask(
    "Port",
    default=5432
    if db_type == "postgresql"
    else (3306 if db_type in ["mysql", "mariadb"] else 27017),
)
user = Prompt.ask("Username")
password = Prompt.ask("Password", password=True)

⚠️ Potential issue | 🟠 Major

Allow unauthenticated MongoDB in existing mode.

This path requires Username and Password for every MongoDB connection, but the new-container flow already supports a standard MongoDB variant with no credentials. As written, an existing no-auth Mongo instance cannot be configured here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@commands/db.py` around lines 95 - 104, The current prompts always require
Username and Password, preventing configuration of no-auth MongoDB in existing
mode; modify the logic around db_type and the Prompt.ask calls so that when
db_type == "mongodb" (and in the existing flow) Username and Password are
optional: either skip prompting or allow empty defaults (e.g.,
Prompt.ask("Username", default="") and Prompt.ask("Password", password=True,
default="")), and ensure subsequent code that builds the connection string or
client only includes credentials when user/password are non-empty.

Comment on lines +139 to +175
if db_engine == "sqlite":
    db_name = Prompt.ask("Database Name", default="local")
    if not db_name.endswith(".sqlite"):
        db_name += ".sqlite"

    compose_path = path / "docker-compose.yml"
    if compose_path.exists():
        with open(compose_path, "r") as f:
            lines = f.readlines()

        new_lines = []
        in_app_service = False
        in_volumes = False
        for line in lines:
            new_lines.append(line)
            if "app:" in line:
                in_app_service = True
            if in_app_service and "volumes:" in line:
                in_volumes = True
            if in_volumes and "- ./databases.json" in line:
                new_lines.append(f" - ./{db_name}:/config/{db_name}\n")
                in_volumes = False
                in_app_service = False

        with open(compose_path, "w") as f:
            f.writelines(new_lines)

    add_db_to_json(
        path,
        {
            "name": db_name,
            "database": f"/config/{db_name}",
            "type": "sqlite",
            "generated_id": str(uuid.uuid4()),
        },
    )
    console.print(f"[success]✔ Added SQLite database ({db_name})[/success]")

⚠️ Potential issue | 🔴 Critical

Return after the SQLite new branch.

After Line 175, execution falls through into Lines 249-290 with an empty service_name and empty credentials. That adds a second malformed sqlite entry (host=localhost, port=0) and generates an invalid -data volume stanza. Short-circuit here, or guard the shared compose/env block with if db_engine != "sqlite".

🧰 Tools
🪛 Ruff (0.15.6)

[warning] 146-146: Unnecessary mode argument

Remove mode argument

(UP015)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@commands/db.py` around lines 139 - 175, The SQLite branch currently continues
execution and falls through into the shared compose/env handling, producing a
second malformed DB entry and invalid volume stanza; modify the code around the
db_engine == "sqlite" branch (where compose_path is read/written and
add_db_to_json is called) to either return immediately after completing the
sqlite flow or wrap the later shared compose/env block (the code that uses
service_name and credentials) with an explicit if db_engine != "sqlite" guard so
the sqlite path does not execute the shared logic (refer to symbols db_engine,
compose_path, add_db_to_json, service_name).
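The control-flow fix can be illustrated with a stripped-down sketch; the function and parameter names here are invented to show the early return, not the real command body:

```python
def add_database(db_engine: str, shared_steps, sqlite_steps) -> str:
    """Illustrate the control flow: the sqlite branch returns early so
    the shared compose/env handling (which assumes a service_name and
    credentials) never runs for file-based databases."""
    if db_engine == "sqlite":
        sqlite_steps()
        return "sqlite-done"  # early return: no fall-through
    shared_steps()
    return "service-done"
```

The equivalent alternative, as the comment notes, is wrapping the shared block in `if db_engine != "sqlite":`; the early return keeps nesting flatter.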

Comment on lines +260 to +272
vol_snippet = f" {service_name}-data:\n"
vol_pos = new_content.find("volumes:")
if vol_pos != -1:
    end_of_volumes = new_content.find("networks:", vol_pos)
    if end_of_volumes == -1:
        end_of_volumes = len(new_content)
    new_content = (
        new_content[:end_of_volumes]
        + vol_snippet
        + new_content[end_of_volumes:]
    )
else:
    new_content += f"\nvolumes:\n{vol_snippet}"

⚠️ Potential issue | 🔴 Critical

Insert the named volume into the root volumes: block.

new_content.find("volumes:") will hit the app service’s bind-mount section first—the same service-level block that the generated agent compose uses for ./databases.json:/config/config.json. Inserting {service_name}-data: there corrupts the YAML instead of extending the top-level volumes: map. Anchor the search to an unindented volumes: section or rewrite the compose as YAML.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@commands/db.py` around lines 260 - 272, The current logic inserts vol_snippet
by finding the first occurrence of "volumes:" in new_content which may match an
indented service-level bind-mount and corrupt the YAML; update the code that
computes vol_pos/end_of_volumes (the block using new_content.find("volumes:"),
vol_snippet, end_of_volumes, and service_name) to instead locate the top-level
(unindented) "volumes:" anchor (for example by using a regex that matches
^volumes: or by parsing new_content with a YAML parser, adding service_name +
"-data" under the root volumes mapping, and serializing back). Ensure you only
modify the root-level volumes map and not any indented service-level volumes so
the inserted "  {service_name}-data:" goes into the top-level volumes section.
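A sketch of the regex-anchored approach the prompt suggests (add_named_volume is a hypothetical helper; the real code manipulates new_content inline):

```python
import re


def add_named_volume(compose_text: str, service_name: str) -> str:
    """Add "{service_name}-data:" under the top-level volumes: mapping.

    The regex is anchored to an unindented "volumes:" line, so an
    indented, service-level volumes section (bind mounts) is never
    matched.
    """
    snippet = f"  {service_name}-data:\n"
    match = re.search(r"^volumes:[ \t]*$", compose_text, flags=re.MULTILINE)
    if match is None:
        # No root-level block yet: append one at the end of the file.
        sep = "" if compose_text.endswith("\n") else "\n"
        return f"{compose_text}{sep}\nvolumes:\n{snippet}"
    line_end = compose_text.find("\n", match.end())
    if line_end == -1:  # "volumes:" is the last line, without a newline
        return compose_text + "\n" + snippet
    insert_at = line_end + 1
    return compose_text[:insert_at] + snippet + compose_text[insert_at:]
```

As the comment says, parsing the file with a YAML library and serializing it back would be more robust still; the regex anchor is the minimal string-surgery fix.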

Comment on lines +145 to +152
if latest_tag < current and not (
is_prerelease(current) and not is_prerelease(latest_tag)
):
console.print(
f"[warning]⚠ Current version ({current}) appears to be older than the latest remote version ({latest_tag}).[/warning]"
)
if not typer.confirm("Do you want to continue with the update ?"):
return

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's look at the updater.py file around the lines mentioned
head -160 core/updater.py | tail -30

Repository: Portabase/cli

Length of output: 994


🏁 Script executed:

# Check the imports in updater.py to understand version handling
head -30 core/updater.py

Repository: Portabase/cli

Length of output: 748


🏁 Script executed:

# Search for version-related functions and utilities
rg -i "version" core/updater.py -B 2 -A 2

Repository: Portabase/cli

Length of output: 1769


🏁 Script executed:

# Check if there's a versions module or utility in the project
fd -t f "version" . --exclude ".git" | grep -E "\.(py|toml|txt)$"

Repository: Portabase/cli

Length of output: 39


🏁 Script executed:

# Search for semantic version handling across the project
rg -i "semver\|packaging.*version\|distutils.*version" . -t py

Repository: Portabase/cli

Length of output: 39


🏁 Script executed:

# Let's check the entire updater.py to see if there are any version parsing utilities used elsewhere
cat -n core/updater.py | head -80

Repository: Portabase/cli

Length of output: 2620


🏁 Script executed:

# Check if packaging or any version comparison is done elsewhere in the codebase
rg "from packaging\|import packaging\|from distutils\|StrictVersion\|LooseVersion" . -t py

Repository: Portabase/cli

Length of output: 39


🏁 Script executed:

# Let's check the core/utils.py to see how current_version() is implemented
rg "def current_version" . -A 5 -t py

Repository: Portabase/cli

Length of output: 267


🏁 Script executed:

# Check all Python dependencies to see what's available
find . -name "requirements*.txt" -o -name "setup.py" -o -name "pyproject.toml" | head -5

Repository: Portabase/cli

Length of output: 73


🏁 Script executed:

# Read the project dependencies
cat pyproject.toml 2>/dev/null || cat setup.py 2>/dev/null || cat requirements.txt 2>/dev/null

Repository: Portabase/cli

Length of output: 384


🏁 Script executed:

# Let's see the full current_version() function implementation
rg "def current_version" . -A 15 -t py

Repository: Portabase/cli

Length of output: 830


🏁 Script executed:

# Let's check the git tags to see what version format is used
git tag -l 2>/dev/null | head -20

Repository: Portabase/cli

Length of output: 39


🏁 Script executed:

# Let's create a test to verify the lexicographic comparison issue with semantic versions
python3 <<'EOF'
# Test lexicographic string comparison with semantic versions
test_cases = [
    ("2.10.0", "2.9.0"),  # 2.10 is newer but sorts lower lexicographically
    ("1.9.1", "1.10.0"),  # 1.10 is newer but sorts lower lexicographically
    ("10.0.0", "2.0.0"),  # 10.0 is newer but sorts lower lexicographically
    ("2.9.0", "2.10.0"),  # 2.10 is newer, yet lexicographically "2.9.0" < "2.10.0" is False
]

print("Lexicographic comparison issues (string < comparison):")
for v1, v2 in test_cases:
    result = v1 < v2
    # Semantically, we'd expect proper ordering
    print(f"'{v1}' < '{v2}' = {result}")
    if v1 == "2.10.0" and v2 == "2.9.0":
        print(f"  ^ BUG: 2.10.0 should be > 2.9.0 semantically, but lexicographically it's {result}")
EOF

Repository: Portabase/cli

Length of output: 297


🏁 Script executed:

# Check if there are any version-related tests
find . -name "*test*" -type f -name "*.py" | xargs grep -l "version\|update" 2>/dev/null | head -5

Repository: Portabase/cli

Length of output: 39


Use semantic version comparison, not lexicographic string comparison.

latest_tag < current compares versions as plain strings, causing incorrect ordering: "2.10.0" < "2.9.0" evaluates to True lexicographically, even though 2.10.0 is semantically newer. This breaks the downgrade guard—it may suppress a legitimate downgrade warning or incorrectly trigger one on a valid upgrade, allowing the wrong release to be installed without an accurate prompt.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/updater.py` around lines 145 - 152, Replace the lexicographic comparison
of version strings with semantic version comparison: import and use
packaging.version.parse (or packaging.version.Version) to parse latest_tag and
current into Version objects and change the condition to check if
parse(latest_tag) > parse(current) (preserving the existing prerelease guard
with is_prerelease). Update the if at the top of the block (the one referencing
latest_tag, current, and is_prerelease) to use parsed versions, and keep the
console.print and typer.confirm behavior unchanged.
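The fix is a one-line change once packaging is available (verify it is actually declared as a dependency of this project); needs_downgrade_warning below is a hypothetical helper that mirrors the guard under review, mapping is_prerelease onto Version.is_prerelease:

```python
from packaging.version import Version, parse

# Lexicographic string comparison gets the ordering wrong:
assert ("2.10.0" < "2.9.0") is True   # string order claims 2.10.0 is "older"

# packaging compares each component numerically:
assert Version("2.10.0") > Version("2.9.0")

def needs_downgrade_warning(latest_tag: str, current: str) -> bool:
    """Warn only when the local version is genuinely ahead of the remote tag,
    unless we are merely ahead because the local build is a prerelease."""
    lt, cur = parse(latest_tag), parse(current)
    return lt < cur and not (cur.is_prerelease and not lt.is_prerelease)

assert needs_downgrade_warning("2.9.0", "2.10.0") is True
assert needs_downgrade_warning("2.10.0", "2.9.0") is False
```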

Comment on lines +90 to +91
elif os_type == "Windows":
subprocess.run(["start", "docker"], shell=True, check=True)

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find . -name "core" -type d

Repository: Portabase/cli

Length of output: 63


🏁 Script executed:

find . -name "utils.py" -type f | head -5

Repository: Portabase/cli

Length of output: 72


🏁 Script executed:

cat -n core/utils.py | head -110

Repository: Portabase/cli

Length of output: 4776


🌐 Web query:

Windows start command docker desktop vs docker cli behavior subprocess

💡 Result:

The Windows start command (e.g., start /b docker run ...) exhibits different subprocess behavior compared to running docker run directly via the Docker CLI, particularly when Docker Desktop (WSL2 backend) is involved versus a native Docker CLI setup (e.g., Linux or direct Engine).


🌐 Web query:

how to programmatically start Docker Desktop on Windows subprocess

💡 Result:

To programmatically start Docker Desktop on Windows from a subprocess, use the official Docker CLI command docker desktop start, available since Docker Desktop 4.37.0; this is the recommended approach in the official documentation. In PowerShell (preferred for subprocess control): Start-Process docker -ArgumentList "desktop", "start" -NoNewWindow -Wait, or simply docker desktop start. Use the -d or --detach flag if you do not want to wait synchronously. For older versions (< 4.37) or a direct launch: Start-Process "C:\Program Files\Docker\Docker\Docker Desktop.exe". Ensure the Docker CLI is on PATH and the user is in the docker-users group; run PowerShell as administrator if service startup is needed. Verify with docker desktop status. These examples work in scripts, CI/CD, and automation.


🌐 Web query:

Windows "start docker" command Docker Desktop daemon application

💡 Result:

The command to start Docker Desktop (which runs the Docker daemon) on Windows is docker desktop start, part of the Docker Desktop CLI in recent versions (4.37+). Usage:
- Synchronous start: docker desktop start
- Detached start (background): docker desktop start -d
This starts the Docker Desktop application and its daemon. Ensure Docker Desktop is installed and the Docker CLI is on PATH. Alternative methods (older or manual):
- Launch "C:\Program Files\Docker\Docker\Docker Desktop.exe"
- For the standalone daemon (limited to Windows containers): dockerd.exe (requires admin)
Official reference: https://docs.docker.com/reference/cli/docker/desktop/start


Fix the Windows Docker Desktop startup command.

On Windows, start docker launches the Docker CLI in a new shell window; it does not start Docker Desktop. This approach fails to bring up the daemon, so the retry loop times out. Use the official docker desktop start command (Docker Desktop 4.37.0+) or launch the Docker Desktop executable directly at C:\Program Files\Docker\Docker\Docker Desktop.exe.

🧰 Tools
🪛 Ruff (0.15.6)

[error] 91-91: subprocess call with shell=True identified, security issue

(S602)


[error] 91-91: Starting a process with a partial executable path

(S607)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/utils.py` around lines 90 - 91, The Windows branch that currently runs
subprocess.run(["start", "docker"], shell=True, check=True) should be changed to
invoke Docker Desktop properly: call subprocess.run(["docker", "desktop",
"start"], check=True) for newer Docker CLI versions (or fallback to launching
the executable at "C:\\Program Files\\Docker\\Docker\\Docker Desktop.exe" if the
CLI command fails), and remove shell=True when passing an argument list; update
the code in the block handling os_type == "Windows" (the branch using os_type
and subprocess.run) to try the CLI command first and fall back to the executable
path with proper error handling and retries.
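A minimal sketch of the suggested Windows branch, assuming Docker Desktop 4.37+ for the CLI path; start_docker_windows is an illustrative helper, not the project's actual start_docker function:

```python
import subprocess

# Default Docker Desktop install location on Windows (assumption: standard install).
DESKTOP_EXE = r"C:\Program Files\Docker\Docker\Docker Desktop.exe"

def start_docker_windows() -> None:
    """Try the official CLI subcommand first, then fall back to the executable."""
    try:
        # 'docker desktop start' exists on Docker Desktop 4.37+; passing an
        # argument list avoids shell=True (and the Ruff S602 finding).
        subprocess.run(["docker", "desktop", "start"], check=True)
    except (FileNotFoundError, subprocess.CalledProcessError):
        # Older installs: launch Docker Desktop directly and let the caller's
        # existing retry loop poll the daemon until it responds.
        subprocess.Popen([DESKTOP_EXE])
```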

Comment on lines +161 to +162
required_fields = ["serverUrl", "agentId", "masterKeyB64"]
return all(field in data for field in required_fields)

⚠️ Potential issue | 🟡 Minor

Require a mapping before checking Edge Key fields.

all(field in data for field in required_fields) also succeeds for lists/strings that merely contain those tokens, so malformed keys can be accepted. Check isinstance(data, dict) before testing the required keys.

🛡️ Minimal fix
-        required_fields = ["serverUrl", "agentId", "masterKeyB64"]
-        return all(field in data for field in required_fields)
+        required_fields = {"serverUrl", "agentId", "masterKeyB64"}
+        return isinstance(data, dict) and required_fields.issubset(data)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
required_fields = ["serverUrl", "agentId", "masterKeyB64"]
return all(field in data for field in required_fields)
required_fields = {"serverUrl", "agentId", "masterKeyB64"}
return isinstance(data, dict) and required_fields.issubset(data)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/utils.py` around lines 161 - 162, The validator currently checks
required_fields = ["serverUrl", "agentId", "masterKeyB64"] with return all(field
in data for field in required_fields), which wrongly passes for lists/strings;
update the check (in the function that performs this validation in
core/utils.py) to first ensure data is a dict via isinstance(data, dict) and
return False if not, then perform the all(field in data for field in
required_fields) check so only mappings are accepted.

@Asuniia Asuniia merged commit d1d8db9 into main Mar 25, 2026
1 check passed