feat(python): add Python worker support - POC #2661
base: main
Conversation
- Add Pydantic schemas matching TypeScript message types
- Create transport-agnostic IpcConnection interface
- Implement StdioIpcConnection for line-delimited JSON communication
Walkthrough

This pull request introduces comprehensive Python runtime support to the Trigger.dev platform. It adds a new Python SDK package with task registration, IPC communication, and schema definitions; integrates Python worker execution into the CLI v3 build system, deployment infrastructure, and dev supervisor; implements gRPC and stdio-based IPC protocols for worker-coordinator communication via a new protobuf schema; creates Python entry point workers for task indexing and execution; and includes build tooling for Python bundling and requirements.txt parsing. The changes span configuration files, TypeScript/JavaScript implementations for the CLI, a complete Python SDK package, Python worker scripts, and integration tests.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Areas requiring extra attention:
Pre-merge checks: ❌ failed (1 warning), ✅ passed (2)
Actionable comments posted: 16
🧹 Nitpick comments (19)
packages/python-sdk/README.md (1)
21-28: Consider expanding the Features section with usage examples.

The Features section lists capabilities (retry, queue, duration limits) but doesn't show how to use them. For a POC this is acceptable, but consider adding brief examples for each feature to guide users, especially for the non-trivial ones like retry and queue configuration.
Example addition:

## Features

- **Task registration with decorator API**: Simple `@task(id)` decorator for task registration.
- **Support for both sync and async functions**: Tasks can be synchronous or asynchronous.
- **Retry configuration**: Configure automatic retries on task failure.

```python
@task("my-task", retry={"max_attempts": 3})
async def my_task(payload): ...
```

- **Queue configuration**: Route tasks to specific queues.
- **Task duration limits**: Set `max_duration` for task execution.
- **Type-safe with Pydantic models**: Leverage Pydantic for payload validation.
Given the POC nature of this PR and the author's acknowledgment that it needs polishing, this can be deferred to a follow-up PR.

packages/python-sdk/trigger_sdk/telemetry.py (2)

24-39: Consider validating the version field and simplifying the error message.

The parsing logic is sound, but two minor improvements could enhance robustness:

1. The W3C traceparent spec expects version "00" but the code doesn't validate `parts[0]`.
2. The static analysis hint suggests using a shorter exception message to follow Python best practices.

Apply this diff to address both points:

```diff
 @classmethod
 def from_traceparent(cls, traceparent: str) -> "TraceContext":
     """
     Parse W3C traceparent header.
     Format: 00-{trace_id}-{span_id}-{flags}
     """
     parts = traceparent.split("-")
     if len(parts) != 4:
-        raise ValueError(f"Invalid traceparent format: {traceparent}")
+        raise ValueError("Invalid traceparent format")
+
+    if parts[0] != "00":
+        raise ValueError("Unsupported traceparent version")

     return cls(
         trace_id=parts[1],
         span_id=parts[2],
         trace_flags=parts[3],
     )
```
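Assembled into a standalone sketch (mirroring the SDK's `TraceContext` but simplified for illustration, not the shipped code), the suggested parsing plus version check behaves like this:

```python
from dataclasses import dataclass


@dataclass
class TraceContext:
    trace_id: str
    span_id: str
    trace_flags: str

    @classmethod
    def from_traceparent(cls, traceparent: str) -> "TraceContext":
        """Parse a W3C traceparent header: 00-{trace_id}-{span_id}-{flags}."""
        parts = traceparent.split("-")
        if len(parts) != 4:
            raise ValueError("Invalid traceparent format")
        # Only version "00" is defined by the current W3C Trace Context spec
        if parts[0] != "00":
            raise ValueError("Unsupported traceparent version")
        return cls(trace_id=parts[1], span_id=parts[2], trace_flags=parts[3])


ctx = TraceContext.from_traceparent(
    "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
)
print(ctx.trace_id)  # 4bf92f3577b34da6a3ce929d0e0e4736
```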
54-61: Consider validating all required fields, not just trace_id.

The method only checks `trace_id` before calling `to_traceparent()`, but a valid traceparent requires all three fields (trace_id, span_id, trace_flags). If `to_traceparent()` is updated with validation as suggested, this inconsistency will cause unexpected exceptions.

Apply this diff to check all required fields:

```diff
 def inject_env(self) -> Dict[str, str]:
     """Get environment variables for propagation"""
-    if not self.trace_id:
+    if not all([self.trace_id, self.span_id, self.trace_flags]):
         return {}

     return {
         "TRACEPARENT": self.to_traceparent(),
     }
```

packages/python-sdk/setup.py (1)
1-10: Consider pyproject.toml-only packaging for modern Python projects.

This minimal `setup.py` with `find_packages()` is valid, but modern Python projects (PEP 517/518) can rely entirely on `pyproject.toml` without needing `setup.py`. Since this appears to be a new package and you have a `pyproject.toml`, you might be able to remove this file unless you need backward compatibility with older tools.

packages/core/src/v3/build/runtime.ts (1)
38-39: Consider verifying python3 availability.

The hardcoded `"python3"` path assumes `python3` is in the system PATH. While this is standard on most modern systems, some environments may only have `python` available.

Consider whether you need fallback logic or environment variable support (similar to how Bun handles `BUN_INSTALL`). You could verify this works correctly across your target deployment environments.

packages/cli-v3/src/indexing/indexWorkerManifest.ts (1)
115-131: Consider logging unparsable output for debugging.

The empty catch block at lines 127-129 silently ignores non-JSON or non-IPC lines. This can make troubleshooting difficult during development. Consider logging ignored lines at debug level.

Apply this diff to add debug logging:

```diff
       } catch {
-        // Not JSON or not a message, ignore (probably a log)
+        logger.debug("Python stdout: non-IPC line", { line });
       }
```

Additionally, consider wrapping the `parseMessageFromCatalog` call (line 123) in a try-catch, as it can throw on invalid message structures:

```diff
 for (const line of lines) {
   try {
     const parsed = JSON.parse(line);

     // Check if this is an IPC message (not a log)
     if (parsed.type && parsed.version) {
-      const message = parseMessageFromCatalog(parsed, indexerToWorkerMessages);
-      // Trigger the same handler as IPC messages
-      child.emit("message", message);
+      try {
+        const message = parseMessageFromCatalog(parsed, indexerToWorkerMessages);
+        // Trigger the same handler as IPC messages
+        child.emit("message", message);
+      } catch (error) {
+        logger.error("Failed to parse Python IPC message", { line, error });
+      }
     }
   } catch {
     logger.debug("Python stdout: non-IPC line", { line });
   }
 }
```

packages/python-sdk/pyproject.toml (1)
1-23: LGTM: Configuration is appropriate for a POC.

The dependency versions and Python requirement (>=3.10) are reasonable for an initial implementation. For a production release, consider:

- Adding upper bounds to prevent breaking changes (e.g., `pydantic>=2.0.0,<3.0.0`)
- Using a lock file (e.g., `requirements.lock` or Poetry's `poetry.lock`) to ensure reproducible builds
- Verifying `grpcio>=1.68.0` compatibility across target deployment environments

packages/cli-v3/package.json (1)
76-76: Consider cross-platform compatibility for the copy-python script.

The current script uses POSIX commands (`mkdir -p`, `cp`) that may not work on Windows without WSL or Git Bash. Consider using `cpy-cli` (already in devDependencies) for better cross-platform support:

```diff
-    "copy-python": "mkdir -p dist/esm/entryPoints/python && cp src/entryPoints/python/*.py dist/esm/entryPoints/python/",
+    "copy-python": "cpy 'src/entryPoints/python/*.py' 'dist/esm/entryPoints/python/'",
```

packages/cli-v3/tests/python-ipc.test.ts (2)
11-51: Consider ensuring cleanup in all scenarios.

The test calls `pythonProcess.cleanup()` at line 50, but if the assertions at lines 47-48 fail, cleanup might not occur. Consider using `afterEach` or a try/finally block to guarantee cleanup:

```diff
+  let pythonProcess: PythonProcess | undefined;
+
+  afterEach(async () => {
+    await pythonProcess?.cleanup();
+  });
+
   it("can spawn Python worker and communicate", async () => {
     const indexWorker = path.join(__dirname, "../src/entryPoints/python/managed-index-worker.py");
     const manifestPath = path.join(__dirname, "fixtures/test-manifest.json");

-    const pythonProcess = new PythonProcess({
+    pythonProcess = new PythonProcess({
       workerScript: indexWorker,
       env: {
         TRIGGER_BUILD_MANIFEST_PATH: manifestPath,
         PYTHONPATH: path.join(__dirname, "../../python-sdk"),
       },
     });

     // ... rest of test ...

     expect(result).toHaveProperty("payload");
     expect((result as any).payload.manifest.tasks.length).toBeGreaterThan(0);
-
-    await pythonProcess.cleanup();
   });
```
53-117: Ensure cleanup runs reliably with `afterEach` hook.

The task file path at line 90 is correct; `tests/python/test-task.py` exists in the repository.

However, cleanup at line 116 may not execute if assertions fail. Move the cleanup to an `afterEach` hook to guarantee it runs regardless of test outcome.

packages/cli-v3/src/entryPoints/dev-run-controller.ts (1)
601-651: Python execution path has code duplication but is architecturally sound; metrics parameter is intentional.

The completion handling logic does duplicate between the Python path (lines 622-648) and the Node.js path (lines 746-772). This is acceptable for a proof-of-concept but should be refactored before production.

Key findings:

- Metrics are intentional by design: `PythonTaskRunner.executeTask` accepts only `execution` (no metrics parameter), unlike `taskRunProcess.execute`. This is not an oversight; the method signature doesn't support metrics.
- Code duplication is real: both paths follow identical completion patterns (validate IDs → `completeRunAttempt` → `handleCompletionResult` → error handling). Extract a helper method like `submitAndHandleCompletion()` to reduce duplication.
- Isolation is good: the Python path is well-isolated with an early return at line 651, preventing fallthrough to the Node.js pool logic.

Consider extracting shared completion logic into a helper method for both paths; the refactoring example in the original suggestion remains valid.
packages/python-sdk/trigger_sdk/ipc/stdio.py (1)
103-110: Await awaitables from handler wrappers.

`asyncio.iscoroutinefunction()` only returns `True` for bare coroutine defs; if callers register a wrapped/partial async handler, we'll treat it as sync, call it, and drop the returned coroutine (raising "coroutine was never awaited" and losing the message). Switching to "call first, then await if the result is awaitable" covers both sync and async handlers without forcing consumers to pass bare functions.

Apply this diff to handle both cases:

```diff
-        if asyncio.iscoroutinefunction(handler):
-            await handler(message)
-        else:
-            handler(message)
+        result = handler(message)
+        if inspect.isawaitable(result):
+            await result
```

Add at the top of the file:

```diff
-from typing import Any, Callable, Dict
+from typing import Any, Callable, Dict
+import inspect
```

packages/python-sdk/trigger_sdk/schemas/__init__.py (1)
51-88: Sort the public export list.

Ruff is flagging `__all__` (RUF022). Please sort the names alphabetically to satisfy the lint rule and keep the surface tidy.

packages/cli-v3/src/ipc/grpcServer.ts (4)
87-87: Redundant stream.end() call.

The `stream.end()` call is redundant since the 'end' event indicates the stream has already ended. While harmless, removing it would make the intent clearer.

Apply this diff:

```diff
     stream.on('end', () => {
       logger.debug('Worker stream ended', { connectionId });
       this.streams.delete(connectionId);
       this.emit('disconnect', connectionId);
-      stream.end();
     });
```
64-64: Consider using crypto.randomUUID() for connection IDs.

The current approach using `Date.now()` and `Math.random()` works but could theoretically produce collisions. Using `crypto.randomUUID()` provides better uniqueness guarantees.

Apply this diff:

```diff
+import { randomUUID } from 'crypto';
```

```diff
-    const connectionId = `conn-${Date.now()}-${Math.random().toString(36).substring(7)}`;
+    const connectionId = `conn-${randomUUID()}`;
```
161-163: Consider using logger for exception output.

The direct `console.error` call breaks logging consistency. While it ensures visibility, routing through the logger would maintain a unified logging interface.

If the intent is to ensure exceptions are always printed, consider configuring the logger accordingly rather than mixing console and logger calls.
230-321: Extract duplicated socket cleanup logic.

The socket cleanup code is duplicated in three locations (lines 238-240, 283-291, 308-316). Extracting it to a private helper method would improve maintainability.

Add a private helper method:

```typescript
private cleanupSocket(): void {
  if (this.socketPath && fs.existsSync(this.socketPath)) {
    try {
      fs.unlinkSync(this.socketPath);
      logger.debug('Cleaned up socket file', { path: this.socketPath });
    } catch (error) {
      logger.warn('Failed to clean up socket file', {
        path: this.socketPath,
        error: error instanceof Error ? error.message : String(error),
      });
    }
  }
}
```

Then replace the duplicated code:

```diff
 private async startUnixSocket(): Promise<string> {
   this.socketPath =
     this.options.socketPath ??
     path.join(os.tmpdir(), `trigger-grpc-${this.options.runnerId || process.pid}.sock`);

-  // Clean up existing socket if it exists
-  if (fs.existsSync(this.socketPath)) {
-    logger.debug('Cleaning up existing socket', { path: this.socketPath });
-    fs.unlinkSync(this.socketPath);
-  }
+  this.cleanupSocket();

   return new Promise((resolve, reject) => {
```

```diff
 async stop(): Promise<void> {
   return new Promise((resolve) => {
     this.server.tryShutdown(() => {
-      // Clean up Unix socket if it exists
-      if (this.socketPath && fs.existsSync(this.socketPath)) {
-        try {
-          fs.unlinkSync(this.socketPath);
-        } catch (error) {
-          logger.warn('Failed to clean up socket file', {
-            path: this.socketPath,
-            error: error instanceof Error ? error.message : String(error),
-          });
-        }
-      }
+      this.cleanupSocket();
       this.streams.clear();
```

```diff
 forceShutdown(): void {
   this.server.forceShutdown();
-  // Clean up Unix socket
-  if (this.socketPath && fs.existsSync(this.socketPath)) {
-    try {
-      fs.unlinkSync(this.socketPath);
-    } catch (error) {
-      logger.warn('Failed to clean up socket file during force shutdown', {
-        path: this.socketPath,
-        error: error instanceof Error ? error.message : String(error),
-      });
-    }
-  }
+  this.cleanupSocket();
   this.streams.clear();
```

packages/python-sdk/trigger_sdk/schemas/messages.py (2)
35-112: Consider using typed models instead of Dict[str, Any].

While using `Dict[str, Any]` for `completion` and `tasks` simplifies serialization, it loses type safety. Since Pydantic can handle nested model serialization automatically, consider using the actual types:

```python
completion: TaskRunSuccessfulExecutionResult
tasks: list[TaskMetadata]
```

Then serialize when needed:

```python
@classmethod
def from_result(cls, result: TaskRunSuccessfulExecutionResult) -> "TaskRunCompletedMessage":
    return cls(completion=result)
```

This would provide better IDE support, validation, and type checking while Pydantic handles serialization automatically.
119-159: LGTM with suggestion for consistency.

The `get_execution()` helper method in `ExecuteTaskRunMessage` is a good pattern for on-demand deserialization. For consistency with the suggestion on worker messages, consider storing the typed model directly and letting Pydantic handle serialization.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (8)
- `packages/python-sdk/trigger_sdk/generated/__init__.py` is excluded by `!**/generated/**`
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
- `references/hello-world-python/.gitignore` is excluded by `!references/**`
- `references/hello-world-python/package.json` is excluded by `!references/**`
- `references/hello-world-python/requirements.txt` is excluded by `!references/**`
- `references/hello-world-python/src/trigger/another_task.py` is excluded by `!references/**`
- `references/hello-world-python/src/trigger/example_task.py` is excluded by `!references/**`
- `references/hello-world-python/trigger.config.ts` is excluded by `!references/**`
📒 Files selected for processing (61)
- `.gitignore` (1 hunks)
- `package.json` (1 hunks)
- `packages/cli-v3/package.json` (2 hunks)
- `packages/cli-v3/src/build/bundle.ts` (4 hunks)
- `packages/cli-v3/src/build/entryPoints.ts` (1 hunks)
- `packages/cli-v3/src/build/pythonBundler.ts` (1 hunks)
- `packages/cli-v3/src/build/pythonDependencies.ts` (1 hunks)
- `packages/cli-v3/src/deploy/buildImage.ts` (2 hunks)
- `packages/cli-v3/src/dev/backgroundWorker.ts` (1 hunks)
- `packages/cli-v3/src/dev/devSupervisor.ts` (4 hunks)
- `packages/cli-v3/src/entryPoints/dev-run-controller.ts` (2 hunks)
- `packages/cli-v3/src/entryPoints/managed/execution.ts` (2 hunks)
- `packages/cli-v3/src/entryPoints/python/__init__.py` (1 hunks)
- `packages/cli-v3/src/entryPoints/python/managed-index-worker.py` (1 hunks)
- `packages/cli-v3/src/entryPoints/python/managed-run-worker.py` (1 hunks)
- `packages/cli-v3/src/indexing/indexWorkerManifest.ts` (2 hunks)
- `packages/cli-v3/src/ipc/grpcServer.ts` (1 hunks)
- `packages/cli-v3/src/ipc/protoLoader.ts` (1 hunks)
- `packages/cli-v3/src/python/index.ts` (1 hunks)
- `packages/cli-v3/src/python/pythonProcess.ts` (1 hunks)
- `packages/cli-v3/src/python/pythonTaskRunner.ts` (1 hunks)
- `packages/cli-v3/src/python/stdioIpc.ts` (1 hunks)
- `packages/cli-v3/tests/fixtures/test-manifest.json` (1 hunks)
- `packages/cli-v3/tests/fixtures/test-task.py` (1 hunks)
- `packages/cli-v3/tests/grpc-python.test.ts` (1 hunks)
- `packages/cli-v3/tests/python-dependencies.test.ts` (1 hunks)
- `packages/cli-v3/tests/python-ipc.test.ts` (1 hunks)
- `packages/cli-v3/tests/python-runtime.test.ts` (1 hunks)
- `packages/cli-v3/tests/python/test-task.py` (1 hunks)
- `packages/cli-v3/tests/python/test-workers.sh` (1 hunks)
- `packages/core/proto/worker.proto` (1 hunks)
- `packages/core/src/v3/build/runtime.ts` (3 hunks)
- `packages/core/src/v3/schemas/build.ts` (2 hunks)
- `packages/core/src/v3/schemas/schemas.ts` (1 hunks)
- `packages/python-sdk/.gitignore` (1 hunks)
- `packages/python-sdk/README.md` (1 hunks)
- `packages/python-sdk/pyproject.toml` (1 hunks)
- `packages/python-sdk/setup.py` (1 hunks)
- `packages/python-sdk/tests/test_context.py` (1 hunks)
- `packages/python-sdk/tests/test_error_mapping.py` (1 hunks)
- `packages/python-sdk/tests/test_ipc_stdio.py` (1 hunks)
- `packages/python-sdk/tests/test_logger.py` (1 hunks)
- `packages/python-sdk/tests/test_schemas.py` (1 hunks)
- `packages/python-sdk/tests/test_task_registration.py` (1 hunks)
- `packages/python-sdk/trigger_sdk/__init__.py` (1 hunks)
- `packages/python-sdk/trigger_sdk/context.py` (1 hunks)
- `packages/python-sdk/trigger_sdk/errors.py` (1 hunks)
- `packages/python-sdk/trigger_sdk/ipc/__init__.py` (1 hunks)
- `packages/python-sdk/trigger_sdk/ipc/base.py` (1 hunks)
- `packages/python-sdk/trigger_sdk/ipc/grpc.py` (1 hunks)
- `packages/python-sdk/trigger_sdk/ipc/stdio.py` (1 hunks)
- `packages/python-sdk/trigger_sdk/logger.py` (1 hunks)
- `packages/python-sdk/trigger_sdk/schemas/__init__.py` (1 hunks)
- `packages/python-sdk/trigger_sdk/schemas/common.py` (1 hunks)
- `packages/python-sdk/trigger_sdk/schemas/errors.py` (1 hunks)
- `packages/python-sdk/trigger_sdk/schemas/messages.py` (1 hunks)
- `packages/python-sdk/trigger_sdk/schemas/resources.py` (1 hunks)
- `packages/python-sdk/trigger_sdk/task.py` (1 hunks)
- `packages/python-sdk/trigger_sdk/telemetry.py` (1 hunks)
- `packages/python-sdk/trigger_sdk/types.py` (1 hunks)
- `packages/python-sdk/trigger_sdk/version.py` (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
**/*.{ts,tsx}: Always prefer using isomorphic code like fetch, ReadableStream, etc. instead of Node.js specific code
For TypeScript, we usually use types over interfaces
Avoid enums
No default exports, use function declarations
Files:
- `packages/core/src/v3/schemas/schemas.ts`
- `packages/cli-v3/src/python/index.ts`
- `packages/cli-v3/src/dev/devSupervisor.ts`
- `packages/cli-v3/src/ipc/protoLoader.ts`
- `packages/cli-v3/src/build/entryPoints.ts`
- `packages/cli-v3/tests/python-ipc.test.ts`
- `packages/core/src/v3/build/runtime.ts`
- `packages/cli-v3/src/entryPoints/managed/execution.ts`
- `packages/cli-v3/src/build/pythonDependencies.ts`
- `packages/cli-v3/src/dev/backgroundWorker.ts`
- `packages/cli-v3/src/build/pythonBundler.ts`
- `packages/cli-v3/src/build/bundle.ts`
- `packages/cli-v3/tests/python-dependencies.test.ts`
- `packages/cli-v3/src/entryPoints/dev-run-controller.ts`
- `packages/cli-v3/src/python/pythonProcess.ts`
- `packages/cli-v3/tests/python-runtime.test.ts`
- `packages/cli-v3/src/ipc/grpcServer.ts`
- `packages/cli-v3/src/python/pythonTaskRunner.ts`
- `packages/cli-v3/src/indexing/indexWorkerManifest.ts`
- `packages/cli-v3/src/deploy/buildImage.ts`
- `packages/core/src/v3/schemas/build.ts`
- `packages/cli-v3/src/python/stdioIpc.ts`
- `packages/cli-v3/tests/grpc-python.test.ts`
{packages/core,apps/webapp}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
We use zod a lot in packages/core and in the webapp
Files:
- `packages/core/src/v3/schemas/schemas.ts`
- `packages/core/src/v3/build/runtime.ts`
- `packages/core/src/v3/schemas/build.ts`
**/*.test.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Our tests are all vitest
Files:
- `packages/cli-v3/tests/python-ipc.test.ts`
- `packages/cli-v3/tests/python-dependencies.test.ts`
- `packages/cli-v3/tests/python-runtime.test.ts`
- `packages/cli-v3/tests/grpc-python.test.ts`
**/*.{test,spec}.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{test,spec}.{ts,tsx,js,jsx}: Unit tests must use Vitest
Tests should avoid mocks or stubs and use helpers from @internal/testcontainers when Redis or Postgres are needed
Test files live beside the files under test and should use descriptive describe and it blocks
Files:
- `packages/cli-v3/tests/python-ipc.test.ts`
- `packages/cli-v3/tests/python-dependencies.test.ts`
- `packages/cli-v3/tests/python-runtime.test.ts`
- `packages/cli-v3/tests/grpc-python.test.ts`
🧠 Learnings (16)
📓 Common learnings
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-08-18T10:07:17.368Z
Learning: Applies to **/trigger/**/*.{ts,tsx,js,jsx} : Import Trigger.dev APIs from "trigger.dev/sdk/v3" when writing tasks or related utilities
Learnt from: zvictor
Repo: triggerdotdev/trigger.dev PR: 1686
File: packages/build/src/extensions/python.ts:0-0
Timestamp: 2025-02-10T11:19:37.014Z
Learning: In the Python extension for Trigger.dev, do not enforce `.py` file extensions for Python scripts to maintain flexibility for developers.
📚 Learning: 2025-08-18T10:07:17.368Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-08-18T10:07:17.368Z
Learning: Applies to **/trigger/**/*.{ts,tsx,js,jsx} : Import Trigger.dev APIs from "trigger.dev/sdk/v3" when writing tasks or related utilities
Applied to files:
- `packages/python-sdk/README.md`
- `packages/cli-v3/src/build/pythonBundler.ts`
- `packages/cli-v3/src/build/bundle.ts`
- `packages/cli-v3/src/entryPoints/dev-run-controller.ts`
- `packages/cli-v3/tests/python-runtime.test.ts`
- `packages/python-sdk/trigger_sdk/__init__.py`
- `packages/python-sdk/pyproject.toml`
- `packages/cli-v3/tests/fixtures/test-manifest.json`
📚 Learning: 2025-08-29T10:06:49.293Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/webapp.mdc:0-0
Timestamp: 2025-08-29T10:06:49.293Z
Learning: Applies to {apps/webapp/app/**/*.server.{ts,tsx},apps/webapp/app/routes/**/*.ts} : Access environment variables only via the env export from app/env.server.ts; do not reference process.env directly
Applied to files:
packages/cli-v3/src/dev/devSupervisor.ts
📚 Learning: 2025-07-18T17:49:24.468Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-07-18T17:49:24.468Z
Learning: Applies to **/*.test.{ts,tsx} : Our tests are all vitest
Applied to files:
- `packages/cli-v3/tests/python-ipc.test.ts`
- `packages/cli-v3/tests/python-dependencies.test.ts`
- `packages/cli-v3/tests/python-runtime.test.ts`
📚 Learning: 2025-08-29T15:49:22.406Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-08-29T15:49:22.406Z
Learning: Applies to **/*.{test,spec}.{ts,tsx,js,jsx} : Unit tests must use Vitest
Applied to files:
- `packages/cli-v3/tests/python-ipc.test.ts`
- `packages/cli-v3/tests/python-dependencies.test.ts`
- `packages/cli-v3/tests/python-runtime.test.ts`
📚 Learning: 2025-07-18T17:49:29.234Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/executing-commands.mdc:0-0
Timestamp: 2025-07-18T17:49:29.234Z
Learning: Almost all commands in the monorepo should be executed using `pnpm run ...` from the root of the monorepo.
Applied to files:
package.json
📚 Learning: 2025-08-18T10:07:17.368Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-08-18T10:07:17.368Z
Learning: Applies to **/trigger/**/*.{ts,tsx,js,jsx} : Inside tasks, prefer logger.debug/log/info/warn/error over ad-hoc console logging for structured logs
Applied to files:
- `packages/cli-v3/src/dev/backgroundWorker.ts`
- `packages/cli-v3/src/build/bundle.ts`
- `packages/python-sdk/trigger_sdk/logger.py`
📚 Learning: 2025-08-18T10:07:17.368Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-08-18T10:07:17.368Z
Learning: Applies to trigger.config.ts : Declare build options and extensions (external, jsx, conditions, extensions) via the build block in trigger.config.ts rather than custom scripts
Applied to files:
- `packages/cli-v3/src/build/pythonBundler.ts`
- `packages/cli-v3/src/build/bundle.ts`
- `packages/cli-v3/src/indexing/indexWorkerManifest.ts`
- `packages/core/src/v3/schemas/build.ts`
📚 Learning: 2025-02-10T11:19:37.014Z
Learnt from: zvictor
Repo: triggerdotdev/trigger.dev PR: 1686
File: packages/build/src/extensions/python.ts:0-0
Timestamp: 2025-02-10T11:19:37.014Z
Learning: In the Python extension for Trigger.dev, do not enforce `.py` file extensions for Python scripts to maintain flexibility for developers.
Applied to files:
- `packages/python-sdk/.gitignore`
- `.gitignore`
📚 Learning: 2025-08-18T10:07:17.368Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-08-18T10:07:17.368Z
Learning: Applies to **/trigger/**/*.{ts,tsx,js,jsx} : Export every task (including subtasks) defined with task(), schedules.task(), or schemaTask()
Applied to files:
packages/cli-v3/src/build/bundle.ts
📚 Learning: 2025-08-18T10:07:17.368Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-08-18T10:07:17.368Z
Learning: Applies to **/trigger/**/*.{ts,tsx,js,jsx} : Define tasks using task({ id, run, ... }) with a unique id per project
Applied to files:
- `packages/cli-v3/src/build/bundle.ts`
- `packages/cli-v3/src/entryPoints/dev-run-controller.ts`
- `packages/cli-v3/src/python/pythonTaskRunner.ts`
- `packages/python-sdk/trigger_sdk/task.py`
- `packages/cli-v3/tests/fixtures/test-manifest.json`
📚 Learning: 2025-08-18T10:07:17.368Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-08-18T10:07:17.368Z
Learning: Applies to trigger.config.ts : Provide a valid Trigger.dev configuration using defineConfig with project ref and dirs (e.g., ["./trigger"]; tests/specs auto-excluded)
Applied to files:
packages/cli-v3/src/build/bundle.ts
📚 Learning: 2025-08-18T10:07:17.368Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-08-18T10:07:17.368Z
Learning: Applies to **/trigger/**/*.{ts,tsx,js,jsx} : Use schemaTask({ schema, run, ... }) to validate payloads when input validation is required
Applied to files:
- `packages/cli-v3/src/build/bundle.ts`
- `packages/python-sdk/tests/test_schemas.py`
📚 Learning: 2025-08-18T10:07:17.368Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-08-18T10:07:17.368Z
Learning: Applies to **/trigger/**/*.{ts,tsx,js,jsx} : Do not use client.defineJob or any deprecated v2 patterns (e.g., eventTrigger) when defining tasks
Applied to files:
packages/cli-v3/src/build/bundle.ts
📚 Learning: 2025-08-18T10:07:17.368Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-08-18T10:07:17.368Z
Learning: Applies to trigger.config.ts : Configure global task lifecycle hooks (onStart/onSuccess/onFailure) only within trigger.config.ts if needed, not within arbitrary files
Applied to files:
packages/cli-v3/src/build/bundle.ts
📚 Learning: 2025-08-18T10:07:17.368Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-08-18T10:07:17.368Z
Learning: Applies to **/trigger/**/*.{ts,tsx,js,jsx} : Use schedules.task(...) for scheduled (cron) tasks; do not implement schedules as plain task() with external cron logic
Applied to files:
packages/cli-v3/src/build/bundle.ts
🧬 Code graph analysis (37)
packages/cli-v3/src/ipc/protoLoader.ts (1)
packages/python-sdk/trigger_sdk/schemas/messages.py (9)
`TaskRunCompletedMessage` (35-48), `TaskRunFailedMessage` (51-64), `TaskHeartbeatMessage` (67-75), `IndexTasksCompleteMessage` (78-87), `LogLevel` (23-28), `LogMessage` (90-102), `ExecuteTaskRunMessage` (119-131), `CancelMessage` (134-141), `FlushMessage` (144-151)
packages/cli-v3/src/build/entryPoints.ts (2)
packages/core/src/v3/schemas/schemas.ts (1)
`ResolvedConfig` (261-264)
packages/core/src/v3/schemas/build.ts (2)
`BuildTarget` (12-12), `BuildTarget` (14-14)
packages/cli-v3/tests/python-ipc.test.ts (1)
packages/cli-v3/src/python/pythonProcess.ts (1)
PythonProcess(20-191)
packages/cli-v3/src/entryPoints/managed/execution.ts (2)
packages/cli-v3/src/python/pythonTaskRunner.ts (1)
`PythonTaskRunner` (77-151)
packages/core/src/utils.ts (1)
tryCatch(5-18)
packages/cli-v3/src/dev/backgroundWorker.ts (3)
packages/cli-v3/src/executions/taskRunProcess.ts (2)
`data` (388-402), `data` (404-429)
packages/python-sdk/trigger_sdk/logger.py (2)
`log` (117-119), `error` (113-115)
packages/cli-v3/src/utilities/cliOutput.ts (1)
prettyError(73-94)
packages/cli-v3/src/build/pythonBundler.ts (3)
packages/core/src/v3/schemas/schemas.ts (1)
`ResolvedConfig` (261-264)
packages/core/src/v3/schemas/build.ts (4)
`BuildTarget` (12-12), `BuildTarget` (14-14), `BuildManifest` (20-72), `BuildManifest` (74-74)
packages/cli-v3/src/build/pythonDependencies.ts (1)
parseRequirementsTxt(16-52)
packages/cli-v3/src/build/bundle.ts (4)
packages/cli-v3/src/build/entryPoints.ts (1)
`createEntryPointManager` (48-192)
packages/cli-v3/src/utilities/cliOutput.ts (2)
`prettyError` (73-94), `cliLink` (140-145)
packages/cli-v3/src/cli/common.ts (1)
`SkipLoggingError` (33-33)
packages/cli-v3/src/build/pythonBundler.ts (2)
`bundlePython` (42-122), `createBuildManifestFromPythonBundle` (127-197)
packages/cli-v3/tests/python-dependencies.test.ts (1)
packages/cli-v3/src/build/pythonDependencies.ts (4)
`parseRequirementsTxt` (16-52), `PythonDependency` (5-9), `generateRequirementsTxt` (57-70), `validateRequirementsTxt` (76-105)
packages/cli-v3/src/entryPoints/dev-run-controller.ts (1)
packages/cli-v3/src/python/pythonTaskRunner.ts (1)
PythonTaskRunner(77-151)
packages/python-sdk/tests/test_task_registration.py (1)
packages/python-sdk/trigger_sdk/task.py (5)
`task` (63-108), `Task` (15-60), `clear_registry` (116-118), `execute` (39-46), `get_metadata` (48-57)
packages/cli-v3/src/python/pythonProcess.ts (3)
packages/core/src/v3/schemas/build.ts (2)
`BuildRuntime` (16-16), `BuildRuntime` (18-18)
packages/cli-v3/src/ipc/grpcServer.ts (1)
`GrpcWorkerServer` (38-343)
packages/core/src/v3/build/runtime.ts (1)
execPathForRuntime(23-43)
packages/python-sdk/trigger_sdk/context.py (1)
packages/python-sdk/trigger_sdk/schemas/common.py (5)
`TaskInfo` (25-28), `RunInfo` (31-37), `AttemptInfo` (40-44), `BatchInfo` (83-85), `TaskRunExecution` (88-107)
packages/cli-v3/tests/fixtures/test-task.py (1)
references/hello-world-python/src/trigger/example_task.py (1)
hello_task(9-11)
packages/python-sdk/trigger_sdk/errors.py (1)
packages/python-sdk/trigger_sdk/schemas/errors.py (3)
`TaskRunBuiltInError` (25-35), `TaskRunInternalError` (38-48), `TaskRunStringError` (51-59)
packages/python-sdk/trigger_sdk/ipc/grpc.py (3)
packages/python-sdk/trigger_sdk/ipc/base.py (1)
`IpcConnection` (29-169)
packages/cli-v3/src/ipc/protoLoader.ts (14)
`WorkerMessage` (170-176), `TaskRunCompletedMessage` (125-129), `TaskRunFailedMessage` (131-135), `TaskHeartbeatMessage` (137-141), `IndexTasksCompleteMessage` (147-151), `LogMessage` (160-168), `ExecuteTaskRunMessage` (179-183), `CancelMessage` (185-188), `FlushMessage` (190-193), `TaskRunSuccessfulExecutionResult` (107-113), `TaskRunFailedExecutionResult` (115-122), `TaskRunError` (99-105), `TaskRunInternalError` (89-93), `TaskRunExecution` (67-77)
packages/python-sdk/trigger_sdk/schemas/messages.py (8)
`TaskRunCompletedMessage` (35-48), `TaskRunFailedMessage` (51-64), `TaskHeartbeatMessage` (67-75), `IndexTasksCompleteMessage` (78-87), `LogMessage` (90-102), `ExecuteTaskRunMessage` (119-131), `CancelMessage` (134-141), `FlushMessage` (144-151)
packages/python-sdk/trigger_sdk/schemas/errors.py (1)
packages/cli-v3/src/executions/taskRunProcess.ts (1)
code(324-386)
packages/cli-v3/tests/python-runtime.test.ts (1)
packages/core/src/v3/build/runtime.ts (2)
execPathForRuntime(23-43)execOptionsForRuntime(50-86)
packages/python-sdk/trigger_sdk/__init__.py (7)
packages/python-sdk/trigger_sdk/task.py (2)
task(63-108)Task(15-60)packages/python-sdk/trigger_sdk/types.py (3)
TaskConfig(22-27)RetryConfig(7-13)QueueConfig(16-19)packages/python-sdk/trigger_sdk/ipc/base.py (1)
IpcConnection(29-169)packages/python-sdk/trigger_sdk/ipc/grpc.py (1)
GrpcIpcConnection(42-366)packages/cli-v3/src/ipc/protoLoader.ts (3)
WorkerMessage(170-176)CoordinatorMessage(195-199)TaskRunExecution(67-77)packages/python-sdk/trigger_sdk/schemas/common.py (1)
TaskRunExecution(88-107)packages/python-sdk/trigger_sdk/context.py (2)
TaskContext(20-63)get_current_context(66-68)
packages/cli-v3/src/ipc/grpcServer.ts (2)
packages/cli-v3/src/ipc/protoLoader.ts (3)
WorkerMessage(170-176)CoordinatorMessage(195-199)workerProto(45-45)packages/python-sdk/trigger_sdk/schemas/messages.py (1)
LogLevel(23-28)
packages/cli-v3/src/python/pythonTaskRunner.ts (2)
packages/cli-v3/src/ipc/protoLoader.ts (1)
TaskRunExecution(67-77)packages/cli-v3/src/python/pythonProcess.ts (1)
PythonProcess(20-191)
packages/python-sdk/trigger_sdk/logger.py (5)
packages/python-sdk/trigger_sdk/context.py (1)
get_current_context(66-68)packages/python-sdk/trigger_sdk/ipc/base.py (1)
IpcConnection(29-169)packages/cli-v3/src/ipc/protoLoader.ts (1)
LogMessage(160-168)packages/python-sdk/trigger_sdk/schemas/messages.py (2)
LogMessage(90-102)LogLevel(23-28)packages/python-sdk/trigger_sdk/ipc/grpc.py (2)
send(279-293)flush(333-345)
packages/python-sdk/tests/test_schemas.py (4)
packages/python-sdk/trigger_sdk/schemas/common.py (7)
TaskRunExecutionUsage(14-16)TaskInfo(25-28)RunInfo(31-37)AttemptInfo(40-44)TaskRunExecution(88-107)TaskRunSuccessfulExecutionResult(110-123)TaskRunFailedExecutionResult(126-140)packages/python-sdk/trigger_sdk/schemas/errors.py (3)
TaskRunBuiltInError(25-35)TaskRunInternalError(38-48)TaskRunStringError(51-59)packages/python-sdk/trigger_sdk/schemas/messages.py (10)
TaskRunCompletedMessage(35-48)TaskRunFailedMessage(51-64)TaskHeartbeatMessage(67-75)IndexTasksCompleteMessage(78-87)ExecuteTaskRunMessage(119-131)CancelMessage(134-141)FlushMessage(144-151)from_result(46-48)from_result(62-64)get_execution(129-131)packages/python-sdk/trigger_sdk/schemas/resources.py (3)
TaskResource(27-46)QueueConfig(12-15)RetryConfig(18-24)
packages/python-sdk/tests/test_error_mapping.py (3)
packages/python-sdk/trigger_sdk/errors.py (2)
exception_to_task_run_error(20-76)get_error_code_for_exception(79-114)packages/python-sdk/trigger_sdk/schemas/errors.py (3)
TaskRunBuiltInError(25-35)TaskRunInternalError(38-48)TaskRunStringError(51-59)packages/python-sdk/tests/test_schemas.py (1)
test_error_serialization(115-125)
packages/python-sdk/tests/test_context.py (2)
packages/python-sdk/trigger_sdk/context.py (6)
TaskContext(20-63)get_current_context(66-68)set_current_context(71-73)clear_current_context(76-78)is_retry(58-60)from_execution_payload(42-55)packages/python-sdk/trigger_sdk/schemas/common.py (6)
TaskInfo(25-28)RunInfo(31-37)AttemptInfo(40-44)BatchInfo(83-85)TaskRunExecution(88-107)EnvironmentInfo(63-67)
packages/python-sdk/trigger_sdk/task.py (1)
packages/python-sdk/trigger_sdk/types.py (4)
TaskConfig(22-27)TaskMetadata(30-37)RetryConfig(7-13)QueueConfig(16-19)
packages/python-sdk/trigger_sdk/ipc/__init__.py (2)
packages/python-sdk/trigger_sdk/ipc/base.py (1)
IpcConnection(29-169)packages/python-sdk/trigger_sdk/ipc/grpc.py (1)
GrpcIpcConnection(42-366)
packages/cli-v3/src/indexing/indexWorkerManifest.ts (3)
packages/cli-v3/src/entryPoints/managed-index-controller.ts (1)
handleStdout(79-81)packages/core/src/v3/zodMessageHandler.ts (1)
parseMessageFromCatalog(205-231)packages/core/src/v3/schemas/messages.ts (1)
indexerToWorkerMessages(152-160)
packages/cli-v3/src/entryPoints/python/managed-run-worker.py (5)
packages/python-sdk/trigger_sdk/ipc/grpc.py (5)
GrpcIpcConnection(42-366)flush(333-345)stop(347-349)on(78-80)start_listening(295-331)packages/python-sdk/trigger_sdk/context.py (2)
set_current_context(71-73)clear_current_context(76-78)packages/python-sdk/trigger_sdk/schemas/messages.py (2)
ExecuteTaskRunMessage(119-131)get_execution(129-131)packages/cli-v3/src/entryPoints/python/managed-index-worker.py (2)
import_task_file(44-76)main(114-159)packages/python-sdk/trigger_sdk/ipc/base.py (3)
send_heartbeat(161-169)send_completed(88-123)send_failed(125-159)
packages/python-sdk/trigger_sdk/schemas/messages.py (2)
packages/cli-v3/src/ipc/protoLoader.ts (11)
TaskRunSuccessfulExecutionResult(107-113)TaskRunFailedExecutionResult(115-122)TaskRunExecution(67-77)TaskRunCompletedMessage(125-129)TaskRunFailedMessage(131-135)TaskHeartbeatMessage(137-141)IndexTasksCompleteMessage(147-151)LogMessage(160-168)ExecuteTaskRunMessage(179-183)CancelMessage(185-188)FlushMessage(190-193)packages/python-sdk/trigger_sdk/schemas/common.py (3)
TaskRunSuccessfulExecutionResult(110-123)TaskRunFailedExecutionResult(126-140)TaskRunExecution(88-107)
packages/cli-v3/src/python/stdioIpc.ts (2)
packages/cli-v3/src/ipc/protoLoader.ts (1)
WorkerMessage(170-176)packages/cli-v3/src/python/index.ts (2)
StdioIpcOptions(10-10)StdioIpcConnection(6-6)
packages/python-sdk/tests/test_ipc_stdio.py (3)
packages/python-sdk/trigger_sdk/ipc/stdio.py (5)
StdioIpcConnection(24-134)send(41-60)on(37-39)start_listening(62-130)stop(132-134)packages/python-sdk/trigger_sdk/schemas/messages.py (3)
TaskHeartbeatMessage(67-75)ExecuteTaskRunMessage(119-131)CancelMessage(134-141)packages/python-sdk/trigger_sdk/ipc/base.py (7)
send(38-48)send_completed(88-123)send_failed(125-159)send_heartbeat(161-169)on(64-73)start_listening(51-61)stop(76-82)
packages/python-sdk/tests/test_logger.py (3)
packages/python-sdk/trigger_sdk/logger.py (3)
TaskLogger(15-119)info(105-107)log(117-119)packages/python-sdk/trigger_sdk/context.py (3)
TaskContext(20-63)set_current_context(71-73)clear_current_context(76-78)packages/python-sdk/trigger_sdk/schemas/common.py (3)
TaskInfo(25-28)RunInfo(31-37)AttemptInfo(40-44)
packages/python-sdk/trigger_sdk/ipc/stdio.py (3)
packages/python-sdk/trigger_sdk/ipc/base.py (5)
IpcConnection(29-169)on(64-73)send(38-48)start_listening(51-61)stop(76-82)packages/cli-v3/src/ipc/protoLoader.ts (2)
WorkerMessage(170-176)CoordinatorMessage(195-199)packages/python-sdk/tests/test_ipc_stdio.py (2)
handler(103-104)handler(145-146)
packages/cli-v3/src/entryPoints/python/managed-index-worker.py (5)
packages/python-sdk/trigger_sdk/task.py (2)
task(63-108)get_metadata(48-57)packages/cli-v3/src/ipc/protoLoader.ts (1)
IndexTasksCompleteMessage(147-151)packages/python-sdk/trigger_sdk/schemas/messages.py (1)
IndexTasksCompleteMessage(78-87)packages/python-sdk/trigger_sdk/logger.py (2)
error(113-115)debug(101-103)packages/cli-v3/src/entryPoints/python/managed-run-worker.py (1)
import_task_file(53-77)
packages/cli-v3/tests/grpc-python.test.ts (1)
packages/cli-v3/src/ipc/grpcServer.ts (1)
GrpcWorkerServer(38-343)
packages/python-sdk/trigger_sdk/ipc/base.py (4)
packages/python-sdk/trigger_sdk/schemas/messages.py (5)
TaskRunCompletedMessage(35-48)TaskRunFailedMessage(51-64)TaskHeartbeatMessage(67-75)from_result(46-48)from_result(62-64)packages/python-sdk/trigger_sdk/schemas/common.py (3)
TaskRunSuccessfulExecutionResult(110-123)TaskRunFailedExecutionResult(126-140)TaskRunExecutionUsage(14-16)packages/python-sdk/trigger_sdk/ipc/stdio.py (4)
send(41-60)start_listening(62-130)on(37-39)stop(132-134)packages/python-sdk/trigger_sdk/errors.py (1)
exception_to_task_run_error(20-76)
packages/python-sdk/trigger_sdk/schemas/__init__.py (4)
packages/python-sdk/trigger_sdk/schemas/common.py (14)
TaskRunExecutionUsage(14-16)TaskRunExecutionRetry(19-22)TaskInfo(25-28)RunInfo(31-37)AttemptInfo(40-44)BatchInfo(83-85)OrganizationInfo(48-52)ProjectInfo(55-60)EnvironmentInfo(63-67)QueueInfo(70-73)DeploymentInfo(76-80)TaskRunExecution(88-107)TaskRunSuccessfulExecutionResult(110-123)TaskRunFailedExecutionResult(126-140)packages/python-sdk/trigger_sdk/schemas/errors.py (3)
TaskRunBuiltInError(25-35)TaskRunInternalError(38-48)TaskRunStringError(51-59)packages/python-sdk/trigger_sdk/schemas/messages.py (7)
TaskRunCompletedMessage(35-48)TaskRunFailedMessage(51-64)TaskHeartbeatMessage(67-75)IndexTasksCompleteMessage(78-87)ExecuteTaskRunMessage(119-131)CancelMessage(134-141)FlushMessage(144-151)packages/python-sdk/trigger_sdk/schemas/resources.py (3)
QueueConfig(12-15)RetryConfig(18-24)TaskResource(27-46)
🪛 Buf (1.59.0)
packages/core/proto/worker.proto
3-3: Files with package "trigger.worker.v1" must be within a directory "trigger/worker/v1" relative to root but were in directory "packages/core/proto".
(PACKAGE_DIRECTORY_MATCH)
🪛 Ruff (0.14.3)
packages/python-sdk/trigger_sdk/telemetry.py
33-33: Avoid specifying long messages outside the exception class
(TRY003)
packages/cli-v3/tests/python/test-task.py
21-21: Unused function argument: payload
(ARG001)
24-24: Avoid specifying long messages outside the exception class
(TRY003)
packages/python-sdk/trigger_sdk/errors.py
75-75: Use explicit conversion flag
Replace with conversion flag
(RUF010)
packages/python-sdk/trigger_sdk/ipc/grpc.py
65-68: Avoid specifying long messages outside the exception class
(TRY003)
124-124: Do not catch blind exception: Exception
(BLE001)
223-223: Avoid specifying long messages outside the exception class
(TRY003)
284-284: Avoid specifying long messages outside the exception class
(TRY003)
290-290: Do not catch blind exception: Exception
(BLE001)
316-316: Do not catch blind exception: Exception
(BLE001)
325-325: Do not catch blind exception: Exception
(BLE001)
packages/python-sdk/trigger_sdk/__init__.py
12-33: __all__ is not sorted
Apply an isort-style sorting to __all__
(RUF022)
packages/python-sdk/trigger_sdk/logger.py
59-59: Local variable loop is assigned to but never used
Remove assignment to unused variable loop
(F841)
61-61: Store a reference to the return value of asyncio.create_task
(RUF006)
66-66: Do not catch blind exception: Exception
(BLE001)
packages/python-sdk/tests/test_error_mapping.py
155-155: Abstract raise to an inner function
(TRY301)
155-155: Avoid specifying long messages outside the exception class
(TRY003)
166-166: Avoid specifying long messages outside the exception class
(TRY003)
packages/python-sdk/trigger_sdk/task.py
31-31: Avoid specifying long messages outside the exception class
(TRY003)
35-35: Avoid specifying long messages outside the exception class
(TRY003)
packages/python-sdk/trigger_sdk/ipc/__init__.py
11-11: __all__ is not sorted
Apply an isort-style sorting to __all__
(RUF022)
packages/cli-v3/src/entryPoints/python/managed-run-worker.py
41-41: Unused function argument: frame
(ARG001)
73-73: Consider moving this statement to an else block
(TRY300)
75-75: Do not catch blind exception: Exception
(BLE001)
89-89: Do not catch blind exception: Exception
(BLE001)
110-110: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
110-110: Avoid specifying long messages outside the exception class
(TRY003)
122-122: Abstract raise to an inner function
(TRY301)
122-122: Avoid specifying long messages outside the exception class
(TRY003)
126-126: Abstract raise to an inner function
(TRY301)
126-126: Avoid specifying long messages outside the exception class
(TRY003)
186-186: Do not catch blind exception: Exception
(BLE001)
204-204: Unused function argument: message
(ARG001)
236-236: Do not catch blind exception: Exception
(BLE001)
249-249: Do not catch blind exception: Exception
(BLE001)
packages/python-sdk/tests/test_ipc_stdio.py
254-254: Unused function argument: message
(ARG001)
255-255: Avoid specifying long messages outside the exception class
(TRY003)
packages/python-sdk/trigger_sdk/ipc/stdio.py
57-57: Do not catch blind exception: Exception
(BLE001)
124-124: Do not catch blind exception: Exception
(BLE001)
packages/cli-v3/src/entryPoints/python/managed-index-worker.py
72-72: Consider moving this statement to an else block
(TRY300)
74-74: Do not catch blind exception: Exception
(BLE001)
108-108: Do not catch blind exception: Exception
(BLE001)
168-168: Do not catch blind exception: Exception
(BLE001)
packages/python-sdk/trigger_sdk/schemas/__init__.py
51-88: __all__ is not sorted
Apply an isort-style sorting to __all__
(RUF022)
🪛 Shellcheck (0.11.0)
packages/cli-v3/tests/python/test-workers.sh
[warning] 7-7: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 52-52: i appears unused. Verify use (or export if used externally).
(SC2034)
async (newEntryPoints) => {
  // TODO: Implement proper watch mode for Python (file copying + manifest regeneration)
  logger.debug("Python entry points changed, rebuilding");
}
Restore watch-mode rebuilds for Python bundles
In watch mode this callback fires on every .py change, but we only emit a debug log. That means dev-mode hot reload never refreshes the copied sources or manifest—developers must restart the CLI to pick up edits. Please mirror the Node path by re-running bundlePython (and rewriting the manifest/output) whenever entry points change before we ship this. Until we do, Python dev workflows break after the first build.
🤖 Prompt for AI Agents
In packages/cli-v3/src/build/bundle.ts around lines 428 to 431, the watch-mode
callback for Python entry point changes currently only logs a debug message and
does not rebuild assets; update the callback to mirror the Node path by invoking
bundlePython (or the existing Python bundling routine) when newEntryPoints are
passed, ensure the bundle process copies updated .py sources and
regenerates/writes the manifest and output files, handle errors by logging and
not crashing the watcher, and keep behavior consistent with the Node watcher
(debounce rapid events if necessary) so dev hot-reload picks up Python edits
without restarting the CLI.
// Calculate overall content hash from all file hashes
const hasher = createHash("md5");
for (const entry of bundle.entries) {
  hasher.update(entry.contentHash);
}
const contentHash = hasher.digest("hex");

// Build sources map (file path -> content + hash)
const sources: Record<string, { contents: string; contentHash: string }> = {};
for (const entry of bundle.entries) {
  sources[entry.relativePath] = {
    contents: entry.content,
    contentHash: entry.contentHash,
  };
}

// Build files array
const files = bundle.entries.map((entry) => ({
  entry: entry.entry,
  out: entry.out,
  filePath: entry.relativePath,
}));

const buildManifest: BuildManifest = {
  target,
  packageVersion: sdkVersion ?? CORE_VERSION,
  cliPackageVersion: VERSION,
  contentHash,
  runtime: "python",
  environment,
  branch,
  config: configManifest,
  files,
  sources,
  outputPath: outputDir,
  // Python entry points - absolute paths to Python scripts in CLI package
  runWorkerEntryPoint: path.join(sourceDir, "entryPoints/python/managed-run-worker.py"),
  indexWorkerEntryPoint: path.join(sourceDir, "entryPoints/python/managed-index-worker.py"),
  configPath: config.configFile || "trigger.config.ts",
  build: {},
  deploy: {
    env: {},
  },
  customConditions: config.build?.conditions ?? [],
  otelImportHook: {
    include: config.instrumentedPackageNames ?? [],
  },
  requirementsContent: bundle.requirementsContent,
};
Include requirements.txt in the manifest hash
contentHash only incorporates the task file hashes. If a developer edits requirements.txt (add/remove/upgrade deps) while the Python sources stay the same, the manifest hash remains unchanged, so downstream build/deploy steps will reuse stale environments and never reinstall the new dependencies. That breaks the guarantee that the bundle reflects the declared packages.
Please fold bundle.requirementsContent into the hash so dependency changes invalidate the build artifact.
for (const entry of bundle.entries) {
hasher.update(entry.contentHash);
}
+ if (bundle.requirementsContent) {
+ hasher.update(bundle.requirementsContent);
+ }🤖 Prompt for AI Agents
In packages/cli-v3/src/build/pythonBundler.ts around lines 146 to 194, the
contentHash currently only hashes bundle.entries file hashes so changes to
requirements.txt (bundle.requirementsContent) won't change the manifest hash;
update the hashing step to also incorporate bundle.requirementsContent (use an
empty string if undefined) into the same hasher before calling digest so any
change to requirements invalidates the build manifest.
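To see why the requirements content must feed the digest, here is a small standalone Python sketch of the same hashing scheme (`manifest_hash` is a hypothetical helper, not the actual TypeScript code): once the requirements text is folded in, changing only a dependency pin yields a different bundle hash even when the task files are unchanged.

```python
import hashlib


def manifest_hash(entry_hashes, requirements_content=None):
    """Fold per-file hashes and the requirements.txt text into one digest."""
    hasher = hashlib.md5()
    for file_hash in entry_hashes:
        hasher.update(file_hash.encode())
    if requirements_content is not None:
        hasher.update(requirements_content.encode())
    return hasher.hexdigest()


entries = ["aaa111", "bbb222"]  # stand-ins for per-file content hashes
h_old = manifest_hash(entries, "pydantic==2.0.0")
h_new = manifest_hash(entries, "pydantic==2.7.0")
assert h_old != h_new  # a dependency bump now invalidates the manifest
```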
const match = trimmed.match(/^([A-Za-z_][A-Za-z0-9_-]*)(?:\[([^\]]+)\])?(?:([<>=!~]+)(.+))?$/);

if (match) {
  const [, name, extras, operator, version] = match;

  if (name) {
    dependencies.push({
      name: name.trim(),
      extras: extras?.split(",").map((e) => e.trim()),
      version: version ? `${operator}${version.trim()}` : undefined,
    });
  }
} else {
  // Try simple package name (reusing the same regex pattern)
  const simpleMatch = trimmed.match(/^([A-Za-z_][A-Za-z0-9_-]*)/);
  if (simpleMatch?.[1]) {
    dependencies.push({
      name: simpleMatch[1],
    });
  }
Allow PEP 508-compliant names (e.g. zope.interface) in parser/validator
The current regex rejects perfectly valid requirement names that contain dots. For example, zope.interface==6.1 never matches, so parseRequirementsTxt() silently omits it and the bundler drops the dependency; validateRequirementsTxt() then reports it as invalid. That breaks real-world projects.
Please update both regexes to accept the full [A-Za-z0-9._-] character set (and digits as the leading character) and mirror the same logic in the fallback pattern.
- const match = trimmed.match(/^([A-Za-z_][A-Za-z0-9_-]*)(?:\[([^\]]+)\])?(?:([<>=!~]+)(.+))?$/);
+ const match = trimmed.match(
+ /^([A-Za-z0-9](?:[A-Za-z0-9._-]*[A-Za-z0-9])?)(?:\[([^\]]+)\])?(?:\s*([<>=!~]{1,2})\s*(.+))?$/
+ );
@@
- const simpleMatch = trimmed.match(/^([A-Za-z_][A-Za-z0-9_-]*)/);
+ const simpleMatch = trimmed.match(/^([A-Za-z0-9](?:[A-Za-z0-9._-]*[A-Za-z0-9])?)/);
@@
- const match = line.match(/^(\w[\w\-]*)(?:\[([^\]]+)\])?(?:([<>=!~]+)(.+))?$/);
+ const match = line.match(
+ /^([A-Za-z0-9](?:[A-Za-z0-9._-]*[A-Za-z0-9])?)(?:\[([^\]]+)\])?(?:\s*([<>=!~]{1,2})\s*(.+))?$/
+ );
@@
- const simpleMatch = line.match(/^(\w[\w\-]*)/);
+ const simpleMatch = line.match(/^([A-Za-z0-9](?:[A-Za-z0-9._-]*[A-Za-z0-9])?)/);

Also applies to: 88-94
🤖 Prompt for AI Agents
In packages/cli-v3/src/build/pythonDependencies.ts around lines 28 to 47 (also
apply same change to lines 88 to 94), the current regexes reject valid PEP 508
names containing dots and disallow digits as leading characters; update both
patterns to allow the full character class [A-Za-z0-9._-] for the name portion
and mirror that change in the fallback pattern so names like
"zope.interface==6.1" match; ensure extras and version capture groups remain
unchanged and keep the same parsing logic for operator/version and extras.
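The proposed pattern is easy to verify in isolation. Below is a Python transcription of the suggested regex (the `parse` helper is hypothetical; the real parser lives in TypeScript) showing that dotted PEP 508 names like `zope.interface` now match, while extras and version specifiers still parse:

```python
import re

# PEP 508-style name: alphanumeric at start and end, dots/underscores/hyphens inside
NAME = r"[A-Za-z0-9](?:[A-Za-z0-9._-]*[A-Za-z0-9])?"
REQ = re.compile(rf"^({NAME})(?:\[([^\]]+)\])?(?:\s*([<>=!~]{{1,2}})\s*(.+))?$")


def parse(line):
    """Parse one requirements.txt line into name/extras/version, or None."""
    m = REQ.match(line.strip())
    if not m:
        return None
    name, extras, op, version = m.groups()
    return {
        "name": name,
        "extras": extras.split(",") if extras else None,
        "version": f"{op}{version.strip()}" if version else None,
    }
```

With the old `[A-Za-z_][A-Za-z0-9_-]*` pattern, `parse("zope.interface==6.1")` would have stopped at `zope`; the widened character class recovers the full name.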
// Check if this is a Python task - use PythonTaskRunner instead of TaskRunProcess
if (this.workerManifest.runtime === "python") {
  this.sendDebugLog("executing Python task", { taskId: execution.task.id });

  const pythonRunner = new PythonTaskRunner();
  const completion = await pythonRunner.executeTask(execution);

  this.sendDebugLog("completed Python run attempt", { attemptSuccess: completion.ok });

  const [completionError] = await tryCatch(this.complete({ completion }));

  if (completionError) {
    this.sendDebugLog("failed to complete Python run", { error: completionError.message });
  }

  return;
}
Python tasks ignore cancel/kill paths
This branch bypasses TaskRunProcess, so RunExecution.cancel()/kill() never reach the Python worker—this.taskRunProcess stays undefined, executionAbortController listeners do nothing, and the gRPC loop inside PythonTaskRunner.executeTask() keeps running until the task finishes or times out. In practice, any cancelled run will remain executing on the Python side. Please keep a handle to the Python process/server, hook it into executionAbortController, and make cancel() / kill() terminate the worker (e.g. expose a cleanup()/abort() on PythonTaskRunner).
🤖 Prompt for AI Agents
In packages/cli-v3/src/entryPoints/managed/execution.ts around lines 613 to 629,
the Python branch creates a PythonTaskRunner but never wires its process/server
into the RunExecution cancel/kill flow, so cancellations don’t reach the Python
worker; modify PythonTaskRunner to expose a cleanup/abort method (or return a
handle to the underlying process/server), store that handle on the RunExecution
instance (e.g., this.taskRunProcess or this.pythonRunnerHandle), register a
listener on executionAbortController (and ensure kill()/cancel() invokes it)
that calls the runner’s abort/cleanup, and ensure you call the runner cleanup in
all exit paths (success, error, or abort) so the Python side is terminated when
the run is cancelled or killed.
context = TaskContext(
    task=execution.task,
    run=execution.run,
    attempt=execution.attempt,
    batch=execution.batch,
    environment=execution.environment or {},
)
set_current_context(context)
Use TaskContext.from_execution_payload to normalize environment
Creating TaskContext manually leaves context.environment as a Pydantic EnvironmentInfo, so downstream code expecting a dict (e.g. context.environment.get("slug")) will raise. TaskContext.from_execution_payload() already performs the model→dict conversion and stays aligned with future schema changes. Please reuse it here.
- context = TaskContext(
- task=execution.task,
- run=execution.run,
- attempt=execution.attempt,
- batch=execution.batch,
- environment=execution.environment or {},
- )
+ context = TaskContext.from_execution_payload(execution)Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In packages/cli-v3/src/entryPoints/python/managed-run-worker.py around lines 131
to 138, the code constructs TaskContext manually which leaves
context.environment as a Pydantic EnvironmentInfo instead of a plain dict;
replace the manual construction with
TaskContext.from_execution_payload(execution) to get a normalized dict-based
environment and then call set_current_context(context) so downstream code that
expects dict access (e.g. context.environment.get("slug")) works correctly.
## Requirements

- Python >= 3.10
- pydantic >= 2.0.0
🧩 Analysis chain
Verify minimum Python version and dependency constraints.
Confirm that Python >= 3.10 is the intended minimum supported version (excluding Python 3.9 and earlier). Additionally, verify that pydantic >= 2.0.0 is the only production dependency or if other dependencies (e.g., for gRPC, IPC) should be listed.
🏁 Script executed:
#!/bin/bash
# Description: Verify the minimum Python version and dependencies.
# Check setup.py or pyproject.toml for the actual Python version and dependencies.
fd -t f '(setup\.py|pyproject\.toml)' packages/python-sdk --exec cat {} \;Length of output: 659
Update Requirements section in README.md to include all production dependencies.
The README lists only Python >= 3.10 and pydantic >= 2.0.0, but pyproject.toml defines four production dependencies: pydantic, typing-extensions, grpcio, and grpcio-tools. Update lines 30–33 to include all four:
## Requirements
- Python >= 3.10
- pydantic >= 2.0.0
- typing-extensions >= 4.5.0
- grpcio >= 1.68.0
- grpcio-tools >= 1.68.0
🤖 Prompt for AI Agents
In packages/python-sdk/README.md around lines 30 to 33, the Requirements section
only lists Python and pydantic but omits other production dependencies defined
in pyproject.toml; update that section to list all four production dependencies
by adding entries for typing-extensions (>= 4.5.0), grpcio (>= 1.68.0), and
grpcio-tools (>= 1.68.0) alongside the existing Python >= 3.10 and pydantic >=
2.0.0 so the README matches pyproject.toml.
def test_get_task_metadata():
    """Test task metadata generation"""
    @task("test-metadata", max_duration=300)
    async def my_task(payload):
        return payload

    metadata = my_task.get_metadata()
    assert metadata.id == "test-metadata"
    assert metadata.exportName == "test-metadata"
    assert metadata.maxDuration == 300
Fix the maxDuration assertion to account for seconds-to-milliseconds conversion.
The test passes max_duration=300 (seconds) to the decorator, which internally converts it to milliseconds (300 * 1000 = 300000). The assertion should expect the converted value.
Apply this diff:
- assert metadata.maxDuration == 300
+ assert metadata.maxDuration == 300000 # 300 seconds converted to millisecondsBased on the task decorator implementation:
# Convert max_duration from seconds to milliseconds (TypeScript expects ms)
max_duration_ms = max_duration * 1000 if max_duration is not None else None🤖 Prompt for AI Agents
In packages/python-sdk/tests/test_task_registration.py around lines 91 to 100,
the test asserts metadata.maxDuration == 300 but the decorator converts seconds
to milliseconds; update the assertion to expect 300000 (300 * 1000) so it
matches the internal seconds-to-milliseconds conversion performed by the task
decorator.
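The conversion under test can be pinned down with a tiny standalone helper (`to_max_duration_ms` is hypothetical, mirroring the decorator internals quoted above):

```python
def to_max_duration_ms(max_duration):
    """Convert a task's max_duration from seconds to milliseconds (ms is what the TypeScript side expects)."""
    return max_duration * 1000 if max_duration is not None else None


# The test passes max_duration=300 (seconds), so the stored metadata
# value is 300000 ms - which is what the assertion should expect.
assert to_max_duration_ms(300) == 300000
assert to_max_duration_ms(None) is None
```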
elif isinstance(message, TaskRunFailedMessage):
    completion = message.completion
    proto_msg.task_run_failed.type = message.type
    proto_msg.task_run_failed.version = message.version

    result = proto_msg.task_run_failed.completion
    result.id = completion["id"]

    # Set error
    error_data = completion["error"]
    if error_data.get("type") == "BUILT_IN_ERROR":
        result.error.built_in_error.name = error_data["name"]
        result.error.built_in_error.message = error_data["message"]
        result.error.built_in_error.stack_trace = error_data["stackTrace"]
    elif error_data.get("type") == "INTERNAL_ERROR":
        result.error.internal_error.code = error_data["code"]
        result.error.internal_error.message = error_data.get("message", "")
        result.error.internal_error.stack_trace = error_data.get("stackTrace", "")
    elif error_data.get("type") == "STRING_ERROR":
        result.error.string_error.raw = error_data["raw"]

    if completion.get("usage"):
        result.usage.duration_ms = completion["usage"].get("durationMs", 0)
    if completion.get("taskIdentifier"):
        result.task_identifier = completion["taskIdentifier"]
Missing retry/skippedRetrying propagation to coordinator
On Line 169, the gRPC payload never forwards completion["retry"] or completion["skippedRetrying"]. When a Python task declares a retry schedule, the coordinator never receives it, so follow-up attempts are silently skipped. Please copy those fields into the protobuf result.
elif isinstance(message, TaskRunFailedMessage):
completion = message.completion
proto_msg.task_run_failed.type = message.type
proto_msg.task_run_failed.version = message.version
result = proto_msg.task_run_failed.completion
result.id = completion["id"]
# Set error
error_data = completion["error"]
if error_data.get("type") == "BUILT_IN_ERROR":
result.error.built_in_error.name = error_data["name"]
result.error.built_in_error.message = error_data["message"]
result.error.built_in_error.stack_trace = error_data["stackTrace"]
elif error_data.get("type") == "INTERNAL_ERROR":
result.error.internal_error.code = error_data["code"]
result.error.internal_error.message = error_data.get("message", "")
result.error.internal_error.stack_trace = error_data.get("stackTrace", "")
elif error_data.get("type") == "STRING_ERROR":
result.error.string_error.raw = error_data["raw"]
+ retry = completion.get("retry")
+ if retry:
+ result.retry.timestamp = retry.get("timestamp", 0)
+ result.retry.delay = retry.get("delay", 0)
+
+ if completion.get("skippedRetrying") is not None:
+ result.skipped_retrying = completion["skippedRetrying"]
+
if completion.get("usage"):
result.usage.duration_ms = completion["usage"].get("durationMs", 0)
if completion.get("taskIdentifier"):
result.task_identifier = completion["taskIdentifier"]📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| elif isinstance(message, TaskRunFailedMessage): | |
| completion = message.completion | |
| proto_msg.task_run_failed.type = message.type | |
| proto_msg.task_run_failed.version = message.version | |
| result = proto_msg.task_run_failed.completion | |
| result.id = completion["id"] | |
| # Set error | |
| error_data = completion["error"] | |
| if error_data.get("type") == "BUILT_IN_ERROR": | |
| result.error.built_in_error.name = error_data["name"] | |
| result.error.built_in_error.message = error_data["message"] | |
| result.error.built_in_error.stack_trace = error_data["stackTrace"] | |
| elif error_data.get("type") == "INTERNAL_ERROR": | |
| result.error.internal_error.code = error_data["code"] | |
| result.error.internal_error.message = error_data.get("message", "") | |
| result.error.internal_error.stack_trace = error_data.get("stackTrace", "") | |
| elif error_data.get("type") == "STRING_ERROR": | |
| result.error.string_error.raw = error_data["raw"] | |
| if completion.get("usage"): | |
| result.usage.duration_ms = completion["usage"].get("durationMs", 0) | |
| if completion.get("taskIdentifier"): | |
| result.task_identifier = completion["taskIdentifier"] | |
| elif isinstance(message, TaskRunFailedMessage): | |
| completion = message.completion | |
| proto_msg.task_run_failed.type = message.type | |
| proto_msg.task_run_failed.version = message.version | |
| result = proto_msg.task_run_failed.completion | |
| result.id = completion["id"] | |
| # Set error | |
| error_data = completion["error"] | |
| if error_data.get("type") == "BUILT_IN_ERROR": | |
| result.error.built_in_error.name = error_data["name"] | |
| result.error.built_in_error.message = error_data["message"] | |
| result.error.built_in_error.stack_trace = error_data["stackTrace"] | |
| elif error_data.get("type") == "INTERNAL_ERROR": | |
| result.error.internal_error.code = error_data["code"] | |
| result.error.internal_error.message = error_data.get("message", "") | |
| result.error.internal_error.stack_trace = error_data.get("stackTrace", "") | |
| elif error_data.get("type") == "STRING_ERROR": | |
| result.error.string_error.raw = error_data["raw"] | |
| retry = completion.get("retry") | |
| if retry: | |
| result.retry.timestamp = retry.get("timestamp", 0) | |
| result.retry.delay = retry.get("delay", 0) | |
| if completion.get("skippedRetrying") is not None: | |
| result.skipped_retrying = completion["skippedRetrying"] | |
| if completion.get("usage"): | |
| result.usage.duration_ms = completion["usage"].get("durationMs", 0) | |
| if completion.get("taskIdentifier"): | |
| result.task_identifier = completion["taskIdentifier"] |
🤖 Prompt for AI Agents
In packages/python-sdk/trigger_sdk/ipc/grpc.py around lines 148 to 173, the
TaskRunFailed completion payload is missing propagation of completion["retry"]
and completion["skippedRetrying"] to the protobuf result; add assignments to
copy completion.get("retry") into the appropriate proto field on result (e.g.,
result.retry) and completion.get("skippedRetrying") into the appropriate proto
field (e.g., result.skipped_retrying or result.skippedRetrying) so the
coordinator receives retry/suppression info; mirror the existing pattern used
for usage and taskIdentifier and provide sensible defaults when the keys are
absent.
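The suggested fix follows a common pattern when mapping optional dict keys onto a message object: read with `.get()`, supply defaults, and only set fields that are actually present. A dependency-free illustration of that pattern — the `SimpleNamespace` result here is just a stand-in for the generated protobuf message, not the real class:

```python
from types import SimpleNamespace

def apply_retry_fields(result, completion: dict) -> None:
    """Copy optional retry/skippedRetrying keys onto a result object,
    the way the gRPC serializer should propagate them."""
    retry = completion.get("retry")
    if retry:
        result.retry = SimpleNamespace(
            timestamp=retry.get("timestamp", 0),  # default when the key is absent
            delay=retry.get("delay", 0),
        )
    if completion.get("skippedRetrying") is not None:
        result.skipped_retrying = completion["skippedRetrying"]

result = SimpleNamespace(retry=None, skipped_retrying=None)
apply_retry_fields(result, {"retry": {"delay": 5000}, "skippedRetrying": False})
print(result.retry.delay)        # 5000
print(result.retry.timestamp)    # 0
print(result.skipped_retrying)   # False
```

The `is not None` check matters: `skippedRetrying` is a boolean, so a plain truthiness test would silently drop an explicit `False`.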
Serialize task metadata with JSON, not `str()`

In `packages/python-sdk/trigger_sdk/ipc/grpc.py` (lines 179-187), the `IndexTasksCompleteMessage` branch converts non-string values with `str()`, which yields Python reprs (single quotes, `True`/`None` literals — not JSON). The Node coordinator expects JSON-compatible strings, so complex metadata (dicts, lists, booleans) breaks decoding. JSON-encode anything that isn't already a plain string:

```diff
 import os
 import asyncio
+import json
 import sys
 import traceback
 import logging
@@
         elif isinstance(message, IndexTasksCompleteMessage):
             proto_msg.index_tasks_complete.type = message.type
             proto_msg.index_tasks_complete.version = message.version
             for task in message.tasks:
                 task_meta = proto_msg.index_tasks_complete.tasks.add()
                 for key, value in task.items():
-                    task_meta.fields[key] = str(value)
+                    if isinstance(value, str):
+                        task_meta.fields[key] = value
+                    else:
+                        task_meta.fields[key] = json.dumps(value)
```

(Committable suggestion skipped: line range outside the PR's diff.)
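The difference is easy to see in isolation — `str()` of a dict is a Python repr, not JSON, so the Node side cannot parse it. A quick demonstration (the `encode_field` helper and the sample task dict are illustrative, not part of the SDK):

```python
import json

# str() produces a Python repr, not JSON:
print(str({"maxAttempts": 3, "ok": True}))         # {'maxAttempts': 3, 'ok': True}
print(json.dumps({"maxAttempts": 3, "ok": True}))  # {"maxAttempts": 3, "ok": true}

def encode_field(value):
    """Leave plain strings alone, JSON-encode everything else."""
    return value if isinstance(value, str) else json.dumps(value)

task = {"id": "hello-task-v2", "retry": {"maxAttempts": 3}, "machine": None}
fields = {key: encode_field(value) for key, value in task.items()}

# Every non-string field now round-trips through a JSON parser:
print(json.loads(fields["retry"]))  # {'maxAttempts': 3}
print(fields["machine"])            # null
```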
Validate required fields before formatting `traceparent`

In `packages/python-sdk/trigger_sdk/telemetry.py` (lines 50-52), `to_traceparent` will happily produce an invalid header such as `"00-None-None-None"` when any field is `None`, since the constructor accepts `Optional[str]` for every parameter. Validate before formatting:

```python
def to_traceparent(self) -> str:
    """Convert to W3C traceparent header"""
    if not all([self.trace_id, self.span_id, self.trace_flags]):
        raise ValueError("Cannot convert incomplete trace context to traceparent")
    return f"00-{self.trace_id}-{self.span_id}-{self.trace_flags}"
```
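For reference, a W3C traceparent header is four dash-separated lowercase-hex fields: version (2 chars), trace-id (32), span-id (16), and flags (2). A standalone sketch of the validate-then-format approach, with a matching parser (free functions here rather than the SDK's `TraceContext` method):

```python
import re

# version 00, lowercase hex, fixed field widths per the W3C Trace Context spec
TRACEPARENT_RE = re.compile(r"^00-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$")

def to_traceparent(trace_id, span_id, trace_flags):
    if not all([trace_id, span_id, trace_flags]):
        raise ValueError("Cannot convert incomplete trace context to traceparent")
    return f"00-{trace_id}-{span_id}-{trace_flags}"

def parse_traceparent(header: str):
    match = TRACEPARENT_RE.match(header)
    if match is None:
        raise ValueError(f"Invalid traceparent: {header!r}")
    return match.groups()  # (trace_id, span_id, trace_flags)

header = to_traceparent("0af7651916cd43dd8448eb211c80319c", "b7ad6b7169203331", "01")
print(header)  # 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01

try:
    to_traceparent(None, "b7ad6b7169203331", "01")
except ValueError as exc:
    print("rejected:", exc)
```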
Related: #862 and full Python support from the roadmap.
This is a POC and discussion PR for full Python worker support. I leveraged Claude Code (supervised, reviewed and tested) to create a quick POC to validate the approach before investing significant time, since I wasn't sure if you'd be okay with this direction. Sorry about the PR size, but this was the minimum version needed to make it functional.
I used gRPC for Node↔Python IPC, which opens the door for more runtimes like Go, Ruby, etc.
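To make the IPC shape concrete: the coordinator and worker exchange typed messages (task run requests, completions, logs) over the channel. A dependency-free sketch of a worker-side dispatcher, using plain dicts in place of the generated protobuf classes — field names here are illustrative assumptions, not the actual schema:

```python
import json

def handle_message(msg: dict) -> dict:
    """Toy worker-side dispatcher: receive a task run request, return a completion."""
    if msg["type"] == "TASK_RUN":
        payload = json.loads(msg["payload"])
        # ...look up and execute the registered task here...
        output = {"echo": payload}
        return {
            "type": "TASK_RUN_COMPLETED",
            "version": "v1",
            "completion": {
                "id": msg["runId"],
                "ok": True,
                "output": json.dumps(output),  # payloads cross the wire as JSON strings
            },
        }
    raise ValueError(f"unknown message type: {msg['type']}")

request = {"type": "TASK_RUN", "runId": "run_123", "payload": json.dumps({"name": "world"})}
response = handle_message(request)
print(response["completion"]["id"])  # run_123
```

Because the transport only moves typed messages back and forth, any runtime that can speak the protocol (Go, Ruby, ...) could implement the same loop.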
I'm aware it needs more polishing, cleanup, testing, and additional features to be production-ready. I'm open to discussion and feedback. Please let me know what you think!
Feel free to close this if it doesn't fit your vision or approach, but I hope we can make it work, because I believe full Python support would benefit the community. I've found the current Python support very limited and hard to work with in complex workflows.
What works? (only dev mode)
What needs to be done?
Some flows to visualize the differences:
Build Time:
Index Phase
Execution Phase
✅ Checklist
Testing
Tested with references/hello-world-python:
Changelog
Added Python worker support with gRPC communication.
Screenshots
Indexed tasks:

Samples from running the hello-task-v2