feat: remove Roomote Control from extension #11271
Remove all Roomote Control (remote control) functionality:

- Remove BridgeOrchestrator and the entire bridge directory from @roo-code/cloud
- Remove remoteControlEnabled and featureRoomoteControlEnabled from extension state
- Remove extensionBridgeEnabled from CloudUserInfo and user settings
- Remove roomoteControlEnabled from organization/user feature schemas
- Remove enableBridge from Task and ClineProvider
- Remove the remote control toggle from the CloudView UI
- Remove the remoteControlEnabled message handler
- Remove the extension bridge disconnect on logout/deactivate
- Update CloudTaskButton to show for all logged-in users
- Remove remote control translation strings from all locales
- Update all related tests

CLO-765
All previously flagged issues have been resolved. Lockfile update correctly removes
Mention @roomote in a comment to request specific changes to this pull request or fix all unresolved issues.
Co-authored-by: Roo Code <roomote@roocode.com>
Migrates the IO Intelligence provider from the legacy BaseOpenAiCompatibleProvider (direct openai SDK) to OpenAICompatibleHandler (Vercel AI SDK).

- Extends OpenAICompatibleHandler instead of BaseOpenAiCompatibleProvider
- Uses getModelParams for model parameter resolution
- Updates tests to mock the ai module's streamText/generateText
* refactor: migrate featherless provider to AI SDK

* fix: merge consecutive same-role messages in featherless R1 path

  convertToAiSdkMessages does not merge consecutive same-role messages the way convertToR1Format did. When the system prompt is prepended as a user message and the conversation already starts with a user message, DeepSeek R1 can reject the request. Add a mergeConsecutiveSameRoleMessages helper that collapses adjacent Anthropic messages sharing the same role before AI SDK conversion. Includes a test that verifies no two successive messages share a role.
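The merge described above can be sketched roughly like this (a minimal sketch with simplified types; the real helper operates on Anthropic content blocks rather than plain strings):

```typescript
type Role = "user" | "assistant";

interface Message {
  role: Role;
  content: string;
}

// Collapse adjacent messages that share a role so the conversation
// strictly alternates user/assistant, which R1-style models expect.
function mergeConsecutiveSameRoleMessages(messages: Message[]): Message[] {
  const merged: Message[] = [];
  for (const msg of messages) {
    const last = merged[merged.length - 1];
    if (last && last.role === msg.role) {
      // Join the two turns; the real code concatenates content blocks.
      last.content = `${last.content}\n${msg.content}`;
    } else {
      merged.push({ ...msg });
    }
  }
  return merged;
}
```

With this shape, prepending the system prompt as a user turn ahead of an existing user turn yields a single merged user message instead of two consecutive ones.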
…lent temperature overrides (#11218)

* fix: DeepSeek temperature defaulting to 0 instead of 0.3

  Pass defaultTemperature: DEEP_SEEK_DEFAULT_TEMPERATURE to getModelParams() in DeepSeekHandler.getModel() so the correct default temperature (0.3) is used when no user configuration is provided. Closes #11194

* refactor: make defaultTemperature required in getModelParams

  Make the defaultTemperature parameter required in getModelParams() instead of defaulting to 0. This prevents providers with their own non-zero default temperature (like DeepSeek's 0.3) from being silently overridden by the implicit 0 default. Every provider now explicitly declares its temperature default, making the temperature resolution chain clear: user setting → model default → provider default

---------
Co-authored-by: Roo Code <roomote@roocode.com>
Co-authored-by: daniel-lxs <ricciodaniel98@gmail.com>
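The resolution chain amounts to a nullish-coalescing cascade; a minimal sketch (the parameter names here are illustrative, not the actual getModelParams signature):

```typescript
interface TemperatureOptions {
  userTemperature?: number;         // explicit user setting, if any
  modelDefaultTemperature?: number; // per-model default, if the model defines one
  defaultTemperature: number;       // provider-level default, now a required argument
}

// user setting → model default → provider default.
// `??` (not `||`) so an explicit user value of 0 is respected.
function resolveTemperature(opts: TemperatureOptions): number {
  return (
    opts.userTemperature ??
    opts.modelDefaultTemperature ??
    opts.defaultTemperature
  );
}
```

Making the last argument required is what forces each provider to state its own default instead of inheriting an implicit 0.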
* feat: migrate Bedrock provider to AI SDK

  Replace the raw AWS SDK (@aws-sdk/client-bedrock-runtime) Bedrock handler with the Vercel AI SDK (@ai-sdk/amazon-bedrock). Reduces the provider from 1,633 lines to 575 lines (65% reduction).

  Key changes:
  - Use streamText()/generateText() instead of ConverseStreamCommand/ConverseCommand
  - Use createAmazonBedrock() with native auth (access key, secret, session, profile via credentialProvider, API key, VPC endpoint as baseURL)
  - Reasoning config via providerOptions.bedrock.reasoningConfig
  - Anthropic beta headers via providerOptions.bedrock.anthropicBeta
  - Thinking signature captured from providerMetadata.bedrock.signature on reasoning-delta stream events
  - Thinking signature round-tripped via providerOptions.bedrock.signature on reasoning parts in convertToAiSdkMessages()
  - Redacted thinking captured from providerMetadata.bedrock.redactedData
  - isAiSdkProvider() returns true for reasoning block preservation
  - Keep: getModel, ARN parsing, cross-region inference, cost calculation, service tier pricing, 1M context beta

  Tests: 83 tests skipped (they mock old AWS SDK internals and need a rewrite for AI SDK mocking). 106 tests pass, 0 fail.
* fix: address review feedback for Bedrock AI SDK migration

  - Wire usePromptCache into the AI SDK via providerOptions.bedrock.cachePoint on the system prompt and the last two user messages
  - Remove the debug logger.info that fired on every stream event with providerMetadata
  - Tighten isThrottlingError to match 'rate limit' instead of broad 'rate'/'limit' substrings that false-positive on context length errors
  - Use the shared handleAiSdkError utility for consistent error handling with status code preservation for retry logic

* fix: bedrock AI SDK migration - fix usage metrics, rewrite tests, remove dead code

  - Fix reasoningTokens always 0 (usage.details?.reasoningTokens → usage.reasoningTokens)
  - Fix cacheReadInputTokens always 0 (read from usage.inputTokenDetails instead of providerMetadata)
  - Fix invokedModelId not extracted for prompt router cost calculation
  - Rewrite all 6 skipped bedrock test suites for the AI SDK mocking pattern (140 tests pass)
  - Remove dead code: bedrock-converse-format.ts, cache-strategy/ (6 files, ~2700 lines)

* chore: remove dead @anthropic-ai/bedrock-sdk dep and stale AWS SDK mocks

* chore: update pnpm-lock.yaml after removing @anthropic-ai/bedrock-sdk

* fix: compute cache point indices from original Anthropic messages before AI SDK conversion

  The previous approach naively targeted the last 2 user messages in the post-conversion AI SDK array, but convertToAiSdkMessages() splits user messages containing tool_results into separate tool + user messages, causing cache points to land on the wrong messages (tiny text fragments instead of the intended meaty user turns). Now we identify the last 2 user messages in the original Anthropic message array (matching the Anthropic provider's caching strategy) and build a parallel-walk mapping to apply cachePoint to the correct corresponding AI SDK messages.
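The isThrottlingError tightening can be illustrated with a sketch (the phrase list here is illustrative; the real check may also match provider-specific messages):

```typescript
// Match explicit throttling phrases only. A broad 'rate'/'limit'
// substring check false-positives on context-length errors such as
// "input exceeds the model token limit", which must NOT be retried
// as throttling.
function isThrottlingError(message: string): boolean {
  const m = message.toLowerCase();
  return (
    m.includes("rate limit") ||
    m.includes("throttl") || // "throttled", "throttling"
    m.includes("too many requests")
  );
}
```

The point of the change is the negative case: a context-length error mentioning "limit" no longer triggers the throttling retry path.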
* perf: optimize prompt caching with 3-point message strategy + anchor for 20-block window

  The previous approach only cached the last 2 user messages (using 2 of the 4 available cache checkpoints for messages). This left significant cache savings on the table for longer conversations. The new strategy uses up to 3 message cache points (+ 1 system = 4 total):

  - Last user message: write to cache for the next request
  - Second-to-last user message: read from cache for the current request
  - Anchor message at ~1/3 position: ensures the 20-block lookback window from the second-to-last breakpoint hits a stable cache entry, covering all assistant/tool messages in the middle of the conversation

  Also extracted the parallel-walk mapping logic into a reusable applyCachePointsToAiSdkMessages() helper method. Industry benchmarks show 70-95% token cache rates are achievable; this change should significantly improve our 39% baseline for longer multi-turn conversations.

* chore: remove stale bedrock-sdk external, fix arnInfo property name, remove unused exports

---------
Co-authored-by: daniel-lxs <ricciodaniel98@gmail.com>
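The index-selection step that both cache-point commits depend on, finding the last user messages in the original Anthropic array before conversion splits them, can be sketched as follows (hypothetical helper, simplified types):

```typescript
interface AnthropicMessage {
  role: "user" | "assistant";
  content: unknown;
}

// Indices of the last `count` user messages in the ORIGINAL Anthropic
// array. Computing them here, before convertToAiSdkMessages() splits
// tool_result-bearing user messages into separate tool + user messages,
// keeps cache points on the intended conversational turns.
function lastUserMessageIndices(
  messages: AnthropicMessage[],
  count: number,
): number[] {
  const indices: number[] = [];
  for (let i = messages.length - 1; i >= 0 && indices.length < count; i--) {
    if (messages[i].role === "user") indices.push(i);
  }
  return indices.reverse(); // ascending order for the parallel walk
}
```

The parallel-walk mapping then translates these original indices to the corresponding post-conversion AI SDK messages; that part is omitted here.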
* feat: add disabledTools setting to globally disable native tools

  Add a disabledTools field to GlobalSettings that allows disabling specific native tools by name. This enables cloud agents to be configured with restricted tool access.

  Schema:
  - Add disabledTools: z.array(toolNamesSchema).optional() to globalSettingsSchema
  - Add disabledTools to organizationDefaultSettingsSchema.pick()
  - Add disabledTools to the ExtensionState Pick type

  Prompt generation (tool filtering):
  - Add disabledTools to the BuildToolsOptions interface
  - Pass disabledTools through filterSettings to filterNativeToolsForMode()
  - Remove disabled tools from the allowedToolNames set in filterNativeToolsForMode()

  Execution-time validation (safety net):
  - Extract disabledTools from state in presentAssistantMessage
  - Convert disabledTools to the toolRequirements format for validateToolUse()

  Wiring:
  - Add disabledTools to ClineProvider getState() and getStateToPostToWebview()
  - Pass disabledTools to all buildNativeToolsArrayWithRestrictions() call sites

  EXT-778

* fix: check toolRequirements before ALWAYS_AVAILABLE_TOOLS

  Moves the toolRequirements check before the ALWAYS_AVAILABLE_TOOLS early-return in isToolAllowedForMode(). This ensures disabledTools can block always-available tools (switch_mode, new_task, etc.) at execution time, making the validation layer consistent with the filtering layer.
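The filtering step in the prompt-generation layer can be sketched like this (the function name and set-based shape are illustrative, not the actual filterNativeToolsForMode internals):

```typescript
// Remove globally disabled tools from the set of tools allowed for the
// current mode. Called during prompt generation; execution-time
// validation repeats the check as a safety net.
function applyDisabledTools(
  allowedToolNames: Set<string>,
  disabledTools: string[] | undefined,
): Set<string> {
  if (!disabledTools?.length) return allowedToolNames;
  const filtered = new Set(allowedToolNames);
  for (const name of disabledTools) filtered.delete(name);
  return filtered;
}
```

Doing the subtraction on a copy keeps the original allowed set untouched, so repeated prompt builds with different settings stay independent.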
Add GetCommands, GetModes, and GetModels to the IPC protocol so external clients can fetch slash commands, available modes, and Roo provider models without going through the internal webview message channel. Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…eatures grid (#11280) Co-authored-by: Roo Code <roomote@roocode.com>
* refactor: migrate baseten provider to AI SDK

* refactor(baseten): migrate to native @ai-sdk/baseten package

  Replace OpenAICompatibleHandler with the dedicated @ai-sdk/baseten package, following the same pattern used by other native AI SDK providers (groq, deepseek, etc.). This uses createBaseten() for provider initialization and extends BaseProvider directly instead of the generic OpenAI-compatible handler.
…11285) Co-authored-by: Roo Code <roomote@roocode.com>
* refactor: migrate zai provider to AI SDK using zhipu-ai-provider

* Update src/api/providers/zai.ts

* fix: remove unused zai-format.ts (knip)

---------
Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
…#11295)

* feat: add lock toggle to pin API config across all modes in workspace

  Add a lock/unlock toggle inside the API config selector popover (next to the settings gear) that, when enabled, applies the selected API configuration to all modes in the current workspace.

  - Add lockApiConfigAcrossModes to ExtensionState and WebviewMessage types
  - Store the setting in workspaceState (per-workspace, not global)
  - When locked, activateProviderProfile sets the config for all modes
  - Lock icon in the ApiConfigSelector popover bottom bar next to the gear
  - Full i18n: English + 17 locale translations (all mention workspace scope)
  - 9 new tests: 2 ClineProvider, 2 handler, 5 UI (77 total pass)

* refactor: replace write-fan-out with read-time override for lock API config

  The original lock implementation used setModeConfig() fan-out to write the locked config to ALL modes globally. Since the lock flag lives in workspace-scoped workspaceState but modeApiConfigs are in global secrets, this caused cross-workspace data destruction. Replaced with read-time guards:

  - handleModeSwitch: early return when the lock is on (skip the per-mode config load)
  - createTaskWithHistoryItem: skip mode-based config restoration under lock
  - activateProviderProfile: removed the fan-out block
  - lockApiConfigAcrossModes handler: simplified to flag + state post only
  - Fixed a pre-existing workspaceState mock gap in ClineProvider.spec.ts and ClineProvider.sticky-profile.spec.ts
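The read-time guard pattern can be sketched as a small resolver (names and signature are illustrative; the real logic is spread across handleModeSwitch and createTaskWithHistoryItem):

```typescript
// Read-time override: when the workspace lock is on, keep the active
// profile instead of loading the per-mode config. Nothing is ever
// written to the per-mode mapping, so other workspaces are unaffected.
function resolveProfileForModeSwitch(
  lockApiConfigAcrossModes: boolean,
  activeProfileId: string,
  modeApiConfigs: Record<string, string | undefined>,
  mode: string,
): string {
  if (lockApiConfigAcrossModes) {
    return activeProfileId; // skip the per-mode config load entirely
  }
  return modeApiConfigs[mode] ?? activeProfileId;
}
```

Because the lock only changes what is read, turning it off restores the previous per-mode behavior with no data migration.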
* feat(history): render nested subtasks as recursive tree

* fix(lockfile): resolve missing ai-sdk provider entry

* fix: address review feedback - dedupe countAll, increase SubtaskRow max-h

  - HistoryView: replace the local countAll with the imported countAllSubtasks from types.ts
  - SubtaskRow: increase nested children max-h from 500px to 2000px to match TaskGroupItem
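Counting subtasks in a recursive tree reduces to a one-line recursion; a sketch under an assumed node shape (the real type lives in types.ts):

```typescript
// Hypothetical node shape for the history tree.
interface TaskNode {
  id: string;
  children: TaskNode[];
}

// Count every descendant of a task, at any nesting depth.
function countAllSubtasks(node: TaskNode): number {
  return node.children.reduce(
    (sum, child) => sum + 1 + countAllSubtasks(child),
    0,
  );
}
```

Sharing one such helper is exactly what the dedupe in the review feedback asks for: the count shown on a collapsed group and the count used inside HistoryView cannot drift apart.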
…dle empty streams (#11303)

* fix: validate Gemini thinkingLevel against model capabilities and handle empty streams

  getGeminiReasoning() now validates the selected effort against the model's supportsReasoningEffort array before sending it as thinkingLevel. When a stale settings value (e.g. 'medium' from a different model) is not in the supported set, it falls back to the model's default reasoningEffort. GeminiHandler.createMessage() now tracks whether any text content was yielded during streaming and handles NoOutputGeneratedError gracefully instead of surfacing the cryptic 'No output generated' error.

* fix: guard thinkingLevel fallback against 'none' effort and add i18n TODO

  The array-validation fallback in getGeminiReasoning() now only triggers when the selected effort IS a valid Gemini thinking level but is not in the model's supported set. Values like 'none' (an explicit no-reasoning signal) are no longer overridden by the model default. Also adds a TODO for moving the empty-stream message to i18n.

* fix: track tool_call_start in hasContent to avoid false empty-stream warning

  Tool-only responses (no text) are valid content. Without this, agentic tool-call responses would incorrectly trigger the empty-response warning message.
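The two-step guard (validate against the model's supported set, but leave explicit non-level values alone) can be sketched like this (the level list is illustrative; check the Gemini API docs for the real values):

```typescript
// Illustrative set of thinking levels; not the authoritative Gemini list.
const THINKING_LEVELS = ["low", "medium", "high"];

// Fall back to the model default only when the selected effort is a real
// thinking level that this particular model does not support. Explicit
// signals such as "none" pass through untouched.
function resolveThinkingLevel(
  selected: string | undefined,
  supported: string[],
  modelDefault: string,
): string | undefined {
  if (selected === undefined) return undefined;
  if (THINKING_LEVELS.includes(selected) && !supported.includes(selected)) {
    return modelDefault; // stale setting carried over from another model
  }
  return selected;
}
```

This is why the second commit was needed: without the THINKING_LEVELS membership check, 'none' would fail the supported-set test and be silently upgraded to the model default.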
* chore(cli): prepare release v0.0.52

* Update CHANGELOG for build cleanup and Linux support

  Removed an unused dependency from the build configuration and added Linux support.
…ooCodeInc/Roo-Code into feature/CLO-765-remove-roomote-control
@roomote apply the feedback

Removed dead
…ontrol' into feature/CLO-765-remove-roomote-control
Generated with ❤️ by ellipsis.dev

🚀 Preview deployed! Your changes have been deployed to Vercel. Preview URL: https://roo-code-website-cv0q7coew-roo-code.vercel.app This preview will be updated automatically when you push new commits to this PR.

@roomote apply the feedback and make sure tests pass
Fixed the reported issues. All local checks passed.
…nd stale BridgeOrchestrator mock
…ontrol' into feature/CLO-765-remove-roomote-control
All previously flagged issues have been resolved. One remaining cleanup item carried over from earlier reviews.
Remove all Roomote Control (remote control) functionality from the extension.
Changes
Types (`packages/types/`)
- Remove `extensionBridgeEnabled` from the `CloudUserInfo` interface
- Remove `roomoteControlEnabled` from `organizationFeaturesSchema` and `userFeaturesSchema`
- Remove `extensionBridgeEnabled` from `userSettingsConfigSchema`
- Remove `remoteControlEnabled` and `featureRoomoteControlEnabled` from `ExtensionState`
- Remove `"remoteControlEnabled"` from webview message types
- Remove the `ExtensionBridgeEventName` enum, `extensionBridgeEventSchema`, and all bridge-related types

Cloud Package (`packages/cloud/`)
- Remove the `bridge/` directory (BridgeOrchestrator, ExtensionChannel, TaskChannel, SocketTransport, BaseChannel)
- Update `StaticSettingsService` (remove roomoteControlEnabled and extensionBridgeEnabled)
- Update `StaticTokenAuthService` (remove extensionBridgeEnabled from user info)
- Update `WebAuthService` (remove extensionBridgeEnabled logic, `isExtensionBridgeEnabledForOrganization`, and the dead `getOrganizationMetadata` method)
- Remove the `socket.io-client` dependency from `package.json`

Extension (`src/`)
- Remove the bridge disconnect from `extension.ts` `deactivate`
- Remove `remoteControlEnabled()` calls from auth/settings handlers
- Remove the `ClineProvider.remoteControlEnabled()` method entirely
- Remove `remoteControlEnabled` and `featureRoomoteControlEnabled` from `getState()`
- Remove `enableBridge` from the Task class and all bridge subscription logic
- Remove the `"remoteControlEnabled"` message handler from `webviewMessageHandler`

WebView UI (`webview-ui/`)
- Remove the `CloudTaskButton` component and its tests entirely (was the QR-code / "open in cloud" button)
- Remove the `CloudTaskButton` import from `TaskActions`
- Remove `remoteControlEnabled` and `featureRoomoteControlEnabled` from ExtensionStateContext
- Remove the `openInCloud` and `openInCloudIntro` translation keys from all 18 locale `chat.json` files
- Remove remote control strings from the locale `cloud.json` files
- Remove the `qrcode` and `qrcode.react` packages from `package.json`

Documentation (`README.md`, `locales/*/README.md`)
- Remove Roomote Control references

Tests
- Remove `CloudTaskButton.spec.tsx`

97 files changed, 87 insertions(+), 3,471 deletions(-)