Make chat and embed interfaces provider-agnostic using pydantic_ai #200
gvanrossum merged 24 commits into main from …
Conversation
pyproject.toml and the lockfile were not touched, so the dependency on pydantic_ai is missing.
tools/ingest_vtt.py (Outdated)
  print("Setting up conversation settings...")
  try:
-     embedding_model = AsyncEmbeddingModel(model_name=embedding_name)
+     embedding_model = create_embedding_model(model_name=embedding_name)
The pydantic_ai API uses a constructor call to instantiate models or agents, not a method, so this looks kind of strange.
e.g.
model = os.getenv('PYDANTIC_AI_MODEL', 'openai:gpt-5.2')
print(f'Using model: {model}')
agent = Agent(model, output_type=MyModel)
> the pydantic_ai API uses a ctor call to instantiate models or agents and not a method so this looks kind of strange.
I don't understand your comment. This is the (new) idiomatic way to create an embedding model. Unless there's already an embedding model available in the settings?
Fixed.
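As an illustration of the factory approach discussed above: a minimal, hypothetical sketch of how a `create_embedding_model()`-style factory might split a `"provider:model"` spec before delegating to a provider registry. `parse_model_spec` and the `"openai"` default are assumptions for this sketch, not the PR's actual code.

```python
def parse_model_spec(spec: str, default_provider: str = "openai") -> tuple[str, str]:
    """Split a "provider:model" spec into (provider, model_name).

    A bare model name (no colon) falls back to the default provider.
    """
    provider, sep, model_name = spec.partition(":")
    if not sep:  # no colon: the whole spec is a model name
        return default_provider, spec
    return provider, model_name


print(parse_model_spec("anthropic:claude-sonnet-4-20250514"))
# ('anthropic', 'claude-sonnet-4-20250514')
print(parse_model_spec("text-embedding-3-small"))
# ('openai', 'text-embedding-3-small')
```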
Found another issue, not from this PR, but we should fix it before release: src\typeagent\knowpro\interfaces_search.py. Will fix this in the bugfix branch.
LGTM, please merge to main.
Support multiple AI providers via pydantic_ai

- … create_embedding_model(), and configure_models() accept "provider:model" specs (e.g. "anthropic:claude-sonnet-4-20250514"), delegating to pydantic_ai's 25+ provider registry.
- … monolithic AsyncEmbeddingModel class; CachingEmbeddingModel wraps any IEmbedder with cache logic.
- … create_typechat_model(), hardcoded embedding_size, max_retries, and model_registry.py.
- … env var; create_test_embedding_model() for tests.
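The CachingEmbeddingModel idea in the summary above can be sketched as a wrapper that embeds only cache misses. This is a hypothetical illustration: the `IEmbedder` protocol shape, `embed` signature, and `FakeEmbedder` test double are assumptions, not the PR's actual API.

```python
import asyncio
from typing import Protocol


class IEmbedder(Protocol):
    async def embed(self, texts: list[str]) -> list[list[float]]: ...


class CachingEmbeddingModel:
    """Wraps any IEmbedder so each distinct text is embedded at most once."""

    def __init__(self, inner: IEmbedder) -> None:
        self._inner = inner
        self._cache: dict[str, list[float]] = {}

    async def embed(self, texts: list[str]) -> list[list[float]]:
        # Collect unique cache misses, embed them once, then answer
        # every request (including duplicates) from the cache.
        misses = list(dict.fromkeys(t for t in texts if t not in self._cache))
        if misses:
            for text, vec in zip(misses, await self._inner.embed(misses)):
                self._cache[text] = vec
        return [self._cache[t] for t in texts]


class FakeEmbedder:
    """Test double counting how many texts were actually embedded."""

    def __init__(self) -> None:
        self.calls = 0

    async def embed(self, texts: list[str]) -> list[list[float]]:
        self.calls += len(texts)
        return [[float(len(t))] for t in texts]


fake = FakeEmbedder()
model = CachingEmbeddingModel(fake)
asyncio.run(model.embed(["a", "bb", "a"]))
asyncio.run(model.embed(["bb", "ccc"]))
print(fake.calls)  # only the 3 unique texts were embedded
```

A test double like FakeEmbedder is also roughly what a create_test_embedding_model() helper would return, so tests never hit a real provider.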