Add support for LiteLLM in `llm/response_api.py` for better extensibility and easier maintenance as newer LLMs appear.

- [ ] Add a new model class for LiteLLM, similar to `ChatCompletionModel`.
- [ ] Support Anthropic-specific parameters (cache breakpoints, etc.).
- [ ] Support other parameters such as reasoning effort.
- [ ] Add cost tracking in `tracking.py` for LiteLLM, using the LiteLLM [helper functions](https://docs.litellm.ai/docs/completion/prompt_caching#calculate-cost).
- [ ] Write tests using the `mock` param of LiteLLM completion.
- [ ] Test function-calling support.
- [ ] Replicate the OAI/Anthropic agents using LiteLLM and compare outputs.
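For the cache-breakpoint item, a minimal sketch of what the parameter handling could look like. The helper name `apply_cache_breakpoints` and its index-based interface are assumptions, not existing code; the `cache_control: {"type": "ephemeral"}` content block is the format Anthropic's prompt caching expects and that LiteLLM forwards through `litellm.completion` messages:

```python
from copy import deepcopy


def apply_cache_breakpoints(messages: list[dict], breakpoints: list[int]) -> list[dict]:
    """Return a copy of `messages` with Anthropic prompt-caching markers.

    Each index in `breakpoints` gets a `cache_control` entry on the last
    content block of that message, so everything up to and including it
    can be served from the prompt cache on subsequent calls.
    (Hypothetical helper; shown for illustration only.)
    """
    marked = deepcopy(messages)
    for i in breakpoints:
        content = marked[i]["content"]
        # Anthropic requires structured content blocks, so promote plain
        # string content to a single text block first.
        if isinstance(content, str):
            content = [{"type": "text", "text": content}]
        content[-1]["cache_control"] = {"type": "ephemeral"}
        marked[i]["content"] = content
    return marked


# Usage: cache the (long) system prompt, leave the user turn uncached.
messages = [
    {"role": "system", "content": "You are a helpful coding agent..."},
    {"role": "user", "content": "Refactor tracking.py"},
]
marked = apply_cache_breakpoints(messages, breakpoints=[0])
```

The marked messages could then be passed straight to `litellm.completion(...)`; keeping the marker injection in a standalone helper also makes it easy to unit-test without hitting any provider.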