Fix/langfuse trace usage cost #5628
Conversation
…ad; pass full output to analytics so Langfuse can display tokens and cost

- Map Flowise usage metadata to Langfuse usage fields (promptTokens, completionTokens, totalTokens)
- Include model from response metadata when available for cost mapping
- Send complete output objects from LLM/Agent/ConditionAgent to analytics
- No functional behavior change outside analytics payloads
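The mapping described above can be sketched as follows. This is an illustrative sketch, not the actual Flowise implementation: the helper name `mapUsageToLangfuse` and the input field names (`input_tokens`, `output_tokens`, `total_tokens`) are assumptions.

```typescript
// Assumed shape of Flowise/LangChain usage metadata (illustrative only).
interface FlowiseUsage {
    input_tokens?: number
    output_tokens?: number
    total_tokens?: number
}

// Langfuse usage fields as named in the commit message.
interface LangfuseUsage {
    promptTokens?: number
    completionTokens?: number
    totalTokens?: number
}

// Map Flowise usage metadata to Langfuse usage fields.
// Uses `??` rather than `||` so a legitimate count of 0 is preserved.
function mapUsageToLangfuse(usage?: FlowiseUsage): LangfuseUsage | undefined {
    if (!usage) return undefined
    return {
        promptTokens: usage.input_tokens ?? undefined,
        completionTokens: usage.output_tokens ?? undefined,
        totalTokens: usage.total_tokens ?? (usage.input_tokens ?? 0) + (usage.output_tokens ?? 0)
    }
}
```

When `total_tokens` is absent, the sketch derives it from the prompt and completion counts so Langfuse can still compute cost.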
Track usage and cost information in Langfuse traces even when LLM calls result in errors. This ensures complete cost/usage analytics for all LLM operations, including failed attempts.

- Extract usage metadata (promptTokens, completionTokens, totalTokens) from error objects
- Extract model information from response metadata
- Set error level to 'ERROR' in the Langfuse generation end payload
- Maintain consistency with onLLMEnd handling of usage metadata
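The error-path handling above can be sketched like this. The helper `buildErrorEndPayload` and the error-object shape are hypothetical, assumed for illustration; the `level: 'ERROR'` and usage field names come from the commit message.

```typescript
// Assumed error shape: some LLM errors carry partial usage/model metadata.
interface LLMError extends Error {
    usage_metadata?: { input_tokens?: number; output_tokens?: number; total_tokens?: number }
    response_metadata?: { model_name?: string }
}

// Build the Langfuse generation-end payload for a failed LLM call,
// mirroring the usage handling of the onLLMEnd path.
function buildErrorEndPayload(error: LLMError) {
    const usage = error.usage_metadata
    return {
        level: 'ERROR' as const,
        statusMessage: error.message,
        model: error.response_metadata?.model_name,
        usage: usage
            ? {
                  promptTokens: usage.input_tokens,
                  completionTokens: usage.output_tokens,
                  totalTokens: usage.total_tokens
              }
            : undefined
    }
}
```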
Summary of Changes

Hello @vellanki-santhosh, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request addresses a bug related to Langfuse trace usage cost by enhancing the data sent to Langfuse for LLM interactions. The changes ensure that detailed usage metrics and model information are accurately captured and reported, providing better observability and cost analysis for LLM applications.
Code Review
This pull request aims to fix an issue with Langfuse cost and usage tracking by passing more detailed output objects to the analytics handler. The changes are generally in the right direction. However, I've identified a redundant function call in Agent.ts that should be removed for better performance and maintainability. More critically, there's a bug in handler.ts where token counts of 0 are not being correctly handled, which would lead to inaccurate usage tracking. I've provided suggestions to fix these issues. Additionally, there is some code duplication between the onLLMEnd and onLLMError methods in handler.ts that could be a candidate for future refactoring.
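The zero-token bug flagged above is, most likely, the classic falsy-value pitfall with `||` in TypeScript. A minimal illustration (function names are hypothetical, not the actual `handler.ts` code):

```typescript
// Buggy variant: `||` treats every falsy value as missing,
// so a legitimate token count of 0 is silently dropped.
function pickTokensBuggy(raw?: number): number | undefined {
    return raw || undefined
}

// Fixed variant: `??` falls back only on null/undefined, so 0 survives.
function pickTokensFixed(raw?: number): number | undefined {
    return raw ?? undefined
}
```

In a usage payload, dropping a 0 means Langfuse shows the field as absent instead of recording a genuine zero-token count, skewing cost analytics.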
Thanks for the review! 🙏
Thanks for the feedback!
I tried to fix the bug, please review the changes.