
QA1: support running llm_hub tool#282

Merged
martenson merged 5 commits into main from llm_hub
Feb 26, 2026

Conversation

@ljocha
Collaborator

@ljocha ljocha commented Feb 16, 2026

All the config changes required to run the llm_hub tool:

  • TPV configured to run this tool locally
  • tool data tables extended to pick up the tool-specific data table
  • list of available LLMs generated at installation time by calling the OpenAI API /models endpoint
  • API endpoint and credentials stored in a config file whose path is passed to the tool

Currently works on galaxy-qa1; placeholder credentials are set in group_vars/galaxyservers.yml. Wherever this is deployed, the real credentials should be provided in secrets.yml.
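The model-list step above (calling the OpenAI API /models endpoint at installation time) can be sketched as follows. The helper name and the sample payload are illustrative, not taken from this PR, but the response shape follows the OpenAI /models list format:

```python
import json


def extract_model_ids(models_response: str) -> list[str]:
    """Parse the JSON body returned by an OpenAI-compatible /models
    endpoint and return the sorted list of model identifiers."""
    payload = json.loads(models_response)
    return sorted(item["id"] for item in payload.get("data", []))


# Sample body in the OpenAI /models list format (illustrative models):
sample = (
    '{"object": "list", "data": ['
    '{"id": "gpt-4o", "object": "model"}, '
    '{"id": "gpt-4o-mini", "object": "model"}]}'
)
print(extract_model_ids(sample))  # → ['gpt-4o', 'gpt-4o-mini']
```

In the actual deployment this parsing would run once during the Ansible installation play, with the endpoint URL and credentials read from the config file mentioned above.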

Member

@martenson martenson left a comment


deployed on qa1, tested

@martenson
Member

as one of the next steps I believe the local role should be extracted, published to https://galaxy.ansible.com, and installed from there
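Once the role is published, installing it from Ansible Galaxy would look roughly like the requirements file below; the namespace and role name are hypothetical placeholders, since the role has not been extracted yet:

```yaml
# requirements.yml — pull the extracted role from galaxy.ansible.com
# (namespace/name below are placeholders, not a real published role)
roles:
  - name: usegalaxy.llm_hub
    version: 1.0.0
```

which would then be installed with `ansible-galaxy install -r requirements.yml`.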

@martenson martenson changed the title support running llm_hub tool QA1: support running llm_hub tool Feb 26, 2026
@martenson martenson merged commit b64352e into main Feb 26, 2026
@martenson martenson deleted the llm_hub branch February 26, 2026 11:30