Store information about vector database embedding model
Is your feature request related to a problem? Please describe. Currently, the whoami packages are created using OpenAI's embedding models. Using those packages with other vendors is impossible.
Describe the solution you'd like Internal tools used for RAG should verify that the configured embedding model is compatible with the vector database.
Describe alternatives you've considered Creating a log file with information about the generation (vendor, model, etc.)
@maciejmajek
I think the current development branch allows using embedding models other than OpenAI's. See:
- config.toml
- The Turtlebot demo already uses local embeddings in the example (point 3 here)
Yup, the user can configure the vendor now. However, if the config (embedding model) changes after the vector database is created, queries won't work, because the embeddings in the database were produced by a different model.
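One way to sketch the proposed fix: record the vendor and model name next to the database when it is built, and refuse to query when the current config disagrees. This is a minimal illustration, not the project's actual API; the file name `embedding_metadata.json` and both function names are hypothetical.

```python
import json
from pathlib import Path

# Hypothetical sidecar file name; any per-database location would do.
METADATA_FILE = "embedding_metadata.json"


def save_embedding_metadata(db_dir: str, vendor: str, model: str) -> None:
    """Record which vendor/model produced the embeddings in this database."""
    path = Path(db_dir)
    path.mkdir(parents=True, exist_ok=True)
    (path / METADATA_FILE).write_text(json.dumps({"vendor": vendor, "model": model}))


def check_embedding_metadata(db_dir: str, vendor: str, model: str) -> None:
    """Raise if the configured model differs from the one used at build time."""
    path = Path(db_dir) / METADATA_FILE
    if not path.exists():
        raise RuntimeError("No embedding metadata found; database may be incompatible.")
    meta = json.loads(path.read_text())
    if (meta["vendor"], meta["model"]) != (vendor, model):
        raise RuntimeError(
            f"Database was built with {meta['vendor']}/{meta['model']}, "
            f"but the config now specifies {vendor}/{model}."
        )
```

With this in place, changing `config.toml` to a different embedding model after the database was created would fail fast with a clear error instead of silently returning nonsense query results.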