Docs: improve organization and instructions on ChatQnA Readme
https://github.com/opea-project/GenAIExamples/edit/main/ChatQnA/README.md
Please re-organize the material to clearly demarcate:
- what the ChatQnA pipeline is
- deployment options: Docker versus Kubernetes or another orchestration option
- deployment hardware options: Xeon, Gaudi, NVIDIA GPU, etc.
For instance, there are instructions on environment variable setup; are they specific to Docker deployments only, or are they also relevant to Kubernetes?
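To make that question concrete, here is a minimal sketch of how the same settings typically differ between the two deployment options. The variable names, chart name, and value keys below are illustrative assumptions, not taken from the README: with Docker Compose the values are usually exported as shell environment variables before `docker compose up`, while on Kubernetes they would normally be supplied through Helm values or a ConfigMap rather than shell exports.

```bash
# Docker Compose deployment (illustrative; assumes the example's compose file
# is present in the current directory and these variable names are used by it):
export host_ip="192.168.1.100"                 # placeholder IP, an assumption
export HUGGINGFACEHUB_API_TOKEN="<your-token>" # placeholder token
docker compose up -d

# Kubernetes deployment: the same settings are not shell exports; they are
# typically passed via Helm values or a ConfigMap instead, e.g.:
#   helm install chatqna <chart> --set global.HUGGINGFACEHUB_API_TOKEN=<your-token>
# (chart name and value key above are assumptions, not taken from the docs)
```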
For Kubernetes with and without GMC, it is confusing when a link points to a different place but its text reads the same as the plain Kubernetes install. Please change the link text appropriately to distinguish the two.
@tomlenth please help on this issue.
@mkbhanda @tomlenth
What is the status now? Could this be closed?
Will check the latest tomorrow.
Hi @mkbhanda and doc team, please help consider this request in the latest doc updates. Thank you. [reminder]
https://github.com/opea-project/GenAIExamples/pull/1755
Hi @mkbhanda and all, thank you very much for raising the issue! The doc team refined the README of ChatQnA and 7 other examples for the 1.3 release: https://opea-project.github.io/latest/GenAIExamples/ChatQnA/README.html. We will improve the READMEs of more examples in a later release. I am closing the issue; please feel free to let us know if you have any further improvement suggestions.