Chun Tao
> Hi @ctao456, thanks for the contribution. The talking avatar with Wav2Lip-GFPGAN looks good. Before reviewing I have a few questions. I notice that you use HPU to run Wav2Lip...
> > to move TEI embedding microservice to CPU
>
> Why? Is TEI-embedding Gaudi utilization too low for it to make sense, or is there some other reason? Please...
@ezelanza Could you try the steps in https://github.com/opea-project/GenAIExamples/blob/v1.0/ChatQnA/docker_compose/intel/hpu/gaudi/README.md? `docker compose up -d` automatically pulls the necessary Docker images from Docker Hub. Our support team has verified that it works.
Hi @lianhao, thanks for pointing this out. I understand the issue. @xiguiw thanks for the reminder. However, the OPEA animation microservice has the following defined input and output datatypes:

```python
@register_microservice(
    name="opea_service@animation",
    ...
```
Hi @lianhao. We investigated encoding the generated mp4 video content as a base64 string, but that would increase the file size by about 33%. And some videos we...
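The ~33% overhead mentioned above can be checked directly: base64 maps every 3 input bytes to 4 output characters. A minimal sketch (the byte string here is just a stand-in for real mp4 content, not the actual microservice payload):

```python
import base64

# Stand-in for raw mp4 video bytes; 3000 bytes chosen so the math is exact.
raw = b"\x00" * 3000

# base64 encodes each 3-byte group as 4 ASCII characters (~33% larger).
encoded = base64.b64encode(raw)

print(len(raw))      # 3000
print(len(encoded))  # 4000 -> 33.3% overhead
```

For payloads whose length is not a multiple of 3, padding with `=` keeps the output length at the next multiple of 4, so the overhead is always at least one third.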