smith-coding
Given pairs of (image, text), I need to detect near duplicates using both features. Pairs: ``` (image1, text1) (image2, text2) ... (imageN, textN) ``` I am thinking of computing embeddings using...
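A minimal sketch of the fused-embedding idea from the question above: score each pair of pairs by a weighted average of per-modality cosine similarities. The embeddings here are toy stand-ins; in practice they would come from an image/text encoder such as CLIP, and `w_img` is a hypothetical tunable weight.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two 1-D vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pair_similarity(img_a, txt_a, img_b, txt_b, w_img=0.5):
    # Fuse the two modalities with a weighted average of per-modality
    # cosine similarities; w_img is an illustrative weight to tune.
    return w_img * cosine(img_a, img_b) + (1 - w_img) * cosine(txt_a, txt_b)

# Toy stand-in embeddings (real ones would come from an encoder like CLIP).
img1, txt1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
img2, txt2 = np.array([0.9, 0.1]), np.array([0.1, 0.9])  # near duplicate of pair 1
img3, txt3 = np.array([0.0, 1.0]), np.array([1.0, 0.0])  # unrelated pair

near_score = pair_similarity(img1, txt1, img2, txt2)
far_score = pair_similarity(img1, txt1, img3, txt3)
print(near_score > far_score)  # True
```

A threshold on the fused score (calibrated on labeled pairs) then yields a duplicate / near-duplicate / not-duplicate decision.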
Given a pair of images, my use case is to detect whether they are duplicates or not: (imageX, imageY) = verdict/score, where verdict = duplicate / not duplicate / near duplicate. How can I use BLIP...
How do I fine-tune BLIP to generate embeddings for new image-text pairs? Can anyone provide code snippets or examples?
I have a couple of questions: a) How can I use CodeGen to extract embeddings for JavaScript and Python code? b) Can I feed it an incomplete JavaScript or Python snippet...
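For question (a), one common recipe with a decoder-only model like CodeGen is to mean-pool the last hidden states over non-padding tokens. The sketch below shows just the pooling step on a stand-in hidden-state array; the assumption is that in real use the states would come from a `transformers` forward pass (e.g. a CodeGen checkpoint with `output_hidden_states=True`).

```python
import numpy as np

def mean_pool(hidden_states, attention_mask):
    # hidden_states: (seq_len, dim); attention_mask: (seq_len,) of 0/1.
    # Average only over real (non-padding) token positions.
    mask = attention_mask[:, None].astype(float)
    return (hidden_states * mask).sum(axis=0) / mask.sum()

# Stand-in for the last hidden layer of a CodeGen forward pass on a
# 3-token sequence with hidden size 2 (toy numbers, not real activations).
hidden = np.array([[1.0, 2.0],
                   [3.0, 4.0],
                   [0.0, 0.0]])
mask = np.array([1, 1, 0])  # last position is padding
emb = mean_pool(hidden, mask)
print(emb)  # [2. 3.]
```

Because pooling ignores padding, the same function also handles incomplete snippets (question b): the model still produces hidden states for whatever prefix it is given.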
I would like to fine-tune the CodeGen model. Can you point me to any documentation on this?
I am trying to apply CLIP to a **very** specific dataset and need to fine-tune it. I am following the steps here: https://github.com/openai/CLIP/issues/83, but cannot figure out...
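The fine-tuning recipe in that issue optimizes CLIP's symmetric contrastive (InfoNCE) objective: each matching (image_i, text_i) pair in a batch is a positive, every other pairing is a negative. A numpy sketch of just the loss, on toy features (the `temperature` value and array shapes are illustrative):

```python
import numpy as np

def clip_loss(image_feats, text_feats, temperature=0.07):
    # Symmetric InfoNCE over a batch: normalize, compute a (batch, batch)
    # similarity matrix, and average image->text and text->image
    # cross-entropies with the diagonal as the positive class.
    img = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    txt = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    logits = img @ txt.T / temperature
    labels = np.arange(len(logits))

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    return (xent(logits) + xent(logits.T)) / 2

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
loss = clip_loss(feats, feats)  # perfectly aligned pairs -> small loss
print(loss > 0)
```

In actual fine-tuning this loss is computed on the model's projected features in PyTorch so gradients flow back into both encoders.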
Given a pair of images, my use case is to detect whether they are duplicates or not: (imageX, imageY) = verdict/score, where verdict = duplicate / not duplicate / near duplicate. How can I use CLIP...
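One simple way to get the verdict/score output described above: embed each image (e.g. with CLIP's image encoder) and map cosine similarity to a label with two thresholds. The threshold values below are assumptions to be calibrated on labeled pairs, and the embeddings are toy stand-ins.

```python
import numpy as np

def verdict(emb_x, emb_y, dup_thr=0.95, near_thr=0.85):
    # Thresholds are illustrative; calibrate them on labeled duplicate pairs.
    a = emb_x / np.linalg.norm(emb_x)
    b = emb_y / np.linalg.norm(emb_y)
    score = float(a @ b)
    if score >= dup_thr:
        label = "duplicate"
    elif score >= near_thr:
        label = "near duplicate"
    else:
        label = "not duplicate"
    return label, score

# Toy embeddings standing in for CLIP image features.
label, score = verdict(np.array([1.0, 0.0]), np.array([1.0, 0.05]))
print(label)  # duplicate
```

This returns both the discrete verdict and the raw score, matching the `(imageX, imageY) = verdict/score` interface in the question.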
I see there are other implementations of CLIP: - @Zasder3 has created a PyTorch Lightning version for training CLIP: https://github.com/Zasder3/train-CLIP - researchers at UW, Google, Stanford, Amazon, Columbia, and...
I was using a Transformer model for a text generation task. However, my input data has relational information: semantic relations between tokens are encoded using different kinds of edges. I cannot...
I am using a GCN as my encoder. Graph stats: - I have around 400 nodes in the graph per data point. - In the current graph, on average, a node...
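For reference, a single GCN propagation step in the Kipf & Welling formulation is H' = ReLU(D̂^{-1/2} Â D̂^{-1/2} H W) with Â = A + I (self-loops added). A numpy sketch on a toy 3-node graph (the graph, features, and weights are made up for illustration):

```python
import numpy as np

def gcn_layer(A, H, W):
    # Kipf & Welling GCN propagation: add self-loops, symmetrically
    # normalize the adjacency, then linearly transform and apply ReLU.
    A_hat = A + np.eye(len(A))
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Toy graph: 3 nodes, one edge (0-1), 2-dim features, identity weights.
A = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 0.]])
H = np.eye(3, 2)
W = np.eye(2)
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 2)
```

With ~400 nodes per data point this dense formulation is still cheap; sparse adjacency matrices only become necessary at much larger graph sizes.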