vec2text

utilities for decoding deep representations (like sentence embeddings) back to text

Results 32 vec2text issues

Hi Jack, thanks for the great work and for sharing the code! I am trying to reproduce the results from the paper and want to confirm that I am doing it correctly....

question

Very cool project, and I've been having a fun time exploring the repo. I'd like to run some additional examples, but I'm having difficulty reproducing any results using your...

Hello! Thanks for your great paper and for sharing the code. I have a question about the languages it supports. Does it apply to other languages, such as Chinese? Thanks!

You say: "Currently we only support models for inverting OpenAI text-embedding-ada-002 embeddings but are hoping to add more soon. (We can provide the GTR inverters used in the paper upon...

> I ran the following training script: `python run.py --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --max_seq_length 128 --model_name_or_path t5-small --dataset_name msmarco --embedder_model_name sentence-transformers/all-MiniLM-L6-v2 --num_repeat_tokens 16 --embedder_no_grad True --num_train_epochs 1 --max_eval_samples 500 --eval_steps...`

bug
documentation

I have been trying to use your example code, and I can get `vec2text` to load with the new openai library (`>0.28`) but not with the old one. However, the `get_embeddings_openai`...

Hello, I would like to look at the hypothesis embeddings, for example, to see how the cosine similarity changes per iteration. It looks to me like `invert_embeddings()` only returns the...
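Even without changes to `invert_embeddings()`, the per-iteration trend the issue asks about can be recomputed externally: re-embed each hypothesis and compare it to the target. A minimal sketch, assuming you already have the target vector and a list of per-step hypothesis embeddings (both names are hypothetical):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_trajectory(target, hypothesis_embeddings):
    # One similarity score per correction step, so you can see
    # how each hypothesis approaches the target embedding.
    return [cosine_similarity(target, h) for h in hypothesis_embeddings]
```

With a well-behaved corrector, the trajectory should be roughly increasing across iterations.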

enhancement

Hi Jack, thanks for the great work. Do you plan to release the inversion models that were trained to invert LLaMA-2 7B embeddings? It would be very helpful for...

enhancement

Hi, thank you for presenting your research. I have a question regarding the `embedding_transform` in `inversion.py`. As I understand it, this function corresponds to the MLP model described in your...
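For readers following this thread: the shape of the transformation at issue is a single fixed-size embedding being expanded into a short sequence of pseudo-token embeddings that a decoder can attend over. The sketch below is purely a shape illustration with random weights, not the trained `embedding_transform` from `inversion.py`; the parameter names and dimensions are assumptions.

```python
import numpy as np

def embedding_transform(e, num_repeat_tokens=16, hidden_dim=512, seed=0):
    # Sketch: project one embedding e of shape (d,) through a small MLP
    # into num_repeat_tokens pseudo-token embeddings of size hidden_dim.
    # Weights are random here purely to illustrate the shapes involved.
    rng = np.random.default_rng(seed)
    d = e.shape[0]
    W1 = rng.standard_normal((d, hidden_dim)) * 0.02
    W2 = rng.standard_normal((hidden_dim, num_repeat_tokens * hidden_dim)) * 0.02
    h = np.maximum(0.0, e @ W1)   # ReLU hidden layer
    out = h @ W2                  # flat vector of length num_repeat_tokens * hidden_dim
    return out.reshape(num_repeat_tokens, hidden_dim)
```

The key point is the final reshape: one vector in, a `(num_repeat_tokens, hidden_dim)` sequence out, which is then consumed like ordinary token embeddings.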

documentation

I didn't find the example all that clear, but I have a guess at what's happening. It might be worth spelling out something to the effect of trying to map text to...

documentation