The initial token is always empty.
Hello,
I noticed something when trying the chat with Bob: I always get the first token as empty.
1 -> ''
4103 -> ' Trans'
924 -> 'cript'
310 -> ' of'
263 -> ' a'
7928 -> ' dialog'
So the result is this:
()Transcript of a dialog, where the User...
There's this little space at the beginning of the text. Maybe this alone can significantly impact the quality of the output, which is why I decided to post this issue.
I'm on Windows 10 using WSL to emulate the Linux environment (the Windows main.exe is not as good as the Linux main at the moment).
I'm using a model file that is the result of the following steps:
- I started with a llama-7b-4bit.pt file
- I converted it with the GPTQ-to-ggml converter (convert-gptq-to-ggml.py)
- I converted it again to the new ggml format with the script from https://github.com/ggerganov/llama.cpp/issues/324#issuecomment-1476227818
Here's the shell script (7B_CHAT_Bob.sh):
#!/bin/bash
dos2unix 7B_CHAT_Bob.sh
./main -m ./models/llama7b-4bit-GPTQ.bin -t 14 -n 256 --repeat_penalty 1.0 --color -i -r "User:" -f prompts/chat-with-bob.txt
Everything is up to date on this repository, as I run a git pull every time I launch PowerShell.
Please review the issue reporting guidelines in #239 and provide a better description of the issue you are observing.
I added more details based on your guidelines; I hope that'll help.
The token with ID 1 is a special token called BOS (beginning of sequence) and is one of the two tokens required in the token vocabulary. The second is EOS (end of sequence), with ID 2.
That is to say, this is normal behaviour.
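For illustration, here is a minimal, self-contained sketch of how these two special IDs behave in a token stream. It is not actual llama.cpp code; the IDs and the token stream are hard-coded per the convention above:

#include <cstdio>
#include <vector>

// SentencePiece-style special token IDs used by the LLaMA vocabulary.
enum special_token : int {
    TOKEN_BOS = 1, // beginning of sequence, prepended at tokenization time
    TOKEN_EOS = 2, // end of sequence, emitted by the model when it is done
};

int main() {
    // A toy token stream like the one dumped above for the chat prompt:
    std::vector<int> tokens = { TOKEN_BOS, 4103, 924, 310, 263, 7928 };

    for (int id : tokens) {
        if (id == TOKEN_BOS) {
            continue; // BOS maps to the empty string, so there is nothing to print
        }
        printf("%d ", id);
    }
    printf("\n"); // -> 4103 924 310 263 7928
    return 0;
}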
@PriNova I see, thanks for your answer, I learned something today! But I can still see a space at the beginning of the text; I don't think I had that before, and it's a bit ugly to look at... but if it doesn't change the output, I'm OK with that.
You can make token 1 go away by commenting out this line in llama_tokenize() in utils.cpp:
if (bos) {
    // output.push_back(1);
}
It's probably more correct with it there, but it also doesn't seem to break anything if removed (at least if only submitting one whole document per session).
As for the leading space, look at your initial tokens above:
4103 -> ' Trans'
924 -> 'cript'
The space is inside the first token, so it is being printed. Technically, if the first token starts with a space, the output code could skip over it when printing.
The leading space is intentional and a result of https://github.com/ggerganov/llama.cpp/blob/d5850c53ca179b9674b98f35d359763416a3cc11/main.cpp#L232-L233
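For reference, here is a small standalone sketch of what those linked lines do, assuming they are the space-prepending lines at that commit (paraphrased, not verbatim):

#include <cstdio>
#include <string>

int main() {
    std::string prompt = "Transcript of a dialog, where the User...";

    // Prepend a single space so tokenization matches the original LLaMA
    // tokenizer behaviour; this is why the first word token comes out as
    // ' Trans' rather than 'Trans'.
    prompt.insert(0, 1, ' ');

    printf("'%s'\n", prompt.c_str()); // -> ' Transcript of a dialog, ...'
    return 0;
}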
Not sure if we should just not print the first character (the space) or not.
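A hypothetical sketch of that option (print_pieces and the hard-coded pieces are illustrative, not llama.cpp API): strip the leading space from the first non-empty decoded piece before printing:

#include <cstdio>
#include <string>
#include <vector>

// Drop the artificial leading space of the first non-empty piece.
void print_pieces(const std::vector<std::string> & pieces) {
    bool stripped = false;
    for (const std::string & piece : pieces) {
        const char * text = piece.c_str();
        if (!stripped && !piece.empty()) {
            if (piece[0] == ' ') {
                text++; // skip the space that was inserted before tokenization
            }
            stripped = true;
        }
        printf("%s", text);
    }
    printf("\n");
}

int main() {
    // The decoded pieces from the dump above (BOS maps to the empty string):
    print_pieces({ "", " Trans", "cript", " of", " a", " dialog" });
    // prints: Transcript of a dialog
    return 0;
}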
This issue was closed because it has been inactive for 14 days since being marked as stale.