Obviously wrong answer by stablelm-tuned-alpha.

This is an early Alpha training checkpoint. It is going to be rough around the edges until training finishes.
Looking forward to a better one^_^
You didn't need to go this far; it fails on questions as simple as "4+8=?", answering "10" and "26".
@FerMG This example is provided in the ReadMe, but I didn't get the result as claimed.
Language models give random outputs. To get the exact output shown in the ReadMe, you would have to know which random seed led to that output and then set the same seed.
Saying you get a different output when you ask the same question is like saying you get a different Minecraft world than the one in a trailer video. The outputs are random even with identical input, because generation starts from a random seed.
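If you want to see this for yourself, here is a minimal sketch of seeded sampling with the transformers library. The checkpoint name and the `<|USER|>`/`<|ASSISTANT|>` prompt format follow the model card, but the seed value and generation settings are arbitrary assumptions, not the ones behind the ReadMe example.

```python
# Rough sketch: reproducible sampling via a fixed seed.
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

set_seed(42)  # fixes the Python/NumPy/PyTorch RNGs so sampling repeats across runs

name = "stabilityai/stablelm-tuned-alpha-7b"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# Prompt format from the model card: a user turn, then the assistant token.
prompt = "<|USER|>4+8=?<|ASSISTANT|>"
inputs = tokenizer(prompt, return_tensors="pt")

# do_sample=True draws each token at random from the model's distribution;
# with the same seed (and the same library versions/hardware) the draws repeat.
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Alternatively, passing `do_sample=False` makes `generate` use greedy decoding, which is deterministic without any seed: the same prompt then always yields the same answer, right or wrong.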
Thanks for explaining. It's kind of weird to me that when I ask a robot the same question, I get different answers, and sometimes it's right and sometimes it's wrong. By the way, I tested ChatGPT, and its answers are consistent.
ChatGPT randomizes too. There's even a Regenerate button on ChatGPT's site to get a different random answer. (It does tend to be right more often, of course, but that comes back to the "this is just an Alpha" point.)