its5Q
@kartikitrak, replace `tensorflow.contrib.seq2seq` with `tensorflow.contrib.legacy_seq2seq`. Works for me.
No, LHR cards don't limit general hashing performance; they only throttle specific mining algorithms. For example, hashcat - a popular hash cracking tool - has the same performance on LHR and non-LHR...
I've done some digging around, and I think I've found the cause of this problem: https://github.com/ppy/osu/blob/9ac322d337e348f63d824e5995942c2ee367dc2a/osu.Game/Screens/OnlinePlay/Components/StarRatingRangeDisplay.cs#L95-L100 There, it's trying to get beatmaps from the room's playlist, but as said in the...
Hey, just want to confirm: I have exactly the same issue with my Llama model. Inference on single samples works fine, but it produces garbage on batches of multiple samples. I'm...
Awesome, I'll test it as soon as I get to it.
Tried it myself, and I'm getting the same weird output as before. One thing I've noticed is that the weird output only comes from the samples that are padded,...
> Also @its5Q you need to use padding_side = "left" or else the results will be wrong

Oh yeah, that was the problem, thanks. Now batched inference works as expected for...
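To illustrate why the padding side matters for a decoder-only model like Llama, here's a minimal, framework-free sketch (the `left_pad` helper and the `pad_id` value are illustrative, not part of the transformers API): generation continues from the *last* position of each row, so the padding has to go on the left to keep every sequence's real tokens adjacent to the generation point. With right padding, shorter samples end in pad tokens and the model generates from garbage.

```python
def left_pad(batch, pad_id):
    """Left-pad a batch of token-id sequences to equal length.

    Decoder-only models generate from the last position in each row,
    so padding on the left keeps the real tokens at the end, where
    generation continues from them.
    """
    max_len = max(len(seq) for seq in batch)
    return [[pad_id] * (max_len - len(seq)) + seq for seq in batch]

# Two sequences of different lengths; the shorter one gets pad tokens
# prepended rather than appended.
batch = [[101, 7592], [101, 7592, 2088, 999]]
print(left_pad(batch, pad_id=0))
# [[0, 0, 101, 7592], [101, 7592, 2088, 999]]
```

With Hugging Face tokenizers, setting `padding_side = "left"` on the tokenizer is what achieves this layout for you.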
I've been bored lately, delving deeper in memory forensics, and decided to make a [notepad plugin](https://github.com/its5Q/volatility3-plugins/blob/main/plugins/windows/notepad.py) for volatility3 myself. It doesn't parse any heap structures or anything fancy like that,...
> Yes please! We're always happy to review contributions! I can't say whether it'll get included, but at least if there's a PR people may find it.

If you could...
No, it is not. It is `CHECKPOINT_PATH`.