Chris

Results: 23 comments of Chris

(Update: still on the parsing screen; it's now been 24 minutes.)

> This is a known issue, and I am working on fixing this. My parser (actually mostly the tokenizer) is really slow. This is also a problem in other obfuscators...

> This is a known issue, and I am working on fixing this. My parser (actually mostly the tokenizer) is really slow. Also, do you know when you will push...
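For anyone who wants to confirm where the time goes on large files, a quick profile makes the bottleneck obvious. This is only a rough Python sketch using the stdlib tokenizer on generated input as a stand-in; the obfuscator's own tokenizer/parser would be profiled the same way with cProfile:

```python
import cProfile
import io
import time
import tokenize

# Stand-in workload: the stdlib tokenizer over a large generated source.
# The real check would profile the obfuscator's own tokenizer/parser instead.
source = "x = 1 + 2\n" * 100_000

def tokenize_all(text: str) -> int:
    """Tokenize the whole input and return the token count."""
    return sum(1 for _ in tokenize.generate_tokens(io.StringIO(text).readline))

start = time.perf_counter()
count = tokenize_all(source)
print(f"{count} tokens in {time.perf_counter() - start:.2f}s")

# The profile shows which functions the time is actually spent in.
cProfile.run("tokenize_all(source)", sort="cumulative")
```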

> I am currently not able to work on Prometheus a lot, because I am in an internship where I have to work full time, but I try to address...

> Or to split the entire file into multiple chunks that require each other

That could work, but it's kinda cheap; rewriting the tokenizer and parser is what's really needed. Thanks!
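For what it's worth, the suggestion was roughly this: instead of feeding the whole file to the slow parser at once, write it out as smaller chunks where each chunk loads the next one. The original idea was for the obfuscator's target language; below is only a rough Python illustration of the concept, with `big_script.py`, the chunk naming, and the naive line-based split all being assumptions:

```python
from pathlib import Path

def split_into_chunks(src_path: str, lines_per_chunk: int = 5000) -> None:
    """Illustrative only: split a big source file into chunk_0.py, chunk_1.py, ...
    where each chunk ends by loading the next one, so a slow tokenizer/parser
    only ever has to handle one small file at a time. Assumes every chunk
    boundary falls between complete top-level statements."""
    lines = Path(src_path).read_text().splitlines(keepends=True)
    chunks = [lines[i:i + lines_per_chunk] for i in range(0, len(lines), lines_per_chunk)]
    for n, chunk in enumerate(chunks):
        body = "".join(chunk)
        if n + 1 < len(chunks):
            # Chain the chunks: each one executes the next in the same namespace.
            body += f'\nexec(open("chunk_{n + 1}.py").read(), globals())\n'
        Path(f"chunk_{n}.py").write_text(body)

split_into_chunks("big_script.py")  # hypothetical input file
```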

Odd. Sadly, levno doesn't work on this that much, meaning it's going to be a while before a patch is released.

![image](https://user-images.githubusercontent.com/111275373/195800967-83b1dc98-003e-4cae-bb72-ba5125adcbb7.png)

This is also something else

> Hello. This error may be caused by the use of two different versions of Python. For example, obfuscating the file with 3.9 and running it with 3.10 can...
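If mixing versions really is the cause, the obfuscated file could at least fail with a readable message instead of a cryptic error. A minimal sketch of such a guard, assuming the obfuscator stamps the version it ran under into the output (the `OBFUSCATED_WITH` constant here is made up for illustration):

```python
import sys

# Hypothetical: the version the file was obfuscated with, stamped into the
# output at obfuscation time.
OBFUSCATED_WITH = (3, 10)

if sys.version_info[:2] != OBFUSCATED_WITH:
    raise SystemExit(
        f"This file was obfuscated with Python {OBFUSCATED_WITH[0]}.{OBFUSCATED_WITH[1]} "
        f"but is running on {sys.version_info[0]}.{sys.version_info[1]}; "
        "run it with the same version (or re-obfuscate with this one)."
    )
```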

**Update:** after some testing, obfuscating in 3.10 and running in a 3.x version other than 3.10 will cause errors, **but** for some reason this still sometimes happens...
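A likely reason the minor version matters (assuming the obfuscated payload embeds compiled/marshalled code objects, which I haven't verified for this tool): CPython's bytecode format is tied to the minor version, so 3.9, 3.10, and 3.11 each report a different magic number and won't load each other's compiled code. Quick check:

```python
import importlib.util
import sys

# Each CPython minor version has its own bytecode magic number; compiled code
# objects (compile/marshal output) are not portable across them.
print(sys.version.split()[0], importlib.util.MAGIC_NUMBER.hex())
# Run this under 3.9 and 3.10: the two magic numbers differ, which is why a
# payload built on one version can blow up on the other.
```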