Ubuntu - Not Discovering Dumps
Love the software. Been following it on Twitter for some time.
I am having issues getting my own instance up and running. Dependencies are installed, and the script starts fine, but it's not discovering dumps.
I am watching @dumpmon, and I can see my instance "Checking" those same links but getting no hits. I verified that they were over the email threshold from the config.
I think I had the same problem. Most of the leaks come from Pastebin, and I found out it was returning a "Please refresh the page to continue..." message instead of the raw content.
I'm guessing the problem was using the same requests.Session() for all the sites, since the Slexy headers (with a Referer field) were being sent to the other sites too.
So, an easy solution is creating a new connection for every request.
But I'd recommend creating an instance of requests.Session() when each site is initialized, so each one has its own, and passing that instance to the helper.download() function instead of using a global one in helper.
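Something like this is what I mean (just a sketch, not dumpmon's actual code; the `PasteSite` class and `download` method here are hypothetical stand-ins for the real site classes and helper):

```python
import requests

class PasteSite:
    """Hypothetical site wrapper: each site owns its own Session,
    so site-specific headers don't leak into other sites' requests."""

    def __init__(self, name, extra_headers=None):
        self.name = name
        # One Session per site instead of a shared global one in helper
        self.session = requests.Session()
        if extra_headers:
            self.session.headers.update(extra_headers)

    def download(self, url):
        # Stand-in for helper.download(), now taking this site's session
        r = self.session.get(url, timeout=10)
        r.raise_for_status()
        return r.text

# Slexy needs a Referer header; with per-site sessions it no longer
# bleeds into the Pastebin requests.
slexy = PasteSite("slexy", extra_headers={"Referer": "https://slexy.org/"})
pastebin = PasteSite("pastebin")
```

That way each site's headers stay isolated, and Pastebin stops seeing Slexy's Referer.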
If I'm wrong, please correct me, Jordan.