djm34

Results: 94 comments by djm34

Just checked: if it has only 4 GB of VRAM, the answer is no, as MTP requires a minimum of 4.4 GB... if it has more than 4 GB, then it should work...
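To see whether a given card clears that threshold, one option is to query the driver directly. A minimal sketch, assuming `nvidia-smi` (shipped with the NVIDIA driver) is on the PATH:

```shell
# List each GPU's name and total VRAM in MiB.
# MTP reportedly needs ~4.4 GB, i.e. roughly 4500 MiB free on the card.
nvidia-smi --query-gpu=name,memory.total --format=csv
```

Note that total VRAM is an upper bound; the desktop compositor and other processes also consume some, so a nominal 4 GB card has even less actually available.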

hmm... what do you have in mind exactly? lol

Well, besides taking the memory from one GPU and soldering it onto the board of the other GPU, assuming it would even recognize the extra memory, I don't think much could...

If running 2 instances works, I guess that is the way to go... These possible limitations are in part due to the CPU's limited number of threads and memory (however, 16 GB...

Maybe it is something to try. (I can't test it myself as I don't have such a rig.) It would be interesting to check with the latest release of the...

ccminer requires around 4.5-4.7 GB of virtual memory per GPU (this is the way CUDA allocates VRAM). If this happens with only one card, that is a little strange... If I use...

I will look into something like this; I indeed saw something a little weird in the number of threads that were created (sorry for the delay in replying, I don't...

Actually, it seems I have found a way to reduce CPU usage. I will try to commit the change soon. If you can compile ccminer, you can try it: first...

ccminer needs to be recompiled with the latest CUDA version and an sm version supporting the 30x series
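For context, a minimal sketch of what that involves, assuming CUDA 11.1 or later (the first toolkit release that can target the Ampere `sm_86` architecture used by the RTX 30x0 cards); the exact build variable to edit differs between ccminer forks, and `some_kernel.cu` below is just a hypothetical placeholder source file:

```shell
# Check the installed CUDA toolkit version; sm_86 (RTX 30x0) needs >= 11.1.
nvcc --version

# The -gencode pair to add to the ccminer nvcc flags for Ampere cards is:
#   -gencode arch=compute_86,code=sm_86
# e.g. compiling a single CUDA source by hand with that target:
nvcc -O3 -gencode arch=compute_86,code=sm_86 -c some_kernel.cu -o some_kernel.o
```

Keeping the older `-gencode` pairs alongside the new one lets the same binary still run on pre-Ampere cards.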

Will try over the weekend, but considering the current availability of the 30x cards, it won't be optimized at all...