Michael Allman

35 comments by Michael Allman

Well now this is interesting... https://github.com/apple/coremltools/blob/6.0b1/coremltools/converters/mil/frontend/torch/test/test_torch_ops.py#L4229-L4286 🤔

> Well now this is interesting...
>
> https://github.com/apple/coremltools/blob/6.0b1/coremltools/converters/mil/frontend/torch/test/test_torch_ops.py#L4229-L4286
>
> 🤔

So apparently this is for some specific padding ops, and this test only passes for pytorch < 1.12....

I tried rebuilding the image and I got the same error running it.

Ooooo... the GRDB 6 WIP... how tantalizing! Is there a roadmap? As a heavy-duty, industrial-strength consumer of GRDB perhaps I can offer feedback based on my experience?

> [X] XGBoost4j-spark-GPU dose not support multi-worker training.

Since this is checked off, does this mean xgboost4j-spark-gpu supports multi-worker training? I have not been able to get anything other than...

Hi @wbo4958. I think there's some ambiguity in my question. Let me clarify. What I want to do is run distributed training with a single worker per executor, like we...

Hi @wbo4958. If I do that, all of the xgboost tasks run on a single executor, but no progress is made. I don't get an error either. It just waits.
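
For concreteness, this is roughly the setup I have in mind: one xgboost task per executor, each executor with its own GPU. The class and parameter names below come from xgboost4j-spark; the instance counts, resource amounts, and column names are purely illustrative, and I have not verified that this exact configuration works:

```scala
// Sketch of a one-worker-per-executor GPU setup (illustrative values, unverified).
//
// Spark resource settings (spark-submit or SparkConf), so each executor gets
// exactly one GPU and runs exactly one xgboost task at a time:
//   spark.executor.instances=4
//   spark.executor.cores=4
//   spark.task.cpus=4
//   spark.executor.resource.gpu.amount=1
//   spark.task.resource.gpu.amount=1

import ml.dmlc.xgboost4j.scala.spark.XGBoostClassifier

val classifier = new XGBoostClassifier(Map[String, Any](
  "tree_method" -> "gpu_hist",       // train on the GPU
  "num_workers" -> 4,                // one distributed worker per executor
  "num_round"   -> 100,
  "objective"   -> "binary:logistic"
))
  .setLabelCol("label")
  .setFeaturesCol("features")

// val model = classifier.fit(trainingDF)   // trainingDF: a prepared DataFrame
```

The intent is that setting spark.task.cpus equal to spark.executor.cores forces one task per executor, and num_workers matches the executor count, so training stays distributed without piling every xgboost task onto a single executor.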

@wbo4958 I'm sorry, but I don't know when I'll return to this effort. Basically, the question is whether one can run distributed xgboost with GPUs without sacrificing task parallelism in...

I've spent some time debugging this. I haven't gotten it to work, but I do see what looks to be at least part of the problem. If you don't set...