ehedlin
```python
import igl
import numpy as np

ray_origins = np.load("ray_origins.npy")
ray_directions = np.load("ray_directions.npy")
mesh_verts = np.load("mesh_verts.npy")
mesh_faces = np.load("mesh_faces.npy")

hits = igl.ray_mesh_intersect(ray_origins, ray_directions, mesh_verts, mesh_faces)
print(len(hits))
```

This prints 3048...
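When the conda build and the source build disagree on hit counts, it can help to spot-check individual rays against a pure-NumPy reference. Below is a minimal sketch of the standard Möller–Trumbore ray-triangle test; the function name and the toy geometry are mine, not part of libigl, and this checks one ray against one triangle rather than a whole mesh.

```python
import numpy as np

def ray_triangle_intersect(orig, direction, v0, v1, v2, eps=1e-8):
    """Möller–Trumbore: return the hit distance t along `direction`,
    or None if the ray misses the triangle (v0, v1, v2)."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:            # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv_det    # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det   # distance along the ray
    return t if t > eps else None

# Toy check: a ray pointing straight down hits the unit triangle at z=0.
orig = np.array([0.2, 0.2, 1.0])
direction = np.array([0.0, 0.0, -1.0])
v0 = np.array([0.0, 0.0, 0.0])
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
print(ray_triangle_intersect(orig, direction, v0, v1, v2))  # -> 1.0
```

Looping this over `mesh_faces` for a handful of rays gives an independent count to compare against whichever `igl` build is suspect.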
> > Are you compiling the most recent version? Or using the one on conda?
>
> conda on ubuntu.

Same here. And when visualizing the points it shows that...
I am taking the results passed through the evaluation script and the bounding boxes from `lib/data_utils/threedpw_utils.py`. I want to keep the estimates from the evaluation script exactly as is...
I had the same problem with Python 3.7. Downgrading to 3.6 did the trick.
I got similar results, although I tried both QKVAttentionLegacy and XformersAttention, and suspect that the [issues raised here](https://github.com/sony/ctm/issues/5#issue-2234909098) could be hurting results. Which form of attention did you use?
I also realised that the inference code doesn't seem to use rejection sampling as described in the paper. [This line](https://github.com/sony/ctm/blob/87cca3be93df8dc1e6797175bf4f404dfb536f28/code/cmd/ImageNet64/FID/sampling.sh#L69C21-L69C37) seems to show that rejection sampling was run at a...
I ran the [classifier rejection](https://github.com/sony/ctm/blob/87cca3be93df8dc1e6797175bf4f404dfb536f28/code/classifier_rejection.py) code, but it seemed to produce similar results, so I emailed the authors to ask about the difference in performance.
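For context, classifier-based rejection sampling in its general form keeps each generated sample with a probability derived from a classifier's confidence, so low-confidence samples are discarded. A toy sketch of that idea (the function name and setup here are mine, not the repo's):

```python
import numpy as np

def classifier_rejection(samples, accept_prob, rng):
    """Keep each sample with probability accept_prob(sample).

    accept_prob is assumed to map a sample to a value in [0, 1],
    e.g. a rescaled classifier confidence for the target class.
    """
    return [s for s in samples if rng.random() < accept_prob(s)]

rng = np.random.default_rng(0)
samples = list(range(10))

# Sanity checks: accept everything vs. reject everything.
print(len(classifier_rejection(samples, lambda s: 1.0, rng)))  # -> 10
print(len(classifier_rejection(samples, lambda s: 0.0, rng)))  # -> 0
```

The repo's `classifier_rejection.py` presumably does something more involved (e.g. tuning the acceptance threshold against a sampling budget), so this is only meant to convey the mechanism being discussed.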
I was able to get the published performance by setting `--eval_num_samples=50000` when generating samples (the default is 6400). I'm assuming that's what was intended, as that number seems to be...
No, I just used `code/image_sample.py` and `code/evaluations/evaluator.py`.
I have fixed the installs, but it seems Stable Diffusion isn't hosted publicly on Hugging Face anymore. It looks like the model will need to be installed locally and the...