A. Leonard Nicusan

Results: 11 comments by A. Leonard Nicusan

I understand, thank you very much for your input! It's one of those projects where you're abusing every single piece of software, but it's almost doable. I'll try [tethex](https://github.com/martemyev/tethex) to...

The main advantage of the IB in our case was that we could avoid hexahedral meshing of very complex geometries; it's great that we can use a refined, simple...

On the CPU the while loop misses an index into the thread-local `@private` variable:

```julia
while s > 0
    begin
        var"##N#486" = length((KernelAbstractions.__workitems_iterspace)(__ctx__))
        begin
            #= /home/andreinicusan/.julia/packages/KernelAbstractions/DqITC/src/macros.jl:264 =#
            for var"##I#485" = ...
```

On the GPU I see:

```julia
[...]
bs = ((#= /home/andreinicusan/.julia/packages/KernelAbstractions/DqITC/src/KernelAbstractions.jl:127 =# (KernelAbstractions.groupsize)(__ctx__)))[1]
[...]
var"#177#s" = KernelAbstractions.bs ÷ 2
while s > 0
[...]
```

Why is `bs` the only...
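For context, the generated code above is performing the standard halving tree reduction (`s = bs ÷ 2`, then `s ÷= 2` each iteration). A minimal plain-Julia sketch of that pattern, with `partial` standing in for the per-group `@private` storage and a serial loop standing in for the per-work-item iteration (names and structure are illustrative, not the KernelAbstractions-generated code):

```julia
# Halving tree reduction over a group-sized buffer.
# Assumes `length(partial)` is a power of two, mirroring a typical group size.
function tree_reduce!(partial::AbstractVector{T}) where T
    bs = length(partial)        # stand-in for KernelAbstractions.groupsize
    s = bs ÷ 2
    while s > 0
        # On a device this loop is one step per work-item, separated by a
        # barrier; here it runs serially for illustration.
        for i in 1:s
            partial[i] += partial[i + s]
        end
        s ÷= 2
    end
    return partial[1]           # reduced value ends up in slot 1
end
```

With this shape, each work-item must index its own slot of the private buffer each iteration, which is the index the CPU expansion appears to drop.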

Thanks for including me as a contributor - of course, very happy with the relicensing!

Would using [ImplicitBVH.jl](https://github.com/StellaOrg/ImplicitBVH.jl) be useful here? The `accel` directory seems to implement something fairly close to it.

What will the low-level interface look like? Also, I’m concerned about maintaining the current performance levels as new features are added to KA. The 285% performance regression we experienced with...

What is the exact use case for this? Querying whether a backend supports a datatype - be it for atomics or normal use - means you're writing type-generic code anyway....

Thanks for `@`-ing me! In my mind there are two layers to expose from KA: 1. Backend capabilities for algorithm *writing*. 2. Device settings relevant to algorithm *running*. For 1.,...
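To make layer 1 concrete, here is a hypothetical trait-style sketch of compile-time-queryable backend capabilities. Every name below (`AbstractBackend`, `supports_atomic`, the backend structs, `reduce_strategy`) is illustrative, not the actual KernelAbstractions API:

```julia
# Hypothetical capability traits - illustrative names only, not the real KA API.
abstract type AbstractBackend end
struct HostBackend <: AbstractBackend end
struct DeviceBackend <: AbstractBackend end

# Default: a backend advertises no atomic support for a given element type.
supports_atomic(::AbstractBackend, ::Type) = false

# Backends opt in per element type via method definitions, so the query
# resolves at compile time through ordinary dispatch.
supports_atomic(::HostBackend, ::Type{<:Union{Int32,Int64,Float32,Float64}}) = true
supports_atomic(::DeviceBackend, ::Type{Float32}) = true

# A type-generic algorithm picks its strategy from the capability query.
reduce_strategy(b::AbstractBackend, ::Type{T}) where T =
    supports_atomic(b, T) ? :atomic_accumulate : :tree_reduction
```

The point of the trait approach is that algorithm *writers* branch on capabilities without ever naming a concrete backend, which keeps the code type-generic in exactly the sense discussed above.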

Will KA/KI still be the greatest common denominator of the GPU backends, or are you looking to introduce optional intrinsics? How will the groupreduce API do in terms of portability?