
macOS: Bringing Metal support

baileyheading opened this issue 1 year ago · 51 comments

Work is underway to get Metal working for this project. Keen to get help from anyone interested.

TODO:

  • [ ] add conv_transpose_1d for Metal (already has a CPU implementation)
  • [ ] modify the ggml_pad backend and add ggml_pad_ext for Metal
  • [ ] add pad_reflect_1d to Metal (already has a CPU implementation)
  • [ ] add unfold_1d to Metal (already has a CPU implementation)

Existing PRs from @balisujohn are a decent example to follow for the changes already made to CUDA: https://github.com/ggerganov/ggml/pulls/balisujohn

baileyheading avatar Jul 14 '24 09:07 baileyheading

That was fast! Before, it had a problem haha

gpt_vocab_init: loading vocab from '../models/tokenizer.json'
gpt_vocab_init: vocab size = 255
autoregressive_model_load: loading model from '../models/ggml-model.bin'
autoregressive_model_load: ggml tensor size    = 432 bytes
autoregressive_model_load: backend buffer size = 1889.29 MB
autoregressive_model_load: using Metal backend
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M3 Max
ggml_metal_init: picking default device: Apple M3 Max
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name:   Apple M3 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction support   = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory              = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 103079.22 MB
autoregressive_model_load: model size  =  1510.54 MB
ggml_metal_free: deallocating
diffusion_model_load: loading model from '../models/ggml-diffusion-model.bin'
diffusion_model_load: ggml tensor size    = 432 bytes
diffusion_model_load: backend buffer size = 689.28 MB
diffusion_model_load: using Metal backend
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M3 Max
ggml_metal_init: picking default device: Apple M3 Max
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name:   Apple M3 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction support   = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory              = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 103079.22 MB
diffusion_model_load: model size  =   688.49 MB
GGML_ASSERT: /Users/user_name/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:2216: ne00 % 4 == 0
GGML_ASSERT: /Users/user_name/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:2216: ne00 % 4 == 0

baileyheading avatar Jul 14 '24 10:07 baileyheading

Some lldb (debug) info

GGML_ASSERT: /Users/me/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:2216: ne00 % 4 == 0
GGML_ASSERT: /Users/me/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:2216: ne00 % 4 == 0
GGML_ASSERT: /Users/me/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:2216: ne00 % 4 == 0
GGML_ASSERT: /Users/me/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:2216: ne00 % 4 == 0
Process 38817 stopped
* thread #5, queue = 'ggml-metal', stop reason = signal SIGABRT
    frame #0: 0x000000018e276a60 libsystem_kernel.dylib`__pthread_kill + 8
libsystem_kernel.dylib`__pthread_kill:
->  0x18e276a60 <+8>:  b.lo   0x18e276a80               ; <+40>
    0x18e276a64 <+12>: pacibsp 
    0x18e276a68 <+16>: stp    x29, x30, [sp, #-0x10]!
    0x18e276a6c <+20>: mov    x29, sp
Target 0: (tortoise) stopped.
(lldb) tb
No breakpoints currently set.
(lldb) bt
* thread #5, queue = 'ggml-metal', stop reason = signal SIGABRT
  * frame #0: 0x000000018e276a60 libsystem_kernel.dylib`__pthread_kill + 8
    frame #1: 0x000000018e2aec20 libsystem_pthread.dylib`pthread_kill + 288
    frame #2: 0x000000018e1bba30 libsystem_c.dylib`abort + 180
    frame #3: 0x0000000100677378 libggml.dylib`__ggml_metal_graph_compute_block_invoke.cold.40 + 88
    frame #4: 0x000000010066287c libggml.dylib`__ggml_metal_graph_compute_block_invoke + 21672
    frame #5: 0x000000018e0fe428 libdispatch.dylib`_dispatch_client_callout2 + 20
    frame #6: 0x000000018e112850 libdispatch.dylib`_dispatch_apply_invoke3 + 336
    frame #7: 0x000000018e0fe3e8 libdispatch.dylib`_dispatch_client_callout + 20
    frame #8: 0x000000018e0ffc68 libdispatch.dylib`_dispatch_once_callout + 32
    frame #9: 0x000000018e1119b8 libdispatch.dylib`_dispatch_apply_redirect_invoke + 260
    frame #10: 0x000000018e0fe3e8 libdispatch.dylib`_dispatch_client_callout + 20
    frame #11: 0x000000018e110080 libdispatch.dylib`_dispatch_root_queue_drain + 864
    frame #12: 0x000000018e1106b8 libdispatch.dylib`_dispatch_worker_thread2 + 156
    frame #13: 0x000000018e2aafd0 libsystem_pthread.dylib`_pthread_wqthread + 228

baileyheading avatar Jul 14 '24 10:07 baileyheading

The GGML_ASSERT is from ggml-metal.m, I think. I don't know if that causes the abort or something else.

EDIT: yep, the asserts cause the abort. That means there is probably an issue somewhere, but I can at least proceed to the next stages for now by skipping over them.
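
For context on why an assert shows up as a SIGABRT: GGML_ASSERT logs the failing file, line, and expression and then aborts the process. A rough paraphrase of the macro (not an exact copy of ggml.h):

    // paraphrase of GGML_ASSERT from ggml.h of this era (not an exact copy):
    // a failed check prints the location and expression, then calls abort(),
    // which is the SIGABRT that lldb catches on the ggml-metal queue
    #define GGML_ASSERT(x) \
        do { \
            if (!(x)) { \
                fprintf(stderr, "GGML_ASSERT: %s:%d: %s\n", __FILE__, __LINE__, #x); \
                abort(); \
            } \
        } while (0)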

baileyheading avatar Jul 14 '24 11:07 baileyheading

@balisujohn so this is what you were referring to with the unsupported ggml operations? It's very fast, at least up until the problems.

ggml_metal_graph_compute_block_invoke: error: unsupported op 'UPSCALE'
GGML_ASSERT: /Users/me/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:918: !"unsupported op"

baileyheading avatar Jul 14 '24 11:07 baileyheading

It says it's not supported, but it looks supported to me!

image

EDIT: right, _EXT is different

baileyheading avatar Jul 14 '24 11:07 baileyheading

This is actually quite suspicious, ggml_upscale_ext should use GGML_OP_UPSCALE
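
For reference, both upscale entry points in src/ggml.c funnel into one impl that tags the node with GGML_OP_UPSCALE; roughly (paraphrased, not the exact fork code):

    // paraphrased from src/ggml.c: ggml_upscale and ggml_upscale_ext share
    // this impl, so the node's op field should always be GGML_OP_UPSCALE
    static struct ggml_tensor * ggml_upscale_impl(
            struct ggml_context * ctx, struct ggml_tensor * a,
            int ne0, int ne1, int ne2, int ne3) {
        struct ggml_tensor * result = ggml_new_tensor_4d(ctx, a->type, ne0, ne1, ne2, ne3);

        result->op     = GGML_OP_UPSCALE;
        result->src[0] = a;

        return result;
    }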

balisujohn avatar Jul 15 '24 06:07 balisujohn

> This is actually quite suspicious, ggml_upscale_ext should use GGML_OP_UPSCALE

any thoughts on where to look?

baileyheading avatar Jul 15 '24 07:07 baileyheading

Actually! I'm comparing the CUDA code.

CUDA:

        case GGML_OP_UPSCALE:
            ggml_cuda_op_upscale(ctx, dst);
            break;

METAL:

image

METAL checks for support with a function that returns a bool, but I already tried setting it to true, with no luck.

EDIT: METAL actually has this:

image
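
The check in question is the big switch in ggml-metal.m, roughly of this shape (paraphrased; the exact cases vary by version):

    // rough shape of the support check in ggml-metal.m (paraphrased): the
    // scheduler only offloads an op to Metal if this returns true for it
    static bool ggml_metal_supports_op(const struct ggml_metal_context * ctx,
                                       const struct ggml_tensor * op) {
        switch (op->op) {
            // ...
            case GGML_OP_UPSCALE:
            case GGML_OP_PAD:
                return true;
            // ...
            default:
                return false;
        }
    }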

baileyheading avatar Jul 15 '24 07:07 baileyheading

I believe that PAD is also supported

ggml_metal_graph_compute_block_invoke: error: unsupported op 'PAD'
GGML_ASSERT: /Users/baileyheading/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:919: !"unsupported op"
ggml_metal_graph_compute_block_invoke: error: unsupported op 'UPSCALE'
GGML_ASSERT: /Users/baileyheading/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:919: !"unsupported op"
ggml_metal_graph_compute_block_invoke: error: unsupported op 'PAD'
GGML_ASSERT: /Users/baileyheading/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:919: !"unsupported op"

Update: I'm removing some of the GGML_ASSERTs to see whether it's just the detection of support that is broken

baileyheading avatar Jul 15 '24 07:07 baileyheading

It seems to me like these numbers are somehow wrong for the Metal implementation

        case GGML_OP_UPSCALE:
            return true;
        case 49:
            return true;

I did some printouts, and apparently the upscale op it wants is 49, which doesn't seem to be what GGML_OP_UPSCALE is defined as. I added 49 for debugging, but I'm going to swap that op out for it to try to get past the error.

Update: Yeah, it got past that, but then hit an error for a different op that also exists, so I figure there's something wrong with these numbers somewhere.

baileyheading avatar Jul 15 '24 07:07 baileyheading

Possibly I made a mistake adding the other new ops, though I'm not sure why it would work in CUDA and not Metal.

balisujohn avatar Jul 15 '24 08:07 balisujohn

@balisujohn they're off by two, which seems consistent with this kind of thing:

dst->op in default case: 49 (what was sent from main.cpp, presumably)
dst->op in default case: 51 (Metal's definition of the number)
ggml_metal_graph_compute_block_invoke: error: unsupported op 'UPSCALE'

but if they're only defined in one place, I don't understand how this would happen

baileyheading avatar Jul 15 '24 08:07 baileyheading

I numbered them. What confuses me is that Metal believes 51 is upscale, but the incoming op says it is 49 and it has the name "UPSCALE". Where are they defined differently?

image

baileyheading avatar Jul 15 '24 08:07 baileyheading

These are out of order with the op enum, possibly leading, at a minimum, to the wrong op names being printed. All of the things I linked should reference the same ops, in the same order:

https://github.com/balisujohn/ggml/blob/e9ac38f82c0956cf6f4e86206d4503095c25d47c/src/ggml.c#L2701
https://github.com/balisujohn/ggml/blob/e9ac38f82c0956cf6f4e86206d4503095c25d47c/include/ggml/ggml.h#L424
https://github.com/balisujohn/ggml/blob/e9ac38f82c0956cf6f4e86206d4503095c25d47c/src/ggml.c#L2613

image
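
In other words, inserting new entries mid-enum shifts every value after them, and any table indexed by the enum that wasn't updated in lockstep now disagrees. A minimal illustration with hypothetical values (not the fork's actual numbering):

    // hypothetical illustration of the drift, not the fork's actual numbering
    // ggml.h: two new ops inserted before GGML_OP_UPSCALE shift its value
    enum ggml_op {
        // ...
        GGML_OP_PAD,            // say, 48
        GGML_OP_PAD_REFLECT_1D, // new -> 49
        GGML_OP_UNFOLD_1D,      // new -> 50
        GGML_OP_UPSCALE,        // was 49, now 51
        // ...
    };

    // ggml.c: if the name table was not updated it is now two entries short,
    // and GGML_OP_NAME[49] still reads "UPSCALE" -- which is why a node whose
    // op is really one of the new ops prints as 'UPSCALE' in the Metal error
    static const char * GGML_OP_NAME[] = {
        // ...
        "PAD",
        "UPSCALE", // stale: the two new names are missing before this entry
        // ...
    };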

balisujohn avatar Jul 15 '24 08:07 balisujohn

(I think the op failing on Metal actually isn't upscale.)

balisujohn avatar Jul 15 '24 08:07 balisujohn

so what would need to be reordered to see what is actually failing?

baileyheading avatar Jul 15 '24 08:07 baileyheading

The other two things I linked need entries added and reordered so they have the same number of elements as https://github.com/balisujohn/ggml/blob/e9ac38f82c0956cf6f4e86206d4503095c25d47c/include/ggml/ggml.h#L424, in the same order. A very silly mistake on my end to not have these matching already.
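
A compile-time guard of the kind ggml.c already keeps near these tables would catch the mismatch at build time; a minimal sketch (the table is declared without an explicit length so the initializer determines its size):

    // minimal sketch of a sync guard: if the enum grows but the table does
    // not, the build fails instead of printing wrong op names at runtime
    static const char * GGML_OP_NAME[] = { /* ... one entry per op ... */ };

    static_assert(sizeof(GGML_OP_NAME)/sizeof(GGML_OP_NAME[0]) == GGML_OP_COUNT,
                  "GGML_OP_NAME out of sync with enum ggml_op");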

balisujohn avatar Jul 15 '24 08:07 balisujohn

Should be adding these two, I think:

image

@balisujohn do they look familiar?

baileyheading avatar Jul 15 '24 08:07 baileyheading

yeah that's right, assuming the order is correct those are the two new ops (yet to be upstreamed to ggml proper)

balisujohn avatar Jul 15 '24 08:07 balisujohn

> yeah that's right, assuming the order is correct those are the two new ops (yet to be upstreamed to ggml proper)

And I've confirmed they are not part of the Metal code, of course, so these are the ones I need to add. Which other ones did you add?

baileyheading avatar Jul 15 '24 08:07 baileyheading

I added a CUDA kernel for conv_transpose_1d; I don't know if there's a Metal implementation, but there was an existing CPU implementation. I modified the ggml_pad backend and added ggml_pad_ext, so the Metal kernel may need to be modified, assuming there is an existing one. I also modified the ggml_get_rows CUDA kernel to fix a bug; it's unlikely the same bug exists on Metal.

balisujohn avatar Jul 15 '24 08:07 balisujohn

https://github.com/ggerganov/ggml/pulls/balisujohn

Useful for reference, though missing the ggml_get_rows change. The branch tortoise is all of these PRs squished together, plus the ggml_get_rows fix.

balisujohn avatar Jul 15 '24 08:07 balisujohn

I've started a TODO list at the top of the issue. Is there a better part of the repo for such a thing?

baileyheading avatar Jul 15 '24 09:07 baileyheading

I think the top of the issue is fine. Here's a link to join the Discord; a while back someone in the Discord expressed interest in Metal, so they might be willing to help: https://discord.gg/q9zufuhc

balisujohn avatar Jul 15 '24 09:07 balisujohn

Conversion without understanding is an interesting task haha. Technically, I can run this through on CPU to get expected outputs for different inputs, then translate the code as best I can and test the ops one by one. Have you got a more refined approach you could recommend, @balisujohn, given you've just done this for CUDA?

edit: I did find the tests finally! I guess that's something

baileyheading avatar Jul 15 '24 11:07 baileyheading

This is hardly going to get the desired result, but I do at least know where to make the modifications:

                // temporary stubs: log each unimplemented op and fall through
                // without computing dst, just to let the graph run end-to-end
                case GGML_OP_PAD_REFLECT_1D:
                {
                    fprintf(stderr, "%s", "GGML_OP_PAD_REFLECT_1D\n");
                }
                break;
                case GGML_OP_CONV_TRANSPOSE_1D:
                {
                    fprintf(stderr, "%s", "GGML_OP_CONV_TRANSPOSE_1D\n");
                }
                break;
                case GGML_OP_UNFOLD_1D:
                {
                    fprintf(stderr, "%s", "GGML_OP_UNFOLD_1D\n");
                }
                break;

It ran through to the end and made the .wav file (albeit static noise only haha), since the stubbed ops never actually compute their outputs; obviously it fails the tests.

GGML_OP_PAD_REFLECT_1D
GGML_OP_CONV_TRANSPOSE_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_CONV_TRANSPOSE_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_CONV_TRANSPOSE_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D

I see these run a few times. I wonder how far off the runtime with those parts of the code missing is from the time it will take when it's done. It took ~26 seconds on an M3 Max.

If this were to run on a server, would some of this just need to load once? Or do all these steps need to run each time? And is it possible to stream, or is everything done all at once?

baileyheading avatar Jul 15 '24 14:07 baileyheading

You could avoid repeating the model loads if streaming, but you will still need to run the autoregressive model, diffusion model, and vocoder for each generation. I am working on making tortoise.cpp faster.

balisujohn avatar Jul 15 '24 16:07 balisujohn

Wrt how to get the ops working, each op has tests in the tests folder. You should be able to define a flag at the top of a particular test file to make it build with Metal, then run that test from the ggml/build/bin folder after calling make in ggml/build. See for example https://github.com/balisujohn/ggml/blob/tortoise/tests/test-conv-transpose-1d.cpp, which is currently set to build with the CUDA backend.
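
For anyone following along, those per-op tests boil down to building a small graph, computing it, and diffing the result against hard-coded expected values. A minimal self-contained sketch of the pattern, using a stand-in op and placeholder expected values (not the real test data):

    // minimal sketch of the ggml op-test pattern: compute a tiny graph on
    // the CPU backend and compare against expected values captured from a
    // known-good run; swap the op under test for conv_transpose_1d etc.
    #include "ggml.h"
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        struct ggml_init_params params = {
            /*.mem_size   =*/ 16 * 1024 * 1024,
            /*.mem_buffer =*/ NULL,
            /*.no_alloc   =*/ false,
        };
        struct ggml_context * ctx = ggml_init(params);

        struct ggml_tensor * x = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
        ggml_set_f32(x, 1.0f);

        struct ggml_tensor * y = ggml_scale(ctx, x, 2.0f); // stand-in op

        struct ggml_cgraph * gf = ggml_new_graph(ctx);
        ggml_build_forward_expand(gf, y);
        ggml_graph_compute_with_ctx(ctx, gf, /*n_threads=*/1);

        const float expected[4] = {2.0f, 2.0f, 2.0f, 2.0f}; // placeholder
        for (int i = 0; i < 4; i++) {
            const float got = ggml_get_f32_1d(y, i);
            if (fabsf(got - expected[i]) > 1e-5f) {
                fprintf(stderr, "mismatch at %d: %f vs %f\n", i, got, expected[i]);
                return 1;
            }
        }

        ggml_free(ctx);
        printf("ok\n");
        return 0;
    }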

balisujohn avatar Jul 15 '24 16:07 balisujohn

I'll get up to speed with those tests next time I work on the operations.

What do you have planned to make tortoise faster?

baileyheading avatar Jul 16 '24 08:07 baileyheading

There are unnecessary ops that can be removed; in particular, some copies from float32 to float16 can go once ggml_conv_1d supports float32 weight tensors. There are also probably some unnecessary transposes, since the weights could be saved already transposed.
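
To illustrate the transpose point with hypothetical helper names: if the conversion script writes a weight already transposed, the runtime graph drops a transpose plus the cont that makes it contiguous on every forward pass.

    // hypothetical helpers illustrating the idea (names invented here)
    // today: the graph pays for a transpose + cont on every forward pass
    static struct ggml_tensor * linear_current(struct ggml_context * ctx,
                                               struct ggml_tensor * w,
                                               struct ggml_tensor * x) {
        struct ggml_tensor * w_t = ggml_cont(ctx, ggml_transpose(ctx, w));
        return ggml_mul_mat(ctx, w_t, x);
    }

    // proposed: w_pre was transposed once when the .bin file was written,
    // so the runtime graph is just the matmul
    static struct ggml_tensor * linear_pretransposed(struct ggml_context * ctx,
                                                     struct ggml_tensor * w_pre,
                                                     struct ggml_tensor * x) {
        return ggml_mul_mat(ctx, w_pre, x);
    }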

balisujohn avatar Jul 16 '24 09:07 balisujohn