Node.js unhandled 'error' event when running Figma Code Connect for a Swift package.
While this is a Swift package and not an app, I am able to get further than before with the new 1.1.2 update.
But I'm getting the following Node errors when running the initial figma connect, and I'm having trouble making sense of them.
Found Code Connect Swift package at ., building parser binary. This may take a few minutes if this is the first time you've run Code Connect.
node:events:497
throw er; // Unhandled 'error' event
^
Error: write EPIPE
at WriteWrap.onWriteComplete [as oncomplete] (node:internal/stream_base_commons:95:16)
Emitted 'error' event on Socket instance at:
at emitErrorNT (node:internal/streams/destroy:169:8)
at emitErrorCloseNT (node:internal/streams/destroy:128:3)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
errno: -32,
code: 'EPIPE',
syscall: 'write'
}
For reference, my figma.connect.json:
{
  "codeConnect": {
    "parser": "swift",
    "swiftPackagePath": "Package.swift",
    "include": ["Sources/Pedal/Components/**"],
    "exclude": ["Tests/**", "docs/**", "build/**", "PedalShowcase/**"]
  }
}
figma -v: 1.1.2, macOS 14.6.1 (23G93)
Helpful Links
See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/1561
4 New Failures, 5 Cancelled Jobs
As of commit f506e224547c7ebe72396408b49c452d41bda291 with merge base 9a863c8bd41e2efecc3de475a791226f4a154358:
NEW FAILURES - The following jobs have failed:
- GPU tests / gpu_test (3.9, stable): tests/torchtune/modules/test_transformer_decoder.py::TestTransformerDecoder::test_kv_cache_reset_values
- Recipe Tests / recipe_test (3.11): tests/recipes/test_ppo_full_finetune_single_device.py::TestPPOFullFinetuneSingleDeviceRecipe::test_loss
- Unit Test / unit_tests (3.10): tests/torchtune/modules/test_transformer_decoder.py::TestTransformerDecoder::test_kv_cache_reset_values
- Unit Test / unit_tests (3.11): tests/torchtune/modules/test_transformer_decoder.py::TestTransformerDecoder::test_kv_cache_reset_values
CANCELLED JOBS - The following jobs were cancelled. Please retry:
- GPU tests / gpu_test (3.10, stable): The operation was canceled.
- GPU tests / gpu_test (3.11, stable): tests/torchtune/modules/test_transformer_decoder.py::TestTransformerDecoder::test_kv_cache_reset_values
- Recipe Tests / recipe_test (3.10): The operation was canceled.
- Recipe Tests / recipe_test (3.9): The operation was canceled.
- Unit Test / unit_tests (3.9)
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Hey @dvorjackz, can you share the motivation for this PR? (I know it's still a draft, just wanna understand what the goal is for when I do eventually review it)
@ebsmothers added context to the PR description!
Left another nit. Looks good to me. Make sure it is exportable. And I want to hear thoughts from the Tune folks.
Hey @dvorjackz, thanks for the PR and the extra context! It makes sense to me to make it swappable, but this is a relatively large change. I am not sure how this will interact with compile + multimodal. There are also other ongoing PRs that are modifying kv_cache. We may need to align on the design a bit.
I want to minimize the amount of work you have to do, but if your version is working, adding testing will make it much easier to approve (e.g. you could run supervised training for the vision model for 50 steps with/without the PR to confirm that everything works fine, and then run a generation task). But we should align on the design first.
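If it helps, even a bare-bones output-parity check would be a useful sanity check on top of the training runs. Something along these lines (just a rough sketch, not our actual test code; building the two attention modules is elided):

```python
import torch

@torch.no_grad()
def check_parity(attn_main, attn_pr, bsz=2, seq_len=16, embed_dim=512):
    """Run the same batch through the attention module built from main and
    from this PR and make sure the outputs match."""
    torch.manual_seed(0)
    x = torch.randn(bsz, seq_len, embed_dim)
    torch.testing.assert_close(attn_main(x), attn_pr(x), rtol=1e-4, atol=1e-5)
```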
Sorry for the delay in getting to this one. Tbh I am not sure I like the way we are refactoring the multi-head attention here. I think there is a pretty canonical MHA flow that folks in OSS are used to (see e.g. litgpt's CausalSelfAttention or transformers's LlamaAttention) and this would be diverging from that in a meaningful way. I am OK with such divergence if it makes the code easier to understand, but in this case we are actually adding another layer of abstraction that's not very clear (why do we need a separate module to handle a couple of reshapes + SDPA? Why do we call it SDPA when it in fact contains a call to nn.functional.scaled_dot_product_attention, which we then call self._attn_fn?)
Anyways, this is not to be too harsh on this PR because I do understand the motivation from the ET perspective. Just so that I am more well-informed here, can you share the optimized SDPA op from ET so I can look at it as a reference? Then maybe we can brainstorm about a less intrusive way we can achieve this.
Evan, your point is quite fair. See the custom SDPA implementation here: https://github.com/pytorch/executorch/blob/main/examples/models/llama2/source_transformation/sdpa.py#L19. Really, at the heart of it is SDPA with a KV cache, so the point of the PR is to refactor SDPA to own the cache. The crux of the issue is that when the model is exported, updates to the KV cache result in a ton of copies, which causes issues for both memory footprint and latency. The refactored SDPA allows ET to apply an ET-specific module swap that does the KV cache update via a custom op.
But I would not use that as an argument to necessitate this change; this change should also stand on its own merit. To me, decoupling attention from the q, k, v projections plus RoPE is good. For example, if I were to apply sliding-window attention like the one here, https://github.com/Lightning-AI/litgpt/blob/main/litgpt/model.py#L274, I would have to make more intrusive changes. If I were to have a ring-buffer KV cache, I would also have to update the MHA implementation. So to me, decoupling projection + RoPE from SDPA helps. In this case we add the KV cache to SDPA as well because it is really relevant to attention.
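To make that concrete, here is a rough sketch of the shape I am describing (illustrative only, not the actual torchtune or ExecuTorch code; the names, signatures, and batch-size-1 cache are all simplifications): SDPA owns the KV cache, so a backend can swap the whole module for one that does the cache update + attention in a single custom op.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SDPA(nn.Module):
    """Attention + KV cache, decoupled from the q/k/v projections and RoPE
    that stay in MultiHeadAttention. Assumes batch size 1 for simplicity."""

    def __init__(self, num_heads: int, head_dim: int, max_seq_len: int):
        super().__init__()
        cache_shape = (1, num_heads, max_seq_len, head_dim)
        self.register_buffer("k_cache", torch.zeros(cache_shape), persistent=False)
        self.register_buffer("v_cache", torch.zeros(cache_shape), persistent=False)

    def forward(self, q, k, v, input_pos, mask=None):
        # q, k, v: [b, n_h, s, h_d]. The in-place cache writes below are what
        # turn into lots of copies once the model is exported; an ET custom op
        # can fuse the cache update and the attention instead.
        self.k_cache[:, :, input_pos] = k
        self.v_cache[:, :, input_pos] = v
        return F.scaled_dot_product_attention(
            q, self.k_cache, self.v_cache, attn_mask=mask
        )


def swap_sdpa(module: nn.Module, custom_sdpa_cls):
    """ET-style source transform: replace every SDPA instance with a
    backend-specific implementation."""
    for name, child in module.named_children():
        if isinstance(child, SDPA):
            setattr(module, name, custom_sdpa_cls(child))
        else:
            swap_sdpa(child, custom_sdpa_cls)
```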
Thanks @kimishpatel for the explanation and the code pointer, this helps a lot. I have a handful of follow-up comments:
(1) Sliding window attention is an interesting example. I had been thinking recently about this, but from the perspective of using FlexAttention to enable it (instead of what they're doing in litgpt). I saw there is an SDPAFlex in ET; I'm curious how you're currently using that.
(2) Kind of related to (1)... I realized my claim of "we shouldn't add layers of abstraction to MHA" wasn't 100% honest since we've already kinda done it (sorry about that). We do already have SDPA parametrized for flex attention support. I think the thing I don't like is that in this PR we are keeping the existing parametrization of FlexAttention vs functional.scaled_dot_product_attention and then wrapping that in another abstraction (and for some reason putting all the reshapes in there too). It seems to me like the SDPA class defined in this PR should (a) take in q, k, v in their SDPA-ready shapes, (b) do the KV caching, (c) check if the mask is a block mask or not, and (d) dispatch to either our compiled flex attention or vanilla functional.scaled_dot_product_attention based on (c); a rough sketch is at the end of this comment. There may be some subtleties here around compiling FlexAttention that I'm missing, cc @RdoubleA for a sanity check there.
(3) I do agree with @felipemello1's previous comment about some of our ongoing KV cache refactors (also about interactions with other features, but that can come after imo). One comment I have up front is that it's a bit weird to make SDPA stateful (but I'm just biased and like to keep everything functional). Ultimately I care more about correctness: @SalmanMohammadi and @joecummings have been doing a lot here recently, so I wanna have them take a look to be sure this makes sense from their perspective.
Sorry I know there's a lot here, but TLDR is I generally agree with the idea of factoring out the actual SDPA op provided we can do it cleanly and consistently. I think there are a few changes to make in this PR to get there, but happy to get more into the weeds with the design than I have here if that'd help. Other than that the main open question I have is around the KV cache and ensuring we don't break anything there.
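To be concrete about (2), this is roughly what I am picturing (a very rough sketch that glosses over the FlexAttention compile details; it assumes the flex_attention API in recent PyTorch, and the KVCache.update signature is a placeholder):

```python
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.attention.flex_attention import BlockMask, flex_attention


class SDPA(nn.Module):
    def __init__(self, kv_cache=None):
        super().__init__()
        self.kv_cache = kv_cache  # set via setup_caches(); None otherwise

    def forward(self, q, k, v, mask=None):
        # (a) q, k, v arrive already in SDPA-ready shape [b, n_h, s, h_d]
        # (b) KV caching lives here, not in MultiHeadAttention
        if self.kv_cache is not None:
            k, v = self.kv_cache.update(k, v)
        # (c) + (d) dispatch on the mask type
        if isinstance(mask, BlockMask):
            return flex_attention(q, k, v, block_mask=mask)
        return F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
```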
@ebsmothers
I saw there is an SDPAFlex in ET, I'm curious how you're currently using that.
Ignore that. I don't know why we named it that. It's not really flex attention and not a good example to focus on.
It seems to me like the SDPA class defined in this PR should (a) take in q, k, v in their SDPA-ready shapes, (b) do the KV caching, (c) check if the mask is a block mask or not, and (d) dispatch to either our compiled flex attention or vanilla functional.scaled_dot_product_attention based on (c)
This does make sense. I need to think a bit about how to make that work with the SDPA that we have, which is really why all the reshapes and transposes have moved inside, but your point is fair. Let me think about it.
One comment I have up front is that it's a bit weird to make SDPA stateful
Yeah, that's true, and this part is probably very specific to what ET wants. Let me actually spend some time on this to iron out the details.
Hi folks, shelving this for now since we chose to just add a clone of MultiHeadAttention into ET to source transform in https://github.com/pytorch/executorch/pull/6719