ace: zephyr: alloc: Use virtual memory heap for buffers

Open softwarecki opened this issue 1 year ago • 17 comments

The buffer allocation method for the ACE platform has been changed: buffers are now allocated using a virtual memory heap, which consists of a set of buffers with predefined sizes.

Dividing memory into buffers of predefined sizes is necessary for the proper operation of the allocator, which creates a buffer whose size is given as a range (buffer_alloc_range, https://github.com/thesofproject/sof/pull/9145) - a functionality used for deep buffering.

PRs that need to be merged before this PR:

  • [x] https://github.com/thesofproject/sof/pull/9114
  • [x] https://github.com/zephyrproject-rtos/zephyr/pull/72511

softwarecki avatar May 22 '24 15:05 softwarecki
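
For illustration, here is a minimal sketch of the step-down behaviour described in the issue body: try the preferred size first, then shrink toward the minimum until an allocation succeeds. The halving step and the malloc() stand-in are assumptions for this example, not the actual buffer_alloc_range implementation.

```c
#include <stdlib.h>
#include <stddef.h>

/* Sketch only: allocate a buffer whose final size may fall anywhere in
 * [minimum, preferred]. Halving is an assumed step policy; the real
 * helper from PR #9145 may step down differently. */
static void *alloc_range(size_t minimum, size_t preferred, size_t *actual)
{
	for (size_t size = preferred; size >= minimum && size > 0; size /= 2) {
		void *buf = malloc(size); /* stand-in for the heap allocator */

		if (buf != NULL) {
			*actual = size; /* report the size actually granted */
			return buf;
		}
	}

	return NULL; /* not even the minimum size could be satisfied */
}
```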

Dividing memory into buffers of a predefined size is necessary for the proper operation of the allocator, which creates a buffer with a size given as a range

@softwarecki sorry, could you explain why it's necessary? Your buffer size as a range PR is separate, so it also works with the standard Zephyr heap allocator, right?

lyakh avatar Jun 04 '24 06:06 lyakh

@lyakh In a situation where the preferred buffer size given by the IPC is too large to be allocated, the buffer size is reduced step by step until allocation succeeds. Using this mechanism with a regular allocator would consume all available memory, so subsequent buffer allocations would fail due to a lack of free memory. If the memory has instead been divided up front into buffers of specific sizes, allocating the largest available buffer leaves the remaining buffers free.

softwarecki avatar Jun 04 '24 11:06 softwarecki

@softwarecki sorry, I still don't understand. You have your range allocator. First you request a buffer of size min=16k, max=128k. Say we have more than 128k available, so we allocate the maximum - with both allocators, right? Then comes another request for the same size range. Now we don't have another 128k available any more, but we do have 16k, so we allocate that - with either allocator. What's the difference? Why are you saying that a regular allocator "will allocate all available memory"? Why should one allocation take all the available memory? Confused.

lyakh avatar Jun 05 '24 06:06 lyakh

@lyakh Let me explain. Suppose we want to make two allocations (min=16k, preferred=128k) but only 96k is available. With a classic allocation from sys_heap, the first allocation scrapes together all the available memory and the second one fails. If we have previously divided that space into three 32k buffers, both allocations succeed and one buffer is still left over. The difference shows up when the preferred size exceeds the available memory.

softwarecki avatar Jun 05 '24 12:06 softwarecki
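
To make the 96k example concrete, here is a toy model of the two strategies (sizes in KiB; the helpers and numbers are illustrative only, not the SOF or Zephyr APIs):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Toy model of the 96k example above; sizes in KiB, purely illustrative. */

/* Regular heap: a range request steps down until it fits, so the first
 * caller takes the largest chunk the heap can still provide. */
static size_t heap_free = 96;

static size_t heap_alloc_range(size_t min, size_t pref)
{
	size_t size = pref < heap_free ? pref : heap_free;

	if (size < min)
		return 0; /* allocation failed */
	heap_free -= size;
	return size;
}

/* VMH-style: memory pre-divided into three 32k buffers, so one request
 * can never consume more than a single buffer. */
static bool block_used[3];

static size_t vmh_alloc_range(size_t min, size_t pref)
{
	(void)pref; /* the granted size is capped by the 32k block size */

	for (int i = 0; i < 3; i++) {
		if (!block_used[i] && min <= 32) {
			block_used[i] = true;
			return 32;
		}
	}
	return 0;
}

int main(void)
{
	size_t a = heap_alloc_range(16, 128); /* grabs all 96k */
	size_t b = heap_alloc_range(16, 128); /* fails */
	size_t c = vmh_alloc_range(16, 128);  /* one 32k buffer */
	size_t d = vmh_alloc_range(16, 128);  /* another 32k buffer */

	printf("heap: %zu then %zu\n", a, b); /* heap: 96 then 0 */
	printf("vmh:  %zu then %zu\n", c, d); /* vmh:  32 then 32 */
	return 0;
}
```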

@softwarecki ok, I see, but TBH this seems to be a rather artificial example to me. In this "lucky" situation - yes, that would be the case. But in many other cases the VMH would fail too. In fact, doesn't the VMH add fragmentation? E.g. if you allocate buffers of 24k or 48k and you use 32k or 64k VMH buffers for them?

lyakh avatar Jun 06 '24 07:06 lyakh

@lyakh Fragmentation applies only to the virtual address space. For example, if the VMH allocates a 64k block for a 48k buffer, only 48k of physical memory is actually mapped.

softwarecki avatar Jun 12 '24 13:06 softwarecki
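
A sketch of that point, assuming a 4k page size and a hypothetical map_page() hook standing in for the platform's MMU routine: a 48k buffer placed in a 64k virtual block only maps 12 of the block's 16 pages.

```c
#include <stddef.h>

#define PAGE_SIZE 4096u

/* Hypothetical platform hook that maps one virtual page to physical
 * memory; stands in for the real MMU/TLB mapping routine. */
extern void map_page(void *virt);

/* Map only the pages the caller actually asked for. For a 48k buffer in
 * a 64k virtual block this maps 12 of 16 pages; the remaining 16k of
 * virtual space stays unbacked and costs no physical memory. */
static void map_buffer(void *virt_block, size_t requested_bytes)
{
	size_t pages = (requested_bytes + PAGE_SIZE - 1) / PAGE_SIZE;

	for (size_t i = 0; i < pages; i++)
		map_page((char *)virt_block + i * PAGE_SIZE);
}
```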

@softwarecki ahaaa, right, thanks, I missed that - that's better then. So to avoid situations where you leave memory unused because you ran out of "blocks" even though you still have free memory, you'd need more blocks than the total memory available. It has to be "hand-tuned" somehow and will therefore sometimes hit "unfavourable" configurations. But I'm not a memory allocator expert, so I cannot really be sure which allocator would be more optimal in our use-cases.

lyakh avatar Jun 13 '24 06:06 lyakh

SOFCI TEST

softwarecki avatar Jun 13 '24 09:06 softwarecki

@softwarecki I think the west update can probably be dropped now, as Zephyr should be up to date.

lgirdwood avatar Jun 25 '24 14:06 lgirdwood

@softwarecki can you check internal CI? Thanks!

lgirdwood avatar Jul 05 '24 14:07 lgirdwood

@softwarecki I'm good with this today, but long term we should move all our memory logic into Zephyr, i.e. the SOF application would be a client of Zephyr memory-management APIs.

@nashif @teburd @andyross fwiw, we are good to take this into SOF atm, but can we plan initial Zephyr support for virtual memory in the next merge window? I assume we are still pending some dependencies for virtual memory heaps in Zephyr? @kv2019i topic for the next Zephyr/SOF call?

lgirdwood avatar Jul 08 '24 09:07 lgirdwood

@lgirdwood In my opinion, the current implementation places overly restrictive requirements on the memory-division configuration. This is partly due to the use of mem_blocks (the buffer size must be a power of two) and partly due to design assumptions (the buffer bundle size must be aligned to the page size).

Last year I proposed my own allocator with predefined buffers for Zephyr (https://github.com/zephyrproject-rtos/zephyr/pull/54714), but it was met with criticism that it did not use Zephyr's existing allocators as a base. It supports any division configuration and does not use dynamic memory allocation. Virtual memory support (page mapping) was supposed to be added as the next layer, but it turned out that colleagues were already working on a similar solution, so I did not continue the work.

softwarecki avatar Jul 08 '24 15:07 softwarecki
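
For concreteness, a sketch of what those constraints mean in practice, using Zephyr's sys_mem_blocks API; the pool names and sizes are made up for illustration and are not SOF's actual VMH configuration:

```c
#include <zephyr/sys/mem_blocks.h>

/* Each pool's block size must be a power of two (a mem_blocks
 * requirement), and each bundle's total size must land on a 4k page
 * boundary (a VMH design assumption). Sizes here are illustrative. */
SYS_MEM_BLOCKS_DEFINE(vmh_pool_2k, 2048, 16, 4096);  /* 32k bundle  */
SYS_MEM_BLOCKS_DEFINE(vmh_pool_32k, 32768, 4, 4096); /* 128k bundle */

/* An allocation then draws one block from a fitting pool, e.g.: */
void *example_alloc(void)
{
	void *block;

	if (sys_mem_blocks_alloc(&vmh_pool_2k, 1, &block) == 0)
		return block;
	return NULL;
}
```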

OK, we can keep it here until everything can be aligned with the Zephyr folks and migrated in the future.

lgirdwood avatar Jul 09 '24 11:07 lgirdwood

@softwarecki can you check internal CI and confirm? We could split the PR here if it's only one patch that's causing the failure.

lgirdwood avatar Jul 09 '24 11:07 lgirdwood

@softwarecki still DNM (do not merge)?

lgirdwood avatar Sep 02 '24 15:09 lgirdwood

Release reminder - one week to v2.11-rc1.

kv2019i avatar Sep 06 '24 10:09 kv2019i

@softwarecki any ETA? Stable v2.11 is now forked, so there's less risk now.

lgirdwood avatar Sep 17 '24 13:09 lgirdwood

@kv2019i @lgirdwood I think it's good to merge now.

softwarecki avatar Nov 28 '24 10:11 softwarecki

Ack - just waiting on CI now.

lgirdwood avatar Nov 28 '24 15:11 lgirdwood

SOFCI TEST

kv2019i avatar Dec 03 '24 08:12 kv2019i

The sof-docs failure and the Intel LNL failures are all known and tracked in https://github.com/thesofproject/sof/issues?q=is%3Aissue+is%3Aopen+label%3A%22Known+PR+Failures%22+

kv2019i avatar Dec 03 '24 10:12 kv2019i