Autumn roadmap update
The revision related to the implementation of static memory allocation has been postponed to the longer term. Is this due to the large amount of work involved?
In that regard, would it perhaps make sense to consider using an existing optimized dynamic memory manager in Cyclone DDS?
@i-and unfortunately, yes. We had to postpone the work on it because the actual implementation of XTypes uncovered too many dark corners: one would have thought revision 1.3 of the specification would be implementable as described, but not really. It isn't so much that it is unclear what is expected when an application does the obvious thing; the difficulty is in dealing with the interactions between non-obvious combinations of features. We chose to worry about those details now, however unlikely it may be that anyone will actually try them out.
Anyway, that's mostly behind us now, and static memory allocation is back on the agenda. The plan is still to use typed allocators that allocate from pools sized according to the resource limits (with the obvious extensions for the number of entities and so on). That way you can split the process lifetime into an initialization phase, in which these entities get created and the pools get allocated (you could even make that statically allocated memory, but that's not step one anyway), followed by an operational phase in which the only "allocations" that take place are ones from these pools.
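To make that a bit more concrete, here is a rough sketch of the idea (my illustration, not actual Cyclone DDS code): a fixed-size, typed pool that is filled once during initialization based on the resource limits and afterwards only hands out and takes back slots in constant time. The names and sizes are made up and thread safety is left out.

```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* One pool per fixed-size object type, sized from the resource limits during
   initialization; afterwards an "allocation" is just popping a slot off a
   free list, so it is constant-time and never calls malloc. */
struct pool {
  void *free_list;   /* singly-linked list threaded through the free slots */
  void *storage;     /* backing memory, allocated once at initialization */
  size_t slot_size;
};

static int pool_init (struct pool *p, size_t slot_size, size_t max_objects)
{
  if (slot_size < sizeof (void *))
    slot_size = sizeof (void *);
  p->slot_size = slot_size;
  p->storage = malloc (slot_size * max_objects); /* or a static buffer */
  if (p->storage == NULL)
    return -1;
  /* thread every slot onto the free list */
  p->free_list = NULL;
  for (size_t i = 0; i < max_objects; i++) {
    void *slot = (char *) p->storage + i * slot_size;
    memcpy (slot, &p->free_list, sizeof (void *));
    p->free_list = slot;
  }
  return 0;
}

static void *pool_alloc (struct pool *p)
{
  void *slot = p->free_list;
  if (slot != NULL)
    memcpy (&p->free_list, slot, sizeof (void *));
  return slot; /* NULL means the resource limit has been reached */
}

static void pool_free (struct pool *p, void *slot)
{
  memcpy (slot, &p->free_list, sizeof (void *));
  p->free_list = slot;
}

int main (void)
{
  /* example: a pool for (at most) 8 samples of some fixed-size type */
  struct sample { double ts; int value; };
  struct pool sample_pool;
  if (pool_init (&sample_pool, sizeof (struct sample), 8) < 0)
    return 1;
  struct sample *s = pool_alloc (&sample_pool);
  if (s != NULL) { s->ts = 0.0; s->value = 42; pool_free (&sample_pool, s); }
  free (sample_pool.storage);
  return 0;
}
```

The point is simply that once the resource limits are known at initialization, the pool sizes are fixed, the operational phase never touches malloc, and hitting a limit shows up as a NULL return instead of an unbounded allocation.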
Among the long-term milestones, "Time-Sensitive Networking support (DDS-TSN)" is listed. Is there now an understanding of how this can be implemented, taking into account that the standardization of DDS-TSN (https://www.omgwiki.org/ddsf/doku.php?id=ddsf:public:guidebook:06_append:01_family_of_standards:05_wip:ddstsn) is, as far as I understand, still at the beginning of its path?
@i-and apologies for not responding sooner. I did see your question, it was just that I saw it at an inconvenient time and then forgot about it.
The DDS-TSN standardization work is indeed at the beginning of the path, and it is moreover aiming extremely low. So low that I don't understand the point of it ... I'd say it is only about configuration and being able to reserve some bandwidth for a subset of the traffic.
Today you should already be able to do that, because there are various ways in which you can force traffic into a TSN flow (not sure if that's the right word): via ethernet multicast addresses, VLAN tagging and (I believe) even via UDP ports. In theory that means it should work if you use the raw ethernet interface mode, set "prefer multicast" (IIRC, that's an option that exists because you had a use for it!) and define network partitions that map the topics to multicast addresses corresponding to the flows. Needless to say, that does require some polishing ...
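To give a rough idea of what that looks like in practice, here is an untested sketch of mapping a partition/topic combination onto a multicast address via network partitions, with the configuration passed inline through CYCLONEDDS_URI from a small C program. The address, partition and topic names are made up, and the raw-ethernet and "prefer multicast" options are deliberately left out because I'd want to double-check their exact names against the configuration reference for the version in use.

```c
#include <stdlib.h>
#include "dds/dds.h"

int main (void)
{
  /* Inline configuration via CYCLONEDDS_URI (a value starting with '<' is
     treated as the configuration itself rather than as a file name).  It
     maps every topic in the hypothetical "control" DDS partition onto a
     multicast address that the network could associate with a reserved
     flow; the names and the address are made up for illustration. */
  setenv ("CYCLONEDDS_URI",
          "<General>"
          "<AllowMulticast>true</AllowMulticast>"
          "</General>"
          "<Partitioning>"
          "<NetworkPartitions>"
          "<NetworkPartition Name=\"tsnflow\" Address=\"239.255.0.13\"/>"
          "</NetworkPartitions>"
          "<PartitionMappings>"
          "<PartitionMapping DCPSPartitionTopic=\"control.*\" NetworkPartition=\"tsnflow\"/>"
          "</PartitionMappings>"
          "</Partitioning>",
          1);

  /* any participant created after this picks up the configuration */
  dds_entity_t pp = dds_create_participant (DDS_DOMAIN_DEFAULT, NULL, NULL);
  if (pp < 0)
    return 1;
  dds_delete (pp);
  return 0;
}
```

The network side of it, i.e. associating that multicast address (or a VLAN tag) with a reserved flow, is outside Cyclone DDS and has to be configured on the switches and NICs separately.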
What I personally think is the interesting case is where you can bound the time from scheduling some piece of the application to when the data is handed off to the OS network stack. Ideally, one would then also use some kernel network stack bypass like XDP to eliminate as much jitter as possible.
To properly bound that time, some things still need to be dealt with. Firstly, allocations in the data path: while in practice you hardly ever see memory allocations for small messages because the data can mostly be taken from a cached pool, "hardly ever" is not good enough. Another issue is that if local readers exist and a writer disappears, you can run into a reader history cache that is locked for a duration related to the number of instances. From day 0 there has been a plan for dealing with that, but getting all the identifiers in the built-in topics to behave rode roughshod over it. Fortunately, the original plan is still possible; it just requires a few more internal identifiers.
Anyway, those are some obvious changes that are needed to make the claim of supporting TSN a meaningful one, rather than a tick-the-box exercise. I'd love to have it sooner rather than later, but it simply will take some time.
While an update is still required, this one has been well overtaken by events.