RFC: A better event system
Introduction
Events are at the very heart of Mita. They map well to the IoT world, where devices often have to lie dormant until some event arises. Events are also well suited as a blocking signal mechanism (think Go channels or mutexes) where, when an event is triggered, some barrier is passed, e.g. the entry into an event handler.
Current Situation
Mita currently has a crude form of event mechanism where platform components can define events which can be handled only at the top level. This approach suffers from several drawbacks:
- Events can only be defined and triggered by platform components. This takes great power away from the user who might be inclined to use the event system to structure their own code.
- Events can only be handled at the top level. For example, at the moment one cannot write a function that waits for a sequence of events to happen.
- Events have no parameters, which limits their usefulness. For example, in the case of a sensor event it is only possible to express that an event happened, but not which event it was.
Proposal
Events with payload
Events can, but don't have to, deliver a single piece of data. Thus events can be typed to that datum (called the event value). When an event is triggered, that data is distributed to all subscribers to that event. Only a single event value can be sent as payload. Because Mita is currently heapless, we can only send value types as event values (otherwise we would need to find space on the stack which is guaranteed to still exist when the event handler starts).
User defined events
Users can define events at the top level. Events can be exported from a package (see RFC000) so that others can await them. Events are defined with the "event" keyword. Events can be triggered using the "trigger" keyword. Users can only trigger events defined in the same package.
Await events
Users can await events in functions or other event handlers. Users can choose to use or ignore the value that comes with the event. Awaiting an event halts the entire call chain until the event is received. Users can specify how long they want to wait for an event; if the event does not occur within that timeframe, an AwaitTimeoutException is thrown. Awaiting a timeout (a time-based event) is a special case of this behavior where no exception is thrown.
Limits and Exclusion
"Qualified events" e.g., events declared in a struct and triggered on an instance of that struct, could be a powerful tool. However it is unclear how to implement this. Also, it raises further questions:
- Can events be passed around like values? I.e. assigned to variables or returned by functions?
- How would one "branch off" from the main execution to not block the current call chain while waiting for a "local" event?
Implementation
Events exist only in Mita and have no direct representation in C other than maybe a constant used for identifying an event.
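As a rough illustration (the names are made up here, not actual generator output), the generated C might contain little more than an identifier per event that the runtime uses to tell enqueued occurrences apart:

typedef enum {
    EVENT_FooEvent,
    EVENT_shockEvent,
    EVENT_COUNT
} MitaEventId;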
Events with payloads
The event payload would be passed as a copy to the event handler. Event handlers have the event value available as the it variable.
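A minimal sketch of what that could look like in generated C, assuming a hypothetical handler name (this is not the actual generator output): the trigger site copies the value into the queue entry, and the handler receives it by value, surfacing in Mita as it.

#include <stdint.h>
#include <stdio.h>

/* The event value travels by copy: once at the trigger site into the queue
   entry, and once more into the handler parameter, which Mita exposes as `it`. */
typedef struct {
    int32_t payload;   /* copied event value */
} ShockEventOccurrence;

static void handle_shockEvent(int32_t it) {
    printf("Shock detected at level %ld\n", (long) it);
}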
User defined events
User-defined events have little impact on code generation (global events already exist; whether they are defined in a library or by the user hardly matters).
Await events
When a function awaits an event we store its location using setjmp and restore it using longjmp. Instead of directly enqueueing the event handler we generate a wrapper which either calls the event handler afresh or uses longjmp to return to the previous location. This wrapper also takes care of unwrapping/providing the event value (note: in the XDK platform we are already generating this kind of behavior).
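A minimal sketch of such a wrapper, assuming a hypothetical single-waiter context and a scheduler_yield() hook provided by the runtime (all names are illustrative, and the usual setjmp/longjmp caveat applies: the frame containing the setjmp must still be alive when the longjmp happens):

#include <setjmp.h>
#include <stdbool.h>

extern void scheduler_yield(void);    /* assumed runtime hook                */
extern void FooEvent_handler(void);   /* generated top-level handler, if any */

typedef struct {
    jmp_buf resume_point;   /* where to longjmp back to when the event fires */
    bool    waiting;        /* true while a call is parked at an await       */
} AwaitContext;

static AwaitContext fooEvent_ctx;

/* Generated at an `await FooEvent` site. */
static void await_FooEvent(void) {
    if (setjmp(fooEvent_ctx.resume_point) == 0) {
        fooEvent_ctx.waiting = true;
        scheduler_yield();              /* hand control back to the event loop  */
    }
    fooEvent_ctx.waiting = false;       /* longjmp lands here after the trigger */
}

/* Wrapper enqueued by `trigger FooEvent`: resume a parked call if there is
   one, otherwise call the top-level handler afresh. */
static void FooEvent_wrapper(void) {
    if (fooEvent_ctx.waiting) {
        longjmp(fooEvent_ctx.resume_point, 1);
    } else {
        FooEvent_handler();
    }
}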
It is unclear how to handle multiple calls to the same function from different places. For example:
event FooEvent

func WaitsForFoo(name : string) {
    println(`Enter ${name}`)
    await FooEvent
    println(`Exit ${name}`)
}

every button_one.pressed {
    WaitsForFoo("f2")
}

every system.startup {
    WaitsForFoo("f1")
}
Assuming that both the system.startup and button_one.pressed events have been triggered, we will need to store two stacks/jump contexts for the same function. The setjmp/longjmp is bound to the function call, not the function itself. However, using static code analysis we should be able to compute the worst-case scenario for function execution and prepare enough memory.
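A sketch of the per-call-site bookkeeping, assuming the analysis concluded that at most two WaitsForFoo calls can be parked at the same time (the bound and all names are hypothetical):

#include <setjmp.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_PARKED_WAITSFORFOO 2   /* worst case computed by static analysis */

typedef struct {
    jmp_buf resume_point;
    bool    in_use;
} ParkedCall;

static ParkedCall waitsForFoo_parked[MAX_PARKED_WAITSFORFOO];

/* Claim a free slot for the call that is about to await FooEvent;
   returns NULL if the computed worst case was exceeded. */
static ParkedCall *claim_parking_slot(void) {
    for (size_t i = 0; i < MAX_PARKED_WAITSFORFOO; i++) {
        if (!waitsForFoo_parked[i].in_use) {
            waitsForFoo_parked[i].in_use = true;
            return &waitsForFoo_parked[i];
        }
    }
    return NULL;
}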
Code Example
event shockEvent : i32

func computeShockSeverity() : i32 {
    return (accelerometer.magnitude / 0.1) as i32
}

every accelerometer.activity {
    var severity = computeShockSeverity()
    if severity > 0 {
        trigger shockEvent, severity
    }
}

every button.pressed {
    if(it == 0) {
        // button one was pressed. Let's give the user some time to put the device out of their hand
        // and wait for a shock event to happen.
        await 30 seconds
        try {
            var severity = await shockEvent, 1 minute
            println(`Shock detected at level ${severity}`)
        } catch(AwaitTimeoutException) {
            println("No shock happened.")
        }
    }
}
Comments
@32leaves:
It would seem that implementing this feature in a platform-independent manner is hard. The platform generator would certainly have to be involved. There might also be platforms which do not support this kind of behavior.
For example, on FreeRTOS platforms using setjmp/longjmp is not a good idea. Maybe we want to create tasks per event, or based on some form of sophisticated static execution analysis, which get deleted when all events have triggered (see http://www.freertos.org/a00126.html).
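As a rough sketch of the task-per-await idea: assuming trigger shockEvent posts the copied payload to a FreeRTOS queue created at startup (queue, task, and function names are made up for illustration), the await with a one-minute timeout maps to a blocking queue receive and the task deletes itself afterwards:

#include <stdint.h>
#include <stdio.h>
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

static QueueHandle_t shockEventQueue;   /* assumed to be created at startup,
                                           e.g. xQueueCreate(1, sizeof(int32_t)) */

/* One short-lived task per awaiting handler. */
static void AwaitShockTask(void *pvParameters) {
    int32_t severity;
    (void) pvParameters;
    /* Block for up to one minute; corresponds to `await shockEvent, 1 minute`. */
    if (xQueueReceive(shockEventQueue, &severity, pdMS_TO_TICKS(60000)) == pdPASS) {
        printf("Shock detected at level %ld\n", (long) severity);
    } else {
        printf("No shock happened.\n");   /* the AwaitTimeoutException branch */
    }
    vTaskDelete(NULL);                    /* see the FreeRTOS link above */
}

/* Spawned where the Mita handler reaches the await. */
void on_button_one_pressed(void) {
    xTaskCreate(AwaitShockTask, "awaitShock", configMINIMAL_STACK_SIZE + 128,
                NULL, tskIDLE_PRIORITY + 1, NULL);
}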
Alternatively, we could aim to implement something akin to Blech and flatten the execution so that we can implement the await using switch/case or goto. With the current architecture (single-threaded execution) we cannot use mutexes to implement the await, as that would block the whole execution until the event happens.
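A minimal sketch of the flattened form, assuming the generator introduces one state variable per handler and the trigger site sets a flag together with the copied event value (protothread-style; all names are made up):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef enum { STEP_START, STEP_WAIT_SHOCK, STEP_DONE } HandlerState;

static HandlerState state = STEP_START;
static bool         shockFired;   /* set by the trigger site */
static int32_t      shockValue;   /* copied event value      */

/* Stepped repeatedly from the single-threaded main loop; instead of blocking,
   the handler returns and resumes at the stored step on the next iteration. */
void buttonPressedHandler_step(void) {
    switch (state) {
    case STEP_START:
        /* ...code before the await... */
        state = STEP_WAIT_SHOCK;          /* reached `await shockEvent` */
        break;
    case STEP_WAIT_SHOCK:
        if (shockFired) {
            printf("Shock detected at level %ld\n", (long) shockValue);
            state = STEP_DONE;
        }
        break;
    case STEP_DONE:
        break;
    }
}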
Another way of building this is using a "worker pool" into which we inject functions (a minimal sketch follows below). There are different ways to choose which functions to inject:
- Inject all functions, regardless of whether that is required or not. This would work, but would also multiply concurrency problems, many of them invisible to our users and terrible to debug.
- Inject all functions executed with a special statement, something like go-routines. Those functions would not be allowed to access global memory and would have no return values (everything else is just terribly hard). This severely limits those functions to reactive branches only.
- Do the same as in the previous option, but analyze the whole call graph to see whether there is an await statement anywhere in it.
If we open the concurrency can of worms we really need to think about synchronization and data-race guarantees.
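A minimal sketch of such a worker pool under the current single-threaded architecture, assuming injected functions are cooperative step functions that report when they are done (the pool size and all names are hypothetical):

#include <stdbool.h>
#include <stddef.h>

typedef bool (*WorkerFn)(void *state);   /* returns true once finished */

#define WORKER_POOL_SIZE 4   /* hypothetical bound, e.g. from static analysis */

typedef struct {
    WorkerFn fn;
    void    *state;
    bool     active;
} WorkerSlot;

static WorkerSlot pool[WORKER_POOL_SIZE];

/* "Inject" a function, e.g. where the source uses the special go-routine-like statement. */
bool worker_pool_inject(WorkerFn fn, void *state) {
    for (size_t i = 0; i < WORKER_POOL_SIZE; i++) {
        if (!pool[i].active) {
            pool[i].fn     = fn;
            pool[i].state  = state;
            pool[i].active = true;
            return true;
        }
    }
    return false;   /* pool exhausted */
}

/* Stepped from the main loop; each active worker runs until its next await point. */
void worker_pool_step(void) {
    for (size_t i = 0; i < WORKER_POOL_SIZE; i++) {
        if (pool[i].active && pool[i].fn(pool[i].state)) {
            pool[i].active = false;
        }
    }
}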
Language-wise, I would suggest adding some more syntax to differentiate things.
The trigger syntax, as in trigger shockEvent, severity, reads "trigger the events shockEvent and severity", although the latter is a parameter and not an event. Hence I suggest something like trigger shockEvent(severity).
Instead of trigger, we can think of using raise or send. That depends on which wording the users are used to from other frameworks.
The await syntax could also be clearer: await shockEvent, 1 minute reads as "wait for event shockEvent or for 1 minute". I suggest adding a timeout keyword, which would also differentiate a "normal" time-based wait, as in await 30 sec, which does not trigger a timeout exception, from a timeout-aware wait such as await shockEvent timeout 1 minute.