feat(web): the gesture-recognition engine 🐵
This feature branch is devoted to developing a module that will facilitate #5029 in addition to our existing OSK gestures.
By developing an isolated module for gesture support, we'll be able to demo and test gesture behavior outside of KeymanWeb. One of my design goals here is to enable gesture unit tests by recording tracked input-sequence data; this can be achieved by keeping the core logic headless and replaying recorded input sequences in a headless environment.
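To make the recording/replay idea concrete, here's a minimal sketch of what headless replay could look like. Every name below (`RecordedSample`, `RecordedSequence`, `HeadlessRecognizer`, and their members) is a hypothetical placeholder for illustration, not this module's actual API:

```typescript
// All of these names are hypothetical; they illustrate the concept, not the module's API.
interface RecordedSample {
  targetX: number;  // coordinate relative to the tracked element
  targetY: number;
  t: number;        // timestamp in ms, relative to the sequence's start
}

interface RecordedSequence {
  touchpointID: number;
  samples: RecordedSample[];
  terminationType: 'end' | 'cancel';  // how the touchpoint's path concluded
}

interface HeadlessRecognizer {
  simulateInput(touchpointID: number, sample: RecordedSample): void;
  simulateTermination(touchpointID: number, type: 'end' | 'cancel'): void;
}

// Replays a recorded sequence against a headless recognizer, emitting each
// sample at (approximately) its original relative timestamp.
async function replaySequence(recognizer: HeadlessRecognizer, seq: RecordedSequence) {
  const start = Date.now();
  for (const sample of seq.samples) {
    const delay = sample.t - (Date.now() - start);
    if (delay > 0) {
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
    recognizer.simulateInput(seq.touchpointID, sample);
  }
  recognizer.simulateTermination(seq.touchpointID, seq.terminationType);
}
```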
This branch should not be merged into master until the new module is sufficiently ready and has been integrated into KeymanWeb properly. Until then, it should live as a feature branch.
Phase 1: the base GestureRecognizer
This section is devoted to establishing the interface for the basic infrastructure needed to properly support gestures, given the constraints imposed by KeymanWeb, and to ensuring that future work on this module will be testable independently of Web.
Goals:
- a host page that can replicate the needs of a touch-platform OSK, including screen-edge limitations, etc.
  - Such a page will be quite useful for demoing gesture behavior without the need to worry about how it will affect Web.
- Either the same page or an extended page that allows recording of inputs received, for use in unit-test development (much like web/tools/recorder's role for KeymanWeb's engine tests).
- The ability to support tracking of multiple touchpoints (see the sketch after this list).
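To illustrate the multi-touchpoint goal, here's a minimal sketch of per-touchpoint path tracking over standard DOM `TouchEvent`s; the class and its fields are hypothetical placeholders, not the module's planned design:

```typescript
// Illustrative only: a minimal multi-touchpoint tracker; all names are hypothetical.
interface TrackedPoint {
  identifier: number;
  path: { x: number; y: number; t: number }[];
}

class TouchpointTracker {
  private points = new Map<number, TrackedPoint>();

  constructor(element: HTMLElement) {
    element.addEventListener('touchstart', (e) => this.onTouch(e), { passive: true });
    element.addEventListener('touchmove',  (e) => this.onTouch(e), { passive: true });
    element.addEventListener('touchend',   (e) => this.onRelease(e));
    element.addEventListener('touchcancel',(e) => this.onRelease(e));
  }

  private onTouch(e: TouchEvent) {
    const t = performance.now();
    // changedTouches only includes the touchpoints affected by this event,
    // so each concurrent touchpoint accumulates its own independent path.
    for (const touch of Array.from(e.changedTouches)) {
      let point = this.points.get(touch.identifier);
      if (!point) {
        point = { identifier: touch.identifier, path: [] };
        this.points.set(touch.identifier, point);
      }
      point.path.push({ x: touch.clientX, y: touch.clientY, t });
    }
  }

  private onRelease(e: TouchEvent) {
    for (const touch of Array.from(e.changedTouches)) {
      this.points.delete(touch.identifier);  // the finished path would be forwarded onward first
    }
  }
}
```

The key detail is that `TouchEvent.changedTouches` reports only the touchpoints affected by each event, keyed by a stable `identifier`, which is what lets concurrent touchpoints accumulate independent paths.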
PRs for this phase:
- #6842 (this one!)
  - For future reference: through commit https://github.com/keymanapp/keyman/pull/6842/commits/d5222dfaadf94989b5c230a863cf81ffbd73a1bd.
- #6843
- #6844
- #6878
  - Hooray for something interactive!
  - Has an initial set of user tests.
- #6914
- #6952
  - Adds an input-sequence recorder page useful for unit-test preparation.
- #6970
  - Actual unit tests!
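As a taste of what those unit tests enable, a recorded sequence can be replayed headlessly and asserted against deterministically. The sketch below is mocha-style and leans on the hypothetical replay types sketched earlier; `buildHeadlessRecognizer`, `loadFixture`, and the `'gesture'` event are all made-up stand-ins:

```typescript
// Mocha-style sketch; every name below is hypothetical, not the module's real API.
declare function buildHeadlessRecognizer(): HeadlessRecognizer & {
  on(event: 'gesture', handler: (name: string) => void): void;
};
declare function loadFixture(name: string): RecordedSequence;

describe('longpress recognition', () => {
  it('resolves a longpress from a recorded hold', async () => {
    const recognizer = buildHeadlessRecognizer();
    const recording = loadFixture('longpress.json');

    const resolved: string[] = [];
    recognizer.on('gesture', (name) => resolved.push(name));

    await replaySequence(recognizer, recording);

    // Headless replay should yield the same resolution on every run and platform.
    if (resolved[0] !== 'longpress') {
      throw new Error(`expected longpress, got: [${resolved.join(', ')}]`);
    }
  });
});
```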
Phase 2: the GestureSegmenter
This section will be devoted to analyzing raw input-event data and mapping those time-based input sequences to gesture "segments" - small-scale gesture components that will serve as "building blocks" for complex gesture types, their differentiation, and state modeling.
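As a rough illustration of what a "segment" might carry, here's a hypothetical data model; the actual segment taxonomy and fields are precisely what this phase is meant to work out:

```typescript
// Hypothetical segment model, for illustration only; real segment types are TBD.
type SegmentType = 'start' | 'hold' | 'move' | 'end';

interface Segment {
  type: SegmentType;
  touchpointID: number;
  startTime: number;     // ms, relative to the touchpoint's initial contact
  duration: number;      // ms
  // For 'move' segments: aggregate stats useful for differentiation.
  netDistance?: number;  // px
  angle?: number;        // radians, direction of net motion
  peakSpeed?: number;    // px/ms
}

// For example, a longpress might decompose as:  start -> hold (>= threshold) -> end
// while a flick might decompose as:             start -> move (high peakSpeed) -> end
```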
PRs for this phase:
- #7058
- More coming...
Phase 3: the GestureSynthesizer
Once the "building block" logic is sound and stable, we may then use those components to construct the complex gestures needed by KeymanWeb's OSK, along with the internal state machine(s) required for differentiation, detection, recognition, cancellation, and resolution. (A loose sketch of how that might look follows below.)
- TBD.
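Though this phase is still TBD, here's one possible shape for segment-driven differentiation, reusing the hypothetical `Segment` shape from the Phase 2 sketch above; `GestureModel`, `GestureSynthesizer`, and their members are all placeholders:

```typescript
// Builds on the hypothetical `Segment` interface sketched under Phase 2.
type GestureOutcome = 'pending' | 'matched' | 'rejected';

interface GestureModel {
  readonly name: string;  // e.g. 'longpress', 'flick', 'multitap'
  // Each incoming segment either keeps this candidate alive, confirms it, or rules it out.
  update(segment: Segment): GestureOutcome;
}

class GestureSynthesizer {
  private candidates: GestureModel[];

  constructor(models: GestureModel[]) {
    this.candidates = [...models];
  }

  // Feed segments in as the segmenter produces them; returns a gesture's name once resolved.
  handleSegment(segment: Segment): string | null {
    const survivors: GestureModel[] = [];
    for (const model of this.candidates) {
      const outcome = model.update(segment);
      if (outcome === 'matched') {
        return model.name;      // recognized; resolve the gesture
      } else if (outcome === 'pending') {
        survivors.push(model);  // still viable; keep tracking it
      }
      // 'rejected' candidates are dropped; differentiation and cancellation fall out here.
    }
    this.candidates = survivors;
    return null;
  }
}
```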
For this initial PR, the focus is on:
- establishing baseline infrastructure for this module
- ensuring that CI builds verify that the module builds properly
- copying over the relevant base classes from KeymanWeb as a starting point, without unnecessary changes.
As the actual TypeScript code here is unchanged from KeymanWeb, aside from a few blocks commented out due to external references, there's no need to examine it closely during review.
@keymanapp-test-bot skip
Just merged in the changes from #6986 and updated the build script accordingly; this'll be needed to properly address requested changes in #6914 and #6952.
Now that we're moving this to a more "proper" feature branch, it's time to 'merge' this and let the already-merged work sit in its new home.