feature request: loadByCompositeKeyAsync
https://github.com/expo/universe/pull/10823/files#r1020707597
Previously discussed in https://github.com/expo/entity/pull/123#issuecomment-847502513
In a subset of cases where we have unique composite keys (`UNIQUE (a, b)`), we could tell the Entity framework that the pair `(a, b)` is cacheable and have its cache entries invalidated the same way single-field keys get theirs invalidated. I think this auto-invalidation could be built on top of the more flexible, generic secondary cacher in this PR. (Auto-invalidation would not work for a case like `ORDER BY created_at DESC LIMIT 1` where there is no unique key.)

To accomplish this, I think a generalized approach would be better than a separate implementation. I could see it being done by factoring out what already exists in the loader into something like `loadByFieldsCompositeKey(Tuple<FieldName, FieldValue>[])` and then making `loadByFieldEqualing` the n=1 case that calls into it (with similar conversions for the other existing methods). The tough thing will be figuring out what the set of keys to auto-invalidate is (a sketch of the generalized API follows the list below):
- Columns: `a`, `b`, `c`
- User specifies `(a, b)` as a composite key
- User updates column `b` to a new value; we need to invalidate the cartesian product of the old and new sub-column values (though I haven't verified this yet)
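
As a rough sketch of what that generalized surface could look like (all of these names are hypothetical, not existing expo/entity API):

```ts
// Hypothetical sketch; none of these names exist in expo/entity today.
// For brevity this doesn't correlate fieldName and fieldValue per pair.
interface FieldEqualityPair<TFields, K extends keyof TFields = keyof TFields> {
  fieldName: K;
  fieldValue: TFields[K];
}

interface GeneralizedLoader<TFields> {
  // Load by a conjunction of field equalities; when the fields form a
  // declared unique composite key (e.g. UNIQUE (a, b)), the result is
  // cacheable and can be auto-invalidated like single-field keys.
  loadByFieldsCompositeKeyAsync(
    pairs: ReadonlyArray<FieldEqualityPair<TFields>>
  ): Promise<TFields | null>;
}

// loadByFieldEqualing becomes the n = 1 case that calls into the
// generalized method.
async function loadByFieldEqualingAsync<TFields, K extends keyof TFields>(
  loader: GeneralizedLoader<TFields>,
  fieldName: K,
  fieldValue: TFields[K]
): Promise<TFields | null> {
  return await loader.loadByFieldsCompositeKeyAsync([{ fieldName, fieldValue }]);
}
```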
Thinking about this, let's say we have an entity definition where an entity's name is unique per account:
```
TestEntity
  columns:
    id: cache - true
    other_unique_field: cache - true
    name
    account_id
  composite_keys:
    [account_id, name]
```
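
Sketched as code, that definition might look something like this (the `compositeKeys` option is the hypothetical new piece; the rest is simplified pseudo-configuration, not the real expo/entity configuration shape):

```ts
// Loose sketch of the definition above; `compositeKeys` is hypothetical.
const testEntityConfiguration = {
  tableName: 'test_entity',
  fields: {
    id: { cache: true },
    other_unique_field: { cache: true },
    name: { cache: false },
    account_id: { cache: false },
  },
  // Each entry declares a unique column tuple that gets its own composite
  // dataloader and cache entry, invalidated alongside the single-field keys.
  compositeKeys: [['account_id', 'name']],
} as const;
```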
Before composite keys:
- Create `{ id: 1, other_unique_field: 'blah', name: 'hello', account_id: 1 }`
  - Invalidations:
    - `testentity:id:1` (dataloader, cache)
    - `testentity:other_unique_field:blah` (dataloader, cache)
    - `testentity:name:hello` (dataloader)
    - `testentity:account_id:1` (dataloader)
- Update `{ id: 1, other_unique_field: 'blah2', name: 'hello2', account_id: 1 }`
  - Invalidations:
    - `testentity:id:1` (dataloader, cache)
    - `testentity:other_unique_field:blah` (dataloader, cache)
    - `testentity:other_unique_field:blah2` (dataloader, cache)
    - `testentity:name:hello` (dataloader)
    - `testentity:name:hello2` (dataloader)
    - `testentity:account_id:1` (dataloader)
- Delete
  - Invalidations:
    - `testentity:id:1` (dataloader, cache)
    - `testentity:other_unique_field:blah2` (dataloader, cache)
    - `testentity:name:hello2` (dataloader)
    - `testentity:account_id:1` (dataloader)
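
A minimal sketch of the enumeration implied by the update case above (key format and names are illustrative, not the framework's actual cache key scheme):

```ts
// Illustrative enumeration of the single-field invalidations for an update:
// every field's dataloader key is invalidated for both the old and the new
// value; fields marked as cached also get their cache keys invalidated.
type FieldValues = Record<string, string | number>;

function singleFieldInvalidationKeys(
  entityName: string,
  cachedFields: ReadonlySet<string>,
  oldValues: FieldValues,
  newValues: FieldValues
): { dataloaderKeys: string[]; cacheKeys: string[] } {
  const dataloaderKeys = new Set<string>();
  const cacheKeys = new Set<string>();
  for (const fieldName of Object.keys(oldValues)) {
    for (const value of [oldValues[fieldName], newValues[fieldName]]) {
      const key = `${entityName}:${fieldName}:${value}`;
      dataloaderKeys.add(key);
      if (cachedFields.has(fieldName)) {
        cacheKeys.add(key);
      }
    }
  }
  return { dataloaderKeys: [...dataloaderKeys], cacheKeys: [...cacheKeys] };
}

// Reproduces the Update case above:
// singleFieldInvalidationKeys(
//   'testentity',
//   new Set(['id', 'other_unique_field']),
//   { id: 1, other_unique_field: 'blah', name: 'hello', account_id: 1 },
//   { id: 1, other_unique_field: 'blah2', name: 'hello2', account_id: 1 }
// );
```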
After adding a composite key on `(account_id, name)`, additional invalidations are needed for the scenario above (we need to come up with some mechanism for cache key creation that ensures no conflicts based on field values; see the sketch after this list). There'd also be a composite dataloader for each composite key:
- Create
  - `composite:testentity:account_id,name:1,hello` (dataloader, cache)
- Update
  - `composite:testentity:account_id,name:1,hello` (dataloader, cache)
  - `composite:testentity:account_id,name:1,hello2` (dataloader, cache)
- Delete
  - `composite:testentity:account_id,name:1,hello2` (dataloader, cache)
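
The keys above use a naive readable format that would conflict if a field value contained `:` or `,`. One option for conflict-free key creation is to length-prefix each part so the encoding is injective; a purely illustrative sketch:

```ts
// Illustrative collision-safe composite cache key: length-prefixing each
// part makes the encoding injective, so a field value containing ':' or ','
// cannot alias a different (fieldNames, fieldValues) tuple.
function compositeCacheKey(
  entityName: string,
  parts: ReadonlyArray<readonly [fieldName: string, fieldValue: string]>
): string {
  const encoded = parts
    .map(([name, value]) => `${name.length}:${name}${value.length}:${value}`)
    .join(',');
  return `composite:${entityName}:${encoded}`;
}

// compositeCacheKey('testentity', [['account_id', '1'], ['name', 'hello']])
// => 'composite:testentity:10:account_id1:1,4:name5:hello'
```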
I think this might be fairly straightforward, but it's hard to know. It'd be good to spend some time coming up with edge cases.
Okay, got this implemented on a branch: https://github.com/expo/entity/compare/main...%40wschurman/03-21-chore_refactor_common_adapter_loading_and_caching
Planning to reorder the PRs to ease review, but the general approach is to genericize the keys/values interface for the batched/cached pipeline (data manager -> cache adapter -> database adapter). To do this, we need to add a "holder object" concept that both single-field and composite-field loads/invalidations implement. And to make the holders work in existing code, we need to make them "hashable", in the sense that we need to override equality for the relevant internal operations, which mostly turn out to be uses of these holders as Map keys.
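
Since JavaScript Maps compare object keys by reference, making holders "hashable" likely means giving each holder a canonical serialization that internal code uses as the actual Map key. A minimal sketch with hypothetical names (the branch's real holder types may differ):

```ts
// Minimal sketch of "hashable" holders. Two distinct holder instances
// describing the same load serialize identically and are thus treated as
// equal when the serialized form is used as the Map key.
interface LoadKeyHolder {
  serialize(): string;
}

class SingleFieldHolder implements LoadKeyHolder {
  constructor(
    private readonly fieldName: string,
    private readonly fieldValue: string
  ) {}

  serialize(): string {
    return `single:${this.fieldName.length}:${this.fieldName}:${this.fieldValue}`;
  }
}

class CompositeFieldHolder implements LoadKeyHolder {
  constructor(private readonly parts: ReadonlyArray<readonly [string, string]>) {}

  serialize(): string {
    const encoded = this.parts
      .map(([name, value]) => `${name.length}:${name}${value.length}:${value}`)
      .join(',');
    return `composite:${encoded}`;
  }
}

// Usage: key internal Maps by the serialized form rather than the object.
const results = new Map<string, unknown>();
const holder = new CompositeFieldHolder([['account_id', '1'], ['name', 'hello']]);
results.set(holder.serialize(), { id: 1 });
```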
So, concretely the plan is:
- Introduce some generic utils that will be used further up the stack (#269)
- Convert the current single-value batched/cached pipeline to use holders (#271).
- Add a new composite value holder. Add new loader methods for loading by composite value (#272).