Daniel K
The backend part of forward lineage (aka impact) has been merged to `develop` on the service side. Note that there is no UI support for this feature yet.
> @dk1844 Currently the build is failing because the license is missing in some files.

@cerveada, thanks for noticing, fixed.
I have tested both the `sh` and `cmd` scripts to verify that they correctly pass the datetime configuration fields down to `spark-submit`. I have not run the actual job....
What needs to be done prior to this task:
1. configuration of Spline 0.6
2. migration of Spline data
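
For step 1, a minimal sketch of how the Spline 0.6 agent could be wired into a Spark job, assuming programmatic initialization and a placeholder gateway URL (codeless init via `spark-submit` conf is the alternative); the property name should be verified against the agent version actually deployed:

```scala
import org.apache.spark.sql.SparkSession
import za.co.absa.spline.harvester.SparkLineageInitializer._

object Spline06InitSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("spline-0.6-init-sketch")
      // Producer REST endpoint of the Spline 0.6 gateway; the URL is a placeholder.
      .config("spark.spline.producer.url", "http://localhost:8080/producer")
      .getOrCreate()

    // Enable lineage tracking for write actions executed by this session.
    spark.enableLineageTracking()

    // ... run the usual Standardization/Conformance job logic here ...

    spark.stop()
  }
}
```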
Does this include rollback of `migrated=true` and `migratedHash` as well? Since in https://github.com/AbsaOSS/enceladus/issues/2015 we are suggesting to rely on the `migrated` flag instead of locks (because some entities may not...
Another take on (previously migrated) shared mapping: when rolling back, take into account the `migratedHash` of the entities specifically mentioned in the rollback, e.g. `-d dataset1` - only entities matching `dataset1`'s `migratedHash`...
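
To make that matching rule concrete, a minimal Scala sketch of the idea; the `Entity` model, its field names, and the `candidates` helper are illustrative assumptions, not the actual Menas schema or migration-tool code:

```scala
// Sketch only: entities carry the migrated flag and the hash of the run that migrated them.
final case class Entity(name: String, migrated: Boolean, migratedHash: Option[String])

object RollbackSelection {

  /** Selects rollback candidates: the entities explicitly named in the rollback
    * (e.g. `-d dataset1`) plus any shared entities carrying the same migratedHash,
    * i.e. entities migrated by the same run. */
  def candidates(all: Seq[Entity], requestedNames: Set[String]): Seq[Entity] = {
    val requested = all.filter(e => requestedNames.contains(e.name) && e.migrated)
    val hashes = requested.flatMap(_.migratedHash).toSet
    all.filter(e => e.migrated && e.migratedHash.exists(hashes.contains))
  }

  def main(args: Array[String]): Unit = {
    val entities = Seq(
      Entity("dataset1", migrated = true, Some("run-A")),
      Entity("mappingTable1", migrated = true, Some("run-A")), // shared, same run
      Entity("dataset2", migrated = true, Some("run-B"))       // different run, left alone
    )
    // Prints: List(dataset1, mappingTable1)
    println(candidates(entities, Set("dataset1")).map(_.name))
  }
}
```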
@benedeki, although @Zejnilovic brought up the use case of mapping tables, I have tried to abstract it and find a general approach. The options you are suggesting are possible from a locking perspective, with...
May be solved by https://github.com/AbsaOSS/spark-data-standardization/issues/7
I have added the results of the pre- and post-migration counting/hash scripts in the _Recovery Point Validation_ section.