CU-8695hghww backwards compatibility workflow
Add a new step to the GHA workflow that does the following (see the sketch after this list):
- Downloads all (fake) models for various versions
- 1.2.9, 1.3.1, 1.4.1, 1.5.3, 1.6.1, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.11.0, 1.12.0
- These are fake models created with the fake data as per #470
- They do not contain MetaCATs or RelCATs
- Makes sure they all work
- Runs each model through the fake-model based regression suite as per #470
- This is to ensure they all run, at least in principle
- Cleans up afterwards
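A minimal sketch of what such a step could look like, assuming a hypothetical bucket URL (medcat-fake-models), a placeholder regression-runner script (tests/check_backwards_compat.py), and a local fake_models/ working directory; none of these names come from the actual workflow or from #470:

```yaml
  backwards-compatibility:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Download fake model packs
        run: |
          mkdir -p fake_models
          # Hypothetical bucket/key layout; the real location is defined alongside #470
          for v in 1.2.9 1.3.1 1.4.1 1.5.3 1.6.1 1.7.0 1.8.0 1.9.0 1.10.0 1.11.0 1.12.0; do
            curl -sSfL -o "fake_models/fake_model_${v}.zip" \
              "https://medcat-fake-models.s3.amazonaws.com/fake_model_${v}.zip"
          done
      - name: Run each model through the fake-model regression suite
        run: |
          pip install .
          # Placeholder runner; the actual regression entry point comes from #470
          for pack in fake_models/*.zip; do
            python tests/check_backwards_compat.py "$pack"
          done
      - name: Clean up
        if: always()
        run: rm -rf fake_models
```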
This PR depends on #470; the GHA workflow will fail until that PR has been merged in, since some of the necessary parts either don't exist yet or are in a different state.
NOTE: Perhaps I'd want to include some (basic) legacy DeID models as well to make sure those work too?
Task linked: CU-8695hghww Add model compatibility workflow
Updated the zip in the S3 bucket to include a model pack built with 1.13.
Reran the latest GHA workflow to run it through that version as well.
LGTM. Does this mean each future release of MedCAT will require a new fake model to be created and uploaded to S3? Is this part of the standard release process?
There is currently nothing in the release documentation about this. But if/when this gets merged in, I think such a step should be added to the release documentation. There is also currently nothing to ensure that the most recent release has a fake model in the bucket; that might be a good addition.
With that said, adding a compulsory step for this could complicate the release process. After all, whoever is doing the release would need to download the existing models, unzip them, create a new model with the new release, and upload the archive back to S3. This requires additional time as well as access to the S3 bucket. But I think it would still make sense to add it to the (minor or major) release documentation, perhaps with a note to contact someone else to do the model updates if needed.
> After all, whoever is doing the release would need to download the existing models, unzip them, create a new model with the new release, and upload the archive back to S3.
Yes, documentation will help. Also, some automation could be done as part of the release workflow; see configure-aws-credentials and the sketch below.
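A rough sketch of how that automation might look in the release workflow, using aws-actions/configure-aws-credentials for S3 access; the role ARN, region, bucket name, and the build script are placeholders rather than the project's actual values:

```yaml
  upload-fake-model:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required for OIDC-based authentication to AWS
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Build a fake model pack with the just-released version
        run: |
          pip install .
          # Hypothetical script that creates a fake model pack for the current tag
          python tests/build_fake_model_pack.py --out "fake_model_${GITHUB_REF_NAME}.zip"
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/medcat-fake-model-upload  # placeholder
          aws-region: eu-west-2
      - name: Upload the new fake model pack to S3
        run: aws s3 cp "fake_model_${GITHUB_REF_NAME}.zip" s3://medcat-fake-models/  # placeholder bucket
```

Hooking something like this into the tag-triggered release workflow would remove the manual download/unzip/upload step and the need for the release manager to have personal S3 access.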