feat: delete orphaned files

Closes #1200

### Rationale for this change

Ability to do more table maintenance from pyiceberg (iceberg-python?).

### Are these changes tested?

Added a test!

### Are there any user-facing changes?

Yes, this is a new method on the `Table` class.
a meta question, wydt of moving the orphan file function to its own file/namespace, similar to how we use `.inspect`?
i like the idea of having all the table maintenance functions together, similar to delta table's optimize
I think that makes sense -- would https://github.com/apache/iceberg-python/pull/1880 end up there too?
Also ideally there is a CLI that exposes all the maintenance actions too right?
I think moving things to a new OptimizeTable class in a new namespace optimize.py makes a lot of sense; it can be modeled very similarly to the InspectTable and generally makes things cleaner. I think it still makes sense to have `all_known_files` inside of inspect, though, and the new OptimizeTable can still use it.
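The namespace pattern discussed above could be sketched roughly like this. All names besides `Table.inspect` are assumptions for illustration (`OptimizeTable`, `delete_orphaned_files`, and the mock classes are not the actual pyiceberg API), modeled on how pyiceberg exposes `InspectTable` via the `Table.inspect` property:

```python
# Hypothetical sketch; class and method names are assumptions, not pyiceberg's API.

class InspectTable:
    """Read-only metadata views, in the spirit of pyiceberg's inspect namespace."""

    def __init__(self, tbl: "Table") -> None:
        self.tbl = tbl

    def all_known_files(self) -> set:
        # A real implementation would walk manifests and metadata to
        # collect every file path the table's snapshots reference.
        return set(self.tbl.known_files)


class OptimizeTable:
    """Maintenance actions grouped in one place, similar to Delta's optimize."""

    def __init__(self, tbl: "Table") -> None:
        self.tbl = tbl

    def delete_orphaned_files(self, listed_files: set) -> set:
        # Orphans are files present in storage that no snapshot references;
        # note this reuses inspect.all_known_files() rather than duplicating it.
        known = self.tbl.inspect.all_known_files()
        orphans = listed_files - known
        # A real implementation would delete the orphans via the table's FileIO.
        return orphans


class Table:
    """Minimal stand-in for pyiceberg's Table, showing the two namespaces."""

    def __init__(self, known_files: list) -> None:
        self.known_files = known_files

    @property
    def inspect(self) -> InspectTable:
        return InspectTable(self)

    @property
    def optimize(self) -> OptimizeTable:
        return OptimizeTable(self)
```

Usage would then read naturally as `table.optimize.delete_orphaned_files(...)`, keeping read-only views under `.inspect` and mutating maintenance actions under `.optimize`.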
> i like the idea of having all the table maintenance functions together, similar to delta table's optimize
That's a good point. However, I think we should be able to run them separately as well. For example, delete orphan files won't affect the speed of the table, so it is more of a maintenance feature to reduce object storage costs. Delete orphan files can also be pretty costly because of the list operation; ideally you would delegate this to the catalog, which could use, for example, S3 Inventory.
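The core of orphan-file detection described above can be sketched as a set difference with a grace period. Everything here is illustrative (`find_orphans` and its parameters are not pyiceberg's API); the `listed` mapping stands in for the output of the expensive storage listing, which is exactly the part you might delegate to S3 Inventory:

```python
import time


def find_orphans(listed: dict, known_files: set, older_than_s: float = 3 * 24 * 3600) -> set:
    """Hypothetical sketch of orphan detection, not the pyiceberg implementation.

    `listed` maps each path found by listing the table location to its
    last-modified epoch time. A grace period (default three days) avoids
    deleting files from writes that are still in flight and not yet
    committed to a snapshot.
    """
    cutoff = time.time() - older_than_s
    return {
        path
        for path, mtime in listed.items()
        if path not in known_files and mtime < cutoff
    }
```

The set difference itself is cheap; the dominant cost is producing `listed` on an object store, which is why a catalog-provided inventory is attractive.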
@Fokko we probably also want pyiceberg to have some idea about https://iceberg.apache.org/spec/#delete-formats right? Is it currently aware of those files?
@jayceslesar I believe the merge-on-read delete files (positional deletes, equality deletes, and deletion vectors) are returned by the all-files. The only part that's missing is the partition statistics files.
Sounds good, I will add the partition statistics files when that is merged!
One issue I've found with this PR is that the catalog properties need to propagate to PyArrowFileIO(properties=...), otherwise endpoint/authentication/etc. to things like S3 simply fail ...
Going to get around to adding tests for both types of FileIO... @Fokko @kevinjqliu anything else you think we need here?
@jayceslesar how's this coming? Let me know if I can help with anything. I'd like to use this in prod as well!