[SUPPORT] Data duplication caused by failure to delete invalid files before commit
Dear community,
Our user reported that after their daily job, which writes to a Hudi COW table, finished today, the downstream reading jobs found many duplicate records. The daily job has been online for a long time, and this is the first time it produced such a wrong result.
He provided a duplicated record as an example to help debug. The record appeared in 3 base files belonging to different file groups.
I checked today's writer job; the Spark application finished successfully.
In the driver log, those two extra files are marked as invalid files to be deleted, and only one file is marked as valid.
In the clean stage task log, those two files are also marked to be deleted, and there is no exception in the task either.
Those two files already existed on HDFS before the clean stage began, but they still existed after the clean stage finished.
Finally, I found the root cause: a corner case in HDFS. fs.delete does not throw any exception; it only returns false when HDFS fails to delete the file.
I also checked the fs.delete API; the definition itself is reasonable.
I think we should check the return value of fs.delete in HoodieTable#deleteInvalidFilesByPartitions to avoid wrong results. Besides, it is necessary to audit all places that call fs.delete.
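To make the idea concrete, here is a minimal sketch of the check I have in mind (the class and method names are hypothetical, not the actual Hudi code):

```java
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Sketch: delete an invalid data file and fail fast if the delete did not happen. */
public final class DeleteInvalidFileSketch {

  static void deleteInvalidFile(FileSystem fs, Path invalidFile) throws IOException {
    // fs.delete reports some failures only through its boolean return value,
    // so the result has to be checked explicitly instead of being ignored.
    boolean deleted = fs.delete(invalidFile, false);
    if (!deleted && fs.exists(invalidFile)) {
      // The invalid base file is still on HDFS; aborting here keeps the commit
      // from succeeding with a leftover file that downstream readers would see
      // as duplicate records.
      throw new IOException("Failed to delete invalid data file: " + invalidFile);
    }
  }
}
```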
Any suggestions?
You are right, we already have a recent fix: https://github.com/apache/hudi/pull/11343
@danny0405 Thanks for your attention.
I checked #11343, but it does not fix the current issue. The fix should go into HoodieTable#deleteInvalidFilesByPartitions, to handle the case where deleting the invalid files fails, while #11343 targets the clean service.
Hmm, would you mind putting up a fix for it?
I would like to put up a fix soon.
cc @yihua @nsivabalan @codope @xushiyan
Thanks @beyond1920. Please put out a patch; I would like to review as well.
thanks @nsivabalan.
I think the underlying file system should ensure that fs.delete throws an exception instead of returning false when it fails to delete the file.
But it could take a long time to discuss this and get all file system implementations to agree on that rule.
Should we introduce a new delete API in HoodieStorage that enforces this rule, or change the existing HoodieStorage#deleteDirectory and HoodieStorage#deleteFile APIs to avoid unexpected behavior when calling fs.delete?
Or just simply fix the current bug?
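For the second option, here is a rough sketch of what a storage-level "throw on failure" delete could look like (names and structure are illustrative only, not the real HoodieStorage interface):

```java
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/**
 * Sketch of a storage-level delete that enforces "throw on failure" in one place,
 * so callers never need to remember to check a boolean return value.
 */
public final class StrictDeleteSketch {

  private final FileSystem fs;

  StrictDeleteSketch(FileSystem fs) {
    this.fs = fs;
  }

  /** Deletes the file, treating an already-missing file as success. */
  void deleteFileOrThrow(Path path) throws IOException {
    if (fs.delete(path, false)) {
      return;
    }
    // Some schemes return false when the path does not exist; only treat
    // "delete returned false and the file is still present" as a real failure.
    if (fs.exists(path)) {
      throw new IOException("Delete returned false and file still exists: " + path);
    }
  }
}
```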
Should we introduce a new delete API in HoodieStorage that enforces this rule, or change the existing HoodieStorage#deleteDirectory and HoodieStorage#deleteFile APIs to avoid unexpected behavior when calling fs.delete?
+1 for this way.
Is the main reason that different file system schemes treat a missing file differently during fs.delete()? And are you proposing HoodieStorage#deleteFile to unify that?
This issue is now resolved and closed, following the merge of PR #11445.