Dataset hashes can change with package upgrades
The current way we handle data hashing doesn't survive package upgrades. For example, with pandas, we have been dumping dataframes and hashing the result, and the hashes change with pandas upgrades even when the data itself doesn't.
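For context, here's a minimal sketch of the kind of pickle-then-hash approach described above (the actual implementation may differ). The digest reflects pandas' internal serialization, not just the data, so it can change when pandas changes how dataframes pickle:

```python
import hashlib
import pickle

import pandas as pd


def pickle_hash(df: pd.DataFrame) -> str:
    """Hash a DataFrame by hashing its pickle bytes.

    The digest depends on pandas' internal block layout and pickle
    format, so it can change across pandas versions even when the
    underlying values are identical.
    """
    return hashlib.sha256(pickle.dumps(df)).hexdigest()


df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})
print(pickle_hash(df))  # may differ between e.g. pandas 1.0.5 and 1.3.2
```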
Possible references:
- https://stackoverflow.com/questions/31567401/get-the-same-hash-value-for-a-pandas-dataframe-each-time#47800021
- http://drorata.github.io/posts/2017/May/26/when-trying-to-hash-a-data-frame/index.html
Example: In https://github.com/acwooding/ReproAllTheThings, we get different hashes with pandas==1.0.5 and pandas==1.3.2 on macOS.
Another potential culprit: https://github.com/joblib/joblib/pull/1136
The risk (which is the reason, I assume, it was not done this way already) is that the pickle memoization process will interfere with hashing and create spurious changes in the pickle string of dtypes, with the final consequence of assigning different hash values to seemingly identical objects.
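Since joblib's hashing also pickles the object internally, it is exposed to the same dtype pickle-string changes. A quick (hypothetical) way to compare environments is to print the pandas version alongside the joblib digest in each:

```python
import joblib
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})

# joblib.hash serializes the object via a Pickler subclass, so the digest
# can shift when pandas changes how dtypes/blocks are pickled.
print(pd.__version__, joblib.hash(df))
```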
I think there's a deeper issue here: to be truly reproducible, we need a hash that's more aware of the data itself, since serialization formats can change from version to version even though the underlying raw data is identical.
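One possible direction, along the lines of the StackOverflow link above, is to hash the values rather than the pickle bytes, e.g. with `pandas.util.hash_pandas_object`. A rough sketch (not a drop-in fix; column names and dtypes would still need to be folded in explicitly if we want them to affect the digest, and stability would need checking across versions):

```python
import hashlib

import pandas as pd


def data_hash(df: pd.DataFrame) -> str:
    """Hash a DataFrame based on its values and index, not its pickle bytes."""
    # hash_pandas_object returns one uint64 per row, derived from the data
    # itself, so it is far less sensitive to pandas' internal representation.
    row_hashes = pd.util.hash_pandas_object(df, index=True)
    return hashlib.sha256(row_hashes.values.tobytes()).hexdigest()


df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})
print(data_hash(df))
```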