
Downgrade hadoop to 2.8 for docker setup

Open teamurko opened this issue 1 year ago • 5 comments

Summary

Maintenance apps happened to work on the hadoop3 docker cluster, but DLO apps didn't, due to an incompatibility between source built against Hadoop 2.10 and the runtime jars on the cluster. Example DLO app error:

2024-08-08 09:07:22 2024-08-08 16:07:22,005 INFO utils.LineBufferedStream: 2024-08-08 16:07:22,004 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) (172.20.0.8 executor 0): java.lang.IllegalAccessError: class org.apache.hadoop.hdfs.web.HftpFileSystem cannot access its superinterface org.apache.hadoop.hdfs.web.TokenAspect$TokenManagementDelegator
2024-08-08 09:07:22 2024-08-08 16:07:22,005 INFO utils.LineBufferedStream:      at java.lang.ClassLoader.defineClass1(Native Method)
2024-08-08 09:07:22 2024-08-08 16:07:22,005 INFO utils.LineBufferedStream:      at java.lang.ClassLoader.defineClass(ClassLoader.java:756)
2024-08-08 09:07:22 2024-08-08 16:07:22,005 INFO utils.LineBufferedStream:      at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at java.security.AccessController.doPrivileged(Native Method)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at java.lang.Class.forName0(Native Method)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at java.lang.Class.forName(Class.java:348)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:370)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:3217)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3262)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3301)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at org.apache.iceberg.hadoop.Util.getFs(Util.java:56)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at org.apache.iceberg.hadoop.HadoopInputFile.fromLocation(HadoopInputFile.java:56)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at org.apache.iceberg.hadoop.HadoopFileIO.newInputFile(HadoopFileIO.java:90)
2024-08-08 09:07:22 2024-08-08 16:07:22,006 INFO utils.LineBufferedStream:      at org.apache.iceberg.spark.source.BaseDataReader.lambda$new$2(BaseDataReader.java:87)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at java.util.HashMap$EntrySpliterator.tryAdvance(HashMap.java:1720)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at java.util.stream.StreamSpliterators$WrappingSpliterator.lambda$initPartialTraversalState$0(StreamSpliterators.java:295)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:207)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:162)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:301)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at java.lang.Iterable.forEach(Iterable.java:74)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at org.apache.iceberg.relocated.com.google.common.collect.Iterables$5.forEach(Iterables.java:752)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at org.apache.iceberg.spark.source.BaseDataReader.<init>(BaseDataReader.java:93)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at org.apache.iceberg.spark.source.RowDataReader.<init>(RowDataReader.java:57)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at org.apache.iceberg.spark.source.SparkBatchScan$RowReader.<init>(SparkBatchScan.java:301)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at org.apache.iceberg.spark.source.SparkBatchScan$ReaderFactory.createReader(SparkBatchScan.java:278)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD.compute(DataSourceRDD.scala:60)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at org.apache.spark.scheduler.Task.run(Task.scala:131)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream:      at java.lang.Thread.run(Thread.java:748)
2024-08-08 09:07:22 2024-08-08 16:07:22,007 INFO utils.LineBufferedStream: 

Works after downgrade.
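To make the failure mode explicit: the apps are built against Hadoop 2.10 jars while the docker cluster ran Hadoop 3, and Hadoop client jars are not compatible across major versions. A minimal sketch of that rule (the function and version strings are illustrative, not part of this PR):

```python
def compatible(client_version: str, cluster_version: str) -> bool:
    """Hadoop client and cluster jars must share the same major version.

    Within the 2.x line, a client built for 2.7/2.10 can run against 2.8
    (which is what makes the downgrade in this PR work).
    """
    client_major = client_version.split(".")[0]
    cluster_major = cluster_version.split(".")[0]
    return client_major == cluster_major

print(compatible("2.10.0", "3.1.0"))  # False: the DLO app's situation before this PR
print(compatible("2.10.0", "2.8.5"))  # True: after downgrading the docker cluster
```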

Changes

  • [ ] Client-facing API Changes
  • [ ] Internal API Changes
  • [x] Bug Fixes
  • [ ] New Features
  • [ ] Performance Improvements
  • [ ] Code Style
  • [ ] Refactoring
  • [ ] Documentation
  • [ ] Tests

For all the boxes checked, please include additional details of the changes made in this pull request.

Testing Done

  • [x] Manually tested on local docker setup. Please include the commands run and their output.
  • [ ] Added new tests for the changes made.
  • [ ] Updated existing tests to reflect the changes made.
  • [ ] No tests added or updated. Please explain why. If unsure, please feel free to ask for help.
  • [ ] Some other form of testing like staging or soak time in production. Please explain.

For all the boxes checked, include a detailed description of the testing done for the changes made in this pull request.

Additional Information

  • [ ] Breaking Changes
  • [ ] Deprecations
  • [ ] Large PR broken into smaller PRs, and PR plan linked in the description.

For all the boxes checked, include additional details of the changes made in this pull request.

teamurko avatar Sep 17 '24 00:09 teamurko

It looks like we might be hiding an issue, with the downgrade being a workaround. Can we try to see why this stopped working?

sumedhsakdeo avatar Sep 19 '24 21:09 sumedhsakdeo

It looks like we might be hiding an issue, with the downgrade being a workaround. Can we try to see why this stopped working?

My understanding is that it was never supposed to work; it happened to work for the first app we had (the stats collector app), and the DLO app then hit a code path that surfaced the incompatibility. There are Stack Overflow posts describing a similar problem where downgrading helped, e.g. https://stackoverflow.com/questions/62880009/error-through-remote-spark-job-java-lang-illegalaccesserror-class-org-apache-h

teamurko avatar Sep 19 '24 21:09 teamurko
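The stack trace above fails inside `java.util.ServiceLoader`, which discovers `FileSystem` implementations from `META-INF/services` entries in every jar on the classpath. A Hadoop 2.x hdfs jar still registers `HftpFileSystem`, which cannot load against Hadoop 3 runtime classes. A hedged sketch (the helper name is ours, not OpenHouse's) of how one could inspect which jar contributes that registration:

```python
import io
import zipfile

# ServiceLoader reads this file from each jar to discover FileSystem classes.
SERVICE_FILE = "META-INF/services/org.apache.hadoop.fs.FileSystem"

def filesystem_providers(jar) -> list:
    """Return the FileSystem implementations a jar registers, or [] if none.

    Run this over the jars shipped into the docker image to find which one
    registers org.apache.hadoop.hdfs.web.HftpFileSystem.
    """
    with zipfile.ZipFile(jar) as zf:
        if SERVICE_FILE not in zf.namelist():
            return []
        lines = zf.read(SERVICE_FILE).decode("utf-8").splitlines()
        return [ln.strip() for ln in lines if ln.strip() and not ln.startswith("#")]
```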

I see; why not go to the documented supported version, 2.10?

sumedhsakdeo avatar Sep 19 '24 21:09 sumedhsakdeo

My understanding is that it was never supposed to work; it happened to work for the first app we had (the stats collector app), and the DLO app then hit a code path that surfaced the incompatibility

StatsCollector was working before; see for example https://github.com/linkedin/openhouse/pull/72. After that, there were no significant changes in StatsCollector, so I am not sure this is actually a day-0 issue. This appears to be something recent.

maluchari avatar Sep 19 '24 21:09 maluchari

My understanding is that it was never supposed to work; it happened to work for the first app we had (the stats collector app), and the DLO app then hit a code path that surfaced the incompatibility

StatsCollector was working before; see for example #72. After that, there were no significant changes in StatsCollector, so I am not sure this is actually a day-0 issue. This appears to be something recent.

Thank you for sharing @maluchari

teamurko avatar Sep 19 '24 21:09 teamurko

Can we keep both hadoop configs in the code, so we can easily switch locally? Also, why don't we use bde2020/2.0.0-hadoop2.7.4-java8, which is a newer image and matches the spark-3.1.1-bin-hadoop2.7 version?

jiang95-dev avatar Feb 20 '25 18:02 jiang95-dev

Can we keep both hadoop configs in the code, so we can easily switch locally? Also, why don't we use bde2020/2.0.0-hadoop2.7.4-java8, which is a newer image and matches the spark-3.1.1-bin-hadoop2.7 version?

I was not able to make 2.0.0-hadoop2.7.4 work; Spark that is built for Hadoop 2.7 is compatible with Hadoop 2.8.

teamurko avatar Feb 27 '25 22:02 teamurko
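One way to keep both Hadoop configs switchable locally, as suggested above, is to select the image tag through an environment variable when generating the compose setup. A hedged sketch (the variable name and the 2.8 tag below are illustrative assumptions, not the repo's actual config; only the 2.7.4 tag is the one mentioned in this thread):

```python
import os

# Map a Hadoop line to a bde2020 namenode image tag. The 2.8 tag is a
# hypothetical placeholder; verify it exists on Docker Hub before using it.
IMAGE_TAGS = {
    "2.7": "bde2020/hadoop-namenode:2.0.0-hadoop2.7.4-java8",
    "2.8": "bde2020/hadoop-namenode:1.1.0-hadoop2.8-java8",  # assumed tag
}

def namenode_image(version: str = None) -> str:
    """Pick the namenode image from OH_HADOOP_LINE, defaulting to 2.8."""
    version = version or os.environ.get("OH_HADOOP_LINE", "2.8")
    return IMAGE_TAGS[version]
```

With this in place, switching the local cluster is a one-line change, e.g. `OH_HADOOP_LINE=2.7 docker compose up`.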