[FLINK-36540][Runtime / Task] Add support for Hadoop CallerContext when using Flink to operate on HDFS.
What is the purpose of the change
As described in FLINK-36540, when we use Flink to delete, write, or modify files on a Hadoop filesystem, CallerContext is a helpful feature if we want to trace who performed an operation or count how many files an application creates on the Hadoop filesystem. So I added a new option: when CallerContext is enabled in Flink, we set a ThreadLocal value for each task so that the operation can be traced in Hadoop's audit.log.
What's more, by combining this new feature with the history JSON files in the history server, we can calculate how many read and write operations a Flink application performed against HDFS, and find out whether there is pressure or a bottleneck when operating on HDFS files.
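For reference, Hadoop exposes this mechanism through org.apache.hadoop.ipc.CallerContext. Below is a minimal, standalone sketch of how a thread-local caller context ends up tagging HDFS operations in the NameNode's audit.log; the context string is an illustrative placeholder, not the format produced by this PR:

```java
import org.apache.hadoop.ipc.CallerContext;

public class CallerContextExample {

    public static void main(String[] args) {
        // Attach a caller context to the current thread before touching HDFS.
        // The string here is only an example; the PR builds its own per-task string.
        CallerContext context =
                new CallerContext.Builder("flink_task_exampleJobId_exampleTaskName").build();

        // CallerContext is stored in a thread-local, so it only tags operations
        // issued from this thread.
        CallerContext.setCurrent(context);

        // ... any HDFS operation performed on this thread (create, delete, rename, ...)
        // now shows up in the NameNode audit.log with this context attached.
    }
}
```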
Brief change log
- Add a new HadoopOption
- Add a new static method `setCallerContext` in HadoopUtils.java
- Modify `startTaskThread` in Task.java and add a new method `getIsCallerContextEnabled` in Task.java (see the sketch after this list)
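To make the change log concrete, here is a rough sketch of how these pieces could fit together. The option key, class name, method names, and defaults below are assumptions for illustration only and are not taken from this PR's actual diff:

```java
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

import org.apache.hadoop.ipc.CallerContext;

/** Illustrative sketch only; names and defaults are assumptions, not the PR's actual code. */
public final class HadoopCallerContextSketch {

    // Hypothetical config option that toggles the feature per cluster/application.
    public static final ConfigOption<Boolean> CALLER_CONTEXT_ENABLED =
            ConfigOptions.key("flink.hadoop.caller-context.enabled")
                    .booleanType()
                    .defaultValue(false)
                    .withDescription("Whether to set a Hadoop CallerContext for each task thread.");

    // Hypothetical static helper, similar in spirit to the setCallerContext method
    // the change log adds to HadoopUtils: binds a context string to the current
    // (task) thread so that HDFS audit.log entries carry it.
    public static void setCallerContext(String context) {
        CallerContext.setCurrent(new CallerContext.Builder(context).build());
    }

    private HadoopCallerContextSketch() {}
}
```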
Verifying this change
Please make sure both new and modified tests in this PR follow the conventions for tests defined in our code quality guide.
This change added tests and can be verified as follows:
- Added new unit tests in HadoopUtilsTest.java and TaskTest.java
- Tested on our YARN cluster
I rebuilt the project and tested the new jar on my cluster; it prints the correct caller context as expected.
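As a rough illustration of what such a unit test could verify, the sketch below checks that a caller context set on the current thread is visible via Hadoop's CallerContext API; the real tests in HadoopUtilsTest.java and TaskTest.java may look different:

```java
import org.apache.hadoop.ipc.CallerContext;
import org.junit.jupiter.api.Test;

import static org.assertj.core.api.Assertions.assertThat;

/** Illustrative test sketch; the actual HadoopUtilsTest in this PR may differ. */
class CallerContextSketchTest {

    @Test
    void callerContextIsVisibleOnCurrentThread() {
        // In the PR this would presumably go through HadoopUtils.setCallerContext;
        // here we call the Hadoop API directly to keep the sketch self-contained.
        CallerContext.setCurrent(new CallerContext.Builder("flink_task_exampleTask").build());

        // The thread-local context can be read back via CallerContext.getCurrent().
        assertThat(CallerContext.getCurrent().getContext()).isEqualTo("flink_task_exampleTask");
    }
}
```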
Does this pull request potentially affect one of the following parts:
- Dependencies (does it add or upgrade a dependency): (yes / no)
- The public API, i.e., is any changed class annotated with @Public(Evolving): (yes / no)
- The serializers: (yes / no / don't know)
- The runtime per-record code paths (performance sensitive): (yes / no / don't know)
- Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
- The S3 file system connector: (yes / no / don't know)
Documentation
- Does this pull request introduce a new feature? (yes / no)
- If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
CI report:
- e3075d7bf9d035b209d7ce84c161bebe37a5bd18 Azure: SUCCESS
Bot commands
The @flinkbot bot supports the following commands:
- @flinkbot run azure re-run the last Azure build
@xintongsong Hi, would you please help me check this issue?
@dmvk @fapaul @ferenc-csaky Hi, would you please help me check this issue?
@liangyu-1
Sorry for the late response. I was on business travel for the past 2 weeks.
I'm not entirely sure about this feature. Why do we need thread-level audit logs? Wouldn't it be good enough to launch Flink processes with different UGIs?
My biggest concern with this PR is that it invades the common modules (i.e., flink-core & flink-runtime) with Hadoop-specific logic. E.g., when generating the context string in startTaskThread(), it assumes that the TM is running on YARN and that a certain container naming convention applies. Such logic complicates the system with unnecessary dependencies and assumptions, which makes the code base hard to maintain.
I haven't looked into whether there's a better way to implement this, but for the current implementation, I'd be negative about merging it.
I agree with @xintongsong's main concern regarding the submitted code. We should definitely avoid adding implementation-specific logic to the general base components flink-core and flink-runtime.
This PR is being marked as stale since it has not had any activity in the last 90 days. If you would like to keep this PR alive, please leave a comment asking for a review. If the PR has merge conflicts, update it with the latest from the base branch.
If you are having difficulty finding a reviewer, please reach out to the community, contact details can be found here: https://flink.apache.org/what-is-flink/community/
If this PR is no longer valid or desired, please feel free to close it. If no activity occurs in the next 30 days, it will be automatically closed.
This PR has been closed since it has not had any activity in 120 days. If you feel like this was a mistake, or you would like to continue working on it, please feel free to re-open the PR and ask for a review.