core-dump-handler
Save core dumps from a Kubernetes Service or RedHat OpenShift to an S3 protocol compatible object store
Dynamic code analysis is a SUGGESTED requirement for OSSF Best Practices. This issue is open to evaluate the DAST tools listed here to see which ones are fit for purpose...
As part of the discussion in https://github.com/IBM/core-dump-handler/issues/44#issuecomment-1007283866, it was suggested that, on systems using systemd, the agent could open a socket directly to systemd-coredump via its socket (/run/systemd/coredump) and process...
This tool looks like exactly what I need for my AKS clusters, but unfortunately I'm not able to find any option to send the core dump to...
Following on from the discussion in https://github.com/IBM/core-dump-handler/discussions/61 Opening this issue to track this feature for release in 9.0.0
It would be great to get a demo of installing the tool and running the sample segfault app for people interested in using the project. Link to the video should...
It would be great if there were markdown files for each of the working cloud providers that contained documentation for setting the project up on the different k8s services and...
This MR fixes two issues seen in a production environment: * Large core dumps were corrupted, which can be fixed by using `io::copy` instead of...
Hi, I set filenameTemplate: "{uuid}-dump-{timestamp}-{hostname}-{exe_name}-{pid}-{signal}-{podname}-{namespace}", but I get a filename like this: "9a1fc79c-758c-4599-a22d-2e94444a3250-dump-1657867608-segfaulter-segfaulter-1-4-unknown-unknown.zip". How can I fix it?