Option to disable or redirect log messages
I am seeing logging messages sent to the terminal, much like those described in this mailing list post (a snippet of one example is below). Is there some option to turn them off or redirect them to a log file at run time? Thanks.
Job <38534717> is submitted to default queue <normal>.
E #7a96 [ 0.00] * fsd_exc_new(1006,Vector have no more elements.,0)
@jakirkham,
Please see the following IBM KB article, as well as our environment variable reference. You can also find other references by searching Google using the search string "BSUB_QUIET site:ibm.com".
Small KB article: http://www-01.ibm.com/support/docview.wss?uid=isg3T1015935
Environment variable reference: https://www.ibm.com/support/knowledgecenter/en/SSWRJV_10.1.0/lsf_config_ref/lsf_envars_ref.html
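A minimal sketch of using that environment variable from Python, assuming jobs are submitted through the drmaa-python bindings and that LSF reads BSUB_QUIET from the submitting process's environment (the command here is a placeholder):

```python
import os
import drmaa

# LSF suppresses the "Job <...> is submitted to ... queue <...>" message
# when BSUB_QUIET is set in the submitting process's environment
# (any value works; see the IBM environment variable reference above).
os.environ["BSUB_QUIET"] = "1"

with drmaa.Session() as session:
    jt = session.createJobTemplate()
    jt.remoteCommand = "/bin/sleep"  # placeholder command for illustration
    jt.args = ["10"]
    job_id = session.runJob(jt)
    session.deleteJobTemplate(jt)
```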
Thanks @adamsla.
So that does get rid of the "Job <38534717> is submitted to default queue <normal>." messages, though it does not get rid of the "E #7a96 [ 0.00] * fsd_exc_new(1006,Vector have no more elements.,0)" messages. Do you have any idea how to get rid of this second set of log messages?
You can redirect job output either to a file or to /dev/null, depending on what you want to do with that data. I suspect this is standard error. So, a few options:
-e /dev/null
-e /some/path/%J.e
Choose your poison.
Also, check the user manual. That might already be supported as a standard DRMAA option.
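This is indeed covered by the standard job template attributes in the Python drmaa bindings; a hedged sketch redirecting standard error to /dev/null via errorPath (the leading ":" is the DRMAA [host]:path convention, paths and command are illustrative):

```python
import drmaa

# Sketch: discard a job's stderr using the standard DRMAA errorPath attribute.
with drmaa.Session() as session:
    jt = session.createJobTemplate()
    jt.remoteCommand = "/bin/sleep"        # placeholder command
    jt.args = ["10"]
    jt.outputPath = ":/some/path/out.log"  # hypothetical stdout location
    jt.errorPath = ":/dev/null"            # discard stderr entirely
    job_id = session.runJob(jt)
    session.deleteJobTemplate(jt)
```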
LSF supports a variety of replacement variables:
%J - Job Id
%I - Job Index (for array jobs)
%H - Job Execution Host
%U - Job Execution User
You can use these variables programmatically in your standard output (-o), standard error (-e), and working directory (-cwd) paths.
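One way to use those replacement variables through DRMAA is to pass bsub-style flags in the job template's nativeSpecification; a rough sketch, assuming the LSF DRMAA library accepts -o/-e there (the log directory and command are hypothetical):

```python
import drmaa

# Sketch: route per-job stdout/stderr to files named after the LSF job id
# by passing bsub-style -o/-e flags through nativeSpecification.
with drmaa.Session() as session:
    jt = session.createJobTemplate()
    jt.remoteCommand = "/bin/sleep"  # placeholder command
    jt.args = ["10"]
    jt.nativeSpecification = "-o /logs/%J.out -e /logs/%J.err"  # hypothetical log dir
    job_id = session.runJob(jt)
    session.deleteJobTemplate(jt)
```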
So we are already dumping stdout and stderr to log files with DRMAA, but we are still seeing this logged outside of those files.
What does the deck look like? This is an end-of-list exception, so it's a DRMAA internal error, which leads me to believe there is something wrong with the way you have submitted the workload.
Sorry to be dense, but what do you mean by "deck"?
How are you submitting the jobs where this error is occurring?
There are a few layers of indirection, unfortunately. Jobs are started by a runBulkJobs call in a library called dask-drmaa, which calls into a Python DRMAA binding library that calls into libdrmaa roughly here. The c there is just a wrapper for calling C functions and is uninteresting for our purposes.
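To make that concrete, the call chain roughly boils down to a bulk submission like the following (a simplified sketch using the drmaa-python runBulkJobs API; the command, paths, and job count are placeholders, not what dask-drmaa actually submits):

```python
import drmaa

# Rough sketch of the underlying bulk submission dask-drmaa performs through
# the Python drmaa bindings; useful for reproducing the fsd_exc_new messages
# outside of dask-drmaa.
with drmaa.Session() as session:
    jt = session.createJobTemplate()
    jt.remoteCommand = "/bin/sleep"   # placeholder worker command
    jt.args = ["10"]
    jt.errorPath = ":/dev/null"       # stderr already goes to a file in our setup
    # Submit an array of 4 jobs: indices 1..4 with step 1.
    job_ids = session.runBulkJobs(jt, 1, 4, 1)
    session.deleteJobTemplate(jt)
    print(job_ids)
```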