Hide device info prints in Python
🐛 Bug / feature(?)
Whenever I run COMET from the Hugging Face `evaluate` library, I get the following output:

```
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
```
When running the evaluation iteratively, this breaks the tqdm progress bar. I tried swapping out the stderr buffer before the call and flushing it before restoring it, but the output still appears:

```python
import io
import sys

sys.stderr = io.StringIO()
comet_metric.compute(...)
sys.stderr.flush()
sys.stderr = sys.__stderr__
```
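A slightly more robust variant of this swap uses `contextlib.redirect_stderr`, which restores the stream even if the call raises. Note the caveat in the comments, which is likely why the manual swap above did not hide the device prints either (the `print` call below is just a stand-in for the COMET call):

```python
import contextlib
import io
import sys

# Capture everything written to sys.stderr while the block runs.
# Caveat: this only intercepts Python-level writes that look up sys.stderr
# at call time. Logging handlers that kept a reference to the original
# stream, and C extensions writing to file descriptor 2 directly, will
# still reach the terminal.
buffer = io.StringIO()
with contextlib.redirect_stderr(buffer):
    # stand-in for comet_metric.compute(...)
    print("GPU available: True, used: True", file=sys.stderr)

captured = buffer.getvalue()
```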
To Reproduce
Simply run the scoring from Python.
Expected behaviour / feature description
The ability to pass a `quiet` or `verbose=0` flag to the scoring function so that the device info is not printed.
Screenshots
```
39%|████████████████▋ | 14/36 [00:24<00:26, 1.19s/it]GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
42%|█████████████████▉ | 15/36 [00:25<00:23, 1.14s/it]GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
44%|███████████████████ | 16/36 [00:26<00:24, 1.21s/it]GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
47%|████████████████████▎ | 17/36 [00:28<00:25, 1.35s/it]GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
50%|█████████████████████▌ | 18/36 [00:29<00:24, 1.35s/it]GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
53%|██████████████████████▋ | 19/36 [00:31<00:24, 1.42s/it]GPU available: True, used: True
```
Both PyTorch Lightning and Transformers are very verbose.
I am adding a `--quiet` flag that sets all loggers to ERROR level. It will be included in the next release.
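For illustration, a minimal sketch of what such a flag could do (this is an assumption about the implementation, not COMET's actual code):

```python
import logging

def set_quiet() -> None:
    # Hypothetical helper: raise every registered logger to ERROR so the
    # INFO-level device prints from pytorch_lightning are dropped.
    # list() snapshots the registry before we touch it.
    for name in list(logging.root.manager.loggerDict):
        logging.getLogger(name).setLevel(logging.ERROR)
```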
@ricardorei thanks for solving this issue a while back. Is it possible to add it to the documentation, how to actually use this flag? I'm using version 2.0.0 from python and it's not obvious for me how to hide the device printouts.
It's in the README:

```
comet-score -s src.de -t hyp1.en -r ref.en --quiet --only_system
```
@ricardorei but the issue is about calling COMET from Python, which is what I'm doing as well, not the CLI.
You are right, it's currently only solved for the CLI interface. I'll reopen the issue, since the current behaviour is still not the expected one.
In the meantime, here is a snippet for running it in quiet mode and silencing the underlying libraries:
```python
import logging

loggers = [logging.getLogger(name) for name in logging.root.manager.loggerDict]
for logger in loggers:
    logger.setLevel(logging.WARNING)
```
Finally, you can run COMET with `progress_bar=False`:

```python
model_output = model.predict(data, batch_size=8, gpus=1, progress_bar=False)
```
I'm also getting a pytorch_lightning warning once I apply the code you suggested; everything else is gone:
```
PossibleUserWarning: `max_epochs` was not set. Setting it to 1000 epochs. To train without an epoch limit, set `max_epochs=-1`.
  rank_zero_warn(
```
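If that remaining notice is bothersome, Python's warnings filter can drop it. A hedged sketch that matches on the message text rather than on Lightning's `PossibleUserWarning` class, so it runs even without pytorch_lightning importable:

```python
import warnings

# The `message` argument is a regex matched against the start of the
# warning text. PossibleUserWarning subclasses UserWarning, so
# category=UserWarning matches it without importing pytorch_lightning.
warnings.filterwarnings(
    "ignore",
    message=r"`max_epochs` was not set",
    category=UserWarning,
)
```

Run this before calling `model.predict(...)`; unrelated warnings still surface normally.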