fanqiNO1

Results: 10 issues of fanqiNO1

```python
def corrplot(data, size_scale=500, marker='s'):
    corr = pd.melt(data.reset_index(), id_vars='index').replace(np.nan, 0)
    corr.columns = ['x', 'y', 'value']
    heatmap(
        corr['x'], corr['y'],
        color=corr['value'], color_range=[-1, 1],
        palette=sns.diverging_palette(20, 220, n=256),
        size=corr['value'].abs(), size_range=[0, 1],
        marker=marker,
        x_order=data.columns, y_order=data.columns[::-1],
        size_scale=size_scale
    )
```
...
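For context, `corrplot` is typically called on a correlation matrix. A minimal usage sketch, assuming the snippet's own `heatmap` helper is importable alongside `corrplot` and that the usual `pandas`/`numpy` imports are in scope (`df` is a hypothetical DataFrame, not from the original issue):

```python
import numpy as np
import pandas as pd

# Hypothetical example data; any numeric DataFrame works.
df = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD'))

# Plot the 4x4 correlation matrix with size-scaled, diverging-colored markers.
corrplot(df.corr())
```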

Competition_3v3snakes/rl_trainer/common.py Line 11 `device = torch.device("cuda:1") if torch.cuda.is_available() else torch.device("cpu")` — "cuda:1" should be replaced by "cuda:0", otherwise there is an error "CUDA error: invalid device ordinal", because "cuda:1" chooses the...
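One hedged sketch of how to avoid the invalid-ordinal error is to clamp the requested index to the number of visible GPUs instead of hard-coding `cuda:1`; the helper name below is illustrative and not from the repository:

```python
import torch

def select_device(preferred_index: int = 0) -> torch.device:
    """Return a CUDA device if one exists, clamping the index to the
    number of visible GPUs; otherwise fall back to CPU."""
    if torch.cuda.is_available():
        index = min(preferred_index, torch.cuda.device_count() - 1)
        return torch.device(f"cuda:{index}")
    return torch.device("cpu")

device = select_device()  # "cuda:0" on a single-GPU machine, "cpu" without CUDA
```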

### Motivation Quantization needs to load a calibration dataset, and the current default is to load it from hf, which can be troublesome for users who cannot reach hf. Could logic be added to load the relevant datasets from ModelScope? ### Related resources _No response_ ### Additional context A few rough ideas of mine: for the file https://github.com/InternLM/lmdeploy/blob/main/lmdeploy/lite/utils/calib_dataloader.py, taking the c4 dataset as an example, #L93-#L105...
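One possible shape for that idea, sketched under the assumption that c4 (or a comparable corpus) is mirrored on ModelScope; the dataset id `placeholder-namespace/c4` is a placeholder, and this is not lmdeploy's actual loader:

```python
def load_calib_dataset(use_modelscope: bool = False):
    """Sketch of a calibration-data loader with a ModelScope fallback."""
    if use_modelscope:
        # Assumption: the calibration corpus is mirrored on ModelScope
        # under this placeholder id.
        from modelscope.msdatasets import MsDataset
        return MsDataset.load('placeholder-namespace/c4', split='train')
    # Default path: load from the Hugging Face Hub as today.
    from datasets import load_dataset
    return load_dataset('allenai/c4', 'en', split='train', streaming=True)
```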

Thanks for your contribution and we appreciate it a lot. The following instructions would make your pull request more healthy and more easily get feedback. If you do not understand...

# Background Early stopping may use validation loss as its metric, but mmengine currently does not support calculating and parsing validation loss as a metric. However, due to the inconsistency...
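For illustration only, one shape such support could take is a custom metric that averages per-batch losses. The sketch below assumes the validation step is made to attach a scalar `loss` to each data sample, which stock mmengine does not do today; the resulting `val/loss` key could then be monitored by `EarlyStoppingHook`:

```python
from mmengine.evaluator import BaseMetric

class ValLoss(BaseMetric):
    """Average a per-sample 'loss' field over the validation set.

    Assumption: the model's validation step attaches a scalar `loss`
    to every data sample, which mmengine does not do out of the box.
    """
    default_prefix = 'val'

    def process(self, data_batch, data_samples):
        for sample in data_samples:
            self.results.append(float(sample['loss']))

    def compute_metrics(self, results):
        return {'loss': sum(results) / max(len(results), 1)}
```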

# Background As model inference requires more and more CUDA memory, we need a way to complete the inference process under a variety of CUDA memory budgets,...

If the LLM is too big to be loaded on a single GPU, we need `device_map = 'auto'` to avoid OOM. See issue #715.
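As a point of reference (not necessarily this PR's implementation), sharded loading with Hugging Face transformers/accelerate usually looks like the hedged sketch below; the checkpoint name and memory budgets are placeholders:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "internlm/internlm-chat-7b"  # placeholder checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# device_map='auto' lets accelerate shard the weights across available GPUs
# (and spill to CPU) instead of loading everything onto one device;
# max_memory optionally caps each device's budget.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    max_memory={0: "20GiB", "cpu": "48GiB"},
    trust_remote_code=True,
)
```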

Thanks for your contribution and we appreciate it a lot. The following instructions would make your pull request more healthy and more easily get feedback. If you do not understand...