Yingqiang Ge
The dataset columns are: race | sex | LSAT | UGPA | region_first | ZFYA | sander_index | first_pf. Specifically, what do UGPA, ZFYA, sander_index, and first_pf mean? Thanks!
https://stackoverflow.com/questions/50257614/tensorflow-eager-and-tensorboard-graphs This link seems to point at the problem. For now, I disable TensorFlow eager execution with tf.compat.v1.disable_eager_execution().
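For reference, a minimal sketch of that workaround in TF 2.x (only the disable call itself is from the report above; the rest is illustrative):

```python
import tensorflow as tf

# Fall back to TF1-style graph mode so graph tracing/TensorBoard graphs
# behave as before. Must be called before any ops or models are created.
tf.compat.v1.disable_eager_execution()

print(tf.executing_eagerly())  # False once eager execution is disabled
```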
I hit what may be a new error on Linux: TensorFlow AttributeError: module 'tensorflow_core._api.v2.train' has no attribute 'RMSPropOptimizer'. https://github.com/google/dopamine/issues/121 relates to this error, but unfortunately no one has answered there.
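For context, tf.train.RMSPropOptimizer is a TF 1.x API that no longer exists under tf.train in TF 2.x. A hedged sketch of the two usual replacements (the learning rate is a placeholder):

```python
import tensorflow as tf

# TF 2.x replacement: the Keras optimizer.
opt = tf.keras.optimizers.RMSprop(learning_rate=0.001)

# Or keep the TF 1.x optimizer interface through the compat layer.
opt_v1 = tf.compat.v1.train.RMSPropOptimizer(learning_rate=0.001)
```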
I am currently replacing GPT with Vicuna in my project. While Vicuna successfully generates the required action and action input, I am encountering...
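Not from the report above, but a common way to do such a swap with minimal code churn is to serve Vicuna behind an OpenAI-compatible endpoint. A hedged sketch assuming FastChat's API server and the pre-1.0 openai client (model name, port, and prompt are placeholders):

```python
# Assumed server, started separately (FastChat):
#   python -m fastchat.serve.openai_api_server --host 0.0.0.0 --port 8000
import openai  # openai<1.0 style client

openai.api_key = "EMPTY"                      # a local server ignores the key
openai.api_base = "http://localhost:8000/v1"  # point the client at Vicuna

resp = openai.ChatCompletion.create(
    model="vicuna-7b-v1.5",
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp["choices"][0]["message"]["content"])
```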
The instruction code for MPT-7B works fine with the older version (20240123), but after updating to the latest branch and using the new code, I always get an OOM error with multiple GPUs,...
Setting: AWS g5.48xlarge. This code worked fine with a single GPU and failed when trying to use more. I also increased --shm-size to 20G, which did not help. Can this problem...
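For reproducibility, a hedged sketch of the container launch implied above (the image name and mount are placeholders; only --shm-size=20g is from the report). Adding --ipc=host is a common alternative when NCCL runs out of shared memory on multi-GPU runs:

```bash
# g5.48xlarge exposes 8x A10G GPUs; make them all visible and enlarge /dev/shm.
docker run --rm -it --gpus all --shm-size=20g --ipc=host \
    -v "$(pwd)":/workspace my-llm-image:latest bash
```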
Having problems when using MPT. Setting: AWS g5.48xlarge, CUDA 12.1.0, Ubuntu 22.04, Python 3.10, PyTorch 2.1.2.

```
root@7f51eddb66f5:/TensorRT-LLM/examples/mpt# trtllm-build --checkpoint_dir=./ft_ckpts/mpt-7b/fp16 \
    --max_batch_size 32 \
    --max_input_len 1024 \
    --max_output_len 512 \
    ...
```
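For reference, a hedged sketch of the usual two-step MPT flow from the TensorRT-LLM examples, reusing the limits from the command above (flag names vary between releases, so treat this as illustrative rather than a verified invocation):

```bash
# Step 1: convert the Hugging Face checkpoint (add --tp_size N here for
# tensor-parallel multi-GPU engines).
python convert_checkpoint.py --model_dir mosaicml/mpt-7b \
    --output_dir ./ft_ckpts/mpt-7b/fp16 \
    --dtype float16

# Step 2: build the TensorRT engine from the converted checkpoint.
trtllm-build --checkpoint_dir ./ft_ckpts/mpt-7b/fp16 \
    --output_dir ./trt_engines/mpt-7b/fp16 \
    --gemm_plugin float16 \
    --max_batch_size 32 \
    --max_input_len 1024 \
    --max_output_len 512
```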
Ran the following code to quantize MPT-7B and met the following error.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "mosaicml/mpt-7b"
quant_path = './mpt_7b_awq'

# Load model
model = ...
```
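For comparison, a hedged sketch of a complete AutoAWQ run over the same paths (the quant_config values are AutoAWQ's commonly used defaults, not taken from the report above):

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "mosaicml/mpt-7b"
quant_path = "./mpt_7b_awq"
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# MPT ships custom modeling code, so trust_remote_code is required.
model = AutoAWQForCausalLM.from_pretrained(model_path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Quantize against AutoAWQ's built-in calibration data, then save.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```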