[BUG] Running the official example, the second round of chat fails with "python cannot pickle 'generator' object"
是否已有关于该错误的issue或讨论? | Is there an existing issue / discussion for this?
- [x] 我已经搜索过已有的issues和讨论 | I have searched the existing issues / discussions
该问题是否在FAQ中有解答? | Is there an existing answer for this in FAQ?
- [x] 我已经搜索过FAQ | I have searched FAQ
当前行为 | Current Behavior
The first round of chat produces the expected description, then the second round crashes:

```
The landform in the picture is karst topography. Karst landscapes are characterized by distinctive, jagged limestone hills or mountains with steep, irregular peaks and deep valleys—exactly what you see here! These formations often feature dramatic shapes like the "fairy chimneys" seen reflected in the water, which are typical of regions where soluble rocks (like limestone) have been eroded over millions of years.

This scene closely resembles famous karst areas such as Guilin-Li River in China, known for its iconic "karst peaks rising from a river." The smooth, reflective surface of the water enhances the mirror-like effect, making the mountains appear even more striking against the colorful sunset sky.

Traceback (most recent call last):
  File "/home/admin/software/bigModel/Test/load_openbmb.py", line 40, in
```
期望行为 | Expected Behavior
Round 2 should return the travel tips, e.g.: When traveling to a karst landscape like this, here are some important tips:
- Wear comfortable shoes: The terrain can be uneven and hilly.
- Bring water and snacks for energy during hikes or boat rides.
- Protect yourself from the sun with sunscreen, hats, and sunglasses—especially since you’ll likely spend time outdoors exploring scenic spots.
- Respect local customs and nature regulations by not littering or disturbing wildlife.
By following these guidelines, you'll have a safe and enjoyable trip while appreciating the stunning natural beauty of places such as Guilin’s karst mountains.
复现方法 | Steps To Reproduce
No response
运行环境 | Environment
- OS: Anolis OS
- Python: 3.11.0
- Transformers: 4.56.1
- PyTorch: 2.7.1+cu128
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 12.8
Package Version Editable project location
---------------------------------------- ---------------- ---------------------------------
accelerate 1.10.0
aiofiles 24.1.0
aiohappyeyeballs 2.6.1
aiohttp 3.12.15
aiosignal 1.4.0
airportsdata 20250811
albucore 0.0.24
albumentations 2.0.8
annotated-types 0.7.0
anthropic 0.64.0
antlr4-python3-runtime 4.9.3
anyio 4.10.0
ascii_colors 0.11.4
astor 0.8.1
asttokens 3.0.0
attrs 25.3.0
beautifulsoup4 4.13.5
blake3 1.0.5
blobfile 3.0.0
boto3 1.40.16
botocore 1.40.16
Brotli 1.1.0
build 1.3.0
cachetools 6.1.0
cbor2 5.7.0
certifi 2025.8.3
cffi 1.17.1
charset-normalizer 3.4.3
click 8.2.1
clip 1.0
cloudpickle 3.1.1
coloredlogs 15.0.1
colorlog 6.9.0
compressed-tensors 0.10.2
configparser 7.2.0
contourpy 1.3.3
cryptography 45.0.6
cuda-bindings 12.9.2
cuda-pathfinder 1.1.0
cuda-python 12.9.0
cupy-cuda12x 13.5.1
cycler 0.12.1
datasets 4.0.0
decorator 5.2.1
decord 0.6.0
Deprecated 1.2.18
depyf 0.19.0
dill 0.3.8
diskcache 5.6.3
distro 1.9.0
dnspython 2.7.0
doclayout_yolo 0.0.4
docling 2.48.0
docling-core 2.45.0
docling-ibm-models 3.9.0
docling-parse 4.2.3
docx2pdf 0.1.8
dotenv 0.9.9
easyocr 1.7.2
effdet 0.4.1
einops 0.8.1
email_validator 2.2.0
et_xmlfile 2.0.0
executing 2.2.0
fast-langdetect 0.2.5
fastapi 0.116.1
fastapi-cli 0.0.8
fastapi-cloud-cli 0.1.5
fastrlock 0.8.3
fasttext-predict 0.9.2.4
ffmpy 0.6.1
filelock 3.18.0
filetype 1.2.0
flashinfer-python 0.2.14.post1
flatbuffers 25.2.10
fonttools 4.59.1
frozenlist 1.7.0
fschat 0.2.36
fsspec 2025.3.0
ftfy 6.3.1
future 1.0.0
gguf 0.17.1
googleapis-common-protos 1.70.0
gradio 5.42.0
gradio_client 1.11.1
gradio_pdf 0.0.22
groovy 0.1.2
grpcio 1.74.0
h11 0.16.0
hf_transfer 0.1.9
hf-xet 1.1.7
httpcore 1.0.9
httptools 0.6.4
httpx 0.28.1
huggingface-hub 0.34.4
humanfriendly 10.0
idna 3.10
imageio 2.37.0
importlib_metadata 8.0.0
interegular 0.3.3
iopath 0.1.10
ipython 9.4.0
ipython_pygments_lexers 1.1.1
jedi 0.19.2
Jinja2 3.1.6
jiter 0.10.0
jmespath 1.0.1
joblib 1.5.1
json_repair 0.49.0
jsonlines 3.1.0
jsonref 1.1.0
jsonschema 4.25.0
jsonschema-specifications 2025.4.1
kiwisolver 1.4.9
lark 1.2.2
latex2mathml 3.78.0
layoutparser 0.3.4
lazy_loader 0.4
lightrag-hku 1.4.7 /home/admin/software/LightRAG
llguidance 0.7.30
llvmlite 0.44.0
lm-format-enforcer 0.10.12
loguru 0.7.3
lxml 5.4.0
magic-pdf 1.3.12
markdown-it-py 4.0.0
markdown2 2.5.4
marko 2.2.0
MarkupSafe 3.0.2
matplotlib 3.10.5
matplotlib-inline 0.1.7
mdurl 0.1.2
milvus-lite 2.5.1
mineru 2.1.11 /home/admin/software/MinerU
mistral_common 1.8.3
modelscope 1.29.1
mpire 2.10.2
mpmath 1.3.0
msgpack 1.1.1
msgspec 0.19.0
multidict 6.6.4
multiprocess 0.70.16
nano-vectordb 0.0.4.3
nest-asyncio 1.6.0
networkx 3.5
nh3 0.3.0
ninja 1.13.0
numba 0.61.2
numpy 2.2.6
nvidia-cublas-cu12 12.8.3.14
nvidia-cuda-cupti-cu12 12.8.57
nvidia-cuda-nvrtc-cu12 12.8.61
nvidia-cuda-runtime-cu12 12.8.57
nvidia-cudnn-cu12 9.7.1.26
nvidia-cudnn-frontend 1.14.0
nvidia-cufft-cu12 11.3.3.41
nvidia-cufile-cu12 1.13.0.11
nvidia-curand-cu12 10.3.9.55
nvidia-cusolver-cu12 11.7.2.55
nvidia-cusparse-cu12 12.5.7.53
nvidia-cusparselt-cu12 0.6.3
nvidia-ml-py 12.575.51
nvidia-nccl-cu12 2.26.2
nvidia-nvjitlink-cu12 12.8.61
nvidia-nvtx-cu12 12.8.55
ollama 0.5.3
omegaconf 2.3.0
onnxruntime 1.22.1
open_clip_torch 3.1.0
openai 1.99.1
openai-harmony 0.0.4
opencv-python 4.12.0.88
opencv-python-headless 4.12.0.88
openpyxl 3.1.5
opentelemetry-api 1.26.0
opentelemetry-exporter-otlp 1.26.0
opentelemetry-exporter-otlp-proto-common 1.26.0
opentelemetry-exporter-otlp-proto-grpc 1.26.0
opentelemetry-exporter-otlp-proto-http 1.26.0
opentelemetry-proto 1.26.0
opentelemetry-sdk 1.26.0
opentelemetry-semantic-conventions 0.47b0
opentelemetry-semantic-conventions-ai 0.4.12
orjson 3.11.2
outlines 0.1.11
outlines_core 0.2.10
packaging 25.0
pandas 2.3.1
parso 0.8.5
partial-json-parser 0.2.1.1.post6
pdf2image 1.17.0
pdfminer.six 20250506
pdfplumber 0.11.7
pdftext 0.6.3
peft 0.17.0
pexpect 4.9.0
pillow 11.3.0
pip 25.2
pipmaster 0.9.2
pluggy 1.6.0
portalocker 3.2.0
prometheus_client 0.22.1
prometheus-fastapi-instrumentator 7.1.0
prompt_toolkit 3.0.51
propcache 0.3.2
protobuf 6.32.0
psutil 7.0.0
ptyprocess 0.7.0
pure_eval 0.2.3
py-cpuinfo 9.0.0
pyarrow 21.0.0
pybase64 1.4.2
pyclipper 1.3.0.post6
pycocotools 2.0.10
pycountry 24.6.1
pycparser 2.22
pycryptodomex 3.23.0
pydantic 2.11.7
pydantic_core 2.33.2
pydantic-extra-types 2.10.5
pydantic-settings 2.10.1
pydub 0.25.1
Pygments 2.19.2
pylatexenc 2.10
pymilvus 2.6.0
PyMuPDF 1.24.14
pynvml 12.0.0
pyparsing 3.2.3
pypdf 6.0.0
pypdfium2 4.30.0
pyproject_hooks 1.2.0
pytesseract 0.3.13
python-bidi 0.6.6
python-dateutil 2.9.0.post0
python-docx 1.2.0
python-dotenv 1.1.1
python-json-logger 3.3.0
python-multipart 0.0.20
python-pptx 1.0.2
pytz 2025.2
pyuca 1.2
PyYAML 6.0.2
pyzmq 27.0.1
raganything 1.2.7 /home/admin/software/RAG-Anything
rapid-table 1.0.5
ray 2.48.0
referencing 0.36.2
regex 2025.7.34
reportlab 4.4.3
requests 2.32.4
rich 14.1.0
rich-toolkit 0.15.0
rignore 0.6.4
robust-downloader 0.0.2
rpds-py 0.27.0
rtree 1.4.1
ruff 0.12.9
s3transfer 0.13.1
safehttpx 0.1.6
safetensors 0.6.2
scikit-image 0.25.2
scikit-learn 1.7.1
scipy 1.16.1
seaborn 0.13.2
semantic-version 2.10.0
semchunk 2.2.2
sentence-transformers 5.1.0
sentencepiece 0.2.1
sentry-sdk 2.35.0
setproctitle 1.3.6
setuptools 80.9.0
sgl-kernel 0.3.5
sglang 0.5.1.post2
shapely 2.1.1
shellingham 1.5.4
shortuuid 1.0.13
simsimd 6.5.1
six 1.17.0
sniffio 1.3.1
soundfile 0.13.1
soupsieve 2.7
soxr 0.5.0.post1
stack-data 0.6.3
starlette 0.47.2
stringzilla 3.12.6
svgwrite 1.4.3
sympy 1.14.0
tabulate 0.9.0
tenacity 9.1.2
thop 0.1.1-2209072238
threadpoolctl 3.6.0
tifffile 2025.6.11
tiktoken 0.11.0
timm 1.0.16
tokenizers 0.22.0
tomlkit 0.13.3
torch 2.7.1+cu128
torch_memory_saver 0.0.8
torchao 0.9.0
torchaudio 2.7.1+cu128
torchvision 0.22.1+cu128
tqdm 4.67.1
traitlets 5.14.3
transformers 4.56.1
triton 3.3.1
typer 0.16.0
typing_extensions 4.14.1
typing-inspection 0.4.1
tzdata 2025.2
ujson 5.11.0
ultralytics 8.3.183
ultralytics-thop 2.0.16
urllib3 2.5.0
uv 0.8.13
uvicorn 0.35.0
uvloop 0.21.0
vllm 0.10.1.1
watchfiles 1.1.0
wavedrom 2.0.3.post3
wcwidth 0.2.13
websockets 15.0.1
wheel 0.45.1
wrapt 1.17.3
xformers 0.0.31
xgrammar 0.1.21
xlsxwriter 3.2.5
xxhash 3.5.0
yarl 1.20.1
zipp 3.23.0
备注 | Anything else?
No response
Could you share which model you used to run the example?
Running into the same issue, model is 'openbmb/MiniCPM-V-4_5'.
@tc-mb I used the following code with the MiniCPM-V-4_5 model and encountered the same error:

```
answer = model.chat(
  File "/root/.cache/huggingface/modules/transformers_modules/model/modeling_minicpmv.py", line 361, in chat
    copy_msgs = deepcopy(msgs)
  File "/opt/conda/envs/minicpm_v4_5/lib/python3.9/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/opt/conda/envs/minicpm_v4_5/lib/python3.9/copy.py", line 205, in _deepcopy_list
    append(deepcopy(a, memo))
  File "/opt/conda/envs/minicpm_v4_5/lib/python3.9/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/opt/conda/envs/minicpm_v4_5/lib/python3.9/copy.py", line 230, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/opt/conda/envs/minicpm_v4_5/lib/python3.9/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/opt/conda/envs/minicpm_v4_5/lib/python3.9/copy.py", line 205, in _deepcopy_list
    append(deepcopy(a, memo))
  File "/opt/conda/envs/minicpm_v4_5/lib/python3.9/copy.py", line 161, in deepcopy
    rv = reductor(4)
TypeError: cannot pickle 'generator' object
```
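The traceback ends inside `copy.deepcopy`, which is called on the message history at the top of `model.chat`. Generators cannot be pickled or deep-copied, so storing one in `msgs` is enough to reproduce the error in a few lines of plain Python (the `stream` helper below is a stand-in for the generator returned by streaming `model.chat`, not part of the model's API):

```python
import copy

def stream():
    # Stand-in for the generator that model.chat(..., stream=True) returns.
    yield "chunk"

# History entry containing a live generator, as in the failing second round.
msgs = [{"role": "assistant", "content": [stream()]}]

try:
    copy.deepcopy(msgs)
except TypeError as e:
    print(e)  # cannot pickle 'generator' object
```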
```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

torch.manual_seed(100)

model_path = 'xxx'
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModel.from_pretrained(model_path, trust_remote_code=True, torch_dtype=torch.bfloat16)
model = model.to(device=device)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)  # or openbmb/MiniCPM-o-2_6

image = Image.open('xxx').convert('RGB')

enable_thinking = False  # If enable_thinking=True, the thinking mode is enabled.
stream = True  # If stream=True, the answer is a generator, not a string.

# First round chat
question = "Could you please help me describe the content of the picture?"
msgs = [{'role': 'user', 'content': [image, question]}]

answer = model.chat(
    msgs=msgs,
    tokenizer=tokenizer,
    enable_thinking=enable_thinking,
    stream=True
)

generated_text = ""
for new_text in answer:
    generated_text += new_text
    print(new_text, flush=True, end='')

# Second round chat, pass history context of multi-turn conversation
# NOTE: `answer` is a generator here, so the deepcopy inside model.chat fails.
msgs.append({"role": "assistant", "content": [answer]})
msgs.append({"role": "user", "content": ["Where can I see such a scenery?"]})

answer = model.chat(
    msgs=msgs,
    tokenizer=tokenizer,
    stream=True
)

generated_text = ""
for new_text in answer:
    generated_text += new_text
    print(new_text, flush=True, end='')
```
@wu-yang-work @padmalcom @kada0720 We apologize for this oversight, and thank you very much for bringing it to our attention.
The root cause is that with streaming output, `answer` is a generator, not the generated text itself, so appending it to the message history makes the `deepcopy` inside the next `chat` call fail. The example reused the non-streaming code path without proper validation. I have fixed this issue.
Again, I apologize, and we will strengthen code validation in the future.