
Fix build under Windows when BUILD_SHARED_LIBS is enabled

Open · howard0su opened this issue 2 years ago · 1 comment

Fix the build error when BUILD_SHARED_LIBS is enabled. Export all symbols as a solution for now.

howard0su · Apr 21 '23 13:04
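For reference, a minimal sketch of the "export all symbols" approach at the CMake level, assuming the library targets are named ggml and llama as in the llama.cpp build; the actual change in this PR may differ:

```cmake
# Sketch only: enable MSVC symbol export for shared builds.
# Assumes targets named "ggml" and "llama"; not necessarily the PR's exact change.
if (BUILD_SHARED_LIBS AND MSVC)
    # Have CMake generate a .def file exporting every symbol found in the
    # object files, so examples like quantize can link against the DLLs.
    set_target_properties(ggml llama PROPERTIES WINDOWS_EXPORT_ALL_SYMBOLS ON)
endif()
```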

The build errors:

quantize.obj : error LNK2019: unresolved external symbol ggml_time_init referenced in function main [C:\GPT\llama.cpp\build\examples\quantize\quantize.vcxproj]
quantize.obj : error LNK2019: unresolved external symbol ggml_time_us referenced in function main [C:\GPT\llama.cpp\build\examples\quantize\quantize.vcxproj]
quantize.obj : error LNK2019: unresolved external symbol ggml_init referenced in function main [C:\GPT\llama.cpp\build\examples\quantize\quantize.vcxproj]
quantize.obj : error LNK2019: unresolved external symbol ggml_free referenced in function main [C:\GPT\llama.cpp\build\examples\quantize\quantize.vcxproj]
C:\GPT\llama.cpp\build\bin\Debug\quantize.exe : fatal error LNK1120: 4 unresolved externals [C:\GPT\llama.cpp\build\examples\quantize\quantize.vcxproj]
quantize-stats.obj : error LNK2019: unresolved external symbol "class std::vector<struct std::pair<class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,struct ggml_tensor *>,class std::allocator<struct std::pair<class std::basic_strin
g<char,struct std::char_traits<char>,class std::allocator<char> >,struct ggml_tensor *> > > & __cdecl llama_internal_get_tensor_map(struct llama_context *)" (?llama_internal_get_tensor_map@@YAAEAV?$vector@U?$pair@V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@
@std@@PEAUggml_tensor@@@std@@V?$allocator@U?$pair@V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@PEAUggml_tensor@@@std@@@2@@std@@PEAUllama_context@@@Z) referenced in function main [C:\GPT\llama.cpp\build\examples\quantize-stats\quantize-stats.vcxproj]
C:\GPT\llama.cpp\build\bin\Debug\quantize-stats.exe : fatal error LNK1120: 1 unresolved externals [C:\GPT\llama.cpp\build\examples\quantize-stats\quantize-stats.vcxproj]

howard0su · Apr 21 '23 13:04

Yes, exactly. I could write a .def file, but llama_internal_get_tensor_map (test-only) blocks me, since it is a C++ symbol with a mangled name.

howard0su · Apr 22 '23 12:04
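For comparison, the hand-written .def alternative mentioned above could be wired up roughly as below (llama.def is a hypothetical file name). It would cover C entry points like ggml_time_init easily, but the mangled name of the C++ function llama_internal_get_tensor_map, visible in the LNK2019 error above, would have to be listed by hand, which is what makes it impractical here:

```cmake
# Sketch of the .def-file alternative (hypothetical llama.def listing exports
# by name). CMake passes a .def source file to the MSVC linker as /DEF:.
if (BUILD_SHARED_LIBS AND MSVC)
    target_sources(llama PRIVATE llama.def)
endif()
```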