ling976

21 comments by ling976

Here is the full exception output:

20:22:27.842 [main] INFO ai.djl.pytorch.engine.PtEngine -- Number of inter-op threads is 10
20:22:27.842 [main] INFO ai.djl.pytorch.engine.PtEngine -- Number of intra-op threads is 12
20:22:27.850 [main] DEBUG ai.djl.pytorch.jni.JniUtils -- mapLocation: false...

The model is already in .pt format when training finishes, and the torch.jit.trace approach you mentioned does not work: I tried the example from your official website and it fails.

import torch
import torchvision
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('./checkpoints/reward_model/sentiment_analysis/model_best/')
model = torch.load('./checkpoints/reward_model/sentiment_analysis/model_best/model.pt')
model.eval()
example = torch.rand(1, 3, 224, 224)
traced_script_module = torch.jit.trace(model, example)

This is what I wrote, following your example, and it keeps failing. The error message is:

TypeError: RewardModel.forward() missing 1 required positional argument:...
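The TypeError above is general torch.jit.trace behaviour: trace calls model.forward(*example_inputs), so the example must supply one tensor per required forward() parameter. A minimal sketch with a hypothetical two-input module (ToyRewardModel stands in for the real RewardModel, whose definition is not shown in the thread):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for RewardModel: forward() requires two tensors,
# so a single image-shaped example cannot satisfy its signature.
class ToyRewardModel(nn.Module):
    def forward(self, input_ids, attention_mask):
        return (input_ids * attention_mask).sum(dim=-1, keepdim=True).float()

model = ToyRewardModel().eval()

# trace() invokes model.forward(*example_inputs); with only one example
# tensor, the second positional argument is missing and a TypeError is raised:
try:
    torch.jit.trace(model, torch.rand(1, 3, 224, 224))
except TypeError as err:
    print("trace failed:", err)

# Supplying one example tensor per forward() parameter, as a tuple, works:
input_ids = torch.zeros(1, 16, dtype=torch.long)
attention_mask = torch.ones(1, 16, dtype=torch.long)
traced = torch.jit.trace(model, (input_ids, attention_mask))
```

The same idea applies to the real model: the example inputs must match forward()'s parameter list and expected dtypes, not an arbitrary image tensor copied from a vision tutorial.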

Following your method, the model has now been converted. The final code is:

input_ids = torch.zeros(1, 16, dtype=torch.long).to(device)
attention_mask = torch.zeros(1, 16, dtype=torch.long).to(device)
token_type_ids = torch.zeros(1, 16, dtype=torch.long).to(device)
traced_model = torch.jit.trace(model, (input_ids, attention_mask, token_type_ids), strict=False)
traced_model.save("./model.pt")

But a new problem appears when calling it from Java: predictor.predict() throws an exception, and judging from the message the input and output definitions are wrong. The exception is:

12:14:07.905 [main] DEBUG ai.djl.mxnet.jna.LibUtils -- Loading mxnet library from: E:\Python\cache\mxnet\1.9.1-cu120mkl-win-x86_64\mxnet.dll
Exception...
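As a sanity check before moving to Java, the saved TorchScript file can be reloaded in Python and compared against the eager model; if the two disagree here, the Java side cannot be right either. A sketch with a hypothetical three-input module (ToyModel is not the real ERNIE model, just a stand-in with the same calling convention):

```python
import os
import tempfile

import torch
import torch.nn as nn

# Hypothetical three-input module mirroring the (input_ids, attention_mask,
# token_type_ids) calling convention used above.
class ToyModel(nn.Module):
    def forward(self, input_ids, attention_mask, token_type_ids):
        return (input_ids + token_type_ids).float() * attention_mask.float()

model = ToyModel().eval()
examples = tuple(torch.zeros(1, 16, dtype=torch.long) for _ in range(3))
traced = torch.jit.trace(model, examples, strict=False)

# Save and reload the TorchScript file, the same artifact DJL will load:
path = os.path.join(tempfile.mkdtemp(), "model.pt")
traced.save(path)
reloaded = torch.jit.load(path)

# The reloaded module should reproduce the eager model on fresh inputs:
fresh = tuple(torch.ones(1, 16, dtype=torch.long) for _ in range(3))
assert torch.equal(reloaded(*fresh), model(*fresh))
```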

I initialize with the following code:

Path modelPt = Paths.get("build/pytorch_models/ernie-3.0-base-zh/model.pt");
HuggingFaceTokenizer tokenizer = HuggingFaceTokenizer.newInstance(modelPt);
TextClassificationTranslator textClassificationTranslator = TextClassificationTranslator.builder(tokenizer).build();

Now I get a new exception:

Exception in thread "main" java.lang.RuntimeException: expected value at line 1 column 1
    at ai.djl.huggingface.tokenizers.jni.TokenizersLibrary.createTokenizerFromString(Native Method)
    at...

I do have the tokenizer.json file; how do I use it in Java?

Is there an example of this in the source code?

NDArray attention = ctx.getNDManager().create(encoding.getAttentionMask());
NDArray inputIds = ctx.getNDManager().create(encoding.getIds());
NDArray tokenTypes = ctx.getNDManager().create(encoding.getTypeIds());

How is the ctx here defined?

Finally got it working. The final code looks like this:

public class TextTranslator implements Translator {

    private HuggingFaceTokenizer tokenizer;

    TextTranslator(HuggingFaceTokenizer tokenizer) {
        this.tokenizer = tokenizer;
    }

    /** {@inheritDoc} */
    @Override
    public NDList processInput(TranslatorContext ctx, String input) {
        Encoding encoding =...

When I download a model and trace it with torch.jit.trace, how do I figure out what the second argument should be? For example, take the inference code for this model:

tokenizer = AutoTokenizer.from_pretrained("./outputs/model_files")
model_trained = AutoModelForSeq2SeqLM.from_pretrained("./outputs/model_files")  # ./v1/model_files
#tokenizer = AutoTokenizer.from_pretrained("mxmax/Chinese_Chat_T5_Base")
#model = AutoModelForSeq2SeqLM.from_pretrained("mxmax/Chinese_Chat_T5_Base")
device = 'cuda' if cuda.is_available() else 'cpu'
model_trained.to(device)

def preprocess(text):
    return text.replace("\n", "_")

def postprocess(text):
    return...
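One general way to answer this is to inspect the model's forward() signature: torch.jit.trace needs one example tensor per required parameter, passed as a tuple in the same order. A sketch with a hypothetical stand-in module (ToySeq2Seq is not the real T5 model):

```python
import inspect

import torch
import torch.nn as nn

# Hypothetical stand-in; a real seq2seq model has a much richer forward().
class ToySeq2Seq(nn.Module):
    def forward(self, input_ids, attention_mask):
        return input_ids.float().mean(dim=-1) * attention_mask.float().mean(dim=-1)

model = ToySeq2Seq().eval()

# The parameter names of forward() say what the trace examples must cover:
params = list(inspect.signature(model.forward).parameters)
print(params)  # -> ['input_ids', 'attention_mask']

# Build one example per parameter with the dtype/shape the model expects,
# then pass them as a tuple in the same order:
examples = (torch.zeros(1, 16, dtype=torch.long),
            torch.ones(1, 16, dtype=torch.long))
traced = torch.jit.trace(model, examples)
```

One caveat worth checking against the transformers documentation: generation with an encoder-decoder model such as T5 normally goes through model.generate(), which loops over many forward() calls, so tracing a single forward pass does not by itself give you a generate-style model.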