Ashwani-Dangwal
@sanprit Any luck using the fine-tuned model?
Hi @NielsRogge, I was wondering whether you made any changes to the model while uploading it to Hugging Face, as the results from using the Hugging Face model and...
@Prabhav55, @sanprit Can you please share what learning rate you used while fine-tuning and what the AP was after training? And also whether you made any more changes in...
@Prabhav55, thanks for the reply. Did you fine-tune the model or train it from scratch?
@Prabhav55 What I meant to ask was whether the model you trained was fine-tuned from the checkpoint provided by the author (pubtables1m_structure_detr_r18.pth) or did...
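In case it helps anyone following along, this is a minimal sketch of how that checkpoint can be inspected before fine-tuning; the `"model"` key wrapper is an assumption about how the file was saved and may differ.

```python
import torch

# Quick sanity check on the author-provided checkpoint before fine-tuning.
# The file name comes from this thread; the "model" wrapper key is an
# assumption and may differ depending on how the checkpoint was saved.
state = torch.load("pubtables1m_structure_detr_r18.pth", map_location="cpu")
if isinstance(state, dict) and "model" in state:
    state = state["model"]

print(len(state), "parameter tensors")
for name in list(state)[:5]:
    print(name, tuple(state[name].shape))
```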
@NielsRogge Thanks for the reply. I did use both the original model and the one on Hugging Face, and the outputs of both are as follows - Output...
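(For reference, this is roughly how the two sets of outputs can be compared numerically; the `.pt` file names below are hypothetical and just stand in for the logits dumped from each pipeline for the same cropped table image.)

```python
import torch

# Hypothetical file names: each .pt holds the raw class logits dumped from one
# pipeline (original repo vs. Hugging Face) for the same cropped table image.
logits_original = torch.load("logits_original_repo.pt")
logits_hf = torch.load("logits_huggingface.pt")

print(logits_original.shape, logits_hf.shape)

# Small numerical differences are expected (resize/normalize details), but the
# per-query argmax class should largely agree if the weights really match.
print(torch.allclose(logits_original, logits_hf, atol=1e-4))
print((logits_original.argmax(-1) == logits_hf.argmax(-1)).float().mean().item())
```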
@NielsRogge How do I print the logits of the Hugging Face model? Also, the post-processing steps for inference are the same for both models, which are taken from the...
> @Ashwani-Dangwal To get the logits using the HF model, it is in the output of the model, you can get it using
>
> `model(**encoding).logits`

Thank you.
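For anyone else landing here, this is roughly how I pull the logits out on the Hugging Face side; the Hub model id and the image file name below are assumptions, so swap in whichever ones you actually use.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

# Assumed model id: the structure recognition checkpoint on the Hub.
checkpoint = "microsoft/table-transformer-structure-recognition"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = TableTransformerForObjectDetection.from_pretrained(checkpoint)

image = Image.open("table_crop.png").convert("RGB")  # hypothetical input image
encoding = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**encoding)

print(outputs.logits.shape)  # (batch, num_queries, num_labels + 1)
print(outputs.logits)
```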
@NielsRogge Sorry about that, I deleted that post. Here are the logits of the Hugging Face model:

> tensor([[[-1.1852e+01, -5.1195e+00,  8.9091e+00, -7.7407e+00, -4.9734e+00,
>           -3.5293e+00,  1.1821e+00],
>          [-1.0989e+01, -6.1581e+00, ...
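For completeness, this is a sketch of one way to post-process those outputs with transformers' `post_process_object_detection`; it continues from the snippet above (`processor`, `model`, `outputs`, `image`), the threshold is just an example, and it may differ from the post-processing taken from the notebook mentioned earlier.

```python
import torch

# Convert logits + predicted boxes into labelled boxes in absolute pixel
# coordinates for the input image.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, threshold=0.7, target_sizes=target_sizes
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```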
@NielsRogge I have added you as a collaborator; you can check out the code for inference and visualization. Thank you.