multi-task-NLP

How to provide samples for answerability, and what is the output for that?

Open mayankpathaklumiq opened this issue 5 years ago • 2 comments

mayankpathaklumiq · Aug 11 '20 10:08

Hi @mayankpathaklumiq, thanks for writing. The concept of answerability is: given a query and a paragraph (context passage), determine whether that query can be answered from the given passage or not. Hence, the input is a query, a passage, and a label. The label is 0 (not answerable) or 1 (answerable). Correspondingly, when inferring with the trained model, the output produced will also be either 0 or 1. This is covered in the answerability example as well, which transforms the MSMARCO data for this task. After running the transformation part of that example, you can go to the data directory and look at the created data files - msmarco_answerability_dev.tsv (or train/test) - to see what the final input data looks like. P.S. There was a minor documentation error in the transformation part of the example, which has been corrected.
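
To illustrate the input described above, here is a minimal sketch of answerability samples written as a TSV. The rows, the `uid`/`label`/`query`/`passage` column order, and the file name are illustrative assumptions; the exact layout should be checked against the msmarco_answerability_*.tsv files produced by the transform in the example.

```python
import csv

# Hypothetical samples illustrating the answerability format:
# each row is a query, a passage, and a binary label
# (1 = the passage answers the query, 0 = it does not).
samples = [
    ("0", 1, "what is the boiling point of water",
     "Water boils at 100 degrees Celsius at sea level atmospheric pressure."),
    ("1", 0, "what is the boiling point of water",
     "The Pacific Ocean is the largest and deepest of Earth's oceans."),
]

# Write them as a tab-separated file, mirroring the structure of the
# generated msmarco_answerability_*.tsv files (column order here is an
# assumption; verify against the files created by the transform).
with open("answerability_sample.tsv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    for uid, label, query, passage in samples:
        writer.writerow([uid, label, query, passage])
```

At inference time, the trained model then returns a 0 or 1 for each query-passage pair, as noted above.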

saransh-mehta · Aug 12 '20 07:08

Thanks for your reply. Can you please mention the server requirements to train the model?

mayankpathaklumiq · Aug 12 '20 12:08