lack of ScheduledOptim

“ScheduledOptim” is not defined in modules. Is this content missing?
It seems that the code you are running is inconsistent with the Github repository.

Can you double-check?
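For context, `ScheduledOptim` in Transformer-style codebases is usually a thin wrapper that applies the warmup learning-rate schedule from "Attention Is All You Need" (lr rises for `n_warmup_steps`, then decays as `step^-0.5`). The sketch below shows that common pattern only; the class name, method names, and constructor arguments are assumptions and may differ from this repository's actual implementation.

```python
# Hedged sketch of a typical ScheduledOptim warmup wrapper.
# All names here are illustrative assumptions, not this repo's code.
class ScheduledOptim:
    """Wrap an optimizer and rescale its learning rate every step."""

    def __init__(self, optimizer, d_model, n_warmup_steps):
        self.optimizer = optimizer
        self.d_model = d_model
        self.n_warmup_steps = n_warmup_steps
        self.n_steps = 0

    def _lr_scale(self):
        # Transformer schedule: linear warmup, then inverse-sqrt decay.
        s, w = self.n_steps, self.n_warmup_steps
        return (self.d_model ** -0.5) * min(s ** -0.5, s * (w ** -1.5))

    def step_and_update_lr(self):
        self.n_steps += 1
        lr = self._lr_scale()
        for group in self.optimizer.param_groups:
            group['lr'] = lr
        self.optimizer.step()

    def zero_grad(self):
        self.optimizer.zero_grad()
```

A training loop would then call `sched.zero_grad()`, `loss.backward()`, and `sched.step_and_update_lr()` instead of touching the raw optimizer.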
I downloaded the latest version and the dataset from Google Drive. There are still some problems when I run train.py.
It seems that the data types do not match. I changed my environment to Python 3.6 and PyTorch 0.4.1 as in the README.md, but the problem is not solved. Can you share your current environment?
Can you print the size and content of `inputs` before the call `inputs = self.word_embedder(inputs)` in modules.py? Can you check whether they are word indices or sequence lengths?
The type of inputs is int32.
How about brutally casting the inputs to long, such as `inputs = inputs.long()`?
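The suggested cast works because `nn.Embedding` expects int64 (Long) indices and raises a type error on int32 ones. A minimal, self-contained illustration (the vocabulary size, shapes, and variable names are made up, not taken from the repo):

```python
import torch
import torch.nn as nn

# Illustrative embedding layer: 100-word vocab, 8-dim vectors.
word_embedder = nn.Embedding(num_embeddings=100, embedding_dim=8)

# int32 indices, as produced by the user's data pipeline.
inputs = torch.tensor([[1, 5, 7], [2, 0, 9]], dtype=torch.int32)

inputs = inputs.long()          # cast to int64 before the lookup
out = word_embedder(inputs)     # now succeeds
print(out.shape)                # torch.Size([2, 3, 8])
```

Without the `.long()` cast, the `word_embedder(inputs)` call fails with a dtype mismatch on int32 index tensors.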
Thanks for your help!
There are many places with this problem. I changed the data type from int32 to long at train.py line 114, and it works now.

But I don't know whether it will have an unpredictable effect on the results, because the source code you shared doesn't need the tensor type changed in your environment.
It should be OK. It might be caused by lines 53-54 in data_loader.py. Can you try changing the argument np.int to np.long?
I changed np.int to np.long, but the result is the same.

Dear author, I changed np.int to np.int64 and the problem was solved. It might have been caused by my NumPy version. When I train the model, I find that it needs nearly 22 hours per epoch, which is a lot of time. Is it normal for training to take this long? My GPU is a GTX 2060 with the default configuration in config.py, and GPU usage is not high during training.
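A plausible explanation for why `np.int64` worked where `np.int` and `np.long` did not: both of those names were merely aliases for Python's built-in `int`, which NumPy maps to the platform's C `long` (32-bit on Windows), so arrays could silently come out as int32; `np.int64` pins the width explicitly, and `torch.from_numpy` preserves it. (Both aliases have since been removed from recent NumPy releases.) A small demonstration under that assumption:

```python
import numpy as np
import torch

# Explicit 64-bit integer indices, regardless of platform.
arr = np.zeros(4, dtype=np.int64)

# torch.from_numpy shares the array's dtype with the tensor.
t = torch.from_numpy(arr)
print(t.dtype)  # torch.int64 -- the index dtype nn.Embedding expects
```

With `dtype=np.int32` instead, `t.dtype` would be `torch.int32` and the embedding lookup downstream would fail.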
Thanks for reporting this solution. Yes, it is normal for training an epoch to take that long.
My results are avg recall BLEU 28.139705, avg precision BLEU 28.139705, F1 28.139705. There seems to be something wrong with my training. Can you provide a trained model?