Rohit Saxena

12 issues by Rohit Saxena

```
[ 25%] Building CXX object CMakeFiles/crnn.dir/ctc.cpp.o
In file included from /home/user/torch/install/include/thpp/Tensor.h:14:0,
                 from /media/u556552/f5c1b8df-0e60-4c24-ad5f-2868c5732ecc/Advertisements/text-reco/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp:8:
/home/u556552/torch/install/include/thpp/Storage.h:22:43: fatal error: thpp/if/gen-cpp2/Tensor_types.h: No such file or directory
compilation terminated.
CMakeFiles/crnn.dir/build.make:86: recipe for target 'CMakeFiles/crnn.dir/ctc.cpp.o' failed
```
...

Can you share the data and model through any other portal that provides a direct download link?

Looking at the TensorFlow code, it seems to me that the TF version has only one BiLSTM encoder (only for words). Is that the correct implementation?
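For context, a single bidirectional encoder over word embeddings runs one recurrent pass forward and one backward, then concatenates the two hidden-state sequences. A minimal numpy sketch of that computation (a simplified tanh-RNN cell stands in for the LSTM; all names and shapes are illustrative, not taken from the repo):

```python
import numpy as np

def rnn_pass(x, Wx, Wh):
    """Run a simple tanh RNN over x of shape (T, d_in); return all hidden states (T, d_h)."""
    h = np.zeros(Wh.shape[0])
    states = []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ Wx + h @ Wh)
        states.append(h)
    return np.stack(states)

def bidirectional_encode(x, Wx, Wh):
    """One bidirectional encoder: forward states concatenated with
    (re-reversed) backward states, giving (T, 2 * d_h)."""
    fwd = rnn_pass(x, Wx, Wh)
    bwd = rnn_pass(x[::-1], Wx, Wh)[::-1]
    return np.concatenate([fwd, bwd], axis=-1)

rng = np.random.default_rng(0)
T, d_in, d_h = 5, 8, 4
x = rng.standard_normal((T, d_in))          # 5 word embeddings of dim 8
Wx = rng.standard_normal((d_in, d_h)) * 0.1
Wh = rng.standard_normal((d_h, d_h)) * 0.1
enc = bidirectional_encode(x, Wx, Wh)
print(enc.shape)                            # (5, 8): one 2*d_h vector per word
```

A second encoder "for sentences" would apply the same pattern again, one level up, over the pooled sentence vectors.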

Hi, I understand the clips cannot be shared due to copyright. Would it be possible to share just the _timestamp_ of the segment in the movie associated with each paragraph? Thanks...

I am using CUDA 8 and tried cuDNN 5.1 and 6, but was unable to compile https://github.com/vsubhashini/caffe/tree/recurrent/examples/s2vt. (I cannot create an issue there, so I am raising it here.) Also, can you elaborate on...

## 📚 Documentation

This is in reference to the tutorial page below: https://captum.ai/tutorials/Llama2_LLM_Attribution. I could not find an example of LLMGradientAttribution for LLAMA2. Any help on this will be appreciated....

I am trying to execute this:

```
python ClassifyWavGrayCORRECT.py evaluate ./files/ ./Emovo_Model/Emovo.caffemodel cnn 0 "" Emovo 250
```

and get:

```
RuntimeError: Could not open file Structures/Emotion_Gray_Emovo_deploy.prototxt
```

I am running sampling for the birds model (`python main.py --cfg cfg/eval_bird.yml --gpu 0`) and get: `RuntimeError: CUDA error: out of memory`

Model:

```
sequence_input = Input(shape=(MAX_SENT_LENGTH,), dtype='int32')
words = embedding_layer(sequence_input)
h_words = Bidirectional(GRU(200, return_sequences=True, dropout=0.2, recurrent_dropout=0.2))(words)
sentence = Attention()(h_words)  # with return true
# sentence = Dropout(0.2)(sentence)
sent_encoder = Model(sequence_input, sentence[0])
print(sent_encoder.summary())
document_input =...
```
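For reference, an attention layer configured to return its weights hands back a pair (pooled vector, attention weights), which is why the snippet above indexes `sentence[0]`. A plain numpy sketch of that pooling step (the scoring vector `v` here is illustrative, not the actual layer's parameters):

```python
import numpy as np

def attention_pool(h, v):
    """h: (T, d) hidden states, v: (d,) scoring vector.
    Returns (pooled, weights): the softmax-weighted sum over timesteps,
    plus the weights themselves."""
    scores = h @ v                       # (T,) one scalar score per timestep
    w = np.exp(scores - scores.max())    # numerically stable softmax
    w /= w.sum()
    return w @ h, w                      # pooled (d,), weights (T,)

rng = np.random.default_rng(0)
h = rng.standard_normal((6, 10))         # 6 words, 10-dim BiGRU states
v = rng.standard_normal(10)
pooled, weights = attention_pool(h, v)
print(pooled.shape, weights.shape)       # (10,) (6,)
```

Dropping the weights with `[0]` before wrapping the encoder in `Model(...)` keeps the sentence encoder's output a single tensor, which the document-level layers can then consume.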