menglin0320

Results: 16 comments of menglin0320

If you have the Chinese training data, you can try.

They use batch size 1, and a great number of the operations involving anchors are NumPy code wrapped in TensorFlow, so it's not expected to be very efficient to train. You can try...
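A minimal sketch of the pattern being described, assuming the anchor logic is plain NumPy wrapped with `tf.numpy_function` (the function name and sizes here are made up for illustration). The wrapped call runs in the Python interpreter on the CPU, so TensorFlow can't place it on the GPU or fuse it into the graph, which is part of why training stays slow:

```python
import numpy as np
import tensorflow as tf

def generate_anchors_np(feature_h, feature_w):
    # Hypothetical stand-in for the repo's NumPy anchor generation.
    ys, xs = np.meshgrid(np.arange(feature_h), np.arange(feature_w), indexing="ij")
    return np.stack([ys, xs, ys + 16, xs + 16], axis=-1).astype(np.float32)

@tf.function
def anchors_for(feature_map):
    h = tf.shape(feature_map)[1]
    w = tf.shape(feature_map)[2]
    # Wrapping NumPy instead of rewriting it as TF ops keeps the call
    # on the CPU inside the Python interpreter.
    return tf.numpy_function(generate_anchors_np, [h, w], tf.float32)
```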

In addition, are you guys going to release a trained model without the GMM? We want to try the model on math formulas.

There is a difference between sparsity of **parameters** and sparsity of the **representation**. The sparse autoencoder proposed by Andrew Ng is able to learn a sparse **representation**, and it is well known...
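To make the distinction concrete, here is a hedged PyTorch sketch (my own illustration, not code from this project): Ng's sparse autoencoder penalizes the hidden activations so the learned *representation* is sparse, while an L1 penalty on the weights gives sparse *parameters*. Layer sizes and coefficients below are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Linear(784, 64)
decoder = nn.Linear(64, 784)

x = torch.rand(32, 784)            # toy batch
h = torch.sigmoid(encoder(x))      # hidden representation in (0, 1)
recon = F.mse_loss(decoder(h), x)

# Sparsity of the REPRESENTATION (Ng-style): a KL penalty pushing each
# hidden unit's mean activation toward a small target rho.
rho, eps = 0.05, 1e-8
rho_hat = h.mean(dim=0)
kl = (rho * torch.log(rho / (rho_hat + eps))
      + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat + eps))).sum()

# Sparsity of the PARAMETERS: an L1 penalty on the weights themselves,
# which zeroes out connections but says nothing about how many units fire.
l1_weights = sum(p.abs().sum() for p in encoder.parameters())

loss = recon + 3.0 * kl + 1e-5 * l1_weights
loss.backward()
```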

I just learned about the GIL. So we have to use multiprocessing, and I can only dive into MPI to solve my problem if I want to stick with Python?
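For what it's worth, a minimal sketch of the non-MPI route: the standard-library `multiprocessing` pool sidesteps the GIL by running workers in separate interpreter processes (the work function below is just a placeholder):

```python
from multiprocessing import Pool

def cpu_bound_work(n):
    # Placeholder for the actual CPU-bound task.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # Each task runs in its own process, so one worker's GIL
        # does not block the others.
        results = pool.map(cpu_bound_work, [10**6] * 8)
        print(results[:2])
```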

Yes, I also saw it. I may switch to Polygraphy instead. I don't know much about CUDA wrappers, and I chose PyCUDA only because the official TensorRT example used it. But...

The NVIDIA guys told me that TensorRT inference releases the GIL. That's good news; if the new feature gets added, it could be useful in this case.
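As a hedged illustration of why a GIL-releasing call matters: if the native inference call drops the GIL, plain Python threads can overlap it. The `trt_infer` below is a made-up stand-in that uses `time.sleep` (which also releases the GIL) in place of a real TensorRT execution:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def trt_infer(batch):
    # Hypothetical stand-in for a TensorRT execution call; time.sleep
    # releases the GIL the way a native, GIL-releasing call would.
    time.sleep(0.5)
    return batch

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as ex:
    outputs = list(ex.map(trt_infer, range(4)))
print(f"4 calls took {time.perf_counter() - start:.2f}s")  # ~0.5 s, not ~2 s
```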

Okay, just one quick question as well: I found that PyCUDA is a lot quicker than Polygraphy when doing a memcpy. Do you know the reason?
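I can only guess from here, but one common factor in host-to-device copy speed is whether the host buffer is page-locked. A minimal PyCUDA timing sketch (assuming PyCUDA and a CUDA device are available; the buffer size is arbitrary):

```python
import time
import numpy as np
import pycuda.autoinit          # creates a CUDA context
import pycuda.driver as cuda

nbytes = 64 * 1024 * 1024
pageable = np.ones(nbytes, dtype=np.uint8)
pinned = cuda.pagelocked_empty(nbytes, dtype=np.uint8)
pinned[:] = 1
d_buf = cuda.mem_alloc(nbytes)

def time_copy(host):
    # memcpy_htod is synchronous, so wall-clock timing is meaningful.
    start = time.perf_counter()
    cuda.memcpy_htod(d_buf, host)
    return time.perf_counter() - start

print("pageable host buffer:", time_copy(pageable))
print("pinned host buffer:  ", time_copy(pinned))
```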

OK, I'll try to read the source code myself...

What do you mean when you say 'duplicate information being passed into the final layers'?