captainst

Results 18 comments of captainst

Yes, it can. Both the normal Keras model and the TensorRT model work on the Jetson Nano. The only drawback is that the TensorRT model takes a really long time to load on the Jetson Nano.

The .pb file created by Object Detection API 2 is in the SavedModel format, which works in eager mode. I don't think that you can load it using the conventional method. For...

> > The .pb file created using object detection api 2 is a saved_model format, which enables eager mode. I don't think that you can load it using the conventional...

I think I found the problem. The bitmap is encoded as BGR, while I was retrieving the values as RGB. So the correct code snippet is: ``` matrix[0, i, j, 0]...
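For reference, the fix described above (reading BGR data as if it were RGB) comes down to reversing the channel axis. A minimal NumPy sketch, where the array name and shape are assumptions since the original snippet is truncated:

```python
import numpy as np

# Hypothetical 4D input tensor of shape (batch, height, width, channels),
# filled with dummy BGR data: the blue channel sits at index 0.
frame_bgr = np.zeros((1, 2, 2, 3), dtype=np.uint8)
frame_bgr[..., 0] = 255  # blue channel, BGR ordering

# Reverse the last (channel) axis to convert BGR -> RGB.
frame_rgb = frame_bgr[..., ::-1]

print(frame_rgb[0, 0, 0])  # blue value now sits at channel index 2
```

The slice `[..., ::-1]` returns a view, so no pixel data is copied.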

I think that the SavedModel format is mainly used for TF-Serving. For ordinary inference, you might need to convert the Keras H5 file to a .pb file, then load the pb...
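A rough sketch of that conversion in TF2, freezing a Keras model's variables into a single GraphDef. The file names and the tiny stand-in model are assumptions; in practice you would load your own file with `tf.keras.models.load_model("model.h5")`:

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2,
)

# Stand-in for a model loaded from an H5 file, e.g.:
#   model = tf.keras.models.load_model("model.h5")
model = tf.keras.Sequential(
    [tf.keras.Input(shape=(8,)), tf.keras.layers.Dense(4)]
)

# Wrap the model in a concrete function, then fold its variables
# into graph constants so the whole model lives in one graph.
concrete = tf.function(lambda x: model(x)).get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype)
)
frozen = convert_variables_to_constants_v2(concrete)

# Serialize the frozen GraphDef to a binary .pb file.
tf.io.write_graph(frozen.graph, ".", "frozen_model.pb", as_text=False)
```

The resulting `frozen_model.pb` can then be loaded as a plain `GraphDef`, which is what older, graph-mode loading code expects.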

Just came across this issue, and this is how I made it work: OS: Win10, GPU: GeForce 1050 Ti, tensorflow-gpu: 1.10.0, Python: 3.6 (Anaconda), CUDA: 9.0, cuDNN: 7.1.4, Visual Studio: 2013...

It means that you can build the graph and train the model in TensorFlow or Keras, then export the trained model to a .pb file, and then use TensorFlowSharp to load this...

Good point and good luck with your project.

@azamiftikhar1000 I have tried to insert one example into the prompt to make it "one-shot" (note that I removed "Thought" at the end of the prompt) :D ``` Answer the following...
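To illustrate the idea (the wording below is an assumption, not the truncated prompt from the comment): a one-shot ReAct-style prompt embeds one worked example before the real question, and deliberately stops without a trailing "Thought:" so the model generates the reasoning itself:

```python
# Hypothetical worked example embedded in the prompt to make it "one-shot".
EXAMPLE = """Question: What is 2 + 2?
Thought: I should use the calculator tool.
Action: calculator
Action Input: 2 + 2
Observation: 4
Thought: I now know the final answer.
Final Answer: 4"""

def build_one_shot_prompt(question: str) -> str:
    # The prompt ends right after the new question, with no trailing
    # "Thought:" line, matching the removal described in the comment.
    return (
        "Answer the following questions as best you can.\n\n"
        f"Here is an example:\n{EXAMPLE}\n\n"
        f"Question: {question}\n"
    )

prompt = build_one_shot_prompt("What is the capital of France?")
print(prompt)
```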

The potential cause is that your parser is expecting the Action and Action Input lines, something like ``` Action: the action to take, should be xxx Action Input: xxxx ``` While...
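A minimal sketch of such a parser (the regex and function name are assumptions, not LangChain's actual implementation) showing why output that lacks these two lines fails to parse:

```python
import re

# Hypothetical parser: pull the "Action" and "Action Input" lines out of
# a ReAct-style LLM output.
ACTION_RE = re.compile(
    r"Action:\s*(?P<action>.+?)\s*\n+Action Input:\s*(?P<input>.+)",
    re.DOTALL,
)

def parse_action(llm_output: str):
    match = ACTION_RE.search(llm_output)
    if match is None:
        # Mirrors the failure mode in the comment: the parser raises
        # when the output does not follow the expected format.
        raise ValueError(f"Could not parse LLM output: {llm_output!r}")
    return match.group("action").strip(), match.group("input").strip()

action, action_input = parse_action(
    "Thought: I need to search.\nAction: search\nAction Input: jetson nano"
)
print(action, action_input)
```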