Results: 18 comments by you

I also encountered the same problem. I manually set the layers flagged in the warning back to FP32, and the warning disappeared. However, the accuracy of the result is still degraded.

> @YouSenRong Setting those layers back to FP32 just solves the subnormal value issue, but FP16 indeed has less accuracy than FP32 due to fewer mantissa bits. If you set...


> @YouSenRong Can you give your command for how to set the layers to FP32?

trtexec supports the command-line flags `--layerPrecisions` and `--layerOutputTypes` ([A.2.1.4. Commonly Used Command-line Flags](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#trtexec-flags)) to set layer precision. You...
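As a sketch of how those flags fit together (the model file `model.onnx` and the layer name `Conv_12` are placeholders, not taken from this thread):

```shell
# Build in FP16 mode, but force one problematic layer back to FP32.
# --precisionConstraints=obey makes the builder honor the per-layer settings.
trtexec --onnx=model.onnx --fp16 \
        --precisionConstraints=obey \
        --layerPrecisions=Conv_12:fp32 \
        --layerOutputTypes=Conv_12:fp32
```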

I also encountered this problem: I set the layer_precision and layer_output_type to kFLOAT for all layers that allow it under FP16 mode, but some inference results are still...

@lix19937 Thanks for your reply. Finally, I found that the problem was due to FP16 overflow in some inputs. After we fixed the overflow, the accuracy of result...
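The overflow and subnormal behavior discussed above can be reproduced without TensorRT: Python's `struct` module supports the IEEE 754 half-precision format (`'e'`), so a value can be round-tripped through FP16 storage. A minimal sketch:

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a Python float through IEEE 754 half precision.

    struct raises OverflowError when the value exceeds FP16's finite
    range (max normal is 65504), which we map to +/-inf here.
    """
    try:
        return struct.unpack('<e', struct.pack('<e', x))[0]
    except OverflowError:
        return float('inf') if x > 0 else float('-inf')

# Overflow: anything above ~65504 cannot be represented in FP16.
print(to_fp16(70000.0))        # inf
# Subnormal flush: tiny values round to 0.0 (min subnormal is ~5.96e-8).
print(to_fp16(1e-8))           # 0.0
# Precision loss: FP16 has only 10 mantissa bits.
print(to_fp16(1.0 + 2**-11))   # 1.0 (the 2**-11 term is rounded away)
```

This illustrates why keeping only the flagged layers in FP32 fixes the subnormal warning but not an accuracy drop caused by inputs that overflow the FP16 range.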

When I compile the master branch of folly with Clang (9.0.1), I hit a problem as below. ``` /folly/folly/chrono/Hardware.h:94:16: error: expected '(' after 'asm' asm volatile inline( ```...

Hi, will this PR be merged, or is there anything else that should be done before it can be merged?

OK, thanks! @Orvid

Thanks for your reply! @zerollzeng

> How do you set the layer precision?

I set the precision by calling setPrecision on a layer, as ![image](https://github.com/NVIDIA/TensorRT/assets/21186538/85eacda4-08dc-4c45-8ae0-bf45757eff69)

> did you set...
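For reference, the per-layer constraint described above can be sketched with TensorRT's Python API (a sketch only: it assumes a `network` and builder `config` already exist, and it will not run without the TensorRT bindings installed):

```python
import tensorrt as trt  # assumes TensorRT's Python bindings are installed

def force_fp32(network, config):
    """Pin every layer (and its outputs) to FP32 even when FP16 mode is on."""
    config.set_flag(trt.BuilderFlag.FP16)
    # Without OBEY_PRECISION_CONSTRAINTS, the builder may treat the
    # per-layer precision settings as hints and silently ignore them.
    config.set_flag(trt.BuilderFlag.OBEY_PRECISION_CONSTRAINTS)
    for i in range(network.num_layers):
        layer = network.get_layer(i)
        layer.precision = trt.float32
        for j in range(layer.num_outputs):
            layer.set_output_type(j, trt.float32)
```

Setting the output type as well as the layer precision matters: the precision controls the compute type, while the output type controls the tensor format handed to the next layer.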