DBNet
This is a TensorFlow 2.x implementation of "Real-time Scene Text Detection with Differentiable Binarization"
During training, the following error is reported from time to time: 2020-12-06 16:33:17.421937: W tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Cancelled: Operation was cancelled, as well as: libpng warning: iCCP: known incorrect sRGB profile libpng warning: iCCP: cHRM chunk does not...
dataset
Hello, I tested the pretrained model you released and the results are very good. I'd like to train it myself. wenmuzhou's detection repo provides a dozen or so datasets — did you use all of them? Many of them contain curved text or Arabic script, so I'd like to know which datasets you actually trained on in order to reproduce the training. Also, does training default to the CPU? How do I specify a particular GPU? I'm not very familiar with the TF 2.0 API.
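On the GPU question: a common way to pin a TF2 process to one GPU is to restrict the visible devices before TensorFlow initializes. A minimal sketch, assuming the target device is index 0 (the index is illustrative):

```python
import os

# Restrict this process to GPU 0 by setting CUDA_VISIBLE_DEVICES
# *before* TensorFlow is imported; an empty string would force CPU.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# Equivalently, after `import tensorflow as tf` you can call
#   gpus = tf.config.list_physical_devices("GPU")
#   tf.config.set_visible_devices(gpus[0], "GPU")
# to hide the other GPUs from the runtime.
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

Either route must run before any op touches a device, since TF fixes its device list at initialization.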
I tried modifying DBNet to support multi-GPU training, but found in practice that multiple GPUs only work when the data is read through tf.data. Have you made any attempts along these lines?
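For context, TF2's standard multi-GPU route is `tf.distribute.MirroredStrategy` wrapped around a `tf.data` pipeline: the pipeline is batched with the *global* batch size, and the strategy shards each batch across replicas. A sketch of that batch-size arithmetic with illustrative numbers (nothing here is taken from this repo's config):

```python
import math

# Hypothetical sizes for illustration. With MirroredStrategy,
# strategy.num_replicas_in_sync reports the GPU count and the
# tf.data pipeline is batched with the global batch size.
num_replicas = 2          # e.g. strategy.num_replicas_in_sync
per_replica_batch = 8
global_batch = per_replica_batch * num_replicas   # what .batch() receives

num_samples = 1000
steps_per_epoch = math.ceil(num_samples / global_batch)
print(global_batch, steps_per_epoch)
```

This is also why plain Python generators are awkward for multi-GPU: the strategy needs to shard and prefetch the input, which `tf.data` (or `strategy.distribute_datasets_from_function`) provides.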
In a batch norm layer, setting layer.trainable = False freezes the layer, i.e. its internal state will not change during training: its trainable weights will not be updated during fit()...
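The subtlety is that BatchNormalization also carries non-trainable state (the moving mean and variance), and since TF 2.0 a frozen BN layer stops updating that state as well. A toy sketch of just the update rule — not the Keras implementation, only an illustration of the freezing behaviour:

```python
class ToyBatchNormState:
    """Mimics how a frozen BN layer stops updating its moving statistics."""

    def __init__(self, momentum=0.5):
        self.momentum = momentum
        self.moving_mean = 0.0
        self.trainable = True

    def observe_batch(self, batch_mean):
        # Keras updates the moving statistics only when the layer is
        # trainable and called in training mode; a frozen layer keeps
        # its stored statistics untouched (inference behaviour).
        if self.trainable:
            self.moving_mean = (self.momentum * self.moving_mean
                                + (1 - self.momentum) * batch_mean)

bn = ToyBatchNormState()
bn.observe_batch(10.0)    # moving_mean moves toward the batch mean
bn.trainable = False
bn.observe_batch(100.0)   # frozen: moving_mean is left unchanged
print(bn.moving_mean)
```

This matches the fine-tuning recipe in the Keras docs: freezing a BN layer makes it run in inference mode, so its statistics no longer drift toward the new data.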
Can this project achieve the same accuracy and speed as the official implementation?
Hello, in train.py I see `model.fit( x=train_generator, steps_per_epoch=cfg.STEPS_PER_EPOCH, initial_epoch=cfg.INITIAL_EPOCH, epochs=cfg.EPOCHS, verbose=1, callbacks=callbacks, validation_data=val_generator, validation_steps=cfg.VALIDATION_STEPS )`. The fit call does not specify batch_size, so is STEPS_PER_EPOCH still the raw size of the dataset, not divided by batch_size? But the generator does specify a batch_size, meaning the dataset is split into batches of that size. So during training, could the same sample be trained on more than once within a single epoch?
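For reference, when `x` is a generator Keras ignores `batch_size` and treats `steps_per_epoch` as the number of *batches* per epoch, so it should be the sample count divided by the generator's batch size. If it were set to the raw sample count instead, one "epoch" would pull batch_size times more batches than a single pass over the data, so samples would indeed repeat within an epoch. A quick check with illustrative numbers:

```python
import math

num_samples = 120
batch_size = 16

# Correct setting: one epoch corresponds to one pass over the data.
steps_per_epoch = math.ceil(num_samples / batch_size)

# If STEPS_PER_EPOCH were set to num_samples instead, an "epoch"
# would consume num_samples batches, i.e. this many samples:
samples_seen = num_samples * batch_size   # ~batch_size passes over the data
print(steps_per_epoch, samples_seen)
```

Whether this repo's cfg.STEPS_PER_EPOCH already accounts for the batch size has to be checked in its config; the arithmetic above only shows the two interpretations.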