
ValueError: A KerasTensor is symbolic: it's a placeholder for a shape an a dtype. It doesn't have any actual numerical value. You cannot convert it to a NumPy array.

accioharshita opened this issue 1 year ago · 8 comments

Hey, so I've downloaded the preprocessing & encoder layers of BERT in order to build a simple email classification model. When I finally build the model and pass in the training data, it throws this error. Can someone tell me what's wrong?

(screenshots of the model-building code and the error traceback were attached)
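For context, the usual TF Hub setup that triggers this error under Keras 3 looks roughly like the sketch below. The screenshots aren't included here, so the exact code and the preset handles are assumptions based on the standard TF Hub BERT text-classification example:

import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # registers the custom ops the preprocessing model needs

# Standard TF Hub handles (assumed; the screenshots may use different ones).
preprocess_handle = "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3"
encoder_handle = "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4"

text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name="text")
preprocessing_layer = hub.KerasLayer(preprocess_handle, name="preprocessing")
encoder_inputs = preprocessing_layer(text_input)  # under Keras 3 this call raises the KerasTensor ValueError
encoder = hub.KerasLayer(encoder_handle, trainable=True, name="BERT_encoder")
outputs = encoder(encoder_inputs)
pooled_output = outputs["pooled_output"]
classifier = tf.keras.layers.Dense(1, activation="sigmoid")(pooled_output)
model = tf.keras.Model(text_input, classifier)

With Keras 3 (bundled with TensorFlow 2.16+), hub.KerasLayer cannot consume the symbolic KerasTensor produced by tf.keras.layers.Input, which appears to be what raises the ValueError above; that is consistent with the downgrade to TF 2.15 working, as noted later in this thread.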

accioharshita commented on Mar 26, 2024

I have the same problem. It was working fine when loading the model from the TF Hub URL, but with the model downloaded locally it crashes for some reason.

LeviBen commented on Mar 31, 2024

@accioharshita Hi, I was having the same problem. In fact, I was using the exact same code as you. I managed to solve my problem by importing BERT through the keras_nlp library. Here is the code I ended up with:

import tensorflow as tf
import keras_nlp

text_input = tf.keras.layers.Input(shape=(), dtype=tf.string)
preprocessor = keras_nlp.models.BertPreprocessor.from_preset("bert_base_en_uncased", trainable=True)
encoder_inputs = preprocessor(text_input)
encoder = keras_nlp.models.BertBackbone.from_preset("bert_base_en_uncased")
outputs = encoder(encoder_inputs)
pooled_output = outputs["pooled_output"]      # [batch_size, 768].
sequence_output = outputs["sequence_output"]  # [batch_size, seq_length, 768].
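To turn that into an actual classifier, one would typically add a head on top of pooled_output, roughly like this (a hypothetical completion, not part of the original comment):

dropout = tf.keras.layers.Dropout(0.1)(pooled_output)
prediction = tf.keras.layers.Dense(1, activation="sigmoid", name="classifier")(dropout)
model = tf.keras.Model(inputs=text_input, outputs=prediction)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])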

SDSCodes22 commented on Apr 6, 2024

@SoumyaCodes2020 can you please let me know how you saved the model with this approach? I'm using model3.save("model3.keras") and model3 = keras.models.load_model("model3.keras"), but I get this error:

No vocabulary has been set for WordPieceTokenizer. Make sure to pass a `vocabulary` argument when creating the layer.
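One workaround worth trying (an assumption on my part, not something confirmed in this thread) is to save only the weights and rebuild the architecture from the preset when loading, so the WordPieceTokenizer vocabulary never has to be serialized into the .keras file:

# Hypothetical workaround: persist weights only and rebuild the graph on load.
model3.save_weights("model3.weights.h5")

# Later / in a fresh session: recreate the same architecture (e.g. the
# keras_nlp snippet above wrapped in a build_model() helper), then:
model3 = build_model()                    # hypothetical helper, not a library API
model3.load_weights("model3.weights.h5")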

BismaAyaz commented on Apr 17, 2024

My problem disappeared when I installed tensorflow 2.15.1 and tensorflow-text 2.15.0, instead of the newest 2.16.0. tensorflow-hub version was 0.16.1. (my comment below does not apply to this solution)

@SoumyaCodes2020 how many trainable params do you have in the model if you use that approach (when you call model.summary())? It seems that this approach leaves the BERT layer parameters trainable (even with trainable=False). I had to solve it by looping through all layers in the encoder and setting them as non-trainable: for layer in encoder.layers: layer.trainable = False
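Spelled out, the freezing workaround from that comment looks like this (assuming encoder is the BertBackbone from the snippet above and model is the final Keras model built on top of it):

# Freeze every sub-layer of the BERT backbone so its weights are not updated.
for layer in encoder.layers:
    layer.trainable = False

# model.summary() should now report the backbone parameters as non-trainable.
model.summary()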

leinonk1 commented on Jun 1, 2024

The keras_nlp approach that @SDSCodes22 posted above worked for me too.

arc2226 commented on Jun 11, 2024

This is not an appropriate solution to the problem.

AlbertoMQ commented on Aug 2, 2024