
Cannot import 'HFTransformerBackend' from 'bertopic.backend'

Open esettouf opened this issue 3 years ago • 3 comments

Hi, I am trying to use a custom model that I did additional training on and uploaded to Hugging Face, but I am running into problems. Right now I'm using the code snippet from here to set the custom model for the BERTopic object, but I get the following error when importing the package:

ImportError: cannot import name 'HFTransformerBackend' from 'bertopic.backend' (/usr/local/lib/python3.7/dist-packages/bertopic/backend/__init__.py)

The BERTopic library itself installed fine. Is that class no longer included in the package?

Thanks a lot!

esettouf avatar Aug 07 '22 15:08 esettouf

My bad! The documentation is misleading in this case. To do this correctly, you should run the following instead:

from bertopic import BERTopic
from transformers.pipelines import pipeline

# Wrap the (custom) Hugging Face model in a feature-extraction pipeline
hf_model = pipeline("feature-extraction", model="distilbert-base-cased")

# Pass the pipeline directly as the embedding backend
topic_model = BERTopic(embedding_model=hf_model)

MaartenGr avatar Aug 07 '22 16:08 MaartenGr

Thanks for your quick response! Using that model works now, although the results aren't very satisfying yet. I guess I still have some work to do on my model. And thank you for the great tool and ongoing support!

esettouf avatar Aug 08 '22 06:08 esettouf

No problem, glad I can be of help 😄

MaartenGr avatar Aug 08 '22 07:08 MaartenGr

I have a question regarding the HF feature-extraction pipeline and execution time.

I run the following code, but it takes 4 hours. Should I make sure that the pipeline returns just the CLS embedding rather than the embeddings of all the words?

from bertopic import BERTopic
from sklearn.feature_extraction.text import CountVectorizer
from transformers.pipelines import pipeline
from umap import UMAP

vectorizer_model = CountVectorizer(stop_words="english", ngram_range=(1, 4), min_df=10, max_df=0.95)
umap_model = UMAP(n_neighbors=350, n_components=36, min_dist=0.0, metric="cosine")
embedder = pipeline("feature-extraction", model="microsoft/mpnet-base")
topic_model = BERTopic(
    vectorizer_model=vectorizer_model,
    embedding_model=embedder,
    umap_model=umap_model,
    verbose=True,
    nr_topics="auto",
)

With the similar code below, the embedding part only took 5 minutes.

from bertopic import BERTopic
from sklearn.feature_extraction.text import CountVectorizer
from umap import UMAP

vectorizer_model = CountVectorizer(stop_words="english", ngram_range=(1, 4), min_df=10, max_df=0.95)
umap_model = UMAP(n_neighbors=350, n_components=36, min_dist=0.0, metric="cosine")
topic_model = BERTopic(
    vectorizer_model=vectorizer_model,
    embedding_model="all-mpnet-base-v2",
    umap_model=umap_model,
    verbose=True,
    nr_topics="auto",
)
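As context for the CLS-vs-word-embeddings question above: a "feature-extraction" pipeline returns one vector per token, so something still has to reduce those to a single document embedding, e.g. by taking only the [CLS] token or by mean-pooling over all tokens. A minimal NumPy sketch with dummy numbers (shapes and values are illustrative, not output from an actual model):

```python
import numpy as np

# Dummy pipeline output for one document: (num_tokens, hidden_dim).
# A real "feature-extraction" pipeline returns nested lists of this shape
# per input string.
token_embeddings = np.array([
    [1.0, 2.0, 3.0],  # [CLS] token
    [4.0, 5.0, 6.0],  # first word piece
    [7.0, 8.0, 9.0],  # second word piece
])

# Option 1: use only the [CLS] token as the document embedding
cls_embedding = token_embeddings[0]

# Option 2: mean-pool over all tokens
mean_embedding = token_embeddings.mean(axis=0)

print(cls_embedding)   # [1. 2. 3.]
print(mean_embedding)  # [4. 5. 6.]
```

Either reduction is cheap compared to the model's forward pass itself, so pooling choice alone is unlikely to explain an hours-long gap.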

rjac-ml avatar Aug 16 '22 20:08 rjac-ml

@rjac-ml The reason for the difference in speed is that you are using two different embedding models. microsoft/mpnet-base is a different model from sentence-transformers/all-mpnet-base-v2, so it is expected that their speeds differ.

If you were to use the same model, the speeds would be much more comparable. Do note that SentenceTransformers is optimized for this use case and is likely to be faster than a plain Hugging Face pipeline.

MaartenGr avatar Aug 17 '22 09:08 MaartenGr

hey @MaartenGr thanks for the response, understood.

rjac-ml avatar Aug 17 '22 14:08 rjac-ml