About Coherence of topic models
I am currently calculating the coherence of a BERTopic model using gensim. For this I need the n-grams from each text in the corpus. Is that possible? The gensim function expects the corpus and the topics, and the topics must consist of tokens that exist in the corpus.
cm = CoherenceModel(topics=topics, corpus=corpus, dictionary=dictionary, coherence='u_mass')
Thanks in advance.
I believe you should be using the CountVectorizer for creating the corresponding corpus and dictionary when creating the CoherenceModel.
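For illustration, a minimal sketch of that idea (docs is a placeholder for your list of documents; the key point is that the same CountVectorizer feeds both BERTopic and the gensim objects):
from sklearn.feature_extraction.text import CountVectorizer
import gensim.corpora as corpora

# Sketch only: docs is a hypothetical list of raw documents
cv = CountVectorizer()
cv.fit(docs)

# Tokenize with the same analyzer the vectorizer uses internally
analyzer = cv.build_analyzer()
tokens = [analyzer(doc) for doc in docs]

# Build the gensim dictionary and bag-of-words corpus from those tokens
dictionary = corpora.Dictionary(tokens)
corpus = [dictionary.doc2bow(token) for token in tokens]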
@MaartenGr thanks a lot for your attention. I am trying this, but I found a phrase in the topics set that doesn't exist in the dictionary. Is that ok? Should all the topics exist in the n-grams?
The code I used is this:
import nltk
nltk.download('punkt')
import numpy as np
from gensim import corpora
from gensim.models.coherencemodel import CoherenceModel
from sklearn.feature_extraction.text import CountVectorizer

cv = CountVectorizer(ngram_range=(2, 20))  # (2, 20) is the same range as the topics
cv_fit = cv.fit_transform(comentariosList)

texts = []
for i in range(len(comentariosList)):
    temp = np.array(cv.inverse_transform(cv_fit.getrow(i))).tolist()
    texts = texts + temp

topics = topics_df['Keywords'].values.tolist()

cm = CoherenceModel(topics=topics, corpus=corpus, dictionary=dictionary, coherence='u_mass')
cm.get_coherence_per_topic()
Thanks for your help.
You should focus on what you put into the corpus and dictionary variables as the topics are checked against those two. At the moment, I cannot see how you have constructed them but I would advise you to look into those.
Do you have any recommendations for working with this n_gram_range parameter?
topic_model = BERTopic(verbose=True, embedding_model=embedder, n_gram_range=(1, 3), calculate_probabilities=True)
I believe it is best to make sure that the CountVectorizer in BERTopic is the same as the one you used to create the dictionary, corpus, and tokens.
You could also try accessing the CountVectorizer directly in BERTopic through model.vectorizer_model. That way, you do not have to create separate instances that might not match exactly.
If this still does not work let me know!
I would suggest that instead of creating n-grams of the corpus, you simply split the n-grams of the topics and flatten them into a list of single words (unigrams), so that you can compute gensim coherence (e.g., NPMI) scores without having to create the n-grams of the text.
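For illustration, a minimal sketch of that flattening step (it assumes the top words per topic come from BERTopic's get_topic and may contain n-grams; the resulting flatten_unigrams is a list of unigram lists, one per topic):
# Top words per topic, possibly containing n-grams such as "topic model"
topic_ngrams = [[word for word, _ in topic_model.get_topic(topic_id)]
                for topic_id in topic_model.get_topics() if topic_id != -1]

# Split every n-gram into its unigrams and flatten each topic's word list
flatten_unigrams = [[unigram for ngram in topic for unigram in ngram.split()]
                    for topic in topic_ngrams]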
First of all, thank you for your attention.
When I try to use the vectorizer_model from BERTopic, I get this error:
corpus = ['This is the first document.',
          'This document is the second document.',
          'And this is the third one.',
          'Is this the first document?']
cv = topic_model.vectorizer_model()
X = cv.fit_transform(corpus)
TypeError: 'CountVectorizer' object is not callable
Hi Amine-OMI, thank you for your tips. Do you have an example of computing gensim coherence (NPMI) scores?
Thanks a lot for your attention.
You should access the vectorizer model like this: cv = topic_model.vectorizer_model. Since it is already fitted you can use something like cv.get_feature_names() and tokenizer = cv.build_tokenizer() to get the words and tokenizer used for constructing the dictionary and corpus.
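Putting that together, a small sketch of how the fitted vectorizer could be reused (docs is assumed to be the same list of documents BERTopic was trained on):
import gensim.corpora as corpora

cv = topic_model.vectorizer_model      # already fitted, so no parentheses
words = cv.get_feature_names()         # vocabulary BERTopic used
tokenizer = cv.build_tokenizer()

# Tokenize with the vectorizer's own tokenizer rather than a plain split
tokens = [tokenizer(doc) for doc in docs]
dictionary = corpora.Dictionary(tokens)
corpus = [dictionary.doc2bow(token) for token in tokens]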
Hey! Use it as such:
cv = topic_model.vectorizer_model
X = cv.fit_transform(docs)
doc_tokens = [text.split(" ") for text in docs]
import gensim.corpora as corpora
id2word = corpora.Dictionary(doc_tokens)
texts = doc_tokens
corpus = [id2word.doc2bow(text) for text in texts]
topic_words = []
for i in range(len(topic_model.get_topic_freq()) - 1):
    interim = [t[0] for t in topic_model.get_topic(i)]
    topic_words.append(interim)
from gensim.models.coherencemodel import CoherenceModel
coherence_model = CoherenceModel(topics=topic_words, texts=texts, corpus=corpus, dictionary=id2word, coherence='c_v')
coherence_model.get_coherence()
Hey, sorry for the late reply, here's the process if you're still working on it:
Once you have extracted the topics from the corpus, you may have bigrams in the list of top words of each topic, so you need to split them and flatten the list to get a list of unigrams at the end.
After that, you can use Gensim topic coherence as described in this link
And you can use one of the following coherence measures: {'u_mass', 'c_v', 'c_uci', 'c_npmi'}.
from gensim.models.coherencemodel import CoherenceModel
from gensim.corpora.dictionary import Dictionary
# Create the dictionary from the tokenized input corpus
# (corpus is a list of token lists; flatten_unigrams is the flattened list of top words per topic)
id2word = Dictionary(corpus)
npmi = CoherenceModel(texts=corpus, dictionary=id2word,
                      topics=flatten_unigrams, coherence='c_v')
print(npmi.get_coherence())
I hope this helps you
The following steps should be the correct ones for calculating the coherence scores. Some additional preprocessing is necessary since only a very small part of that happens in BERTopic. Also, make sure to build the tokens with the exact same tokenizer as used in BERTopic.
I do want to stress that metrics such as c_v and c_npmi are merely proxies for a topic model's performance. They are by no means a ground truth and can have significant issues (e.g., sensitive to the number of words in a topic). So whether you find a low or high score, I would advise you to look at the topics yourself and see if they make sense to you.
import gensim.corpora as corpora
from gensim.models.coherencemodel import CoherenceModel
# Preprocess documents
cleaned_docs = topic_model._preprocess_text(docs)
# Extract vectorizer and tokenizer from BERTopic
vectorizer = topic_model.vectorizer_model
tokenizer = vectorizer.build_tokenizer()
# Extract features for Topic Coherence evaluation
words = vectorizer.get_feature_names()
tokens = [tokenizer(doc) for doc in cleaned_docs]
dictionary = corpora.Dictionary(tokens)
corpus = [dictionary.doc2bow(token) for token in tokens]
topic_words = [[words for words, _ in topic_model.get_topic(topic)]
for topic in range(len(set(topics))-1)]
# Evaluate
coherence_model = CoherenceModel(topics=topic_words,
texts=tokens,
corpus=corpus,
dictionary=dictionary,
coherence='c_v')
coherence = coherence_model.get_coherence()
Hello MaartenGr, I tried to execute this, but the problem is the tokenizer. My BERTopic model produced topics with n-grams from 1 to 10, while the tokenizer here produces tokens with only one term (1-gram). When I set n_gram_range=(1, 1), like this: topic_model = BERTopic(verbose=True, embedding_model=embedder, n_gram_range=(1, 1), calculate_probabilities=True), I do get the coherence values, which in this case were 0.1725 for c_v, -0.2662 for c_npmi, and -8.5744 for u_mass.
Good catch, I did not test for higher n-grams in the example. I made two changes:
- Used build_analyzer() instead of build_tokenizer(), which allows for n-gram tokenization
- Preprocessing is now based on a collection of documents per topic, since the CountVectorizer was trained on that data
Tested it with several ranges of n-grams and it seems to work now.
from bertopic import BERTopic
import pandas as pd
import gensim.corpora as corpora
from gensim.models.coherencemodel import CoherenceModel
topic_model = BERTopic(verbose=True, n_gram_range=(1, 3))
topics, _ = topic_model.fit_transform(docs)
# Preprocess Documents
documents = pd.DataFrame({"Document": docs,
"ID": range(len(docs)),
"Topic": topics})
documents_per_topic = documents.groupby(['Topic'], as_index=False).agg({'Document': ' '.join})
cleaned_docs = topic_model._preprocess_text(documents_per_topic.Document.values)
# Extract vectorizer and analyzer from BERTopic
vectorizer = topic_model.vectorizer_model
analyzer = vectorizer.build_analyzer()
# Extract features for Topic Coherence evaluation
words = vectorizer.get_feature_names()
tokens = [analyzer(doc) for doc in cleaned_docs]
dictionary = corpora.Dictionary(tokens)
corpus = [dictionary.doc2bow(token) for token in tokens]
topic_words = [[words for words, _ in topic_model.get_topic(topic)]
for topic in range(len(set(topics))-1)]
# Evaluate
coherence_model = CoherenceModel(topics=topic_words,
texts=tokens,
corpus=corpus,
dictionary=dictionary,
coherence='c_v')
coherence = coherence_model.get_coherence()
Great! Thanks a lot!
Hi Maarten, thanks for the code for calculating the coherence score. I am wondering which parameters I can tune using the coherence score. I tried min_topic_size = 10, 7, and 5, and it seems the coherence score increases as min_topic_size decreases. But it doesn't make sense to me to reduce min_topic_size further.
Does the coherence score always increase as min_topic_size is reduced (the number of topics seems to increase)? And what other parameters would you recommend tuning for a small dataset (about 1000 sentences)?
@YuanyuanLi96 In general, I would not advise you to use this coherence score to fine-tune BERTopic. These metrics are merely proxies for a topic model's performance. They are by no means a ground truth and can have significant issues (e.g., sensitivity to the number of words in a topic). So whether you find a low or high score, I would advise you to look at the topics yourself and see if they make sense to you.
Having said that, by reducing min_topic_size the total number of topics increases, which simply leads to more information depending on the coherence metric used.
When it comes to tuning a small dataset, I would focus on keeping a logical min_topic_size of at least 20 since topics should contain sufficient documents. Moreover, with 1000 sentences, you can question whether a topic modeling technique is actually necessary.
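To make the effect of min_topic_size on the number of topics concrete, here is a quick sketch (it only counts topics; it is not meant as a coherence grid search):
from bertopic import BERTopic

# Sketch only: docs is a placeholder for your documents
for size in (20, 10, 5):
    model = BERTopic(min_topic_size=size)
    topics, _ = model.fit_transform(docs)
    n_topics = len(set(topics)) - (1 if -1 in topics else 0)  # exclude the outlier topic
    print(f"min_topic_size={size}: {n_topics} topics")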
@MaartenGr Thanks for your explanation and suggestion! I tried min_topic_size = 20, and I get 16 mostly interpretable topics for my data. So I will go with this, since it performs better than other models and reduces our labor in the long term. Thanks for this amazing package!
Hi @MaartenGr, regarding the conversation here and your reply to YuanyuanLi96: currently the only measurements I have found for evaluating a topic model are coherence (u_mass, NPMI, etc.) and perplexity scores, which both have their downsides, besides human judgement, which, like you said, is to "look at the topics yourself and see if they make sense to you". Is there any other measurement you would suggest?
In short: if I have an LDA model and a BERTopic model trained on the same data and apply the same number of topics to both, how would I know which is more accurate?
@TomNachman There are a few things that are important here.
What is the definition of "accurate"? Is it topic coherence? The quality (density or separation) of clusters? Predictive power? The distribution of topics? Defining accuracy or quality first is important in knowing whether one topic model is better than another. Which metric is best highly depends on your use case, but in the literature NPMI is mostly used together with topic diversity. These metrics are typically used to evaluate the coherence and diversity of topic modeling techniques.
Moreover, I am often very hesitant when it comes to recommending a coherence metric to use. You can quickly overfit on such a metric when tuning the parameters of BERTopic (or any other topic modeling technique) which in practice might result in poor performance. In other words, I want to prevent users from solely focusing on grid-searching parameters and motivate users to look at the results.
Having said that, that does not mean that these metrics cannot be used! They are extremely useful in the right circumstances. So when you want to compare topic models, definitely use these kinds of metrics (e.g., npmi) but make sure the circumstances make sense. For example, they need to have the same number of topics and the same number of words need to be in those topics. If you were to change how the data were to be preprocessed, are you then objectively evaluating the difference in performance between topic modeling techniques?
I want to end with a great package for evaluating your topic model, namely OCTIS. It has many evaluation measures implemented aside from the standard coherence metrics, such as topic diversity, similarity, and classification metrics. I would advise choosing an evaluation metric there that best suits your use case.
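As a rough sketch of what that could look like with OCTIS (the module paths and the model_output format follow OCTIS's conventions; topic_words and tokens are assumed to be built as in the snippets above, with at least topk words per topic):
from octis.evaluation_metrics.diversity_metrics import TopicDiversity
from octis.evaluation_metrics.coherence_metrics import Coherence

# OCTIS expects a dict with the top words per topic
model_output = {"topics": topic_words}

diversity = TopicDiversity(topk=10)                         # fraction of unique words across topics
npmi = Coherence(texts=tokens, topk=10, measure="c_npmi")   # NPMI coherence over the tokenized docs

print("Topic diversity:", diversity.score(model_output))
print("NPMI coherence:", npmi.score(model_output))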
Hello Maarten,
I tried to execute the coherence code above, but it just gave me the following error:
raise ValueError('unable to interpret topic as either a list of tokens or a list of ids')
ValueError: unable to interpret topic as either a list of tokens or a list of ids
I was tuning the hyperparameters top_n_words and min_topic_size, and I basically use the above code as a function to evaluate my topic model's quality. It seems that the code does not work for a certain pair of values of the two parameters (in my case, top_n_words = 5 and min_topic_size = 28), while it managed to provide the coherence score for the rest of the pairs.
It's even more peculiar because I had executed the same thing the other day and there was no issue. The only difference here is that I used a different set of data, although both were preprocessed similarly and had an identical structure.
It might be worthwhile to check the differences in output between the output variables for your two sets of data (e.g., topic_words, corpus, etc.). If all parameters are the same but the only thing you changed is the data, then there might be something happening with the results that you get from training on that data. So checking things like the topics and their representation might help you understand what is happening there. For example, it might be the case that you have too few topics generated for it to calculate the coherence.
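For example, a few quick sanity checks along those lines, assuming the variables from the earlier snippets (topic_words, dictionary, corpus):
# Sketch only: inspect the inputs that CoherenceModel will receive
print("Number of topics:", len(topic_words))
print("Words per topic:", sorted({len(words) for words in topic_words}))
print("Topics containing empty strings:", sum("" in words for words in topic_words))
print("Dictionary size:", len(dictionary), "| documents in corpus:", len(corpus))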
Good afternoon Maarten,
Thank you very much for pulling this together. I recognise that the coherence score isn't necessarily the best option for determining accuracy, but it's a useful proxy to consider. Having taken a brief look at the code, I've noticed that:
words = vectorizer.get_feature_names()
isn't referred to elsewhere in the code. Can this line be omitted, or does it serve a further purpose?
Thanks in advance, H
@hwrightson You are completely right! It is definitely a useful proxy to consider when validating your model. NPMI, for example, has shown promise in emulating human performance (1). A topic coherence score in conjunction with visual checks definitely prevents issues later on.
Good catch, I might have used it for something else whilst testing out calculating coherence scores. So yes, you can omit that line!
@MaartenGr I've been delving into model evaluation and, at your suggestion, am using OCTIS. In my first set of experiments I compared the OCTIS metrics for topic diversity, inverted rbo, and npmi coherence. The results I got for inverted rbo seem promising, the others noisy. As you've clearly explained the choice of metric is highly dependent on the use case. I've begun looking for resources for more information on topic model evaluation metrics and am wondering if you have any suggestions? Two papers I found helpful were A review of topic modeling methods and Measuring LDA topic stability from clusters of replicated runs. As you know OCTIS contains over twenty different metrics. Some I'm familiar with, but most not. As far as I can tell they don't provide references for their implementations. Thanks as always in advance!
P.S. Of course right after writing this I remembered that I hadn't gone back to the paper the OCTIS people wrote OCTIS: Comparing and Optimizing Topic models is Simple!!. So anything you suggest that is not referenced there would be super.
@drob-xx Great to hear that you have been working with OCTIS! You might have already seen it, but aside from in the paper itself, some of the references to the evaluation metrics can be found here.
The field of evaluation metrics is a tricky one, there are many different use cases for topic modeling techniques, and topic modeling, by nature, is a subjective method that is often reflected in the evaluation metrics. Over the last years, there have been several papers describing the pros and cons of these metrics:
@inproceedings{lau2014machine,
title={Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality},
author={Lau, Jey Han and Newman, David and Baldwin, Timothy},
booktitle={Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics},
pages={530--539},
year={2014}
}
@inproceedings{mimno2011optimizing,
title={Optimizing semantic coherence in topic models},
author={Mimno, David and Wallach, Hanna and Talley, Edmund and Leenders, Miriam and McCallum, Andrew},
booktitle={Proceedings of the 2011 conference on empirical methods in natural language processing},
pages={262--272},
year={2011}
}
@inproceedings{roder2015exploring,
title={Exploring the space of topic coherence measures},
author={R{\"o}der, Michael and Both, Andreas and Hinneburg, Alexander},
booktitle={Proceedings of the eighth ACM international conference on Web search and data mining},
pages={399--408},
year={2015}
}
@article{o2015analysis,
title={An analysis of the coherence of descriptors in topic modeling},
author={O’callaghan, Derek and Greene, Derek and Carthy, Joe and Cunningham, P{\'a}draig},
journal={Expert Systems with Applications},
volume={42},
number={13},
pages={5645--5657},
year={2015},
publisher={Elsevier}
}
Forgetting to go back to the paper has happened to me more times than I would like to admit! The metrics that you find in the paper and in OCTIS are, at least in my experience, the most common metrics that you see in academia. Especially NPMI and topic diversity are frequently used as proxies for the "quality" of these topic modeling techniques.
One thing that might be interesting to look at is clustering metrics. Essentially, BERTopic is a clustering algorithm with a topic representation on top. The assumption here is that good clusters lead to good topic representations. Thus, in order to have a good model, you will need good clusters. You can find some of these metrics here but be aware that some of these might need labels to judge the quality of the generated clusters.
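As a hedged sketch of that idea (it assumes the reduced embeddings are reachable via topic_model.umap_model.embedding_, which may differ between BERTopic versions, and it ignores the -1 outliers):
import numpy as np
from sklearn.metrics import silhouette_score

reduced_embeddings = topic_model.umap_model.embedding_   # assumed attribute of the fitted UMAP model
labels = np.array(topics)                                # topic assignments from fit_transform

# Exclude outliers (-1), which do not form a proper cluster
mask = labels != -1
print("Silhouette score:", silhouette_score(reduced_embeddings[mask], labels[mask]))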
Hello Maarten, I would also like to include OCTIS in my evaluation of BERTopic's findings. If I understand you correctly in issues #144 and #331, the following lines should give me the topic-word matrix I need for OCTIS:
topic_word_matrix = topic_model.c_tf_idf.toarray()
topic_word_matrix = np.delete(topic_word_matrix, obj=0, axis=0)
Is that correct?
When I initialise BERTopic with topic_diversity=None MMR is not used and the c-TF-IDF then is fully representative of the topic representation. Is this assumption correct?
Many thanks in advance for the help
@juli-sch Yes, you can use topic_model.c_tf_idf as the topic-word matrix. Do note that you only need the topic-word matrix for topic significance, I believe; it is not necessary for calculating topic coherence scores. For those, you only need the top n words per topic.
Also, make sure not to use the -1 topic as that strictly is not a topic.
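A minimal sketch of how those pieces might be combined for OCTIS (the key names follow OCTIS's model_output convention; topic_words is the list of top words per topic built earlier, excluding topic -1):
import numpy as np

# c-TF-IDF matrix without the row for the -1 outlier topic, as above
topic_word_matrix = topic_model.c_tf_idf.toarray()
topic_word_matrix = np.delete(topic_word_matrix, obj=0, axis=0)

# OCTIS-style model output: top words per topic plus the topic-word matrix
model_output = {
    "topics": topic_words,
    "topic-word-matrix": topic_word_matrix,
}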
@PoonooP I have the same issue (raise ValueError('unable to interpret topic as either a list of tokens or a list of ids')).
But I finally fixed it. BERTopic has the default parameter top_n_words = 10, which can leave empty strings in a topic's word list (up to 10 of them), and those end up in topic_words.
The code below works for me (add if words != ''):
[words for words, _ in topic_model.get_topic(topic) if words != '']
The complete code is below:
def calculate_coherence_score(topic_model):
    # Top n words per topic, skipping empty placeholder strings and the -1 outlier topic
    topic_words = [[words for words, _ in topic_model.get_topic(topic) if words != '']
                   for topic in range(len(set(topics)) - 1)]

    # Tokenize the cleaned documents and build the gensim dictionary and corpus
    tokens = [doc.split() for doc in clean_docs]
    dictionary = corpora.Dictionary(tokens)
    corpus = [dictionary.doc2bow(token) for token in tokens]

    coherence_model = CoherenceModel(topics=topic_words,
                                     texts=tokens,
                                     corpus=corpus,
                                     dictionary=dictionary,
                                     coherence='c_v')
    coherence = coherence_model.get_coherence()
    return coherence

calculate_coherence_score(topic_model)
Keep in mind, coherence is not a perfect metric for measuring the performance of a topic model. In my findings, different measures have different sweet spots :)!
Thank you for your detailed explanation, @MaartenGr. I think it would be very useful for other users if you could add the above recommendations to the FAQ (e.g., "How do I evaluate a topic model?"). I believe this is one of the questions that puzzle many users (including myself).