Classification weak learners should allow more than one input feature
Hi team,
I decided to give Refinery a try on a classification problem with more than one input feature, where the idea is to classify their combination into a few categories.
To give an example of a similar problem, imagine an oxymoron classification task with two input features, word_a and word_b, and a binary class output: is_oxymoron or not_oxymoron.
The problem I have is that the two features (or their embeddings) are useless in isolation; it's their interaction that counts. But in all the weak learner options, I apparently must choose either feature a or feature b; I can't use both. Am I misunderstanding something? It could be something I don't understand in the UI.
Also, I would expect to be able to transform the input data with my own functions and use the result as input as well; although not ideal, this could work around the limitation of one input feature per learner.
Otherwise, it looks good and the UI is rather well-organised.
Hi @agravier,
Thank you for reaching out and for your feedback. You are right: neither option (1. multi-attribute embeddings, 2. "calculated" columns) is part of our current UI. Calculated columns are on our roadmap for 2022.
Hi! That point is 100% valid, and we thought about it too. We're thinking about the following, and I'd be curious what you think about it:
- currently, you have one programming interface, i.e. in the heuristics sections
- in the near future (Q4), you'll be able to use a similar programming interface to write computed attributes, e.g.

```python
def word_a_cat_word_b(record):
    return str(record["word_a"]) + str(record["word_b"])
```
- also, we're continuing our work on our embedders library. Here, again, we want to provide a programmatic interface similar to the active learning templates, with which you can compute your very own customized (and finetuned) embeddings, e.g.

```python
from embedders.classification.contextual import TransformerSentenceEmbedder

def classification_word_a_cat_word_b_distilbert(record):
    embedder = TransformerSentenceEmbedder("distilbert-base-cased")
    return embedder.fit_transform(record["word_a_cat_word_b"], record["is_oxymoron"])
```
Of course, we're not 100% sure about the exact interface here, but that is the general idea.
And thanks for trying out refinery, it means a lot! :)
Thanks for getting back to me @JWittmeyer and @jhoetter. Sounds good, as long as the UX is there to make all this clear. A couple of other things you may want to consider, from my trial: tabular data export (not that JSON is horrible, but the data lends itself to a tabular format) and "partially annotated input" reconciliation, for when one of the columns of the imported data already contains some labels. Obviously this raises more questions that could be presented to the user about what to do with that data, such as which annotator to assign it to, etc.
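In the meantime, I flatten the JSON export myself with a short script like this (a sketch that assumes the export is a flat list of record dicts; nested fields would need extra handling):

```python
import csv
import io
import json

def json_export_to_csv(json_text: str) -> str:
    """Flatten a JSON export (assumed: a list of flat record dicts) into CSV."""
    records = json.loads(json_text)
    # Collect the union of keys so records with missing fields still fit one header row.
    fieldnames = []
    for rec in records:
        for key in rec:
            if key not in fieldnames:
                fieldnames.append(key)
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)  # missing keys become empty cells
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()
```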
I'll revisit in a few months, all the best, cheers!
Thanks for the input @agravier. We already have a format to upload existing data (https://docs.kern.ai/docs/project-creation-and-data-upload#uploading-existing-labeled-data), but I agree that this requires UX improvement. We'll work on this, and I'd be happy to have your feedback again when that's implemented :)
This will first be solved by implementing #40. You'll be able to modify any attribute, e.g. to create a concatenation of word_a and word_b (similar to this):

```python
def word_a_cat_word_b(record):
    return str(record["word_a"]) + str(record["word_b"])
```
Afterward, you can apply encoding to this attribute.
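To make the two steps concrete, here's a minimal sketch: first materialize the computed attribute on each record, then encode it into a vector. The trigram-hashing encoder below is only an illustrative stand-in for a real embedder (e.g. a transformer model); none of this is refinery's actual API.

```python
import hashlib

def word_a_cat_word_b(record):
    # Step 1: the computed attribute (concatenation of the two features).
    return str(record["word_a"]) + str(record["word_b"])

def hashed_encoding(text, dim=16):
    # Step 2 (stand-in encoder): bucket character trigrams into a fixed-size count vector.
    vec = [0] * dim
    for i in range(len(text) - 2):
        trigram = text[i:i + 3].lower()
        bucket = int(hashlib.md5(trigram.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1
    return vec

records = [{"word_a": "deafening", "word_b": "silence"}]
for rec in records:
    rec["word_a_cat_word_b"] = word_a_cat_word_b(rec)             # computed attribute
    rec["embedding"] = hashed_encoding(rec["word_a_cat_word_b"])  # encoding applied to it
```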
We'll ultimately provide an extensive interface for programming embeddings, but that is a bit further down the road :)
@agravier This is solved with the release of version 1.3.0. You can now do attribute modifications, which allow you to then create exactly the embeddings you like. Let us know what you think :)
Thanks for the heads up @jhoetter , I'll give it a try at the next opportunity. Cheers