hkristof03
@maciejkula Thanks for your reply regarding this question. I'd like to ask: why is it necessary to separate the .fit() and .evaluate() calls this way? Can't we modify the candidates=...
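For anyone reading along, this is the pattern I am referring to: a minimal sketch in the style of the TFRS retrieval tutorial, where the expensive FactorizedTopK metric is skipped during .fit() and only computed in a separate .evaluate() call (the model and dataset names are placeholders of mine):

```python
import tensorflow as tf
import tensorflow_recommenders as tfrs

class RetrievalModel(tfrs.Model):
    def __init__(self, user_model, item_model, candidates):
        super().__init__()
        self.user_model = user_model
        self.item_model = item_model
        # `candidates` is a dataset of candidate embeddings used by the metric.
        self.task = tfrs.tasks.Retrieval(
            metrics=tfrs.metrics.FactorizedTopK(candidates=candidates)
        )

    def compute_loss(self, features, training=False):
        user_embeddings = self.user_model(features["user_id"])
        item_embeddings = self.item_model(features["item_id"])
        # Top-K retrieval metrics are disabled while training and only
        # evaluated when model.evaluate() is called.
        return self.task(
            user_embeddings, item_embeddings, compute_metrics=not training
        )

# model.fit(train_ds, epochs=3)   # fast: no scoring against the candidate corpus
# model.evaluate(test_ds)         # slow: scores every query against all candidates
```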
@patrickorlando This optimization works quite well, but it causes a memory explosion, because the candidates are recomputed for each validation and a new computation graph is created each time...
Yes, that would probably be better, at least to my knowledge. I have never heard Common stock and Shareholders' equity used interchangeably. To me, the former indicates the number...
@Ullar-Kask You should have data about what was visible to the user on the page each time. Products that were visible to the user and did not result in a...
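Concretely, I mean something like this (my own sketch; the impression log and the clicked column are hypothetical):

```python
import pandas as pd

# Hypothetical impression log: every product that was shown to a user on a
# page view, with a flag for whether the impression led to a click/purchase.
impressions = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2],
    "item_id": [10, 11, 12, 10, 13],
    "clicked": [1, 0, 0, 0, 1],
})

positives = impressions[impressions["clicked"] == 1]
# Products that were visible to the user but not clicked become explicit negatives.
negatives = impressions[impressions["clicked"] == 0]
```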
@maciejkula Thanks for your response. I checked the [documentation](https://github.com/tensorflow/recommenders/blob/v0.7.0/tensorflow_recommenders/layers/feature_interaction/dcn.py#L22-L194) of the Cross layer now:

```python
def call(self, x0: tf.Tensor, x: Optional[tf.Tensor] = None) -> tf.Tensor:
    """Computes the feature cross.

    Args:...
```
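For context, this is how I understand x0 and x are meant to be used when stacking Cross layers: a minimal sketch assuming tfrs.layers.dcn.Cross and an arbitrary 32-dimensional input, not code from this thread:

```python
import tensorflow as tf
import tensorflow_recommenders as tfrs

# x0: the original input features; x: the output of the previous Cross layer.
x0 = tf.keras.layers.Input(shape=(32,))

cross1 = tfrs.layers.dcn.Cross()
cross2 = tfrs.layers.dcn.Cross()

# First call: only x0 is passed, so the layer crosses x0 with itself.
x1 = cross1(x0)
# Deeper calls: x0 stays the first argument, the previous output is passed as x.
x2 = cross2(x0, x1)

logits = tf.keras.layers.Dense(1)(x2)
model = tf.keras.Model(inputs=x0, outputs=logits)
```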
Hi @rnyak, thanks for the example. I am trying to use embedding vectors from NLP and CV models. The problem is that these extracted features are sometimes available for...
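The workaround I am currently experimenting with is a zero-vector fallback plus a presence flag, sketched below (the embedding dimension and the lookup dict are my own placeholders):

```python
import numpy as np

EMB_DIM = 512  # assumed dimensionality of the pretrained NLP/CV embeddings

# Hypothetical lookup of precomputed vectors; item 2 has no embedding available.
pretrained = {1: np.random.rand(EMB_DIM), 3: np.random.rand(EMB_DIM)}
item_ids = [1, 2, 3]

def get_embedding(item_id):
    """Return (vector, has_embedding) so the model can learn to ignore the fallback."""
    vec = pretrained.get(item_id)
    if vec is None:
        return np.zeros(EMB_DIM, dtype=np.float32), 0.0
    return vec.astype(np.float32), 1.0

vectors, flags = zip(*(get_embedding(i) for i in item_ids))
item_embeddings = np.stack(vectors)               # shape: (num_items, EMB_DIM)
embedding_present = np.array(flags, np.float32)   # extra feature fed to the model
```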
@haotian-liu thank you for this amazing work. I just started getting familiar with this repository recently. I would like to point out a few things and also ask a question....
Hi @emeli-dral, I just started using this library, but I do not understand why the reference dataset statistics are not / cannot be saved by default, because even...
In the implementation, the global negatives are sampled uniformly from the item corpus. I understand that in the original training dataset each item's sampling probability is adjusted based on the above...
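For reference, my reading of the sampled-softmax (logQ) correction is sketched below: the candidate logits are shifted down by the log of each candidate's sampling probability before the softmax, so frequently sampled items are not over-penalized (the shapes are my assumptions):

```python
import tensorflow as tf

def logq_correction(scores, candidate_sampling_probability):
    # scores: [batch, num_candidates] query-candidate similarity logits.
    # candidate_sampling_probability: [num_candidates] probability that each
    # candidate is drawn into the batch/sample.
    return scores - tf.math.log(candidate_sampling_probability)
```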
@patrickorlando To clarify completely: am I right that the sampling probabilities should be computed from the training dataset for each item ID, and then these probabilities should be joined to...
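To make the question concrete, this is the pipeline I have in mind, sketched under my own assumptions (the toy interaction log and the dummy tower embeddings are made up; only tfrs.tasks.Retrieval and its candidate_sampling_probability argument come from the library):

```python
import pandas as pd
import tensorflow as tf
import tensorflow_recommenders as tfrs

# Toy interaction log standing in for the training dataset.
train = pd.DataFrame({"user_id": [1, 1, 2, 3], "item_id": [10, 20, 10, 30]})

# 1. Empirical sampling probability of each item ID in the training data.
item_probs = (train["item_id"].value_counts() / len(train)).rename("sampling_prob")

# 2. Join the probabilities onto every training example.
train = train.join(item_probs, on="item_id")

# 3. Pass the per-example probabilities to the retrieval task so the in-batch
#    softmax gets the logQ correction. Dummy embeddings stand in for the towers.
batch = train
query_embeddings = tf.random.normal((len(batch), 8))
candidate_embeddings = tf.random.normal((len(batch), 8))

task = tfrs.tasks.Retrieval()
loss = task(
    query_embeddings,
    candidate_embeddings,
    candidate_sampling_probability=tf.constant(
        batch["sampling_prob"].values, dtype=tf.float32
    ),
)
```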