DeepCTR
Why doesn't the value of vocabulary_size in SparseFeat cause an error when running on GPU?
The code is the example from the demo, in which the vocabulary_size of each SparseFeat is set to the feature's maximum value + 1:

fixlen_feature_columns = [SparseFeat(feat, data[feat].max() + 1, embedding_dim=4)
                          for feat in sparse_features]
When I change vocabulary_size to 1, the code still runs successfully:

fixlen_feature_columns = [SparseFeat(feat, 1, embedding_dim=4)
                          for feat in sparse_features]
But when I disable the GPU and train the model on the CPU, TensorFlow raises an index-out-of-bounds error.
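This matches TensorFlow's documented behavior for tf.gather, which backs embedding lookups: on CPU, an out-of-range index raises InvalidArgumentError, while on GPU the corresponding output rows are silently filled with zeros and no error is raised. A minimal sketch reproducing the CPU side (the GPU branch only runs if a GPU is present; DeepCTR itself is not needed to see the effect):

```python
import tensorflow as tf

# vocabulary_size = 1 -> the embedding table has a single row (index 0)
params = tf.random.normal([1, 4])
ids = tf.constant([0, 3, 5])  # 3 and 5 are out of range

# On CPU, out-of-range indices raise InvalidArgumentError.
with tf.device("/CPU:0"):
    try:
        tf.nn.embedding_lookup(params, ids)
        print("lookup succeeded")
    except tf.errors.InvalidArgumentError:
        print("CPU: InvalidArgumentError for out-of-range index")

# On GPU, the same lookup silently returns zero vectors for the
# out-of-range indices, so training with vocabulary_size=1 "works"
# but every embedding beyond index 0 is just zeros.
```

So the GPU run is not correct, only silent: with vocabulary_size=1 all features beyond index 0 get a zero embedding, which is why the demo sets vocabulary_size to data[feat].max() + 1.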
Please provide your runtime environment information (e.g. Python/TF/CUDA versions) so that we can reproduce the issue and investigate.
Describe the bug(问题描述)
A clear and concise description of what the bug is. Better with standalone code to reproduce the issue.

To Reproduce(复现步骤)
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

Operating environment(运行环境):
- python version [e.g. 3.6, 3.7]
- tensorflow version [e.g. 1.4.0, 1.15.0, 2.10.0]
- deepctr version [e.g. 0.9.2,]

Additional context
Add any other context about the problem here.