Dong Daxiang
Keras style of coding

```python
def train():
    args = config.parse_args()
    config.print_arguments(args)
    fleet.init(is_collective=True)
    dam = DAM(args.max_turn_num, args.max_turn_len, args.vocab_size,
              args.emb_size, args.stack_num, args.channel1_num,
              args.channel2_num)
    train_loader, valid_loader = dam.get_loader_from_filelist(
        args.filelist, args.data_source, fleet.worker_num())
    dam.init_emb_from_file(args.word_emb_init)
    optimizer...
```
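The snippet above breaks off at the optimizer setup. A plausible continuation in the same Keras-like spirit is sketched below; `dam.fit`, `args.learning_rate`, and `args.num_epochs` are assumed names, while `fleet.distributed_optimizer` is the existing paddle.distributed.fleet call:

```python
    # Hypothetical continuation, indented to sit inside train():
    # wrap a plain optimizer for collective training, then drive the
    # whole loop through a single Keras-like fit call (assumed API).
    optimizer = paddle.optimizer.Adam(learning_rate=args.learning_rate)
    optimizer = fleet.distributed_optimizer(optimizer)
    dam.fit(train_loader, valid_loader, optimizer, epochs=args.num_epochs)
```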
add readthedocs template
try to upgrade fleet into a package that includes customized distributed training strategies, large-scale training solutions, and commonly used distributed utilities
make fleet a package so that users can do batch training or online training very easily
Currently, `FleetX` uses the native `DataLoader` for training, and when DALI is used as the data loader, a lazy import is applied. A better way is to write a wrapped `DataLoader` so...
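A minimal sketch of the wrapped-`DataLoader` idea, assuming the goal is to hide the backend choice behind one class. `WrappedDataLoader` and `make_pipeline` are illustrative names, not FleetX APIs; the DALI branch assumes NVIDIA DALI's Paddle plugin is installed:

```python
import paddle

class WrappedDataLoader:
    """Uniform loader facade; the DALI backend is imported lazily."""

    def __init__(self, dataset=None, batch_size=32, backend="native",
                 make_pipeline=None):
        if backend == "dali":
            # Lazy import: DALI is only needed when this branch runs, so
            # machines without DALI can still use the native loader.
            from nvidia.dali.plugin.paddle import DALIGenericIterator
            self._loader = DALIGenericIterator(
                make_pipeline(batch_size), ["data", "label"])
        else:
            self._loader = paddle.io.DataLoader(
                dataset, batch_size=batch_size, shuffle=True)

    def __iter__(self):
        return iter(self._loader)
```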
The field imagenet_path from the earlier scripts has been renamed to data_path; since data_path is actually required, an error should be raised when it is missing.
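A minimal `argparse` sketch of the requested behavior (the help string is illustrative):

```python
import argparse

parser = argparse.ArgumentParser()
# data_path replaces the old imagenet_path field; required=True makes
# argparse exit with a clear error whenever the flag is missing.
parser.add_argument("--data_path", type=str, required=True,
                    help="root directory of the training data")
args = parser.parse_args()
```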
Currently, the runtime environments of the benchmark scripts are not fully unified. The maintainers should document the exact environment, or the procedure, for reproducing the results.
Since word2vec learns word representations, it can be viewed as a pre-training model. Please add a word2vec pre-training model with parameter-server training.
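A minimal sketch of the parameter-server side of such an example, using the paddle.distributed.fleet API; `build_word2vec_net` stands in for the skip-gram network this issue asks for:

```python
import paddle
import paddle.distributed.fleet as fleet

paddle.enable_static()  # parameter-server training uses static graphs
fleet.init()  # non-collective init selects parameter-server mode

avg_cost = build_word2vec_net()  # hypothetical: returns the skip-gram loss

strategy = fleet.DistributedStrategy()
strategy.a_sync = True  # asynchronous parameter-server updates
optimizer = fleet.distributed_optimizer(
    paddle.optimizer.SGD(learning_rate=1e-3), strategy)
optimizer.minimize(avg_cost)

if fleet.is_server():
    fleet.init_server()
    fleet.run_server()
else:
    fleet.init_worker()
    # ... feed (center word, context word) batches here ...
    fleet.stop_worker()
```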