
Speeding up mapping with HuggingFace datasets

Open MaveriQ opened this issue 4 years ago • 5 comments

Hi. I am trying to convert corpora from HF to their IPA form with the following snippet, but I am getting really slow speeds: only a couple of examples per second. Do you know how it can be sped up? Thanks

import epitran
from datasets import load_dataset

bookscorpus = load_dataset('bookcorpus',split='train')
epi = epitran.Epitran('eng-Latn')

def transliterate(x):
    return {'trans': epi.transliterate(x['text'])}

tokenized = bookscorpus.map(transliterate, num_proc=32)

MaveriQ avatar Jun 17 '21 10:06 MaveriQ

Hi @MaveriQ, I seem to have missed this message when you originally sent it. It is true that the Python implementation of Epitran is very slow. If you can take advantage of concurrency, you can do better. I am in the process of rewriting parts of Epitran in Rust, which, based on initial tests, will improve performance by about 100 times.
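For illustration, one way to exploit concurrency is a worker pool over the corpus. The sketch below is a minimal, self-contained version of that idea: it uses a thread pool, and `str.upper` stands in for `epi.transliterate` (an assumption made here only so the snippet runs without Epitran installed; in real use you would pass the Epitran method instead).

```python
from concurrent.futures import ThreadPoolExecutor

def transliterate_many(texts, convert, max_workers=8):
    """Apply `convert` to each text concurrently, preserving order.

    `convert` stands in for epi.transliterate; threads can overlap
    work when the conversion spends its time waiting on an external
    process rather than running Python bytecode.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(convert, texts))

# Stand-in conversion; in real use pass epi.transliterate instead.
print(transliterate_many(["hello", "world"], str.upper))  # → ['HELLO', 'WORLD']
```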

dmort27 avatar Nov 10 '21 18:11 dmort27

However, for English this will not help as much, since English support is provided by Flite (written in C) and the only Python part converts the ARPAbet representation from Flite to IPA.

dmort27 avatar Nov 10 '21 18:11 dmort27

Hi. If by concurrency you meant multiprocessing, I already tried that, but it's still pretty slow. Can you recommend anything else for English? Thanks

MaveriQ avatar Nov 12 '21 08:11 MaveriQ

As I look at this issue, the real problem is that Epitran spawns a shell every time it calls lex_lookup to convert a word to IPA. This is expensive (although lex_lookup itself is quite efficient). The solutions would be to:

  • Create Python bindings to the Flite libraries, so the relevant functions could be called directly (rather than via the shell)
  • Add a method to epitran.flite.Flite that passes inputs to lex_lookup in batches.

The second solution would be easier, but ultimately less satisfying.
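A rough sketch of the second (batching) option: build one shell script for the whole batch so Python spawns a single bash process per batch rather than one per word. The helper name `batch_lookup` and its `cmd` parameter are hypothetical; `cmd` defaults to the lex_lookup command discussed above and is assumed to print one pronunciation per line.

```python
import subprocess

def batch_lookup(words, cmd="lex_lookup"):
    """Run `cmd <word> | head -n 1` for every word inside a single
    bash process, instead of spawning a fresh shell from Python for
    each word. Returns one output line per input word."""
    script = "\n".join(f"{cmd} {w} | head -n 1" for w in words)
    result = subprocess.run(["bash", "-c", script],
                            capture_output=True, text=True, check=True)
    return result.stdout.splitlines()
```

With `cmd="echo"` this returns the words unchanged, which makes a convenient smoke test. Note the sketch still forks one `cmd` process per word inside the shell, so Python bindings to the Flite libraries remain the more satisfying fix.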

dmort27 avatar Nov 12 '21 13:11 dmort27

FYI for future reference, an extremely dirty quick fix would be:

import re

words = [...]  # words to be transliterated

# Emit one `lex_lookup <word>` invocation per line, keeping only the
# first pronunciation variant of each word.
with open("eng_words.sh", "w") as f:
    f.write("\n".join([f"lex_lookup {w} | head -n 1" for w in words]))

# IPython/Jupyter shell escape; outside a notebook, run the script with bash directly.
!bash eng_words.sh > eng_lex.txt

lexs = open("eng_lex.txt").readlines()
arpa_to_ipa = {'ey': 'ej', 'ae': 'æ', 'iy': 'i', 'eh': 'ɛ', 'ay': 'aj', 'ih': 'ɪ', 'ow': 'ow', 'aa': 'ɑ', 'ao': 'ɔ', 'aw': 'aw', 'oy': 'oj', 'ah': 'ʌ', 'ax': 'ə', 'uw': 'u', 'uh': 'ʊ', 'er': 'ɹ̩', 'b': 'b', 'ch': 't͡ʃ', 'd': 'd', 'dx': 'ɾ', 'f': 'f', 'g': 'ɡ', 'hh': 'h', 'jh': 'd͡ʒ', 'k': 'k', 'l': 'l', 'em': 'm̩', 'm': 'm', 'en': 'n̩', 'n': 'n', 'ng': 'ŋ', 'p': 'p', 'q': 'ʔ', 'r': 'ɹ', 's': 's', 'sh': 'ʃ', 't': 't', 'dh': 'ð', 'th': 'θ', 'v': 'v', 'w': 'w', 'y': 'j', 'z': 'z', 'zh': 'ʒ'}

ipas = []
for lex in lexs:
    # lex_lookup prints a parenthesized phone list, e.g. "(hh ah l ow1)";
    # drop the parentheses and split into individual phones.
    phones = lex.strip()[1:-1].split()
    # Strip ARPAbet stress digits (e.g. "ow1" -> "ow"), then map each phone to IPA.
    phones = (re.sub(r'\d', '', p) for p in phones)
    ipas.append("".join(arpa_to_ipa[p] for p in phones))

word_to_ipa = dict(zip(words, ipas))  # now this dict has key: word, value: transliteration result

juice500ml avatar Feb 29 '24 06:02 juice500ml