
Import time of the library is high

Open: OptimeeringBigya opened this issue 7 months ago

The import time of the library seems to be very high.

Steps to reproduce:

Import times with only feast installed

Create a fresh virtual environment and install feast. Run the following python module.

from time import perf_counter

now = perf_counter()
import feast
print(perf_counter() - now)

Result: The printed import time in my case was 2.71 seconds, which is already quite high for a library import. Additionally, I get a warning: None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
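As a side note on methodology: a cold import time is easiest to measure reproducibly by spawning a fresh interpreter per measurement, so that sys.modules caching does not skew repeat runs. A minimal sketch (the cold_import_time helper is mine, not part of feast; json is used as a stand-in, substitute "feast" in an environment where it is installed):

```python
import subprocess
import sys

def cold_import_time(module: str) -> float:
    """Time importing `module` in a fresh interpreter, avoiding the warm sys.modules cache."""
    code = (
        "from time import perf_counter; "
        "t = perf_counter(); "
        f"import {module}; "
        "print(perf_counter() - t)"
    )
    out = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip())

# Stand-in example; replace "json" with "feast" where feast is installed.
print(cold_import_time("json"))
```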

Import times after adding TF

Install the tensorflow library and run the script again.

Result: The import took ~3.1 seconds. The warning is no longer printed.

Import times after adding PyTorch

Install pytorch (torch) and run the script again.

Result: The run time went up to ~15 seconds. Additionally, new warnings are now printed:

2025-07-23 13:36:02.068516: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1753270562.131461   66853 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1753270562.152065   66853 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
W0000 00:00:1753270562.278959   66853 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1753270562.278987   66853 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1753270562.278991   66853 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1753270562.278995   66853 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
2025-07-23 13:36:02.293151: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
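To pinpoint which submodules dominate the import time, CPython's built-in -X importtime flag prints a per-module breakdown to stderr. A rough sketch of collecting and ranking that output (again using json as a stand-in for feast):

```python
import subprocess
import sys

# Run a child interpreter with -X importtime; the breakdown goes to stderr.
# Substitute "feast" for "json" in an environment where feast is installed.
result = subprocess.run(
    [sys.executable, "-X", "importtime", "-c", "import json"],
    capture_output=True, text=True, check=True,
)

# Each line looks like: "import time: self [us] | cumulative | imported package".
rows = []
for line in result.stderr.splitlines():
    if not line.startswith("import time:"):
        continue
    parts = line[len("import time:"):].split("|")
    if len(parts) != 3:
        continue
    _self_us, cum_us, mod = parts
    try:
        rows.append((int(cum_us), mod.strip()))
    except ValueError:
        continue  # skip the header row, whose fields are not numeric

# Print the ten modules with the largest cumulative import cost.
for cum, mod in sorted(rows, reverse=True)[:10]:
    print(f"{cum:>10} us  {mod}")
```

This makes it easy to see whether the cost comes from feast itself or from transitive imports of tensorflow/torch.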

OptimeeringBigya · Jul 25 '25 08:07