
train-targets file missing with version 0.3.7

Open · kevinwithak opened this issue 9 months ago · 12 comments

I've just upgraded from 0.2.5 to 0.3.7 and have an error that only occurs with the newer version of topaz. I'm launching from cryosparc; here's the end of my logfile:

Starting dataset splitting by running command /home/exx/topaz.sh train_test_split --number 3 --seed 700124987 --image-dir /data/callisto1/kjude/IgM/CS-igm/J94/preprocessed /data/callisto1/kjude/IgM/CS-igm/J94/topaz_particles_processed.txt

# splitting 18 micrographs with 85 labeled particles into 15 train and 3 test micrographs
# writing: /data/callisto1/kjude/IgM/CS-igm/J94/preprocessed/20250320_p3_bin_0031_X-1Y+0-0_patch_aligned_denoised_train.txt
# writing: /data/callisto1/kjude/IgM/CS-igm/J94/preprocessed/20250320_p3_bin_0031_X-1Y+0-0_patch_aligned_denoised_test.txt
# writing: /data/callisto1/kjude/IgM/CS-igm/J94/preprocessed/image_list_train.txt
# writing: /data/callisto1/kjude/IgM/CS-igm/J94/preprocessed/image_list_test.txt

Dataset splitting command complete.

Train-test splitting done in 5.332s.
--------------------------------------------------------------
Starting training...

Starting training by running command /home/exx/topaz.sh train --train-images /data/callisto1/kjude/IgM/CS-igm/J93/image_list_train.txt --train-targets /data/callisto1/kjude/IgM/CS-igm/J93/topaz_particles_processed_train.txt --test-images /data/callisto1/kjude/IgM/CS-igm/J93/image_list_test.txt --test-targets /data/callisto1/kjude/IgM/CS-igm/J93/topaz_particles_processed_test.txt --num-particles 200 --learning-rate 0.0002 --minibatch-size 128 --num-epochs 10 --method GE-binomial --slack -1 --autoencoder 0 --l2 0.0 --minibatch-balance 0.0625 --epoch-size 5000 --model resnet8 --units 32 --dropout 0.0 --bn on --unit-scaling 2 --ngf 32 --num-workers 4 --cross-validation-seed 469388260 --radius 3 --num-particles 200 --device 0 --no-pretrained --save-prefix=/data/callisto1/kjude/IgM/CS-igm/J93/models/model -o /data/callisto1/kjude/IgM/CS-igm/J93/train_test_curve.txt

# Loading model: resnet8
# Model parameters: units=32, dropout=0.0, bn=on
# Receptive field: 71
# Using device=0 with cuda=True
# When using GPU to load data, we only load in this process. Setting num_workers = 0.
# Training...
# source split p_observed num_positive_regions total_regions
Traceback (most recent call last):
  File "/data/ganymede/anaconda3/envs/topaz/bin/topaz", line 33, in <module>
    sys.exit(load_entry_point('topaz-em==0.3.7', 'console_scripts', 'topaz')())
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/topaz/main.py", line 148, in main
    args.func(args)
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/topaz/commands/train.py", line 140, in main
    classifier = train_model(classifier, args.train_images, args.train_targets, args.test_images, args.test_targets,
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/topaz/training.py", line 607, in train_model
    num_positive_regions, total_regions, num_images = report_data_stats(train_images_path, train_targets_path, test_images_path, test_targets_path,
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/topaz/training.py", line 284, in report_data_stats
    train_targets = file_utils.read_coordinates(train_targets_path)
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/topaz/utils/files.py", line 201, in read_coordinates
    particles = pd.read_csv(path, sep='\t', dtype={'image_name':str})
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/pandas/io/parsers/readers.py", line 912, in read_csv
    return _read(filepath_or_buffer, kwds)
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/pandas/io/parsers/readers.py", line 577, in _read
    parser = TextFileReader(filepath_or_buffer, **kwds)
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/pandas/io/parsers/readers.py", line 1407, in __init__
    self._engine = self._make_engine(f, self.engine)
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/pandas/io/parsers/readers.py", line 1661, in _make_engine
    self.handles = get_handle(
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/pandas/io/common.py", line 859, in get_handle
    handle = open(
FileNotFoundError: [Errno 2] No such file or directory: '/data/callisto1/kjude/IgM/CS-igm/J93/topaz_particles_processed_train.txt'

kevinwithak · May 02 '25 02:05

Hi Kevin, this is the bug mentioned in #232 and fixed here. The fix is available in version 0.3.8 from conda, but we currently have an issue with uploading to pip.

Please give that a try and let us know if you still run into issues.

DarnellGranberry · May 02 '25 17:05

Thanks for the update, Darnell. I updated train_test_split_micrographs.py, but now I get a new error. In fact, I get two different errors when running on two different datasets, both of which ran successfully in 0.2.5.

Project 89:

Traceback (most recent call last):
  File "/data/ganymede/anaconda3/envs/topaz/bin/topaz", line 33, in <module>
    sys.exit(load_entry_point('topaz-em==0.3.7', 'console_scripts', 'topaz')())
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/topaz/main.py", line 148, in main
    args.func(args)
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/topaz/commands/train.py", line 140, in main
    classifier = train_model(classifier, args.train_images, args.train_targets, args.test_images, args.test_targets,
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/topaz/training.py", line 643, in train_model
    fit_epochs(classifier, criteria, trainer, train_iterator, test_iterator, args.num_epochs, est_max_prec,
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/topaz/training.py", line 586, in fit_epochs
    loss,precision,tpr,fpr,auprc = evaluate_model(classifier, criteria, test_iterator, use_cuda=use_cuda)
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/topaz/training.py", line 516, in evaluate_model
    for X,Y in data_iterator:
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
    data = self._next_data()
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 677, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 51, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/topaz/training.py", line 438, in __getitem__
    mask = as_mask(img.shape, self.radius, x, y, z, use_cuda=self.use_cuda)
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/topaz/utils/picks.py", line 28, in as_mask
    mask[coords] += 1
IndexError: index 347 is out of bounds for dimension 0 with size 252

Project 78:

Traceback (most recent call last):
  File "/data/ganymede/anaconda3/envs/topaz/bin/topaz", line 33, in <module>
    sys.exit(load_entry_point('topaz-em==0.3.7', 'console_scripts', 'topaz')())
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/topaz/main.py", line 148, in main
    args.func(args)
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/topaz/commands/train.py", line 140, in main
    classifier = train_model(classifier, args.train_images, args.train_targets, args.test_images, args.test_targets,
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/topaz/training.py", line 643, in train_model
    fit_epochs(classifier, criteria, trainer, train_iterator, test_iterator, args.num_epochs, est_max_prec,
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/topaz/training.py", line 582, in fit_epochs
    it = fit_epoch(step_method, train_iterator, est_max_prec=est_max_prec, epoch=epoch, it=it, use_cuda=use_cuda, output=output)
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/topaz/training.py", line 557, in fit_epoch
    metrics = step_method.step(X, Y)
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/topaz/methods.py", line 103, in step
    score = self.model(X).view(-1)
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/topaz/model/classifier.py", line 64, in forward
    z = self.features(x)
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/topaz/model/features/resnet.py", line 250, in forward
    z = self.features(x)
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/torch/nn/modules/container.py", line 217, in forward
    input = module(input)
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/topaz/model/features/resnet.py", line 102, in forward
    y = self.conv(x)
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/data/ganymede/anaconda3/envs/topaz/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.cuda.FloatTensor) should be the same

My environment:

# packages in environment at /data/ganymede/anaconda3/envs/topaz:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
blas 1.0 mkl
bottleneck 1.3.7 py38ha9d4c09_0
brotli-python 1.0.9 py38h6a678d5_8
bzip2 1.0.8 h5eee18b_6
c-ares 1.19.1 h5eee18b_0
ca-certificates 2025.2.25 h06a4308_0
certifi 2024.8.30 py38h06a4308_0
charset-normalizer 3.3.2 pyhd3eb1b0_0
cuda-cudart 11.7.99 0 nvidia
cuda-cupti 11.7.101 0 nvidia
cuda-libraries 11.7.1 0 nvidia
cuda-nvrtc 11.7.99 0 nvidia
cuda-nvtx 11.7.91 0 nvidia
cuda-runtime 11.7.1 0 nvidia
cuda-version 12.9 3 nvidia
ffmpeg 4.3 hf484d3e_0 pytorch
filelock 3.13.1 py38h06a4308_0
freetype 2.13.3 h4a9f257_0
fsspec 2024.6.1 py38h06a4308_0
future 0.18.3 py38h06a4308_0
gmp 6.3.0 h6a678d5_0
gmpy2 2.1.2 py38heeb90bb_0
gnutls 3.6.15 he1e5248_0
h5py 3.11.0 py38hbe37b52_0
hdf5 1.12.1 h2b7332f_3
idna 3.7 py38h06a4308_0
intel-openmp 2023.1.0 hdb19cb5_46306
jinja2 3.1.4 py38h06a4308_0
joblib 1.4.2 py38h06a4308_0
jpeg 9e h5eee18b_3
krb5 1.20.1 h143b758_1
lame 3.100 h7b6447c_0
lcms2 2.16 hb9589c4_0
ld_impl_linux-64 2.40 h12ee557_0
lerc 4.0.0 h6a678d5_0
libcublas 11.10.3.66 0 nvidia
libcufft 10.7.2.124 h4fbf590_0 nvidia
libcufile 1.14.0.30 4 nvidia
libcurand 10.3.10.19 0 nvidia
libcurl 8.12.1 hc9e6f67_0
libcusolver 11.4.0.1 0 nvidia
libcusparse 11.7.4.91 0 nvidia
libdeflate 1.22 h5eee18b_0
libedit 3.1.20230828 h5eee18b_0
libev 4.33 h7f8727e_1
libffi 3.4.4 h6a678d5_1
libgcc-ng 11.2.0 h1234567_1
libgfortran-ng 11.2.0 h00389a5_1
libgfortran5 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libiconv 1.16 h5eee18b_3
libidn2 2.3.4 h5eee18b_0
libnghttp2 1.57.0 h2d74bed_0
libnpp 11.7.4.75 0 nvidia
libnvjpeg 11.8.0.2 0 nvidia
libpng 1.6.39 h5eee18b_0
libssh2 1.11.1 h251f7ec_0
libstdcxx-ng 11.2.0 h1234567_1
libtasn1 4.19.0 h5eee18b_0
libtiff 4.5.1 hffd6297_1
libunistring 0.9.10 h27cfd23_0
libwebp-base 1.3.2 h5eee18b_1
lz4-c 1.9.4 h6a678d5_1
markupsafe 2.1.3 py38h5eee18b_0
mkl 2023.1.0 h213fc3f_46344
mkl-service 2.4.0 py38h5eee18b_1
mkl_fft 1.3.8 py38h5eee18b_0
mkl_random 1.2.4 py38hdb19cb5_0
mpc 1.3.1 h5eee18b_0
mpfr 4.2.1 h5eee18b_0
mpmath 1.3.0 py38h06a4308_0
ncurses 6.4 h6a678d5_0
nettle 3.7.3 hbbd107a_1
networkx 3.1 py38h06a4308_0
numexpr 2.8.4 py38hc78ab66_1
numpy 1.24.3 py38hf6e8229_1
numpy-base 1.24.3 py38h060ed82_1
openh264 2.1.1 h4ff587b_0
openjpeg 2.5.2 he7f1fd0_0
openssl 3.0.16 h5eee18b_0
packaging 24.1 py38h06a4308_0
pandas 2.0.3 py38h1128e8f_0
pillow 10.4.0 py38h5eee18b_0
pip 24.2 py38h06a4308_0
platformdirs 3.10.0 py38h06a4308_0
pooch 1.7.0 py38h06a4308_0
pysocks 1.7.1 py38h06a4308_0
python 3.8.20 he870216_0
python-dateutil 2.9.0post0 py38h06a4308_2
python-tzdata 2025.2 pyhd3eb1b0_0
pytorch 2.0.1 py3.8_cuda11.7_cudnn8.5.0_0 pytorch
pytorch-cuda 11.7 h778d358_5 pytorch
pytorch-mutex 1.0 cuda pytorch
pytz 2024.1 py38h06a4308_0
readline 8.2 h5eee18b_0
requests 2.32.3 py38h06a4308_0
scikit-learn 1.3.0 py38h1128e8f_1
scipy 1.10.1 py38hf6e8229_1
setuptools 75.1.0 py38h06a4308_0
six 1.16.0 pyhd3eb1b0_1
sqlite 3.45.3 h5eee18b_0
sympy 1.13.3 py38h06a4308_0
tbb 2021.8.0 hdb19cb5_0
threadpoolctl 3.5.0 py38h2f386ee_0
tk 8.6.14 h39e8969_0
topaz 0.3.7 py_0 tbepler
torchtriton 2.0.0 py38 pytorch
torchvision 0.15.2 py38_cu117 pytorch
tqdm 4.66.5 py38h2f386ee_0
typing_extensions 4.11.0 py38h06a4308_0
urllib3 2.2.3 py38h06a4308_0
wheel 0.44.0 py38h06a4308_0
xz 5.6.4 h5eee18b_1
zlib 1.2.13 h5eee18b_1
zstd 1.5.6 hc292b87_0

kevinwithak · May 03 '25 02:05

Hi Kevin. Can you double-check that your x and y coordinates are not flipped? Sometimes rounding puts a particle center just past the edge of the image, but the difference here is too large for that to be the cause.
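
One quick way to tell flipped or unscaled coordinates apart from rounding overflow is to compare the picks against the micrograph dimensions. This is a minimal sketch, not part of topaz, assuming the coordinate file uses the usual tab-separated image_name / x_coord / y_coord columns; the path and dimensions are placeholders to replace with your own:

    import pandas as pd

    picks = pd.read_csv('topaz_particles_processed_train.txt', sep='\t')  # placeholder path
    width, height = 4096, 4096  # your (downsampled) micrograph dimensions in x and y

    bad_x = picks[(picks['x_coord'] < 0) | (picks['x_coord'] >= width)]
    bad_y = picks[(picks['y_coord'] < 0) | (picks['y_coord'] >= height)]
    print(len(bad_x), 'picks outside x range;', len(bad_y), 'picks outside y range')
    # Many out-of-range values along one axis only suggests swapped x/y (or coordinates
    # scaled for the wrong image size); a handful just past the edge points to rounding.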

The second issue probably comes from reading mode 1 mrc files. Can you save/convert them to mode 2 and let us know if the issue persists? I am still working on finding a good place to convert training images to floats.
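
If it helps, the header mode can be inspected and the data rewritten as float32 (mode 2) with the mrcfile package. This is a minimal sketch under the assumption that mrcfile is installed and the paths are placeholders; it is not an official topaz utility:

    import numpy as np
    import mrcfile

    src, dst = 'micrograph.mrc', 'micrograph_float32.mrc'  # placeholder paths

    with mrcfile.open(src, permissive=True) as mrc:
        print('mode:', int(mrc.header.mode))   # 1 = int16, 2 = float32, 12 = float16
        data = mrc.data.astype(np.float32)
        voxel_size = mrc.voxel_size

    with mrcfile.new(dst, overwrite=True) as out:
        out.set_data(data)                     # float32 data is written as mode 2
        out.voxel_size = voxel_size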

DarnellGranberry · May 06 '25 23:05

Ah, the micrographs are float16 (mode 12?). When I check the header, it says "Data Type ............... Unknown". I'm not sure how to convert them, so I reprocessed a subset of movies using float32 (in cryosparc) and generated some new particle picks.

Training in topaz goes to completion but ends in an error:

# Loaded 22 training micrographs with ~6609 labeled particles
# Loaded 5 testing micrographs with 1446 labeled particles
# Done!

Training command complete.

Training done in 7519.425s. 
--------------------------------------------------------------
Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 129, in cryosparc_master.cryosparc_compute.run.main
  File "/home/exx/software/cryosparc/cryosparc_worker/cryosparc_compute/jobs/topaz/run_topaz.py", line 420, in run_topaz_wrapper_train
    test_rows = n.where(train_test_data[:, type_index] == 'test')[0]
IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed

Is this a cryosparc problem or a topaz one?

kevinwithak · May 07 '25 15:05

This looks to me like a problem with cryosparc reading the training output file from topaz. It's hard to say without knowing exactly what train_test_data represents, but it seems that train_test_data is a 1D array where the code expects a 2D array containing the train/test category, loss, precision, etc.

This could happen if the file is read with spaces as the separator instead of tabs, because each line would be read as a single long string. I'm not sure how much control you have over the file parsing.
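
To make that concrete (a small sketch, not the wrapper's actual parsing code): reading a tab-separated row with a space delimiter returns the whole line as a single field, so the parsed data ends up effectively one-dimensional:

    import csv

    line = '1\t1\ttrain\t0.7589\t0.0841\t0.0840\t0.5179\t0.5298\t-\n'
    print(next(csv.reader([line], delimiter=' ')))   # one long field: the entire line
    print(next(csv.reader([line], delimiter='\t')))  # nine separate fields, as intended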

DarnellGranberry · May 07 '25 17:05

Thanks, Darnell. I've confirmed it's an incompatibility between the cryosparc wrapper and topaz > 0.2.5.

kevinwithak · May 07 '25 18:05

Hi Kevin, I saw the discussion on the cryosparc forum here. It seems like those devs are making a blanket statement that topaz 0.3+ is not currently supported, but it looks like your training run completed normally. The only change we made to output files is writing an additional column, so there really isn't anything new to support. This makes me think something else is wrong.

Could you post the first few lines of your training output file? It would also help if you could let us know any other context for the code that raises the error.

DarnellGranberry · May 08 '25 15:05

Cryosparc chooses the optimal model from training here. The columns are tab-delimited:

kjude@sr25-cbeb955199 J121 % head train_test_curve.txt
epoch	iter	split	loss	precision	adjusted_precision	tpr	fpr	auprc
1	1	train	0.7589951627693353	0.08415914806231438	0.08404454186885686	0.517920732498169	0.5298938155174255	-
1	2	train	0.7375577223738987	0.06853655286213166	0.06844322119686322	0.5777642726898193	0.5234835743904114	-
1	3	train	0.751506405371743	0.06805318016433383	0.06796050674604377	0.5062001347541809	0.524276614189148	-
1	4	train	0.7398848305538782	0.07960018143997256	0.07949178355330898	0.5297150611877441	0.5190662145614624	-
1	5	train	0.743365954523801	0.044692347350050086	0.044631486232460375	0.49377626180648804	0.5190767645835876	-
1	6	train	0.7432337594562151	0.051246455797475915	0.05117666943007128	0.48540663719177246	0.5198857188224792	-
1	7	train	0.7313524722093382	0.04697750624940538	0.046913533249519676	0.5161116123199463	0.5149300694465637	-
1	8	train	0.714658259576007	0.07476962045106667	0.0746678007228083	0.5436557531356812	0.5087966918945312	-
1	9	train	0.7009305236377292	0.044988258979003534	0.044926994895055736	0.5856674313545227	0.5053885579109192	-

The file is opened like so:

        with open(train_test_curve_path, 'r') as f:
            f = csv.reader(f, delimiter='\t')
            train_test_data = list(f)
            titles = train_test_data[0]
            train_test_data = train_test_data[1:]

kevinwithak · May 08 '25 16:05

Aha, we need more than 10 lines to find the problem. There's an extra tab in the test rows:

1       5000    train   0.17078656379618207     0.20235150884529104     0.20207595073065834     0.4041571617126465      0.06476200371980667     -
1       5001    test    0.1009608268737793              0.060496680438518524    0.060414297305828764    0.35484278202056885     0.07150598615407944     0.1251785814061537
2       5001    train   0.17374379524347836     0.3291499371523128      0.3287017075511068      0.35731685161590576     0.06171675771474838     -
2       5002    train   0.20208616484533504     0.2761394552999697      0.27576341428049633     0.3538723587989807      0.07015661895275116     -
2       5003    train   0.14104171859084116     0.2974617404867432      0.2970566632187177      0.43619880080223083     0.050665777176618576    -
2       5004    train   0.20216177558082163     0.20026075203433005     0.19998804106923568     0.29314127564430237     0.05757328122854233     -
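
For reference, an extra tab would also explain the original IndexError: if the parsed rows are stacked into a numpy array (as the traceback's train_test_data[:, type_index] suggests), rows of unequal length collapse into a 1-D object array that cannot be indexed in two dimensions. A minimal sketch with made-up values (dtype=object mimics what older numpy versions did automatically for ragged input):

    import numpy as np

    # 9 fields in the train row, 10 in the test row (the blank comes from the extra tab)
    rows = [
        ['1', '5000', 'train', '0.17', '0.20', '0.20', '0.40', '0.06', '-'],
        ['1', '5001', 'test', '0.10', '', '0.06', '0.06', '0.35', '0.07', '0.12'],
    ]
    arr = np.array(rows, dtype=object)  # ragged rows -> 1-D object array of lists
    print(arr.shape)                    # (2,)
    arr[:, 2]  # IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed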

kevinwithak · May 08 '25 16:05

That's odd. I'm not able to replicate your issue using your code on my files. Does cryosparc post-process the training output in any way after training completes? There should be a '-' where you have a blank in your test lines, like at the end of each training line.
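
One way to narrow down whether the writer or some post-processing is responsible is to count fields per row in the file cryosparc actually reads; a well-formed file should have a single field count. This is a diagnostic sketch only, not a fix from either project, and the path is a placeholder:

    import csv
    from collections import Counter

    with open('train_test_curve.txt') as f:  # placeholder path
        rows = list(csv.reader(f, delimiter='\t'))

    print(Counter(len(row) for row in rows))  # a single key means every row parses consistently
    header_len = len(rows[0])
    for i, row in enumerate(rows):
        if len(row) != header_len:
            print(f'row {i}: {len(row)} fields -> {row}')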

DarnellGranberry · May 08 '25 17:05

I don't see anything in the cryosparc wrapper that touches the output file. I'm running a test from the command line to see if I can replicate it.

kevinwithak · May 08 '25 18:05

Confirmed: I get the same result running from the command line:

1       4998    train   0.20054735872684065     0.2077430198754336      0.20409406537330482     0.3529072105884552      0.07785971462726593     -
1       4999    train   0.166635723940751       0.21638158779379574     0.21258089899360702     0.415122389793396       0.07393530011177063     -
1       5000    train   0.20880321677527452     0.28946341698398287     0.28437906402117996     0.30888089537620544     0.06425423920154572     -
1       5001    test    0.09027089178562164             0.06120136380195618     0.060126377060570224    0.34922313690185547     0.06420855969190598     0.13514713149238095
2       5001    train   0.21028016725145765     0.2508393589891963      0.24643342800367604     0.2999488413333893      0.06775198876857758     -
2       5002    train   0.1578676861104145      0.4363098798493109      0.428646205270237       0.4044013023376465      0.05404818430542946     -
2       5003    train   0.19359232069952237     0.3036101158048254      0.2982772795939041      0.4087384045124054      0.07090506702661514     -

But topaz 0.2.5a on the same data behaves correctly:

1       4998    train   5.298602104187012       0.8804230093955994      0.20451861885996173     0.00854632817208767     0.002216080203652382    -
1       4999    train   4.566145896911621       0.5364471673965454      0.3855106578328081      0.015244496054947376    0.0016199429519474506   -
1       5000    train   4.66051721572876        0.8861907720565796      0.26292628786724476     0.011879509314894676    0.0022201593965291977   -
1       5001    test    0.09910254627466202     -       0.046767514     0.0023000082    0.00072193384   0.0894471454349586
2       5001    train   4.684543609619141       0.771906316280365       0.300615208400497       0.012974362820386887    0.0020123340655118227   -
2       5002    train   4.838701248168945       0.7096202969551086      0.29906678948014903     0.012181298807263374    0.0019033157732337713   -
2       5003    train   4.687687397003174       0.5467424988746643      0.31354717875000676     0.011223174631595612    0.0016380692832171917   -

kevinwithak · May 08 '25 22:05

Hi @kevinwithak , we've fixed the issue in our latest releases. Please give that a try and let us know if you have any other issues.

DarnellGranberry · Jun 27 '25 14:06