Paul Bauriegel
There is an official tutorial for your problem: https://colab.research.google.com/github/timeseriesAI/tsai/blob/master/tutorial_nbs/11_How_to_train_big_arrays_faster_with_tsai.ipynb In short, use zarr arrays or np.memmap instead of trying to load your whole dataset into memory.
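As a minimal sketch of the `np.memmap` route (file name, dtype and shape below are placeholders, not taken from the tutorial):

```python
import numpy as np

# Create an on-disk array instead of holding everything in RAM.
# Shape is (n_samples, n_vars, seq_len); adjust to your data.
X = np.memmap("X_big.dat", dtype=np.float32, mode="w+",
              shape=(50_000, 10, 200))

# Fill it chunk by chunk, e.g. while converting smaller source files.
X[:10_000] = np.random.rand(10_000, 10, 200).astype(np.float32)
X.flush()

# Reopen read-only for training: indexing a slice only reads that slice
# from disk, so batches can be served without loading the full array.
X = np.memmap("X_big.dat", dtype=np.float32, mode="r",
              shape=(50_000, 10, 200))
batch = X[:64]
```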
I've only tested it with zarr arrays on the [AMEX Kaggle Data](https://www.kaggle.com/competitions/amex-default-prediction/data), which is bigger than my memory, and that works perfectly. Maybe your batch size is too high? My...
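A chunked zarr array can be built and indexed in much the same way; a small sketch with made-up shapes and path, not the exact code I used for the AMEX data:

```python
import numpy as np
import zarr

# Create a chunked, on-disk array; chunks are read independently,
# so the full array never has to fit into memory. Shapes are illustrative.
z = zarr.open("X_big.zarr", mode="w", dtype="f4",
              shape=(50_000, 10, 200), chunks=(1_024, 10, 200))

# Write in chunk-sized pieces while converting the raw data.
z[:1_024] = np.random.rand(1_024, 10, 200).astype(np.float32)

# Reading a batch pulls in only the chunks that overlap the slice.
z = zarr.open("X_big.zarr", mode="r")
batch = z[:64]   # returns a regular numpy array
```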
Works without problems for me:
same here:
Version: 0.11.1
OS: darwin
OS Release: 21.6.0
Product: Visual Studio Code
Product Version: 1.71.1
Language: en
Can you provide a link to the dataset? From my point of view, training with negative examples should work as well.
You are training on images in Pascal VOC annotation format. From my point of view a negative example would be an image without bounding box annotation:

```
JPEGImages jpeg-file.jpg /path/to/your/jpeg-file.jpg...
```
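To make that concrete, a negative example's annotation file could look roughly like this: a Pascal VOC XML with the header fields kept and the `<object>` entries simply left out (folder, file name and size values are placeholders):

```
<annotation>
    <folder>JPEGImages</folder>
    <filename>jpeg-file.jpg</filename>
    <path>/path/to/your/jpeg-file.jpg</path>
    <size>
        <width>640</width>
        <height>480</height>
        <depth>3</depth>
    </size>
    <!-- no <object> elements: the image contains none of the target classes -->
</annotation>
```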
Yes, I trained some examples with only one class. Some of these images had no annotation, so these images are like negative examples. I haven't actively trained the network to not...
Delete the old cache .pkl file; it should solve the issue. Additionally, in voc.py on line 19, change `tree = ET.parse(ann_dir + ann)` to `tree = ET.parse(os.path.join(ann_dir, ann))`; this should...
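Roughly, the fixed call would look like this (the variable values are placeholders; only the `ET.parse` line itself is from the suggestion above):

```python
import os
import xml.etree.ElementTree as ET

ann_dir = "Annotations"      # placeholder annotation directory
ann = "jpeg-file.xml"        # placeholder annotation file name

# `ann_dir + ann` breaks when ann_dir has no trailing separator
# ("Annotationsjpeg-file.xml"); os.path.join handles this on any OS.
tree = ET.parse(os.path.join(ann_dir, ann))
```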
You can add a small exception clause for the `IndexError` under the existing one in the `_estimate_friedrich_coefficients` function to "solve" the problem.

```python
try:
    df["quantiles"] = pd.qcut(df.signal, r)
except ValueError:
    ...
```
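Spelled out, the patched block inside `_estimate_friedrich_coefficients` could look roughly like this; `df`, `m` and `r` are the function's existing locals, and the `[np.nan] * (m + 1)` fallback is an assumption meant to mirror what the existing `ValueError` branch returns, so double-check it against your installed tsfresh version:

```python
try:
    df["quantiles"] = pd.qcut(df.signal, r)
except ValueError:
    return [np.nan] * (m + 1)
except IndexError:
    # added clause: fall back to NaN coefficients instead of crashing,
    # mirroring the ValueError handling above (assumed behaviour)
    return [np.nan] * (m + 1)
```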
The CI/CD pipeline ran into an error, but that should not be connected to this change: `ValueError: No token found. Please set HF_HUB_ACCESS_TOKEN environment variable.`