
Intake plugin for xarray

25 intake-xarray issues

Thank you for building intake-xarray, this is an awesome package! I am having trouble reading a zarr datastore via an IPFS gateway. I am trying to read a NOAA SST...

rioxarray is a nice library that extends xarray DataArrays for geospatial data: https://corteva.github.io/rioxarray/stable/getting_started/getting_started.html. It is something like a geopandas equivalent for xarray. Essentially it provides 1) `rioxarray.open_rasterio`, a drop-in replacement...

Question: I am wondering if it is possible to subset a dataset (via .sel method) before the data is cached. Reasoning: My use case is - I would like to...

I have a local directory with GeoTIFF files of different shapes. I've explored the `coerce_shape` parameter to define a certain shape manually. I'm wondering if there's a workaround to coerce...

Suppose I have an OPeNDAP URL like this: ```yaml urlpath: http://thredds..../MET/{{ variable }}/{{ variable }}_{{ '%04d' % year }}.nc ``` and I want to template the expansion of the `year`...
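The catalog snippet above uses Jinja2 syntax to zero-pad the year into the URL. As a minimal sketch of what that expansion produces, the template can be emulated with plain Python string formatting; the host path is truncated in the original issue and kept as-is here, and the variable name `tas` is a hypothetical example:

```python
# Hypothetical illustration of the year-templated urlpath from the issue.
# Plain str.format is used here in place of the catalog's Jinja2 syntax;
# {year:04d} mirrors the Jinja2 expression {{ '%04d' % year }}.
TEMPLATE = "http://thredds..../MET/{variable}/{variable}_{year:04d}.nc"

def expand_years(variable, years):
    """Render one URL per year, zero-padding the year to four digits."""
    return [TEMPLATE.format(variable=variable, year=y) for y in years]

urls = expand_years("tas", range(1999, 2002))
# urls[0] is "http://thredds..../MET/tas/tas_1999.nc"
```

A catalog driver would then pass each rendered URL to the underlying xarray opener, which is presumably what the issue is asking to automate over a year range.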

I think `netcdf4` is infected with GPL code. I'm wondering if it could be made an optional dependency of `intake-xarray`, or moved into a separate `intake-netcdf4` package. Although [conda-forge reports...

This removes the older CI config and adds a scheduled nightly test run, which will hopefully catch issues with new versions of dependency libraries (https://github.com/intake/intake-xarray/issues/98)

Using Intake (intake-xarray) on a large number of files, e.g. daily data for one or two decades, results in a too-many-files error. Loading the same set of data with `xarray.open_mfdataset`...

In the future, zarr files/stores will be opened with ``xarray.open_dataset`` as follows ```python ds = xarray.open_dataset(store, engine="zarr", ...) ``` Thus, (eventually) there needs to be a change in how ``intake-xarray``...

When loading `to_dask` with caching as in https://github.com/pangeo-data/pangeo-datastore/issues/113, `fsspec.open_local` first loads the whole dataset and then opens the data in `xarray`, still with chunks but after having spent the time...