Explicit fixed width 'S1' arrays re-encoded creating extra dimension
from collections import OrderedDict
import xarray as xr
import numpy as np
sensor_string_np = np.zeros([12, 100], dtype='|S1')
data_vars = {}
data_vars['sensorName'] = xr.DataArray(data=sensor_string_np.copy(),
                                       attrs=OrderedDict([('_FillValue', ' ')]),
                                       name="sensorName", dims=("sensor", "string"))
scanfile = xr.Dataset(data_vars=data_vars)
scanfile.sensorName[0, :len("test")] = np.frombuffer("test".encode(), dtype='|S1')
scanfile.to_netcdf('test.nc')
(py37) C:\Data\in2019_v02>ncdump -h test.nc
netcdf test {
dimensions:
sensor = 12 ;
string = 100 ;
string1 = 1 ;
variables:
char sensorName(sensor, string, string1) ;
sensorName:_FillValue = " " ;
}
Problem description
I'm not entirely sure if this is a bug or user error. The above code is a minimal example of an issue we've been having with the latest version of xarray (or since about version 0.11).
We are trying to preserve the old fixed-width char array style of strings for backwards-compatibility purposes. However, the above code adds an extra 'string1' dimension when saving to NetCDF.
From what I can understand, this is a feature of the encoding described at http://xarray.pydata.org/en/stable/io.html#string-encoding. I think xarray is treating each byte of the S1 array as a 'string', which it then encodes again by splitting each character into byte arrays of one byte each.
Since I'm explicitly working with char arrays rather than strings, I would expect the array to be written to disk as-is.
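As a rough illustration of where the dummy dimension comes from, the split can be sketched with plain numpy (this approximates xarray's internal character coder rather than calling it):

```python
import numpy as np

# xarray's character coder turns fixed-width bytes into single characters by
# adding a trailing dimension (roughly as below). For an array that is
# already '|S1', the itemsize is 1, so the split still happens and yields
# the dummy length-1 'string1' dimension seen in the ncdump output.
arr = np.zeros((12, 100), dtype='|S1')
split = arr.reshape(arr.shape + (1,)).view('S1')
print(split.shape)  # (12, 100, 1)
```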
I can work around this by setting the variable's encoding dtype to 'str' and removing the _FillValue:
data_vars['sensorName'] = xr.DataArray(data=sensor_string_np.copy(),
                                       name="sensorName", dims=("sensor", "string"))
...
scanfile.to_netcdf(r'test.nc', encoding={'sensorName': {'dtype': 'str'}})
(py37) C:\Data\in2019_v02>ncdump -h test.nc
netcdf test {
dimensions:
sensor = 12 ;
string = 100 ;
variables:
string sensorName(sensor, string) ;
}
However, this seems like a painful workaround.
Is there another way I should be doing this?
Output of xr.show_versions()
INSTALLED VERSIONS
commit: None
python: 3.7.3 | packaged by conda-forge | (default, Mar 27 2019, 23:18:50) [MSC v.1900 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 94 Stepping 3, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None
libhdf5: 1.10.4
libnetcdf: 4.6.2
xarray: 0.12.1
pandas: 0.24.2
numpy: 1.16.2
scipy: None
netCDF4: 1.5.0.1
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.0.3.4
nc_time_axis: None
PseudonetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: None
distributed: None
matplotlib: None
cartopy: None
seaborn: None
setuptools: 41.0.0
pip: 19.0.3
conda: None
pytest: None
IPython: None
sphinx: None
Actually, the workaround I mentioned won't work, as it changes the variable's type to string.
This has some overlap with the proposal in https://github.com/pydata/xarray/issues/2895
I don't think there's an option to disable this currently, but it would probably make sense to add one. In the past, we may not have created dummy dimensions if the string already had dtype S1, but that changed because we wanted to preserve the "roundtripping" invariant: decode(encode(variable)) should be the identity.
@karl-malakoff Solution ahead:
import xarray as xr
import numpy as np
sensor_string_np = np.full(12, "", dtype="|S100")
data_vars = {}
data_vars['sensorName'] = xr.DataArray(data=sensor_string_np.copy(),
                                       name="sensorName", dims=("sensor",))
scanfile = xr.Dataset(data_vars=data_vars)
scanfile.sensorName.encoding = dict(char_dim_name="string")
scanfile.sensorName[0] = "test"
scanfile.to_netcdf('test.nc')
ncdump test.nc
netcdf test {
dimensions:
sensor = 12 ;
string = 100 ;
variables:
char sensorName(sensor, string) ;
data:
sensorName =
"test",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"" ;
}
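For completeness, here is a self-contained round-trip sketch of this approach (assuming the scipy backend is available, which lets to_netcdf return bytes in memory; the sizes and names are illustrative):

```python
import io
import numpy as np
import xarray as xr

# Store fixed-width bytes (S100) and name the on-disk character dimension via
# the char_dim_name encoding, so no dummy 'string1' dimension is created.
names = np.full(12, b"", dtype="S100")
ds = xr.Dataset({"sensorName": (("sensor",), names)})
ds.sensorName.encoding["char_dim_name"] = "string"
ds.sensorName[0] = b"test"

# Round-trip through an in-memory netCDF file: the variable is written as
# char sensorName(sensor, string) and read back as fixed-width S100 bytes.
buf = ds.to_netcdf()  # returns bytes (scipy backend) when no path is given
with xr.open_dataset(io.BytesIO(buf)) as back:
    restored = back.sensorName.load()

print(restored.dtype, restored.values[0])
```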
I'm currently on #10395, so this might be needed to fully support this.