
Missing tiles in Xenium H&E image

Open marcovarrone opened this issue 1 year ago • 4 comments

I ran xenium() with default parameters on official data from 10x Genomics (such as https://www.10xgenomics.com/datasets/ffpe-human-breast-with-custom-add-on-panel-1-standard).

However, I noticed that when I plotted the H&E image using the spatialdata-plot package, there were some missing tiles, as shown in the following image:

Image

The missing tiles are always the same if I run xenium() multiple times, so it doesn't look like a random drop.

I checked the original .ome.tif image in QuPath and there are no missing tiles.
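One way to confirm that the drops are deterministic rather than random is to scan the rendered array for all-black tiles and compare the indices across runs. A minimal numpy sketch (the tile size and the toy array below are assumptions for illustration, not taken from the actual data):

```python
import numpy as np

def find_black_tiles(img: np.ndarray, tile: int = 4096) -> list[tuple[int, int]]:
    """Return (row, col) indices of tiles whose pixels are all zero (black)."""
    h, w = img.shape[:2]
    black = []
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            if not img[ty:ty + tile, tx:tx + tile].any():
                black.append((ty // tile, tx // tile))
    return black

# toy example: an 8x8 "image" split into 4x4 tiles, with one tile zeroed out
img = np.ones((8, 8), dtype=np.uint8)
img[4:8, 0:4] = 0  # simulate a missing tile
print(find_black_tiles(img, tile=4))  # [(1, 0)]
```

Running this on the arrays produced by two separate xenium() calls should yield the same tile indices if the drop is deterministic.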

Package versions:

  • spatialdata = 0.2.5
  • spatialdata-io = 0.1.5
  • spatialdata-plot = 0.2.7

marcovarrone avatar Nov 06 '24 13:11 marcovarrone

Hi @marcovarrone, strange bug. Which H&E image are you referring to? The Supplemental: Post-Xenium H&E image (OME-TIFF) from Tissue sample 1 or Tissue sample 2?

LucaMarconato avatar Feb 12 '25 20:02 LucaMarconato

Ok, sample 1.

I tried opening the data with Photoshop (ImageJ gave an error), and I see that there seem to be some artifacts/pixel repetitions exactly where spatialdata-plot (or napari-spatialdata) shows the black squares:

Image

Here I zoom in on one of those areas:

Image

It therefore seems to be an error in the data.

Luckily, if you consider the first downscaled image, FIJI doesn't show any artifacts. Therefore, for this particular dataset I'd consider manually loading the .ome.tif file and constructing the multiscale image from scale 1 instead of downscaling from scale 0. You can check the functions xenium_aligned_image() and _add_aligned_images() in xenium() for some code to start from.
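For context on why the artifacts appear at every resolution: the multiscale image is built by repeatedly downscaling the previous level, so a corrupted region in scale 0 propagates into all coarser levels. A minimal numpy sketch of such a pyramid via 2x mean pooling (the actual spatialdata implementation differs in details):

```python
import numpy as np

def downscale_2x(img: np.ndarray) -> np.ndarray:
    """Downscale a (y, x) image by a factor of 2 via mean pooling."""
    h, w = img.shape
    return img[: h - h % 2, : w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def build_pyramid(scale0: np.ndarray, levels: int) -> list[np.ndarray]:
    """Each pyramid level is a 2x downscale of the previous one."""
    pyramid = [scale0]
    for _ in range(levels):
        pyramid.append(downscale_2x(pyramid[-1]))
    return pyramid

scale0 = np.ones((16, 16))
scale0[0:4, 0:4] = 0.0  # an artifact (black tile) at full resolution
pyramid = build_pyramid(scale0, levels=2)
print([p.shape for p in pyramid])  # [(16, 16), (8, 8), (4, 4)]
print(pyramid[2][0, 0])            # 0.0: the artifact survives into every level
```

Starting from a clean scale 1 instead, as suggested above, avoids inheriting the scale 0 corruption.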

LucaMarconato avatar Feb 12 '25 21:02 LucaMarconato

This is scale 1 opened with FIJI:

Image

LucaMarconato avatar Feb 12 '25 21:02 LucaMarconato

Here is example code achieving what I described above: parsing scale 1 (instead of scale 0) of the H&E image and using it to compute the DataTree (multiscale image). I quickly adapted it from xenium_aligned_image().

import tifffile
from pathlib import Path
import pandas as pd
from spatialdata.models import (
    Image2DModel,
)
from spatialdata.transformations.transformations import Affine
import xmltodict
from spatialdata import SpatialData
from napari_spatialdata import Interactive

image_path = Path(
    "/Users/macbook/embl/projects/basel/spatialdata-sandbox/xenium_rep1_io/data/xenium/outs/Xenium_V1_FFPE_Human_Breast_IDC_With_Addon_he_unaligned_image.ome.tif"
)
alignment_file = Path(
    "/Users/macbook/embl/projects/basel/spatialdata-sandbox/xenium_rep1_io/data/xenium/outs/Xenium_V1_FFPE_Human_Breast_IDC_With_Addon_he_imagealignment.csv"
)
image_models_kwargs = {"chunks": (1, 4096, 4096), "scale_factors": [2, 2, 2, 2]}

assert image_path.exists(), f"File {image_path} does not exist."
assert alignment_file.exists(), f"File {alignment_file} does not exist."

# here we read the data fully in-memory; it requires only 1.14 GB of memory
image = tifffile.imread(image_path, level=1)

# get metadata for scale0 (for reference; the OME-XML stores sizes as strings)
with tifffile.TiffFile(image_path, is_ome=True) as tif:
    ome_metadata = xmltodict.parse(tif.ome_metadata)
    pixels = ome_metadata["OME"]["Image"]["Pixels"]
    sizes = {axis: int(pixels[f"@Size{axis.upper()}"]) for axis in ("x", "y", "c")}

# after manually examining the metadata, we know that the image is in the order of (y, x, c)
dims = ["y", "x", "c"]
c_coords = ["r", "g", "b"]

# the alignment file contains a 3x3 affine matrix (no header)
alignment = pd.read_csv(alignment_file, header=None).values
transformation = Affine(alignment, input_axes=("x", "y"), output_axes=("x", "y"))

image = Image2DModel.parse(
    image,
    dims=dims,
    transformations={"global": transformation},
    c_coords=c_coords,
    **image_models_kwargs,
)

SpatialData.init_from_elements({"he_image": image}).write("he_image.zarr", overwrite=True)

sdata = SpatialData.read("he_image.zarr")
Interactive(sdata)
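For clarity on the alignment step above: the CSV holds a 3x3 affine matrix in homogeneous coordinates that maps image pixel coordinates into the Xenium coordinate system. A small numpy sketch of how such a matrix acts on a coordinate (the matrix values below are made up for illustration, not taken from the actual alignment file):

```python
import numpy as np

# hypothetical 3x3 affine: uniform scale (e.g. pixel size in microns) plus translation
alignment = np.array([
    [0.2125, 0.0,    10.0],
    [0.0,    0.2125, 20.0],
    [0.0,    0.0,     1.0],
])

def apply_affine(matrix: np.ndarray, xy: tuple[float, float]) -> tuple[float, float]:
    """Map an (x, y) coordinate through a 3x3 homogeneous affine matrix."""
    x, y, w = matrix @ np.array([xy[0], xy[1], 1.0])
    return (x / w, y / w)

print(apply_affine(alignment, (100.0, 200.0)))  # (31.25, 62.5)
```

This is what Affine(alignment, input_axes=("x", "y"), output_axes=("x", "y")) applies lazily to the image.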

One note: we currently do not have a general function that takes a .ome.tiff file as input and parses it into a DataArray or DataTree object, but we want to add such an API to spatialdata-io. In this regard, @lucas-diedrich is developing a general reader for images (.tiff, not .ome.tiff), and that effort could be the starting point for a .ome.tiff reader as well. Stay tuned 😁
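Until such a reader exists, the OME-XML metadata can also be parsed with the standard library instead of xmltodict. A minimal sketch on a simplified stand-in snippet (the XML and sizes below are toy values, not the actual file's metadata):

```python
import xml.etree.ElementTree as ET

# simplified stand-in for the string returned by tifffile.TiffFile(...).ome_metadata
ome_xml = """<OME xmlns="http://www.openmicroscopy.org/Schemas/OME/2016-06">
  <Image><Pixels SizeX="17098" SizeY="51187" SizeC="3"/></Image>
</OME>"""

# OME-XML uses a namespace, so queries must be namespace-qualified
ns = {"ome": "http://www.openmicroscopy.org/Schemas/OME/2016-06"}
pixels = ET.fromstring(ome_xml).find("ome:Image/ome:Pixels", ns)
sizes = {axis: int(pixels.get(f"Size{axis.upper()}")) for axis in ("x", "y", "c")}
print(sizes)  # {'x': 17098, 'y': 51187, 'c': 3}
```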

Napari view for the code above:

Image

LucaMarconato avatar Feb 12 '25 23:02 LucaMarconato