
ENH: Visualizing colocated MEG signal vectors

Open larsoner opened this issue 2 years ago • 1 comment

In OPMs (or if you work with fluxgates!) it's common to have two or three measurements at the same physical location, i.e., a vector-valued channel amplitude. This makes things like plot_topomap not work without some channel picking, and then you end up with incomplete information. (For VectorView/TRIUX it works because we pick mags -- but if that system had 3 mags instead of 1 mag 2 grad at each location it would have the same problem.) It would be really nice to visualize these automatically and nicely. Ideas, which I think I'd probably use all of at different times/use cases:

  1. Project activity to a spherical source space then back out to point mags at each location oriented outward. This is what xfit does with Neuromag data IIRC and something we probably want anyway / independent of this colocation issue.

    • API: source_proj='auto' or something that projects to source space and then back out to outward-oriented point mags when MEG sensors are physically colocated, and can be set to True, e.g., for Neuromag data, so that data from all 306 sensors are at least used at once.
  2. Instead of plotting 1 head+sensor array, plot 3 head+sensor arrays (like at the vertices of an equilateral triangle?) in the same set of axes so you can visualize all three directions at once.

    • API: colocated='xyz' (default) | 'x' | 'y' | 'z' or something might work to say which of the doublets/triplets to use.
  3. Allow reorientation of the MEG sensors to the device frame. By default for most systems AFAIK, Z is local to the sensor and points outward. For these vector-valued measures we could rotate so that X/Y/Z is RAS in the MEG device frame.

    • API: sensor_frame='local' (default) | 'meg' could work for this, and would work nicely with the "colocated" plots to make sure x/y/z are actually interpretable.
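Idea 3 is essentially a change of basis. As a minimal sketch (the function name is hypothetical), assuming each channel measures the field component along its own unit orientation vector, stacking a triplet's orientation vectors as the rows of a 3x3 matrix lets you solve for the field expressed along the device-frame axes:

```python
import numpy as np

def to_device_frame(amplitudes, orientations):
    """Re-express a colocated triplet's measurements in the device frame.

    amplitudes : (3,) channel values; orientations : (3, 3) unit row vectors.
    Each amplitude is orientations[i] @ B, so solving recovers B.
    """
    return np.linalg.solve(orientations, amplitudes)

# Example: channels already oriented along device x/y/z -- identity mapping.
R = np.eye(3)
b = np.array([1.0, 2.0, 3.0])
print(to_device_frame(b, R))  # [1. 2. 3.]
```

This is only the per-sensor rotation; mapping x/y/z onto RAS would additionally need the device-to-RAS convention of the particular system.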

@georgeoneill @jasmainak (and anyone else working with OPMs!) do you have other ideas? What do you think of the above?

larsoner avatar Mar 21 '23 15:03 larsoner

We don't have x/y/z measurements with the OPMs yet ... not sure if it's even possible with the Fieldline system we have. I haven't started to think about how such an analysis might proceed. However, both 1. and 2. sound like reasonable approaches to me, and 3. sounds a little convoluted.

Perhaps 1. could lead to issues if the helmet surface is non-spherical. In fact, that is one issue I have been observing with the individualized helmets that are common in OPM research. The topomaps can be misleading or distorted because the surface is not spherical and the projection assumes a spherical surface. My immediate thought was to visualize the topography directly on a 3D helmet surface in the same spirit as make_field_maps to avoid this issue, but I haven't gotten around to it yet.

jasmainak avatar Mar 22 '23 03:03 jasmainak

I'm happy to take a stab at a compatibility patch for this, since it's really needed for OPM (e.g., ICA).

The simplest method would be to just plot the Z axis (which is radial to the scalp), since this tends to look the most like SQUID. See example below.

Looks like one option would be to use existing syntax within _prepare_topomap_plot. This would have the benefit of updating many (all?) of the topoplot functions at once.

  1. Identify OPM data, e.g. by comparing inst.info['device_info'] against a known list (ours: {'type': 'Cerca', 'model': 'cMEG'}), or check if any raw.info['chs'] come from a known list (ours: 'coil_type': 8002 (FIFFV_COIL_QUSPIN_ZFOPM_MAG2))

  2. call a modified version of _average_fnirs_overlap

  3. update _merge_ch_data to use a modified version of _merge_nirs_data, setting it to just use the OPM Z channels
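Step 1 above might be sketched like this (the set of known coil types is a placeholder containing only the 8002 value mentioned; a real patch would check against the full list of OPM coil type constants):

```python
# Hypothetical sketch: flag a recording as OPM by comparing channel coil
# types against a known list. 8002 is FIFFV_COIL_QUSPIN_ZFOPM_MAG2 as
# noted above; other OPM coil types would be added to this set.
KNOWN_OPM_COIL_TYPES = {8002}

def looks_like_opm(chs):
    """chs: list of channel-info dicts with a 'coil_type' key."""
    return any(ch.get("coil_type") in KNOWN_OPM_COIL_TYPES for ch in chs)

fake_chs = [{"coil_type": 8002}, {"coil_type": 8002}]
print(looks_like_opm(fake_chs))  # True
```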

Another option would be to use the channel names, which should include the axes (e.g., '1T Z', '1T Y', '1T X'). Then you would just subset the picks in _prepare_topomap_plot. However, our channel name formatting has changed with different config files, so would probably need to pass a wildcard somehow (e.g., in an inst.info field). This would be more hands-on for the user.
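The name-based subsetting could look like this sketch (the default pattern is an assumption matching names like '1T Z'; as noted, it would need to be configurable per config file):

```python
import re

# Hypothetical helper: subset picks to one axis by matching a trailing
# axis letter in the channel name. The pattern is an assumption and would
# need to be overridable, since name formats vary across config files.
def pick_axis(ch_names, axis="Z", pattern=r".*\s{axis}$"):
    regex = re.compile(pattern.format(axis=re.escape(axis)))
    return [i for i, name in enumerate(ch_names) if regex.match(name)]

names = ["1T X", "1T Y", "1T Z", "2T X", "2T Y", "2T Z"]
print(pick_axis(names))  # [2, 5]
```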

Agree that a longer-term solution should involve aggregating the data across all channels, but that's probably beyond me for now.

harrisonritz avatar Feb 25 '25 23:02 harrisonritz

The simplest method would be to just plot the Z axis (which is radial to the scalp), since this tends to look the most like SQUID. See example below. ... 3. update _merge_ch_data to use a modified version of _merge_nirs_data, setting it to just use the OPM Z channels

This sounds like maybe the cleanest way. I think long term we probably want to figure out a way to allow plotting all three, but just Z for now is maybe already a good enough start.

EDIT: And this fits well into the API of (2) above, with colocated='z' as the default for now.

larsoner avatar Feb 26 '25 18:02 larsoner

Maybe set an option to define which axis is radial. In the demo with UCL data the radial axis is Y.

georgeoneill avatar Feb 26 '25 20:02 georgeoneill

It would also be pretty quick to compute on the fly which one is most radial

larsoner avatar Feb 26 '25 20:02 larsoner

For what it's worth I'm writing a paper with another general solution to this. But that's more a longer term thing to implement.

By putting this here it forces me to rapidly finish it off!

georgeoneill avatar Feb 26 '25 20:02 georgeoneill

@georgeoneill sounds cool!

It would also be pretty quick to compute on the fly which one is most radial

I like the sound of checking for the radial orientation.

So we could add a function to _merge_ch_data that gets a radial signal from the three channels per sensor. E.g., we could do a weighted average, with the weights equal to how radially oriented each sensor channel is (the origin is always the center of the head, right? So it'd be the absolute cosine between position and orientation?)
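A minimal numpy sketch of that weighted-average idea (the helper name is hypothetical, and it assumes positions are given relative to the head center so radial-ness is the absolute cosine between position and orientation):

```python
import numpy as np

# Hypothetical helper: weight each of a sensor's three colocated channels
# by how radial its orientation is, i.e., |cos| between the unit position
# vector (from the head center) and the unit orientation vector.
def radial_weighted_average(signals, positions, orientations):
    """signals: (3, n_times); positions/orientations: (3, 3) row vectors."""
    pos_unit = positions / np.linalg.norm(positions, axis=1, keepdims=True)
    ori_unit = orientations / np.linalg.norm(orientations, axis=1, keepdims=True)
    weights = np.abs(np.sum(pos_unit * ori_unit, axis=1))
    weights /= weights.sum()
    return weights @ signals

# Toy example: a sensor on the +z axis whose channels point along x, y, z.
pos = np.tile([0.0, 0.0, 0.1], (3, 1))
sig = np.array([[1.0], [2.0], [3.0]])
print(radial_weighted_average(sig, pos, np.eye(3)))  # [3.] -- only z is radial
```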

harrisonritz avatar Feb 26 '25 20:02 harrisonritz

While we could get fancy and create a "virtual sensor" via a weighted combination of the 3, I'd rather keep it simple and just use the cosine angle to figure out which of a given set of colocated sensors is most radial (in device coords) and just display that sensor's signal. I think it's what most people will expect, and we can just say "the most radial colocated sensor's signal will be displayed"
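The winner-take-all pick could be as small as this sketch (helper name hypothetical; again assuming positions relative to the head center):

```python
import numpy as np

# Hypothetical helper: among a colocated triplet, return the index of the
# channel whose orientation is most closely aligned (largest |cos|) with
# the outward radial direction at the sensor's position.
def most_radial_index(position, orientations):
    pos_unit = position / np.linalg.norm(position)
    ori_unit = orientations / np.linalg.norm(orientations, axis=1, keepdims=True)
    return int(np.argmax(np.abs(ori_unit @ pos_unit)))

print(most_radial_index(np.array([0.0, 0.0, 0.1]), np.eye(3)))  # 2
```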

larsoner avatar Feb 26 '25 21:02 larsoner

I don't think it would be that fancy, wondering what the downside is

  • Each sensor should have a radially-oriented channel, so it will already be pretty close to winner-take-all. Where it's not exactly radial, I'd think it would improve homogeneity across the scalp
  • It's virtually the same code, just without the additional max of winner-take-all
  • It's only going to affect OPM users, who will be paying attention to how this worked
  • colocated channels often share noise issues (e.g., 'bad sensors' have multiple bad channels), so it's still helpful for debugging (e.g., versus combining channels across multiple sensors)

harrisonritz avatar Feb 26 '25 21:02 harrisonritz

I can run some basic tests (e.g., how much combining there would be in our OPM system) and can look at both options.

Sounds like we're in agreement on the high-level approach, so I'll start working on this soon

harrisonritz avatar Feb 27 '25 13:02 harrisonritz

wondering what the downside is

If most OPM systems have a nominally radial channel, and we want to display radial by default, as an end user I would expect it to just pick the nominally radial channel's signal. To me it would be weird / unexpected for it to be some fancier, computed quantity (as opposed to just picking the radial channel's data and plotting that)

larsoner avatar Feb 27 '25 14:02 larsoner

For our system, z definitely carries most of the radial projection, but not all of it (80% of the sum). That's normalizing by row sum -- if you normalize by the 2-norm instead, z is more selective (98% of the norm).

I do think that users care more about source realism than sensor realism (e.g., the topomap already smooths; sensors don't, I think), but it looks like combining probably won't generate very different visualizations 👍

For our system, the EZ triplet from raw.info['chs'][i]['loc'] gives the right orientation (see below, loc[9:12]). Do you folks have a sense of whether EZ is going to be the right orientation to use across systems?

import numpy as np
import mne

chs = raw.info['chs']
sensor_names = [ch['ch_name'].split(" ")[1] for ch in chs]
unique_sensor_names = list(set(sensor_names))

radial_matrix = np.zeros((len(unique_sensor_names), 3))
for s, sens in enumerate(unique_sensor_names):
    # indices of the three colocated channels belonging to this sensor
    ch_idx = mne.pick_channels_regexp(sensor_names, sens)
    for c in range(3):
        loc = np.array(chs[ch_idx[c]]['loc'])
        # outward radial direction: unit vector from the origin to the sensor
        radial_direction = loc[0:3]
        radial_direction /= np.linalg.norm(radial_direction)
        # EZ orientation triplet, rotated (move=False, no translation) into head coords
        orientation_vector = mne.transforms.apply_trans(
            raw.info['dev_head_t'], loc[9:12], move=False)
        radial_matrix[s, c] = np.abs(np.dot(radial_direction, orientation_vector))
    radial_matrix[s, :] /= np.sum(radial_matrix[s, :])
Note that the fnirs code has a more generic way of getting overlapping sensors without parsing the sensor names.

(two image attachments)

harrisonritz avatar Feb 27 '25 23:02 harrisonritz

I do think that users care more about source realism than sensor realism (e.g., the topomap already smooths; sensors don't, I think)

I think smoothing / interpolating data between sensors in a topomap to get a smooth image across space is different from doing a weighted average of data for a given sensor at its location. Your argument about wanting to maximize the radial-ness could apply to any MEG system really. I think for consistency and expectations when we show sensor data we should avoid doing a weighted average by default. We do have places where we project to virtual magnetometers (e.g., plot_evoked_field), but it would be a divergence of behavior for MNE to do that by default I think.

Note that the fnirs code has a more generic way of getting overlapping sensors without parsing the sensors name.

Yeah I wouldn't use the names. I would first detect the OPM-ness of the MEG data by checking the coil_type, if OPM data then group channel data into sensors by checking for ch["loc"][:3] equivalence, then pick the radial sensor to display by picking the ch["loc"][9:12] that is most closely aligned with ch["loc"][:3].
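That recipe might look like the following sketch with synthetic 12-element loc arrays (the function name and the rounding tolerance used to compare positions are assumptions):

```python
import numpy as np

# Hypothetical sketch of the recipe above: group channels whose loc[:3]
# (position) coincide, then within each group keep the channel whose
# loc[9:12] (EZ orientation) best aligns with the outward position vector.
def pick_radial_per_sensor(locs, decimals=6):
    groups = {}
    for idx, loc in enumerate(locs):
        key = tuple(np.round(loc[:3], decimals))  # colocation up to rounding
        groups.setdefault(key, []).append(idx)
    picks = []
    for idxs in groups.values():
        pos = locs[idxs[0]][:3]
        pos = pos / np.linalg.norm(pos)
        cosines = [abs(np.dot(locs[i][9:12], pos)) for i in idxs]
        picks.append(idxs[int(np.argmax(cosines))])
    return picks

# Two colocated channels at (0, 0, 0.1): one tangential (x), one radial (z).
locs = np.zeros((2, 12))
locs[:, 2] = 0.1
locs[0, 9:12] = [1, 0, 0]  # tangential
locs[1, 9:12] = [0, 0, 1]  # radial
print(pick_radial_per_sensor(locs))  # [1]
```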

larsoner avatar Feb 27 '25 23:02 larsoner