
RFC: Introduce `DerefInto` and `DerefMutInto` for RAII access

Open Thomas-Mewily opened this issue 4 months ago • 8 comments

This RFC introduces DerefInto and DerefMutInto, supertraits of Deref / DerefMut that return their targets by value, enabling ergonomic RAII access for types like RefCell, Mutex, and RwLock.

Rendered

Thomas-Mewily avatar Nov 10 '25 21:11 Thomas-Mewily

adding a use case for this

Being able to deref to a non-reference type would allow linalg libraries, for example, to have Matrix deref to MatrixRef/MatrixMut, which means I don't have to duplicate my API three times

pub struct Matrix<T> { ... }
pub struct MatrixRef<'a, T> { ... }
pub struct MatrixMut<'a, T> { ... }

impl<T> Matrix<T> {
    pub fn nrows(&self) -> usize { ... }
    // ...
}

impl<T> MatrixRef<'_, T> {
    pub fn nrows(&self) -> usize { ... }
    // ...
}

impl<T> MatrixMut<'_, T> {
    pub fn nrows(&self) -> usize { ... }
    // ...
}

becomes

impl<T> MatrixRef<'_, T> {
    pub fn nrows(&self) -> usize { ... }
    // ...
}

impl<T> DerefInto for Matrix<T> {
    type Target<'out> = MatrixRef<'out, T>;
    fn deref_into(&self) -> MatrixRef<'_, T> { ... }
}

impl<T> DerefInto for MatrixMut<'_, T> {
    type Target<'out> = MatrixRef<'out, T>;
    fn deref_into(&self) -> MatrixRef<'_, T> { ... }
}

The more general example would be non-lang reference types, but I mostly care about matrices for my code (faer).

sarah-quinones avatar Dec 02 '25 15:12 sarah-quinones

My 2 cents:

  • The first link in the "Motivation" section is a reddit post. Moreover, it's been downvoted, so its conclusion is unsure. I don't think that link should be used to illustrate, let alone prove, anything. Also, what is an "ol' reference"?
  • The code in the "Motivation" and later sections is based on "foo/bar" examples, which are meaningless and sound more theoretical than motivated by a real example. A concrete example would be clearer and easier to discuss (and indeed appear more motivated); it would also be more educational should this RFC pass eventually.
  • You're suggesting adding a supertrait to Deref (and DerefMut) for another behaviour that would be specific to the type. This trait must be implemented where the type is defined, so in the example of RefCell, there should be an implementation of both Deref and DerefInto for RefCell. But in "Backward compatibility", you show what seems to be a blanket implementation for all T: Deref that calls the base implementation. Wouldn't you get conflicts when defining DerefInto and DerefMutInto? You also mention that Deref would delegate to DerefInto, but didn't you mean the opposite? I might have misunderstood the code (by the way, nitpicking: there's a notation for supertraits, and I believe the code could use some reformatting).
  • Deref hides an implicit behaviour, but it preserves the intent of reference to an item contained in a smart pointer. In the examples you've shown for DerefInto (and mut), there is another implicit behaviour for varying actions, like borrowing a reference at run-time, locking a mutex, and so on. It's not the same at all, and makes me frankly uncomfortable, as it goes against Rust's philosophy of making any safety risk very visible in the code.
  • Some of the example types you give, like RefCell, etc, may panic. One of the big conditions for implementing Deref is that it mustn't fail unexpectedly, and that's only with the standard Deref trait, so I feel even more uncomfortable with some hidden, possibly fail-prone code.

blueglyph avatar Dec 02 '25 16:12 blueglyph

For the Matrix case IMO the more proper solution is to support "custom dynamic-sized-type" like type Mat<T> = [[T]];[^1], so that instead of Matrix<T>, MatrixRef<'a, T>, MatrixMut<'a, T> you use Box<Mat<T>>, &'a Mat<T>, &'a mut Mat<T>.

[^1]: [[T]] is a rectangular slice of slice of T. consider it as unsizing of [[T; m]; n] moving m, n into the pointer-metadata.

kennytm avatar Dec 03 '25 06:12 kennytm

Hello @sarah-quinones, thanks for the example! I encountered a similar issue in the past with a Grid type (which supports subviews and mutable subviews), similar to your Matrix.

My way to solve it was to move the MatrixRef / MatrixMut logic into some traits (MatrixView and MatrixViewMut).

I still need to impl the traits for Matrix / MatrixRef / MatrixMut when necessary, but I'm sure the API will stay consistent, and I can easily add new methods with a default implementation that I don't need to impl again (except when an impl can provide better performance).

pub struct Matrix<T> { ... }
pub struct MatrixRef<'a, T> { ... }
pub struct MatrixMut<'a, T> { ... }

trait MatrixView<T>
{
    fn nrows(&self) -> usize { ... }
    // other methods useful for views
}
trait MatrixViewMut<T> : MatrixView<T> { /* methods for mutable views */ }

impl<T> MatrixView<T> for Matrix<T> { ... }
impl<T> MatrixViewMut<T> for Matrix<T> { ... }

impl<'a, T> MatrixView<T> for MatrixRef<'a, T> { ... }

impl<'a, T> MatrixView<T> for MatrixMut<'a, T> { ... }
impl<'a, T> MatrixViewMut<T> for MatrixMut<'a, T> { ... }

It is also possible to introduce another trait for owning matrix types that exposes constructors if needed.

( For my use case, I have a large number of subtraits, and my trait IGridViewMut looks like this:

pub trait IGridViewMut<G, T, Idx, const N: usize> :
      IGridView<G,T,Idx,N>
    + GetMut<Vector<Idx,N>,Output = T> + GetManyMut<Vector<Idx,N>,Output=T>
    + IndexMut<Vector<Idx,N>, Output = T>
    where
    G : IGrid<T, Idx, N>,
    Idx : Integer
{ ... }

Where GetMut is the same as IndexMut, but fallible, and GetManyMut implements logic similar to get_disjoint_mut, enabling operations such as swapping two indices. Note: it needs some renaming; it is still a work in progress. (Anyway, doing this for high-performance matrices requires a different API than a grid.) )

Maybe making some traits more generic can also help in your case:

pub trait ShapeCore {
	fn nrows(&self) -> usize;
	fn ncols(&self) -> usize;
}

=>

pub trait ShapeCore<Rows=usize,Col=usize> {
	fn nrows(&self) -> Rows;
	fn ncols(&self) -> Col;
}

You will still need to duplicate the API three times, but at least it is centralized in one trait, so renaming/updating the documentation is a little bit easier.

If the view is tightly coupled with the shape, the trait can also depend on it:

trait MatrixView<T,Rows=usize,Col=usize> : ShapeCore<Rows,Col> { ... }

Edit: implementing MatrixView to expose nrows for Matrix, MatrixRef, and MatrixMut also makes them easier to use in generic contexts. For example, a uniform function such as fn nrows<T>(value: &T) -> usize where T: ShapeCore { value.nrows() } is straightforward. However, supporting both T: ShapeCore and T: DerefInto<MatrixRef<'a, T>> requires at least two separate functions to avoid conflicting implementations.

Thomas-Mewily avatar Dec 03 '25 16:12 Thomas-Mewily

Thank you, @blueglyph, for your remark. I agree with most of your points, and I will try to respond to each one (sorry for the long post).

1) The first link in the "Motivation" section is a reddit post.

Yeah, sorry, this was my first RFC.

Indeed, the motivation cites a Reddit post about ergonomics because it was relevant to the topic. My goal was to create an ergonomic interface for non-locking and guarded access.

I tried to keep my post short and focused on the core idea: simple RAII access, ideal for singletons and user-made guarded resources.

2) The code in the "Motivation" and later sections is based on "foo/bar" examples

The main point of it was RAII and singleton access. What the RAII guard or singleton needs to access (a database, a render context) doesn't really matter. If you want more context, here it is:

I was previously creating a single-threaded singleton for a rendering API, similar to Macroquad. In Macroquad, the singleton is hidden in the following way (simplified example, but the idea is similar):

struct Context
{
    pub drawer: Drawer,
}

static mut CONTEXT: Option<Context> = None;

fn context_mut() -> &'static mut Context {
    thread_assert::same_thread();
    unsafe { CONTEXT.as_mut().expect("context not initialized") }
}

The context is hidden, and only methods inside the lib that need it can access it.

I will omit the deeper discussion of the underlying unsoundness and the reasons this pattern should be avoided; for simplicity, in this case it made Macroquad way more pleasant to use than passing a reference to the context around everywhere.

Some methods, such as drawing, are exposed as free functions:

pub mod shapes
{
    pub fn draw_rectangle(r: Rectangle)
    {
        context_mut().drawer.draw_rectangle(r)
    }

    pub fn draw_line(l: Line)
    {
        context_mut().drawer.draw_line(l)
    }
}

impl Drawer
{
    fn draw_rectangle(&mut self, r: Rectangle) { ... }
    fn draw_line(&mut self, l: Line) { ... }
}

However, I'm conflicted about this approach. It's clean because it doesn't expose the singleton, but:

  • The functions need to be written twice:

    • One for the singleton access: fn draw_rectangle(r: Rectangle) { ... }
    • Another one inside the drawer: impl Drawer { fn draw_rectangle(&mut self, r: Rectangle) { ... } }
  • From a user's perspective, it's impossible to add new draw functions inside the same shapes module. So drawing methods from the engine and custom user-made methods will be split into different modules (or the user can create a shapes module in their game and re-export the content of macroquad's shapes and add custom drawing logic, but this feels more like wrapping the game framework crate rather than using it).

  • Extending the Drawer functionality in the user's crate correctly requires twice as much work:

pub trait DrawerExtension
{
    fn draw_rectangle_outline(&mut self, r: Rectangle);
}

impl DrawerExtension for Drawer { ... }

fn draw_rectangle_outline(r: Rectangle)
{
    // Either duplicate the drawing logic here, or expose context_mut().drawer publicly
    // to allow the user to externally extend the main singleton drawing mechanism, even
    // if context_mut().drawer should not be used directly for drawing
    context_mut().drawer.draw_rectangle_outline(r);
}

My first iteration over that was to remove the global draw functions and call them on the singleton directly

fn drawer_mut() -> &'static mut Drawer { ... }
fn drawer_ref() -> &'static     Drawer { ... }

(Technically maybe I don't need drawer_ref() if drawer_mut() returns a mutable reference, but a part of me still wants it.) But calling different methods depending on mutability, like drawer_mut().draw_rectangle(...) or drawer_ref().current_color(), feels awkward.

So I experimented with empty structs and abusing deref/deref_mut:

struct Draw;

impl Deref for Draw
{
    type Target = Drawer;
    fn deref(&self) -> &Self::Target { ... }
}
impl DerefMut for Draw
{
    fn deref_mut(&mut self) -> &mut Self::Target { ... }
}

That way I can write Draw.draw_rectangle() or Draw.current_color() without needing to call different methods depending on whether I need mutable or immutable access.

Users can extend the Drawer struct and directly access the Drawer methods from the singleton through the empty struct, Draw.draw_foo(), without writing another function.

Returning a reference like that with deref() and deref_mut() for the singleton is really convenient, but it's also easy to mess it up:

let mut d1 = Draw;
let drawer_mut1 : &'static mut Drawer = d1.deref_mut();

let mut d2 = Draw;
let drawer_mut2 : &'static mut Drawer = d2.deref_mut(); // Boom: 2 mutable references to the same resource

I can use some kind of guard mechanism, like RefCell for single-threaded or RwLock for multithreaded code, to avoid these cases, but either way, those types return some kind of guard that impls Deref/DerefMut to the target (e.g. cell::Ref<'a,T> / cell::RefMut<'a,T> for RefCell). Even if the guards generally act like a reference to the target with some custom RAII/drop behavior (and some extra data about how to drop it correctly), they are not references.

fn drawer_mut() -> RefMut<'static,Drawer> { ... }

By making it safer, this removes the convenient deref()/deref_mut() hack with empty structs (maybe it was a bad idea). So for better safety, I need to sacrifice some convenience when calling the singleton, and the code Draw.draw_rectangle(...) becomes drawer_mut().draw_rectangle(...) (maybe it's better and I'm probably overthinking it).

Now I don't know if I want my singleton to be multithreaded or single-threaded, so I was thinking about how to wrap it into an API that supports both kinds of access.

Something like:

pub trait ReadGuard : Sized
{
    type Target;
    type ReadGuard<'a> : Deref<Target = Self::Target> where Self: 'a;
    /// Can panic on exceptional behavior (ex poisoned mutex)
    fn read<'a>(&'a self) -> Self::ReadGuard<'a>;
}
impl<T> ReadGuard for std::sync::RwLock<T>
{
    type Target = T;
    type ReadGuard<'a> = std::sync::RwLockReadGuard<'a,T> where Self: 'a;
    fn read<'a>(&'a self) -> Self::ReadGuard<'a> { self.read().expect("poisoned") }
}
pub struct ReferenceReadGuard<'a,T> where T: ?Sized
{
    inner: &'a T,
}
impl<'a,T> Deref for ReferenceReadGuard<'a, T> where T: ?Sized { ... }

(In case you are wondering, I have another trait for fallible read-guard access:

pub trait TryReadGuard : ReadGuard
{
    type Error<'a>: Debug where Self: 'a;
    fn try_read<'a>(&'a self) -> Result<Self::ReadGuard<'a>, Self::Error<'a>>;
}

)

But I still miss the convenience of the deref()/deref_mut() traits that was possible with the less safe code. I'm also worried about the distinction between reference and non-reference types in the language, since some types act like a reference (cell::Ref<'a,T>) but are not actual references, making some traits impossible to use.

(If someone has a different approach than drawer_mut().draw_stuff() for a safe singleton with some guard, I'm interested.)

3) RefCell example

The Deref trait always returns a reference, &Self::Target, but DerefInto always returns a Self::Target<'out> (which can be a reference or a non-reference type).

So Deref implies DerefInto, but DerefInto doesn't necessarily imply Deref, because it doesn't necessarily return a reference (it can return a non-reference type like a guard, e.g. RwLockReadGuard<'a,T>).

So all existing types that impl Deref can automatically impl DerefInto:

impl<T> DerefInto for T where T: Deref
{
    type Target<'out> = &'out T::Target where Self: 'out;
    fn deref_into(&self) -> Self::Target<'_> {
        self.deref()
    }
}

If you have any link to the notation for supertraits, I'm interested :)

4) Deref hides an implicit behaviour.

Yes, you are right. While I'm not sure myself that I want it on RefCell, Mutex, and RwLock, these traits are useful for user-defined types and library authors.

(line 218)

The implementation of these traits for existing types such as RefCell, Mutex, and RwLock could be addressed later, rejected or not, or considered out of scope for this initial RFC.

(line 220)

Since this feature primarily benefits user-defined types and library authors seeking more ergonomic access patterns, native support in the core library can reasonably come at a later stage once the mechanism is stable.

The Index and IndexMut traits used with array[1] can panic if the index is invalid, but people use them instead of array.get(1).unwrap() because they expect the index to be valid in most cases (and if not, it panics safely), so it's convenient.

It's the same for DerefInto / DerefMutInto: if I'm using it to access a singleton, I would expect the access to succeed (or panic safely if it fails for my API).

The original Deref and DerefMut traits don't deref to a field; they contain a method to access the target reference, which can contain some logic that I'm trying to abuse for ergonomics. The logic can still contain implicit behavior, but it is very convenient:

struct DirtyFlag<T>
{
    value: T,
    pub is_dirty: bool,
}

impl<T> Deref for DirtyFlag<T>
{
    type Target = T;
    fn deref(&self) -> &Self::Target { &self.value }
}
impl<T> DerefMut for DirtyFlag<T>
{
    fn deref_mut(&mut self) -> &mut Self::Target { self.is_dirty = true; &mut self.value }
}

5) Deref should not fail

Okay, you got me with that one. In my usage, deref can fail, even though it should not fail most of the time. It shifts from "deref can never fail" to "deref should not be expected to fail in most cases", which is different.

Other stuff

I'd like to thank everyone for the feedback.

I actually agree with the criticisms because of all the valid points:

  • Value and reference destructor order differences (lexical vs. non-lexical),
  • Hidden behavior (though I'm more hesitant about this one),
  • Potential confusion (don't impl it for std types?)
  • Many traits return references, not just Deref/DerefMut (e.g., AsRef<T>::as_ref(&self) -> &T). I'm not sure whether solving this for other traits could be beneficial, or whether there is a way to make reference and guard types compatible in some way (and I'm not sure this is desirable).

Sacrificing Draw.draw_rectangle(...) for drawer_mut().draw_rectangle(...) is probably the biggest ergonomics point for me, but maybe that's the way to go.

Thomas-Mewily avatar Dec 04 '25 01:12 Thomas-Mewily

@Thomas-Mewily

For 1) and 2), I was mostly talking about your motivation intro.

I also have the impression that the feature you're proposing is not so much related to RAII or "singletons" as to the general desire to access some guarded content without writing the access method explicitly. I'd avoid relying so much on OOP terms for a language that isn't really OOP, at least not in the traditional sense.

For 3), what I meant is that your blanket implementation will conflict with any implementation of DerefInto a user will write. Besides, it seems unnecessary, if not even undesirable.

Try to compile this example to see what I mean.

For the supertrait notation, check the reference. However, it seems the compiler can't tell which Target is meant in the implementation of the subtrait, so it finds it ambiguous and requires <Self as DerefInto>::Target, or another name... Maybe it's not worth it, after all.

Note also the Rust style, which prefers to consistently place the open brackets on the same line (there's a Rustfmt in the playground, if you want to reformat your code and if your editor / IDE doesn't do it). That's just nitpicking.

The other part of the remark was more about the vocabulary:

The Deref and DerefMut trait implementation of the core library will just delegate to DerefInto and DerefMutInto

A base class/trait can't delegate to a subclass/subtrait, obviously.

It doesn't look like you wrote that text, though; someone else did, right?

For 4), if they're not used for the very traits you're showing in your motivation section, well, not only does that make the RFC confusing and misleading, but it would be confusing for users of the feature: why use it in their code if the standard library doesn't? What sort of mixed code would that produce, if a .borrow or similar was sometimes explicit, sometimes not?

And the core problem remains: your intent is to hide some mechanism that is very likely to have secondary effects and that Rust generally prefers to expose.

blueglyph avatar Dec 04 '25 13:12 blueglyph

Hello @blueglyph, thanks for the feedback.

Even if the idea is rejected, for the good reason that it is confusing, here are some clarifications of what I had in mind:

The idea was to replace the Deref and DerefMut traits with DerefInto and DerefMutInto, which are more generic, but it is possible to keep the original Deref and DerefMut for backward compatibility. In that case, the supertraits DerefInto and DerefMutInto can be automatically implemented for Deref and DerefMut. That's what I meant by:

The Deref and DerefMut trait implementation of the core library will just delegate to DerefInto and DerefMutInto.

followed by the blanket implementation that can be used for backward compatibility, which I wrote myself.

( impl<T> DerefInto for T where T: Deref { ... })

So from a user's POV, you are not supposed to impl both traits, just the one you need. If your target is a reference, use Deref. If your target is a value, use DerefInto (at some point I hesitated to call DerefInto DerefByValue instead). If you remove one of the Deref / DerefInto impls for TextUnderReview in your example, the code compiles fine.

About the supertrait notation: I didn't use it because it was not suitable in this case. With the Deref trait, for example, a lifetime needs to be introduced, which breaks the current Deref trait API:

trait Deref
{
    type Target;
    fn deref(&self) -> &Self::Target;
}

=>

// I'm not sure about how to express the lifetime usage in DerefInto/if this code is valid.
// But the point is that a lifetime need to be introduced somewhere:

trait Deref<'s> : DerefInto<Target<'s>=&'s Self::Target>
{
    type Target;
    fn deref(&'s self) -> &'s Self::Target;
}

Thomas-Mewily avatar Dec 12 '25 13:12 Thomas-Mewily

The idea was to replace the Deref and DerefMut traits with DerefInto and DerefMutInto, which are more generic, but it is possible to keep the original Deref and DerefMut for backward compatibility. In that case, the supertraits DerefInto and DerefMutInto can be automatically implemented for Deref and DerefMut. That's what I meant by:

Yes, I did understand that the first time. That is something other than the problems I saw:

  • From the look of it, DerefInto delegates to Deref, not the other way round.
  • You use a blanket implementation, which forbids any other implementation. I think what you had in mind was a default implementation instead.

About the supertrait notation, I didn't use it because it was not suitable for that case with the Deref trait for example, because it need to introduce a lifetime, so it break the current Deref trait API :

I think it's possible, but as I said, it's slightly more complicated because of the Target name clash, so I don't think it's worth the trouble. I don't think there should be a problem with the lifetime of the associated type, but the point is moot now, and I'm not inclined to spend more time on this.

blueglyph avatar Dec 12 '25 15:12 blueglyph