[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20230403182644.uylyonu6w6l63oze@revolver>
Date: Mon, 3 Apr 2023 14:26:44 -0400
From: "Liam R. Howlett" <Liam.Howlett@...cle.com>
To: Mark Brown <broonie@...nel.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 2/2] regmap: Add maple tree based register cache
* Mark Brown <broonie@...nel.org> [230403 12:58]:
> On Mon, Apr 03, 2023 at 11:45:08AM -0400, Liam R. Howlett wrote:
> > * Mark Brown <broonie@...nel.org> [230329 20:10]:
>
> > > The entries stored in the maple tree are arrays of register
> > > values, with the maple tree keys holding the register addresses.
>
> > Why not store the registers to values in the maple tree without the
> > array? From reading the below code, the maple tree will hold a ranges
> > (based on registers) pointing to an array which will store the value at
> > the register offset. Could we just store the value in the maple tree
> > directly?
>
> AFAICT that means that we can't readily get the values back out en masse
> to do bulk operations on them without doing a bunch of work to check for
> adjacency and then doing some intermediate marshalling, with cache sync
> block operations are a noticable win. I'm *hopeful* this might end up
> working out fast enough to make the cache more viable on faster buses.
>
> > > This should work well for a lot of devices, though there's some
> > > additional areas that could be looked at such as caching the
> > > last accessed entry like we do for rbtree and trying to minimise
> > > the maple tree level locking.
>
> > In the case of the VMAs, we had a vmacache, which was removed when the
> > maple tree was added since it wasn't providing any benefit. We lost any
> > speed increase to cache misses and updating the cache. I don't know
> > your usecase or if it would result in the same outcome here, but I
> > thought I'd share what happened in the VMA space.
>
> Yeah, I'm hopeful that the maple tree is fast enough that it's not worth
> it. The main use case is read/modify/write sequences where you hit the
> same register twice in quick succession.
>
> > > + rcu_read_lock();
> > > +
> > > + entry = mas_find(&mas, reg);
>
> > mas_walk() might be a better interface for this.
>
> Ah, that's not very discoverable. mas_find() should possibly be called
> mas_find_pausable() or something?
Well, it finds a value at reg or higher, within the limits you pass in.
It was designed for the VMA code where there was a find() that did just
this (but without limits so you actually had to check once it returned).
>
> > > + /* Any adjacent entries to extend/merge? */
> > > + mas_set_range(&mas, reg - 1, reg + 1);
> > > + index = reg;
> > > + last = reg;
> > > +
> > > + lower = mas_find(&mas, reg - 1);
>
> > If you just want to check the previous, you can use:
> > mas_prev(&mas, reg - 1);
> > This will try the previous entry without rewalking from the top of the
> > tree and you don't need to mas_set_range() call.
>
> Hrm, right - it looks like that doesn't actually apply the constraints
> so that'd work. The whole specifying constraints for some operations in
> the mas is a bit confusing.
>
> > > +
> > > + mas_set_range(&mas, index, last);
> > > + ret = mas_store_gfp(&mas, entry, GFP_KERNEL);
>
> > You can avoid this walk as well by changing the order of the code
> > before:
>
> > mas_walk(&mas, reg);
> > if entry... return
> > mas_next(&mas, reg + 1);
> > ...
> > mas_prev(&mas, reg - 1);
> > ...
>
> > This should now be pointing at the location mas_store_gfp() expects:
> > mas.last = last;
> > ret = mas_store_gfp()
>
> Don't we need to set mas.index as well? It does feel a bit wrong to be
> just writing into the mas struct.
Thinking about this more, it might be safer to set mas.index if there
isn't a previous. Perhaps use mas_set_range() if there isn't a
previous.
Perhaps the interface needs to be expanded for setting mas.last. The write
path should be safe for changing where the write ends. I've tried to
avoid re-walking the tree when needed.
What you have will work. If you need more optimisations later, then we
can have another look.
Powered by blists - more mailing lists