Message-ID: <20251208185725.6ab9bf7e@pumpkin>
Date: Mon, 8 Dec 2025 18:57:25 +0000
From: David Laight <david.laight.linux@...il.com>
To: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, David Hildenbrand
<david@...nel.org>, "Liam R . Howlett" <Liam.Howlett@...cle.com>, Vlastimil
Babka <vbabka@...e.cz>, Mike Rapoport <rppt@...nel.org>, Suren Baghdasaryan
<surenb@...gle.com>, Michal Hocko <mhocko@...e.com>, oliver.sang@...el.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: avoid use of BIT() macro for initialising VMA flags
On Mon, 8 Dec 2025 16:42:43 +0000
Lorenzo Stoakes <lorenzo.stoakes@...cle.com> wrote:
> On Sat, Dec 06, 2025 at 04:43:57PM +0000, Lorenzo Stoakes wrote:
> > On Fri, Dec 05, 2025 at 09:34:49PM +0000, David Laight wrote:
> > > On Fri, 5 Dec 2025 19:18:56 +0000
> > > Lorenzo Stoakes <lorenzo.stoakes@...cle.com> wrote:
> > >
> > > > On Fri, Dec 05, 2025 at 06:43:42PM +0000, David Laight wrote:
> > > > > On Fri, 5 Dec 2025 17:50:37 +0000
> > > > > Lorenzo Stoakes <lorenzo.stoakes@...cle.com> wrote:
> > > > >
> > > > > > Commit 2b6a3f061f11 ("mm: declare VMA flags by bit") significantly changed
> > > > > > how VMA flags are declared, utilising an enum of VMA bit values and
> > > > > > declaring the VM_xxx flags via macro (with the associated ifdef-ery).
> > > > > >
> > > > > > As part of this change, it uses INIT_VM_FLAG() to define VM_xxx flags from
> > > > > > the newly introduced VMA bit numbers.
> > > > > >
> > > > > > However, use of this macro results in apparently unfortunate macro
> > > > > > expansion and a performance degradation. This appears to be due to
> > > > > > the (__force int) cast, which is required for the sparse
> > > > > > typechecking to work.
> > > > >
> > > > > Does sparse complain if you just add 0? As in:
> > > > > #define INIT_VM_FLAG(name) BIT(VMA_ ## name ## _BIT + 0u)
> > > > >
> > > > > That should change the type without affecting what BIT() expands to.
> > > >
> > > > Thanks, checked that and unfortunately that doesn't satisfy sparse :)
> > > >
> > > > I don't think it's too crazy to use 1UL << here, just very frustrating (TM)
> > > > that this is an issue.
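(For reference, the open-coded variant would presumably look something like

	#define INIT_VM_FLAG(name)	(1UL << VMA_ ## name ## _BIT)

- a sketch based on the VMA_*_BIT naming above, not the actual patch.)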
> > >
> > > I might use some of my copious spare time (ha) to see why BIT() fails.
> > > I bet it is just too complex for its own good.
> > > Personally I'm fine with both explicit (1ul << n) and hex constants.
> > > The latter are definitely most useful if you ever look at hexdumps.
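(The old style being hex constants along the lines of

	#define VM_READ		0x00000001
	#define VM_WRITE	0x00000002

- quoting from memory, but that shape lines up directly with a hexdump.)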
> >
> > Thanks :) yeah I just didn't want to go down that rabbit hole myself as I seemed
> > to have the answer and wanted to get it fixed, but obviously am quite curious as
> > to what on earth is causing that.
>
> I did wonder about _calc_vm_trans(), given the 'interesting' stuff it does.
>
> Maybe I should fiddle with that and see...
Hmmm...
/*
 * Optimisation macro.  It is equivalent to:
 *	(x & bit1) ? bit2 : 0
 * but this version is faster.
 * ("bit1" and "bit2" must be single bits)
 */
#define _calc_vm_trans(x, bit1, bit2) \
	((!(bit1) || !(bit2)) ? 0 : \
	 ((bit1) <= (bit2) ? ((x) & (bit1)) * ((bit2) / (bit1)) \
			   : ((x) & (bit1)) / ((bit1) / (bit2))))
The comment fails to mention that it is only sane for constants.
If nothing else, 9 expansions of BIT() are going to generate a very
long line.
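As a standalone illustration of what the macro actually computes (the
bit values below are made up for the example, not real VM_*/PROT_*
constants):

/* Compile with: cc demo.c && ./demo - no kernel headers needed. */
#include <assert.h>

#define _calc_vm_trans(x, bit1, bit2) \
	((!(bit1) || !(bit2)) ? 0 : \
	 ((bit1) <= (bit2) ? ((x) & (bit1)) * ((bit2) / (bit1)) \
			   : ((x) & (bit1)) / ((bit1) / (bit2))))

int main(void)
{
	enum { B1 = 0x04, B2 = 0x40 };	/* made-up single bits */

	assert(_calc_vm_trans(0x05, B1, B2) == B2);	/* bit set: scaled up */
	assert(_calc_vm_trans(0x01, B1, B2) == 0);	/* bit clear: zero */
	assert(_calc_vm_trans(0x44, B2, B1) == B1);	/* bit set: scaled down */
	return 0;
}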
For starters, make it a statement expression and use __auto_type _bit1 = (bit1).
Then add a check that both _bit1 and _bit2 are constants.
It is also worth checking whether the compiler already does the optimisation
for you; it looks like gcc 7.1 onwards generates the 'optimised' code anyway:
https://godbolt.org/z/EGGE56E3r
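A rough, untested sketch of that shape (assuming BUILD_BUG_ON() is an
acceptable way to spell the constant check):

#define _calc_vm_trans(x, bit1, bit2) ({				\
	/* reject non-constant masks at compile time */			\
	BUILD_BUG_ON(!__builtin_constant_p(bit1) ||			\
		     !__builtin_constant_p(bit2));			\
	__auto_type _bit1 = (bit1);					\
	__auto_type _bit2 = (bit2);					\
	(!_bit1 || !_bit2) ? 0 :					\
	(_bit1 <= _bit2 ? ((x) & _bit1) * (_bit2 / _bit1)		\
			: ((x) & _bit1) / (_bit1 / _bit2));		\
})

Checking the raw arguments rather than the locals, since
__builtin_constant_p() on a local is only reliable once the optimiser
has run.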
David