Message-ID: <24a84b3f-f7fc-41aa-ac44-2c0319c78e4f@lucifer.local>
Date: Fri, 5 Dec 2025 17:46:16 +0000
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: kernel test robot <oliver.sang@...el.com>
Cc: oe-lkp@...ts.linux.dev, lkp@...el.com, linux-kernel@...r.kernel.org
Subject: Re: [linux-next:master] [mm]  2b6a3f061f:  stress-ng.msg.ops_per_sec
 34.1% regression

On Fri, Dec 05, 2025 at 02:33:08PM +0000, Lorenzo Stoakes wrote:
> On Fri, Dec 05, 2025 at 12:11:23PM +0000, Lorenzo Stoakes wrote:
> > On Fri, Dec 05, 2025 at 09:41:37AM +0800, kernel test robot wrote:
> > >
> > >
> > > Hello,
> > >
> > > kernel test robot noticed a 34.1% regression of stress-ng.msg.ops_per_sec on:
> > >
> > >
> > > commit: 2b6a3f061f11372af79b862d6184d43193ae927f ("mm: declare VMA flags by bit")
> >
> > This is strange, as this _should_ have no actual impact on anything.
> >
> > However, it seems the delta here is that flags <= 32 bits were previously
> > declared as de-facto unsigned int, and by declaring everything as unsigned
> > long via BIT() we have inadvertently undone this optimisation.
> >
> > We can resolve this by having two variants of the INIT_VM_FLAG() macro - one
> > for 32-bit flags and one for 64-bit flags - and using each as appropriate.
> >
> > I will send a patch shortly.
> >
> > Cheers, Lorenzo
>
> OK, having said that, I've now been able to restore performance to baseline
> just by making the result signed rather than unsigned :/
>
> I am digging into why on earth this is the case, and will see if I can figure
> out which flag, and then which bit of code, is somehow relying on this.
>
> BTW I notice reduced system time both before the patch and with the fixup,
> but the actual bogo ops/s values are... worse both before and after the patch
> cited as a regression here?

OK, it's not even this :) it's that _somehow_ the BIT() macro does something
funky which causes an imperfect macro expansion somewhere (due to the
(__force int) cast). Good lord.

Patch incoming in a second.

Cheers, Lorenzo
