Message-ID: <3f10e398-5bca-4b8e-aca0-750a1ecfdda8@lucifer.local>
Date: Fri, 5 Dec 2025 14:33:08 +0000
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: kernel test robot <oliver.sang@...el.com>
Cc: oe-lkp@...ts.linux.dev, lkp@...el.com, linux-kernel@...r.kernel.org
Subject: Re: [linux-next:master] [mm]  2b6a3f061f:  stress-ng.msg.ops_per_sec
 34.1% regression

On Fri, Dec 05, 2025 at 12:11:23PM +0000, Lorenzo Stoakes wrote:
> On Fri, Dec 05, 2025 at 09:41:37AM +0800, kernel test robot wrote:
> >
> >
> > Hello,
> >
> > kernel test robot noticed a 34.1% regression of stress-ng.msg.ops_per_sec on:
> >
> >
> > commit: 2b6a3f061f11372af79b862d6184d43193ae927f ("mm: declare VMA flags by bit")
>
> This is strange, as this _should_ have no actual impact on anything.
>
> However, it seems that the <= 32-bit flags previously being declared as
> de-facto unsigned int is the delta here, and by declaring everything as an
> unsigned long via BIT() we have inadvertently undone this optimisation.
>
> We can resolve this by having two variants of the INIT_VM_FLAG() macro - one
> for 32-bit flags and one for 64-bit flags - and using each as appropriate
> (a rough sketch of what I mean is below).
>
> I will send a patch shortly.
>
> Cheers, Lorenzo
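
(Purely to illustrate what I mean above - this is a rough sketch, not the
actual patch, and the macro names/placement are made up - the two variants
could look something like this:)

/*
 * Hypothetical sketch only: keep flags that fit in the low 32 bits as
 * 32-bit constants, and only widen the high ones to unsigned long.
 */
#define INIT_VM_FLAG(nr)	(1U  << (nr))	/* flags in bits 0..31  */
#define INIT_VM_FLAG_HIGH(nr)	(1UL << (nr))	/* flags in bits 32..63 */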

OK, having said that, I've now been able to restore performance to the original
baseline just by making the result signed rather than unsigned :/

I am digging into why on earth this is the case, and will see if I can figure
out which flag, and then which bit of code, is somehow relying on this.
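
(For context on the type difference in play: BIT() in the kernel is
effectively (1UL << (nr)), i.e. a 64-bit unsigned constant, whereas a plain
hex literal below 0x80000000 is a signed int. A trivial standalone check,
just to illustrate - not kernel code:)

#include <stdio.h>

/* Same shape as the kernel's BIT(): an unsigned long constant. */
#define BIT(nr) (1UL << (nr))

/* Map an integer expression to the name of its type (C11 _Generic). */
#define TYPE_NAME(x) _Generic((x),		\
	int:		"int",			\
	unsigned int:	"unsigned int",		\
	long:		"long",			\
	unsigned long:	"unsigned long",	\
	default:	"other")

int main(void)
{
	printf("0x00000100 : %s\n", TYPE_NAME(0x00000100)); /* int           */
	printf("0x80000000 : %s\n", TYPE_NAME(0x80000000)); /* unsigned int  */
	printf("BIT(8)     : %s\n", TYPE_NAME(BIT(8)));     /* unsigned long */
	return 0;
}

So the flag constants went from (mostly signed) 32-bit ints to 64-bit unsigned
longs, which is presumably what the codegen is sensitive to.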

BTW, I notice reduced system time before the patch / with the fixup, but the
actual bogo ops/s values there are... worse than with the patch cited as a
regression here?

All very odd:

stress-ng: metrc: [1662] stressor       bogo ops real time  usr time  sys time   bogo ops/s     bogo ops/s CPU used per       RSS Max
stress-ng: metrc: [1662]                           (secs)    (secs)    (secs)   (real time) (usr+sys time) instance (%)          (KB)

(before patch)
stress-ng: metrc: [1217] msg           906205725     60.00    136.42    560.81  15103193.43     1299711.65        18.74          2124
stress-ng: metrc: [1375] msg           901766456     60.00    134.85    552.39  15028736.07     1312159.76        18.47          2128

(after patch)
stress-ng: metrc:  [776] msg           1186639996     60.01    166.74    615.22  19775640.54     1517517.09        21.02          2144
stress-ng: metrc:  [942] msg           1191427646     60.00    168.62    621.92  19855980.65     1507098.28        21.25          2120

(after fixup)
stress-ng: metrc:  [771] msg           906964106     60.02    135.62    555.29  15111666.71     1312719.21        18.57          2120
stress-ng: metrc:  [934] msg           904987762     60.00    135.16    555.69  15082486.49     1309953.15        18.57          2136


Cheers, Lorenzo
