Message-ID: <20251106192210.1b6a3ca0@pumpkin>
Date: Thu, 6 Nov 2025 19:22:10 +0000
From: David Laight <david.laight.linux@...il.com>
To: Chuck Lever <cel@...nel.org>
Cc: "stable@...r.kernel.org" <stable@...r.kernel.org>, Andrew Morton
 <akpm@...ux-foundation.org>, David Laight <David.Laight@...LAB.COM>, Linux
 NFS Mailing List <linux-nfs@...r.kernel.org>, Linux List Kernel Mailing
 <linux-kernel@...r.kernel.org>, speedcracker@...mail.com
Subject: Re: Compile Error fs/nfsd/nfs4state.o - clamp() low limit slotsize
 greater than high limit total_avail/scale_factor

On Thu, 6 Nov 2025 09:33:28 -0500
Chuck Lever <cel@...nel.org> wrote:

> FYI
> 
> https://bugzilla.kernel.org/show_bug.cgi?id=220745

Ugh - that code is horrid.
It seems to have been deleted since, but it was:

	u32 slotsize = slot_bytes(ca);
	u32 num = ca->maxreqs;
	unsigned long avail, total_avail;
	unsigned int scale_factor;

	spin_lock(&nfsd_drc_lock);
	if (nfsd_drc_max_mem > nfsd_drc_mem_used)
		total_avail = nfsd_drc_max_mem - nfsd_drc_mem_used;
	else
		/* We have handed out more space than we chose in
		 * set_max_drc() to allow.  That isn't really a
		 * problem as long as that doesn't make us think we
		 * have lots more due to integer overflow.
		 */
		total_avail = 0;
	avail = min((unsigned long)NFSD_MAX_MEM_PER_SESSION, total_avail);
	/*
	 * Never use more than a fraction of the remaining memory,
	 * unless it's the only way to give this client a slot.
	 * The chosen fraction is either 1/8 or 1/number of threads,
	 * whichever is smaller.  This ensures there are adequate
	 * slots to support multiple clients per thread.
	 * Give the client one slot even if that would require
	 * over-allocation--it is better than failure.
	 */
	scale_factor = max_t(unsigned int, 8, nn->nfsd_serv->sv_nrthreads);

	avail = clamp_t(unsigned long, avail, slotsize,
			total_avail/scale_factor);
	num = min_t(int, num, avail / slotsize);
	num = max_t(int, num, 1);

Let's rework it a bit...
	if (nfsd_drc_max_mem > nfsd_drc_mem_used) {
		total_avail = nfsd_drc_max_mem - nfsd_drc_mem_used;
		avail = min(NFSD_MAX_MEM_PER_SESSION, total_avail);
		avail = clamp(avail, n + sizeof(xxx), total_avail/8);
	} else {
		total_avail = 0;
		avail = 0;
		avail = clamp(0, n + sizeof(xxx), 0);
	}

Neither of those clamp() is sane at all - it should be clamp(val, lo, hi)
with 'lo <= hi', otherwise the result is dependent on the order of the
comparisons.
The compiler sees the second one and rightly bleats.
I can't even guess what the code is actually trying to calculate!

Maybe looking at where the code came from, or the current version might help.

It MIGHT be that the 'lo' of slotsize was an attempt to ensure that
the following 'avail / slotsize' was at least one.
Some software archaeology might show that the 'num = max(num, 1)' was added
because the code above didn't work.
In that case the clamp can be clamp(avail, 0, total_avail/scale_factor)
which is just min(avail, total_avail/scale_factor).

The person who rewrote it between 6.1 and 6.18 might know more.

	David
	
> 
> 
> -------- Forwarded Message --------
> Subject: Re: Compile Error fs/nfsd/nfs4state.o - clamp() low limit
> slotsize greater than high limit total_avail/scale_factor
> Date: Thu, 06 Nov 2025 07:29:25 -0500
> From: Jeff Layton <jlayton@...nel.org>
> To: Mike-SPC via Bugspray Bot <bugbot@...nel.org>, cel@...nel.org,
> neilb@...mail.net, trondmy@...nel.org, linux-nfs@...r.kernel.org,
> anna@...nel.org, neilb@...wn.name
> 
> On Thu, 2025-11-06 at 11:30 +0000, Mike-SPC via Bugspray Bot wrote:
> > Mike-SPC writes via Kernel.org Bugzilla:
> > 
> > (In reply to Bugspray Bot from comment #5)  
> > > Chuck Lever <cel@...nel.org> replies to comment #4:
> > > 
> > > On 11/5/25 7:25 AM, Mike-SPC via Bugspray Bot wrote:  
> > > > Mike-SPC writes via Kernel.org Bugzilla:
> > > >   
> > > > > Have you found a 6.1.y kernel for which the build doesn't fail?  
> > > > 
> > > > Yes. Compiling Version 6.1.155 works without problems.
> > > > Versions >= 6.1.156 aren't.  
> > > 
> > > My analysis yesterday suggests that, because the nfs4state.c code hasn't
> > > changed, it's probably something elsewhere that introduced this problem.
> > > As we can't reproduce the issue, can you use "git bisect" between
> > > v6.1.155 and v6.1.156 to find the culprit commit?
> > > 
> > > (via https://msgid.link/ab235dbe-7949-4208-a21a-2cdd50347152@kernel.org)  
> > 
> > 
> > Yes, your analysis is right (thanks for it).
> > After some investigation, the issue appears to be caused by changes introduced in
> > include/linux/minmax.h.
> > 
> > I verified this by replacing minmax.h in 6.1.156 with the version from 6.1.155,
> > and the kernel then compiles successfully.
> > 
> > The relevant section in the 6.1.156 changelog (https://cdn.kernel.org/pub/linux/kernel/v6.x/ChangeLog-6.1.156) shows several modifications to minmax.h (notably around __clamp_once() and the use of
> > BUILD_BUG_ON_MSG(statically_true(ulo > uhi), ...)), which seem to trigger a compile-time assertion when building NFSD.
> > 
> > Replacing the updated header with the previous one resolves the issue, so this appears
> > to be a regression introduced by the new clamp() logic.
> > 
> > Could you please advise who is the right person or mailing list to report this issue to
> > (minmax.h maintainers, kernel core, or stable tree)?
> >   
> 
> I'd let all 3 know, and I'd include the author of the patches that you
> suspect are the problem. They'll probably want to revise the one that's
> a problem.
> 
> Cheers,

