Message-ID: <4b72bcad-5174-49c7-ad90-63a9c312df0b@suse.cz>
Date: Wed, 12 Nov 2025 16:56:19 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Christoph Hellwig <hch@....de>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
 Christoph Lameter <cl@...two.org>, David Rientjes <rientjes@...gle.com>,
 Roman Gushchin <roman.gushchin@...ux.dev>, Harry Yoo <harry.yoo@...cle.com>,
 Eric Biggers <ebiggers@...nel.org>, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH 7/7] mempool: add mempool_{alloc,free}_bulk

On 11/12/25 16:47, Christoph Hellwig wrote:
> On Wed, Nov 12, 2025 at 01:20:21PM +0100, Vlastimil Babka wrote:
>> > +	if (IS_ERR(fault_create_debugfs_attr("fail_mempool_alloc", NULL,
>> > +			&fail_mempool_alloc)) ||
>> > +	    IS_ERR(fault_create_debugfs_attr("fail_mempool_alloc_bulk", NULL,
>> > +			&fail_mempool_alloc_bulk)))
>> > +		return -ENOMEM;
>> 
>> Pedantically speaking the error (from debugfs_create_dir()) might be
>> different, probably doesn't matter in practice.
> 
> Yeah, this is an initcall, so the exact error really does not matter.
> But adding an error variable isn't that annoying, so I'll switch to
> that.
> 
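For reference, a minimal sketch of what propagating the actual error (instead
of the hard-coded -ENOMEM) could look like; the initcall name and level here
are placeholders, not taken from the patch:

static int __init mempool_fault_init(void)
{
	struct dentry *dir;

	dir = fault_create_debugfs_attr("fail_mempool_alloc", NULL,
					&fail_mempool_alloc);
	if (IS_ERR(dir))
		return PTR_ERR(dir);

	dir = fault_create_debugfs_attr("fail_mempool_alloc_bulk", NULL,
					&fail_mempool_alloc_bulk);
	if (IS_ERR(dir))
		return PTR_ERR(dir);

	return 0;
}
late_initcall(mempool_fault_init);
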
>> >  	unsigned long flags;
>> > -	void *element;
>> > +	unsigned int i;
>> >  
>> >  	spin_lock_irqsave(&pool->lock, flags);
>> > -	if (unlikely(!pool->curr_nr))
>> > +	if (unlikely(pool->curr_nr < count - allocated))
>> 
>> So we might be pessimistic here when some of the elements in the array
>> already are not NULL so we need in fact less. Might be an issue if callers
>> were relying on this for forward progress? It would be simpler to just tell
>> them not to...
> 
> Yes.  I think alloc_pages_bulk always allocates from the beginning
> and doesn't leave random holes?  That's what a quick look at the code
> suggests; unfortunately, the documentation for it totally sucks.

Yeah, I think it's fine with alloc_pages_bulk. It would only be a concern if
the bulk alloc+mempool user used up part of the allocated array, NULLing
some earlier pointers but leaving later ones alone, and then attempted to
refill it.
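Something like this purely hypothetical caller, assuming the
mempool_alloc_bulk(pool, elems, count, gfp_mask) signature implied by the
kerneldoc below and a made-up consume() helper (error handling omitted):

static void refill_example(mempool_t *pool)
{
	void *elems[4] = {};

	/* initial fill: all four slots end up non-NULL */
	mempool_alloc_bulk(pool, elems, 4, GFP_NOIO);

	/* consume only the first two slots, NULLing them */
	consume(elems[0]);
	elems[0] = NULL;
	consume(elems[1]);
	elems[1] = NULL;

	/*
	 * Refill: only two new elements are needed, but if pool->alloc
	 * fails at i == 0, the reserve check compares curr_nr against
	 * count - allocated == 4 and may refuse even though two
	 * reserved elements would be enough.
	 */
	mempool_alloc_bulk(pool, elems, 4, GFP_NOIO);
}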

>> > + * @pool:	pointer to the memory pool
>> > + * @elems:	partially or fully populated elements array
>> > + * @count:	size (in entries) of @elems
>> > + * @gfp_mask:	GFP_* flags.  %__GFP_ZERO is not supported.
>> 
>> We should say __GFP_DIRECT_RECLAIM is mandatory...
> 
> It's not really going to fit in there :)  Maybe I should have ignored
> Eric's request to mention __GFP_ZERO here and just keep everything
> together.

I thought it could be multiline, but if not, it can refer to the notes below
and be explained there, yeah.
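E.g. something along these lines (just a sketch of the wording, adjust as
you see fit):

 * @gfp_mask:	GFP_* flags.  %__GFP_DIRECT_RECLAIM must be set (see the
 *		notes below); %__GFP_ZERO is not supported.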

>> > +repeat_alloc:
>> > +	/*
>> > +	 * Try to allocate the elements using the allocation callback.  If that
>> > +	 * succeeds or we were not allowed to sleep, return now.  Don't dip into
>> > +	 * the reserved pools for !__GFP_DIRECT_RECLAIM allocations as they
>> > +	 * aren't guaranteed to succeed and chances of getting an allocation
>> > +	 * from the allocators using GFP_ATOMIC is higher than stealing one of
>> > +	 * the few items from our usually small pool.
>> > +	 */
>> 
>> Hm, but the code doesn't do what the comment says, AFAICS? It will try
>> dipping into the pool and might succeed if there are elements; it just
>> won't wait for them there?

> Yeah, that comment is actually stale from an older version.
> 
>> 
>> > +	for (; i < count; i++) {
>> > +		if (!elems[i]) {
>> > +			elems[i] = pool->alloc(gfp_temp, pool->pool_data);
>> > +			if (unlikely(!elems[i]))
>> > +				goto use_pool;
>> > +		}
>> > +	}
>> > +
>> > +	return 0;
>> > +
>> > +use_pool:
>> 
>> So should we bail out here with -ENOMEM when !(gfp_mask & __GFP_DIRECT_RECLAIM)?
> 
> No, I don't want the !__GFP_DIRECT_RECLAIM handling.  It's a mess,
> and while for mempool_alloc having it for compatibility might make some
> sense, I'd rather avoid it for this new interface where the semantics
> of failing allocations are going to be really annoying.

OK.

>> > +	if (!mempool_alloc_from_pool(pool, elems, count, i, gfp_temp)) {
>> > +		gfp_temp = gfp_mask;
>> > +		goto repeat_alloc;
>> 
>> Because this seems to be an infinite loop otherwise?
> 
> You mean if someone passed in !__GFP_DIRECT_RECLAIM and got the warning
> above?  Yes, IFF that code makes it to production and then runs into
> a low-memory situation it would.  But it's an API abuse.  The other
> option would be to just force __GFP_DIRECT_RECLAIM.

True, so let's ignore it.
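
(If you ever do want to force it instead, something along these lines near
the top of the function would probably do; just a sketch, no strong opinion:)

	/* force sleeping behaviour rather than looping forever on failure */
	if (WARN_ON_ONCE(!(gfp_mask & __GFP_DIRECT_RECLAIM)))
		gfp_mask |= __GFP_DIRECT_RECLAIM;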


