Message-ID: <dec19492-bc23-49a0-a2b0-1cd8b5c7726e@suse.cz>
Date: Wed, 12 Nov 2025 11:53:39 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Christoph Hellwig <hch@....de>, Andrew Morton <akpm@...ux-foundation.org>
Cc: Christoph Lameter <cl@...two.org>, David Rientjes <rientjes@...gle.com>,
 Roman Gushchin <roman.gushchin@...ux.dev>, Harry Yoo <harry.yoo@...cle.com>,
 Eric Biggers <ebiggers@...nel.org>, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH 6/7] mempool: fix a wakeup race when sleeping for elements

On 11/11/25 14:52, Christoph Hellwig wrote:
> Waiters always need to re-check their condition after adding themselves
> to the waitqueue, as otherwise they might miss a wakeup.  Check
> for elements in the pool and use them before going to sleep.
> 
> The workaround mentioned in the comment was probably added because of this
> race, but it seems genuinely useful for other reasons, so keep it and
> update the comment describing it.

Thanks for looking into that historic comment :)
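
The pattern the first paragraph restores is the classic wait-queue one,
roughly like this (a minimal sketch with placeholder wq/condition, not the
mempool code itself):

	DEFINE_WAIT(wait);

	for (;;) {
		prepare_to_wait(&wq, &wait, TASK_UNINTERRUPTIBLE);
		if (condition)	/* re-check after we are queued, so a	*/
			break;	/* concurrent wake_up() can't be missed	*/
		io_schedule();
	}
	finish_wait(&wq, &wait);
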
> Signed-off-by: Christoph Hellwig <hch@....de>
> ---
>  mm/mempool.c | 13 +++++++++----
>  1 file changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/mempool.c b/mm/mempool.c
> index 850362f4ca7a..8cf3b5705b7f 100644
> --- a/mm/mempool.c
> +++ b/mm/mempool.c
> @@ -388,6 +388,7 @@ static void *mempool_alloc_from_pool(struct mempool *pool, gfp_t gfp_mask)
>  	spin_lock_irqsave(&pool->lock, flags);
>  	if (unlikely(!pool->curr_nr))
>  		goto fail;
> +alloc:
>  	element = remove_element(pool);
>  	spin_unlock_irqrestore(&pool->lock, flags);
>  
> @@ -406,13 +407,17 @@ static void *mempool_alloc_from_pool(struct mempool *pool, gfp_t gfp_mask)
>  		DEFINE_WAIT(wait);
>  
>  		prepare_to_wait(&pool->wait, &wait, TASK_UNINTERRUPTIBLE);
> +		if (pool->curr_nr) {
> +			finish_wait(&pool->wait, &wait);
> +			goto alloc;
> +		}
>  		spin_unlock_irqrestore(&pool->lock, flags);

I think the race cannot exist in the first place, since pool->lock is only
dropped after prepare_to_wait()?
The freeing path can't bump pool->curr_nr without holding the lock, so the
condition you added can never be true here, no?
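
Spelled out, the interleaving I mean (a sketch; the mempool_free() side is
abbreviated in the right-hand comments):

	/* waiter in mempool_alloc_from_pool()      mempool_free()		*/
	spin_lock_irqsave(&pool->lock, flags);
	/* sees pool->curr_nr == 0 */
	prepare_to_wait(&pool->wait, &wait,
			TASK_UNINTERRUPTIBLE);      /* blocks on pool->lock	*/
	spin_unlock_irqrestore(&pool->lock, flags);
	                                            /* add_element(), curr_nr++	*/
	                                            /* unlock, then wake_up()	*/
	io_schedule_timeout(5 * HZ);                /* woken, nothing lost	*/

By the time the freeing side can bump curr_nr, the waiter is already on the
waitqueue, so its wake_up() can't be lost.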

>  
>  		/*
> -		 * Wait for someone else to return an element to @pool.
> -		 *
> -		 * FIXME: this should be io_schedule().  The timeout is there as
> -		 * a workaround for some DM problems in 2.6.18.
> +		 * Wait for someone else to return an element to @pool, but wake
> +		 * up occasionally, as memory pressure might have reduced in the
> +		 * meantime and the normal allocation in alloc_fn could succeed
> +		 * even if no element was returned.
>  		 */
>  		io_schedule_timeout(5 * HZ);
>  		finish_wait(&pool->wait, &wait);

