Message-ID: <b990116d-d475-4c57-9c6c-fafb4ef5fbad@suse.cz>
Date: Tue, 11 Feb 2025 16:05:54 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Michal Hocko <mhocko@...nel.org>, Dennis Zhou <dennis@...nel.org>,
Tejun Heo <tj@...nel.org>, Filipe Manana <fdmanana@...e.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
LKML <linux-kernel@...r.kernel.org>, Michal Hocko <mhocko@...e.com>
Subject: Re: [PATCH] mm, percpu: do not consider sleepable allocations atomic
On 2/6/25 13:26, Michal Hocko wrote:
> From: Michal Hocko <mhocko@...e.com>
>
> 28307d938fb2 ("percpu: make pcpu_alloc() aware of current gfp context")
> has fixed a reclaim recursion for scoped GFP_NOFS contexts. It did so
> by avoiding pcpu_alloc_mutex altogether, which is a correct solution:
> the worker context, which runs with full GFP_KERNEL allocation/reclaim
> power and takes the same lock, can then no longer block the NOFS
> pcpu_alloc caller.
>
> On the other hand this is a very conservative approach that can lead
> to allocation failures because the lockless pcpu_alloc implementation
> is quite limited.
>
> We have a bug report about premature allocation failures when an array
> of 193 SCSI devices is scanned. Sometimes (not consistently) the scan
> aborts because the iscsid daemon fails to create the queue for a random
> SCSI device during the scan. iscsid itself runs with PR_SET_IO_FLUSHER
> set, so all allocations from its process context are implicitly
> GFP_NOIO. This in turn makes every pcpu_alloc take the lockless path
> (without pcpu_alloc_mutex), which leads to the premature failures.
>
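For context (this is not part of the patch): prctl(PR_SET_IO_FLUSHER, 1) sets
PF_MEMALLOC_NOIO on the calling task, and pcpu_alloc() filters its gfp mask
through current_gfp_context(), which strips __GFP_IO/__GFP_FS for such tasks.
The toy userspace model below mirrors only that masking logic (the bit values
and the scoped_gfp() helper are made up for illustration, not the kernel's
definitions) and shows why the pre-patch check then sends the request down the
limited lockless path:

#include <stdbool.h>
#include <stdio.h>

#define __GFP_IO		0x01u
#define __GFP_FS		0x02u
#define __GFP_DIRECT_RECLAIM	0x04u
#define GFP_KERNEL		(__GFP_IO | __GFP_FS | __GFP_DIRECT_RECLAIM)

#define PF_MEMALLOC_NOIO	0x01u	/* set by prctl(PR_SET_IO_FLUSHER, 1) */
#define PF_MEMALLOC_NOFS	0x02u	/* set by memalloc_nofs_save() */

/* mirrors current_gfp_context(): apply the task's scoped constraints */
static unsigned int scoped_gfp(unsigned int task_flags, unsigned int gfp)
{
	if (task_flags & PF_MEMALLOC_NOIO)
		gfp &= ~(__GFP_IO | __GFP_FS);
	else if (task_flags & PF_MEMALLOC_NOFS)
		gfp &= ~__GFP_FS;
	return gfp;
}

int main(void)
{
	/* iscsid-like task: PR_SET_IO_FLUSHER implies PF_MEMALLOC_NOIO */
	unsigned int gfp = scoped_gfp(PF_MEMALLOC_NOIO, GFP_KERNEL);

	/* pre-patch check: anything short of full GFP_KERNEL is "atomic" */
	bool is_atomic = (gfp & GFP_KERNEL) != GFP_KERNEL;

	printf("masked gfp=0x%x, old is_atomic=%d\n", gfp, is_atomic);
	return 0;
}
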
> It has turned out that iscsid has worked around this by dropping
> PR_SET_IO_FLUSHER (https://github.com/open-iscsi/open-iscsi/pull/382)
> when scanning the host. But we can do better on the kernel side and
> use pcpu_alloc_mutex for NOIO and NOFS constrained allocation scopes
> as well. We just need the WQ worker to never trigger IO/FS reclaim.
> Achieve that by enforcing scoped GFP_NOIO for the whole execution of
> pcpu_balance_workfn (which implies the NOFS constraint as well). This
> removes the problematic lock dependency chain and preserves the full
> allocation power of the pcpu_alloc call.
>
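The scoped-NOIO pattern the patch applies to pcpu_balance_workfn looks,
schematically, like the sketch below (the function and mutex names are
illustrative stand-ins, not actual mm/percpu.c code). Everything allocated
between save and restore is implicitly degraded to GFP_NOIO by
current_gfp_context(), so the section may still sleep and reclaim clean
pages, but can never recurse into IO or FS reclaim:

#include <linux/sched/mm.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(example_mutex);	/* stand-in for pcpu_alloc_mutex */

static void example_worker(void)
{
	/* force GFP_NOIO semantics for everything in this scope */
	unsigned int noio_flags = memalloc_noio_save();

	mutex_lock(&example_mutex);
	/* ... allocate/populate chunks; GFP_KERNEL acts as GFP_NOIO here ... */
	mutex_unlock(&example_mutex);

	memalloc_noio_restore(noio_flags);
}
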
> While at it, make is_atomic really test whether the allocation is
> allowed to block.
>
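To make that last point concrete: with a scoped NOIO or NOFS constraint the
masked mask is GFP_KERNEL minus __GFP_IO and/or __GFP_FS, so the old test
(gfp & GFP_KERNEL) != GFP_KERNEL evaluates to true and forces the atomic
path, while the new !gfpflags_allow_blocking(gfp) still sees
__GFP_DIRECT_RECLAIM set and correctly lets the allocation sleep and take
pcpu_alloc_mutex.
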
> Fixes: 28307d938fb2 ("percpu: make pcpu_alloc() aware of current gfp context")
> Signed-off-by: Michal Hocko <mhocko@...e.com>
Acked-by: Vlastimil Babka <vbabka@...e.cz>
> ---
> mm/percpu.c | 8 +++++++-
> 1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/mm/percpu.c b/mm/percpu.c
> index d8dd31a2e407..192c2a8e901d 100644
> --- a/mm/percpu.c
> +++ b/mm/percpu.c
> @@ -1758,7 +1758,7 @@ void __percpu *pcpu_alloc_noprof(size_t size, size_t align, bool reserved,
> gfp = current_gfp_context(gfp);
> /* whitelisted flags that can be passed to the backing allocators */
> pcpu_gfp = gfp & (GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);
> - is_atomic = (gfp & GFP_KERNEL) != GFP_KERNEL;
> + is_atomic = !gfpflags_allow_blocking(gfp);
> do_warn = !(gfp & __GFP_NOWARN);
>
> /*
> @@ -2204,7 +2204,12 @@ static void pcpu_balance_workfn(struct work_struct *work)
> * to grow other chunks. This then gives pcpu_reclaim_populated() time
> * to move fully free chunks to the active list to be freed if
> * appropriate.
> + *
> + * Enforce GFP_NOIO allocations because we have pcpu_alloc users
> + * constrained to GFP_NOIO/NOFS contexts and they could form lock
> + * dependency through pcpu_alloc_mutex
> */
> + unsigned int flags = memalloc_noio_save();
> mutex_lock(&pcpu_alloc_mutex);
> spin_lock_irq(&pcpu_lock);
>
> @@ -2215,6 +2220,7 @@ static void pcpu_balance_workfn(struct work_struct *work)
>
> spin_unlock_irq(&pcpu_lock);
> mutex_unlock(&pcpu_alloc_mutex);
> + memalloc_noio_restore(flags);
> }
>
> /**