Message-ID: <zeuszr6ot5qdi46f5gvxa2c5efy4mc6eaea3au52nqnbhjek7o@l43ps2jtip7x>
Date: Wed, 2 Apr 2025 21:37:40 -0700
From: Shakeel Butt <shakeel.butt@...ux.dev>
To: Michal Hocko <mhocko@...e.com>
Cc: Dave Chinner <david@...morbit.com>, Yafang Shao <laoar.shao@...il.com>,
Harry Yoo <harry.yoo@...cle.com>, Kees Cook <kees@...nel.org>, joel.granados@...nel.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org, Josef Bacik <josef@...icpanda.com>,
linux-mm@...ck.org, Vlastimil Babka <vbabka@...e.cz>
Subject: Re: [PATCH] proc: Avoid costly high-order page allocations when
reading proc files
On Wed, Apr 02, 2025 at 02:24:45PM +0200, Michal Hocko wrote:
> diff --git a/mm/util.c b/mm/util.c
> index 60aa40f612b8..8386f6976d7d 100644
> --- a/mm/util.c
> +++ b/mm/util.c
> @@ -601,14 +601,18 @@ static gfp_t kmalloc_gfp_adjust(gfp_t flags, size_t size)
> * We want to attempt a large physically contiguous block first because
> * it is less likely to fragment multiple larger blocks and therefore
> * contribute to a long term fragmentation less than vmalloc fallback.
> - * However make sure that larger requests are not too disruptive - no
> - * OOM killer and no allocation failure warnings as we have a fallback.
> + * However make sure that larger requests are not too disruptive - i.e.
> + * do not direct reclaim unless physically contiguous memory is preferred
> + * (__GFP_RETRY_MAYFAIL mode). We still kick in kswapd/kcompactd to start
> + * working in the background, but the allocation itself does not wait
> + * for them.
> */
> if (size > PAGE_SIZE) {
> flags |= __GFP_NOWARN;
>
> if (!(flags & __GFP_RETRY_MAYFAIL))
> flags |= __GFP_NORETRY;
> + else
> + flags &= ~__GFP_DIRECT_RECLAIM;
I think you wanted the following instead:
if (!(flags & __GFP_RETRY_MAYFAIL))
flags &= ~__GFP_DIRECT_RECLAIM;
This is what Dave is asking for as well, for the kmalloc() case of kvmalloc().
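
To make the end result concrete, this is how I read the helper with that
fix folded in - a sketch of the intent, not the final patch:

	static gfp_t kmalloc_gfp_adjust(gfp_t flags, size_t size)
	{
		/*
		 * For large requests, attempt the physically contiguous
		 * allocation first, but keep it non-disruptive: the vmalloc
		 * fallback handles failure, so suppress warnings and do not
		 * insist on success here.
		 */
		if (size > PAGE_SIZE) {
			flags |= __GFP_NOWARN;

			/*
			 * Direct reclaim only when the caller explicitly
			 * prefers physically contiguous memory
			 * (__GFP_RETRY_MAYFAIL); otherwise leave reclaim to
			 * kswapd/kcompactd and fall back to vmalloc quickly.
			 */
			if (!(flags & __GFP_RETRY_MAYFAIL))
				flags &= ~__GFP_DIRECT_RECLAIM;

			/* nofail semantic is implemented by the vmalloc fallback */
			flags &= ~__GFP_NOFAIL;
		}
		return flags;
	}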
>
> /* nofail semantic is implemented by the vmalloc fallback */
> flags &= ~__GFP_NOFAIL;
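
This part makes sense to me: the vmalloc side is what ultimately honors
__GFP_NOFAIL, which is why the kmalloc attempt is allowed to fail fast.
For reference, roughly how the fallback is wired up in kvmalloc_node() -
a simplified sketch, the upstream code has additional GFP sanity checks:

	void *kvmalloc_node(size_t size, gfp_t flags, int node)
	{
		void *ret;

		/* Fast path: physically contiguous, with softened flags. */
		ret = kmalloc_node(size, kmalloc_gfp_adjust(flags, size), node);
		if (ret || size <= PAGE_SIZE)
			return ret;

		/* Slow path: virtually contiguous; nofail is honored here. */
		return __vmalloc_node(size, 1, flags, node,
				      __builtin_return_address(0));
	}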
> --
> Michal Hocko
> SUSE Labs