Message-ID: <81c37127dda0f2f69a019d67d4420f62c995ee7f.camel@redhat.com>
Date: Thu, 01 Feb 2024 14:16:41 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Yunsheng Lin <linyunsheng@...wei.com>, davem@...emloft.net,
kuba@...nel.org
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org, Alexander Duyck
<alexanderduyck@...com>, Alexander Duyck <alexander.duyck@...il.com>,
"Michael S. Tsirkin"
<mst@...hat.com>, Jason Wang <jasowang@...hat.com>, Andrew Morton
<akpm@...ux-foundation.org>, Eric Dumazet <edumazet@...gle.com>,
kvm@...r.kernel.org, virtualization@...ts.linux.dev, linux-mm@...ck.org
Subject: Re: [PATCH net-next v4 2/5] page_frag: unify gfp bits for order 3
page allocation
On Tue, 2024-01-30 at 19:37 +0800, Yunsheng Lin wrote:
> Currently there seem to be three page frag implementations
> which all try to allocate an order 3 page; if that fails,
> they fall back to allocating an order 0 page, and each of
> them allows the order 3 allocation to fail under certain
> conditions by using specific gfp bits.
>
> The gfp bits used for the order 3 page allocation differ
> between the implementations: __GFP_NOMEMALLOC is or'ed in to
> forbid access to the emergency memory reserves in
> __page_frag_cache_refill(), but not in the other
> implementations, while __GFP_DIRECT_RECLAIM is masked off to
> avoid direct reclaim in skb_page_frag_refill(), but not in
> __page_frag_cache_refill().
>
> This patch unifies the gfp bits used by the different
> implementations by or'ing in __GFP_NOMEMALLOC and masking off
> __GFP_DIRECT_RECLAIM for the order 3 page allocation, to
> avoid putting unnecessary pressure on mm.
>
> Signed-off-by: Yunsheng Lin <linyunsheng@...wei.com>
> Reviewed-by: Alexander Duyck <alexanderduyck@...com>
> CC: Alexander Duyck <alexander.duyck@...il.com>
> ---
> drivers/vhost/net.c | 2 +-
> mm/page_alloc.c | 4 ++--
> net/core/sock.c | 2 +-
> 3 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index f2ed7167c848..e574e21cc0ca 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -670,7 +670,7 @@ static bool vhost_net_page_frag_refill(struct vhost_net *net, unsigned int sz,
> /* Avoid direct reclaim but allow kswapd to wake */
> pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
> __GFP_COMP | __GFP_NOWARN |
> - __GFP_NORETRY,
> + __GFP_NORETRY | __GFP_NOMEMALLOC,
> SKB_FRAG_PAGE_ORDER);
> if (likely(pfrag->page)) {
> pfrag->size = PAGE_SIZE << SKB_FRAG_PAGE_ORDER;
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c0f7e67c4250..636145c29f70 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4685,8 +4685,8 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
> gfp_t gfp = gfp_mask;
>
> #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
> - gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY |
> - __GFP_NOMEMALLOC;
> + gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
> + __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
> page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
> PAGE_FRAG_CACHE_MAX_ORDER);
> nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
> diff --git a/net/core/sock.c b/net/core/sock.c
> index 88bf810394a5..8289a3d8c375 100644
> --- a/net/core/sock.c
> +++ b/net/core/sock.c
> @@ -2919,7 +2919,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t gfp)
> /* Avoid direct reclaim but allow kswapd to wake */
> pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
> __GFP_COMP | __GFP_NOWARN |
> - __GFP_NORETRY,
> + __GFP_NORETRY | __GFP_NOMEMALLOC,
> SKB_FRAG_PAGE_ORDER);
This will prevent memory reserve usage when allocating order 3 pages,
but not when allocating a single page as a fallback. Still different
from the __page_frag_cache_refill() allocator - which never accesses
the memory reserves.
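
To illustrate the asymmetry, here is a minimal sketch of the refill
path with this patch applied (simplified from skb_page_frag_refill();
the page reuse check and the high-order disable knob are omitted, and
the function name is made up):

	static bool sketch_page_frag_refill(struct page_frag *pfrag, gfp_t gfp)
	{
		pfrag->offset = 0;

		/* Order 3 attempt: direct reclaim masked off, and with this
		 * patch the emergency reserves are also forbidden.
		 */
		pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
					  __GFP_COMP | __GFP_NOWARN |
					  __GFP_NORETRY | __GFP_NOMEMALLOC,
					  SKB_FRAG_PAGE_ORDER);
		if (likely(pfrag->page)) {
			pfrag->size = PAGE_SIZE << SKB_FRAG_PAGE_ORDER;
			return true;
		}

		/* Order 0 fallback: the caller's gfp is used as-is, so a
		 * memalloc socket can still dip into the reserves here.
		 */
		pfrag->page = alloc_page(gfp);
		if (likely(pfrag->page)) {
			pfrag->size = PAGE_SIZE;
			return true;
		}
		return false;
	}
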
I'm unsure we want to propagate the __page_frag_cache_refill()
behavior here; the current behavior could be required by some systems.
It looks like this series otherwise leaves the skb_page_frag_refill()
allocator alone; what about dropping this chunk, too?
Thanks!
Paolo