Message-ID: <866a0ac45418a3543c9ddc2869671fc9c2b20afb.camel@redhat.com>
Date: Mon, 03 Oct 2022 13:17:15 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: Guillaume Nault <gnault@...hat.com>,
David Miller <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Eric Dumazet <edumazet@...gle.com>
Cc: netdev@...r.kernel.org, Chuck Lever <chuck.lever@...cle.com>,
Jeff Layton <jlayton@...nel.org>,
Trond Myklebust <trond.myklebust@...merspace.com>,
Anna Schumaker <anna@...nel.org>, linux-nfs@...r.kernel.org,
Benjamin Coddington <bcodding@...hat.com>
Subject: Re: [PATCH net] sunrpc: Use GFP_NOFS to prevent use of
current->task_frag.
On Wed, 2022-09-21 at 14:16 +0200, Guillaume Nault wrote:
> Commit a1231fda7e94 ("SUNRPC: Set memalloc_nofs_save() on all
> rpciod/xprtiod jobs") stopped setting sk->sk_allocation explicitly in
> favor of using memalloc_nofs_save()/memalloc_nofs_restore() critical
> sections.
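>
> For context, the scoped API sets PF_MEMALLOC_NOFS on the current
> task, so every allocation inside the section implicitly behaves as
> GFP_NOFS, roughly:
>
> 	unsigned int flags;
>
> 	flags = memalloc_nofs_save();
> 	/* any allocation here implicitly drops __GFP_FS, even if
> 	 * the caller passes GFP_KERNEL
> 	 */
> 	memalloc_nofs_restore(flags);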
>
> However, ->sk_allocation isn't used just by the memory allocator.
> In particular, sk_page_frag() uses it to figure out if it can return
> the per-task page_frag or if it has to use the per-socket one.
> With ->sk_allocation set to the default GFP_KERNEL, sk_page_frag()
> now returns current->task_frag, which might already be in use in the
> current context if the call happens during memory reclaim.
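>
> For reference, at the time of this patch sk_page_frag() in
> include/net/sock.h reads roughly as follows (simplified):
>
> 	static inline struct page_frag *sk_page_frag(struct sock *sk)
> 	{
> 		if ((sk->sk_allocation & (__GFP_DIRECT_RECLAIM |
> 					  __GFP_MEMALLOC | __GFP_FS)) ==
> 		    (__GFP_DIRECT_RECLAIM | __GFP_FS))
> 			return &current->task_frag;
>
> 		return &sk->sk_frag;
> 	}
>
> GFP_KERNEL carries both __GFP_DIRECT_RECLAIM and __GFP_FS, so it
> selects the task page_frag; clearing __GFP_FS steers the code to
> the per-socket one.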
>
> Fix this by setting ->sk_allocation to GFP_NOFS.
> Note that we can't just instruct sk_page_frag() to look at
> current->flags, because it could generate a cache miss, thus slowing
> down the TCP fast path.
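>
> The change itself amounts to restoring an explicit assignment where
> the transport sockets are set up in net/sunrpc/xprtsock.c, along the
> lines of (sketch of the relevant one-liner, not the exact hunk):
>
> 	/* e.g. in xs_tcp_finish_connecting() */
> 	sk->sk_allocation = GFP_NOFS;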
>
> This is similar to the problems fixed by the following two commits:
> * cifs: commit dacb5d8875cc ("tcp: fix page frag corruption on page
> fault").
> * nbd: commit 20eb4f29b602 ("net: fix sk_page_frag() recursion from
> memory reclaim").
>
> Link: https://lore.kernel.org/netdev/b4d8cb09c913d3e34f853736f3f5628abfd7f4b6.1656699567.git.gnault@redhat.com/
> Fixes: a1231fda7e94 ("SUNRPC: Set memalloc_nofs_save() on all rpciod/xprtiod jobs")
> Signed-off-by: Guillaume Nault <gnault@...hat.com>
@Trond, @Anna, @Chuck: are you OK with this patch? Should we take it
via the net tree, or will you merge it?
Thanks!
Paolo