Message-ID: <429C561E-2F85-4DB5-993C-B2DD4E575BF0@redhat.com>
Date: Fri, 08 Jul 2022 14:10:50 -0400
From: "Benjamin Coddington" <bcodding@...hat.com>
To: "Eric Dumazet" <edumazet@...gle.com>
Cc: "Guillaume Nault" <gnault@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
"Jakub Kicinski" <kuba@...nel.org>,
"Paolo Abeni" <pabeni@...hat.com>, netdev <netdev@...r.kernel.org>,
"Chuck Lever" <chuck.lever@...cle.com>,
"Jeff Layton" <jlayton@...nel.org>,
"Trond Myklebust" <trond.myklebust@...merspace.com>,
"Anna Schumaker" <anna@...nel.org>,
"Steve French" <sfrench@...ba.org>,
"Josef Bacik" <josef@...icpanda.com>,
"Scott Mayhew" <smayhew@...hat.com>, "Tejun Heo" <tj@...nel.org>
Subject: Re: [RFC net] Should sk_page_frag() also look at the current GFP
context?
On 7 Jul 2022, at 12:29, Eric Dumazet wrote:
> On Fri, Jul 1, 2022 at 8:41 PM Guillaume Nault <gnault@...hat.com>
> wrote:
>>
>> I'm investigating a kernel oops that looks similar to
>> 20eb4f29b602 ("net: fix sk_page_frag() recursion from memory
>> reclaim")
>> and dacb5d8875cc ("tcp: fix page frag corruption on page fault").
>>
>> This time the problem happens on an NFS client, while the previous
>> BZs respectively used NBD and CIFS. While NBD and CIFS clear __GFP_FS
>> in their socket's ->sk_allocation field (using GFP_NOIO or GFP_NOFS),
>> NFS leaves sk_allocation at its default value since commit
>> a1231fda7e94 ("SUNRPC: Set memalloc_nofs_save() on all rpciod/xprtiod
>> jobs").
>>
>> To recap the original problems, in commit 20eb4f29b602 and
>> dacb5d8875cc, memory reclaim happened while executing
>> tcp_sendmsg_locked(). The code path entered tcp_sendmsg_locked()
>> recursively as pages to be reclaimed were backed by files on the
>> network. The problem was that both the outer and the inner
>> tcp_sendmsg_locked() calls used current->task_frag, thus leaving it in
>> an inconsistent state. The fix was to use the socket's ->sk_frag
>> instead for the file system socket, so that the inner and outer calls
>> wouldn't step on each other's toes.
>>
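For context (quoting from memory, so the exact flag comparison may be
slightly off), the helper in include/net/sock.h currently looks roughly
like this after those two fixes:

static inline struct page_frag *sk_page_frag(struct sock *sk)
{
        /* Per-task frag only for "plain" sockets: direct reclaim allowed,
         * __GFP_FS set, __GFP_MEMALLOC clear -- i.e. sockets whose
         * sendmsg() can't be re-entered from memory reclaim. */
        if ((sk->sk_allocation & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC |
                                  __GFP_FS)) ==
            (__GFP_DIRECT_RECLAIM | __GFP_FS))
                return &current->task_frag;

        return &sk->sk_frag;
}
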
>> But now that NFS doesn't modify ->sk_allocation anymore,
>> sk_page_frag() sees sunrpc sockets as plain TCP ones and returns
>> ->task_frag in the inner tcp_sendmsg_locked() call.
>>
>> Also, it looks like the trend is to avoid GFP_NOFS and GFP_NOIO and
>> use memalloc_no{fs,io}_save() instead. So maybe other network file
>> systems will also stop setting ->sk_allocation in the future, and we
>> should teach sk_page_frag() to look at the current GFP flags. Or
>> should we stick to ->sk_allocation and make NFS drop __GFP_FS again?
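(For reference, the scoped API in question; the helpers are from
linux/sched/mm.h, the comments are my summary:)

        unsigned int flags;

        flags = memalloc_nofs_save();   /* sets PF_MEMALLOC_NOFS on current */
        /*
         * Allocations in this scope must not recurse into the filesystem.
         * Allocators honor it by filtering their mask through
         * current_gfp_context(), which strips __GFP_FS here, e.g.:
         *
         *      gfp_t gfp = current_gfp_context(GFP_KERNEL);
         */
        memalloc_nofs_restore(flags);
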
>>
>> Signed-off-by: Guillaume Nault <gnault@...hat.com>
>
> Can you provide a Fixes: tag ?
>
>> ---
>> include/net/sock.h | 8 ++++++--
>> 1 file changed, 6 insertions(+), 2 deletions(-)
>>
>> diff --git a/include/net/sock.h b/include/net/sock.h
>> index 72ca97ccb460..b934c9851058 100644
>> --- a/include/net/sock.h
>> +++ b/include/net/sock.h
>> @@ -46,6 +46,7 @@
>> #include <linux/netdevice.h>
>> #include <linux/skbuff.h> /* struct sk_buff */
>> #include <linux/mm.h>
>> +#include <linux/sched/mm.h>
>> #include <linux/security.h>
>> #include <linux/slab.h>
>> #include <linux/uaccess.h>
>> @@ -2503,14 +2504,17 @@ static inline void sk_stream_moderate_sndbuf(struct sock *sk)
>> * socket operations and end up recursing into sk_page_frag()
>> * while it's already in use: explicitly avoid task page_frag
>> * usage if the caller is potentially doing any of them.
>> - * This assumes that page fault handlers use the GFP_NOFS flags.
>> + * This assumes that page fault handlers use the GFP_NOFS flags
>> + * or run under memalloc_nofs_save() protection.
>> *
>> * Return: a per task page_frag if context allows that,
>> * otherwise a per socket one.
>> */
>> static inline struct page_frag *sk_page_frag(struct sock *sk)
>> {
>> - if ((sk->sk_allocation & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC | __GFP_FS)) ==
>> + gfp_t gfp_mask = current_gfp_context(sk->sk_allocation);
>
> This is slowing down the TCP sendmsg() fast path, reading
> current->flags, possibly a cold value.
True - current->flags is pretty distant from current->task_frag.
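Roughly what that read costs -- paraphrasing current_gfp_context() from
linux/sched/mm.h from memory, so details may be off:

static inline gfp_t current_gfp_context(gfp_t flags)
{
        unsigned int pflags = READ_ONCE(current->flags);   /* the cold read */

        if (unlikely(pflags & (PF_MEMALLOC_NOIO | PF_MEMALLOC_NOFS |
                               PF_MEMALLOC_PIN))) {
                if (pflags & PF_MEMALLOC_NOIO)
                        flags &= ~(__GFP_IO | __GFP_FS);
                else if (pflags & PF_MEMALLOC_NOFS)
                        flags &= ~__GFP_FS;
                if (pflags & PF_MEMALLOC_PIN)
                        flags &= ~__GFP_MOVABLE;
        }
        return flags;
}
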
> I would suggest using one bit in sk, close to sk->sk_allocation to
> make the decision, instead of testing sk->sk_allocation for various
> flags.
>
> Not sure if we have available holes.
It's looking pretty packed on my build.. the nearest hole is 5
cachelines away.
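Just to make the suggestion concrete -- the field name below is made up,
nothing like it exists in struct sock today:

struct sock {
        /* ... */
        gfp_t                   sk_allocation;
        u8                      sk_use_task_frag:1;     /* hypothetical */
        /* ... */
};

static inline struct page_frag *sk_page_frag(struct sock *sk)
{
        /* Plain sockets leave the bit set; a network fs whose sendmsg()
         * can be re-entered from reclaim would clear it at setup time. */
        if (sk->sk_use_task_frag)
                return &current->task_frag;
        return &sk->sk_frag;
}
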
It'd be nice to allow network filesystems to use task_frag when possible.
If we expect sk_page_frag() to only return task_frag once per call
stack, then can we simply check whether it's already in use, perhaps by
looking at the size field?
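Rough sketch of the idea -- today the size field only tells us a page
was ever allocated, not that a send is in flight, so a real "busy"
marker would probably be needed; illustration only:

static inline struct page_frag *sk_page_frag(struct sock *sk)
{
        /* If the outer sender already owns current->task_frag, fall back
         * to the per-socket frag for the inner (reclaim) send. */
        if (current->task_frag.size)    /* stand-in for a proper busy test */
                return &sk->sk_frag;

        return &current->task_frag;
}
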
Or maybe we can set sk_allocation early from current_gfp_context(),
outside the fast path?
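That is, something along these lines when the transport sets up its
socket (placement hand-waved, not an actual patch):

        /* Fold the caller's memalloc_nofs scope into the socket's own
         * allocation mask once, at setup time, so sk_page_frag() keeps
         * testing only sk->sk_allocation on the sendmsg() fast path. */
        sk->sk_allocation = current_gfp_context(GFP_KERNEL);
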
Ben