Message-ID: <20161005222438.GA86006@ast-mbp.thefacebook.com>
Date: Wed, 5 Oct 2016 15:24:40 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: David Miller <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>,
Alexei Starovoitov <ast@...nel.org>,
Greg Thelen <gthelen@...gle.com>, Chris Mason <clm@...com>,
kernel-team@...com
Subject: Re: [PATCH net] netlink: do not enter direct reclaim from
netlink_dump()
On Thu, Oct 06, 2016 at 04:13:18AM +0900, Eric Dumazet wrote:
>
> While we are at it, since we do an order-3 allocation, allow to use
> all the allocated bytes instead of 16384 to reduce syscalls during
> large dumps.
>
> iproute2 already uses 32KB recvmsg() buffer sizes.
....
> diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
> index 627f898c05b96552318a881ce995ccc3342e1576..62bea4591054820eb516ef016214ee23fe89b6e9 100644
> --- a/net/netlink/af_netlink.c
> +++ b/net/netlink/af_netlink.c
> @@ -1832,7 +1832,7 @@ static int netlink_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
> /* Record the max length of recvmsg() calls for future allocations */
> nlk->max_recvmsg_len = max(nlk->max_recvmsg_len, len);
> nlk->max_recvmsg_len = min_t(size_t, nlk->max_recvmsg_len,
> - 16384);
> + SKB_WITH_OVERHEAD(32768));
sure, it won't stress the allocator any more than today, but why increase the size?
iproute2 increased the buffer from 16k to 32k due to 'msg_trunc', which
I think was due to this issue. If we go with SKB_WITH_OVERHEAD(16384)
we can go back to 16k in iproute2 as well.
Do we have any data showing that a buffer of 32k - skb_shared_info vs 16k
will meaningfully reduce the number of syscalls?
We're seeing direct reclaim get hammered due to the order-3 allocations.
Not sure whether & ~__GFP_DIRECT_RECLAIM is going to be enough.
Currently we're testing with SKB_WITH_OVERHEAD(16384) and ~__GFP_DIRECT_RECLAIM.
It will take another week to make sure SKB_WITH_OVERHEAD(32768) is ok.
imo this optimization is being done too soon.
I'd be much more comfortable with the SKB_WITH_OVERHEAD(16384) value here.
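
To make sure we're talking about the same arithmetic, here is the sizing
math I have in mind, as a throwaway userspace sketch (the 320-byte
skb_shared_info, the 64-byte cache line and the power-of-two
kmalloc_bucket() are my own round-number stand-ins, not numbers from a
real build):

#include <stdio.h>
#include <stddef.h>

#define SMP_CACHE_BYTES         64
#define SHARED_INFO_SIZE        320     /* assumed sizeof(struct skb_shared_info) */
#define SKB_DATA_ALIGN(x)       (((x) + SMP_CACHE_BYTES - 1) & ~(size_t)(SMP_CACHE_BYTES - 1))
#define SKB_WITH_OVERHEAD(x)    ((x) - SKB_DATA_ALIGN(SHARED_INFO_SIZE))

/* stand-in for the power-of-two kmalloc size classes */
static size_t kmalloc_bucket(size_t len)
{
        size_t b = 4096;

        while (b < len)
                b <<= 1;
        return b;
}

int main(void)
{
        size_t caps[] = { 16384, SKB_WITH_OVERHEAD(16384),
                          SKB_WITH_OVERHEAD(32768), 32768 };
        size_t i;

        for (i = 0; i < sizeof(caps) / sizeof(caps[0]); i++) {
                /* __alloc_skb() aligns the data size and adds shared_info back */
                size_t need = SKB_DATA_ALIGN(caps[i]) + SKB_DATA_ALIGN(SHARED_INFO_SIZE);

                printf("cap %5zu -> kmalloc %5zu -> bucket %5zu\n",
                       caps[i], need, kmalloc_bucket(need));
        }
        return 0;
}

If those numbers are right, with 4K pages the SKB_WITH_OVERHEAD(16384)
case fits an order-2 block, the current 16384 cap and
SKB_WITH_OVERHEAD(32768) both land in an order-3 block, and a raw 32768
cap would spill into order-4. So SKB_WITH_OVERHEAD(16384) would also
have the side effect of dropping the opportunistic allocation to
order-2.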
>
> copied = data_skb->len;
> if (len < copied) {
> @@ -2083,8 +2083,9 @@ static int netlink_dump(struct sock *sk)
>
> if (alloc_min_size < nlk->max_recvmsg_len) {
> alloc_size = nlk->max_recvmsg_len;
> - skb = alloc_skb(alloc_size, GFP_KERNEL |
> - __GFP_NOWARN | __GFP_NORETRY);
> + skb = alloc_skb(alloc_size,
> + (GFP_KERNEL & ~__GFP_DIRECT_RECLAIM) |
> + __GFP_NOWARN | __GFP_NORETRY);
> }
> if (!skb) {
> alloc_size = alloc_min_size;
>
>
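
For completeness, my reading of what the second hunk asks the allocator
to do, spelled out as an annotated sketch (same structure as the hunk
above, not a drop-in; the fallback allocation is elided both there and
here):

        if (alloc_min_size < nlk->max_recvmsg_len) {
                alloc_size = nlk->max_recvmsg_len;
                skb = alloc_skb(alloc_size,
                                /* GFP_KERNEL minus __GFP_DIRECT_RECLAIM:
                                 * kswapd may still be woken, but the order-3
                                 * attempt never blocks in direct reclaim or
                                 * direct compaction */
                                (GFP_KERNEL & ~__GFP_DIRECT_RECLAIM) |
                                /* no failure warning, no retries */
                                __GFP_NOWARN | __GFP_NORETRY);
        }
        if (!skb) {
                /* fall back to the smaller alloc_min_size request;
                 * the actual allocation call is elided in the hunk above */
                alloc_size = alloc_min_size;
                ...
        }

So the open question for me is whether skipping direct reclaim on the
order-3 attempt alone is enough, or whether we also need the smaller
size.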