Message-ID: <1475710521.28155.234.camel@edumazet-glaptop3.roam.corp.google.com>
Date: Thu, 06 Oct 2016 08:35:21 +0900
From: Eric Dumazet <eric.dumazet@...il.com>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc: David Miller <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>,
Alexei Starovoitov <ast@...nel.org>,
Greg Thelen <gthelen@...gle.com>, Chris Mason <clm@...com>,
kernel-team@...com
Subject: Re: [PATCH net] netlink: do not enter direct reclaim from
netlink_dump()
On Wed, 2016-10-05 at 15:24 -0700, Alexei Starovoitov wrote:
> On Thu, Oct 06, 2016 at 04:13:18AM +0900, Eric Dumazet wrote:
> >
> > While we are at it, since we do an order-3 allocation, allow to use
> > all the allocated bytes instead of 16384 to reduce syscalls during
> > large dumps.
> >
> > iproute2 already uses 32KB recvmsg() buffer sizes.
> ....
> > diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
> > index 627f898c05b96552318a881ce995ccc3342e1576..62bea4591054820eb516ef016214ee23fe89b6e9 100644
> > --- a/net/netlink/af_netlink.c
> > +++ b/net/netlink/af_netlink.c
> > @@ -1832,7 +1832,7 @@ static int netlink_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
> > /* Record the max length of recvmsg() calls for future allocations */
> > nlk->max_recvmsg_len = max(nlk->max_recvmsg_len, len);
> > nlk->max_recvmsg_len = min_t(size_t, nlk->max_recvmsg_len,
> > - 16384);
> > + SKB_WITH_OVERHEAD(32768));
>
> sure, it won't stress it more than what it is today, but why increase it?
> iproute2 increased the buffer from 16k to 32k due to 'msg_trunc' which
> I think was due to this issue. If we go with SKB_WITH_OVERHEAD(16384)
> we can go back to 16k in iproute2 as well.
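
For reference, here is a minimal stand-alone sketch of the arithmetic behind
SKB_WITH_OVERHEAD(32768). The cache-line size and the skb_shared_info size
below are assumptions (roughly what you would see on x86_64); the real values
depend on kernel config and architecture.

#include <stdio.h>

#define SMP_CACHE_BYTES       64   /* assumption: typical x86_64 cache line */
#define SKB_SHARED_INFO_SIZE  320  /* assumption: rough sizeof(struct skb_shared_info) */
#define ALIGN(x, a)           (((x) + (a) - 1) & ~((a) - 1))
#define SKB_DATA_ALIGN(x)     ALIGN(x, SMP_CACHE_BYTES)
#define SKB_WITH_OVERHEAD(x)  ((x) - SKB_DATA_ALIGN(SKB_SHARED_INFO_SIZE))

int main(void)
{
	/* Old cap vs. what an order-3 (32KB) allocation can actually carry
	 * once the skb_shared_info overhead is subtracted.
	 */
	printf("old hard-coded cap       : %d bytes\n", 16384);
	printf("SKB_WITH_OVERHEAD(32768) : %d bytes\n",
	       SKB_WITH_OVERHEAD(32768));
	return 0;
}

The point of the patch hunk above is exactly that difference: the dump
allocation is order-3 anyway, so capping max_recvmsg_len at 16384 wastes
roughly half of the usable payload per recvmsg().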
Wow, if the iproute2 tool really increased the buffer to work around a
bug in the kernel, we should be worried.
Hopefully the issue was fixed for good in the kernel?
commit db65a3aaf29ecce2e34271d52e8d2336b97bd9fe
("netlink: Trim skb to alloc size to avoid MSG_TRUNC")