Message-ID: <1492312137.10587.87.camel@edumazet-glaptop3.roam.corp.google.com>
Date: Sat, 15 Apr 2017 20:08:57 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Jamal Hadi Salim <jhs@...atatu.com>
Cc: Johannes Berg <johannes@...solutions.net>,
Pablo Neira Ayuso <pablo@...filter.org>,
David Miller <davem@...emloft.net>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [RFC patch 1/1] large netlink dumps
On Sat, 2017-04-15 at 13:07 -0400, Jamal Hadi Salim wrote:
> Eric,
>
> How does attached look instead of the 32K?
> I found it helps to let user space suggest something
> larger.
>
> cheers,
> jamal
Looks dangerous to me, for various reasons.
1) Memory allocations might not like it
Have you tried your change after a user does a
setsockopt(SO_RCVBUFFORCE, 256 MB) followed by a
recvmsg(.. 64 MB)?
Presumably, we could replace 32768 by (PAGE_SIZE <<
PAGE_ALLOC_COSTLY_ORDER), but this will not matter on x86.
2) We might have paths in the kernel filling a potentially big skb without
yielding the CPU or releasing a spinlock or a mutex -> a latency source.
What perf numbers do you have when using 1 MB buffers instead of 32 KB?
The syscall overhead seems tiny compared to the actual cost of filling
the netlink message, accessing thousands of cache lines all over the
place.