[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20151102062240.GQ22011@ZenIV.linux.org.uk>
Date: Mon, 2 Nov 2015 06:22:40 +0000
From: Al Viro <viro@...IV.linux.org.uk>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
David Miller <davem@...emloft.net>,
Stephen Hemminger <stephen@...workplumber.org>,
Network Development <netdev@...r.kernel.org>,
David Howells <dhowells@...hat.com>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [Bug 106241] New: shutdown(3)/close(3) behaviour is incorrect
for sockets in accept(3)
On Sun, Nov 01, 2015 at 06:14:43PM -0800, Eric Dumazet wrote:
> On Mon, 2015-11-02 at 00:24 +0000, Al Viro wrote:
>
> > This ought to be a bit cleaner. Eric, could you test the variant below on your
> > setup?
>
> Sure !
>
> 5 runs of :
> lpaa24:~# taskset ff0ff ./opensock -t 16 -n 10000000 -l 10
>
> total = 4386311
> total = 4560402
> total = 4437309
> total = 4516227
> total = 4478778
Umm... With Linus' variant it was what, around 4000000? +10% or so, then...
> With 48 threads :
>
> ./opensock -t 48 -n 10000000 -l 10
> total = 4940245
> total = 4848513
> total = 4813153
> total = 4813946
> total = 5127804
And that - +40%? Interesting... And it looks like at 48 threads you are
still seeing arseloads of contention, but apparently less than with Linus'
variant... What if you throw the __clear_close_on_exec() patch on
top of that?
Looks like it's spending less time under ->files_lock... Could you get
information on fs/file.o hotspots?
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Powered by blists - more mailing lists