Message-ID: <AE90C24D6B3A694183C094C60CF0A2F6026B717A@saturn3.aculab.com>
Date: Wed, 6 Mar 2013 11:17:39 -0000
From: "David Laight" <David.Laight@...LAB.COM>
To: "Cong Wang" <xiyou.wangcong@...il.com>, <netdev@...r.kernel.org>
Subject: RE: Spinlock spinning in __inet_hash_connect
> On Wed, 06 Mar 2013 at 09:52 GMT, Johannes Rudolph <johannes.rudolph@...glemail.com> wrote:
> > Hello all,
> >
> > I hope I'm on the correct mailing list for raising this issue. We are
> > seeing an issue while running a load test with jmeter against a web
> > server [1]. The test suite uses 50 threads to connect to a localhost
> > web server, runs one http request per connection and then loops. What
> > happens is that after the test runs for about 10 seconds (~ 100000
> > connections established / closed) the CPU load goes up and connection
> > rates slow down massively (see [1] for a chart). With `perf top` I'm
> > observing this on the _client_ side:
> >
> >     41.39%  [kernel]  [k] __ticket_spin_lock
> >     16.83%  [kernel]  [k] __inet_check_established
> >     12.50%  [kernel]  [k] __inet_hash_connect
> >      4.35%  [kernel]  [k] __ticket_spin_unlock
> >
>
> It seems both the IPv6 and IPv4 call paths contend for spin_lock(&head->lock),
> so I am just wondering if we could use RCU to protect the iteration of
> inet_bind_bucket_for_each().
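For reference, that suggestion amounts to replacing the spin_lock() around
the bind-bucket walk with an RCU read-side walk, roughly along these lines
(a rough, untested sketch only; the macro and field names follow
net/ipv4/inet_hashtables.c of this era and their exact signatures differ
between kernel versions):

	/*
	 * Sketch of the quoted idea, not a tested patch: walk the
	 * bind-hash chain under rcu_read_lock() instead of taking
	 * spin_lock(&head->lock).  inet_bind_bucket_for_each() would
	 * need an _rcu list-walk variant, and inserting a new bucket
	 * would still have to take head->lock.
	 */
	rcu_read_lock();
	inet_bind_bucket_for_each(tb, &head->chain) {
		if (net_eq(ib_net(tb), net) && tb->port == port) {
			/* port already has a bucket; continue on the
			 * __inet_check_established() path */
			break;
		}
	}
	rcu_read_unlock();
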
I'd guess that the code is having 'difficulty' allocating port numbers.
No amount of fiddling with locking will fix that.
There are probably a lot of sockets sitting in one of the 'wait' states
(most likely TIME_WAIT), each of which ties up a local port.
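To illustrate the effect (a minimal userspace sketch, not taken from the
jmeter setup in the report; the listening port 8080 and the iteration
count are assumptions): a tight connect()/close() loop against a local
listener leaves every closed connection in TIME_WAIT on the client side,
each one holding an ephemeral port until its timer expires, so the free-port
search in __inet_hash_connect() gets slower and connect() eventually fails
with EADDRNOTAVAIL.

/*
 * Minimal reproducer sketch: open and close many short-lived TCP
 * connections to a local listener (assumed on port 8080), then watch
 * how many sockets linger in TIME_WAIT, e.g. with
 * "ss -tan state time-wait | wc -l".  The usable ephemeral range is
 * given by net.ipv4.ip_local_port_range.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in addr;
	int i, fd;

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_port = htons(8080);	/* assumed local listener */
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

	for (i = 0; i < 100000; i++) {
		fd = socket(AF_INET, SOCK_STREAM, 0);
		if (fd < 0) {
			perror("socket");
			return 1;
		}
		if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
			/* EADDRNOTAVAIL here typically means the ephemeral
			 * port range is exhausted by TIME_WAIT sockets. */
			perror("connect");
			close(fd);
			return 1;
		}
		close(fd);	/* active close: this side enters TIME_WAIT */
	}
	return 0;
}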
David
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html