Message-Id: <20161219.205646.1955469060856026212.davem@davemloft.net>
Date: Mon, 19 Dec 2016 20:56:46 -0500 (EST)
From: David Miller <davem@...emloft.net>
To: jbacik@...com
Cc: hannes@...essinduktion.org, tom@...bertland.com,
kraigatgoog@...il.com, eric.dumazet@...il.com,
netdev@...r.kernel.org
Subject: Re: Soft lockup in inet_put_port on 4.6

From: Josef Bacik <jbacik@...com>
Date: Sat, 17 Dec 2016 13:26:00 +0000

> So take my current duct tape fix and augment it with more
> information in the bind bucket? I'm not sure how to make this work
> without at least having a list of the bound addrs as well to make
> sure we are really ok. I suppose we could save the fastreuseport
> address that last succeeded to make it work properly, but I'd have
> to make it protocol agnostic and then have a callback so the
> protocol can make sure we don't have to do the bind_conflict run.
> Is that what you were thinking of? Thanks,
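
To make the quoted idea concrete, a rough sketch of caching the last
successful fastreuseport bind in the bucket might look like the
following. This is only an illustration of the idea; the struct,
field and helper names (inet_bind_bucket_fastpath, bound_addr_matches)
are invented here and are not taken from any actual patch:

#include <net/sock.h>		/* struct sock, sock_i_uid(), uid_eq() */
#include <linux/socket.h>	/* struct sockaddr_storage */

/*
 * Hypothetical sketch: remember the owner of the last bind() that
 * passed the reuseport checks, so a later SO_REUSEPORT bind to the
 * same address by the same user can skip the full bind_conflict run.
 */
struct inet_bind_bucket_fastpath {
	int			fastreuseport;	/* last bind had SO_REUSEPORT set */
	kuid_t			fastuid;	/* uid of that bind */
	struct sockaddr_storage	fastaddr;	/* address that succeeded */
};

/* protocol-specific address compare -- the "callback" from the quoted
 * mail (name invented for this sketch) */
static bool bound_addr_matches(const struct sock *sk,
			       const struct sockaddr_storage *addr);

static bool fastreuseport_ok(struct sock *sk,
			     const struct inet_bind_bucket_fastpath *fp)
{
	if (!fp->fastreuseport || !sk->sk_reuseport)
		return false;
	if (!uid_eq(fp->fastuid, sock_i_uid(sk)))
		return false;
	return bound_addr_matches(sk, &fp->fastaddr);
}
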
So there isn't a deadlock or lockup here, something is just running
really slow, right?

And that "something" is a scan of the sockets on a tb list, and
there are lots of timewait sockets hung off of that tb.

As far as I can tell, this scan is happening in
inet_csk_bind_conflict().
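
For reference, that walk is roughly a linear pass over every socket
owning the bucket, along these lines. This is a simplified
illustration, not the exact 4.6 code, and addr_and_flags_conflict()
is an invented stand-in for the real device/address/reuse checks:

#include <net/sock.h>			/* struct sock, sk_for_each_bound() */
#include <net/inet_hashtables.h>	/* struct inet_bind_bucket */

/* stand-in for the bound-dev-if, SO_REUSEADDR, SO_REUSEPORT and
 * address comparisons (name invented for this sketch) */
static bool addr_and_flags_conflict(const struct sock *sk,
				    const struct sock *sk2);

/*
 * Simplified illustration of the scan: every socket hanging off
 * tb->owners is visited, so a bucket full of TIME_WAIT sockets turns
 * each bind() attempt into a long linear walk.  A conflict can end
 * the walk early, but answering "no conflict" means visiting every
 * entry.
 */
static bool bind_bucket_conflict(const struct sock *sk,
				 const struct inet_bind_bucket *tb)
{
	struct sock *sk2;

	sk_for_each_bound(sk2, &tb->owners) {
		if (sk2 == sk)
			continue;
		if (addr_and_flags_conflict(sk, sk2))
			return true;
	}
	return false;
}
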
Furthermore, reuseport is somehow required to make this problem
happen. How exactly?