Message-Id: <20220716012731.2zz7hpg3qbhwgeqd@google.com>
Date: Sat, 16 Jul 2022 01:27:31 +0000
From: Shakeel Butt <shakeelb@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Benjamin Segall <bsegall@...gle.com>,
Alexander Viro <viro@...iv.linux.org.uk>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Eric Dumazet <edumazet@...gle.com>,
Roman Penyaev <rpenyaev@...e.de>,
Jason Baron <jbaron@...mai.com>,
Khazhismel Kumykov <khazhy@...gle.com>, Heiher <r@....cc>
Subject: Re: [RESEND RFC PATCH] epoll: autoremove wakers even more aggressively
On Thu, Jun 30, 2022 at 07:59:05AM -0700, Shakeel Butt wrote:
> On Wed, Jun 29, 2022 at 7:24 PM Andrew Morton <akpm@...ux-foundation.org> wrote:
> >
> > On Wed, 29 Jun 2022 18:12:46 -0700 Shakeel Butt <shakeelb@...gle.com> wrote:
> >
> > > On Wed, Jun 29, 2022 at 4:55 PM Andrew Morton <akpm@...ux-foundation.org> wrote:
> > > >
> > > > On Wed, 15 Jun 2022 14:24:23 -0700 Benjamin Segall <bsegall@...gle.com> wrote:
> > > >
> > > > > If a process is killed or otherwise exits while having active network
> > > > > connections and many threads waiting on epoll_wait, the threads will all
> > > > > be woken immediately, but not removed from ep->wq. Then when network
> > > > > traffic scans ep->wq in wake_up, every wakeup attempt will fail, and
> > > > > will not remove the entries from the list.
> > > > >
> > > > > This means that the cost of the wakeup attempt is far higher than usual,
> > > > > does not decrease, and this also competes with the dying threads trying
> > > > > to actually make progress and remove themselves from the wq.
> > > > >
> > > > > Handle this by removing visited epoll wq entries unconditionally, rather
> > > > > than only when the wakeup succeeds - the structure of ep_poll means that
> > > > > the only potential loss is the timed_out->eavail heuristic, which now
> > > > > can race and result in a redundant ep_send_events attempt. (But only
> > > > > when incoming data and a timeout actually race, not on every timeout)
> > > > >
> > > >
> > > > Thanks. I added people from 412895f03cbf96 ("epoll: atomically remove
> > > > wait entry on wake up") to cc. Hopefully someone there can help review
> > > > and maybe test this.
> > > >
> > > >
> > >
> > > Thanks Andrew. Just wanted to add that we are seeing this issue in
> > > production with real workloads and it has caused hard lockups.
> > > Particularly network heavy workloads with a lot of threads in
> > > epoll_wait() can easily trigger this issue if they get killed
> > > (oom-killed in our case).
> >
> > Hard lockups are undesirable. Is a cc:stable justified here?
>
> Not for now as I don't know if we can blame a patch which might be the
> source of this behavior.
I am able to reproduce the epoll hard lockup on next-20220715 with Ben's
patch reverted. The repro is a simple TCP server and tens of clients
communicating over loopback. Though to cause the hard lockup I have to
create a couple of thousand threads in epoll_wait() in the server and also
reduce kernel.watchdog_thresh. With Ben's patch the repro does not
cause the hard lockup even with kernel.watchdog_thresh=1.
Please add:
Tested-by: Shakeel Butt <shakeelb@...gle.com>