Message-ID: <CALvZod5KX7XEHR9h_jFHf5pJcYB+dODEeaLKrQLtSy9EUqgvWw@mail.gmail.com>
Date: Wed, 29 Jun 2022 18:12:46 -0700
From: Shakeel Butt <shakeelb@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Benjamin Segall <bsegall@...gle.com>,
Alexander Viro <viro@...iv.linux.org.uk>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Eric Dumazet <edumazet@...gle.com>,
Roman Penyaev <rpenyaev@...e.de>,
Jason Baron <jbaron@...mai.com>,
Khazhismel Kumykov <khazhy@...gle.com>, Heiher <r@....cc>
Subject: Re: [RESEND RFC PATCH] epoll: autoremove wakers even more aggressively

On Wed, Jun 29, 2022 at 4:55 PM Andrew Morton <akpm@...ux-foundation.org> wrote:
>
> On Wed, 15 Jun 2022 14:24:23 -0700 Benjamin Segall <bsegall@...gle.com> wrote:
>
> > If a process is killed or otherwise exits while having active network
> > connections and many threads waiting on epoll_wait, the threads will all
> > be woken immediately, but not removed from ep->wq. Then when incoming
> > network traffic causes wake_up to scan ep->wq, every wakeup attempt
> > will fail, and will not remove the entries from the list.
> >
> > This means that the cost of each wakeup attempt is far higher than
> > usual, does not decrease over time, and also competes with the dying
> > threads trying to actually make progress and remove themselves from
> > the wq.
> >
> > Handle this by removing visited epoll wq entries unconditionally, rather
> > than only when the wakeup succeeds - the structure of ep_poll means that
> > the only potential loss is the timed_out->eavail heuristic, which now
> > can race and result in a redundant ep_send_events attempt. (But only
> > when incoming data and a timeout actually race, not on every timeout)
> >
>
> Thanks. I added people from 412895f03cbf96 ("epoll: atomically remove
> wait entry on wake up") to cc. Hopefully someone there can help review
> and maybe test this.
>
>
Thanks Andrew. Just wanted to add that we are seeing this issue in
production with real workloads and it has caused hard lockups.
In particular, network-heavy workloads with a lot of threads blocked
in epoll_wait() can easily trigger this issue if they get killed
(oom-killed in our case).
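
For anyone who wants to poke at this without a full production setup,
a minimal userspace sketch of the triggering pattern looks roughly
like the following (illustrative only: the thread count is arbitrary
and a forked socketpair writer stands in for real network traffic).
Run it, then SIGKILL the parent while the child keeps writing:

/*
 * Rough reproducer sketch (illustrative, not production code).
 * Many threads block in epoll_wait() on one epoll fd watching a
 * socket; a forked child keeps generating traffic. SIGKILL the
 * parent while writes are in flight: the dying threads are woken
 * but their ep->wq entries linger, so every subsequent wake_up
 * has to scan past all of them.
 */
#include <pthread.h>
#include <stdlib.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

#define NTHREADS 512		/* arbitrary; more threads, longer scans */

static int epfd;
static int rfd;

static void *waiter(void *arg)
{
	struct epoll_event ev;
	char c;

	for (;;) {
		/* park this thread on ep->wq */
		if (epoll_wait(epfd, &ev, 1, -1) > 0)
			read(rfd, &c, 1);	/* drain so traffic keeps flowing */
	}
	return NULL;
}

int main(void)
{
	struct epoll_event ev = { .events = EPOLLIN };
	pthread_t tid;
	int sv[2], i;

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv))
		exit(1);

	if (fork() == 0) {
		/* traffic source; survives the parent's death */
		close(sv[0]);
		for (;;)
			write(sv[1], "x", 1);
	}
	close(sv[1]);

	rfd = sv[0];
	epfd = epoll_create1(0);
	ev.data.fd = rfd;
	epoll_ctl(epfd, EPOLL_CTL_ADD, rfd, &ev);

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tid, NULL, waiter, NULL);

	pause();	/* now SIGKILL (or OOM-kill) this process */
	return 0;
}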