Message-ID: <20250711-ermangelung-darmentleerung-394cebde2708@brauner>
Date: Fri, 11 Jul 2025 11:44:28 +0200
From: Christian Brauner <brauner@...nel.org>
To: Nam Cao <namcao@...utronix.de>
Cc: Xi Ruoyao <xry111@...111.site>,
Frederic Weisbecker <frederic@...nel.org>, Valentin Schneider <vschneid@...hat.com>,
Alexander Viro <viro@...iv.linux.org.uk>, Jan Kara <jack@...e.cz>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>, John Ogness <john.ogness@...utronix.de>,
Clark Williams <clrkwllms@...nel.org>, Steven Rostedt <rostedt@...dmis.org>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org, linux-rt-devel@...ts.linux.dev,
linux-rt-users@...r.kernel.org, Joe Damato <jdamato@...tly.com>,
Martin Karsten <mkarsten@...terloo.ca>, Jens Axboe <axboe@...nel.dk>
Subject: Re: [PATCH v3] eventpoll: Fix priority inversion problem

On Fri, Jul 11, 2025 at 07:02:17AM +0200, Nam Cao wrote:
> On Thu, Jul 10, 2025 at 05:47:57PM +0800, Xi Ruoyao wrote:
> > It didn't work :(.
>
> Argh :(
>
> Another possibility is that you are running into an event starvation
> problem.
>
> Can you give the patch below a try? It is not the real fix, and it hurts
> performance badly. But if starvation really is your problem, it should
> ameliorate the situation:
>
> diff --git a/fs/eventpoll.c b/fs/eventpoll.c
> index 895256cd2786..0dcf8e18de0d 100644
> --- a/fs/eventpoll.c
> +++ b/fs/eventpoll.c
> @@ -1764,6 +1764,8 @@ static int ep_send_events(struct eventpoll *ep,
> __llist_add(n, &txlist);
> }
>
> + struct llist_node *shuffle = llist_del_all(&ep->rdllist);
> +
> llist_for_each_entry_safe(epi, tmp, txlist.first, rdllink) {
> init_llist_node(&epi->rdllink);
>
> @@ -1778,6 +1780,13 @@ static int ep_send_events(struct eventpoll *ep,
> }
> }
>
> + if (shuffle) {
> + struct llist_node *last = shuffle;
> + while (last->next)
> + last = last->next;
> + llist_add_batch(shuffle, last, &ep->rdllist);
> + }
> +
> __pm_relax(ep->ws);
> mutex_unlock(&ep->mtx);
>
I think we should revert the fix so we have time to fix it properly
during v6.17+. This patch was a bit too adventurous for a fix in the
first place, to be honest.