Message-ID: <20231031160120.GE15024@noisy.programming.kicks-ass.net>
Date: Tue, 31 Oct 2023 17:01:20 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Valentin Schneider <vschneid@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-rt-users@...r.kernel.org,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Thomas Gleixner <tglx@...utronix.de>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Tomas Glozar <tglozar@...hat.com>
Subject: Re: [PATCH] sched/fair: Make the BW replenish timer expire in
hardirq context for PREEMPT_RT
On Mon, Oct 30, 2023 at 03:51:04PM +0100, Valentin Schneider wrote:
> Consider the following scenario under PREEMPT_RT:
> o A CFS task p0 gets throttled while holding read_lock(&lock)
> o A task p1 blocks on write_lock(&lock), making further readers enter the
> slowpath
> o A ktimers or ksoftirqd task blocks on read_lock(&lock)
>
> If the cfs_bandwidth.period_timer to replenish p0's runtime is enqueued on
> the same CPU as one where ktimers/ksoftirqd is blocked on read_lock(&lock),
> this creates a circular dependency.
>
> This has been observed to happen with:
> o fs/eventpoll.c::ep->lock
> o net/netlink/af_netlink.c::nl_table_lock (after hand-fixing the above)
> but can trigger with any rwlock that can be acquired in both process and
> softirq contexts.
>
> The linux-rt tree has had
> 1ea50f9636f0 ("softirq: Use a dedicated thread for timer wakeups.")
> which helped this scenario for non-rwlock locks by ensuring the throttled
> task would get PI'd to FIFO1 (ktimers' default priority). Unfortunately,
> rwlocks cannot sanely do PI as they allow multiple readers.
>
> Make the period_timer expire in hardirq context under PREEMPT_RT. The
> callback for this timer can end up doing a lot of work, but this is
> mitigated somewhat when using nohz_full / CPU isolation: the timers *are*
> pinned, but on the CPUs the taskgroups are created on, which is usually
> going to be HK CPUs.
Moo... so I think 'people' have been pushing towards changing the
bandwidth thing to only throttle on the return-to-user path. This solves
the kernel side of the lock holder 'preemption' issue.
I'm thinking working on that is saner than adding this O(n) cgroup loop
to hard-irq context. Hmm?