Message-ID: <Y4CD615rYurnV6h7@hirez.programming.kicks-ass.net>
Date: Fri, 25 Nov 2022 09:59:23 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Josh Don <joshdon@...gle.com>
Cc: Chengming Zhou <zhouchengming@...edance.com>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
linux-kernel@...r.kernel.org, Tejun Heo <tj@...nel.org>,
Michal Koutný <mkoutny@...e.com>,
Christian Brauner <brauner@...nel.org>,
Zefan Li <lizefan.x@...edance.com>,
Thomas Gleixner <tglx@...utronix.de>,
Frederic Weisbecker <fweisbec@...il.com>,
anna-maria@...utronix.de
Subject: Re: [PATCH v3] sched: async unthrottling for cfs bandwidth
On Fri, Nov 25, 2022 at 09:57:09AM +0100, Peter Zijlstra wrote:
> On Tue, Nov 22, 2022 at 11:35:48AM +0100, Peter Zijlstra wrote:
> > On Mon, Nov 21, 2022 at 11:37:14AM -0800, Josh Don wrote:
> > > Yep, this tradeoff feels "best", but there are some edge cases where
> > > this could potentially disrupt fairness. For example, if we have
> > > non-trivial W, a lot of cpus to iterate through for dispatching remote
> > > unthrottle, and quota is small. Doesn't help that the timer is pinned
> > > so that this will continually hit the same cpu.
> >
> > We could -- if we wanted to -- manually rotate the timer around the
> > relevant CPUs. Doing that sanely would require a bit of hrtimer surgery
> > though I'm afraid.
>
> Here; something like so should enable us to cycle the bandwidth timer.
> Just need to figure out a way to find another CPU or something.
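
Purely to illustrate the "find another CPU" part -- the selection policy
is the open question; a dumb round-robin over the online mask would look
something like the sketch below. pick_next_period_cpu() is a made-up
helper, nothing like it exists in the tree.

#include <linux/cpumask.h>

/*
 * Illustrative only: naive round-robin over the online CPUs.  Whether
 * plain round-robin, something constrained to the CPUs the task group
 * actually runs on, or something NUMA-aware is the right policy is
 * exactly what still needs figuring out.
 */
static int pick_next_period_cpu(int cur)
{
	int cpu = cpumask_next(cur, cpu_online_mask);

	if (cpu >= nr_cpu_ids)	/* wrapped past the last online CPU */
		cpu = cpumask_first(cpu_online_mask);

	return cpu;
}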
Some more preparation...
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5617,7 +5617,7 @@ static int do_sched_cfs_period_timer(str
 	if (!throttled) {
 		/* mark as potentially idle for the upcoming period */
 		cfs_b->idle = 1;
-		return 0;
+		return HRTIMER_RESTART;
 	}
 
 	/* account preceding periods in which throttling occurred */
@@ -5641,10 +5641,10 @@ static int do_sched_cfs_period_timer(str
 	 */
 	cfs_b->idle = 0;
 
-	return 0;
+	return HRTIMER_RESTART;
 
 out_deactivate:
-	return 1;
+	return HRTIMER_NORESTART;
 }
 
 /* a cfs_rq won't donate quota below this amount */
@@ -5836,9 +5836,9 @@ static enum hrtimer_restart sched_cfs_pe
 {
 	struct cfs_bandwidth *cfs_b =
 		container_of(timer, struct cfs_bandwidth, period_timer);
+	int restart = HRTIMER_RESTART;
 	unsigned long flags;
 	int overrun;
-	int idle = 0;
 	int count = 0;
 
 	raw_spin_lock_irqsave(&cfs_b->lock, flags);
@@ -5847,7 +5847,7 @@ static enum hrtimer_restart sched_cfs_pe
 		if (!overrun)
 			break;
 
-		idle = do_sched_cfs_period_timer(cfs_b, overrun, flags);
+		restart = do_sched_cfs_period_timer(cfs_b, overrun, flags);
 
 		if (++count > 3) {
 			u64 new, old = ktime_to_ns(cfs_b->period);
@@ -5880,11 +5880,11 @@ static enum hrtimer_restart sched_cfs_pe
 			count = 0;
 		}
 	}
-	if (idle)
+	if (restart == HRTIMER_NORESTART)
 		cfs_b->period_active = 0;
 	raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
 
-	return idle ? HRTIMER_NORESTART : HRTIMER_RESTART;
+	return restart;
 }
 
 void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
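
With do_sched_cfs_period_timer() now speaking enum hrtimer_restart, one
way to actually cycle the timer -- sketch only, not part of the patch
above -- would be to have the handler return HRTIMER_NORESTART and
re-arm the timer from the target CPU, since HRTIMER_MODE_ABS_PINNED pins
a timer to whichever CPU arms it. The rearm_work member and
cfs_rearm_period_timer() are hypothetical, and pick_next_period_cpu() is
the made-up helper from above.

/*
 * Sketch only; assumes a hypothetical 'struct work_struct rearm_work'
 * member in struct cfs_bandwidth.
 */
static void cfs_rearm_period_timer(struct work_struct *work)
{
	struct cfs_bandwidth *cfs_b =
		container_of(work, struct cfs_bandwidth, rearm_work);

	/*
	 * HRTIMER_MODE_ABS_PINNED pins the timer to the CPU arming it, so
	 * running this work on the chosen CPU moves the timer there.  The
	 * expiry was already pushed forward by hrtimer_forward_now() in
	 * the handler loop.
	 */
	hrtimer_start_expires(&cfs_b->period_timer, HRTIMER_MODE_ABS_PINNED);
}

	/* ... and in sched_cfs_period_timer(), instead of restarting: */
	schedule_work_on(pick_next_period_cpu(smp_processor_id()),
			 &cfs_b->rearm_work);
	restart = HRTIMER_NORESTART;

Obviously the period_active bookkeeping and racing against
start_cfs_bandwidth() would need care with something like that, which is
why teaching the hrtimer core itself to migrate the timer (the surgery
alluded to above) is probably the saner direction.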