Message-ID: <CAOBoifgz0pRCBUqo7+X2BKgSuHmQLB6X0LZ9D2eYvboO5yzybg@mail.gmail.com>
Date: Fri, 3 Feb 2023 10:47:01 -0800
From: Xi Wang <xii@...gle.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Vincent Guittot <vincent.guittot@...aro.org>,
Juri Lelli <juri.lelli@...hat.com>,
Ingo Molnar <mingo@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
Ben Segall <bsegall@...gle.com>, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH] sched: Consider capacity for certain load balancing decisions
On Fri, Feb 3, 2023 at 1:51 AM Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Tue, Jan 31, 2023 at 05:20:32PM -0800, Xi Wang wrote:
> > After load balancing was split into different scenarios, CPU capacity
> > is ignored for the "migrate_task" case, which means a thread can stay
> > on a softirq heavy cpu for an extended amount of time.
> >
> > By comparing nr_running/capacity instead of just nr_running we can add
> > CPU capacity back into "migrate_task" decisions. This benefits
> > workloads running on machines with heavy network traffic. The change
> > is unlikely to cause serious problems for other workloads but maybe
> > some corner cases still need to be considered.
> >
> > Signed-off-by: Xi Wang <xii@...gle.com>
> > ---
> > kernel/sched/fair.c | 3 ++-
> > 1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 0f8736991427..aad14bc04544 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -10368,8 +10368,9 @@ static struct rq *find_busiest_queue(struct lb_env *env,
> > break;
> >
> > case migrate_task:
> > - if (busiest_nr < nr_running) {
> > + if (busiest_nr * capacity < nr_running * busiest_capacity) {
> > busiest_nr = nr_running;
> > + busiest_capacity = capacity;
> > busiest = rq;
> > }
> > break;
>
> I don't think this is correct. The migrate_task case is work-conserving,
> and your change can severely break that I think.
>
I think you meant this kind of scenario:

cpu 0: idle
cpu 1: 2 tasks
cpu 2: 1 task, but the cpu only has 30% of its capacity available

Pulling from cpu 2 is good for that task, but it lowers the overall
cpu throughput.
The problem we have is:

cpu 0: idle
cpu 1: 1 task
cpu 2: 1 task, but the cpu only has 60% of its capacity left due to net softirq

The task on cpu 2 stays there and runs slower. (This can also be
considered non-work-conserving if we account for softirq time like a
task.)
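To make the numbers concrete, here is a small standalone illustration
(userspace C, not kernel code) of the patched comparison applied to the
two scenarios above. It assumes the usual 0..1024 capacity scale and
rounds 30% / 60% to 307 / 614; the cpu struct and busier() helper are
made up just for this example:

#include <stdio.h>

struct cpu { unsigned int nr_running, capacity; };

/* Nonzero if @b is considered busier than @a under the patched rule. */
static int busier(struct cpu a, struct cpu b)
{
        /* old rule:     a.nr_running < b.nr_running */
        /* patched rule: a.nr_running / a.capacity < b.nr_running / b.capacity,
         * cross-multiplied to stay in integer arithmetic */
        return a.nr_running * b.capacity < b.nr_running * a.capacity;
}

int main(void)
{
        /* Scenario 1: cpu 1 has 2 tasks at full capacity, cpu 2 has 1 task at ~30% */
        struct cpu cpu1 = { 2, 1024 }, cpu2 = { 1, 307 };
        printf("scenario 1: prefer cpu 2? %d\n", busier(cpu1, cpu2));
        /* prints 1: cpu 2 wins, its lone task is pulled while cpu 1 keeps 2 tasks queued */

        /* Scenario 2: both cpus have 1 task, cpu 2 is at ~60% due to softirq */
        cpu1 = (struct cpu){ 1, 1024 };
        cpu2 = (struct cpu){ 1, 614 };
        printf("scenario 2: prefer cpu 2? %d\n", busier(cpu1, cpu2));
        /* prints 1: this is the behavior we want */

        return 0;
}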
Maybe the two heuristics can be merged like this: use the
capacity-weighted comparison, but pick from cpus with nr_running > 1
first, and fall back to nr_running == 1 cpus only if none is found.
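A rough, untested sketch of that idea against the migrate_task case
quoted above. busiest_multi is a made-up local (bool, initialized to
false) that tracks whether the current candidate rq has nr_running > 1,
and busiest_capacity is assumed to start at a nonzero value so the
first rq seen is accepted:

        case migrate_task:
                if (nr_running > 1) {
                        /*
                         * Prefer rqs that actually have tasks waiting so the
                         * balance stays work-conserving; among those, weight
                         * nr_running by capacity as in the patch.
                         */
                        if (!busiest_multi ||
                            busiest_nr * capacity < nr_running * busiest_capacity) {
                                busiest_multi = true;
                                busiest_nr = nr_running;
                                busiest_capacity = capacity;
                                busiest = rq;
                        }
                } else if (nr_running == 1 && !busiest_multi) {
                        /*
                         * Only if no nr_running > 1 rq has been seen: a lone
                         * task on a softirq-squeezed cpu can still be handed
                         * to an idle cpu.
                         */
                        if (busiest_nr * capacity < nr_running * busiest_capacity) {
                                busiest_nr = nr_running;
                                busiest_capacity = capacity;
                                busiest = rq;
                        }
                }
                break;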