Message-ID: <87bl096kbg.mognet@arm.com>
Date: Tue, 18 Jan 2022 17:10:59 +0000
From: Valentin Schneider <valentin.schneider@....com>
To: Yihao Wu <wuyihao@...ux.alibaba.com>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Shanpei Chen <shanpeic@...ux.alibaba.com>,
王贇 <yun.wang@...ux.alibaba.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched/fair: Again ignore percpu threads for imbalance pulls
On 18/01/22 16:11, Yihao Wu wrote:
> On 2022/1/18 1:16 am, Valentin Schneider wrote:
>> On 17/01/22 22:50, Yihao Wu wrote:
>>> wakeup balance keeps doing this until another NUMA node becomes just as
>>> busy. And then a periodic load balance just shifts the load around, making
>>> the previously overloaded node completely idle.
>>>
>>
>> Oooh, right, I came to the same conclusion when I got that stress-ng
>> regression report back then:
>>
>> https://lore.kernel.org/all/871rajkfkn.mognet@arm.com/
>>
>
> Shocked! I wasted weeks locating almost the same regression. Why on
> earth hadn't I read your discussion from half a year ago?
>
I've been there too :) It's a tricky thing, you have to at least do a
bisection to find some commit, and then search the ML for any further
discussion / report on it...
>> I pretty much gave up on that as the regression was caused by removing an
>> obscure/accidental balance which I couldn't properly codify. I can give it
>
> Strange, the regression reported to me says differently from yours.
>
>               4.19.91    before_2f5f4    after_2f5f4
> my_report     good       bad             bad
> your_report   N/A        good            bad
>
> your_report says 2f5f4 introduces a new regression, while my_report
> says 2f5f4 fails to fix the old regression and leaves it in place ...
>
> Maybe that's the reason why you gave up on fixing it, while I went on
> to make can_migrate_task() cover more cases (kernel_thread).
>
Huh; 2f5f4cce496e is actually a 5.10-stable backport of 9bcb959d05ee; what
was the first bad commit for you?
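
For reference, the pcpu-kthread filter that commit adds to
can_migrate_task() is just an early bail-out - roughly the below, quoting
from memory, so double-check against the actual commit:

	/* kernel/sched/fair.c, can_migrate_task() - hunk from 9bcb959d05ee */
	/* Disregard pcpu kthreads; they are where they need to be. */
	if (kthread_is_per_cpu(p))
		return 0;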
>
>> another shot, but AFAICT that only affects fork/exec heavy workloads (that
>> -13% was on something doing almost only forks) which is an odd case to
>> support.
>>
> Yes. They're indeed quite odd workloads.
> - Apps with masses of short-lived threads had better change their
>   runtime model, or use a thread pool.
> - Lots of different apps on the same machine is an even odder case.
>
> But I guess this problem affects normal workloads too, more or less,
> just not significantly. Hard to tell exactly how much influence it has.
>
Looking at my notes for the regression on that particular machine with
that particular benchmark: the group_imbalanced logic triggered for ~1% of
the forks, and the avg task lifespan was 6µs. IMO that's pretty extreme;
fork-time balance becomes the only available balance point for the child
tasks (IIRC the benchmark has N stressors forking one child each) - as you
said above, a more realistic approach here would use a thread pool of some
sort.
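
To illustrate the "only available balance point" bit: a task that lives
for 6µs never sees a tick or an idle balance, so the only placement
decision it ever gets is the SD_BALANCE_FORK one when the new task is
first woken. Sketching that from memory (paraphrased, the exact code
differs between kernel versions):

	/* kernel/sched/core.c, wake_up_new_task() - paraphrased sketch */
#ifdef CONFIG_SMP
	/*
	 * Fork balancing: the child has no load history yet, so pick it a
	 * CPU via an SD_BALANCE_FORK domain scan. A child that exits within
	 * microseconds never reaches the periodic/idle balance paths, making
	 * this its one and only balance point.
	 */
	__set_task_cpu(p, select_task_rq(p, task_cpu(p), SD_BALANCE_FORK, 0));
#endif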