Message-ID: <20250223000853.rj4wtnxa6iazoehu@airbuntu>
Date: Sun, 23 Feb 2025 00:08:53 +0000
From: Qais Yousef <qyousef@...alina.io>
To: zihan zhou <15645113830zzh@...il.com>
Cc: bsegall@...gle.com, dietmar.eggemann@....com, juri.lelli@...hat.com,
linux-kernel@...r.kernel.org, mgorman@...e.de, mingo@...hat.com,
peterz@...radead.org, rostedt@...dmis.org,
vincent.guittot@...aro.org, vschneid@...hat.com
Subject: Re: [PATCH V3 1/2] sched: Reduce the default slice to avoid tasks
getting an extra tick

On 02/22/25 11:19, zihan zhou wrote:
> > Yes. The minimum bar of modern hardware is higher now. And generally IMHO
> > this value depends on the workload. NR_CPUs can make an overloaded case
> > harder, but it really wouldn't take much to saturate 8 CPUs compared to
> > 2 CPUs. And from experience, the larger the machine, the larger the
> > workload, so in practice the worst case scenario of having to slice won't
> > be too different. Especially since many programmers look at NR_CPUs and
> > spawn as many threads.
> >
> > Besides with EAS we force packing, so we artificially force contention to save
> > power.
> >
> > Dynamically scaling the slice with rq->h_nr_runnable looks attractive, but
> > I think this is a recipe for more confusion (a rough sketch of that idea
> > follows the quoted text below). We sort of had this with sched_period; the
> > new fixed model is better IMHO.
>
> Hi, it seems I hadn't thought this through enough. I have been re-reading
> these emails recently. Can you give me the LPC links for these discussions?
> I want to relearn this part seriously, such as why we don't dynamically
> adjust the slice.
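
For context, here is a rough user-space sketch of the "dynamic" model referred
to above, i.e. deriving the slice from how many tasks are currently runnable,
loosely in the spirit of the old CFS sched_period scaling. The names and
constants are illustrative only, not actual kernel code:

#include <stdio.h>

/*
 * Illustrative only: a slice that shrinks as the runqueue gets longer,
 * loosely in the spirit of the old sched_period model (the period
 * stretches with the number of runnable tasks and each task gets a
 * share of it). Constants are made up for the example.
 */
static unsigned long long dynamic_slice_ns(unsigned int nr_runnable)
{
	const unsigned long long base_ns = 700000ULL;	      /* ~0.70 msec per task */
	const unsigned long long min_period_ns = 6000000ULL;  /* ~6 msec floor */
	unsigned long long period_ns;

	if (!nr_runnable)
		return base_ns;

	period_ns = nr_runnable * base_ns;
	if (period_ns < min_period_ns)
		period_ns = min_period_ns;

	return period_ns / nr_runnable;
}

int main(void)
{
	unsigned int nr;

	for (nr = 1; nr <= 16; nr *= 2)
		printf("%2u runnable: slice = %llu ns\n",
		       nr, dynamic_slice_ns(nr));
	return 0;
}

The effective slice then depends on the instantaneous queue length, which is
exactly the kind of moving target the fixed base_slice model avoids.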
No LPC talks. It was just something I noticed and brought up offline during
LPC, and was planning to send a patch to that effect. The reasons above are
pretty much all of it. We are simply better off having a constant base_slice.
debugfs allows modifying it if users think they know better and need to use
another default. But the scaling factor doesn't hold great (or any) value
anymore and can create confusion for our users.
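
To make the scaling factor concrete, here is a small user-space sketch
contrasting the historical log2 scaling (capped at 8 CPUs, as I understand the
old default) with a constant base_slice. The constants are assumptions for
illustration and are not taken from this patch:

#include <stdio.h>

/*
 * Rough illustration only: the historical default scaled a base slice
 * by 1 + ilog2(ncpus) with ncpus capped at 8, whereas the proposal is
 * to keep base_slice constant and let users who need a different value
 * override it at runtime.
 */
static unsigned int ilog2_u(unsigned int x)
{
	unsigned int r = 0;

	while (x >>= 1)
		r++;
	return r;
}

int main(void)
{
	const unsigned long long base_ns = 700000ULL;	/* 0.70 msec, assumed */
	unsigned int cpus;

	for (cpus = 1; cpus <= 64; cpus *= 2) {
		unsigned int capped = cpus < 8 ? cpus : 8;
		unsigned int factor = 1 + ilog2_u(capped);

		printf("%2u CPUs: scaled slice = %llu ns, constant slice = %llu ns\n",
		       cpus, factor * base_ns, base_ns);
	}
	return 0;
}

Anyone who does want a different default can still change it at runtime, e.g.
on current kernels something like:

	echo 3000000 > /sys/kernel/debug/sched/base_slice_ns

(assuming debugfs is mounted in the usual place).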