Message-ID: <20250704094750.GA1654@bytedance>
Date: Fri, 4 Jul 2025 17:47:50 +0800
From: Aaron Lu <ziqianlu@...edance.com>
To: K Prateek Nayak <kprateek.nayak@....com>
Cc: Benjamin Segall <bsegall@...gle.com>,
	Chengming Zhou <chengming.zhou@...ux.dev>,
	Valentin Schneider <vschneid@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Josh Don <joshdon@...gle.com>, Ingo Molnar <mingo@...hat.com>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Xi Wang <xii@...gle.com>, linux-kernel@...r.kernel.org,
	Juri Lelli <juri.lelli@...hat.com>,
	Dietmar Eggemann <dietmar.eggemann@....com>,
	Steven Rostedt <rostedt@...dmis.org>, Mel Gorman <mgorman@...e.de>,
	Chuyi Zhou <zhouchuyi@...edance.com>,
	Jan Kiszka <jan.kiszka@...mens.com>,
	Florian Bezdeka <florian.bezdeka@...mens.com>
Subject: Re: [PATCH v2 0/5] Defer throttle when task exits to user

Hi Prateek,

On Fri, Jul 04, 2025 at 02:18:49PM +0530, K Prateek Nayak wrote:
> Hello Aaron,
> 
> On 7/4/2025 1:24 PM, Aaron Lu wrote:
> > On Fri, Jul 04, 2025 at 10:04:13AM +0530, K Prateek Nayak wrote:
> > > Hello Ben,
> > > 
> > > On 7/3/2025 3:30 AM, Benjamin Segall wrote:
> > > > Aaron Lu <ziqianlu@...edance.com> writes:
> > > > 
> > > > > For pelt clock, I chose to keep the current behavior to freeze it on
> > > > > cfs_rq's throttle time. The assumption is that tasks running in kernel
> > > > > mode should not last too long, freezing the cfs_rq's pelt clock can keep
> > > > > its load and its corresponding sched_entity's weight. Hopefully, this can
> > > > > result in a stable situation for the remaining running tasks to quickly
> > > > > finish their jobs in kernel mode.
> > > > 
> > > > I suppose the way that this would go wrong would be CPU 1 using up all
> > > > of the quota, and then a task waking up on CPU 2 and trying to run in
> > > > the kernel for a while. I suspect pelt time needs to also keep running
> > > > until all the tasks are asleep (and that's what we have been running at
> > > > google with the version based on separate accounting, so we haven't
> > > > accidentally done a large scale test of letting it pause).
> > > 
> > > Thinking out loud ...
> > > 
> > > One thing this can possibly do is create a lot of:
> > > 
> > >    throttled -> partially unthrottled -> throttled
> > > 
> > > transitions when tasks wakeup on throttled hierarchy, run for a while,
> > > and then go back to sleep. If the PELT clocks aren't frozen, this
> > > either means:
> > > 
> > > 1. Do a full walk_tg_tree_from() placing all the leaf cfs_rq that have
> > >     any load associated back onto the list and allow PELT to progress only
> > >     to then remove them again once tasks go back to sleep. A great many of
> > >     these transitions are possible theoretically which is not ideal.
> > > 
> > 
> > I'm going this route, because it is kind of already the case now :)
> > 
> > I mean throttled cfs_rqs are already added back to the leaf cfs_rq
> > list during enqueue time, to make the assert_list_leaf_cfs_rq(rq); at
> > the bottom of enqueue_task_fair() happy when a task is enqueued to a
> > throttled cfs_rq.
> > 
> > I'm sorry if this is not obvious in this series, I guess I put too many
> > things in patch3.
> > 
> > Below is the diff I cooked on top of this series to keep pelt clock
> > running as long as there is task running in a throttled cfs_rq, does it
> > look sane?
> > 
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index d869c8b51c5a6..410b850df2a12 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -5290,8 +5290,15 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> >   	se->on_rq = 1;
> >   	if (cfs_rq->nr_queued == 1) {
> > +		struct rq *rq = rq_of(cfs_rq);
> > +
> >   		check_enqueue_throttle(cfs_rq);
> >   		list_add_leaf_cfs_rq(cfs_rq);
> > +		if (cfs_rq->pelt_clock_throttled) {
> > +			cfs_rq->throttled_clock_pelt_time += rq_clock_pelt(rq) -
> > +				cfs_rq->throttled_clock_pelt;
> > +			cfs_rq->pelt_clock_throttled = 0;
> > +		}
> 
> At this point we've already done an update_load_avg() above in
> enqueue_entity() without unfreezing the PELT clock. Does it make
> sense to do it at the beginning?
>

My thinking is: when the first entity is enqueued to a cfs_rq whose pelt
clock is frozen, that cfs_rq must already be throttled, so the
update_load_avg() done above should do nothing since the pelt clock is
frozen (no decay should happen). Then, after the entity is added, the
throttled cfs_rq's pelt clock is unfrozen and its load can be updated
accordingly.
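
(For reference -- and assuming this series only swaps the frozen check
over to the new pelt_clock_throttled flag, mainline tests throttle_count
here -- cfs_rq_clock_pelt() in kernel/sched/pelt.h is roughly:

static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
{
	/* frozen: keep returning the timestamp taken at freeze time */
	if (unlikely(cfs_rq->pelt_clock_throttled))
		return cfs_rq->throttled_clock_pelt -
			cfs_rq->throttled_clock_pelt_time;

	/* running: rq's pelt clock minus total time spent frozen */
	return rq_clock_pelt(rq_of(cfs_rq)) -
		cfs_rq->throttled_clock_pelt_time;
}

While frozen, the "now" that update_load_avg() passes down stays pinned
at the value taken at freeze time, which should equal the cfs_rq's
last_update_time, so __update_load_sum() sees a zero delta.)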

Thinking about it another way: even if we move the unfreezing earlier,
the delta calculated in __update_load_sum() should still be zero, because
cfs_rq->throttled_clock_pelt_time is adjusted so that the "now" value
derived from cfs_rq_clock_pelt() is the same as the last update time.
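
To spell out the arithmetic, using the names from the hunk above: let

	T = rq_clock_pelt(rq) at enqueue time
	F = cfs_rq->throttled_clock_pelt      (recorded when the clock was frozen)
	A = cfs_rq->throttled_clock_pelt_time (before the adjustment)

The unfreeze does A' = A + (T - F), so afterwards

	now = cfs_rq_clock_pelt() = T - A' = F - A

which is exactly the constant cfs_rq_clock_pelt() was returning while
frozen, i.e. the last update time, so the delta is zero either way.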

Does this make sense, or am I misunderstanding something?

> Overall idea seems sane to me. I was thinking if anything can go
> wrong by only unfreezing the PELT for one part of the hierarchy but
> I suppose the other cfs_rq can be considered individually throttled
> and it should turn out fine.
> 

Unfreezing PELT for only one part of the hierarchy seems problematic:
I suppose we would have to propagate the load all the way up so that the
root level sched_entity has a correct load and gets the correct share
when it competes for CPU resources. I assume this is why Ben thinks it is
better to keep the pelt clock running as long as there are tasks running.

Note that the pelt clock is only frozen when the cfs_rq has no tasks
running, and gets unfrozen when the first entity is enqueued or at
unthrottle time. So when a new task is enqueued, at each level's
enqueue_entity():
- if it's the first entity of a throttled cfs_rq, then its PELT clock
  will be unfrozen;
- if it's not the first entity, that means the cfs_rq's pelt clock is not
  frozen (it already has entities queued).
So in the end, we should not have a partially unfrozen hierarchy. At
least, that's my intention; please let me know if I messed up :)
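
To illustrate with a simplified view of the first loop in
enqueue_task_fair() (most details omitted):

	for_each_sched_entity(se) {
		if (se->on_rq)
			break;
		cfs_rq = cfs_rq_of(se);
		/* unfreezes cfs_rq's pelt clock if it becomes non-empty */
		enqueue_entity(cfs_rq, se, flags);
	}

Every cfs_rq on the path that goes from empty to non-empty gets unfrozen
inside enqueue_entity(), and once we reach a cfs_rq that already has
entities queued, its pelt clock cannot be frozen in the first place, so
the whole hierarchy the task hangs off ends up with running pelt clocks.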

Thanks for the quick review!

Best regards,
Aaron
