Message-ID: <CAKfTPtC8Vg9WJ-hZZ5is-q2Tfi8BEScXGXNvUz9Pz6pxLCWmvw@mail.gmail.com>
Date: Mon, 16 Jun 2025 16:51:35 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: dhaval@...nis.ca
Cc: mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com, 
	dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com, 
	mgorman@...e.de, vschneid@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/4] sched/fair: Increase max lag clamping

On Fri, 13 Jun 2025 at 23:00, <dhaval@...nis.ca> wrote:
>
> On Friday, June 13th, 2025 at 7:14 AM, Vincent Guittot <vincent.guittot@...aro.org> wrote:
>
> >
> > From: Peter Zijlstra <peterz@...radead.org>
> >
> > sched_entity's lag is currently limited to the maximum between the tick
> > and twice its slice. This is too short compared to the maximum custom
> > slice that can be set and accumulated by other tasks.
> > A task can accumulate up to its slice of negative lag while running to
> > parity and the other runnable tasks can accumulate the same positive lag
> > while waiting to run. This positive lag could be lost during dequeue when
> > clamping it to twice the task's slice if a task's slice is 100ms and others
> > use a smaller value like the default 2.8ms.
> > Clamp the lag of a task to the maximum slice of enqueued entities plus
> > a tick as the update can be delayed to the next tick.
> >
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> > [ Rebased and fixed max slice computation ]
> > Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
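
To put numbers on the changelog scenario (illustrative values only, not
part of the patch; this assumes HZ=1000 so TICK_NSEC = 1,000,000 ns, and
a nice-0 weight so that calc_delta_fair() is an identity):

    /* Old clamp: based only on the task's own slice. */
    limit_old = max(2 * 2800000ULL, 1000000ULL);  /* 2.8ms slice ->   5.6ms */

    /* New clamp: largest enqueued slice plus one tick. */
    limit_new = 100000000ULL + 1000000ULL;        /* 100ms peer  -> 101.0ms */

A 2.8ms-slice task waiting behind a 100ms-slice task can legitimately
build up positive lag on the order of that 100ms slice; the old 5.6ms
limit would discard most of it on dequeue, while the new one preserves it.
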
> > ---
> >  include/linux/sched.h |  1 +
> >  kernel/sched/fair.c   | 41 +++++++++++++++++++++++++++++++++++++----
> >  2 files changed, 38 insertions(+), 4 deletions(-)
> >
> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index 4f78a64beb52..89855ab45c43 100644
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@ -576,6 +576,7 @@ struct sched_entity {
> >  	u64			deadline;
> >  	u64			min_vruntime;
> >  	u64			min_slice;
> > +	u64			max_slice;
> >
>
> I am just wondering if it makes sense to add a few comments here on what each of these fields is for. Maybe not in this series, but if you are open to it, I will spin one up next week.

Yes, makes sense


>
> > struct list_head group_node;
> > unsigned char on_rq;
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 44a09de38ddf..479b38dc307a 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -676,6 +676,8 @@ u64 avg_vruntime(struct cfs_rq *cfs_rq)
> >  	return cfs_rq->min_vruntime + avg;
> >  }
> >
> > +static inline u64 cfs_rq_max_slice(struct cfs_rq *cfs_rq);
> > +
> >  /*
> >   * lag_i = S - s_i = w_i * (V - v_i)
> >   *
> > @@ -689,17 +691,16 @@ u64 avg_vruntime(struct cfs_rq *cfs_rq)
> >   * EEVDF gives the following limit for a steady state system:
> >   *
> >   *   -r_max < lag < max(r_max, q)
> > - *
> > - * XXX could add max_slice to the augmented data to track this.
> >   */
> >  static void update_entity_lag(struct cfs_rq *cfs_rq, struct sched_entity *se)
> >  {
> > +	u64 max_slice = cfs_rq_max_slice(cfs_rq) + TICK_NSEC;
> >  	s64 vlag, limit;
> >
> >  	WARN_ON_ONCE(!se->on_rq);
> >
> >  	vlag = avg_vruntime(cfs_rq) - se->vruntime;
> > -	limit = calc_delta_fair(max_t(u64, 2*se->slice, TICK_NSEC), se);
> > +	limit = calc_delta_fair(max_slice, se);
> >
> >  	se->vlag = clamp(vlag, -limit, limit);
>
> As an aside, I have a test for Theorem 1 from the paper which shows we are clamping here even under conditions where I would not expect it. Almost always a cgroup seems to be involved. I was out sick for the last couple of weeks, so I have not debugged it further.
>
> > }
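
A note for readers, since it is easy to miss: calc_delta_fair() here only
converts the wall-clock bound into the entity's virtual time. Roughly (a
sketch of the scaling, not the exact kernel code):

    /* vruntime-domain limit; NICE_0_LOAD is the nice-0 weight. */
    limit = max_slice * NICE_0_LOAD / se->load.weight;

which is consistent with lag_i = w_i * (V - v_i) above: for a high-weight
entity the same wall-clock lag maps to a smaller vruntime offset, so the
clamp is proportionally tighter in the virtual domain.
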
> > @@ -795,6 +796,21 @@ static inline u64 cfs_rq_min_slice(struct cfs_rq *cfs_rq)
> >  	return min_slice;
> >  }
> >
> > +static inline u64 cfs_rq_max_slice(struct cfs_rq *cfs_rq)
> > +{
> > +	struct sched_entity *root = __pick_root_entity(cfs_rq);
> > +	struct sched_entity *curr = cfs_rq->curr;
> > +	u64 max_slice = 0ULL;
> > +
> > +	if (curr && curr->on_rq)
> > +		max_slice = curr->slice;
> > +
> > +	if (root)
> > +		max_slice = max(max_slice, root->max_slice);
> > +
> > +	return max_slice;
> > +}
> > +
> >  static inline bool __entity_less(struct rb_node *a, const struct rb_node *b)
> >  {
> >  	return entity_before(__node_2_se(a), __node_2_se(b));
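
For anyone not steeped in the augmented rbtree: root->max_slice above is
the maximum slice over the root's entire subtree, i.e. over every enqueued
entity, so cfs_rq_max_slice() is a single O(1) read at the root. The
cached value is maintained bottom-up by the callbacks below; conceptually
(a simplified sketch, not the kernel code):

    /* Each node caches the largest slice in its subtree. */
    node->max_slice = max3(node->slice,
                           node->left  ? node->left->max_slice  : 0,
                           node->right ? node->right->max_slice : 0);
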
> > @@ -820,6 +836,16 @@ static inline void __min_slice_update(struct sched_entity *se, struct rb_node *n
> >  	}
> >  }
> >
> > +static inline void __max_slice_update(struct sched_entity *se, struct rb_node *node)
> > +{
> > +	if (node) {
> > +		struct sched_entity *rse = __node_2_se(node);
> > +
> > +		if (rse->max_slice > se->max_slice)
> > +			se->max_slice = rse->max_slice;
> > +	}
> > +}
> > +
> >  /*
> >   * se->min_vruntime = min(se->vruntime, {left,right}->min_vruntime)
> >   */
> > @@ -827,6 +853,7 @@ static inline bool min_vruntime_update(struct sched_entity *se, bool exit)
> >  {
> >  	u64 old_min_vruntime = se->min_vruntime;
> >  	u64 old_min_slice = se->min_slice;
> > +	u64 old_max_slice = se->max_slice;
> >  	struct rb_node *node = &se->run_node;
> >
> >  	se->min_vruntime = se->vruntime;
> > @@ -837,8 +864,13 @@ static inline bool min_vruntime_update(struct sched_entity *se, bool exit)
> >  	__min_slice_update(se, node->rb_right);
> >  	__min_slice_update(se, node->rb_left);
> >
> > +	se->max_slice = se->slice;
> > +	__max_slice_update(se, node->rb_right);
> > +	__max_slice_update(se, node->rb_left);
> > +
> >  	return se->min_vruntime == old_min_vruntime &&
> > -	       se->min_slice == old_min_slice;
> > +	       se->min_slice == old_min_slice &&
> > +	       se->max_slice == old_max_slice;
> >  }
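
On the return value: it feeds the propagate step that RB_DECLARE_CALLBACKS
generates, which walks from the changed node toward the root and stops as
soon as a node reports that none of its cached values changed. From memory,
approximately (see include/linux/rbtree_augmented.h for the real thing):

    while (rb != stop) {
        struct sched_entity *se = rb_entry(rb, struct sched_entity, run_node);
        if (min_vruntime_update(se, true))
            break;  /* cached values unchanged: all ancestors still valid */
        rb = rb_parent(&se->run_node);
    }

So the comparison against old_max_slice above is what keeps the max_slice
maintenance cheap: propagation stops at the first unaffected ancestor.
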
> >
> > RB_DECLARE_CALLBACKS(static, min_vruntime_cb, struct sched_entity,
> >                      run_node, min_vruntime, min_vruntime_update);
> > @@ -852,6 +884,7 @@ static void __enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
> >  	avg_vruntime_add(cfs_rq, se);
> >  	se->min_vruntime = se->vruntime;
> >  	se->min_slice = se->slice;
> > +	se->max_slice = se->slice;
> >  	rb_add_augmented_cached(&se->run_node, &cfs_rq->tasks_timeline,
> >  				__entity_less, &min_vruntime_cb);
> >  }
>
> Otherwise,
>
> Reviewed-by: Dhaval Giani (AMD) <dhaval@...nis.ca>
>
> > --
> > 2.43.0
>
>
