Message-ID: <CAKfTPtAYS3OV+udSncqVWHh7+PCQxL-_pnSGCJqJMr_nyTOXUA@mail.gmail.com>
Date: Thu, 10 Jul 2025 12:40:12 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Madadi Vineeth Reddy <vineethr@...ux.ibm.com>
Cc: mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com, 
	dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com, 
	mgorman@...e.de, vschneid@...hat.com, dhaval@...nis.ca, 
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 4/6] sched/fair: Limit run to parity to the min slice
 of enqueued entities

On Thu, 10 Jul 2025 at 09:00, Madadi Vineeth Reddy
<vineethr@...ux.ibm.com> wrote:
>
> Hi Vincent,
>
> On 08/07/25 22:26, Vincent Guittot wrote:
> > Run to parity ensures that current will get a chance to run its full
> > slice in one go but this can create large latency and/or lag for
> > entities with shorter slice that have exhausted their previous slice
> > and wait to run their next slice.
> >
> > Clamp the run to parity to the shortest slice of all enqueued entities.
> >
> > Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
> > ---
> >  kernel/sched/fair.c | 12 ++++++++----
> >  1 file changed, 8 insertions(+), 4 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 7e82b357763a..85238f2e026a 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -884,16 +884,20 @@ struct sched_entity *__pick_first_entity(struct cfs_rq *cfs_rq)
> >  /*
> >   * Set the vruntime up to which an entity can run before looking
> >   * for another entity to pick.
> > - * In case of run to parity, we protect the entity up to its deadline.
> > + * In case of run to parity, we use the shortest slice of the enqueued
> > + * entities to set the protected period.
> >   * When run to parity is disabled, we give a minimum quantum to the running
> >   * entity to ensure progress.
> >   */
>
> If I set my task’s custom slice to a larger value but another task has a smaller slice,
> this change will cap my protected window to the smaller slice. Does that mean my custom
> slice is no longer honored?

What do you mean by honored? EEVDF never mandates that a request of
size slice will be served in one go. The slice mainly defines the
deadline and orders the entities; it does not guarantee that your
slice always runs in one go. Run to parity tries to minimize the
number of context switches between runnable tasks, but it must not
break fairness and the lag theorem. So if your task A has a slice of
10ms and task B wakes up with a slice of 1ms, B will preempt A because
its deadline is earlier. If task B still wants to run after its slice
is exhausted, it becomes ineligible and task A will run until task B
becomes eligible again, which takes as long as task B's slice.

>
> Thanks,
> Madadi Vineeth Reddy
>
> >  static inline void set_protect_slice(struct sched_entity *se)
> >  {
> > -     u64 quantum = se->slice;
> > +     u64 quantum;
> >
> > -     if (!sched_feat(RUN_TO_PARITY))
> > -             quantum = min(quantum, normalized_sysctl_sched_base_slice);
> > +     if (sched_feat(RUN_TO_PARITY))
> > +             quantum = cfs_rq_min_slice(cfs_rq_of(se));
> > +     else
> > +             quantum = normalized_sysctl_sched_base_slice;
> > +     quantum = min(quantum, se->slice);
> >
> >       if (quantum != se->slice)
> >               se->vprot = min_vruntime(se->deadline, se->vruntime + calc_delta_fair(quantum, se));
>
