Date:   Wed, 12 Oct 2022 17:07:34 +0100
From:   Qais Yousef <qais.yousef@....com>
To:     Vincent Guittot <vincent.guittot@...aro.org>
Cc:     mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
        dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
        mgorman@...e.de, bristot@...hat.com, vschneid@...hat.com,
        linux-kernel@...r.kernel.org, parth@...ux.ibm.com,
        chris.hyser@...cle.com, valentin.schneider@....com,
        patrick.bellasi@...bug.net, David.Laight@...lab.com,
        pjt@...gle.com, pavel@....cz, tj@...nel.org, qperret@...gle.com,
        tim.c.chen@...ux.intel.com, joshdon@...gle.com, timj@....org
Subject: Re: [PATCH v5 5/7] sched/fair: Add sched group latency support

On 10/12/22 17:42, Vincent Guittot wrote:
> On Wed, 12 Oct 2022 at 16:22, Qais Yousef <qais.yousef@....com> wrote:
> >
> > On 09/25/22 16:39, Vincent Guittot wrote:
> > > A task can set its latency priority with sched_setattr(), which is then
> > > used to set the latency offset of its sched_entity, but sched group
> > > entities still have the default latency offset value.
> > >
> > > Add a latency.nice field to the cpu cgroup controller to set the latency
> > > priority of the group, similarly to sched_setattr(). The latency priority
> > > is then used to set the offset of the group's sched_entities.
> > >
> > > Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
> > > ---
> > >  Documentation/admin-guide/cgroup-v2.rst |  8 ++++
> > >  kernel/sched/core.c                     | 53 +++++++++++++++++++++++++
> > >  kernel/sched/fair.c                     | 33 +++++++++++++++
> > >  kernel/sched/sched.h                    |  4 ++
> > >  4 files changed, 98 insertions(+)
> > >
> > > diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
> > > index be4a77baf784..d8ae7e411f9c 100644
> > > --- a/Documentation/admin-guide/cgroup-v2.rst
> > > +++ b/Documentation/admin-guide/cgroup-v2.rst
> > > @@ -1095,6 +1095,14 @@ All time durations are in microseconds.
> > >          values similar to the sched_setattr(2). This maximum utilization
> > >          value is used to clamp the task specific maximum utilization clamp.
> > >
> > > +  cpu.latency.nice
> > > +     A read-write single value file which exists on non-root
> > > +     cgroups.  The default is "0".
> > > +
> > > +     The nice value is in the range [-20, 19].
> > > +
> > > +     This interface file allows reading and setting latency using the
> > > +     same values used by sched_setattr(2).
> >
> > I still don't understand how tasks will inherit the latency_nice value from
> > cgroups they're attached to.
> 
> The behavior is the same as for the sched_entity weight. The latency is
> applied to the sched_entity of the group.

But this is exactly the point I am raising: not all users of this value behave
the same way as the weight does.

In EAS we just look at the effective value of the task (see uclamp for an
example). We don't care about the group value except for calculating how it
impacts the task's effective value.
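
To make that concrete, here's a rough standalone sketch (not code from this
series; the helper name and the max() combining rule are just placeholders) of
what I mean by an "effective" per-task value, in the way uclamp derives one
from the task's own request and its task group:

	/*
	 * Standalone sketch, not kernel code. The combining rule (max())
	 * is only a placeholder -- how the group value should restrict or
	 * scale the task's own value is exactly the part that's unclear
	 * to me in this series.
	 */
	#include <stdio.h>

	static int effective_latency_nice(int task_nice, int group_nice)
	{
		/* placeholder: the group value caps the task's request */
		return task_nice > group_nice ? task_nice : group_nice;
	}

	int main(void)
	{
		/* task asks for -20 (most latency sensitive), group is at 0 */
		printf("%d\n", effective_latency_nice(-20, 0)); /* prints 0 */
		return 0;
	}

For the weight, the group's sched_entity is what actually gets enqueued, so
applying the value there is enough; an EAS-style user would need something
like the per-task effective value above.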

Or am I missing something here?


Cheers

--
Qais Yousef
