Message-ID: <d703071084dadb477b8248b041d0d1aa730d65cd.camel@surriel.com>
Date:   Wed, 28 Aug 2019 19:18:58 -0400
From:   Rik van Riel <riel@...riel.com>
To:     Vincent Guittot <vincent.guittot@...aro.org>
Cc:     linux-kernel <linux-kernel@...r.kernel.org>,
        Kernel Team <kernel-team@...com>, Paul Turner <pjt@...gle.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Morten Rasmussen <morten.rasmussen@....com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Mel Gorman <mgorman@...hsingularity.net>
Subject: Re: [PATCH 08/15] sched,fair: simplify timeslice length code

On Wed, 2019-08-28 at 19:32 +0200, Vincent Guittot wrote:
> On Thu, 22 Aug 2019 at 04:18, Rik van Riel <riel@...riel.com> wrote:
> > The idea behind __sched_period makes sense, but the results do not
> > always.
> > 
> > When a CPU has one high priority task and a large number of low
> > priority tasks, __sched_period will return a value larger than
> > sysctl_sched_latency, and the one high priority task may end up
> > getting a timeslice all for itself that is also much larger than
> > sysctl_sched_latency.
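(For reference, a standalone sketch of the __sched_period() logic under
discussion; the constants mirror the untuned defaults of 6 ms latency,
0.75 ms minimum granularity, and sched_nr_latency = 8, and this is a
compilable illustration rather than the kernel source verbatim:)

	#include <stdio.h>

	static unsigned long long sysctl_sched_latency         = 6000000ULL; /* 6 ms, in ns */
	static unsigned long long sysctl_sched_min_granularity =  750000ULL; /* 0.75 ms */
	static unsigned long      sched_nr_latency             = 8;

	/* With more runnable tasks than sched_nr_latency, the period
	 * stretches to nr_running * min_granularity and so exceeds
	 * sysctl_sched_latency -- the case described above. */
	static unsigned long long sched_period(unsigned long nr_running)
	{
		if (nr_running > sched_nr_latency)
			return nr_running * sysctl_sched_min_granularity;
		return sysctl_sched_latency;
	}

	int main(void)
	{
		printf("period with  2 tasks: %llu ns\n", sched_period(2));  /*  6 ms   */
		printf("period with 50 tasks: %llu ns\n", sched_period(50)); /* 37.5 ms */
		return 0;
	}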
> 
> Note that unless you enable sched_feat(HRTICK), sched_slice is mainly
> used to decide how quickly we preempt the running task at tick time;
> a newly woken task can still preempt it before that.
> 
> > The low priority tasks will have their time slices rounded up to
> > sysctl_sched_min_granularity, resulting in an even larger scheduling
> > latency than targeted by __sched_period.
> 
> Won't this change break the fairness between an always running task
> and a short sleeping one?

In what way?

The vruntime for the always running task will continue
to advance the same way it always has.
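
Concretely, vruntime advances by wall-clock runtime scaled inversely
with the task's load weight. A sketch, simplified from calc_delta_fair()
with NICE_0_LOAD = 1024 (not the kernel code itself):

	/* vruntime moves faster for lighter (lower priority) tasks
	 * and slower for heavier ones. */
	static unsigned long long vruntime_delta(unsigned long long delta_exec_ns,
						 unsigned long weight)
	{
		return delta_exec_ns * 1024ULL / weight; /* NICE_0_LOAD == 1024 */
	}

An always running task accumulates these deltas at the same rate no
matter how its runtime is carved into slices, which is why fairness
against it is unchanged.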

> > Simplify the code by ripping out __sched_period and always taking
> > fractions of sysctl_sched_latency.
> > 
> > If a high priority task ends up getting a "too small" time slice
> > compared to low priority tasks, the vruntime scaling ensures that it
> > will simply get scheduled more frequently than low priority tasks.
> 
> Won't this increase the number of context switches?

It should actually decrease the number of context
switches. If a nice +19 task gets a longer time slice
than it would today, its vruntime will be advanced by
more than sysctl_sched_latency, and it will not get
to run again until another task has caught up with its
vruntime.
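
To put numbers on it, here is a worked example using the weights from
the kernel's sched_prio_to_weight[] table (nice 0 -> 1024, nice +19 ->
15); the 6 ms slice is just an illustrative figure:

	#include <stdio.h>

	int main(void)
	{
		unsigned long long slice_ns = 6000000ULL; /* illustrative 6 ms slice */

		/* vruntime advance = wall clock * NICE_0_LOAD / weight */
		unsigned long long nice0  = slice_ns * 1024 / 1024; /* 6 ms */
		unsigned long long nice19 = slice_ns * 1024 / 15;   /* ~409.6 ms */

		printf("nice  0 vruntime advance: %llu ns\n", nice0);
		printf("nice 19 vruntime advance: %llu ns\n", nice19);

		/* The nice +19 task's vruntime leaps roughly 68x further per
		 * slice, so the nice 0 task runs many slices in a row before
		 * the nice +19 task is picked again. */
		return 0;
	}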

That means the regular (or high) priority task that
shares the CPU with that nice +19 task might get
several time slices in a row until the nice +19 task
gets to run again.

What am I overlooking?

-- 
All Rights Reversed.
