Date:   Tue, 13 Sep 2022 10:37:16 +0200
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     Dietmar Eggemann <dietmar.eggemann@....com>
Cc:     mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
        rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
        bristot@...hat.com, vschneid@...hat.com,
        linux-kernel@...r.kernel.org, zhangqiao22@...wei.com
Subject: Re: [PATCH 2/4] sched/fair: cleanup loop_max and loop_break

On Mon, 12 Sept 2022 at 10:45, Dietmar Eggemann
<dietmar.eggemann@....com> wrote:
>
> On 25/08/2022 14:27, Vincent Guittot wrote:
> > sched_nr_migrate_break is set to a fixed value and never changes, so we
> > can replace it with a define, SCHED_NR_MIGRATE_BREAK.
> >
> > Also, adjust SCHED_NR_MIGRATE_BREAK to be aligned with the init value
> > of sysctl_sched_nr_migrate, which can be initialized to different values.
> >
> > Then, use SCHED_NR_MIGRATE_BREAK to initialize sysctl_sched_nr_migrate.
> >
> > The behavior stays unchanged unless you modify sysctl_sched_nr_migrate
> > through debugfs.
>
> I don't quite get this sentence. Wouldn't the behavior potentially
> change if you changed sysctl_sched_nr_migrate before this patch too?

Yes, the behavior is different if you change sysctl_sched_nr_migrate.

With this patch, loop_break is now aligned with the
sysctl_sched_nr_migrate value, which was not the case for
CONFIG_PREEMPT_RT. For the latter, the behavior can change if you
increase sysctl_sched_nr_migrate at runtime, because there is now at
least one break, whereas before there was none as long as
sysctl_sched_nr_migrate stayed below 32.
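
To make that concrete, here is a small userspace model of the
loop_break/loop_max interaction. This is an illustrative sketch, not the
actual kernel/sched/fair.c code; count_breaks() and the numbers below are
assumptions made for the example.

/*
 * Illustrative userspace model of the "take a breather" logic in the
 * balance loop; count_breaks() is a made-up helper, not kernel code.
 */
#include <stdio.h>

#define SCHED_NR_MIGRATE_BREAK_RT 8	/* init value with CONFIG_PREEMPT_RT */

static int count_breaks(unsigned int loop_max, unsigned int loop_break,
			unsigned int step)
{
	unsigned int loop = 0;
	int breaks = 0;

	while (loop < loop_max) {
		loop++;
		if (loop > loop_break) {	/* breather point */
			loop_break += step;
			breaks++;
		}
	}
	return breaks;
}

int main(void)
{
	unsigned int nr_migrate = 16;	/* raised at runtime via debugfs */

	/* Before the patch on PREEMPT_RT: loop_break fixed at 32 -> no break. */
	printf("old RT behaviour: %d break(s)\n",
	       count_breaks(nr_migrate, 32, 32));

	/* After the patch: loop_break starts at 8 -> at least one break. */
	printf("new RT behaviour: %d break(s)\n",
	       count_breaks(nr_migrate, SCHED_NR_MIGRATE_BREAK_RT,
			    SCHED_NR_MIGRATE_BREAK_RT));
	return 0;
}

With nr_migrate raised to 16 at runtime, the old fixed loop_break of 32
never triggers (0 breaks), while the new loop_break starting at 8 triggers
once, which is the behavior change described above.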

>
> >
> > Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
> > ---
> >  kernel/sched/core.c  |  6 +-----
> >  kernel/sched/fair.c  | 11 ++++-------
> >  kernel/sched/sched.h |  6 ++++++
> >  3 files changed, 11 insertions(+), 12 deletions(-)
> >
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 64c08993221b..a21e817bdd1c 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -142,11 +142,7 @@ __read_mostly int sysctl_resched_latency_warn_once = 1;
> >   * Number of tasks to iterate in a single balance run.
> >   * Limited because this is done with IRQs disabled.
> >   */
>
>     ^^^
> Shouldn't this comment be removed as well?
>
> > -#ifdef CONFIG_PREEMPT_RT
> > -const_debug unsigned int sysctl_sched_nr_migrate = 8;
> > -#else
> > -const_debug unsigned int sysctl_sched_nr_migrate = 32;
> > -#endif
> > +const_debug unsigned int sysctl_sched_nr_migrate = SCHED_NR_MIGRATE_BREAK;
> >
> >  __read_mostly int scheduler_running;
>
> [...]
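
For context, the kernel/sched/sched.h hunk elided above presumably just
moves the CONFIG_PREEMPT_RT choice from core.c into the new define; an
assumed sketch of its shape, not the actual quoted hunk:

/* Assumed shape of the new define in kernel/sched/sched.h. */
#ifdef CONFIG_PREEMPT_RT
# define SCHED_NR_MIGRATE_BREAK 8
#else
# define SCHED_NR_MIGRATE_BREAK 32
#endif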
