Message-ID: <354ea7554096b0745d4f947685add33c8c8d2d62.camel@mediatek.com>
Date:   Tue, 12 Apr 2022 17:28:10 +0800
From:   Kuyo Chang <kuyo.chang@...iatek.com>
To:     Vincent Guittot <vincent.guittot@...aro.org>
CC:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        "Ben Segall" <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        "Daniel Bristot de Oliveira" <bristot@...hat.com>,
        Matthias Brugger <matthias.bgg@...il.com>,
        <wsd_upstream@...iatek.com>, <linux-kernel@...r.kernel.org>,
        <linux-arm-kernel@...ts.infradead.org>,
        <linux-mediatek@...ts.infradead.org>
Subject: Re: [PATCH 1/1] sched/pelt: Refine the enqueue_load_avg calculate
 method

On Tue, 2022-04-12 at 10:58 +0200, Vincent Guittot wrote:
> On Tuesday, 12 April 2022 at 10:51:23 (+0800), Kuyo Chang wrote:
> > On Mon, 2022-04-11 at 10:39 +0200, Vincent Guittot wrote:
> > > On Mon, 11 Apr 2022 at 08:17, Kuyo Chang <kuyo.chang@...iatek.com> wrote:
> > > > 
> > > > From: kuyo chang <kuyo.chang@...iatek.com>
> > > > 
> > > > I hit the warning message in cfs_rq_is_decayed() at the code below.
> > > > 
> > > > SCHED_WARN_ON(cfs_rq->avg.load_avg ||
> > > >                     cfs_rq->avg.util_avg ||
> > > >                     cfs_rq->avg.runnable_avg)
> > > > 
> > > > The call trace is as follows:
> > > > 
> > > > Call trace:
> > > > __update_blocked_fair
> > > > update_blocked_averages
> > > > newidle_balance
> > > > pick_next_task_fair
> > > > __schedule
> > > > schedule
> > > > pipe_read
> > > > vfs_read
> > > > ksys_read
> > > > 
> > > > After code analysis and some debug messages, I found there is a
> > > > corner case in attach_entity_load_avg() which causes load_sum to be
> > > > zero while load_avg is not.
> > > > Consider se_weight() is 88761 according to the sched_prio_to_weight
> > > > table, and assume get_pelt_divider() is 47742 and se->avg.load_avg is 1.
> > > > The following calculation then makes se->avg.load_sum become zero:
> > > > se->avg.load_sum =
> > > >         div_u64(se->avg.load_avg * se->avg.load_sum, se_weight(se));
> > > > se->avg.load_sum = 1 * 47742 / 88761 = 0.
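
For reference, the truncation above can be reproduced with a minimal userspace
program (div_u64() is approximated here by plain 64-bit division; the constants
are the ones quoted above):

#include <stdint.h>
#include <stdio.h>

/* Userspace stand-in for the kernel's div_u64(): 64-bit dividend, 32-bit divisor. */
static uint64_t div_u64(uint64_t dividend, uint32_t divisor)
{
	return dividend / divisor;
}

int main(void)
{
	uint64_t load_avg = 1;      /* se->avg.load_avg in the example above     */
	uint64_t load_sum = 47742;  /* get_pelt_divider(), already in load_sum   */
	uint32_t weight   = 88761;  /* se_weight(se) from sched_prio_to_weight   */

	/* Same expression as in attach_entity_load_avg(): truncates to 0. */
	uint64_t new_load_sum = div_u64(load_avg * load_sum, weight);

	printf("load_sum = %llu\n", (unsigned long long)new_load_sum); /* prints 0 */
	return 0;
}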
> > > 
> > > The root problem is there: se->avg.load_sum must not be null if
> > > se->avg.load_avg is not null, because the correct relation between
> > > _avg and _sum is:
> > > 
> > > load_avg = weight * load_sum / divider.
> > > 
> > > So the fix should be in attach_entity_load_avg(), and probably the
> > > below is enough:
> > > 
> > > se->avg.load_sum = div_u64(se->avg.load_avg * se->avg.load_sum, se_weight(se)) + 1;
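
For illustration, plugging the numbers from this thread into the suggested line:

    div_u64(1 * 47742, 88761) + 1 = 0 + 1 = 1

so load_sum can no longer end up as zero while load_avg is non-zero.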
> > 
> > Thanks for your kind suggestion.
> > Wouldn't the +1 make the calculation of load_sum an overestimate?
> > Does the code below make sense as a fix for the corner case?
> > 
> > --- 
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -3832,7 +3832,8 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
> >  	se->avg.load_sum = divider;
> >  	if (se_weight(se)) {
> >  		se->avg.load_sum =
> > -			div_u64(se->avg.load_avg * se->avg.load_sum, se_weight(se));
> > +			(se->avg.load_avg * se->avg.load_sum > se_weight(se)) ?
> > +			div_u64(se->avg.load_avg * se->avg.load_sum, se_weight(se)) : 1;
> >  	}
> >  
> >  	enqueue_load_avg(cfs_rq, se);
> > -- 
> > 2.18.0
> 
> In this case, the below is easier to read
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 1658a9428d96..2c685474db23 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3836,10 +3836,12 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
> 
>         se->avg.runnable_sum = se->avg.runnable_avg * divider;
> 
> -       se->avg.load_sum = divider;
> -       if (se_weight(se)) {
> +       se->avg.load_sum = se->avg.load_avg * divider;
> +       if (se_weight(se) < se->avg.load_sum) {
>                 se->avg.load_sum =
> -                       div_u64(se->avg.load_avg * se->avg.load_sum, se_weight(se));
> +                       div_u64(se->avg.load_sum, se_weight(se));
> +       } else {
> +               se->avg.load_sum = 1;
>         }
> 
>         enqueue_load_avg(cfs_rq, se);

It really is easier to read.
Thanks for your kind suggestion.
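
For clarity, a standalone userspace sketch of what the diff above computes,
applied to the numbers from this thread (simplified types, div_u64() replaced
by plain division; not the kernel code itself):

#include <stdint.h>
#include <stdio.h>

/*
 * Sketch of the load_sum recomputation in the diff above: derive load_sum
 * from load_avg * divider, divide by the weight when that is meaningful,
 * and otherwise floor load_sum at 1 so it cannot be zero.
 */
static uint64_t recompute_load_sum(uint64_t load_avg, uint32_t weight, uint32_t divider)
{
	uint64_t load_sum = load_avg * divider;

	if (weight < load_sum)
		return load_sum / weight;   /* div_u64() in the kernel */

	return 1;
}

int main(void)
{
	/* load_avg = 1, se_weight = 88761, divider = 47742 (the corner case). */
	uint64_t load_sum = recompute_load_sum(1, 88761, 47742);

	/* 88761 is not smaller than 1 * 47742, so load_sum becomes 1, not 0. */
	printf("load_sum = %llu\n", (unsigned long long)load_sum);
	return 0;
}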


> 
> 
> > 
> > 
> > > > 
> > > > Then enqueue_load_avg() runs the code below:
> > > > cfs_rq->avg.load_avg += se->avg.load_avg;
> > > > cfs_rq->avg.load_sum += se_weight(se) * se->avg.load_sum;
> > > > 
> > > > Then the load_avg for cfs_rq will be 1 while the load_sum for
> > > > cfs_rq is 0, so it hits the warning message.
> > > > 
> > > > So I referred to the following commit and did a similar thing in
> > > > enqueue_load_avg():
> > > > sched/pelt: Relax the sync of load_sum with load_avg
> > > > 
> > > > After long-term testing, the kernel warning was gone and the
> > > > system runs as well as before.
> > > > 
> > > > Signed-off-by: kuyo chang <kuyo.chang@...iatek.com>
> > > > ---
> > > >  kernel/sched/fair.c | 6 ++++--
> > > >  1 file changed, 4 insertions(+), 2 deletions(-)
> > > > 
> > > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > > index d4bd299d67ab..30d8b6dba249 100644
> > > > --- a/kernel/sched/fair.c
> > > > +++ b/kernel/sched/fair.c
> > > > @@ -3074,8 +3074,10 @@ account_entity_dequeue(struct cfs_rq *cfs_rq, struct sched_entity *se)
> > > >  static inline void
> > > >  enqueue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
> > > >  {
> > > > -       cfs_rq->avg.load_avg += se->avg.load_avg;
> > > > -       cfs_rq->avg.load_sum += se_weight(se) * se->avg.load_sum;
> > > > +       add_positive(&cfs_rq->avg.load_avg, se->avg.load_avg);
> > > > +       add_positive(&cfs_rq->avg.load_sum, se_weight(se) * se->avg.load_sum);
> > > > +       cfs_rq->avg.load_sum = max_t(u32, cfs_rq->avg.load_sum,
> > > > +                                         cfs_rq->avg.load_avg * PELT_MIN_DIVIDER);
> > > >  }
> > > > 
> > > >  static inline void
> > > > --
> > > > 2.18.0
> > > > 
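
For reference, a userspace sketch of what the max_t() clamp in the quoted patch
guarantees; the PELT_MIN_DIVIDER value and the plain additions here are
stand-ins (the kernel's add_positive() additionally guards against underflow):

#include <stdint.h>
#include <stdio.h>

/* Stand-in mirroring the kernel's PELT_MIN_DIVIDER (LOAD_AVG_MAX - 1024); illustrative only. */
#define PELT_MIN_DIVIDER	(47742 - 1024)

struct avg_sketch {
	uint64_t load_avg;
	uint64_t load_sum;
};

/* Sketch of the patched enqueue_load_avg(): plain adds instead of add_positive(). */
static void enqueue_load_avg_sketch(struct avg_sketch *cfs, uint32_t weight,
				    const struct avg_sketch *se)
{
	cfs->load_avg += se->load_avg;
	cfs->load_sum += (uint64_t)weight * se->load_sum;

	/* Clamp so a non-zero load_avg always comes with a non-zero load_sum. */
	if (cfs->load_sum < cfs->load_avg * PELT_MIN_DIVIDER)
		cfs->load_sum = cfs->load_avg * PELT_MIN_DIVIDER;
}

int main(void)
{
	struct avg_sketch cfs = { 0, 0 };
	struct avg_sketch se  = { 1, 0 };	/* the corner case: load_avg 1, load_sum 0 */

	enqueue_load_avg_sketch(&cfs, 88761, &se);

	/* Without the clamp load_sum would stay 0; with it, it becomes 46718. */
	printf("load_avg = %llu, load_sum = %llu\n",
	       (unsigned long long)cfs.load_avg, (unsigned long long)cfs.load_sum);
	return 0;
}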
