Message-ID: <CA+tQmHBEvZTe9-7tReRHBw1tkTW7uGS5FhPZ41eZFOqtoyqkHQ@mail.gmail.com>
Date:   Tue, 18 May 2021 18:22:05 +0800
From:   chi wu <wuchi.zero@...il.com>
To:     Jan Kara <jack@...e.cz>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        linux-kernel@...r.kernel.org, tan.hu@....com.cn
Subject: Re: [PATCH] lib/flex_proportions.c: Use abs() when percpu_counter is negative.

Jan Kara <jack@...e.cz> wrote on Tue, May 18, 2021 at 4:59 PM:
>
> On Tue 18-05-21 11:42:53, chi wu wrote:
> > Chi Wu <wuchi.zero@...il.com> wrote on Mon, May 17, 2021 at 11:53 PM:
> > >
> > > The value of percpu_counter_read() may become negative after
> > > percpu_counter_sum() has run in fprop_reflect_period_percpu().
> > > With percpu_counter_read_positive() the variable 'num' in
> > > fprop_fraction_percpu() then reads as zero, whereas the abs() of
> > > percpu_counter_read() would be close to the correct value.
> > >
> >
> > I realized that I was wrong, as follows:
> > (a) The decay rule is broken: the negative value here represents
> > the difference introduced by decay.
> > (b) As the target events increase, the proportion of the event
> > first decreases to 0 and only then increases again, which is bad
> > logic:
> > 1. abs(-50) / abs(100) = 50%    // +50 events -> step 2
> > 2. abs(0)   / abs(150) = 0%     // +50 events -> step 3
> > 3. abs(50)  / abs(200) = 25%
> >
> > Anyway, percpu_counter_sum() costs a lot of performance, so maybe
> > we could win a little of that back. Could we add a variable to
> > store the decay value, and use it when percpu_counter_read() is
> > negative?
>
> The result of percpu_counter_read() is inherently inexact (but fast! ;). It
> can be up to number_of_cpus * counter_batch away from the real counter
> value. But do you observe any practical problems with this inaccuracy on
> your system? Sure, cache memory won't be split among devices exactly
> according to writeout proportion but that usually does not matter.
>
>                                                                 Honza
>
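
As a side note, the non-monotonic behaviour described in step (b)
above is easy to reproduce in plain userspace C. The program below is
only an illustration of the arithmetic (ordinary longs stand in for
the percpu counters):

#include <stdio.h>
#include <stdlib.h>

/*
 * abs()-based proportion from the walkthrough above: the local
 * counter starts at -50 (post-decay) and the global one at 100.
 * Adding events makes the proportion fall to 0% before it rises
 * again, i.e. more events temporarily mean a smaller share.
 */
int main(void)
{
	long local = -50, global = 100;
	int step;

	for (step = 1; step <= 3; step++) {
		printf("%d. abs(%ld) / abs(%ld) = %ld%%\n",
		       step, local, global,
		       100 * labs(local) / labs(global));
		local += 50;	/* 50 new events for this bdi ... */
		global += 50;	/* ... also counted in the global total */
	}
	return 0;
}

This prints 50%, 0% and 25% for the three steps, matching the numbers
quoted above.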

Thanks, got it.
I was just trying to optimize a fuse (with the strictlimit feature)
performance issue: the writing thread is paused and runs slowly when
the proportion of the fuse bdi is 0.
The behavior is expected, and one of the reasons is the batching
characteristic of percpu_counter: even after the pages are written
out, we may not see the real proportion value because of the
counter's fast-read trade-off. It's just a slight disappointment.
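
To make the batching effect concrete, here is a toy model of a
per-CPU counter in userspace C. The names (toy_counter, toy_add,
toy_read) and the constants are invented for illustration and do not
match the kernel implementation; the point is only that a fast read
which skips the unfolded per-CPU deltas can lag the exact value by up
to nr_cpus * batch, which is why the proportion can still read as 0
right after writeout:

#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS	4
#define BATCH	32

/* Toy model: a shared count plus per-CPU deltas not yet folded in. */
struct toy_counter {
	long count;		/* what the fast read sees */
	long pcpu[NR_CPUS];	/* pending per-CPU contributions */
};

/* Roughly like percpu_counter_add_batch(): fold only full batches. */
static void toy_add(struct toy_counter *c, int cpu, long amount)
{
	c->pcpu[cpu] += amount;
	if (labs(c->pcpu[cpu]) >= BATCH) {
		c->count += c->pcpu[cpu];
		c->pcpu[cpu] = 0;
	}
}

/* Roughly like percpu_counter_read(): fast, ignores pending deltas. */
static long toy_read(const struct toy_counter *c)
{
	return c->count;
}

int main(void)
{
	struct toy_counter c = { 0 };
	long exact = 0;
	int cpu, i;

	/* Each CPU adds just under one batch worth of events. */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		for (i = 0; i < BATCH - 1; i++) {
			toy_add(&c, cpu, 1);
			exact++;
		}

	printf("fast read: %ld, exact: %ld (bound: %d)\n",
	       toy_read(&c), exact, NR_CPUS * BATCH);
	return 0;
}

Here the fast read reports 0 although 124 events have happened, well
within the number_of_cpus * counter_batch bound Jan mentions.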

> > > ---
> > >  lib/flex_proportions.c | 8 ++++----
> > >  1 file changed, 4 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/lib/flex_proportions.c b/lib/flex_proportions.c
> > > index 451543937524..3ac79ca2c441 100644
> > > --- a/lib/flex_proportions.c
> > > +++ b/lib/flex_proportions.c
> > > @@ -147,7 +147,7 @@ void fprop_fraction_single(struct fprop_global *p,
> > >                 seq = read_seqcount_begin(&p->sequence);
> > >                 fprop_reflect_period_single(p, pl);
> > >                 num = pl->events;
> > > -               den = percpu_counter_read_positive(&p->events);
> > > +               den = abs(percpu_counter_read(&p->events));
> > >         } while (read_seqcount_retry(&p->sequence, seq));
> > >
> > >         /*
> > > @@ -209,7 +209,7 @@ static void fprop_reflect_period_percpu(struct fprop_global *p,
> > >                         val = percpu_counter_sum(&pl->events);
> > >
> > >                 percpu_counter_add_batch(&pl->events,
> > > -                       -val + (val >> (period-pl->period)), PROP_BATCH);
> > > +                       -val + (val >> (period - pl->period)), PROP_BATCH);
> > >         } else
> > >                 percpu_counter_set(&pl->events, 0);
> > >         pl->period = period;
> > > @@ -234,8 +234,8 @@ void fprop_fraction_percpu(struct fprop_global *p,
> > >         do {
> > >                 seq = read_seqcount_begin(&p->sequence);
> > >                 fprop_reflect_period_percpu(p, pl);
> > > -               num = percpu_counter_read_positive(&pl->events);
> > > -               den = percpu_counter_read_positive(&p->events);
> > > +               num = abs(percpu_counter_read(&pl->events));
> > > +               den = abs(percpu_counter_read(&p->events));
> > >         } while (read_seqcount_retry(&p->sequence, seq));
> > >
> > >         /*
> > > --
> > > 2.17.1
> > >
> --
> Jan Kara <jack@...e.com>
> SUSE Labs, CR
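
Finally, purely as a hypothetical sketch of the "store the decay
value" idea raised earlier in the thread (it was neither posted as a
patch nor endorsed), the compensation could look something like the
toy below. The field name 'decay' and the fallback logic are invented
here:

#include <stdio.h>

/*
 * Hypothetical only: remember the delta applied at the last decay so
 * that a fast read which went negative (per-CPU batches not folded in
 * yet) can be compensated instead of clamped to 0.
 */
struct toy_prop {
	long fast_count;	/* stands in for percpu_counter_read() */
	long decay;		/* invented: delta applied at last decay */
};

static long toy_read_events(const struct toy_prop *pl)
{
	long val = pl->fast_count;

	if (val < 0)
		val += pl->decay;	/* compensate, don't clamp */
	return val > 0 ? val : 0;
}

int main(void)
{
	/* Decay subtracted 50 while some events were still sitting in
	 * per-CPU batches, leaving the fast read at -30. */
	struct toy_prop pl = { .fast_count = -30, .decay = 50 };

	printf("clamped read    : 0\n");
	printf("compensated read: %ld\n", toy_read_events(&pl));
	return 0;
}

Whether such a compensated read would actually be more accurate than
the clamped one is exactly the question Jan's error-bound remark
raises.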
