Message-ID: <CA+tQmHAE6vK17Xghi9YT7+7r4oFJuQ86cU8m5MzMs6-D0G=DBQ@mail.gmail.com>
Date:   Tue, 18 May 2021 11:42:53 +0800
From:   chi wu <wuchi.zero@...il.com>
To:     Andrew Morton <akpm@...ux-foundation.org>
Cc:     Jan Kara <jack@...e.cz>, linux-kernel@...r.kernel.org,
        tan.hu@....com.cn
Subject: Re: [PATCH] lib/flex_proportions.c: Use abs() when percpu_counter is negative.

Chi Wu <wuchi.zero@...il.com> wrote on Mon, May 17, 2021 at 11:53 PM:
>
> The value of percpu_counter_read() may become negative after
> running percpu_counter_sum() in fprop_reflect_period_percpu().
> The variable 'num' in fprop_fraction_percpu() will then be zero
> when using percpu_counter_read_positive(), whereas using the
> absolute value of percpu_counter_read() would be close to the
> correct value.
>

I realized that I was wrong, for the following reasons:
(a) The decay rule gets broken: the negative value here really represents
the difference introduced by the decay.
(b) As events for the target keep increasing, the proportion of the event
first decreases to 0 and then increases again, which is bad logic.
For example, with 50 more events at each step:
1. abs(-50) / abs(100) = 50%   // +50 events -> case 2
2. abs(0)   / abs(150) =  0%   // +50 events -> case 3
3. abs(50)  / abs(200) = 25%
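
For reference, the same arithmetic as a trivial standalone program
(userspace only, not kernel code; the counter snapshots are just the
example values above):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* hypothetical counter snapshots after each batch of +50 events */
	long num[] = { -50, 0, 50 };
	long den[] = { 100, 150, 200 };

	/* with abs() the reported fraction drops to 0 and then rises again */
	for (int i = 0; i < 3; i++)
		printf("%d. abs(%ld) / abs(%ld) = %ld%%\n", i + 1,
		       num[i], den[i], labs(num[i]) * 100 / labs(den[i]));
	return 0;
}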

Anyway, percpu_counter_sum() costs quite a lot of performance, and maybe
we could win a little of that back here. Could we add a variable to store
the decay value, and use it when percpu_counter_read() is negative?
A rough sketch of what I mean is below.
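
Something along these lines, perhaps (completely untested, just to
illustrate the idea; the new field and its name are made up, and I am
ignoring the single-counter variant and overflow concerns):

	/* hypothetical new field in struct fprop_local_percpu */
	s64 decayed;	/* counter value right after the last decay */

	/* in fprop_reflect_period_percpu(), next to the existing
	 * percpu_counter_add_batch() that applies the decay: */
	pl->decayed = val >> (period - pl->period);

	/* in fprop_fraction_percpu(): if percpu_counter_read(&pl->events)
	 * comes back negative, fall back to pl->decayed as the numerator
	 * instead of paying for another percpu_counter_sum(). */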

Thanks.

> Signed-off-by: Chi Wu <wuchi.zero@...il.com>
> ---
>  lib/flex_proportions.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/lib/flex_proportions.c b/lib/flex_proportions.c
> index 451543937524..3ac79ca2c441 100644
> --- a/lib/flex_proportions.c
> +++ b/lib/flex_proportions.c
> @@ -147,7 +147,7 @@ void fprop_fraction_single(struct fprop_global *p,
>                 seq = read_seqcount_begin(&p->sequence);
>                 fprop_reflect_period_single(p, pl);
>                 num = pl->events;
> -               den = percpu_counter_read_positive(&p->events);
> +               den = abs(percpu_counter_read(&p->events));
>         } while (read_seqcount_retry(&p->sequence, seq));
>
>         /*
> @@ -209,7 +209,7 @@ static void fprop_reflect_period_percpu(struct fprop_global *p,
>                         val = percpu_counter_sum(&pl->events);
>
>                 percpu_counter_add_batch(&pl->events,
> -                       -val + (val >> (period-pl->period)), PROP_BATCH);
> +                       -val + (val >> (period - pl->period)), PROP_BATCH);
>         } else
>                 percpu_counter_set(&pl->events, 0);
>         pl->period = period;
> @@ -234,8 +234,8 @@ void fprop_fraction_percpu(struct fprop_global *p,
>         do {
>                 seq = read_seqcount_begin(&p->sequence);
>                 fprop_reflect_period_percpu(p, pl);
> -               num = percpu_counter_read_positive(&pl->events);
> -               den = percpu_counter_read_positive(&p->events);
> +               num = abs(percpu_counter_read(&pl->events));
> +               den = abs(percpu_counter_read(&p->events));
>         } while (read_seqcount_retry(&p->sequence, seq));
>
>         /*
> --
> 2.17.1
>
