Message-ID: <20210518085955.GB28667@quack2.suse.cz>
Date: Tue, 18 May 2021 10:59:55 +0200
From: Jan Kara <jack@...e.cz>
To: chi wu <wuchi.zero@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Jan Kara <jack@...e.cz>,
linux-kernel@...r.kernel.org, tan.hu@....com.cn
Subject: Re: [PATCH] lib/flex_proportions.c: Use abs() when percpu_counter is
negative.
On Tue 18-05-21 11:42:53, chi wu wrote:
> Chi Wu <wuchi.zero@...il.com> wrote on Mon, May 17, 2021 at 11:53 PM:
> >
> > The value returned by percpu_counter_read() may become negative
> > after percpu_counter_sum() has run in fprop_reflect_period_percpu().
> > With percpu_counter_read_positive(), the variable 'num' in
> > fprop_fraction_percpu() then reads as zero, whereas the absolute
> > value of percpu_counter_read() would be close to the correct value.
> >
>
> I realized that I was wrong, as follows:
> (a) it breaks the decay rule; the negative value here just reflects
> the decay difference.
> (b) as the target events increase, the proportion of the event first
> drops to 0 and then rises again, which is wrong:
> 1. abs(-50) / abs(100) = 50%   // +50 events, go to 2
> 2. abs(0)   / abs(150) = 0%    // +50 events, go to 3
> 3. abs(50)  / abs(200) = 25%
>
> Anyway, percpu_counter_sum() costs quite a lot of performance, so
> maybe we could win a little of that back. Could we add a variable to
> store the decayed value, and use it when percpu_counter_read() is
> negative?
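
To make (b) above concrete: the short userspace toy below is only an
illustration, not kernel code. It starts from the approximate reads used
in the example (local = -50, global = 100), adds 50 events per step to
both, and prints the 50% -> 0% -> 25% sequence that abs() produces.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* Approximate counter reads from the example above: the local
	 * counter starts at -50, the global one at 100, and every step
	 * adds 50 events to both. */
	long local = -50, global = 100;

	for (int step = 1; step <= 3; step++) {
		printf("%d. abs(%ld) / abs(%ld) = %ld%%\n", step,
		       local, global, labs(local) * 100 / labs(global));
		local += 50;
		global += 50;
	}
	return 0;	/* prints 50%, 0%, 25% */
}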
The result of percpu_counter_read() is inherently inexact (but fast! ;). It
can be up to number_of_cpus * counter_batch away from the real counter
value. But do you observe any practical problems with this inaccuracy on
your system? Sure, cache memory won't be split among devices exactly
according to writeout proportion but that usually does not matter.
Honza
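
A rough userspace model of where that inaccuracy comes from (only a
sketch; the struct, helpers and batch size below are made up for
illustration and are not the kernel implementation): each CPU accumulates
a local delta and folds it into the shared count only once the delta
reaches the batch size, so a plain read can lag the exact sum by up to
number_of_cpus * batch, and can even be transiently negative while the
exact sum is not.

#include <stdio.h>

#define NR_CPUS	4
#define BATCH	32

struct toy_percpu_counter {
	long count;		/* shared value, what a plain read returns */
	long pcpu[NR_CPUS];	/* per-CPU deltas not folded in yet */
};

static void toy_add(struct toy_percpu_counter *c, int cpu, long amount)
{
	c->pcpu[cpu] += amount;
	if (c->pcpu[cpu] >= BATCH || c->pcpu[cpu] <= -BATCH) {
		c->count += c->pcpu[cpu];	/* fold into shared count */
		c->pcpu[cpu] = 0;
	}
}

static long toy_read(const struct toy_percpu_counter *c)
{
	return c->count;			/* fast but approximate */
}

static long toy_sum(const struct toy_percpu_counter *c)
{
	long sum = c->count;

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		sum += c->pcpu[cpu];		/* slow but exact */
	return sum;
}

int main(void)
{
	struct toy_percpu_counter c = { 0 };

	/* Every CPU stays just below the batch, so nothing gets folded
	 * in and the plain read lags the exact sum by
	 * NR_CPUS * (BATCH - 1). */
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		toy_add(&c, cpu, BATCH - 1);

	printf("read = %ld, exact sum = %ld\n", toy_read(&c), toy_sum(&c));
	return 0;
}

With every CPU sitting just below the batch, this prints read = 0 while
the exact sum is 124.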
> > ---
> > lib/flex_proportions.c | 8 ++++----
> > 1 file changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/lib/flex_proportions.c b/lib/flex_proportions.c
> > index 451543937524..3ac79ca2c441 100644
> > --- a/lib/flex_proportions.c
> > +++ b/lib/flex_proportions.c
> > @@ -147,7 +147,7 @@ void fprop_fraction_single(struct fprop_global *p,
> >  		seq = read_seqcount_begin(&p->sequence);
> >  		fprop_reflect_period_single(p, pl);
> >  		num = pl->events;
> > -		den = percpu_counter_read_positive(&p->events);
> > +		den = abs(percpu_counter_read(&p->events));
> >  	} while (read_seqcount_retry(&p->sequence, seq));
> >
> >  	/*
> > @@ -209,7 +209,7 @@ static void fprop_reflect_period_percpu(struct fprop_global *p,
> >  		s64 val = percpu_counter_sum(&pl->events);
> >
> >  		percpu_counter_add_batch(&pl->events,
> > -			-val + (val >> (period-pl->period)), PROP_BATCH);
> > +			-val + (val >> (period - pl->period)), PROP_BATCH);
> >  	} else
> >  		percpu_counter_set(&pl->events, 0);
> >  	pl->period = period;
> > @@ -234,8 +234,8 @@ void fprop_fraction_percpu(struct fprop_global *p,
> >  	do {
> >  		seq = read_seqcount_begin(&p->sequence);
> >  		fprop_reflect_period_percpu(p, pl);
> > -		num = percpu_counter_read_positive(&pl->events);
> > -		den = percpu_counter_read_positive(&p->events);
> > +		num = abs(percpu_counter_read(&pl->events));
> > +		den = abs(percpu_counter_read(&p->events));
> >  	} while (read_seqcount_retry(&p->sequence, seq));
> >
> >  	/*
> > --
> > 2.17.1
> >
--
Jan Kara <jack@...e.com>
SUSE Labs, CR