Message-ID: <CALvZod73xvzi=8ZZ-vOXK-ssh54ARwYrizmv5sAa0xyQR=7KOw@mail.gmail.com>
Date: Mon, 7 Nov 2022 13:19:38 -0800
From: Shakeel Butt <shakeelb@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Marek Szyprowski <m.szyprowski@...sung.com>
Subject: Re: [PATCH] percpu_counter: add percpu_counter_sum_all interface
On Mon, Nov 7, 2022 at 1:05 PM Andrew Morton <akpm@...ux-foundation.org> wrote:
>
> On Sat, 5 Nov 2022 01:40:13 +0000 Shakeel Butt <shakeelb@...gle.com> wrote:
>
> > The percpu_counter is used for scenarios where performance is more
> > important than accuracy. For percpu_counter users who want more
> > accurate information in their slowpath, percpu_counter_sum is
> > provided, which traverses all the online CPUs to accumulate the data.
> > The reason it only needs to traverse online CPUs is that
> > percpu_counter implements a CPU offline callback which syncs the
> > local data of the offlined CPU.
> >
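(For context, the current slow sum, roughly as in lib/percpu_counter.c:
take the counter's lock and fold the per-CPU deltas of the *online*
CPUs into the shared count.)

	s64 __percpu_counter_sum(struct percpu_counter *fbc)
	{
		s64 ret;
		int cpu;
		unsigned long flags;

		raw_spin_lock_irqsave(&fbc->lock, flags);
		ret = fbc->count;
		for_each_online_cpu(cpu) {
			s32 *pcount = per_cpu_ptr(fbc->counters, cpu);
			ret += *pcount;
		}
		raw_spin_unlock_irqrestore(&fbc->lock, flags);
		return ret;
	}
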
> > However there is a small race window between the online-CPU traversal
> > of percpu_counter_sum and the CPU offline callback. The offline
> > callback has to traverse all the percpu_counters on the system to
> > flush the CPU's local data, which can take a while. During that time,
> > the CPU going offline has already been published as offline to all
> > readers, so while the offline callback is still running,
> > percpu_counter_sum can be called for a counter which has unflushed
> > state on that CPU. Since percpu_counter_sum only traverses online
> > CPUs, it will skip that CPU even though the offline callback may not
> > yet have flushed that counter's state for it.
>
> OK, got it, thanks.
>
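
To make the window concrete: the CPU-dead callback walks the global
list of every percpu_counter in the system, roughly like this
(simplified from lib/percpu_counter.c):

	static int percpu_counter_cpu_dead(unsigned int cpu)
	{
		struct percpu_counter *fbc;

		spin_lock_irq(&percpu_counters_lock);
		list_for_each_entry(fbc, &percpu_counters, list) {
			s32 *pcount;

			/* Fold the dead CPU's delta into the shared count. */
			raw_spin_lock(&fbc->lock);
			pcount = per_cpu_ptr(fbc->counters, cpu);
			fbc->count += *pcount;
			*pcount = 0;
			raw_spin_unlock(&fbc->lock);
		}
		spin_unlock_irq(&percpu_counters_lock);
		return 0;
	}

The dying CPU drops out of cpu_online_mask before this walk finishes,
so a concurrent percpu_counter_sum() on a not-yet-visited counter
misses that CPU's delta.
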
> > Normally this is not an issue because percpu_counter users can
> > tolerate some inaccuracy for a small time window. However, a new
> > user, i.e. mm_struct on the cleanup path, wants to check the exact
> > state of the percpu_counter through check_mm(). For such users, this
> > patch introduces percpu_counter_sum_all(), which traverses all
> > possible CPUs.
>
> And uses it in fork.c:check_mm()!
>
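
For reference, the new helper is the same walk as __percpu_counter_sum()
above, just over every possible CPU instead of only the online ones:

	s64 percpu_counter_sum_all(struct percpu_counter *fbc)
	{
		s64 ret;
		int cpu;
		unsigned long flags;

		raw_spin_lock_irqsave(&fbc->lock, flags);
		ret = fbc->count;
		/*
		 * for_each_possible_cpu() also covers offlined CPUs whose
		 * deltas the hotplug callback may not have folded in yet.
		 */
		for_each_possible_cpu(cpu) {
			s32 *pcount = per_cpu_ptr(fbc->counters, cpu);
			ret += *pcount;
		}
		raw_spin_unlock_irqrestore(&fbc->lock, flags);
		return ret;
	}
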
> > --- a/kernel/fork.c
> > +++ b/kernel/fork.c
> > @@ -756,7 +756,7 @@ static void check_mm(struct mm_struct *mm)
> > "Please make sure 'struct resident_page_types[]' is updated as well");
> >
> > for (i = 0; i < NR_MM_COUNTERS; i++) {
> > - long x = percpu_counter_sum(&mm->rss_stat[i]);
> > + long x = percpu_counter_sum_all(&mm->rss_stat[i]);
>
> check_mm() just became more expensive in some cases: nr_possible_cpus
> * 4 sums instead of nr_online_cpus * 4. I wonder if that's enough for
> people to start caring.
>
> check_mm() is presently non-optional and I'd be reluctant to change
> this, given how commonly we see the "BUG: Bad rss-counter state"
> getting reported (22 million hits in a google search!).
>
> We could save a ton of that cost by running percpu_counter_sum() first,
> then trying percpu_counter_sum_all() if percpu_counter_sum() indicated
> an error. This is only worth bothering about if the new check_mm()
> cost is a concern.
>
Yes, this makes much more sense. I had run hackbench on the original
patch and didn't see any significant difference. I will update this
and run some more perf benchmarks to make sure there is no regression
due to this change.
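
Concretely, I'm thinking of something like the following in check_mm()
(an untested sketch of your suggestion):

	for (i = 0; i < NR_MM_COUNTERS; i++) {
		long x = percpu_counter_sum(&mm->rss_stat[i]);

		/* Only pay the all-possible-CPUs cost when the cheap
		 * online-only sum looks wrong. */
		if (unlikely(x))
			x = percpu_counter_sum_all(&mm->rss_stat[i]);

		if (unlikely(x))
			pr_alert("BUG: Bad rss-counter state mm:%p type:%s val:%ld\n",
				 mm, resident_page_types[i], x);
	}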
thanks,
Shakeel