Message-ID: <fcdaacf58296dfd826bfdbbd5ef3a06b6e05a456.camel@redhat.com>
Date: Mon, 07 Mar 2022 14:57:47 +0100
From: Nicolas Saenz Julienne <nsaenzju@...hat.com>
To: Mel Gorman <mgorman@...e.de>
Cc: akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, frederic@...nel.org, tglx@...utronix.de,
mtosatti@...hat.com, linux-rt-users@...r.kernel.org,
vbabka@...e.cz, cl@...ux.com, paulmck@...nel.org,
willy@...radead.org
Subject: Re: [PATCH 0/2] mm/page_alloc: Remote per-cpu lists drain support

Hi Mel,

Thanks for having a look at this.

On Thu, 2022-03-03 at 11:45 +0000, Mel Gorman wrote:
> On Tue, Feb 08, 2022 at 11:07:48AM +0100, Nicolas Saenz Julienne wrote:
> > This series replaces mm/page_alloc's per-cpu page lists drain mechanism with
> > one that allows accessing the lists remotely. Currently, only the local CPU is
> > permitted to change its per-cpu lists, so whenever a process needs them
> > drained, a drain task has to be queued to run on that CPU. This causes
> > problems for NOHZ_FULL CPUs and real-time systems that can't tolerate any
> > sort of interruption, and to a lesser extent inconveniences idle and
> > virtualised systems.
> >
>
> I know this has been sitting here for a long while. Last few weeks have
> not been fun.
No problem.
> > Note that this is not the first attempt at fixing this per-cpu page list
> > drain mechanism:
> > - The first attempt[1] tried to conditionally change the pagesets locking
> > scheme based on the NOHZ_FULL config. It was deemed hard to maintain, as the
> > NOHZ_FULL code path would be rarely tested. Also, it only solves the issue
> > for NOHZ_FULL setups, which isn't ideal.
> > - The second[2] unconditionally switched the local_locks to per-cpu
> > spinlocks. The performance degradation was too big.
> >
>
> For unrelated reasons I looked at using llist to avoid locks entirely. It
> turns out that's not possible; a lock is needed. We know "local_locks to
> per-cpu spinlocks" took a large penalty, so I considered alternatives for
> how a lock could be used. I found it's possible to both remote drain
> the lists and avoid the disable/enable of IRQs entirely, as long as a
> preempting IRQ is willing to take the zone lock instead (which should be
> very rare). The IRQ part is a bit hairy though: softirqs are also a
> problem, preempt-rt needs different rules, and the llist has to sort PCP
> refills, which might be a net loss in total. However, the remote draining
> may still be interesting. The full series is at
> https://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git/ mm-pcpllist-v1r2
I'll have a proper look at it soon.
Regards,
--
Nicolás Sáenz