Message-ID: <167d30f439d171912b1ef584f20219e67a009de8.camel@redhat.com>
Date: Fri, 13 May 2022 17:19:18 +0200
From: Nicolas Saenz Julienne <nsaenzju@...hat.com>
To: Mel Gorman <mgorman@...hsingularity.net>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Marcelo Tosatti <mtosatti@...hat.com>,
Vlastimil Babka <vbabka@...e.cz>,
Michal Hocko <mhocko@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>
Subject: Re: [PATCH 6/6] mm/page_alloc: Remotely drain per-cpu lists
On Fri, 2022-05-13 at 16:04 +0100, Mel Gorman wrote:
> On Thu, May 12, 2022 at 12:37:43PM -0700, Andrew Morton wrote:
> > On Thu, 12 May 2022 09:50:43 +0100 Mel Gorman <mgorman@...hsingularity.net> wrote:
> >
> > > From: Nicolas Saenz Julienne <nsaenzju@...hat.com>
> > >
> > > Some setups, notably NOHZ_FULL CPUs, are too busy to handle the per-cpu
> > > drain work queued by __drain_all_pages(). So introduce a new mechanism to
> > > remotely drain the per-cpu lists. It is made possible by remotely locking
> > > 'struct per_cpu_pages' new per-cpu spinlocks. A benefit of this new scheme
> > > is that drain operations are now migration safe.
> > >
> > > There was no observed performance degradation vs. the previous scheme.
> > > Both netperf and hackbench were run in parallel while triggering the
> > > __drain_all_pages(NULL, true) code path around 100 times per second.
> > > The new scheme performs a bit better (~5%), although the important point
> > > here is there are no performance regressions vs. the previous mechanism.
> > > Per-cpu lists draining happens only in slow paths.
> > >
> > > Minchan Kim tested this independently and reported;
> > >
> > > My workload does not use NOHZ CPUs, but it runs apps under heavy
> > > memory pressure, so they go into direct reclaim and get stuck on
> > > drain_all_pages until the work on the workqueue runs.
> > >
> > > unit: nanosecond
> > > max(dur) avg(dur) count(dur)
> > > 166713013 487511.77786438033 1283
> > >
> > > From the traces, the system encountered drain_all_pages 1283 times;
> > > the worst case was 166ms and the average was 487us.
> > >
> > > The other problem was alloc_contig_range in CMA. The PCP draining
> > > sometimes takes several hundred milliseconds even though there is
> > > no memory pressure and only a few pages need to be migrated out,
> > > because the CPUs are fully booked.
> > >
> > > Your patch perfectly removed those wasted time.
> >
> > I'm not getting a sense here of the overall effect upon userspace
> > performance. As Thomas said last year in
> > https://lkml.kernel.org/r/87v92sgt3n.ffs@tglx
> >
> > : The changelogs and the cover letter have a distinct void vs. that which
> > : means this is just another example of 'scratch my itch' changes w/o
> > : proper justification.
> >
> > Is there more to all of this than itchiness and if so, well, you know
> > the rest ;)
> >
>
> I think Minchan's example is clear-cut. The draining operation can take
> an arbitrary amount of time waiting for the workqueue to run on each
> CPU, it can cause severe delays under reclaim or CMA, and the patch
> fixes that. Maybe most users won't even notice, but I bet phone users
> do if a camera app takes too long to open.
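>
> To make the cost concrete: the pre-patch __drain_all_pages() queues a
> drain work item on every CPU and then waits for all of them to run.
> Roughly, as a simplified sketch of the idea rather than the exact
> mm/page_alloc.c code (assuming pcpu_drain is a per-cpu work_struct
> initialised with drain_local_pages_wq):
>
>         int cpu;
>
>         /* Queue the drain work on each CPU... */
>         for_each_online_cpu(cpu)
>                 queue_work_on(cpu, mm_percpu_wq,
>                               per_cpu_ptr(&pcpu_drain, cpu));
>
>         /*
>          * ...and wait for all of it to finish. A busy or isolated CPU
>          * can delay this step for an arbitrary amount of time.
>          */
>         for_each_online_cpu(cpu)
>                 flush_work(per_cpu_ptr(&pcpu_drain, cpu));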
>
> The first paragraph was written by Nicolas and I did not want to modify
> it heavily while still putting his Signed-off-by on it. Maybe it could
> have been clearer though, because "too busy" is vague when the actual
> intent is to avoid interfering with RT tasks. Does this sound better to
> you?
>
> Some setups, notably NOHZ_FULL CPUs, may be running realtime or
> latency-sensitive applications that cannot tolerate interference
> due to per-cpu drain work queued by __drain_all_pages(). Introduce
> a new mechanism to remotely drain the per-cpu lists. It is made
> possible by remotely locking 'struct per_cpu_pages' new per-cpu
> spinlocks. This has two advantages: the time to drain is more
> predictable, and other unrelated tasks are not interrupted.
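>
> As an illustrative sketch of the new scheme, not the patch itself
> (field and helper names are approximations):
>
>         struct per_cpu_pages *pcp;
>         int cpu;
>
>         for_each_online_cpu(cpu) {
>                 pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
>
>                 /*
>                  * Take the remote CPU's pcp lock directly. No work
>                  * item has to run on that CPU, so nothing executing
>                  * there is interrupted.
>                  */
>                 spin_lock(&pcp->lock);
>                 if (pcp->count)
>                         free_pcppages_bulk(zone, pcp->count, pcp, 0);
>                 spin_unlock(&pcp->lock);
>         }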
>
> You raise a very valid point with Thomas' mail and it is a concern that
> the local_lock is no longer strictly local. We still need preemption to
> be disabled between the percpu lookup and the lock acquisition but that
> can be done with get_cpu_var() to make the scope clear.

This isn't going to work in RT :(

get_cpu_var() disables preemption, which rules out taking an RT
spinlock: on PREEMPT_RT, spinlocks are sleeping locks and must not be
acquired with preemption disabled. There is more on this in
Documentation/locking/locktypes.rst.
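
As a sketch (not code from the series) of why:

        /* get_cpu_ptr() implies preempt_disable()... */
        pcp = get_cpu_ptr(zone->per_cpu_pageset);
        spin_lock(&pcp->lock);  /* ...but on PREEMPT_RT spinlocks are
                                 * sleeping locks, so taking one with
                                 * preemption disabled is invalid */
        /* ... drain the list ... */
        spin_unlock(&pcp->lock);
        put_cpu_ptr(zone->per_cpu_pageset);

What does work on RT is pinning the task with migrate_disable(), which
keeps the per-cpu pointer stable while leaving the section preemptible:

        migrate_disable();
        pcp = this_cpu_ptr(zone->per_cpu_pageset);
        spin_lock(&pcp->lock);  /* fine: may sleep on RT */
        /* ... drain the list ... */
        spin_unlock(&pcp->lock);
        migrate_enable();
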
Regards,
--
Nicolás Sáenz