Message-Id: <20120127161236.ff1e7e7e.akpm@linux-foundation.org>
Date: Fri, 27 Jan 2012 16:12:36 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Gilad Ben-Yossef <gilad@...yossef.com>
Cc: linux-kernel@...r.kernel.org, Mel Gorman <mel@....ul.ie>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Christoph Lameter <cl@...ux.com>,
Chris Metcalf <cmetcalf@...era.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Frederic Weisbecker <fweisbec@...il.com>,
Russell King <linux@....linux.org.uk>, linux-mm@...ck.org,
Pekka Enberg <penberg@...nel.org>,
Matt Mackall <mpm@...enic.com>,
Sasha Levin <levinsasha928@...il.com>,
Rik van Riel <riel@...hat.com>,
Andi Kleen <andi@...stfloor.org>,
Alexander Viro <viro@...iv.linux.org.uk>,
linux-fsdevel@...r.kernel.org, Avi Kivity <avi@...hat.com>,
Michal Nazarewicz <mina86@...a86.com>,
Milton Miller <miltonm@....com>
Subject: Re: [v7 7/8] mm: only IPI CPUs to drain local pages if they exist
On Thu, 26 Jan 2012 12:02:00 +0200
Gilad Ben-Yossef <gilad@...yossef.com> wrote:
> Calculate a cpumask of CPUs with per-cpu pages in any zone
> and only send an IPI requesting CPUs to drain these pages
> to the buddy allocator if they actually have pages when
> asked to flush.
>
> This patch saves 85%+ of the IPIs asking to drain per-cpu
> pages in case of severe memory pressure that leads to OOM,
> since in these cases multiple, possibly concurrent,
> allocation requests end up in the direct reclaim code
> path. The per-cpu pages get reclaimed on the first
> allocation failure, so for most of the subsequent
> allocation attempts, until the memory pressure is off
> (possibly via the OOM killer), there are no per-cpu pages
> on most CPUs (and there can easily be hundreds of them).
>
> This also has the side effect of shortening the average
> latency of direct reclaim by one or more orders of
> magnitude, since waiting for all the CPUs to ACK the IPI
> takes a long time.
>
> Tested by running "hackbench 400" on an 8 CPU x86 VM and
> observing the difference between the number of direct
> reclaim attempts that end up in drain_all_pages() and
> those where more than 1/2 of the online CPUs had any
> per-cpu pages in them, using the vmstat counters
> introduced in the next patch in the series and
> /proc/interrupts.
>
> In the test scenario, this was seen to save around 3600
> global IPIs after triggering an OOM on a concurrent
> workload:
>
>
> ...
>
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1165,7 +1165,36 @@ void drain_local_pages(void *arg)
> */
> void drain_all_pages(void)
> {
> - on_each_cpu(drain_local_pages, NULL, 1);
> + int cpu;
> + struct per_cpu_pageset *pcp;
> + struct zone *zone;
> +
> + /* Allocate in the BSS so we won't require allocation in
> + * direct reclaim path for CONFIG_CPUMASK_OFFSTACK=y
> + */
> + static cpumask_t cpus_with_pcps;
> +
> + /*
> + * We don't care about racing with CPU hotplug event
> + * as offline notification will cause the notified
> + * cpu to drain that CPU pcps and on_each_cpu_mask
> + * disables preemption as part of its processing
> + */
hmmm.
> + for_each_online_cpu(cpu) {
> + bool has_pcps = false;
> + for_each_populated_zone(zone) {
> + pcp = per_cpu_ptr(zone->pageset, cpu);
> + if (pcp->pcp.count) {
> + has_pcps = true;
> + break;
> + }
> + }
> + if (has_pcps)
> + cpumask_set_cpu(cpu, &cpus_with_pcps);
> + else
> + cpumask_clear_cpu(cpu, &cpus_with_pcps);
> + }
> + on_each_cpu_mask(&cpus_with_pcps, drain_local_pages, NULL, 1);
> }
Can we end up sending an IPI to a now-unplugged CPU? That won't work
very well if that CPU is now sitting on its sysadmin's desk.
There's also the case of CPU online. We could end up failing to IPI a
CPU which now has some percpu pages. That's not at all serious - 90%
is good enough in page reclaim. But this thinking merits a mention in
the comment. Or we simply make this code hotplug-safe.
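Untested, and just to illustrate one option: a minimal sketch of the
hotplug-safe variant, wrapping the scan and the IPI in
get_online_cpus()/put_online_cpus() so a CPU can neither go away nor
come online with percpu pages between building the mask and sending
the IPI.  This assumes drain_all_pages() is only called from contexts
which may sleep, since get_online_cpus() can block.

void drain_all_pages(void)
{
	int cpu;
	struct per_cpu_pageset *pcp;
	struct zone *zone;

	/* Static so the direct reclaim path needs no allocation
	 * for CONFIG_CPUMASK_OFFSTACK=y
	 */
	static cpumask_t cpus_with_pcps;

	/* Hold off CPU hotplug between scanning and sending the IPI */
	get_online_cpus();
	for_each_online_cpu(cpu) {
		bool has_pcps = false;

		for_each_populated_zone(zone) {
			pcp = per_cpu_ptr(zone->pageset, cpu);
			if (pcp->pcp.count) {
				has_pcps = true;
				break;
			}
		}
		if (has_pcps)
			cpumask_set_cpu(cpu, &cpus_with_pcps);
		else
			cpumask_clear_cpu(cpu, &cpus_with_pcps);
	}
	on_each_cpu_mask(&cpus_with_pcps, drain_local_pages, NULL, 1);
	put_online_cpus();
}

Whether taking the hotplug lock in the reclaim path is acceptable is a
separate question; if not, documenting the (benign) races in the
comment is the other way out.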