Message-ID: <CAOtvUMcshnvQs4q4ySbtySWv_qHeEnHiD4USBSiOLGFNHSwzUw@mail.gmail.com>
Date: Mon, 30 Jan 2012 17:14:37 +0200
From: Gilad Ben-Yossef <gilad@...yossef.com>
To: Mel Gorman <mel@....ul.ie>
Cc: linux-kernel@...r.kernel.org,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Christoph Lameter <cl@...ux.com>,
Chris Metcalf <cmetcalf@...era.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Frederic Weisbecker <fweisbec@...il.com>,
Russell King <linux@....linux.org.uk>, linux-mm@...ck.org,
Pekka Enberg <penberg@...nel.org>,
Matt Mackall <mpm@...enic.com>,
Sasha Levin <levinsasha928@...il.com>,
Rik van Riel <riel@...hat.com>,
Andi Kleen <andi@...stfloor.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Alexander Viro <viro@...iv.linux.org.uk>,
linux-fsdevel@...r.kernel.org, Avi Kivity <avi@...hat.com>,
Michal Nazarewicz <mina86@...a86.com>,
Milton Miller <miltonm@....com>
Subject: Re: [v7 7/8] mm: only IPI CPUs to drain local pages if they exist
On Mon, Jan 30, 2012 at 4:59 PM, Mel Gorman <mel@....ul.ie> wrote:
> On Thu, Jan 26, 2012 at 12:02:00PM +0200, Gilad Ben-Yossef wrote:
>> Calculate a cpumask of CPUs with per-cpu pages in any zone
>> and only send an IPI requesting CPUs to drain these pages
>> to the buddy allocator if they actually have pages when
>> asked to flush.
>>
...
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index d2186ec..4135983 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -1165,7 +1165,36 @@ void drain_local_pages(void *arg)
>> */
>> void drain_all_pages(void)
>> {
>> - on_each_cpu(drain_local_pages, NULL, 1);
>> + int cpu;
>> + struct per_cpu_pageset *pcp;
>> + struct zone *zone;
>> +
>> + /* Allocate in the BSS so we won't require allocation in
>> + * the direct reclaim path for CONFIG_CPUMASK_OFFSTACK=y
>> + */
>> + static cpumask_t cpus_with_pcps;
>> +
>> + /*
>> + * We don't care about racing with a CPU hotplug event,
>> + * as the offline notification will cause the notified
>> + * cpu to drain its own pcps, and on_each_cpu_mask
>> + * disables preemption as part of its processing
>> + */
>> + for_each_online_cpu(cpu) {
>> + bool has_pcps = false;
>> + for_each_populated_zone(zone) {
>> + pcp = per_cpu_ptr(zone->pageset, cpu);
>> + if (pcp->pcp.count) {
>> + has_pcps = true;
>> + break;
>> + }
>> + }
>> + if (has_pcps)
>> + cpumask_set_cpu(cpu, &cpus_with_pcps);
>> + else
>> + cpumask_clear_cpu(cpu, &cpus_with_pcps);
>> + }
>
> Lets take two CPUs running this code at the same time. CPU 1 has per-cpu
> pages in all zones. CPU 2 has no per-cpu pages in any zone. If both run
> at the same time, CPU 2 can be clearing the mask for CPU 1 before it has
> had a chance to send the IPI. This means we'll miss sending IPIs to CPUs
> that we intended to.
I'm confused. You seem to be assuming that each CPU looks only at its own
pcps (per zone), but the loop examines the pcps of every online CPU. Assuming
no change in the state of the pcps while both CPUs run this code at the same
time, both of them will set the bit for each CPU to the same value, unless
one of them raced and managed to send the IPI that clears the pcps of the
other, at which point you might see one of them send a spurious IPI to drain
pcps that have already been drained - but that isn't bad.

At least, that is what I meant the code to do and what I believe it does.
What have I missed?
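
To make the interleaving concrete, here is how I picture two CPUs running
the loop concurrently (the CPU numbers and pcp state are just for
illustration):

	/*
	 * Shared pcp state: CPU 1 has pages in some zone, CPU 2 has none.
	 *
	 *   CPU 1                          CPU 2
	 *   -----                          -----
	 *   cpu == 1: pcp.count > 0        cpu == 1: pcp.count > 0
	 *     -> set bit 1                   -> set bit 1 (same value)
	 *   cpu == 2: pcp.count == 0       cpu == 2: pcp.count == 0
	 *     -> clear bit 2                  -> clear bit 2 (same value)
	 *   on_each_cpu_mask(...)          on_each_cpu_mask(...)
	 */

Every bit is derived from the same shared pcp counts, so as long as nothing
drains pcps in between, both CPUs compute identical masks. If one CPU's IPI
empties the other's pcps first, the worst case is one extra IPI into
drain_local_pages() on already empty pcps, which finds nothing to free.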
> As I was willing to send no IPI at all;
>
> Acked-by: Mel Gorman <mel@....ul.ie>
Thank you for the review and the ACK :-)
>
> But if this gets another revision, add a comment saying that two CPUs
> can interfere with each other running at the same time but we don't
> care.
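Will do. If the series needs another spin, I'll fold in something along
these lines (wording tentative):

	/*
	 * Two CPUs running drain_all_pages() at the same time can race
	 * setting and clearing each other's bits in cpus_with_pcps; the
	 * worst outcome is a spurious IPI that drains already drained
	 * (empty) pcps, so we don't care.
	 */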
>
>> + on_each_cpu_mask(&cpus_with_pcps, drain_local_pages, NULL, 1);
>> }
>>
Gilad
--
Gilad Ben-Yossef
Chief Coffee Drinker
gilad@...yossef.com
Israel Cell: +972-52-8260388
US Cell: +1-973-8260388
http://benyossef.com
"Unfortunately, cache misses are an equal opportunity pain provider."
-- Mike Galbraith, LKML