Message-ID: <87zk9btqmu.fsf@rustcorp.com.au>
Date: Mon, 14 May 2012 12:28:49 +0930
From: Rusty Russell <rusty@...tcorp.com.au>
To: KOSAKI Motohiro <kosaki.motohiro@...il.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@...il.com>,
Ingo Molnar <mingo@...hat.com>, x86@...nel.org,
LKML <linux-kernel@...r.kernel.org>, anton@...ba.org,
Arnd Bergmann <arnd@...db.de>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Mike Travis <travis@....com>,
Thomas Gleixner <tglx@...utronix.de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Al Viro <viro@...iv.linux.org.uk>
Subject: Re: [PULL] cpumask: finally make them variable size w/ CPUMASK_OFFSTACK.
On Thu, 10 May 2012 02:42:43 -0400, KOSAKI Motohiro <kosaki.motohiro@...il.com> wrote:
> (5/10/12 12:54 AM), Rusty Russell wrote:
> > On Wed, 09 May 2012 22:43:39 -0400, KOSAKI Motohiro<kosaki.motohiro@...il.com> wrote:
> >>> Or is there a reason we shouldn't even try to allocate here?
> >>
> >> 1) Your code always uses GFP_KERNEL. That causes trouble when alloc_pages() is called w/ GFP_ATOMIC.
> >
> > Oh :(
> >
> > How about the below instead?
>
> This code is still slower than the original. When the reclaim path is entered, new allocations
> almost always fail, so your code almost always invokes the all-CPU batch invalidation, i.e. many IPIs.
I don't know this code. Does that happen often? Do we really need to
optimize the out-of-memory path?
But I should have used the on_each_cpu_cond() helper, which does this for
us (except that it falls back to individual IPIs); that would make this
code neater.
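For illustration only, here is a minimal userspace sketch of the pattern on_each_cpu_cond() implements: a predicate selects the CPUs to target, and when the cpumask allocation fails the caller degrades to one call per matching CPU instead of a single masked broadcast. All names here (`on_each_cpu_cond_sketch`, `send_to_cpu`, the `alloc_ok` flag standing in for the allocation result) are made up for this sketch; the real helper lives in kernel/smp.c.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define NCPUS 8

/* Hypothetical stand-ins for the kernel's cross-CPU call machinery. */
typedef bool (*cond_fn)(int cpu, void *info);
typedef void (*work_fn)(void *info);

static int ipi_count; /* counts simulated "IPIs", for illustration */

static void send_to_cpu(int cpu, work_fn fn, void *info)
{
    (void)cpu;
    ipi_count++;
    fn(info);
}

/*
 * Sketch of the on_each_cpu_cond() idea: build a mask of CPUs for which
 * cond() returns true and cover them in one masked pass; if allocating
 * the mask fails (simulated here by alloc_ok == false), fall back to
 * calling each matching CPU individually.
 */
static void on_each_cpu_cond_sketch(cond_fn cond, work_fn fn, void *info,
                                    bool alloc_ok)
{
    if (alloc_ok) {
        unsigned long mask = 0; /* stands in for an allocated cpumask */
        for (int cpu = 0; cpu < NCPUS; cpu++)
            if (cond(cpu, info))
                mask |= 1UL << cpu;
        for (int cpu = 0; cpu < NCPUS; cpu++)
            if (mask & (1UL << cpu))
                send_to_cpu(cpu, fn, info);
    } else {
        /* allocation failed: target each matching CPU one at a time */
        for (int cpu = 0; cpu < NCPUS; cpu++)
            if (cond(cpu, info))
                send_to_cpu(cpu, fn, info);
    }
}

/* Example predicate and work function for the test below. */
static bool cpu_is_even(int cpu, void *info) { (void)info; return cpu % 2 == 0; }
static void count_work(void *info) { (void)info; }
```

Either path reaches the same CPUs; the fallback just costs more individual calls, which is exactly the trade-off being discussed above.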
> >> 2) When CONFIG_CPUMASK_OFFSTACK=n and NR_CPUS is relatively large, a cpumask on the stack may
> >> cause a stack overflow, because alloc_pages() can be called from a
> >> very deep call stack.
> >
> > You can't have a large NR_CPUS without CONFIG_CPUMASK_OFFSTACK=y;
> > otherwise you'll get many other stack overflows, too.
>
> The original code put the cpumask in the BSS instead of on the stack, then. :-)
Yes, and this is what it looks like if we convert it directly, but I
still don't want to encourage people to do this :(
Cheers,
Rusty.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1179,7 +1179,7 @@ void drain_all_pages(void)
* Allocate in the BSS so we wont require allocation in
* direct reclaim path for CONFIG_CPUMASK_OFFSTACK=y
*/
- static cpumask_t cpus_with_pcps;
+ static DECLARE_BITMAP(cpus_with_pcps, NR_CPUS);
/*
* We don't care about racing with CPU hotplug event
@@ -1197,11 +1197,12 @@ void drain_all_pages(void)
}
}
if (has_pcps)
- cpumask_set_cpu(cpu, &cpus_with_pcps);
+ cpumask_set_cpu(cpu, to_cpumask(cpus_with_pcps));
else
- cpumask_clear_cpu(cpu, &cpus_with_pcps);
+ cpumask_clear_cpu(cpu, to_cpumask(cpus_with_pcps));
}
- on_each_cpu_mask(&cpus_with_pcps, drain_local_pages, NULL, 1);
+ on_each_cpu_mask(to_cpumask(cpus_with_pcps),
+ drain_local_pages, NULL, 1);
}
#ifdef CONFIG_HIBERNATION
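For readers unfamiliar with the conversion in the patch above: DECLARE_BITMAP(name, bits) expands to a plain `unsigned long` array sized to hold `bits` bits, and to_cpumask() reinterprets that array as a `struct cpumask *`, so the static mask occupies NR_CPUS bits of BSS regardless of CONFIG_CPUMASK_OFFSTACK. A userspace re-creation of the macro and the set/clear/test operations (the kernel's own versions live in include/linux/bitmap.h and include/linux/cpumask.h; the helper names below are mine):

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

#define NR_CPUS 256
#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)
/* Userspace re-creation of the kernel macro: a fixed-size long array. */
#define DECLARE_BITMAP(name, bits) unsigned long name[BITS_TO_LONGS(bits)]

/* File-scope and therefore in BSS, zero-initialized, like the patch. */
static DECLARE_BITMAP(cpus_with_pcps, NR_CPUS);

static void set_cpu(unsigned long *map, int cpu)
{
    map[cpu / BITS_PER_LONG] |= 1UL << (cpu % BITS_PER_LONG);
}

static void clear_cpu(unsigned long *map, int cpu)
{
    map[cpu / BITS_PER_LONG] &= ~(1UL << (cpu % BITS_PER_LONG));
}

static bool test_cpu(const unsigned long *map, int cpu)
{
    return map[cpu / BITS_PER_LONG] & (1UL << (cpu % BITS_PER_LONG));
}
```

The point of the conversion is only a type change: the storage is identical to `static cpumask_t`, but declaring it as a raw bitmap avoids handing people a pattern of putting `cpumask_t` values in static data.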
--