Message-Id: <20080827.000506.177643294.davem@davemloft.net>
Date: Wed, 27 Aug 2008 00:05:06 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: nickpiggin@...oo.com.au
Cc: travis@....com, davej@...hat.com, torvalds@...ux-foundation.org,
Alan.Brunelle@...com, mingo@...e.hu, tglx@...utronix.de,
rjw@...k.pl, linux-kernel@...r.kernel.org,
kernel-testers@...r.kernel.org, akpm@...ux-foundation.org,
arjan@...ux.intel.com, rusty@...tcorp.com.au,
suresh.b.siddha@...el.com, tony.luck@...el.com, steiner@....com,
cl@...ux-foundation.org
Subject: Re: [Bug #11342] Linux 2.6.27-rc3: kernel BUG at mm/vmalloc.c -
bisected
From: Nick Piggin <nickpiggin@...oo.com.au>
Date: Wed, 27 Aug 2008 16:54:32 +1000
> 5% is a pretty nasty performance hit... what sort of benchmarks are we
> talking about here?
>
> I just made some pretty crazy changes to the VM to get "only" around 5%
> or so performance improvement in some workloads.
>
> What places are making heavy use of cpumasks and causing such a slowdown?
> Hopefully callers can mostly be improved so they don't need to use cpumasks
> for common cases.
It's almost certainly from the cross-call dispatch call chain.
As one example, just to do a TLB flush, mm->cpu_vm_mask probably
gets passed around as an aggregate two or three times on the way down
to the APIC programming code on x86. That's two or three 512-byte
copies on the stack :)
Look at the sparc64 SMP code for how I solved the problem there.
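Roughly, the difference looks like this. A minimal userspace sketch, not
the kernel's actual code; the function names are made up for illustration,
and NR_CPUS=4096 is assumed:

	/* Passing cpumask_t by value copies the whole 512-byte
	 * aggregate at every call level; passing a pointer pushes
	 * only 8 bytes down the chain. */
	#include <stdio.h>

	#define NR_CPUS 4096

	typedef struct {
		unsigned long bits[NR_CPUS / (8 * sizeof(unsigned long))];
	} cpumask_t;	/* 4096 bits == 512 bytes on 64-bit */

	static void apic_send_ipi_val(cpumask_t mask)		/* copy #2 */
	{
		(void)mask;
	}

	static void flush_tlb_others_val(cpumask_t mask)	/* copy #1 */
	{
		apic_send_ipi_val(mask);
	}

	static void apic_send_ipi_ref(const cpumask_t *mask)
	{
		(void)mask;
	}

	static void flush_tlb_others_ref(const cpumask_t *mask)
	{
		apic_send_ipi_ref(mask);	/* just a pointer */
	}

	int main(void)
	{
		cpumask_t mask = { { 0 } };

		printf("sizeof(cpumask_t) = %zu bytes\n", sizeof(mask));
		flush_tlb_others_val(mask);	/* 512-byte stack copies */
		flush_tlb_others_ref(&mask);	/* 8-byte pointer only */
		return 0;
	}

The pointer style is essentially what the sparc64 code does.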