Message-ID: <87aamzww2m.fsf@basil.nowhere.org>
Date: Thu, 30 Sep 2010 09:05:05 +0200
From: Andi Kleen <andi@...stfloor.org>
To: Christoph Lameter <cl@...ux.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Mel Gorman <mel@....ul.ie>, Rob Mueller <robm@...tmail.fm>,
linux-kernel@...r.kernel.org, Bron Gondwana <brong@...tmail.fm>,
linux-mm <linux-mm@...ck.org>
Subject: Re: Default zone_reclaim_mode = 1 on NUMA kernel is bad for file/email/web servers

Christoph Lameter <cl@...ux.com> writes:
>
> 1. Fix the ACPI information to indicate lower memory access differences
> (was that info actually accurate?) so that zone reclaim defaults to off.

The reason the ACPI information is set this way is that the people who
tune the BIOS have some workload they care about that prefers zone
reclaim off, and they know they can force this "faster" setting by
faking the distances.

Basically they're working around a Linux performance quirk.
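
For reference, the kernel derives the default at boot from those same
firmware distances: if any SLIT entry exceeds RECLAIM_DISTANCE (20 in
kernels of this vintage, if I remember right), zone reclaim defaults
to on. A minimal user-space sketch that predicts the default by that
rule; the sysfs path is real, but the threshold of 20 is an assumption
for illustration:

/*
 * Sketch only: guess this kernel's zone_reclaim_mode default by
 * applying the distance rule to the SLIT row for node 0. The
 * threshold (20) is an assumption, not a quote of kernel source.
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/devices/system/node/node0/distance", "r");
	int d, reclaim = 0;

	if (!f)
		return 1;
	while (fscanf(f, "%d", &d) == 1)
		if (d > 20)	/* assumed RECLAIM_DISTANCE */
			reclaim = 1;
	fclose(f);
	printf("expected zone_reclaim_mode default: %d\n", reclaim);
	return 0;
}

A BIOS that reports all distances as, say, 10 therefore keeps the
default at 0, which is exactly the trick described above.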

Really I think some variant of Motohiro-san's patch is the right
solution: most problems with zone reclaim come from IO-intensive
workloads, and it never made sense to keep the unmapped disk cache
node-local on a system with a reasonably small NUMA factor.
The only problem is on extremely big NUMA systems where remote nodes
are so slow that even read() and write() suffer.
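
(For reference, the knob in question, /proc/sys/vm/zone_reclaim_mode,
is a bitmask; per Documentation/sysctl/vm.txt:

	1	zone reclaim on
	2	zone reclaim writes dirty pages out
	4	zone reclaim swaps pages

so writing 0 disables it entirely.)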

I have been playing with the idea of adding a new "nearby interleave"
NUMA mode for this, but haven't had time to implement it so far.
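
A rough user-space approximation of the idea is already possible with
libnuma: build an interleave mask out of the nodes whose distance to
the local node stays under a cutoff. A sketch, assuming libnuma 2.x;
the cutoff of 20 and using CPU 0's node as "local" are illustrative
choices, not part of any proposal:

/*
 * "Nearby interleave" approximated in user space with libnuma:
 * interleave future allocations over the nodes close to us.
 * Build with: gcc nearby.c -lnuma
 * The cutoff (20) and the "local" node choice are assumptions.
 */
#include <numa.h>
#include <stdio.h>

int main(void)
{
	struct bitmask *nearby;
	int node, local;

	if (numa_available() < 0)
		return 1;

	local = numa_node_of_cpu(0);	/* illustrative "local" node */
	nearby = numa_allocate_nodemask();

	for (node = 0; node <= numa_max_node(); node++)
		if (numa_distance(local, node) <= 20)
			numa_bitmask_setbit(nearby, node);

	/* All later allocations interleave over the nearby set. */
	numa_set_interleave_mask(nearby);

	printf("interleaving over nodes near node %d\n", local);
	numa_free_nodemask(nearby);
	return 0;
}

The mask is fixed at the time of the call, though: a task that
migrates keeps interleaving over its old neighbourhood, which is
exactly the static-ness complained about below.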

For applications, I don't think we can ever solve it completely; this
will probably always need some kind of tuning. Currently the NUMA
policy APIs are not too good for this because they are too static,
e.g. in some cases "nearby" placement without fixed node affinity
would also help here.

-Andi
--
ak@...ux.intel.com -- Speaking for myself only.