Message-ID: <5328753B.2050107@intel.com>
Date: Tue, 18 Mar 2014 09:32:59 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: Linux-MM <linux-mm@...ck.org>, LKML <linux-kernel@...r.kernel.org>,
lsf@...ts.linux-foundation.org,
Wu Fengguang <fengguang.wu@...el.com>
Subject: [LSF/MM TOPIC] Testing Large-Memory Hardware
I have a quick topic that could perhaps be addressed along with the
testing topic that Dave Jones proposed. I won't be attending, but there
will be a couple of other Intel folks there. This should be a fairly
quick thing to address.
Topic:
Fengguang Wu, who runs the wonderful LKP and 0day build tests, was
recently asking whether I thought there was value in adding a
large-memory system, say one with 1TB of RAM. LKP is the system that
generates these kinds of automated bug reports and performance tests:
http://lkml.org/lkml/2014/3/9/201
My gut reaction was that we'd probably be better served by putting
resources into systems with higher core counts rather than lots of RAM.
I have encountered the occasional boot bug on my 1TB system, but it's
far from a frequent occurrence, and runtime problems are rarer still.
Would folks agree with that? What kinds of tests, benchmarks, stress
tests, etc. do folks run that are both valuable and can only be run on
a system with a large amount of actual RAM?
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/