Message-ID: <20100120093555.GA24355@basil.fritz.box>
Date: Wed, 20 Jan 2010 10:35:55 +0100
From: Andi Kleen <andi@...stfloor.org>
To: Corey Ashford <cjashfor@...ux.vnet.ibm.com>
Cc: Andi Kleen <andi@...stfloor.org>,
LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
Paul Mackerras <paulus@...ba.org>,
Stephane Eranian <eranian@...glemail.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Frederic Weisbecker <fweisbec@...il.com>,
Xiao Guangrong <xiaoguangrong@...fujitsu.com>,
Dan Terpstra <terpstra@...s.utk.edu>,
Philip Mucci <mucci@...s.utk.edu>,
Maynard Johnson <mpjohn@...ibm.com>, Carl Love <cel@...ibm.com>
Subject: Re: [RFC] perf_events: support for uncore a.k.a. nest units

> Yes, I agree. Also it's easy to construct a system design that doesn't
> have a hierarchical topology. A simple example would be a cluster of 32
> nodes, each of which is connected to its 31 neighbors. Perhaps for the

I doubt it's needed or useful to describe all the details of an interconnect.
If detailed distance information is needed, a simple table like the
SLIT table exported by ACPI would seem easier to handle.
But at least some degree of locality (e.g. "local memory controller")
would make sense.
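
Just as an illustrative sketch from my side (not part of any proposal here):
the kernel already exports the SLIT distances through sysfs, so reading such
a table from userspace is only a few lines, assuming the existing
/sys/devices/system/node/nodeN/distance files:

/*
 * Sketch only: dump the SLIT-style node distance table the kernel
 * exports in /sys/devices/system/node/nodeN/distance.  Each file
 * holds one space-separated row of relative distances to all nodes.
 */
#include <stdio.h>

int main(void)
{
	char path[64];
	int node;

	for (node = 0; ; node++) {
		FILE *f;
		int dist;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/node/node%d/distance", node);
		f = fopen(path, "r");
		if (!f)
			break;	/* no more nodes */

		printf("node%d:", node);
		while (fscanf(f, "%d", &dist) == 1)
			printf(" %d", dist);
		printf("\n");
		fclose(f);
	}
	return 0;
}

On a NUMA box this prints one row per node (the local distance is normalized
to 10 by the ACPI spec); a per-PMU distance table could look much the same.
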
> purposes of just enumerating PMUs, a tree might be sufficient, but it's not
> clear to me that it is mathematically sufficient for all topologies, not to
> mention if it's intuitive enough to use. For example,
> highly-interconnected components might require that PMU leaf nodes be
> duplicated in multiple branches, i.e. PMU paths might not be unique in some
> topologies.

We already have cyclical graphs in sysfs using symlinks. I'm not
sure they are all that easy to parse/handle, but at least they
can be described.
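
For what it's worth, a userspace walker only needs to remember which
(device, inode) pairs it has already visited to cope with such cycles.
A rough sketch (my illustration only, not an existing tool):

/*
 * Sketch: walk a sysfs subtree that may contain symlink cycles,
 * following symlinks but remembering visited (dev, inode) pairs so
 * the traversal terminates.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <dirent.h>
#include <sys/types.h>
#include <sys/stat.h>

struct seen {
	dev_t dev;
	ino_t ino;
	struct seen *next;
};

static struct seen *seen_list;

static int already_seen(const struct stat *st)
{
	struct seen *s;

	for (s = seen_list; s; s = s->next)
		if (s->dev == st->st_dev && s->ino == st->st_ino)
			return 1;

	s = malloc(sizeof(*s));
	if (!s)
		return 1;	/* on OOM, be conservative: stop descending */
	s->dev = st->st_dev;
	s->ino = st->st_ino;
	s->next = seen_list;
	seen_list = s;
	return 0;
}

static void walk(const char *path)
{
	struct stat st;
	struct dirent *de;
	DIR *dir;

	if (stat(path, &st) < 0 || !S_ISDIR(st.st_mode))
		return;
	if (already_seen(&st))
		return;		/* cycle via symlink: stop here */

	printf("%s\n", path);

	dir = opendir(path);
	if (!dir)
		return;
	while ((de = readdir(dir)) != NULL) {
		char child[4096];

		if (!strcmp(de->d_name, ".") || !strcmp(de->d_name, ".."))
			continue;
		snprintf(child, sizeof(child), "%s/%s", path, de->d_name);
		walk(child);
	}
	closedir(dir);
}

int main(int argc, char **argv)
{
	walk(argc > 1 ? argv[1] : "/sys/devices/system");
	return 0;
}
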
-Andi
--
ak@...ux.intel.com -- Speaking for myself only.