Message-ID: <7c86c4471001210143v5f9f54dbx6026949ced7d6ca@mail.gmail.com>
Date: Thu, 21 Jan 2010 10:43:09 +0100
From: stephane eranian <eranian@...glemail.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Corey Ashford <cjashfor@...ux.vnet.ibm.com>,
LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
Andi Kleen <andi@...stfloor.org>,
Paul Mackerras <paulus@...ba.org>,
Frederic Weisbecker <fweisbec@...il.com>,
Xiao Guangrong <xiaoguangrong@...fujitsu.com>,
Dan Terpstra <terpstra@...s.utk.edu>,
Philip Mucci <mucci@...s.utk.edu>,
Maynard Johnson <mpjohn@...ibm.com>,
Carl Love <cel@...ibm.com>, eranian@...gle.com
Subject: Re: [RFC] perf_events: support for uncore a.k.a. nest units
On Thu, Jan 21, 2010 at 9:59 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> On Thu, 2010-01-21 at 09:47 +0100, stephane eranian wrote:
>> I don't think that is correct. You can be using the uncore PMU on Nehalem
>> without any core PMU event. The only thing to realize is that uncore PMU
>> shares the same interrupt vector as core PMU. You need to configure which
>> core the uncore is going to interrupt on. This is done via a bitmask, so you
>> can interrupt more than one core at a time. Several strategies are possible.
>
> Ah, sharing the IRQ line is no problem. But from reading I got the
Given the PMU sharing model of perf_events, it seems you may have
multiple consumers of the uncore PMU at the same time. That means you
will need to direct the interrupt onto all the CPUs for which you currently
have a user. You may have multiple users per CPU, thus you need some
reference count to track all of that. The alternative is to systematically
broadcast the uncore PMU interrupt; each core then checks whether or
not it has uncore users.
Note that all of this is independent of the type of event, i.e., per-thread
or system-wide.
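
Just to make the reference-count idea concrete, here is a rough sketch
(plain user-space C, not kernel code; all names such as uncore_refcount,
uncore_add_user, uncore_del_user and uncore_update_irq_mask are made up
for illustration). It keeps a per-CPU count of uncore users and recomputes
the interrupt-target bitmask whenever a CPU gains its first user or loses
its last one; the broadcast alternative would simply set all bits in the
mask unconditionally.

#include <stdatomic.h>
#include <stdint.h>

#define NR_CPUS 64

static atomic_int uncore_refcount[NR_CPUS];

/* Recompute the bitmask of CPUs the uncore PMU should interrupt:
 * one bit per CPU that currently has at least one uncore user. */
static uint64_t uncore_update_irq_mask(void)
{
	uint64_t mask = 0;
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		if (atomic_load(&uncore_refcount[cpu]) > 0)
			mask |= 1ULL << cpu;
	}
	/* A real implementation would write this mask into the uncore
	 * interrupt-target register here. */
	return mask;
}

/* Called when a perf_events user on @cpu starts using the uncore PMU. */
static void uncore_add_user(int cpu)
{
	/* First user on this CPU: it must now receive uncore interrupts. */
	if (atomic_fetch_add(&uncore_refcount[cpu], 1) == 0)
		uncore_update_irq_mask();
}

/* Called when a perf_events user on @cpu releases the uncore PMU. */
static void uncore_del_user(int cpu)
{
	/* Last user gone: stop interrupting this CPU. */
	if (atomic_fetch_sub(&uncore_refcount[cpu], 1) == 1)
		uncore_update_irq_mask();
}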