Message-ID: <AANLkTinjOmepMJwGbJAFanoWe6YKyGz+WBdz7CZxoD5o@mail.gmail.com>
Date:	Sat, 27 Nov 2010 00:25:36 +0800
From:	Lin Ming <lin@...g.vg>
To:	Stephane Eranian <eranian@...gle.com>
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Lin Ming <ming.m.lin@...el.com>, Ingo Molnar <mingo@...e.hu>,
	Andi Kleen <andi@...stfloor.org>,
	lkml <linux-kernel@...r.kernel.org>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Arjan van de Ven <arjan@...radead.org>
Subject: Re: [RFC PATCH 2/3 v2] perf: Implement Nehalem uncore pmu

On Fri, Nov 26, 2010 at 7:41 PM, Stephane Eranian <eranian@...gle.com> wrote:
> On Fri, Nov 26, 2010 at 12:36 PM, Peter Zijlstra <a.p.zijlstra@...llo.nl> wrote:
>> On Fri, 2010-11-26 at 12:25 +0100, Stephane Eranian wrote:
>>> On Fri, Nov 26, 2010 at 12:24 PM, Peter Zijlstra <a.p.zijlstra@...llo.nl> wrote:
>>> > On Fri, 2010-11-26 at 09:18 +0100, Stephane Eranian wrote:
>>> >
>>> >> In the perf_event model, given that any one of the 4 cores can be used
>>> >> to program uncore events, you have no choice but to broadcast to all
>>> >> 4 cores. Each has to demultiplex and figure out which of its counters
>>> >> have overflowed.
>>> >
>>> > Not really, you can redirect all these events to the first online cpu of
>>> > the node.
>>> >
>>> > You can re-write event->cpu in pmu::event_init(), and register cpu
>>> > hotplug notifiers to migrate the state around.
>>> >
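[Roughly what that redirection could look like; nhm_uncore_event_init(),
uncore_owner_cpu[] and the notifier body are illustrative only, not taken
from the patch:

#include <linux/perf_event.h>
#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/topology.h>
#include <linux/notifier.h>

/* CPU that currently owns the uncore PMU of each node (illustrative). */
static int uncore_owner_cpu[MAX_NUMNODES];

static int nhm_uncore_event_init(struct perf_event *event)
{
	int node = cpu_to_node(event->cpu);

	/* Steer the event to the node's owner CPU instead of event->cpu. */
	event->cpu = uncore_owner_cpu[node];

	return 0;
}

static int uncore_cpu_notifier(struct notifier_block *nb,
			       unsigned long action, void *hcpu)
{
	int cpu = (unsigned long)hcpu;
	int node = cpu_to_node(cpu);

	if (action == CPU_DEAD && uncore_owner_cpu[node] == cpu) {
		/*
		 * The owner went offline: pick another online CPU on the
		 * same node and migrate the uncore state/events over to it
		 * (the "no online CPU left on the node" case is not handled
		 * in this sketch).
		 */
		uncore_owner_cpu[node] = cpumask_any_and(cpumask_of_node(node),
							 cpu_online_mask);
	}

	return NOTIFY_OK;
}
]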
>>> I am sure you could. But then the user thinks the event is controlled
>>> from CPUx when it's actually handled on CPUz. It can work, but that's
>>> confusing, especially interrupt-wise.
>>
>> Well, it's either that or keeping node-wide state like we do for AMD
>> and serializing everything from there.
>>
>> And I'm not sure which is more expensive, steering the interrupt to one
>> core only or broadcasting every interrupt; I'd favour the first
>> approach.
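[For comparison, the AMD-style node-wide state Peter refers to boils down
to something like the following; the struct name and counter count are
illustrative:

#include <linux/spinlock.h>
#include <linux/perf_event.h>
#include <linux/numa.h>

#define UNCORE_NUM_COUNTERS	9	/* illustrative: 8 general + 1 fixed */

/* One shared state object per node; every core on the node goes through it. */
struct uncore_node_state {
	raw_spinlock_t	lock;	/* serializes all uncore access on the node */
	int		n_events;
	struct perf_event *events[UNCORE_NUM_COUNTERS];
};

static struct uncore_node_state *uncore_nodes[MAX_NUMNODES];
]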
>
> I think the one-core-only approach will limit spurious interrupts.
> In perfmon, that's how I had it set up. The first CPU where the uncore
> is accessed owns the uncore PMU for the socket, so all interrupts are
> routed there. What you are proposing is the same. Now you can choose to
> hardcode which core handles this by default, or (better) use the first
> core that accesses the uncore.
>
>>
>> The whole thing is a node-wide resource, so the user needs to think in
>> nodes anyway, we already do a cpu->node mapping for identifying the
>> thing.
>>
> Agreed.
>

Hi all,

Thanks for all the comments.
I'm traveling from Nov 27 to Nov 30.

I'll address the comments when I'm back.

Thanks,
Lin Ming
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
