Date:	Mon, 13 Jan 2014 14:23:30 +0100
From:	Alexander Gordeev <agordeev@...hat.com>
To:	Andi Kleen <ak@...ux.intel.com>
Cc:	linux-kernel@...r.kernel.org,
	Arnaldo Carvalho de Melo <acme@...stprotocols.net>,
	Jiri Olsa <jolsa@...hat.com>, Ingo Molnar <mingo@...nel.org>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Andi Kleen <ak@...ux.jf.intel.com>
Subject: Re: [PATCH RFC v2 0/4] perf: IRQ-bound performance events

On Sun, Jan 05, 2014 at 09:59:49AM -0800, Andi Kleen wrote:
> > This is version 2 of the RFC "perf: IRQ-bound performance events", i.e. an
> > introduction of IRQ-bound performance events - ones that only count in the
> > context of a hardware interrupt handler. Ingo suggested extending this
> > functionality to softirq and threaded handlers as well:
> 
> Did you measure the overhead in workloads that do a lot of interrupts?
> I assume two WRMSR could be a significant part of the cost of small interrupts.

No, that would be the next step. I wanted to first make sure that the
way I am intruding into the current perf design is correct.

> For counting at least it would be likely a lot cheaper to just RDPMC
> and subtract manually.

Sigh, that looks like quite a rework of the Intel PMU code.
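
Something along these lines is what I read into it (an untested sketch
only, plain C with inline asm; the counter index and the
irq_enter_hook()/irq_exit_hook() names below are placeholders, not
actual perf callbacks):

#include <stdint.h>

/* RDPMC reads the performance counter selected by ECX and returns it
 * in EDX:EAX; no MSR write is involved. */
static inline uint64_t read_pmc(uint32_t counter)
{
	uint32_t lo, hi;

	__asm__ volatile("rdpmc" : "=a" (lo), "=d" (hi) : "c" (counter));
	return ((uint64_t)hi << 32) | lo;
}

/* Snapshot taken when the hard IRQ handler is entered (per CPU in
 * real life). */
static uint64_t pmc_at_irq_entry;

static void irq_enter_hook(uint32_t counter)
{
	pmc_at_irq_entry = read_pmc(counter);
}

static uint64_t irq_exit_hook(uint32_t counter)
{
	/* The delta is what the counter accumulated inside the handler,
	 * so the counter never has to be stopped/restarted with WRMSR. */
	return read_pmc(counter) - pmc_at_irq_entry;
}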

> The cache miss example below is certainly misleading, as cache misses
> by interrupts are often a "debt", that is they are forced on whoever
> is interrupted. I don't think that is a good use of this.

Maybe useless rather than misleading? :) Actually, cache and power usage
are exactly the data I thought would be useful if one wants to check how
a workload depends on the interrupt affinity mask. There was some
discussion on this topic back in 2012:

On Mon, May 21, 2012 at 08:36:09AM -0700, Linus Torvalds wrote:
"So it may well make perfect sense to allow a mask of CPU's for
interrupt delivery, but just make sure that the mask all points to
CPU's on the same socket. That would give the hardware some leeway in
choosing the actual core - it's very possible that hardware could
avoid cores that are running with irq's disabled (possibly improving
latency) or even more likely - avoid cores that are in deeper
powersaving modes."

So this RFC is kind of a follow-up, to come up with the necessary tooling.
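
For illustration, here is a rough user-space sketch of what Linus
describes, i.e. pinning an IRQ's affinity mask to the CPUs of a single
socket via the usual sysfs/procfs files (minimal error handling, 64
CPUs at most; not part of this RFC):

#include <stdio.h>
#include <stdlib.h>

/* Read the package (socket) id of a CPU from sysfs; -1 if unknown. */
static int package_id(int cpu)
{
	char path[128];
	int id = -1;
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/topology/physical_package_id",
		 cpu);
	f = fopen(path, "r");
	if (f) {
		fscanf(f, "%d", &id);
		fclose(f);
	}
	return id;
}

int main(int argc, char **argv)
{
	char path[64];
	unsigned long long mask = 0;
	FILE *f;
	int irq, target, cpu;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <irq> <cpu-on-target-socket>\n",
			argv[0]);
		return 1;
	}
	irq = atoi(argv[1]);
	target = package_id(atoi(argv[2]));
	if (target < 0)
		return 1;

	/* Build a mask of all CPUs that share the target socket. */
	for (cpu = 0; cpu < 64; cpu++)
		if (package_id(cpu) == target)
			mask |= 1ULL << cpu;

	/* Hand the mask to the kernel; the hardware can then pick a CPU
	 * from it when delivering the interrupt. */
	snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		return 1;
	}
	fprintf(f, "%llx\n", mask);
	fclose(f);
	return 0;
}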

> I guess it can be useful for cycles.
> 
> -Andi

Thanks, Andi!

-- 
Regards,
Alexander Gordeev
agordeev@...hat.com