lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-Id: <1248445569.6987.74.camel@twins>
Date:	Fri, 24 Jul 2009 16:26:09 +0200
From:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
To:	Frédéric Weisbecker <fweisbec@...il.com>
Cc:	Ingo Molnar <mingo@...e.hu>, LKML <linux-kernel@...r.kernel.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Mike Galbraith <efault@....de>,
	Paul Mackerras <paulus@...ba.org>,
	Arnaldo Carvalho de Melo <acme@...hat.com>,
	Lai Jiangshan <laijs@...fujitsu.com>,
	Anton Blanchard <anton@...ba.org>,
	Li Zefan <lizf@...fujitsu.com>,
	Zhaolei <zhaolei@...fujitsu.com>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>,
	"K . Prasad" <prasad@...ux.vnet.ibm.com>,
	Alan Stern <stern@...land.harvard.edu>
Subject: Re: [RFC][PATCH 5/5] perfcounter: Add support for kernel hardware 
 breakpoints

On Fri, 2009-07-24 at 16:02 +0200, Frédéric Weisbecker wrote:
> 2009/7/23 Peter Zijlstra <a.p.zijlstra@...llo.nl>:
> > On Mon, 2009-07-20 at 13:08 -0400, Frederic Weisbecker wrote:
> >> This adds the support for kernel hardware breakpoints in perfcounter.
> >> It is added as a new type of software counter and can be defined by
> >> using the counter number 5 and by passsing the address of the
> >> breakpoint to set through the config attribute.
> >
> > Is there a limit to these hardware breakpoints? If so, the software
> > counter model is not sufficient, since we assume we can always schedule
> > all software counters. However if you were to add more counters than you
> > have hardware breakpoints you're hosed.
> >
> >
> 
> Hmm, indeed. But this patch handles this case:
> 
> +static const struct pmu *bp_perf_counter_init(struct perf_counter *counter)
> +{
> +       if (hw_breakpoint_perf_init((unsigned long)counter->attr.config))
> +               return NULL;
> +
> 
> IIRC, hw_breakpoint_perf_init() calls register_kernel_breakpoint() which in turn
> returns -ENOSPC if we haven't any breakpoint room left.
> 
> It seems we can only set 4 breakpoints simultaneously in x86, or
> something close to that.

Ah, that's not the correct way of doing that. Suppose that you would
register 4 breakpoint counter to one task, that would leave you unable
to register a breakpoint counter on another task. Even though these
breakpoints would never be scheduled simultaneously.

Also, regular perf counters would multiplex counters when over-committed
on a hardware resource, allowing you to create more such breakpoints
than you have actual hardware slots.

The way to do this is to create a breakpoint pmu which would simply fail
the pmu->enable() method if there are insufficient hardware resources
available.

Also, your init routine, the above hw_breakpoint_perf_init(), will have
to verify that when the counter is part of a group, this and all other
hw breakpoint counters in that group can, now, but also in the future,
be scheduled simultaneously.

This means that there should be some arbitration towards other in-kernel
hw breakpoint users, because if you allow all 4 hw breakpoints in a
group and then let another hw breakpoint users have one, you can never
schedule that group again.

[ which raises a fun point, Paulus do we handle groups having multiple
  'hardware' pmu's in? ]

Now, for the actual counter implementation you can probably re-use the
swcounter code, but you also need a pmu implementation.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ