Message-ID: <20090914213352.GK6045@nowhere>
Date:	Mon, 14 Sep 2009 23:33:54 +0200
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	"K.Prasad" <prasad@...ux.vnet.ibm.com>
Cc:	Ingo Molnar <mingo@...e.hu>, LKML <linux-kernel@...r.kernel.org>,
	Alan Stern <stern@...land.harvard.edu>,
	Peter Zijlstra <peterz@...radead.org>,
	Arnaldo Carvalho de Melo <acme@...hat.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Jan Kiszka <jan.kiszka@...mens.com>,
	Jiri Slaby <jirislaby@...il.com>,
	Li Zefan <lizf@...fujitsu.com>, Avi Kivity <avi@...hat.com>,
	Paul Mackerras <paulus@...ba.org>,
	Mike Galbraith <efault@....de>,
	Masami Hiramatsu <mhiramat@...hat.com>
Subject: Re: [PATCH 3/5] hw-breakpoints: Rewrite the hw-breakpoints layer
	on top of perf counters

On Mon, Sep 14, 2009 at 10:47:41PM +0530, K.Prasad wrote:
> > Yeah it would be very convenient to have that. Is it possible considering
> > the current internal design of perf?
> >
> 
> It is already done by hw-breakpoints (which can also support cpumask) and
> finding ways to re-use existing breakpoint code post perf integration should
> do the trick.


Yeah but a single struct hw_breakpoint is attached to only
one counter, which by nature can't span multiple CPUs.

Also, since we have a 1:1 relationship between the counter and the
struct hw_breakpoint, it would be nice to merge its content into
the counter and drop struct hw_breakpoint; that would shrink some
code that currently has to handle both.

 
> There's an unconditional __perf_counter_init() and
> perf_install_in_context() in register_user_hw_breakpoint_cpu() in
> your patch which can rather be done after checks that ensure the
> availability of a debug register on every CPU requested, update some
> book-keeping variables and run on those CPUs (through IPIs?). By virtue
> of being 'pinned' onto the CPU, I presume, they would remain enabled until
> being removed through an explicit 'unregister' request - functionally
> the same as the present register_kernel_hw_breakpoint() in -tip.
>


But that would rather overlap with the role of perf counters, which
are supposed to handle attaching a context to a CPU.


> The other suggestion to enable/disable all breakpoints atomically (to
> implement breakpoints on all CPUs), if possible, would be elegant too.



It's not necessarily needed to disable all running breakpoints because
a wide breakpoint is coming. The patchset implements all the constraint
checks against every possible conflict (checking that there are never
more than HBP_NUM breakpoints running).

Also the idea of preempting a breakpoint looks rather invasive for
the other users.


> In any case, an iterative registration for all CPUs from the end-user
> doesn't provide the abstraction that exists, and is undesirable.


True, this is something that existed in your work and that I couldn't
support after the rebase against perfcounter.

That said, only ftrace and kgdb would use it. But still, this is
something we may want to fix so that ftrace and kgdb don't have to
handle these iterations themselves.



> For instance, it cannot handle cases where a CPU becomes online post
> breakpoint registration (and such misses are expensive during
> debugging).


Oh, I don't think that's much of a problem.
You can register a breakpoint for every possible CPU,
so that once a CPU comes online, the breakpoint is enabled there.
Whether perf counters support CPU hotplug... I'm not sure.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
