Message-ID: <20100127161146.GC22447@nowhere>
Date: Wed, 27 Jan 2010 17:11:51 +0100
From: Frederic Weisbecker <fweisbec@...il.com>
To: "K.Prasad" <prasad@...ux.vnet.ibm.com>
Cc: LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
Alan Stern <stern@...land.harvard.edu>
Subject: Re: [RFC Patch 2/2][Bugfix][x86][hw-breakpoint] Fix return-code to
notifier chain in hw_breakpoint_handler
On Wed, Jan 27, 2010 at 03:58:26PM +0530, K.Prasad wrote:
> On Mon, Jan 25, 2010 at 11:11:04PM +0100, Frederic Weisbecker wrote:
> > Is that < TASK_SIZE check accurate? We want to support
> > userspace breakpoints in perf tools later, and those don't want
> > signals.
> >
>
> Well, signal generation for user-space breakpoints has happened
> unconditionally for 'historical' reasons (I guess Alan Stern's
> original patch had it that way).
>
> We could change that into 'ptrace-only' signal generation now.
Yeah, now that we can have multiple-purpose concurrent breakpoints,
this is necessary.
> > We do this cleanup at the beginning of the breakpoint handler:
> >
> > current->thread.debugreg6 &= ~DR_TRAP_BITS;
> >
> > And from ptrace.c:ptrace_triggered():
> >
> > thread->debugreg6 |= (DR_TRAP0 << i);
> >
> > This is called from perf_bp_event().
> > Instead of checking whether this is a userspace thread, we should
> > actually check whether this is a ptrace breakpoint by looking at
> > the following at the end of hw_breakpoint_handler():
> >
> > current->thread.debugreg6 & DR_TRAP_BITS
> >
> > Only ptrace breakpoints require signals.
> >
>
> Yes, this does look like a clean way to limit signals to those requests
> that are interested in them (I was considering roundabout ways, such as
> doing a lookup based on the callback functions).
>
> I will send the next version of the patch with the above changes.
Thanks.
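
For the record, here is a rough sketch of the tail of
hw_breakpoint_handler() that I have in mind -- the surrounding structure
and return-value handling are only an illustration of the idea, not the
actual patch:

	/*
	 * Sketch only: the surrounding code is an assumption, not the
	 * real handler.
	 */
	int rc = NOTIFY_STOP;

	/* ... per-breakpoint handling, perf_bp_event() calls, etc ... */

	/*
	 * ptrace_triggered() sets a DR_TRAPx bit in
	 * current->thread.debugreg6 for each ptrace breakpoint that
	 * fired.  Only in that case do we let do_debug() proceed and
	 * send a SIGTRAP; otherwise we swallow the exception here.
	 */
	if (current->thread.debugreg6 & DR_TRAP_BITS)
		rc = NOTIFY_DONE;

	return rc;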