Message-ID: <20100128200951.GD18683@nowhere>
Date: Thu, 28 Jan 2010 21:09:56 +0100
From: Frederic Weisbecker <fweisbec@...il.com>
To: Jason Wessel <jason.wessel@...driver.com>
Cc: linux-kernel@...r.kernel.org, kgdb-bugreport@...ts.sourceforge.net,
mingo@...e.hu, "K.Prasad" <prasad@...ux.vnet.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
Alan Stern <stern@...land.harvard.edu>
Subject: Re: [PATCH 3/3] perf,hw_breakpoint,kgdb: No mutex taken for kernel debugger

On Thu, Jan 28, 2010 at 11:49:14AM -0600, Jason Wessel wrote:
> Frederic Weisbecker wrote:
> >> +static int hw_break_release_slot(int breakno)
> >> +{
> >> +        struct perf_event **pevent;
> >> +        int ret;
> >> +        int cpu;
> >> +
> >> +        for_each_online_cpu(cpu) {
> >> +                pevent = per_cpu_ptr(breakinfo[breakno].pev, cpu);
> >> +                ret = dbg_release_bp_slot(*pevent);
> >>
> >
> > So, you are missing some return errors there. Actually, a slot
> > release shouldn't return an error.
> >
>
> This is a trick, so to speak. The slot releases will either all return
> 0 or all return -1, depending on whether the mutex is available, so the
> error is not really missed.
Oh right, I forgot everything was frozen here :)
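
To spell the trick out: with every other CPU stopped in the debugger,
the owner of the slot mutex can never release it, so the debugger
variant of the release simply refuses to run instead of blocking.
Roughly like the sketch below (a reconstruction of the idea, not a
quote of patch 2/3; nr_bp_mutex and __release_bp_slot() are assumed
names for the existing slot accounting internals):

#include <linux/mutex.h>
#include <linux/perf_event.h>

/* Assumed names: the slot accounting mutex and its mutex-free helper. */
static DEFINE_MUTEX(nr_bp_mutex);
extern void __release_bp_slot(struct perf_event *bp);

int dbg_release_bp_slot(struct perf_event *bp)
{
        /*
         * Every other CPU is stopped by the debugger, so a contended
         * mutex can never be released.  Fail instead of blocking, so
         * every per-CPU release fails with the same -1.
         */
        if (mutex_is_locked(&nr_bp_mutex))
                return -1;

        /* The mutex is free and nobody can take it while CPUs are stopped. */
        __release_bp_slot(bp);

        return 0;
}
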
> > Ok, best effort fits well for reserve, but it is certainly not
> > suitable for release. We can't leave a slot spuriously marked as
> > occupied like this. If the release fails, we should do it
> > asynchronously, using the usual release_bp_slot(), maybe through a
> > workqueue.
> >
>
> If it fails, the debugger tries to remove it again later. It seems to
> me like a don't-care corner case. You get a printk if it ever does
> happen (which it really shouldn't).
Yeah, it truly is a corner case, especially since the debugger can
handle the retry later.
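
For what it's worth, the asynchronous fallback I had in mind would be
something like the sketch below: when the non-blocking release fails,
hand the event to a work item that calls the usual release_bp_slot()
from process context. This is only an illustration; the work item, the
array size and the helper names are made up, and queueing work from the
debugger trap handler would need more care than shown here:

#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/perf_event.h>
#include <linux/hw_breakpoint.h>

/* Hypothetical bookkeeping: one deferred release per debugger breakpoint. */
struct dbg_deferred_release {
        struct work_struct      work;
        struct perf_event       *bp;
};

static struct dbg_deferred_release deferred_release[4]; /* HBP_NUM on x86 */

static void dbg_deferred_release_fn(struct work_struct *work)
{
        struct dbg_deferred_release *d =
                container_of(work, struct dbg_deferred_release, work);

        /* Process context: the normal, mutex-taking release path is fine. */
        release_bp_slot(d->bp);
}

/* Hypothetical helper: queue the release instead of leaking the slot. */
static void dbg_defer_release(int breakno, struct perf_event *bp)
{
        struct dbg_deferred_release *d = &deferred_release[breakno];

        d->bp = bp;
        INIT_WORK(&d->work, dbg_deferred_release_fn);
        schedule_work(&d->work);
}
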
Maybe just add a comment so that future reviewers don't get stuck on
this part.
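
Something along these lines, for example, reusing the function quoted
above (the comment wording is only a suggestion):

static int hw_break_release_slot(int breakno)
{
        struct perf_event **pevent;
        int ret;
        int cpu;

        for_each_online_cpu(cpu) {
                pevent = per_cpu_ptr(breakinfo[breakno].pev, cpu);
                ret = dbg_release_bp_slot(*pevent);
                if (ret)
                        /*
                         * The releases either all succeed or all fail
                         * because the slot mutex was contended; the
                         * debugger retries the removal later, so this
                         * is not a leaked slot.
                         */
                        return ret;
        }
        return 0;
}
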
Thanks!