Message-ID: <20090311131007.GB1074@elte.hu>
Date: Wed, 11 Mar 2009 14:10:07 +0100
From: Ingo Molnar <mingo@...e.hu>
To: "K.Prasad" <prasad@...ux.vnet.ibm.com>
Cc: Alan Stern <stern@...land.harvard.edu>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Roland McGrath <roland@...hat.com>
Subject: Re: [patch 02/11] x86 architecture implementation of Hardware
Breakpoint interfaces

* K.Prasad <prasad@...ux.vnet.ibm.com> wrote:
> Even if #3 was implemented as described, we would still retain
> a majority of the complexity in balance_kernel_vs_user() to
> check newer tasks with requests for breakpoint registers.

Not if it's implemented in a really simple way:

Kernel gets debug registers in db4..db3..db2..db1 order, and its
allocation is essentially hardcoded - i.e. we don't try to be
fancy.

User-space (gdb), on the other hand, will try to allocate in the
db1..db2..db3..db4 order.

Maintain a 'max debug register index' value driven by ptrace and
maintain a 'min debug register index' driven by kernel-space
hw-breakpoint allocations.

If they meet somewhere in between then we have overcommit, which
we don't allow. In all other cases (which I proffer covers 100%
of the sane cases) they will mix amicably.
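
Purely as an illustration of the watermark idea above (not the
actual hw-breakpoint or ptrace code; the names NR_DEBUG_REGS,
alloc_kernel_dbreg() and alloc_user_dbreg() are made up), a
minimal C sketch could look like this, using 0-based indices
where index 0 is db1 and index NR_DEBUG_REGS-1 is the top
register:

/*
 * Hypothetical sketch of the min/max watermark scheme.  A real
 * implementation would also need locking and a way to release
 * registers again; this only shows allocation and the overcommit
 * check.
 */
#define NR_DEBUG_REGS	4

/* Registers [kernel_min_idx .. NR_DEBUG_REGS-1] belong to the kernel: */
static int kernel_min_idx = NR_DEBUG_REGS;

/* Registers [0 .. user_max_idx-1] have been handed out via ptrace: */
static int user_max_idx;

/* Kernel side: allocate top-down (db4, db3, db2, db1): */
static int alloc_kernel_dbreg(void)
{
	if (kernel_min_idx - 1 < user_max_idx)
		return -1;	/* the two ranges would meet: overcommit */
	return --kernel_min_idx;
}

/* ptrace side: allocate bottom-up (db1, db2, db3, db4): */
static int alloc_user_dbreg(void)
{
	if (user_max_idx >= kernel_min_idx)
		return -1;	/* the two ranges would meet: overcommit */
	return user_max_idx++;
}

With NR_DEBUG_REGS set to 1 this degenerates to the single-register
case noted further below: whichever side allocates first gets the
register, and the other side's request simply fails.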

Sure, user-space can in principle do db4..db3..db2..db1
allocations as well, but it would be silly and GDB does not do
that.

So there's no real overlap between register usage - hence no
need for any complex scheduling smarts. Keep it simple first,
and only add complexity when it's justified.

[ for the special case of an architecture having just a single
  debug register this will bring the expected behavior of either
  allowing gdb to use the breakpoint or allowing the kernel to
  use it. ]

	Ingo