Date:	Wed, 6 May 2009 11:37:40 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	Christoph Hellwig <hch@...radead.org>
Cc:	Oleg Nesterov <oleg@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Roland McGrath <roland@...hat.com>, jdike@...toit.com,
	utrace-devel@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [RFC, PATCH 0/2] utrace/ptrace: simplify/cleanup ptrace attach


* Christoph Hellwig <hch@...radead.org> wrote:

> On Wed, May 06, 2009 at 11:05:12AM +0200, Ingo Molnar wrote:
> > It might be more effective if you also wrote patches and shopped 
> > for maintainer Acks, instead of just "pinging" people? ;-) We've 
> > already got enough would-be managers on lkml, really.
> 
> I have no interest in touching tons of architectures where the 
> maintainers are much better off looking at those low-level bits. 
> [...]

That's a somewhat naive expectation. Currently ptrace has a low 
mindshare and an even lower know-how share, even amongst 
architecture maintainers. Much of the ptrace code was written many 
years ago, often copied over from other architectures and hacked 
until it sort-of worked. There are positive exceptions for sure, 
but generally ptrace know-how is extremely limited and a lot of 
architectures show little proactivity.

It would be far more efficient if Roland, Oleg (or you, if you are 
interested in this stuff - which you seem to be) did RFC patches 
and asked for maintainer acks than to depend on maintainers to do 
it.

We have about a dozen core kernel features that still have not 
propagated to all architectures: irqflags-tracking (for lockdep), 
genirq, stacktrace support, latencytop support, and more. We are 
just getting around to making GENERIC_TIME the only option [maybe..] - 
after years of migration.

We've got 22 architectures and they tend to slow down certain types 
of core kernel changes significantly.

> [...] See the case where Roland tried to do ARM but still hasn't 
> gotten any feedback as a negative example.

That really reinforces my point: arch maintainers are even less 
inclined to do it proactively.

> > Really, the above isn't a blocker list, it's your personal 
> > wish-list for the future. Cleaning up ptrace itself is already 
> > an upstream advantage worth having - for years ptrace was barely 
> > maintained. It interfaces to enough critical projects (gdb, 
> > strace, UML, etc.) to be a reliable (and testable) basis for 
> > utrace.
> 
> The cleanups aren't there for cleanup purposes, but to actually 
> allow the utrace-based ptrace to be used unconditionally.  There 
> is really no point in merging a second, conditional ptrace 
> implementation that has to be maintained alongside the existing 
> one while not adding a single new feature.

I'm well aware of what these patches are trying to achieve.
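
(For reference, the userspace ABI that gdb and strace exercise - 
and which a utrace-backed ptrace has to keep working unchanged - is 
the plain attach/stop/detach sequence. A minimal sketch in C, with 
most error handling elided:)

 #include <stdio.h>
 #include <stdlib.h>
 #include <sys/ptrace.h>
 #include <sys/types.h>
 #include <sys/wait.h>

 int main(int argc, char **argv)
 {
 	pid_t pid;
 	int status;

 	if (argc < 2) {
 		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
 		return 1;
 	}
 	pid = atoi(argv[1]);

 	if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1) {
 		perror("PTRACE_ATTACH");
 		return 1;
 	}
 	/* PTRACE_ATTACH queues a SIGSTOP; wait for the tracee to stop. */
 	waitpid(pid, &status, 0);

 	/* ... PTRACE_PEEKDATA / PTRACE_GETREGS inspection goes here ... */

 	ptrace(PTRACE_DETACH, pid, NULL, NULL);	/* resume the tracee */
 	return 0;
 }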

We've got the main mass of architectures covered:

 arch/ia64/Kconfig:	select HAVE_ARCH_TRACEHOOK
 arch/powerpc/Kconfig:	select HAVE_ARCH_TRACEHOOK
 arch/s390/Kconfig:	select HAVE_ARCH_TRACEHOOK
 arch/sh/Kconfig:	select HAVE_ARCH_TRACEHOOK
 arch/sparc/Kconfig:	select HAVE_ARCH_TRACEHOOK
 arch/x86/Kconfig:	select HAVE_ARCH_TRACEHOOK
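
Converting one of the remaining architectures is largely 
mechanical: select HAVE_ARCH_TRACEHOOK (which asserts that the arch 
provides the generic bits - asm/syscall.h, the user_regset 
interface, etc.) and call the tracehook entry points from its 
syscall-trace path. A rough sketch - the Kconfig symbol and the 
<linux/tracehook.h> helpers are real, the surrounding function name 
is per-arch and illustrative only:

 /* arch/foo/kernel/ptrace.c - hypothetical arch "foo" */
 #include <linux/sched.h>
 #include <linux/tracehook.h>

 asmlinkage long do_syscall_trace_enter(struct pt_regs *regs)
 {
 	long ret = 0;

 	/* Report syscall entry; a nonzero return aborts the syscall. */
 	if (test_thread_flag(TIF_SYSCALL_TRACE))
 		ret = tracehook_report_syscall_entry(regs);

 	return ret;
 }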

I'd expect the remaining arch conversions to tracehooks to be 
finished deterministically if done by the ptrace folks - i.e. 
Roland and Oleg. It will take forever if all that happens is a 
'ping' from you ;-)

	Ingo
