Date:	Sat, 5 Jul 2014 17:04:02 -0400
From:	Tejun Heo <tj@...nel.org>
To:	Jiri Kosina <jkosina@...e.cz>
Cc:	One Thousand Gnomes <gnomes@...rguk.ukuu.org.uk>,
	Jiri Slaby <jslaby@...e.cz>,
	Stephen Rothwell <sfr@...b.auug.org.au>,
	linux-kernel@...r.kernel.org, rostedt@...dmis.org,
	mingo@...hat.com, Andrew Morton <akpm@...ux-foundation.org>,
	andi@...stfloor.org, paulmck@...ux.vnet.ibm.com,
	Pavel Machek <pavel@....cz>, jirislaby@...il.com,
	Vojtech Pavlik <vojtech@...e.cz>, Michael Matz <matz@...e.de>
Subject: Re: kGraft to -next [was: 00/21 kGraft]

Hello,

On Sat, Jul 05, 2014 at 10:49:18PM +0200, Jiri Kosina wrote:
> Quite frankly, I have to say that I don't personally see *that* big a
> difference. In both cases we are making assumptions about semantics, and
> are trying to get "as close as possible" to the point in time where the
> set of assumptions can be made as minimal as possible.
>
> With a userspace thread, this is the kernel/userspace boundary. With a
> kthread, this is the start of a new iteration of the main loop /
> try_to_freeze().

This is really different.  With the kernel/userspace boundary, if the
patch is correct, the mechanism, sans bugs, should be able to
guarantee that the patched result is correct.  With kthreads, it
can't: the boundary between the old and new worlds might or might not
be correct.  How can the two be the same?
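
To make what a "freezing point" is concrete, here's a minimal sketch
of a typical freezable kthread main loop (my_thread_fn and
do_one_item are made-up names, not code from any real driver):

#include <linux/freezer.h>
#include <linux/kthread.h>

/* do_one_item() is a made-up stand-in for the thread's real work. */
static void do_one_item(void *data);

static int my_thread_fn(void *data)
{
        set_freezable();

        while (!kthread_should_stop()) {
                /*
                 * The "freezing point": a convenient spot to park the
                 * thread, but nothing guarantees that the state the
                 * thread carries here is consistent with the patched
                 * world view.
                 */
                try_to_freeze();
                do_one_item(data);
        }
        return 0;
}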

The fact that the two often coincide can be useful as a guideline or
whatever, but I'm completely against just mushing them together when
it isn't correct.  This kind of thing quickly leads to ambiguous
situations where people aren't sure about the specific semantics or
guarantees of the construct and implement weird voodoo code followed
by voodoo fixes.  We already had a full round of that with the kernel
freezer itself, where people thought that the freezer magically makes
PM work properly for a subsystem.  Let's please not do that again.

> > This is the mechanism itself being flaky and buggy.  This can make 
> > things fail not because the patch itself is semantically wrong but 
> > because the mechanism stopped the kernel at the wrong place.  
> 
> Well, to be precise, we are not "stopping" the kernel at all.

Sure, whatever the term, this is the boundary at which the mechanism
considers it safe to swap the code, right?  The user/kernel boundary
can serve that purpose.  Freezing points can't.
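
Just so we're talking about the same thing, a hypothetical sketch of
why that boundary works as a switch point (the flag and helper names
below are made up, not kGraft's actual code): each task carries a
"which world" flag, and the flag is only acted on when the task is
about to return to userspace, i.e. when it holds no in-kernel state
from the old world.

#include <linux/sched.h>

/* Made-up helper; switches the task's calls to the new functions. */
static void patch_switch_task_to_new_world(struct task_struct *task);

/* TIF_PATCH_PENDING is a made-up flag name for this sketch. */
static inline void patch_check_switch(struct task_struct *task)
{
        /* Called on the return-to-userspace path. */
        if (test_and_clear_tsk_thread_flag(task, TIF_PATCH_PENDING))
                patch_switch_task_to_new_world(task);
}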

> > If kthreads can't be safely supported now, that's fine, but then make it 
> > clear that functions which may be executed by kthreads aren't supported.
> 
> So one of the approaches implied by this would be first merging a light 
> version of kGraft which doesn't support kthreads, and working towards a 
> solution for kthreads as well (which might be tangential to kGraft; if 
> most of the kthreads get converted to workqueues, it's a win-win 
> situation anyway); would such kGraft-light get your Ack then? :)

Yes, I think that converting things over to workqueue or
kthread_worker is generally a good idea, but I'm not sure I'm in a
position to ack or nack kGraft as a whole.  I'm not too sure about
the capability itself (neither positively nor negatively), and it'd
take quite a bit of energy to scrutinize and compare all the
alternatives.  It'd be awesome if the people working on these
features could talk to each other and see whether things can be
combined.
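
To illustrate the kind of conversion I mean (everything here is a
made-up example of the pattern, not any specific subsystem): instead
of a dedicated thread looping forever, each unit of work becomes a
work item, and between items the task runs no subsystem code at all,
so there's no ambiguous main-loop state to reason about.

#include <linux/slab.h>
#include <linux/workqueue.h>

/* Made-up example of the kthread -> workqueue pattern. */
struct my_item {
        struct work_struct work;
        /* per-item payload would go here */
};

static void my_item_fn(struct work_struct *work)
{
        struct my_item *item = container_of(work, struct my_item, work);

        /* process the item, then free it */
        kfree(item);
}

static void submit_item(struct my_item *item)
{
        INIT_WORK(&item->work, my_item_fn);
        queue_work(system_wq, &item->work);
}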

Thanks.

-- 
tejun