Date:	Sat, 17 May 2014 00:32:10 +0200 (CEST)
From:	Jiri Kosina <jkosina@...e.cz>
To:	Steven Rostedt <rostedt@...dmis.org>
cc:	Masami Hiramatsu <masami.hiramatsu.pt@...achi.com>,
	Ingo Molnar <mingo@...nel.org>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Josh Poimboeuf <jpoimboe@...hat.com>,
	Seth Jennings <sjenning@...hat.com>,
	Ingo Molnar <mingo@...hat.com>, Jiri Slaby <jslaby@...e.cz>,
	linux-kernel@...r.kernel.org,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [RFC PATCH 0/2] kpatch: dynamic kernel patching

On Fri, 16 May 2014, Steven Rostedt wrote:

> > With lazy-switching implemented in kgraft, this can never happen.
> > 
> > So I'd like to ask for a little bit more explanation why you think the 
> > stop_machine()-based patching provides more sanity/consistency assurance 
> > than the lazy switching we're doing.
> 
> Here's what I'm more concerned with. With "lazy" switching you can have
> two tasks running two different versions of bar(). What happens if the
> locking of data within bar changes? Say the data was protected
> incorrectly with mutex(X) and you now need to protect it with mutex(Y).
> 
> With stop machine, you can make sure everyone is out of bar() and all
> tasks will use the same mutex to access the data. But with a lazy
> approach, one task can be protecting the data with mutex(X) and the
> other with mutex(Y) causing both tasks to be accessing the data at the
> same time.
> 
> *That* is what I'm more concerned about.

That's true, and we come back to what has been said at the very beginning 
for both approaches -- you can't really get away without manual human 
inspection of the patches that are being applied.
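
Just to spell your scenario out in code, here is a made-up sketch -- 
bar(), struct foo and the lock names are invented purely for 
illustration, this is not code from either patch set:

#include <linux/mutex.h>

/*
 * Old and new bar() protect the same data with different mutexes, so
 * with lazy switching two tasks can be inside the critical section at
 * the same time.
 */
struct foo {
	struct mutex lock_x;	/* lock the old code uses */
	struct mutex lock_y;	/* lock the fix switches to */
	int counter;		/* shared data both versions touch */
};

/* old bar(), still executed by tasks that have not been migrated yet */
static void bar_old(struct foo *f)
{
	mutex_lock(&f->lock_x);
	f->counter++;
	mutex_unlock(&f->lock_x);
}

/* new bar(), executed by tasks that already crossed the switch point */
static void bar_new(struct foo *f)
{
	mutex_lock(&f->lock_y);
	f->counter++;		/* same data, different lock => race */
	mutex_unlock(&f->lock_y);
}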

The case you have outlined is indeed problematic for the "lazy switching" 
approach, and can be worked around (with an interim function which takes 
both mutexes in a well-defined order, for example).
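
The interim step could look something like this (again a made-up 
sketch reusing the invented struct foo from above, not code from 
kgraft or kpatch):

/*
 * The intermediate bar() takes both mutexes in one fixed order, so it
 * excludes tasks still running the old bar() (via X) as well as tasks
 * already on the final version (via Y).  A follow-up patch can drop X
 * once no task runs the old bar() anymore.
 */
static void bar_interim(struct foo *f)
{
	mutex_lock(&f->lock_x);		/* always X first ... */
	mutex_lock(&f->lock_y);		/* ... then Y, one fixed order */

	f->counter++;			/* the shared data */

	mutex_unlock(&f->lock_y);
	mutex_unlock(&f->lock_x);
}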

You can construct a broken locking scenario for the stop_machine() approach 
as well -- consider a case where you are exchanging a function which 
changes the locking order of two locks/mutexes. How do you deal with the 
rest of the code where the locks are being acquired, but not through the 
functions you've exchanged?
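
For illustration (same made-up struct foo as above, none of this is 
from a real patch):

/*
 * The live patch replaces only func_a(); other call sites keep the old
 * X -> Y order, so after the switch the two orders can deadlock against
 * each other (ABBA).  stop_machine() only guarantees that nobody was
 * inside the replaced function at switch time, it does nothing for the
 * unpatched call sites.
 */

/* unpatched call site elsewhere in the kernel, untouched by the patch */
static void other_path(struct foo *f)
{
	mutex_lock(&f->lock_x);		/* old order: X ... */
	mutex_lock(&f->lock_y);		/* ... then Y */
	f->counter++;
	mutex_unlock(&f->lock_y);
	mutex_unlock(&f->lock_x);
}

/* replacement installed by the live patch */
static void func_a_new(struct foo *f)
{
	mutex_lock(&f->lock_y);		/* new order: Y ... */
	mutex_lock(&f->lock_x);		/* ... then X */
	f->counter++;
	mutex_unlock(&f->lock_x);
	mutex_unlock(&f->lock_y);
}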

So again -- there is no disagreement, I believe, about the fact that "live 
patches" can't be reliably auto-generated, and human inspection will 
always be necessary. Given the intended use-case (serious CVEs mostly, 
handled by distro vendors), this is fine.

-- 
Jiri Kosina
SUSE Labs
