Date:	Tue, 24 Feb 2015 13:12:17 +0100
From:	Vojtech Pavlik <vojtech@...e.com>
To:	Ingo Molnar <mingo@...nel.org>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Jiri Kosina <jkosina@...e.cz>,
	Josh Poimboeuf <jpoimboe@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...hat.com>,
	Seth Jennings <sjenning@...hat.com>,
	linux-kernel@...r.kernel.org,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Arjan van de Ven <arjan@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Borislav Petkov <bp@...en8.de>, live-patching@...r.kernel.org
Subject: Re: live kernel upgrades (was: live kernel patching design)

On Tue, Feb 24, 2015 at 10:44:05AM +0100, Ingo Molnar wrote:

> > This is the most common argument that's raised when live 
> > patching is discussed. "Why do we need live patching when we
> > have redundancy?"
> 
> My argument is that if we start off with a latency of 10
> seconds and improve that gradually, it will be good for
> everyone, with a clear, actionable route even for those who
> cannot take a 10-second delay today.

Sure, we can do it that way. 

Or do it in the other direction.

Today we have a tool (livepatch) in the kernel that can apply trivial
single-function fixes without a measurable disruption to applications.

And we can improve it gradually to expand the range of fixes it can
apply.
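
For concreteness, such a single-function fix is a small kernel module
built around the klp_* structures. The sketch below is modeled on
samples/livepatch/livepatch-sample.c; the patched function is just an
example, and the exact registration calls differ between kernel
versions:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/seq_file.h>
#include <linux/livepatch.h>

/* Replacement body for the function being fixed. */
static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
{
	seq_printf(m, "%s\n", "this has been live patched");
	return 0;
}

static struct klp_func funcs[] = {
	{
		.old_name = "cmdline_proc_show",  /* function to replace */
		.new_func = livepatch_cmdline_proc_show,
	}, { }
};

static struct klp_object objs[] = {
	{
		/* name == NULL means the object is vmlinux itself */
		.funcs = funcs,
	}, { }
};

static struct klp_patch patch = {
	.mod = THIS_MODULE,
	.objs = objs,
};

static int livepatch_init(void)
{
	return klp_enable_patch(&patch);
}

static void livepatch_exit(void)
{
}

module_init(livepatch_init);
module_exit(livepatch_exit);
MODULE_LICENSE("GPL");
MODULE_INFO(livepatch, "Y");

Once the module is loaded, ftrace redirects every subsequent call to
the old function to the new body; nothing needs to be restarted.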

Patches spanning interdependent functions can be handled by kGraft's
lazy migration.
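
To illustrate the idea with a toy userspace model (all names below are
invented; the real thing routes calls in an ftrace handler and flips a
per-task flag at the kernel/userspace boundary): each task keeps
seeing the old code until it crosses a safe point, after which it only
ever sees the new code, so interdependent functions switch together
per task.

#include <stdbool.h>
#include <stdio.h>

struct task { const char *name; bool new_universe; };

static int do_work_old(void) { return 1; }	/* original (buggy) code */
static int do_work_new(void) { return 2; }	/* fixed replacement */

/* What the redirection conceptually does: route the call based on
 * which universe the calling task lives in. */
static int do_work(struct task *t)
{
	return t->new_universe ? do_work_new() : do_work_old();
}

/* Safe point: in kGraft, crossing the kernel/userspace boundary
 * migrates the task to the new universe for good. */
static void cross_boundary(struct task *t)
{
	t->new_universe = true;
}

int main(void)
{
	struct task a = { "a", false }, b = { "b", false };

	printf("a: %d, b: %d\n", do_work(&a), do_work(&b));	/* 1, 1 */
	cross_boundary(&a);					/* a migrates */
	printf("a: %d, b: %d\n", do_work(&a), do_work(&b));	/* 2, 1 */
	return 0;
}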

Limited data structure changes can be handled by shadowing.
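
Roughly: keep the original struct layout intact and attach the new
fields through a side table keyed by the object's address. A toy
sketch with invented helpers (mainline livepatch later grew a real API
for this, klp_shadow_alloc()/klp_shadow_get()):

#include <stdio.h>
#include <stdlib.h>

struct widget { int id; };		/* layout frozen at build time */
struct widget_shadow { long new_field; };	/* state the fix adds */

/* Minimal side table; a real implementation hashes by address. */
static struct { void *obj; void *shadow; } table[64];
static int table_len;

static void shadow_attach(void *obj, void *shadow)
{
	table[table_len].obj = obj;
	table[table_len].shadow = shadow;
	table_len++;
}

static void *shadow_get(void *obj)
{
	for (int i = 0; i < table_len; i++)
		if (table[i].obj == obj)
			return table[i].shadow;
	return NULL;
}

int main(void)
{
	struct widget w = { .id = 42 };
	struct widget_shadow *s = calloc(1, sizeof(*s));

	shadow_attach(&w, s);	/* patched code bolts on the shadow */
	s->new_field = 7;

	/* other patched code finds the extra state by address */
	struct widget_shadow *got = shadow_get(&w);
	printf("widget %d: new_field=%ld\n", w.id,
	       got ? got->new_field : -1L);
	free(s);
	return 0;
}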

Major data structure and/or locking changes require stopping the kernel,
and trapping all tasks at the kernel/userspace boundary is clearly the
cleanest way to do that. It comes at a steep latency cost, though.
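
A toy userspace model of that trapping (everything below is invented
for illustration): workers check a pending flag at their "boundary"
and park on a barrier until the swap is done. A task that never comes
back to the boundary holds everyone up, which is where the latency
cost comes from.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define NTASKS 3

static atomic_bool switch_pending;
static atomic_bool running = true;
/* the NTASKS workers plus the coordinator meet at these barriers */
static pthread_barrier_t arrive, release;

static void *worker(void *arg)
{
	(void)arg;
	while (atomic_load(&running)) {
		usleep(1000);			/* "userspace" work */
		/* the kernel/userspace boundary check */
		if (atomic_load(&switch_pending)) {
			pthread_barrier_wait(&arrive);	/* park here */
			pthread_barrier_wait(&release);	/* until swapped */
		}
	}
	return NULL;
}

int main(void)
{
	pthread_t t[NTASKS];

	pthread_barrier_init(&arrive, NULL, NTASKS + 1);
	pthread_barrier_init(&release, NULL, NTASKS + 1);
	for (int i = 0; i < NTASKS; i++)
		pthread_create(&t[i], NULL, worker, NULL);

	usleep(10000);
	atomic_store(&switch_pending, true);	/* request the switch */
	pthread_barrier_wait(&arrive);		/* all tasks now parked */
	puts("all tasks trapped at the boundary; swapping the kernel");
	atomic_store(&switch_pending, false);
	atomic_store(&running, false);
	pthread_barrier_wait(&release);		/* let the tasks resume */

	for (int i = 0; i < NTASKS; i++)
		pthread_join(t[i], NULL);
	return 0;
}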

Full code replacement, without considering the scope of the change,
requires full serialization and deserialization of hardware and
userspace interface state, which is something we don't have today and
which would require work on every single driver. Possible, but
probably a decade of effort.

With this approach you have something useful at every point, and every
piece of effort put in gives you a reward.

> Let's see the use cases:
> 
> > [...] Examples would be legacy applications which can't 
> > run in an active-active cluster and need to be restarted 
> > on failover.
> 
> Most clusters (say web frontends) can take a stoppage of a 
> couple of seconds.

It's easy to find examples of workloads that can be stopped. That
doesn't rule out a significant set of workloads where stopping them is
very expensive.

> > Another use case is large HPC clusters, where all nodes 
> > have to run carefully synchronized. Once one gets behind 
> > in a calculation cycle, others have to wait for the 
> > results and the efficiency of the whole cluster goes 
> > down. [...]
> 
> I think calculation nodes on large HPC clusters qualify as 
> the specialized case that I mentioned, where the update 
> latency could be brought down into the 1 second range.
> 
> But I don't think calculation nodes are patched in the 
> typical case: you might want to patch Internet facing 
> frontend systems, the rest is left as undisturbed as 
> possible. So I'm not even sure this is a typical use case.

They're not patched for security bugs, but stability bugs are an
important issue for multi-month calculations.

> In any case, there's no hard limit on how fast such a 
> kernel upgrade can get in principle, and the folks who care 
> about that latency will surely help out optimizing it, and 
> many HPC projects are well funded.

So far, unless you come up with an effective solution, catching all
tasks at the kernel/userspace boundary (the "Kragle" approach) makes
the service interruption effectively unbounded, due to tasks in D
state (uninterruptible sleep).

> > The value of live patching is in near zero disruption.
> 
> Latency is a good attribute of a kernel upgrade mechanism, 
> but it's far from the only attribute, and we should 
> definitely not design limitations into the approach and 
> hurt all the other attributes, just to optimize that single 
> attribute.

It's an attribute I'm not willing to give up. On the other hand, I
definitely wouldn't argue against having modes of operation where the
latency is higher and the tool is more powerful.

> I.e. don't make it a single-issue project.

There is no need to worry about that. 

-- 
Vojtech Pavlik
Director SUSE Labs
