Message-ID: <alpine.LNX.2.00.1405060955180.3969@pobox.suse.cz>
Date:	Tue, 6 May 2014 10:03:46 +0200 (CEST)
From:	Jiri Kosina <jkosina@...e.cz>
To:	Ingo Molnar <mingo@...nel.org>
cc:	David Lang <david@...g.hm>, Josh Poimboeuf <jpoimboe@...hat.com>,
	Seth Jennings <sjenning@...hat.com>,
	Masami Hiramatsu <masami.hiramatsu.pt@...achi.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Ingo Molnar <mingo@...hat.com>, Jiri Slaby <jslaby@...e.cz>,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 0/2] kpatch: dynamic kernel patching

On Tue, 6 May 2014, Ingo Molnar wrote:

> So what I'm curious about, what is the actual 'in the field' distro 
> experience, about the type of live-patches that get pushed with 
> urgency?

This is of course a very good question. We've done some very light 
preparatory analysis and went through the patches that would make the 
most sense to ship as hot/live patches when there isn't enough time to 
schedule proper downtime (i.e. CVE severity high enough (local root), 
etc.). Most of the time, these turn out to be one-or-few-liners, mostly 
adding an extra check, fixing bounds, etc. There were just one or two in 
a few years' history where some extra care would have been needed.
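
To make this concrete, here is a made-up illustration (all names are 
invented, it is not taken from any actual CVE) of what such a 
one-or-few-liner typically looks like, boiled down to a self-contained 
userspace snippet:

#include <errno.h>
#include <string.h>

/* hypothetical driver state; every name here is made up */
struct frob_dev { char buf[64]; };
struct frob_req { size_t len; const char *data; };

static int frob_set(struct frob_dev *dev, const struct frob_req *req)
{
	if (req->len > sizeof(dev->buf))	/* <-- the entire live patch */
		return -EINVAL;
	memcpy(dev->buf, req->data, req->len);
	return 0;
}

int main(void)
{
	struct frob_dev dev;
	struct frob_req bad = { .len = (size_t)1 << 20, .data = 0 };
	return frob_set(&dev, &bad) == -EINVAL ? 0 : 1;
}

The fix is the two added lines; nothing else about the function, its 
callers or its data changes, which is exactly what makes it a good 
live-patching candidate.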

Of course, I guess the most valuable input regarding this could come 
from the kSplice guys, as they have the biggest in-field experience with 
this and are shipping a lot of non-trivial patches this way, not just 
the "super-critical" ones. But I am not really sure whether we can 
expect any input from them, unfortunately.

> My guess would be that the overwhelming majority of live-patches don't 
> change data structures - and hence the right initial model would be to 
> ensure (via tooling, and via review) that 'v1' and 'v2' data is exactly 
> the same.

I fully agree ... that's why we are not pro-actively dealing with that 
in kGraft, for the sake of keeping it stupid and simple (at least in the 
early stages).
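
A toy illustration of why the v1 == v2 data constraint matters (the 
struct is invented for the example): once a new field shifts the layout, 
a patched function compiled against v2 would misread every object that 
was allocated while v1 code was running:

#include <stdio.h>
#include <stddef.h>

/* v1: the layout all currently live objects were allocated with */
struct conn_v1 {
	int refcount;
	int state;
};

/* v2: the new 'flags' field shifts 'state' to a different offset */
struct conn_v2 {
	int refcount;
	unsigned long flags;
	int state;
};

int main(void)
{
	printf("offsetof(state): v1=%zu v2=%zu\n",
	       offsetof(struct conn_v1, state),
	       offsetof(struct conn_v2, state));
	return 0;
}

On x86-64 this prints 4 vs 16, so v2 code reading conn->state on a live 
v1 object would be looking at bytes that were never 'state' at all. 
Avoiding that whole class of problem is what keeping v1 and v2 data 
identical buys us.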

> The most abstract way to 'live patch' a kernel is to do a checkpoint 
> save on all application state, to reboot the kernel, boot into the 
> patched kernel and then restore all application state seamlessly.

In some sense yes, but it still seems to be a rather costly operation 
(especially since checkpointing a HUGE machine might take some time, if 
I understand the technology correctly) compared to just flipping a few 
bytes in kernel memory (while maintaining overall correctness, of 
course).
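
To give a (very rough) flavor of what "flipping a few bytes" means, here 
is a userspace toy for x86-64 Linux; everything about it is simplified 
and made up, it has nothing to do with the careful ftrace-based 
redirection that kpatch/kGraft actually do, and a hardened system may 
refuse the writable+executable mapping:

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

__attribute__((noinline)) static int check_v1(int len)
{
	return len < 4096;		/* buggy: negative len passes */
}

__attribute__((noinline)) static int check_v2(int len)
{
	return len >= 0 && len < 4096;	/* the one-liner fix */
}

/* call via a volatile pointer so the compiler can't fold the result */
static int (*volatile check)(int) = check_v1;

int main(void)
{
	long pagesz = sysconf(_SC_PAGESIZE);
	uintptr_t page = (uintptr_t)check_v1 & ~(uintptr_t)(pagesz - 1);

	printf("before: check(-1) = %d\n", check(-1));	/* 1, the bug */

	/* make the code writable; the kernel patches its text far more
	 * carefully than this (text_poke() etc.) */
	if (mprotect((void *)page, 2 * pagesz,
		     PROT_READ | PROT_WRITE | PROT_EXEC))
		return 1;

	/* x86-64: movabs $check_v2, %rax ; jmp *%rax (12 bytes) */
	unsigned char jmp[12] = { 0x48, 0xb8, 0, 0, 0, 0, 0, 0, 0, 0,
				  0xff, 0xe0 };
	void *target = (void *)check_v2;
	memcpy(&jmp[2], &target, sizeof(target));
	memcpy((void *)check_v1, jmp, sizeof(jmp));

	printf("after:  check(-1) = %d\n", check(-1));	/* 0, fixed */
	return 0;
}

Rewriting those 12 bytes is the whole "update"; the hard part, and the 
reason kpatch/kGraft exist, is doing the equivalent safely on a running 
kernel where any CPU might be executing inside the old function.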

Basically, checkpoint/restore might be the best solution for very 
complex and large kernel updates. What I am currently most concerned 
about, though, are scenarios where datacenter owners need to urgently 
apply a local-root one-liner fix with as little hassle as possible.

-- 
Jiri Kosina
SUSE Labs