Message-ID: <mafs0frfpt8yp.fsf@kernel.org>
Date: Tue, 24 Jun 2025 18:12:14 +0200
From: Pratyush Yadav <pratyush@...nel.org>
To: Pasha Tatashin <pasha.tatashin@...een.com>
Cc: Pratyush Yadav <pratyush@...nel.org>, Mike Rapoport <rppt@...nel.org>,
Jason Gunthorpe <jgg@...pe.ca>, jasonmiu@...gle.com, graf@...zon.com,
changyuanl@...gle.com, dmatlack@...gle.com, rientjes@...gle.com,
corbet@....net, rdunlap@...radead.org, ilpo.jarvinen@...ux.intel.com,
kanie@...ux.alibaba.com, ojeda@...nel.org, aliceryhl@...gle.com,
masahiroy@...nel.org, akpm@...ux-foundation.org, tj@...nel.org,
yoann.congal@...le.fr, mmaurer@...gle.com, roman.gushchin@...ux.dev,
chenridong@...wei.com, axboe@...nel.dk, mark.rutland@....com,
jannh@...gle.com, vincent.guittot@...aro.org, hannes@...xchg.org,
dan.j.williams@...el.com, david@...hat.com, joel.granados@...nel.org,
rostedt@...dmis.org, anna.schumaker@...cle.com, song@...nel.org,
zhangguopeng@...inos.cn, linux@...ssschuh.net,
linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
linux-mm@...ck.org, gregkh@...uxfoundation.org, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com,
x86@...nel.org, hpa@...or.com, rafael@...nel.org, dakr@...nel.org,
bartosz.golaszewski@...aro.org, cw00.choi@...sung.com,
myungjoo.ham@...sung.com, yesanishhere@...il.com,
Jonathan.Cameron@...wei.com, quic_zijuhu@...cinc.com,
aleksander.lobakin@...el.com, ira.weiny@...el.com,
andriy.shevchenko@...ux.intel.com, leon@...nel.org, lukas@...ner.de,
bhelgaas@...gle.com, wagi@...nel.org, djeffery@...hat.com,
stuart.w.hayes@...il.com
Subject: Re: [RFC v2 05/16] luo: luo_core: integrate with KHO
On Fri, Jun 20 2025, Pasha Tatashin wrote:
> On Fri, Jun 20, 2025 at 11:28 AM Pratyush Yadav <pratyush@...nel.org> wrote:
>> On Thu, Jun 19 2025, Pasha Tatashin wrote:
[...]
>> Outside of hypervisor live update, I have a very clear use case in mind:
>> userspace memory handover (on guest side). Say a guest running an
>> in-memory cache like memcached with many gigabytes of cache wants to
>> reboot. It can just shove the cache into a memfd, give it to LUO, and
>> restore it after reboot. Some services that suffer from long reboots are
>> looking into using this to reduce downtime. Since it pretty much
>> overlaps with the hypervisor work for now, I haven't been talking about
>> it as much.
>>
>> Would you also call this use case "live update"? Does it also fit with
>> your vision of where LUO should go?
>
> Yes, absolutely. The use case you described (preserving a memcached
> instance via memfd) is a perfect fit for LUO's vision.
>
> While the primary use case driving this work is supporting the
> preservation of virtual machines on a hypervisor, the framework itself
> is not restricted to that scenario. We define "live update" as the
> process of updating the kernel from one version to another while
> preserving FD-based resources and keeping selected devices
> operational. The machine itself can be running storage, database,
> networking, containers, or anything else.
>
> A good parallel is Kernel Live Patching: we don't distinguish what
> workload is running on a machine when applying a security patch; we
> simply patch the running kernel. In the same way, Live Update is
> designed to be workload-agnostic. Whether the system is running an
> in-memory database, containers, or VMs, its primary goal is to enable
> a full kernel update while preserving the userspace-requested state.
Okay, then we are on the same page and I can live with whatever name we
go with :-)
BTW, I think it would be useful to make this clarification in the LUO
docs as well, so the intended use case and audience of the API are
clear. Currently the doc string in luo_core.c only talks about
hypervisors and VMs.
--
Regards,
Pratyush Yadav