Message-ID: <mafs0ms5zn0nm.fsf@kernel.org>
Date: Fri, 10 Oct 2025 00:57:49 +0200
From: Pratyush Yadav <pratyush@...nel.org>
To: Pasha Tatashin <pasha.tatashin@...een.com>
Cc: pratyush@...nel.org, jasonmiu@...gle.com, graf@...zon.com,
changyuanl@...gle.com, rppt@...nel.org, dmatlack@...gle.com,
rientjes@...gle.com, corbet@....net, rdunlap@...radead.org,
ilpo.jarvinen@...ux.intel.com, kanie@...ux.alibaba.com,
ojeda@...nel.org, aliceryhl@...gle.com, masahiroy@...nel.org,
akpm@...ux-foundation.org, tj@...nel.org, yoann.congal@...le.fr,
mmaurer@...gle.com, roman.gushchin@...ux.dev, chenridong@...wei.com,
axboe@...nel.dk, mark.rutland@....com, jannh@...gle.com,
vincent.guittot@...aro.org, hannes@...xchg.org,
dan.j.williams@...el.com, david@...hat.com, joel.granados@...nel.org,
rostedt@...dmis.org, anna.schumaker@...cle.com, song@...nel.org,
zhangguopeng@...inos.cn, linux@...ssschuh.net,
linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
linux-mm@...ck.org, gregkh@...uxfoundation.org, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com,
x86@...nel.org, hpa@...or.com, rafael@...nel.org, dakr@...nel.org,
bartosz.golaszewski@...aro.org, cw00.choi@...sung.com,
myungjoo.ham@...sung.com, yesanishhere@...il.com,
Jonathan.Cameron@...wei.com, quic_zijuhu@...cinc.com,
aleksander.lobakin@...el.com, ira.weiny@...el.com,
andriy.shevchenko@...ux.intel.com, leon@...nel.org, lukas@...ner.de,
bhelgaas@...gle.com, wagi@...nel.org, djeffery@...hat.com,
stuart.w.hayes@...il.com, lennart@...ttering.net, brauner@...nel.org,
linux-api@...r.kernel.org, linux-fsdevel@...r.kernel.org,
saeedm@...dia.com, ajayachandra@...dia.com, jgg@...dia.com,
parav@...dia.com, leonro@...dia.com, witu@...dia.com,
hughd@...gle.com, skhawaja@...gle.com, chrisl@...nel.org,
steven.sistare@...cle.com
Subject: Re: [PATCH v4 00/30] Live Update Orchestrator
On Tue, Oct 07 2025, Pasha Tatashin wrote:
> On Sun, Sep 28, 2025 at 9:03 PM Pasha Tatashin
> <pasha.tatashin@...een.com> wrote:
>>
[...]
> 4. New File-Lifecycle-Bound Global State
> ----------------------------------------
> A new mechanism for managing global state was proposed, designed to be
> tied to the lifecycle of the preserved files themselves. This would
> allow a file owner (e.g., the IOMMU subsystem) to save and retrieve
> global state that is only relevant when one or more of its FDs are
> being managed by LUO.
Is this going to replace LUO subsystems? If yes, then why? The global
state will likely need to have its own lifecycle just like the FDs, and
subsystems are a simple and clean abstraction to control that. I get the
idea of only "activating" a subsystem when one or more of its FDs are
participating in LUO, but we can do that while keeping subsystems
around.
>
> The key characteristics of this new mechanism are:
> - The global state is optionally created on the first preserve() call
>   for a given file handler.
> - The state can be updated on subsequent preserve() calls.
> - The state is destroyed when the last corresponding file is
>   unpreserved or finished.
> - The data can be accessed during boot.
>
> I am thinking of an API like this.
>
> 1. Add three more callbacks to liveupdate_file_ops:
> /*
> * Optional. Called by LUO on the first global-state get() call.
> * The handler should allocate/KHO preserve its global state object and return a
> * pointer to it via 'obj'. It must also provide a u64 handle (e.g., a physical
> * address of preserved memory) via 'data_handle' that LUO will save.
> * Return: 0 on success.
> */
> int (*global_state_create)(struct liveupdate_file_handler *h,
> void **obj, u64 *data_handle);
>
> /*
> * Optional. Called by LUO in the new kernel
> * before the first access to the global state. The handler receives
> * the preserved u64 data_handle and should use it to reconstruct its
> * global state object, returning a pointer to it via 'obj'.
> * Return: 0 on success.
> */
> int (*global_state_restore)(struct liveupdate_file_handler *h,
> u64 data_handle, void **obj);
>
> /*
> * Optional. Called by LUO after the last
> * file for this handler is unpreserved or finished. The handler
> * must free its global state object and any associated resources.
> */
> void (*global_state_destroy)(struct liveupdate_file_handler *h, void *obj);
>
> 2. The get/put accessors for the global state data:
>
> /* Get and lock the data with file_handler scoped lock */
> int liveupdate_fh_global_state_get(struct liveupdate_file_handler *h,
> void **obj);
>
> /* Unlock the data */
> void liveupdate_fh_global_state_put(struct liveupdate_file_handler *h);
IMHO this looks clunky and overcomplicated. Each LUO FD type knows what
its subsystem is. It should talk to it directly. I don't get why we are
adding this intermediate step.
Here is how I imagine the proposed API would compare against subsystems
with hugetlb as an example (hugetlb support is still WIP, so I'm still
not clear on specifics, but this is how I imagine it will work):
- Hugetlb subsystem needs to track its huge page pools and which pages
are allocated and free. This is its global state. The pools get
reconstructed after kexec. Post-kexec, the free pages are ready for
allocation from other "regular" files and the pages used in LUO files
are reserved.
- Pre-kexec, when a hugetlb FD is preserved, it marks the pages as
  preserved in hugetlb's global tracking data structure. This is runtime
  data (say, an xarray), and _not_ serialized data. Reason being, there
  are likely more FDs to come, so there is no point in wasting time
  serializing just yet.
This can look something like:
hugetlb_luo_preserve_folio(folio, ...);
Nice and simple.
Compare this with the new proposed API:
liveupdate_fh_global_state_get(h, &hugetlb_data);
// This will update the serialized state now.
hugetlb_luo_preserve_folio(hugetlb_data, folio, ...);
liveupdate_fh_global_state_put(h);
We do the same thing but in a very complicated way.
- When the system-wide preserve happens, the hugetlb subsystem gets a
callback to serialize. It converts its runtime global state to
serialized state since now it knows no more FDs will be added.
With the new API, this doesn't need to be done since each FD prepare
already updates serialized state.
- If there are no hugetlb FDs, then the hugetlb subsystem doesn't put
  anything in LUO. This is the same as with the new API.
- If some hugetlb FDs are not restored after liveupdate and the finish
event is triggered, the subsystem gets its finish() handler called and
it can free things up.
I don't get how that would work with the new API.
My point is, I see subsystems working perfectly fine here and I don't
get how the proposed API is any better.
Am I missing something?
>
> Execution Flow:
> Outgoing Kernel (first preserve() call):
> 1. The handler's preserve() is called. It needs the global state, so it
>    calls liveupdate_fh_global_state_get(h, &obj). LUO acquires
>    h->global_state_lock and sees h->global_state_obj is NULL.
> 2. LUO calls h->ops->global_state_create(h, &h->global_state_obj,
>    &handle). The handler allocates its state, preserves it with KHO,
>    and returns its live pointer and a u64 handle.
> 3. LUO stores the handle internally for later serialization.
> 4. LUO sets *obj = h->global_state_obj and returns 0 with the lock
>    still held.
> 5. The preserve() callback does its work using the obj.
> 6. It calls liveupdate_fh_global_state_put(h), which releases the lock.
>
> Global PREPARE:
> 1. LUO iterates handlers. If h->count > 0, it writes the stored data_handle into
> the LUO FDT.
>
> Incoming Kernel (first access):
> 1. When liveupdate_fh_global_state_get(h, &obj) is called for the
>    first time, LUO acquires h->global_state_lock.
The huge page pools are allocated early-ish in boot. On x86, the 1 GiB
pages are allocated from setup_arch(). Other sizes are allocated later
in boot from a subsys_initcall. This is way before the first FD gets
restored, and in 1 GiB case even before LUO gets initialized.
At that point, it would be great if the hugetlb preserved data can be
retrieved. If not, then there needs to at least be some indication that
LUO brings huge pages with it, so that the kernel can trust that it will
be able to successfully get the pages later in boot.
This flow is tricky to implement in the proposed model. With subsystems,
it might just end up working with some early boot tricks to fetch LUO
data.
> 2. It sees h->global_state_obj is NULL, but it knows it has a preserved
>    u64 handle from the FDT, so LUO calls h->ops->global_state_restore().
> 3. The handler reconstructs its state object and returns the live pointer.
> 4. LUO sets *obj = h->global_state_obj and returns 0 with the lock held.
> 5. The caller does its work.
> 6. It calls liveupdate_fh_global_state_put(h) to release the lock.
>
> Last File Cleanup (in unpreserve or finish):
> 1. LUO decrements h->count to 0.
> 2. This triggers the cleanup logic.
> 3. LUO calls h->ops->global_state_destroy(h, h->global_state_obj).
> 4. The handler frees its memory and resources.
> 5. LUO sets h->global_state_obj = NULL, resetting it for a future live update
> cycle.
--
Regards,
Pratyush Yadav