Message-ID: <CA+CK2bB+RdapsozPHe84MP4NVSPLo6vje5hji5MKSg8L6ViAbw@mail.gmail.com>
Date: Tue, 7 Oct 2025 13:10:30 -0400
From: Pasha Tatashin <pasha.tatashin@...een.com>
To: pratyush@...nel.org, jasonmiu@...gle.com, graf@...zon.com,
changyuanl@...gle.com, pasha.tatashin@...een.com, rppt@...nel.org,
dmatlack@...gle.com, rientjes@...gle.com, corbet@....net,
rdunlap@...radead.org, ilpo.jarvinen@...ux.intel.com, kanie@...ux.alibaba.com,
ojeda@...nel.org, aliceryhl@...gle.com, masahiroy@...nel.org,
akpm@...ux-foundation.org, tj@...nel.org, yoann.congal@...le.fr,
mmaurer@...gle.com, roman.gushchin@...ux.dev, chenridong@...wei.com,
axboe@...nel.dk, mark.rutland@....com, jannh@...gle.com,
vincent.guittot@...aro.org, hannes@...xchg.org, dan.j.williams@...el.com,
david@...hat.com, joel.granados@...nel.org, rostedt@...dmis.org,
anna.schumaker@...cle.com, song@...nel.org, zhangguopeng@...inos.cn,
linux@...ssschuh.net, linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
linux-mm@...ck.org, gregkh@...uxfoundation.org, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com, x86@...nel.org,
hpa@...or.com, rafael@...nel.org, dakr@...nel.org,
bartosz.golaszewski@...aro.org, cw00.choi@...sung.com,
myungjoo.ham@...sung.com, yesanishhere@...il.com, Jonathan.Cameron@...wei.com,
quic_zijuhu@...cinc.com, aleksander.lobakin@...el.com, ira.weiny@...el.com,
andriy.shevchenko@...ux.intel.com, leon@...nel.org, lukas@...ner.de,
bhelgaas@...gle.com, wagi@...nel.org, djeffery@...hat.com,
stuart.w.hayes@...il.com, ptyadav@...zon.de, lennart@...ttering.net,
brauner@...nel.org, linux-api@...r.kernel.org, linux-fsdevel@...r.kernel.org,
saeedm@...dia.com, ajayachandra@...dia.com, jgg@...dia.com, parav@...dia.com,
leonro@...dia.com, witu@...dia.com, hughd@...gle.com, skhawaja@...gle.com,
chrisl@...nel.org, steven.sistare@...cle.com
Subject: Re: [PATCH v4 00/30] Live Update Orchestrator
On Sun, Sep 28, 2025 at 9:03 PM Pasha Tatashin
<pasha.tatashin@...een.com> wrote:
>
> This series introduces the Live Update Orchestrator (LUO), a kernel
> subsystem designed to facilitate live kernel updates. LUO enables
> kexec-based reboots with minimal downtime, a critical capability for
> cloud environments where hypervisors must be updated without disrupting
> running virtual machines. By preserving the state of selected resources,
> such as file descriptors and memory, LUO allows workloads to resume
> seamlessly in the new kernel.
>
> The git branch for this series can be found at:
> https://github.com/googleprodkernel/linux-liveupdate/tree/luo/v4
>
> The patch series applies against linux-next tag: next-20250926
>
> While this series is showcased using memfd preservation, there is
> ongoing work to preserve devices:
> 1. IOMMU: https://lore.kernel.org/all/20250928190624.3735830-16-skhawaja@google.com
> 2. PCI: https://lore.kernel.org/all/20250916-luo-pci-v2-0-c494053c3c08@kernel.org
>
> =======================================================================
> Changelog since v3:
> (https://lore.kernel.org/all/20250807014442.3829950-1-pasha.tatashin@soleen.com):
>
> - The main architectural change in this version is introduction of
> "sessions" to manage the lifecycle of preserved file descriptors.
> In v3, session management was left to a single userspace agent. This
> approach has been revised to improve robustness. Now, each session is
> represented by a file descriptor (/dev/liveupdate). The lifecycle of
> all preserved resources within a session is tied to this FD, ensuring
> automatic cleanup by the kernel if the controlling userspace agent
> crashes or exits unexpectedly.
>
> - The first three KHO fixes from the previous series have been merged
> into Linus' tree.
>
> - Various bug fixes and refactorings, including correcting memory
> unpreservation logic during a kho_abort() sequence.
>
> - Addressing all comments from reviewers.
>
> - Removing sysfs interface (/sys/kernel/liveupdate/state), the state
> can now be queried only via ioctl() API.
>
> =======================================================================
Hi all,
Following up on yesterday's Hypervisor Live Update meeting, we
discussed the requirements for the LUO to track dependencies,
particularly for IOMMU preservation and other stateful file
descriptors. This email summarizes the main design decisions and
outcomes from that discussion.
For context, the notes from the previous meeting can be found here:
https://lore.kernel.org/all/365acb25-4b25-86a2-10b0-1df98703e287@google.com
The notes for yesterday's meeting are not yet available.
The key outcomes are as follows:
1. User-Enforced Ordering
-------------------------
The responsibility for enforcing the correct order of operations will
lie with the userspace agent. If fd_A is a dependency for fd_B,
userspace must ensure that fd_A is preserved before fd_B. This same
ordering must be honored during the restoration phase after the reboot
(fd_A must be restored before fd_B). The kernel preserves the ordering.
2. Serialization in PRESERVE_FD
-------------------------------
To keep the global prepare() phase lightweight and predictable, the
consensus was to shift the heavy serialization work into the
PRESERVE_FD ioctl handler. This means that when userspace requests to
preserve a file, the file handler should perform the bulk of the
state-saving work immediately.
The proposed sequence of operations reflects this shift:
Shutdown Flow:
fd_preserve() (heavy serialization) -> prepare() (lightweight final
checks) -> Suspend VM -> reboot(KEXEC) -> freeze() (lightweight)
Boot & Restore Flow:
fd_restore() (lightweight object creation) -> Resume VM -> Heavy
post-restore IOCTLs (e.g., hardware page table re-creation) ->
finish() (lightweight cleanup)
This decision primarily serves as a guideline for file handler
implementations. For the LUO core, this implies minor API changes,
such as renaming can_preserve() to a more active preserve() and adding
a corresponding unpreserve() callback to be called during
UNPRESERVE_FD.
3. FD Data Query API
--------------------
We identified the need for a kernel API to allow subsystems to query
preserved FD data during the boot process, before userspace has
initiated the restore.
The proposed API would allow a file handler to retrieve a list of all
its preserved FDs, including their session names, tokens, and the
private data payload.
Proposed Data Structure:
struct liveupdate_fd {
	char *session;	/* session name */
	u64 token;	/* Preserved FD token */
	u64 data;	/* Private preserved data */
};
Proposed Function:
int liveupdate_fd_data_query(struct liveupdate_file_handler *h,
			     struct liveupdate_fd *fds, long *count);
4. New File-Lifecycle-Bound Global State
----------------------------------------
A new mechanism for managing global state was proposed, designed to be
tied to the lifecycle of the preserved files themselves. This would
allow a file owner (e.g., the IOMMU subsystem) to save and retrieve
global state that is only relevant when one or more of its FDs are
being managed by LUO.
The key characteristics of this new mechanism are:
The global state is optionally created on the first preserve() call
for a given file handler.
The state can be updated on subsequent preserve() calls.
The state is destroyed when the last corresponding file is unpreserved
or finished.
The data can be accessed during boot.
I am thinking of an API like this:
1. Add three more callbacks to liveupdate_file_ops:
/*
 * Optional. Called by LUO during the first get-global-state call.
 * The handler should allocate/KHO-preserve its global state object and
 * return a pointer to it via 'obj'. It must also provide a u64 handle
 * (e.g., a physical address of preserved memory) via 'data_handle' that
 * LUO will save.
 * Return: 0 on success.
 */
int (*global_state_create)(struct liveupdate_file_handler *h,
			   void **obj, u64 *data_handle);

/*
 * Optional. Called by LUO in the new kernel before the first access to
 * the global state. The handler receives the preserved u64 data_handle
 * and should use it to reconstruct its global state object, returning a
 * pointer to it via 'obj'.
 * Return: 0 on success.
 */
int (*global_state_restore)(struct liveupdate_file_handler *h,
			    u64 data_handle, void **obj);

/*
 * Optional. Called by LUO after the last file for this handler is
 * unpreserved or finished. The handler must free its global state
 * object and any associated resources.
 */
void (*global_state_destroy)(struct liveupdate_file_handler *h, void *obj);
2. Add get/put accessors for the global state data:

/* Get and lock the data with a file_handler-scoped lock */
int liveupdate_fh_global_state_get(struct liveupdate_file_handler *h,
				   void **obj);

/* Unlock the data */
void liveupdate_fh_global_state_put(struct liveupdate_file_handler *h);
Execution Flow:

Outgoing Kernel (first preserve() call):
1. The handler's preserve() is called. It needs the global state, so it
calls liveupdate_fh_global_state_get(&h, &obj). LUO acquires
h->global_state_lock and sees that h->global_state_obj is NULL.
2. LUO calls h->ops->global_state_create(h, &h->global_state_obj, &handle).
The handler allocates its state, preserves it with KHO, and returns its
live pointer and a u64 handle.
3. LUO stores the handle internally for later serialization.
4. LUO sets *obj = h->global_state_obj and returns 0 with the lock still held.
5. The preserve() callback does its work using obj.
6. It calls liveupdate_fh_global_state_put(h), which releases the lock.
Global PREPARE:
1. LUO iterates handlers. If h->count > 0, it writes the stored data_handle into
the LUO FDT.
Incoming Kernel (first access):
1. When liveupdate_fh_global_state_get(&h, &obj) is called for the first
time, LUO acquires h->global_state_lock.
2. It sees that h->global_state_obj is NULL, but it has a preserved u64
handle from the FDT, so LUO calls h->ops->global_state_restore().
3. The handler reconstructs its state object and returns the live pointer.
4. LUO sets *obj = h->global_state_obj and returns 0 with the lock held.
5. The caller does its work.
6. It calls liveupdate_fh_global_state_put(h) to release the lock.
Last File Cleanup (in unpreserve or finish):
1. LUO decrements h->count to 0.
2. This triggers the cleanup logic.
3. LUO calls h->ops->global_state_destroy(h, h->global_state_obj).
4. The handler frees its memory and resources.
5. LUO sets h->global_state_obj = NULL, resetting it for a future live update
cycle.
Pasha