Message-ID: <aRcSpbwBabFjeYe3@kernel.org>
Date: Fri, 14 Nov 2025 13:29:41 +0200
From: Mike Rapoport <rppt@...nel.org>
To: Pasha Tatashin <pasha.tatashin@...een.com>
Cc: pratyush@...nel.org, jasonmiu@...gle.com, graf@...zon.com,
dmatlack@...gle.com, rientjes@...gle.com, corbet@....net,
rdunlap@...radead.org, ilpo.jarvinen@...ux.intel.com,
kanie@...ux.alibaba.com, ojeda@...nel.org, aliceryhl@...gle.com,
masahiroy@...nel.org, akpm@...ux-foundation.org, tj@...nel.org,
yoann.congal@...le.fr, mmaurer@...gle.com, roman.gushchin@...ux.dev,
chenridong@...wei.com, axboe@...nel.dk, mark.rutland@....com,
jannh@...gle.com, vincent.guittot@...aro.org, hannes@...xchg.org,
dan.j.williams@...el.com, david@...hat.com,
joel.granados@...nel.org, rostedt@...dmis.org,
anna.schumaker@...cle.com, song@...nel.org, zhangguopeng@...inos.cn,
linux@...ssschuh.net, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org, linux-mm@...ck.org,
gregkh@...uxfoundation.org, tglx@...utronix.de, mingo@...hat.com,
bp@...en8.de, dave.hansen@...ux.intel.com, x86@...nel.org,
hpa@...or.com, rafael@...nel.org, dakr@...nel.org,
bartosz.golaszewski@...aro.org, cw00.choi@...sung.com,
myungjoo.ham@...sung.com, yesanishhere@...il.com,
Jonathan.Cameron@...wei.com, quic_zijuhu@...cinc.com,
aleksander.lobakin@...el.com, ira.weiny@...el.com,
andriy.shevchenko@...ux.intel.com, leon@...nel.org, lukas@...ner.de,
bhelgaas@...gle.com, wagi@...nel.org, djeffery@...hat.com,
stuart.w.hayes@...il.com, ptyadav@...zon.de, lennart@...ttering.net,
brauner@...nel.org, linux-api@...r.kernel.org,
linux-fsdevel@...r.kernel.org, saeedm@...dia.com,
ajayachandra@...dia.com, jgg@...dia.com, parav@...dia.com,
leonro@...dia.com, witu@...dia.com, hughd@...gle.com,
skhawaja@...gle.com, chrisl@...nel.org
Subject: Re: [PATCH v5 02/22] liveupdate: luo_core: integrate with KHO
On Fri, Nov 07, 2025 at 04:03:00PM -0500, Pasha Tatashin wrote:
> Integrate the LUO with the KHO framework to enable passing LUO state
> across a kexec reboot.
>
> When LUO transitions to the "prepared" state, it tells KHO to
> finalize, so that all memory segments that were added to the KHO
> preservation list are preserved. After the "prepared" state, no new
> segments can be preserved. If LUO is canceled, it also tells KHO to
> cancel the serialization, so that LUO can later go back into the
> prepared state.
>
> This patch introduces the following changes:
> - During the KHO finalization phase, allocate the FDT blob.
> - Populate this FDT with a LUO compatibility string ("luo-v1").
>
> LUO now depends on `CONFIG_KEXEC_HANDOVER`. The core state transition
> logic (`luo_do_*_calls`) remains unimplemented in this patch.
>
> Signed-off-by: Pasha Tatashin <pasha.tatashin@...een.com>
> ---
> include/linux/liveupdate.h | 6 +
> include/linux/liveupdate/abi/luo.h | 54 +++++++
> kernel/liveupdate/luo_core.c | 243 ++++++++++++++++++++++++++++-
> kernel/liveupdate/luo_internal.h | 17 ++
> mm/mm_init.c | 4 +
> 5 files changed, 323 insertions(+), 1 deletion(-)
> create mode 100644 include/linux/liveupdate/abi/luo.h
> create mode 100644 kernel/liveupdate/luo_internal.h
>
> diff --git a/include/linux/liveupdate.h b/include/linux/liveupdate.h
> index 730b76625fec..0be8804fc42a 100644
> --- a/include/linux/liveupdate.h
> +++ b/include/linux/liveupdate.h
> @@ -13,6 +13,8 @@
>
> #ifdef CONFIG_LIVEUPDATE
>
> +void __init liveupdate_init(void);
> +
> /* Return true if live update orchestrator is enabled */
> bool liveupdate_enabled(void);
>
> @@ -21,6 +23,10 @@ int liveupdate_reboot(void);
>
> #else /* CONFIG_LIVEUPDATE */
>
> +static inline void liveupdate_init(void)
> +{
> +}
The common practice is to place the braces on the same line as the
function declaration.
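I.e. the usual pattern for empty stubs:

	static inline void liveupdate_init(void) { }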
...
> +static int __init luo_early_startup(void)
> +{
> + phys_addr_t fdt_phys;
> + int err, ln_size;
> + const void *ptr;
> +
> + if (!kho_is_enabled()) {
> + if (liveupdate_enabled())
> + pr_warn("Disabling liveupdate because KHO is disabled\n");
> + luo_global.enabled = false;
> + return 0;
> + }
> +
> + /* Retrieve LUO subtree, and verify its format. */
> + err = kho_retrieve_subtree(LUO_FDT_KHO_ENTRY_NAME, &fdt_phys);
> + if (err) {
> + if (err != -ENOENT) {
> + pr_err("failed to retrieve FDT '%s' from KHO: %pe\n",
> + LUO_FDT_KHO_ENTRY_NAME, ERR_PTR(err));
> + return err;
> + }
> +
> + return 0;
> + }
> +
> + luo_global.fdt_in = __va(fdt_phys);
phys_to_virt() is clearer, isn't it?
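i.e.:

	luo_global.fdt_in = phys_to_virt(fdt_phys);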
> + err = fdt_node_check_compatible(luo_global.fdt_in, 0,
> + LUO_FDT_COMPATIBLE);
...
> +void __init liveupdate_init(void)
> +{
> + int err;
> +
> + err = luo_early_startup();
> + if (err) {
> + pr_err("The incoming tree failed to initialize properly [%pe], disabling live update\n",
> + ERR_PTR(err));
> + luo_global.enabled = false;
> + }
> +}
> +
> +/* Called during boot to create LUO fdt tree */
^ "create the outgoing LUO fdt tree", to distinguish it from the
incoming one.
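i.e.:

	/* Called during boot to create the outgoing LUO FDT tree */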
> +static int __init luo_late_startup(void)
> +{
> + int err;
> +
> + if (!liveupdate_enabled())
> + return 0;
> +
> + err = luo_fdt_setup();
> + if (err)
> + luo_global.enabled = false;
> +
> + return err;
> +}
> +late_initcall(luo_late_startup);
It would be nice to have a comment explaining why late_initcall() is fine
and why there's no need to initialize the outgoing fdt earlier.
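Something along these lines (the reasoning here is just my guess, please
adjust to the actual rationale):

	/*
	 * late_initcall() is early enough: the outgoing FDT is only
	 * consumed at live update time, long after all initcalls have
	 * run, so nothing needs the tree earlier during boot.
	 */
	late_initcall(luo_late_startup);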
> +/**
> + * luo_alloc_preserve - Allocate, zero, and preserve memory.
I think this and the "free" counterparts would be useful for any KHO users,
even those that don't need LUO.
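Maybe promote them to KHO proper, something like (the names are just a
strawman):

	/* in the KHO header, next to kho_preserve_folio() and friends */
	void *kho_alloc_preserve(size_t size);
	void kho_unpreserve_free(void *mem);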
> + * @size: The number of bytes to allocate.
> + *
> + * Allocates a physically contiguous block of zeroed pages that is large
> + * enough to hold @size bytes. The allocated memory is then registered with
> + * KHO for preservation across a kexec.
> + *
> + * Note: The actual allocated size will be rounded up to the nearest
> + * power-of-two page boundary.
> + *
> + * @return A virtual pointer to the allocated and preserved memory on success,
> + * or an ERR_PTR() encoded error on failure.
> + */
> +void *luo_alloc_preserve(size_t size)
> +{
> + struct folio *folio;
> + int order, ret;
> +
> + if (!size)
> + return ERR_PTR(-EINVAL);
> +
> + order = get_order(size);
> + if (order > MAX_PAGE_ORDER)
> + return ERR_PTR(-E2BIG);
High order allocations would likely fail or at least cause heavy reclaim.
For now it seems that we won't need really large contiguous chunks, so
maybe limit this to PAGE_ALLOC_COSTLY_ORDER?
Later, if we need higher order allocations, we can try to allocate with
__GFP_NORETRY or __GFP_RETRY_MAYFAIL, with a fallback to vmalloc.
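Roughly (untested):

	order = get_order(size);
	if (order > PAGE_ALLOC_COSTLY_ORDER)
		return ERR_PTR(-E2BIG);

and later, if we ever do need larger chunks:

	folio = folio_alloc(GFP_KERNEL | __GFP_ZERO | __GFP_NORETRY, order);
	if (!folio) {
		/* fall back to vmalloc() and preserve it page by page */
	}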
> +
> + folio = folio_alloc(GFP_KERNEL | __GFP_ZERO, order);
> + if (!folio)
> + return ERR_PTR(-ENOMEM);
> +
> + ret = kho_preserve_folio(folio);
> + if (ret) {
> + folio_put(folio);
> + return ERR_PTR(ret);
> + }
> +
> + return folio_address(folio);
> +}
> +
--
Sincerely yours,
Mike.