Message-ID: <aFP7wwCviqxujKDg@kernel.org>
Date: Thu, 19 Jun 2025 15:00:03 +0300
From: Mike Rapoport <rppt@...nel.org>
To: Pasha Tatashin <pasha.tatashin@...een.com>
Cc: Pratyush Yadav <pratyush@...nel.org>, Jason Gunthorpe <jgg@...pe.ca>,
jasonmiu@...gle.com, graf@...zon.com, changyuanl@...gle.com,
dmatlack@...gle.com, rientjes@...gle.com, corbet@....net,
rdunlap@...radead.org, ilpo.jarvinen@...ux.intel.com,
kanie@...ux.alibaba.com, ojeda@...nel.org, aliceryhl@...gle.com,
masahiroy@...nel.org, akpm@...ux-foundation.org, tj@...nel.org,
yoann.congal@...le.fr, mmaurer@...gle.com, roman.gushchin@...ux.dev,
chenridong@...wei.com, axboe@...nel.dk, mark.rutland@....com,
jannh@...gle.com, vincent.guittot@...aro.org, hannes@...xchg.org,
dan.j.williams@...el.com, david@...hat.com,
joel.granados@...nel.org, rostedt@...dmis.org,
anna.schumaker@...cle.com, song@...nel.org, zhangguopeng@...inos.cn,
linux@...ssschuh.net, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org, linux-mm@...ck.org,
gregkh@...uxfoundation.org, tglx@...utronix.de, mingo@...hat.com,
bp@...en8.de, dave.hansen@...ux.intel.com, x86@...nel.org,
hpa@...or.com, rafael@...nel.org, dakr@...nel.org,
bartosz.golaszewski@...aro.org, cw00.choi@...sung.com,
myungjoo.ham@...sung.com, yesanishhere@...il.com,
Jonathan.Cameron@...wei.com, quic_zijuhu@...cinc.com,
aleksander.lobakin@...el.com, ira.weiny@...el.com,
andriy.shevchenko@...ux.intel.com, leon@...nel.org, lukas@...ner.de,
bhelgaas@...gle.com, wagi@...nel.org, djeffery@...hat.com,
stuart.w.hayes@...il.com
Subject: Re: [RFC v2 05/16] luo: luo_core: integrate with KHO
On Wed, Jun 18, 2025 at 01:43:18PM -0400, Pasha Tatashin wrote:
> > > > > >> >
> > > > > >> > What I meant is that even without KHO_DEBUGFS, LUO drives KHO, but then
> > > > > >> > KHO calls into LUO from the notifier, which makes the control flow
> > > > > >> > somewhat convoluted. If LUO is supposed to be the only thing that
> > > > > >> > interacts directly with KHO, maybe we should get rid of the notifier and
> > > > > >> > only let LUO drive things.
> > > > > >>
> > > > > >> Yes, we should. I think we should consider the KHO notifiers and
> > > > > >> self-orchestration as obsoleted by LUO. That's why it was in debugfs,
> > > > > >> because we were not ready to commit to it.
> > > > > >
> > > > > > We could do that; however, there is one example of a KHO user,
> > > > > > `reserve_mem`, that is also not liveupdate related. So it should
> > > > > > either be removed or modified to be handled by LUO.
> > > > >
> > > > > It still depends on kho_finalize() being called, so it still needs
> > > > > something to trigger its serialization. It is not automatic. And with
> > > > > your proposed patch to make the debugfs interface optional, it can't
> > > > > even be used with the config disabled.
> > > >
> > > > At least for now, it can still be used via LUO going into the prepare
> > > > state, since LUO moves KHO into the finalized state and reserve_mem is
> > > > registered to be called back from KHO.
> > > >
> > > > > So if it must be explicitly triggered to be preserved, why not let the
> > > > > trigger point be LUO instead of KHO? You can make reservemem a LUO
> > > > > subsystem instead.
> > > >
> > > > Yes, LUO can do that; the only concern I raised is that `reserve_mem`
> > > > is not really live update related.
> > >
> > > I only now realized what bothered me about "liveupdate". It's the name of
> > > the driving use case rather than the name of the technology it implements.
> > > In the end, what LUO does is provide (more) sophisticated control for KHO.
> > >
> > > But essentially it's not that it actually implements live update; it
> > > provides a kexec handover control plane that enables live update.
> > >
> > > And since the same machinery can be used regardless of live update, and I'm
> > > sure other use cases will appear as soon as the technology becomes more
> > > mature, it makes me think that we probably should just
> > > s/liveupdate_/kho_control/g or something along those lines.
> >
> > I disagree. LUO is for liveupdate flows and is designed specifically
> > around the live update flows: brownout/blackout/post-liveupdate. It
> > should not be generalized to anticipate some other random states, and
> > it should only support participants that are related to live update:
> > iommufd/vfiofd/kvmfd/memfd/eventfd, controlled via the "liveupdated"
> > userspace agent.
But that's not how things work. Once there's an API, anyone can use it,
right?
How do you intend to restrict the use of this API to subsystems that are
related to the live update flow? Or to userspace driving the ioctls outside
the "liveupdated" user agent?
There are a lot of examples of kernel subsystems that were designed for a
particular thing and later were extended to support additional use cases.
I'm not saying LUO should "anticipate some other random states"; what I'm
saying is that use cases other than liveupdate may appear and use the APIs
LUO provides for something else.
> > KHO is for preserving memory, LUO uses KHO as a backbone for Live Update.
If we make LUO the only uABI to drive KHO, it becomes misnamed from the
start.
As you mentioned yourself, reserve_mem and potentially IMA and kexec
telemetry are not necessarily related to LUO, but it still would be useful
to support them without LUO.
While it's easy to make memblock a LUO subsystem, to me the naming seems
semantically wrong.
> > > > > Although to be honest, things like reservemem (or IMA perhaps?) don't
> > > > > really fit well with the explicit trigger mechanism. They can be carried
> > > >
> > > > Agreed. Another example I was thinking about is "kexec telemetry":
> > > > precise time information about kexec, including shutdown, purgatory, and
> > > > boot. We are planning to propose kexec telemetry, and it could be a LUO
> > > > subsystem. On the other hand, it could be useful even without live
> > > > update, just to measure precise kexec reboot time.
> > > >
> > > > > across kexec without needing userspace explicitly driving it. Maybe we
> > > > > allow LUO subsystems to mark themselves as auto-preservable and LUO will
> > > > > preserve them regardless of state being prepared? Something to think
> > > > > about later down the line I suppose.
> > > >
> > > > We can start with adding `reserve_mem` as a regular subsystem, and make
> > > > this auto-preserve option a future expansion, if and when needed.
> > > > Presumably, `luoctl prepare` would work for whoever plans to use just
> > > > `reserve_mem`.
> > >
> > > I think it would be nice to support auto-preserve sooner than later.
> >
> > Makes sense.
> >
> > > reserve_mem can already be useful for ftrace and pstore folks, and if it
> > > survived a kexec without any userspace intervention that would be great.
> >
> > The pstore use case is only potential, correct? Or can it already use
> > reserve_mem?
pstore can use reserve_mem already.
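For example, something along these lines should work already today (an
illustrative command line from memory, so double check the exact
reserve_mem/ramoops syntax):

	reserve_mem=2M:4096:oops ramoops.mem_name=oops

reserve_mem carves out a labeled range at boot and ramoops picks it up by
that label.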
> So currently, KHO provides the following two types of internal API:
>
> Preserve memory and metadata
> =========================
> kho_preserve_folio() / kho_preserve_phys()
> kho_unpreserve_folio() / kho_unpreserve_phys()
> kho_restore_folio()
>
> kho_add_subtree() kho_retrieve_subtree()
>
> State machine
> ===========
> register_kho_notifier() / unregister_kho_notifier()
>
> kho_finalize() / kho_abort()
>
> We should remove the "State machine", and only keep the "Preserve
> Memory" API functions. At the time these functions are called, KHO
> should do the magic of making sure that the memory gets preserved
> across the reboot.
>
> This way, reserve_mem_init() would call kho_preserve_folio() and
> kho_add_subtree() during boot, and be done with it.
Right, but we still need something to drive kho_mem_serialize().
And it has to be done before kexec load, at least until we resolve this.
Currently this is triggered either by KHO debugfs or by LUO ioctls. If we
completely drop KHO debugfs and notifiers, we still need something that
would trigger the magic.
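Just to make the proposed flow concrete, I'd picture the reserve_mem side
looking roughly like the sketch below. To be clear, this is a rough,
untested sketch: the exact signatures and the boot-time kho_add_subtree()
call are assumptions about the proposed API rather than what's in the tree
today, and reserve_mem_build_fdt() and the reserved_mem_* variables are
hypothetical placeholders.

/*
 * Rough sketch only: preserve the reserved range at boot and publish
 * a small FDT describing it so the next kernel can look it up.
 */
static int __init reserve_mem_kho_init(void)
{
	phys_addr_t start = reserved_mem_start;	/* hypothetical */
	size_t size = reserved_mem_size;	/* hypothetical */
	void *fdt;
	int err;

	/* make sure the reserved range itself survives kexec */
	err = kho_preserve_phys(start, size);
	if (err)
		return err;

	/* build a small FDT node describing the range ... */
	fdt = reserve_mem_build_fdt(start, size);	/* hypothetical helper */
	if (!fdt)
		return -ENOMEM;

	/* ... and publish it; the next kernel would fetch it with
	 * kho_retrieve_subtree("reserve_mem", ...) */
	return kho_add_subtree("reserve_mem", fdt);
}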
I'm not saying we should keep KHO debugfs and notifiers; I'm saying that if
we make LUO the only thing driving KHO, liveupdate is not an appropriate
name.
> Pasha
>
--
Sincerely yours,
Mike.