Message-ID: <CAJuCfpHVxpEC4xCW1QkEkMS3A2SU3yVcm8sX_-CLa=x7uqXeTA@mail.gmail.com>
Date: Tue, 20 Aug 2024 00:26:25 -0700
From: Suren Baghdasaryan <surenb@...gle.com>
To: Mike Rapoport <rppt@...nel.org>
Cc: akpm@...ux-foundation.org, kent.overstreet@...ux.dev, corbet@....net,
arnd@...db.de, mcgrof@...nel.org, paulmck@...nel.org, thuth@...hat.com,
tglx@...utronix.de, bp@...en8.de, xiongwei.song@...driver.com,
ardb@...nel.org, david@...hat.com, vbabka@...e.cz, mhocko@...e.com,
hannes@...xchg.org, roman.gushchin@...ux.dev, dave@...olabs.net,
willy@...radead.org, liam.howlett@...cle.com, pasha.tatashin@...een.com,
souravpanda@...gle.com, keescook@...omium.org, dennis@...nel.org,
jhubbard@...dia.com, yuzhao@...gle.com, vvvvvv@...gle.com,
rostedt@...dmis.org, iamjoonsoo.kim@....com, rientjes@...gle.com,
minchan@...gle.com, kaleshsingh@...gle.com, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org, linux-mm@...ck.org,
linux-modules@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH 1/5] alloc_tag: load module tags into separate continuous memory
On Tue, Aug 20, 2024 at 12:13 AM Mike Rapoport <rppt@...nel.org> wrote:
>
> On Mon, Aug 19, 2024 at 08:15:07AM -0700, Suren Baghdasaryan wrote:
> > When a module gets unloaded there is a possibility that some of the
> > allocations it made are still used and therefore the allocation tags
> > corresponding to these allocations are still referenced. As such, the
> > memory for these tags can't be freed. This is currently handled as an
> > abnormal situation and the module's data section is not unloaded.
> > To handle this situation without keeping the module's data in memory,
> > allow codetags with a longer lifespan than the module to be loaded into
> > their own separate memory. The in-use memory areas and the gaps left
> > after module unloading in this separate memory are tracked using maple
> > trees. Allocation tags arrange their separate memory so that it is
> > virtually contiguous, which will allow simple allocation tag indexing
> > later in this patchset. The size of this virtually contiguous memory is
> > set to store up to 100000 allocation tags, and a max_module_alloc_tags
> > kernel parameter is introduced to change this size.
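
(To make the gap tracking above concrete, here is a rough sketch of how
free ranges inside a reserved, virtually contiguous area could be managed
with a maple tree. The names and bookkeeping below are illustrative
assumptions, not the code in this patch.)

/* Illustrative sketch only -- not the code from this patch. */
#include <linux/maple_tree.h>
#include <linux/module.h>

/* Hypothetical tree of occupied ranges, indexed by byte offset. */
static struct maple_tree tag_area_mt = MTREE_INIT(tag_area_mt,
						  MT_FLAGS_ALLOC_RANGE);
static unsigned long tag_area_size;	/* e.g. 100000 * tag_size bytes */

/* Find a gap large enough for a module's tags and mark it occupied. */
static int tag_area_reserve(struct module *mod, unsigned long size,
			    unsigned long *offset)
{
	return mtree_alloc_range(&tag_area_mt, offset, mod, size,
				 0, tag_area_size - 1, GFP_KERNEL);
}

/* On unload, drop the module's range so it becomes a reusable gap. */
static void tag_area_release(unsigned long offset)
{
	mtree_erase(&tag_area_mt, offset);
}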
> >
> > Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
> > ---
> > .../admin-guide/kernel-parameters.txt | 4 +
> > include/asm-generic/codetag.lds.h | 19 ++
> > include/linux/alloc_tag.h | 13 +-
> > include/linux/codetag.h | 35 ++-
> > kernel/module/main.c | 67 +++--
> > lib/alloc_tag.c | 245 ++++++++++++++++--
> > lib/codetag.c | 101 +++++++-
> > scripts/module.lds.S | 5 +-
> > 8 files changed, 429 insertions(+), 60 deletions(-)
>
> ...
>
> > diff --git a/include/linux/codetag.h b/include/linux/codetag.h
> > index c2a579ccd455..c4a3dd60205e 100644
> > --- a/include/linux/codetag.h
> > +++ b/include/linux/codetag.h
> > @@ -35,8 +35,13 @@ struct codetag_type_desc {
> > size_t tag_size;
> > void (*module_load)(struct codetag_type *cttype,
> > struct codetag_module *cmod);
> > - bool (*module_unload)(struct codetag_type *cttype,
> > + void (*module_unload)(struct codetag_type *cttype,
> > struct codetag_module *cmod);
> > + void (*module_replaced)(struct module *mod, struct module *new_mod);
> > + bool (*needs_section_mem)(struct module *mod, unsigned long size);
> > + void *(*alloc_section_mem)(struct module *mod, unsigned long size,
> > + unsigned int prepend, unsigned long align);
> > + void (*free_section_mem)(struct module *mod, bool unused);
> > };
> >
> > struct codetag_iterator {
> > @@ -71,11 +76,31 @@ struct codetag_type *
> > codetag_register_type(const struct codetag_type_desc *desc);
> >
> > #if defined(CONFIG_CODE_TAGGING) && defined(CONFIG_MODULES)
> > +
> > +bool codetag_needs_module_section(struct module *mod, const char *name,
> > + unsigned long size);
> > +void *codetag_alloc_module_section(struct module *mod, const char *name,
> > + unsigned long size, unsigned int prepend,
> > + unsigned long align);
> > +void codetag_free_module_sections(struct module *mod);
> > +void codetag_module_replaced(struct module *mod, struct module *new_mod);
> > void codetag_load_module(struct module *mod);
> > -bool codetag_unload_module(struct module *mod);
> > -#else
> > +void codetag_unload_module(struct module *mod);
> > +
> > +#else /* defined(CONFIG_CODE_TAGGING) && defined(CONFIG_MODULES) */
> > +
> > +static inline bool
> > +codetag_needs_module_section(struct module *mod, const char *name,
> > + unsigned long size) { return false; }
> > +static inline void *
> > +codetag_alloc_module_section(struct module *mod, const char *name,
> > + unsigned long size, unsigned int prepend,
> > + unsigned long align) { return NULL; }
> > +static inline void codetag_free_module_sections(struct module *mod) {}
> > +static inline void codetag_module_replaced(struct module *mod, struct module *new_mod) {}
> > static inline void codetag_load_module(struct module *mod) {}
> > -static inline bool codetag_unload_module(struct module *mod) { return true; }
> > -#endif
> > +static inline void codetag_unload_module(struct module *mod) {}
> > +
> > +#endif /* defined(CONFIG_CODE_TAGGING) && defined(CONFIG_MODULES) */
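
(For orientation, a codetag consumer would wire up these new hooks roughly
as below. This is a skeleton with placeholder names and empty bodies, not
the actual lib/alloc_tag.c implementation from this patch.)

/* Skeleton only -- placeholder callbacks, not this patch's implementation. */
static bool my_needs_section_mem(struct module *mod, unsigned long size)
{
	/* Request separate memory whenever the module carries tags. */
	return size > 0;
}

static void *my_alloc_section_mem(struct module *mod, unsigned long size,
				  unsigned int prepend, unsigned long align)
{
	/* Carve @size bytes (plus @prepend) out of the contiguous tag area. */
	return NULL;	/* placeholder */
}

static void my_free_section_mem(struct module *mod, bool unused)
{
	/* Release the range, or keep it while tags are still referenced. */
}

static const struct codetag_type_desc my_desc = {
	.tag_size		= sizeof(struct alloc_tag),
	.needs_section_mem	= my_needs_section_mem,
	.alloc_section_mem	= my_alloc_section_mem,
	.free_section_mem	= my_free_section_mem,
};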
>
> Maybe I'm missing something, but can't alloc_tag::module_unload() just copy
> the tags that cannot be freed somewhere outside of module sections and then
> free the module?
>
> The heavy lifting would be localized to alloc_tags rather than spread all
> over.
Hi Mike,
We can't copy those tags because allocations already have references
to them. We would have to find and update those references to point to
the new locations of these tags. That means potentially scanning all
page extensions/pages in the system and updating their tag references
in some race-free fashion. So, not at all trivial.
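
To illustrate the problem (simplified types; the struct and field names
below are approximations, not the kernel's exact definitions): every
tagged allocation stores a raw pointer to its tag, so relocating a tag
leaves all of those pointers dangling.

/* Simplified illustration -- not the kernel's exact definitions. */
struct codetag {
	const char *filename;	/* source location of the allocation */
	unsigned int lineno;
};

union codetag_ref {
	struct codetag *ct;	/* raw pointer stored per allocation */
};

struct page_ext_slot {		/* stands in for the per-page extension */
	union codetag_ref ref;
};

/*
 * Moving a still-referenced tag would mean walking every page_ext_slot
 * in the system, finding each ref.ct that points at the old location,
 * and rewriting it, all without racing against concurrent allocations
 * and frees.
 */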
Thanks,
Suren.
>
> --
> Sincerely yours,
> Mike.