Message-ID: <Ycis/J1U2DB6Zx7j@pc638.lan>
Date: Sun, 26 Dec 2021 18:57:16 +0100
From: Uladzislau Rezki <urezki@...il.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: Uladzislau Rezki <urezki@...il.com>,
Manfred Spraul <manfred@...orfullife.com>,
LKML <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Vasily Averin <vvs@...tuozzo.com>, cgel.zte@...il.com,
shakeelb@...gle.com, rdunlap@...radead.org, dbueso@...e.de,
unixbhaskar@...il.com, chi.minghao@....com.cn, arnd@...db.de,
Zeal Robot <zealci@....com.cn>, linux-mm@...ck.org,
1vier1@....de, stable@...r.kernel.org
Subject: Re: [PATCH] mm/util.c: Make kvfree() safe for calling while holding
spinlocks
On Sat, Dec 25, 2021 at 10:58:29PM +0000, Matthew Wilcox wrote:
> On Sat, Dec 25, 2021 at 07:54:12PM +0100, Uladzislau Rezki wrote:
> > +static void drain_vmap_area(struct work_struct *work)
> > +{
> > + if (mutex_trylock(&vmap_purge_lock)) {
> > + __purge_vmap_area_lazy(ULONG_MAX, 0);
> > + mutex_unlock(&vmap_purge_lock);
> > + }
> > +}
> > +
> > +static DECLARE_WORK(drain_vmap_area_work, drain_vmap_area);
>
> Presuambly if the worker fails to get the mutex, it should reschedule
> itself? And should it even trylock or just always lock?
>
mutex_trylock() makes no sense here. The worker should just always take
the lock; otherwise we can miss the point at which a purge is needed.
I agree with your opinion.
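Roughly like this (untested sketch, just to illustrate the unconditional
locking; the function and lock names are taken from the patch above):

    static void drain_vmap_area(struct work_struct *work)
    {
            /*
             * Always take the lock, so a queued drain cannot be
             * skipped just because a purge is already in flight.
             */
            mutex_lock(&vmap_purge_lock);
            __purge_vmap_area_lazy(ULONG_MAX, 0);
            mutex_unlock(&vmap_purge_lock);
    }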
>
> This kind of ties into something I've been wondering about -- we have
> a number of places in the kernel which cache 'freed' vmalloc allocations
> in order to speed up future allocations of the same size. Kind of like
> slab. Would we be better off trying to cache frequent allocations
> inside vmalloc instead of always purging them?
>
Hm... Some sort of caching would be good, though it will require some
time to think through all the details and the design itself. We could
cache VAs instead of purging them, until some point or threshold is
reached. Basically we could keep them in our data structures, associate
each one with a cache based on its size, and reuse it later in
alloc_vmap_area(). All of that is "vmap_area"-level caching. Another
option is to cache the "vm_struct", which includes the "vmap_area" plus
the pages that back the mapping. That is a higher level of caching, and
I am not sure an implementation would be as straightforward.
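As a very rough illustration of the "vmap_area" variant (purely a
sketch, nothing like this exists in the tree; va_cache,
NR_VA_SIZE_CLASSES and the helpers are made-up names, and all locking
is omitted):

    #define NR_VA_SIZE_CLASSES	16

    /* One free-list per power-of-two size class, in pages. */
    static struct list_head va_cache[NR_VA_SIZE_CLASSES];

    static int va_size_class(unsigned long size)
    {
            /* Assumes size is at least one page. */
            return min_t(int, ilog2(size >> PAGE_SHIFT),
                    NR_VA_SIZE_CLASSES - 1);
    }

    /* Instead of lazily purging a VA, park it in the cache... */
    static void va_cache_put(struct vmap_area *va)
    {
            int idx = va_size_class(va->va_end - va->va_start);

            list_add(&va->list, &va_cache[idx]);
    }

    /* ...and let alloc_vmap_area() probe the cache first. */
    static struct vmap_area *va_cache_get(unsigned long size)
    {
            struct vmap_area *va;

            va = list_first_entry_or_null(&va_cache[va_size_class(size)],
                    struct vmap_area, list);
            if (va)
                    list_del(&va->list);

            return va;
    }

A real implementation would also need some threshold or shrinker to
release cached VAs back, which is the "some point or threshold" part
above.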
--
Vlad Rezki