Message-ID: <20191009060539.fmpqesc4wfisulrl@beryllium.lan>
Date: Wed, 9 Oct 2019 08:05:39 +0200
From: Daniel Wagner <dwagner@...e.de>
To: Uladzislau Rezki <urezki@...il.com>
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
linux-rt-users@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] mm: vmalloc: Use the vmap_area_lock to protect
ne_fit_preload_node
On Tue, Oct 08, 2019 at 06:04:59PM +0200, Uladzislau Rezki wrote:
> > So we do not guarantee it; instead we minimize the number of allocations
> > done with the GFP_NOWAIT flag. For example, on my 4xCPU system I am not
> > even able to trigger the case where a CPU is not preloaded.
> >
> > I can test it tomorrow on my 12xCPU system to see how it behaves there.
> >
> Tested it on different systems. For example, on my 8xCPU system running
> a PREEMPT kernel I see only a few GFP_NOWAIT allocations, i.e. they
> happen when we land on another CPU that was not preloaded.
>
> I ran the special test case that follows the preload pattern and path.
> 20 "unbind" threads run it and each does 1000000 allocations. As a
> result, on average only 3.5 out of 1000000 allocations hit the case
> where, during splitting, the CPU was not preloaded and GFP_NOWAIT was
> used to obtain an extra object.
>
> It is obvious that the slightly modified approach still minimizes
> allocations in atomic context; they can still happen, but the number is
> negligible and can be ignored, I think.
Thanks for doing the tests. In this case I would suggest getting rid of
the preempt_disable() micro optimization, since there is almost no gain
from it. Will you send a patch? :)
Thanks,
Daniel