Message-ID: <20191011181532.nardqmokz7yxtsu3@linutronix.de>
Date: Fri, 11 Oct 2019 20:15:32 +0200
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: "Uladzislau Rezki (Sony)" <urezki@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Daniel Wagner <dwagner@...e.de>,
Thomas Gleixner <tglx@...utronix.de>, linux-mm@...ck.org,
LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
Hillf Danton <hdanton@...a.com>,
Michal Hocko <mhocko@...e.com>,
Matthew Wilcox <willy@...radead.org>,
Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>,
Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH v2 1/1] mm/vmalloc: remove preempt_disable/enable when doing
 preloading
On 2019-10-11 00:33:18 [+0200], Uladzislau Rezki (Sony) wrote:
> Get rid of preempt_disable() and preempt_enable() when the
> preload is done for splitting purpose. The reason is that
> calling spin_lock() with preemption disabled is forbidden in
> a CONFIG_PREEMPT_RT kernel.
>
> Therefore, with this change we no longer guarantee that a CPU
> is preloaded; instead we minimize the cases when it is not.
>
> For example, I ran a special test case that follows the preload
> pattern and path. 20 "unbind" threads ran it, each doing 1000000
> allocations. Only 3.5 times out of 1000000 was a CPU not
> preloaded. So it can happen, but the number is negligible.
>
> V1 -> V2:
>     - move the __this_cpu_cmpxchg() check to where spin_lock() is taken,
>       as proposed by Andrew Morton
>     - add more explanation regarding preloading
>     - adjust and move some comments
>
> Fixes: 82dd23e84be3 ("mm/vmalloc.c: preload a CPU with one object for split purpose")
> Reviewed-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
> Signed-off-by: Uladzislau Rezki (Sony) <urezki@...il.com>
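For anyone following along, here is a rough before/after sketch of the
pattern the patch describes. It is illustrative only, not the literal hunk:
it reuses the mm/vmalloc.c names ne_fit_preload_node, vmap_area_lock and
vmap_area_cachep, and assumes pva is a "struct vmap_area *" and node the
requested NUMA node.

	/*
	 * Before (sketch): preemption is disabled so the per-CPU preload
	 * slot cannot move under us while we check and fill it.
	 */
	preempt_disable();
	if (!__this_cpu_read(ne_fit_preload_node)) {
		preempt_enable();
		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
		preempt_disable();
		/* Someone else preloaded meanwhile: drop our spare object. */
		if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva) && pva)
			kmem_cache_free(vmap_area_cachep, pva);
	}
	spin_lock(&vmap_area_lock);
	preempt_enable();

	/*
	 * After (sketch): allocate first, then publish the preload while
	 * holding vmap_area_lock. If we migrated to an already preloaded
	 * CPU the cmpxchg "loses" and we simply free the spare object.
	 */
	pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
	spin_lock(&vmap_area_lock);
	if (pva && __this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva))
		kmem_cache_free(vmap_area_cachep, pva);

The key point is that the check and the publish now happen under
vmap_area_lock rather than inside an explicit preempt-disabled section,
which keeps the path valid on PREEMPT_RT where spinlocks are sleeping
locks and may not be taken with preemption disabled.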
Acked-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Thank you.
Sebastian