Message-Id: <20150115171634.685237a4.akpm@linux-foundation.org>
Date: Thu, 15 Jan 2015 17:16:34 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Jesper Dangaard Brouer <brouer@...hat.com>,
rostedt@...dmis.org, Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH v2 1/2] mm/slub: optimize alloc/free fastpath by
removing preemption on/off
On Thu, 15 Jan 2015 16:40:32 +0900 Joonsoo Kim <iamjoonsoo.kim@....com> wrote:
> We had to insert a preempt enable/disable pair in the fastpath a while ago
> in order to guarantee that tid and kmem_cache_cpu are retrieved on the
> same cpu. This is a problem only for CONFIG_PREEMPT, where the scheduler
> can move the process to another cpu while the data is being retrieved.
>
> Now, I have reached a solution that removes the preempt enable/disable from
> the fastpath. If the tid matches kmem_cache_cpu's tid after tid and
> kmem_cache_cpu are retrieved by separate this_cpu operations, it means that
> they were retrieved on the same cpu. If they don't match, we just retry.
>
> With this guarantee, preemption enable/disable isn't needed at all, even
> with CONFIG_PREEMPT, so this patch removes it.
>
> I saw roughly a 5% win in a fast-path loop over kmem_cache_alloc/free
> with CONFIG_PREEMPT. (14.821 ns -> 14.049 ns)
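
(For context, the quoted approach boils down to roughly the sketch below; this
is an illustration of the technique as described above, not the exact hunk from
the patch:)

	/*
	 * Fetch the transaction id and the per-cpu slab structure with
	 * separate this_cpu operations.  If the tid we read does not match
	 * the tid in the per-cpu structure we ended up with, the two reads
	 * were not done on the same cpu (or another operation slipped in
	 * between), so simply retry.  This replaces the old
	 * preempt_disable()/preempt_enable() pair in the fastpath.
	 */
	do {
		tid = this_cpu_read(s->cpu_slab->tid);
		c = raw_cpu_ptr(s->cpu_slab);
	} while (IS_ENABLED(CONFIG_PREEMPT) &&
		 unlikely(tid != READ_ONCE(c->tid)));
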
I'm surprised. preempt_disable/enable are pretty fast. I wonder why
this makes a measurable difference. Perhaps preempt_enable()'s call to
preempt_schedule() added pain?
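
(For reference, with CONFIG_PREEMPT preempt_enable() is more than a counter
decrement; its rough shape is something like the sketch below, so the extra
branch plus the potential call into the scheduler could be where the cost
comes from:)

	/*
	 * Rough shape of preempt_enable() on a CONFIG_PREEMPT kernel
	 * (a sketch, not the verbatim definition from <linux/preempt.h>):
	 * dropping the preempt count to zero also checks whether a
	 * reschedule became pending while preemption was disabled.
	 */
	#define preempt_enable() \
	do { \
		barrier(); \
		if (unlikely(preempt_count_dec_and_test())) \
			__preempt_schedule(); \
	} while (0)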