Message-ID: <CAMEtUuwWUe3RvmX+pOM4=JV4Hmt0pTr7nny2u6pMv8jQQ0sWqA@mail.gmail.com>
Date: Mon, 2 Mar 2015 11:52:03 -0800
From: Alexei Starovoitov <ast@...mgrid.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Tom Zanussi <tom.zanussi@...ux.intel.com>,
Masami Hiramatsu <masami.hiramatsu.pt@...achi.com>,
Namhyung Kim <namhyung@...nel.org>,
Andi Kleen <andi@...stfloor.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 07/15] mm: Add ___GFP_NOTRACE
On Mon, Mar 2, 2015 at 11:33 AM, Steven Rostedt <rostedt@...dmis.org> wrote:
> On Mon, 2 Mar 2015 11:24:04 -0800
> Alexei Starovoitov <ast@...mgrid.com> wrote:
>
>> well, the percentage of tracepoints called from NMI is tiny
>> compared to the rest, so assuming NMI context
>> everywhere is very inefficient.
>> We can use a pre-allocated pool of map entries when
>> a tracepoint is called from NMI, but we shouldn't be using
>> it in other cases. Just as ring buffers and other things
>> have NMI and non-NMI pools and code paths, it doesn't
>> make sense to disallow kmalloc altogether.
>> btw, calling kmalloc is _faster_ than taking
>> objects from a cache-cold, NMI-only pool.
>
> Please show the numbers and post the tests when stating something like
> that.
Sure. Here is the work Jesper is doing to accelerate slub:
http://thread.gmane.org/gmane.linux.kernel.mm/126138
He's measuring a kmalloc/kfree pair at 19ns,
which is way less than the cost of the cache miss we'd
take on a cold buffer from a custom pool. We want to
minimize cache misses, not the absolute number of
instructions.
Yes, a custom pool can hand out a buffer a few cycles
faster than kmalloc, but that buffer will be cold.
The 'prefetch next object' trick that slub uses won't
work for a custom pool; that is the main problem with it.
By using the main allocator the buffers come in hot.
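
Roughly, the split could look something like this. This is only an
illustrative sketch, not code from this series: trace_elem_alloc(),
the per-CPU nmi_pool and the sizes are made-up names, and refilling
the reserve / tracking which allocator an element came from is
omitted.

#include <linux/slab.h>
#include <linux/percpu.h>
#include <linux/hardirq.h>	/* in_nmi() */

#define NMI_POOL_SIZE	16	/* small per-CPU reserve, NMI use only */
#define ELEM_SIZE	64	/* map element size for the example */

struct nmi_pool {
	void *slots[NMI_POOL_SIZE];
	int nr_free;
};

static DEFINE_PER_CPU(struct nmi_pool, nmi_pool);

static void *trace_elem_alloc(void)
{
	struct nmi_pool *pool;

	if (!in_nmi())
		/*
		 * Common case: slub fast path.  The object comes off
		 * the per-CPU freelist and slub has already prefetched
		 * the next one, so the buffer is hot.  (The
		 * ___GFP_NOTRACE bit from this series would presumably
		 * be OR'd in here to avoid recursing into the tracer.)
		 */
		return kmalloc(ELEM_SIZE, GFP_ATOMIC);

	/* NMI: fall back to the small, cache-cold per-CPU reserve. */
	pool = this_cpu_ptr(&nmi_pool);
	return pool->nr_free ? pool->slots[--pool->nr_free] : NULL;
}

The point is that the kmalloc branch stays the common one; the
reserve only has to absorb the rare NMI callers.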