Message-ID: <20141027213523.799da09c@redhat.com>
Date: Mon, 27 Oct 2014 21:35:23 +0100
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: brouer@...hat.com, Alexander Duyck <alexander.duyck@...il.com>,
Alexei Starovoitov <ast@...mgrid.com>,
Eric Dumazet <edumazet@...gle.com>,
Network Development <netdev@...r.kernel.org>,
Christoph Lameter <cl@...ux.com>
Subject: Re: irq disable in __netdev_alloc_frag() ?
On Wed, 22 Oct 2014 20:51:16 -0700
Eric Dumazet <eric.dumazet@...il.com> wrote:
> On my hosts, this hard irq masking is pure noise.
On my hosts I can measure a significant difference between using
local_irq_disable()/local_irq_enable() and local_irq_save(flags)/local_irq_restore(flags):
* 2.860 ns cost for local_irq_{disable,enable}
* 14.840 ns cost for local_irq_save()+local_irq_restore()
This is quite significant in my nanosec world ;-)
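
(For illustration, not from the original mail: the two patterns being
compared look roughly like the sketch below.  The function names and the
work inside the critical sections are made up; this is not the actual
__netdev_alloc_frag() code.)

/* Hedged sketch: the two irq-masking patterns being compared.
 * Functions and critical-section contents are illustrative only.
 */
#include <linux/irqflags.h>

static void refill_with_irq_save(void)
{
	unsigned long flags;

	/* Save the current irq state and restore it afterwards.  Safe even
	 * if interrupts were already disabled, but pays for the extra
	 * flags read/write (~14.8 ns in the numbers above).
	 */
	local_irq_save(flags);
	/* ... touch per-CPU state also used from hard irq context ... */
	local_irq_restore(flags);
}

static void refill_with_irq_disable(void)
{
	/* Unconditionally disable/enable interrupts.  Only valid when the
	 * caller is known to run with interrupts enabled, but cheaper
	 * (~2.9 ns in the numbers above).
	 */
	local_irq_disable();
	/* ... touch per-CPU state also used from hard irq context ... */
	local_irq_enable();
}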
> What CPU are you using Alexander ?
I'm using an E5-2695 (Ivy Bridge)
You can easily reproduce my results on your own system with my
time_bench_sample module here:
https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/lib/time_bench_sample.c#L173
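
(A minimal sketch of the kind of measurement loop that module runs, in case
the URL is unreachable.  The loop count, function name and use of
get_cycles() are my assumptions; the real code is in time_bench_sample.c at
the link above.)

/* Hedged sketch of a measurement loop in the spirit of time_bench_sample.c;
 * NOT the actual module code.  Caller picks a large, non-zero loop count.
 */
#include <linux/module.h>
#include <linux/irqflags.h>
#include <asm/timex.h>		/* get_cycles() */

static void bench_local_irq_save_restore(unsigned long loops)
{
	cycles_t start, stop;
	unsigned long flags;
	unsigned long i;

	start = get_cycles();
	for (i = 0; i < loops; i++) {
		local_irq_save(flags);
		barrier();	/* keep the compiler from collapsing the pair */
		local_irq_restore(flags);
	}
	stop = get_cycles();

	pr_info("local_irq_save/restore: ~%llu cycles/iteration over %lu loops\n",
		(unsigned long long)(stop - start) / loops, loops);
}

The same loop with local_irq_disable()/local_irq_enable() gives the other
number; converting cycles to nanoseconds depends on the TSC frequency of
the box.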
> Same could be done with some kmem_cache_alloc() : SLAB uses hard irq
> masking while some caches are never used from hard irq context.
Sounds interesting.
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Sr. Network Kernel Developer at Red Hat
Author of http://www.iptv-analyzer.org
LinkedIn: http://www.linkedin.com/in/brouer