Message-ID: <491228C8.3010100@cosmosbay.com>
Date: Thu, 06 Nov 2008 00:14:16 +0100
From: Eric Dumazet <dada1@...mosbay.com>
To: Stephen Hemminger <shemminger@...tta.com>
CC: David Miller <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: [RFC] loopback: optimization
Stephen Hemminger wrote:
> Convert loopback device from using common network queues to a per-cpu
> receive queue with NAPI. This gives a small 1% performance gain when
> measured over 5 runs of tbench. Not sure if it's worth bothering
> though.
>
> Signed-off-by: Stephen Hemminger <shemminger@...tta.com>
>
>
> --- a/drivers/net/loopback.c 2008-11-04 15:36:29.000000000 -0800
> +++ b/drivers/net/loopback.c 2008-11-05 10:00:20.000000000 -0800
> @@ -59,7 +59,10 @@
>
> +/* Special case version of napi_schedule since loopback device has no hard irq */
> +void napi_schedule_irq(struct napi_struct *n)
> +{
> + if (napi_schedule_prep(n)) {
> + list_add_tail(&n->poll_list, &__get_cpu_var(softnet_data).poll_list);
> + __raise_softirq_irqoff(NET_RX_SOFTIRQ);
> + }
> +}
> +
Stephen, I don't get it.

Sure, the loopback device cannot generate hard irqs, but what prevents a real hardware
interrupt from invoking a NIC driver that calls napi_schedule() and corrupts
softnet_data.poll_list? napi_schedule_irq() manipulates the per-cpu poll_list with
interrupts still enabled.
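For reference, the stock path does the same list_add_tail() with hard irqs masked;
this is roughly what __napi_schedule() looks like in net/core/dev.c (quoted from
memory, so treat it as a sketch):

	void __napi_schedule(struct napi_struct *n)
	{
		unsigned long flags;

		/* mask hard irqs so a NIC interrupt on this cpu cannot
		 * run napi_schedule() concurrently and corrupt the
		 * per-cpu poll_list
		 */
		local_irq_save(flags);
		list_add_tail(&n->poll_list, &__get_cpu_var(softnet_data).poll_list);
		__raise_softirq_irqoff(NET_RX_SOFTIRQ);
		local_irq_restore(flags);
	}

Your napi_schedule_irq() is that function minus the local_irq_save()/restore() pair,
which is only safe if nothing in hard irq context can ever touch the same list.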
Why not use a queue dedicated to loopback directly in per-cpu softnet_data?
(i.e. not using a napi structure for each cpu and each loopback dev)
This queue would be irq-safe, yes:
net_rx_action() could handle this list without local_irq_disable()/local_irq_enable()
games, since nothing running in hard irq context would ever touch it; a rough sketch
follows below.
Hmm, it might be complex for loopback_dev_stop() to purge all the queues without
interfering with other namespaces.
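Something like this (a minimal sketch only; the loopback_queue field and the
loopback_rx() helper are assumptions I'm making here, not existing kernel code):

	#include <linux/netdevice.h>
	#include <linux/skbuff.h>

	/* assumed extra per-cpu field in struct softnet_data:
	 *	struct sk_buff_head loopback_queue;
	 * initialized with skb_queue_head_init() in net_dev_init()
	 */

	static int loopback_xmit(struct sk_buff *skb, struct net_device *dev)
	{
		struct softnet_data *sd = &__get_cpu_var(softnet_data);

		/* runs with bottom halves disabled, never from hard irq,
		 * so the lockless __skb_queue_tail() is enough: no
		 * local_irq_disable()/local_irq_enable() games needed
		 */
		__skb_queue_tail(&sd->loopback_queue, skb);
		raise_softirq(NET_RX_SOFTIRQ);
		return 0;
	}

	/* drained from net_rx_action(), same cpu, softirq context */
	static void loopback_rx(struct softnet_data *sd)
	{
		struct sk_buff *skb;

		while ((skb = __skb_dequeue(&sd->loopback_queue)) != NULL)
			netif_receive_skb(skb);
	}

The loopback_dev_stop() problem above is still open in this sketch: it would have to
walk every cpu's loopback_queue and drop only the skbs belonging to the namespace
being torn down.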