Message-ID: <65634d660904090943lf273d9cg92be105acef3e6af@mail.gmail.com>
Date: Thu, 9 Apr 2009 09:43:07 -0700
From: Tom Herbert <therbert@...gle.com>
To: David Miller <davem@...emloft.net>
Cc: shemminger@...tta.com, netdev@...r.kernel.org
Subject: Re: [PATCH] Software receive packet steering
>>> -extern int netif_receive_skb(struct sk_buff *skb);
>>> +extern int __netif_receive_skb(struct sk_buff *skb);
>>> +
>>> +static inline int netif_receive_skb(struct sk_buff *skb)
>>> +{
>>> +#ifdef CONFIG_NET_SOFTRPS
>>> + return netif_rx(skb);
>>> +#else
>>> + return __netif_receive_skb(skb);
>>> +#endif
>>> +}
>>
>> Ugh, this forces all devices receiving back into a single backlog
>> queue.
>
> Yes, it basically turns off NAPI.
>
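On the backlog point: the steering path isn't meant to dump everything
into one queue. It hashes each packet's flow to pick a target CPU and
enqueues on that CPU's backlog, kicking the remote CPU with an IPI. A
rough sketch of the idea (the names below are illustrative, not the
patch's actual helpers):

	/* Sketch only -- hypothetical identifiers.  Hash the flow,
	 * map the hash to a CPU, and hand the skb to that CPU's
	 * backlog queue; the enqueue helper is assumed to raise
	 * NET_RX_SOFTIRQ on the remote CPU via an IPI. */
	static int softrps_steer(struct sk_buff *skb)
	{
		u32 hash = flow_hash(skb);	/* e.g. jhash over addrs/ports */
		int cpu = hash % num_online_cpus();	/* toy hash->CPU map */

		return enqueue_on_cpu_backlog(skb, cpu);
	}

Hashing on the flow keeps a given connection on one CPU, so packet
ordering within a flow is preserved.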
NAPI is still useful, but it takes a higher packet load before polling
kicks in. I believe the same is true for HW multiqueue, and it could
actually be worse depending on how many queues the traffic is split
across: in my bnx2x experiment (16-core AMD box, 16 queues) I was
seeing around 300K interrupts per second, i.e. roughly 19K per queue,
and at that per-queue rate NAPI never stayed in polling mode and gave
no benefit.
The bimodal behavior between the polling and non-polling states does
give us fits. I looked at the parked-mode idea, but the latency hit
seems too high. We've also considered holding the interface in the
polling state for longer periods of time; that could trade CPU cycles
(on the core taking interrupts) for lower latency and higher
throughput.
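As a strawman, a driver's poll handler could hold the NAPI context on
the poll list for a grace period of empty polls before re-enabling the
interrupt. Sketch only: my_clean_rx(), my_enable_rx_irq() and the
IDLE_POLLS threshold are hypothetical, not an actual driver patch:

	#include <linux/netdevice.h>

	#define IDLE_POLLS 16	/* arbitrary grace period, in empty polls */

	struct my_priv {
		struct napi_struct napi;
		unsigned int idle_polls;
		/* ... */
	};

	static int my_poll(struct napi_struct *napi, int budget)
	{
		struct my_priv *priv = container_of(napi, struct my_priv, napi);
		int work = my_clean_rx(priv, budget);	/* hypothetical RX clean */

		if (work > 0)
			priv->idle_polls = 0;
		if (work == budget)
			return budget;	/* queue still busy, keep polling */

		/* Little or no traffic: stay in the polling state for a
		 * while, trading cycles on this core for fewer interrupts. */
		if (++priv->idle_polls <= IDLE_POLLS)
			return budget;	/* claim a full budget so
					 * net_rx_action keeps us on
					 * the poll list */

		priv->idle_polls = 0;
		napi_complete(napi);
		my_enable_rx_irq(priv);	/* hypothetical helper */
		return work;
	}

The cost is the obvious one: the core spins through empty polls for
the grace period, but a burst arriving in that window gets picked up
without another interrupt.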