Message-Id: <20141001.162529.2246298941833907545.davem@davemloft.net>
Date: Wed, 01 Oct 2014 16:25:29 -0400 (EDT)
From: David Miller <davem@...emloft.net>
To: sowmini.varadhan@...cle.com
Cc: raghuram.kothakota@...cle.com, netdev@...r.kernel.org
Subject: Re: [PATCH net-next 0/2] sunvnet: Packet processing in non-interrupt context.
From: Sowmini Varadhan <sowmini.varadhan@...cle.com>
Date: Wed, 1 Oct 2014 16:23:15 -0400
> On (10/01/14 16:15), David Miller wrote:
>> >
>> > If I make this a NAPI driver that uses napi_gro_receive, I would
>> > still have to deal with a budget, right?
>>
>> Absolutely, and YOU MUST, because the budget keeps one device from
>> hogging the RX packet input path from other devices on a given cpu.
>
> yes, but limiting the budget of sk_buffs read midway through a descriptor
> read is deadly for perf because of the ensuing LDC stop/start exchange -
> it ends up being even worse than the baseline.
>
> Doesn't the netif_rx/process_backlog infrastructure already do a
> napi_schedule, thus avoiding the above concern?
The limit is 64 packets by default; it won't matter.
I think you're overplaying the things that block use of NAPI. How about
implementing it properly, using napi_gro_receive() and proper RCU
accesses, and coming back with some real performance measurements?
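
For concreteness, the shape being asked for here is the stock NAPI
pattern: the LDC interrupt handler masks its event source and calls
napi_schedule(), and the poll callback drains at most 'budget' frames
through napi_gro_receive() before re-enabling the interrupt. A rough
sketch follows; struct vnet_port, vnet_rx_one() and the
ldc_{disable,enable}_irq() helpers are placeholders for whatever the
driver actually provides, not existing sunvnet code:

static irqreturn_t vnet_event(int irq, void *dev_id)
{
	struct vnet_port *port = dev_id;

	/* Mask the LDC event source and defer the work to the poll loop. */
	ldc_disable_irq(port);
	napi_schedule(&port->napi);
	return IRQ_HANDLED;
}

static int vnet_poll(struct napi_struct *napi, int budget)
{
	struct vnet_port *port = container_of(napi, struct vnet_port, napi);
	int processed = 0;

	while (processed < budget) {
		/* Pull the next completed descriptor off the RX ring. */
		struct sk_buff *skb = vnet_rx_one(port);

		if (!skb)
			break;
		napi_gro_receive(napi, skb);
		processed++;
	}

	if (processed < budget) {
		/* Ring drained within budget: leave polled mode and
		 * re-arm the LDC interrupt.
		 */
		napi_complete(napi);
		ldc_enable_irq(port);
	}

	/* Returning 'budget' keeps the device on the poll list. */
	return processed;
}

The poll callback would be registered with
netif_napi_add(dev, &port->napi, vnet_poll, NAPI_POLL_WEIGHT), where
NAPI_POLL_WEIGHT is the same default per-poll budget of 64 mentioned
above.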