Message-ID: <20141003144024.GA12448@oracle.com>
Date: Fri, 3 Oct 2014 10:40:24 -0400
From: Sowmini Varadhan <sowmini.varadhan@...cle.com>
To: David Miller <davem@...emloft.net>
Cc: raghuram.kothakota@...cle.com, netdev@...r.kernel.org
Subject: Re: [PATCH net-next 0/2] sunvnet: Packet processing in non-interrupt
context.
On (10/02/14 13:43), David Miller wrote:
> For example, you can now move everything into software IRQ context,
> just disable the VIO interrupt and unconditionally go into NAPI
> context from the VIO event.
> No more irqsave/irqrestore.
> Then the TX path even can run mostly lockless, it just needs
> to hold the VIO lock for a minute period of time. The caller
> holds the xmit_lock of the network device to prevent re-entry
> into the ->ndo_start_xmit() path.
>
Let me check into this and get back. I think the xmit path
may still need some kind of locking for the dring access
and ldc_write. I think you are also suggesting that I should
move the control-packet processing into vnet_event_napi(),
which I have not done in my patch; I will examine where that leads.
But there is one thing that I do not understand: why does my hack
of lying to net_rx_action() by always returning "budget"
make such a difference to throughput?
Even if I set the budget as low as 64 (so I would get
called repeatedly under NAPI's polling infra), I have to
turn on the "liar" commented-out code in my patch before the
throughput shoots up to 2-2.5 Gbps (otherwise it's around 300 Mbps).
Eyeballing the net_rx_action() code quickly did not make the explanation
for this pop out at me (yet).
Pure polling (i.e., workq) gives me about 1.5 Gbps, and pure tasklet
(i.e., just setting up a tasklet from vnet_event to handle data) gives me
approx 2 Gbps. So I don't understand why NAPI doesn't give me something
similar if I adhere strictly to the documentation.
--Sowmini
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html