Message-Id: <20141003.120802.1213573830649867131.davem@davemloft.net>
Date: Fri, 03 Oct 2014 12:08:02 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: sowmini.varadhan@...cle.com
Cc: raghuram.kothakota@...cle.com, netdev@...r.kernel.org
Subject: Re: [PATCH net-next 0/2] sunvnet: Packet processing in
non-interrupt context.
From: Sowmini Varadhan <sowmini.varadhan@...cle.com>
Date: Fri, 3 Oct 2014 10:40:24 -0400
> On (10/02/14 13:43), David Miller wrote:
>> For example, you can now move everything into software IRQ context,
>> just disable the VIO interrupt and unconditionally go into NAPI
>> context from the VIO event.
>> No more irqsave/irqrestore.
>> Then the TX path can even run mostly lockless; it just needs
>> to hold the VIO lock for a minute period of time. The caller
>> holds the xmit_lock of the network device to prevent re-entry
>> into the ->ndo_start_xmit() path.
>>
>
> Let me check into this and get back. I think the xmit path
> may also need some kind of locking for the dring access
> and ldc_write? I think you are suggesting that I should also
> move the control-packet processing to vnet_event_napi(), which
> I have not done in my patch. I will examine where that leads.
I think you should be able to get rid of all of the in-driver
locking in the fast paths.
NAPI ->poll() is non-reentrant, so all RX processing occurs
strictly in a serialized environment.
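Roughly, something like this (just a sketch; vnet_port_sketch,
vnet_disable_event_intr(), vnet_enable_event_intr() and vnet_rx_one()
are made-up names standing in for the driver's real per-port state and
LDC/dring accessors):

#include <linux/interrupt.h>
#include <linux/netdevice.h>

/* Hypothetical per-port state, for illustration only. */
struct vnet_port_sketch {
        struct napi_struct napi;
        struct net_device *dev;
};

/* Made-up helpers standing in for the real LDC/dring accessors. */
void vnet_disable_event_intr(struct vnet_port_sketch *port);
void vnet_enable_event_intr(struct vnet_port_sketch *port);
bool vnet_rx_one(struct vnet_port_sketch *port);

/* VIO/LDC event handler: mask the event source and hand everything,
 * including control packet processing, to NAPI.  No irqsave/irqrestore
 * anywhere in the fast path.
 */
static irqreturn_t vnet_event(int irq, void *dev_id)
{
        struct vnet_port_sketch *port = dev_id;

        if (napi_schedule_prep(&port->napi)) {
                vnet_disable_event_intr(port);
                __napi_schedule(&port->napi);
        }
        return IRQ_HANDLED;
}

/* ->poll() is never re-entered for a given napi_struct, so the RX ring
 * walk below is implicitly serialized without any driver lock.
 */
static int vnet_poll(struct napi_struct *napi, int budget)
{
        struct vnet_port_sketch *port =
                container_of(napi, struct vnet_port_sketch, napi);
        int work_done = 0;

        while (work_done < budget && vnet_rx_one(port))
                work_done++;

        if (work_done < budget) {
                napi_complete(napi);
                vnet_enable_event_intr(port);
        }
        return work_done;
}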
Once you do TX reclaim in NAPI context, all you have to do is
take the generic netdev TX queue lock during the evaluation of whether
to wake up the TX queue or not. Worst case, you need to hold the
TX netdev queue lock across the whole TX reclaim operation.
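The reclaim side could then look like the sketch below, called from
vnet_poll() before napi_complete(); vnet_clean_tx_ring() is again a
made-up name for the walk over completed TX descriptors:

/* Made-up helper: frees completed TX descriptors, returns the count. */
int vnet_clean_tx_ring(struct vnet_port_sketch *port);

/* TX reclaim from ->poll(): the descriptor walk itself needs no driver
 * lock; only the stopped/wake decision runs under the generic netdev
 * TX queue lock, so it cannot race ->ndo_start_xmit() stopping the
 * queue.
 */
static void vnet_tx_reclaim(struct vnet_port_sketch *port)
{
        struct net_device *dev = port->dev;
        int freed = vnet_clean_tx_ring(port);

        netif_tx_lock(dev);
        if (freed && netif_queue_stopped(dev))
                netif_wake_queue(dev);
        netif_tx_unlock(dev);
}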
The VIO lock really ought to be entirely superfluous in the data
paths.
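To make that concrete, here is a sketch of the xmit side with no VIO
lock at all; vnet_tx_ring_avail() and vnet_send_desc() are made-up
names for the ring-space check and for the copy-into-dring plus
ldc_write() step:

#include <linux/skbuff.h>

/* Made-up helpers: ring-space check, and copy into the dring followed
 * by the LDC trigger/write.
 */
bool vnet_tx_ring_avail(struct vnet_port_sketch *port);
int vnet_send_desc(struct vnet_port_sketch *port, struct sk_buff *skb);

/* ->ndo_start_xmit() sketch: the core already holds the per-queue xmit
 * lock when calling here, so the TX producer state needs no extra
 * driver lock and the VIO lock is never taken on this path.
 */
static netdev_tx_t vnet_start_xmit_sketch(struct sk_buff *skb,
                                          struct net_device *dev)
{
        struct vnet_port_sketch *port = netdev_priv(dev);

        if (!vnet_tx_ring_avail(port)) {
                netif_stop_queue(dev);
                return NETDEV_TX_BUSY;
        }

        if (vnet_send_desc(port, skb))
                dev->stats.tx_errors++;

        dev_kfree_skb(skb);
        return NETDEV_TX_OK;
}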