Message-ID: <a9a755f3-9a44-83d5-4426-1238c96c8e15@gmail.com>
Date: Fri, 9 Apr 2021 22:32:49 +0300
From: Claudiu Manoil <claudiu.manoil@...il.com>
To: Jakub Kicinski <kuba@...nel.org>,
Claudiu Manoil <claudiu.manoil@....com>
Cc: "Y.b. Lu" <yangbo.lu@....com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"David S . Miller" <davem@...emloft.net>,
Richard Cochran <richardcochran@...il.com>,
Vladimir Oltean <vladimir.oltean@....com>,
Russell King <linux@...linux.org.uk>
Subject: Re: [net-next, v2, 2/2] enetc: support PTP Sync packet one-step
timestamping
On 09.04.2021 19:09, Jakub Kicinski wrote:
> On Fri, 9 Apr 2021 06:37:53 +0000 Claudiu Manoil wrote:
>>> On Thu, 8 Apr 2021 09:02:50 -0700 Jakub Kicinski wrote:
>>>> if (priv->flags & ONESTEP_BUSY) {
>>>> skb_queue_tail(&priv->tx_skbs, skb);
>>>> return ...;
>>>> }
>>>> priv->flags |= ONESTEP_BUSY;
>>>
>>> Ah, if you have multiple queues this needs to be under a separate
>>> spinlock, 'cause netif_tx_lock() won't be enough.
>>
>> Please try test_and_set_bit_lock()/ clear_bit_unlock() based on Jakub's
>> suggestion, and see if it works for you / whether it can replace the mutex.
>
> I was thinking that with multiple queues just a bit won't be sufficient
> because:
>
> xmit:                             work:
>   test_bit... // already set
>                                     dequeue // empty
>   enqueue
>                                     clear_bit()
>
> That frame will never get sent, no?
I don't see any issue with Yangbo's initial design actually, I was just
suggesting that he replace the mutex with a bit lock, based on your comments.
That means:
xmit:                    work:                         clean_tx_ring: //Tx conf
  skb_queue_tail()
                           skb_dequeue()
                           test_and_set_bit_lock()
                                                         clear_bit_unlock()
The skb queue is one per device, as it needs to serialize ptp skbs
for that device (due to the restriction that a ptp packet cannot be
enqueued for transmission if there's another ptp packet waiting
for transmission in a h/w descriptor ring).
If multiple ptp skbs come in from different xmit queues at the same time
(same device), they are enqueued in the common priv->tx_skbs queue
(skb_queue_tail() takes the queue's lock), and the worker thread is
started.
The worker dequeues the first ptp skb and places the packet in the h/w
descriptor ring for transmission. Then it dequeues the second skb and
waits on the lock (or mutex, or whatever lock is preferred).
Upon transmission of the ptp packet the lock is released by the Tx
confirmation napi thread (clean_tx_ring()) and the next PTP skb can be
placed in the corresponding descriptor ring for transmission by the
worker thread.
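
In (pseudo)code, the flow I have in mind is roughly the one below. This
is just an illustrative sketch, not actual driver code - the names
ONESTEP_BUSY, priv->tx_onestep_work, enetc_onestep_tx_work(),
enetc_start_xmit() and tx_swbd->is_onestep_sync are placeholders:

	/* xmit path: defer PTP Sync skbs to the per-device queue */
	if (one_step_sync(skb)) {	/* placeholder check */
		skb_queue_tail(&priv->tx_skbs, skb);
		schedule_work(&priv->tx_onestep_work);
		return NETDEV_TX_OK;
	}

	/* worker: only one Sync frame in flight at a time */
	static void enetc_onestep_tx_work(struct work_struct *work)
	{
		struct enetc_ndev_priv *priv =
			container_of(work, struct enetc_ndev_priv,
				     tx_onestep_work);
		struct sk_buff *skb;

		while ((skb = skb_dequeue(&priv->tx_skbs))) {
			/* wait until clean_tx_ring() releases the bit for
			 * the previous Sync frame; could also sleep here
			 * (e.g. wait_on_bit_lock()) instead of spinning
			 */
			while (test_and_set_bit_lock(ONESTEP_BUSY,
						     &priv->flags))
				cpu_relax();
			enetc_start_xmit(skb, priv->ndev);
		}
	}

	/* Tx confirmation (clean_tx_ring): release the next Sync frame */
	if (tx_swbd->is_onestep_sync)	/* placeholder flag */
		clear_bit_unlock(ONESTEP_BUSY, &priv->flags);
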
So the way I understood your comments is that you'd rather use a spin
lock in the worker thread instead of a mutex.
>
> Note that skb_queue already has a lock so you'd just need to make that
> lock protect the flag/bit as well, overall the number of locks remains
> the same. Take the queue's lock, check the flag, use
> __skb_queue_tail(), release etc.
>
This is a good optimization idea indeed, to use the priv->tx_skbs skb
list's spin lock instead of adding another lock.
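
Something along these lines, I assume (just a sketch to check I got your
point; the 'onestep_busy' and 'tx_onestep_work' fields are made up,
priv->tx_skbs is the common skb list from above):

	/* xmit path */
	spin_lock_bh(&priv->tx_skbs.lock);
	if (priv->onestep_busy) {
		/* a Sync frame is already in flight, defer this one */
		__skb_queue_tail(&priv->tx_skbs, skb);
		spin_unlock_bh(&priv->tx_skbs.lock);
		return NETDEV_TX_OK;
	}
	priv->onestep_busy = true;
	spin_unlock_bh(&priv->tx_skbs.lock);
	/* fall through and place this skb in the h/w ring directly */

	/* Tx confirmation (clean_tx_ring): either clear the flag or let
	 * the worker place the next deferred Sync frame in the ring
	 */
	spin_lock_bh(&priv->tx_skbs.lock);
	if (skb_queue_empty(&priv->tx_skbs))
		priv->onestep_busy = false;
	else
		schedule_work(&priv->tx_onestep_work);
	spin_unlock_bh(&priv->tx_skbs.lock);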