Message-ID: <20210409124412.3728a224@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
Date:   Fri, 9 Apr 2021 12:44:12 -0700
From:   Jakub Kicinski <kuba@...nel.org>
To:     Claudiu Manoil <claudiu.manoil@...il.com>
Cc:     Claudiu Manoil <claudiu.manoil@....com>,
        "Y.b. Lu" <yangbo.lu@....com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "David S . Miller" <davem@...emloft.net>,
        Richard Cochran <richardcochran@...il.com>,
        Vladimir Oltean <vladimir.oltean@....com>,
        Russell King <linux@...linux.org.uk>
Subject: Re: [net-next, v2, 2/2] enetc: support PTP Sync packet one-step
 timestamping

On Fri, 9 Apr 2021 22:32:49 +0300 Claudiu Manoil wrote:
> On 09.04.2021 19:09, Jakub Kicinski wrote:
> > On Fri, 9 Apr 2021 06:37:53 +0000 Claudiu Manoil wrote:  
> >> Please try test_and_set_bit_lock()/clear_bit_unlock() based on Jakub's
> >> suggestion, and see if it works for you, i.e. whether it can replace the mutex.  
> > 
> > I was thinking that with multiple queues just a bit won't be sufficient
> > because:
> > 
> > xmit:				work:
> > test_bit... // already set
> > 				dequeue // empty
> > enqueue
> > 				clear_bit()
> > 
> > That frame will never get sent, no?  
> 
> I don't see any issue with Yangbo's initial design, actually; I was just
> suggesting that he replace the mutex with a bit lock, based on your comments.
> That means:
> xmit:		work:				clean_tx_ring: //Tx conf
> skb_queue_tail()		
> 		skb_dequeue()
> 		test_and_set_bit_lock()
> 						clear_bit_unlock()
> 
> The skb queue is one per device, as it needs to serialize ptp skbs
> for that device (due to the restriction that a ptp packet cannot be 
> enqueued for transmission if there's another ptp packet waiting
> for transmission in a h/w descriptor ring).
> 
> If multiple ptp skbs come in from different xmit queues at the same 
> time (same device), they are enqueued in the common priv->tx_skbs 
> queue (skb_queue_tail() takes the queue's lock internally), and the 
> worker thread is started.
> The worker dequeues the first ptp skb and places the packet in the h/w 
> descriptor ring for transmission. Then it dequeues the second skb and 
> waits on the lock (or mutex, or whatever lock is preferred).
> Upon transmission of the ptp packet the lock is released by the Tx 
> confirmation napi thread (clean_tx_ring()) and the next PTP skb can be 
> placed in the corresponding descriptor ring for transmission by the 
> worker thread.

I see. I thought you were commenting on my scheme. Yes, if only the
worker is allowed to send, there is no race; that should work well.
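
The flow as I understand it, in sketch form (not the actual patch;
ptp_tx_lock and enetc_start_xmit_ptp() are made-up names, priv->tx_skbs
is the queue from your description):

    /* worker: the only path that puts PTP skbs on the h/w ring */
    struct sk_buff *skb;

    while ((skb = skb_dequeue(&priv->tx_skbs))) {
        /* blocks until the previous PTP frame is confirmed */
        mutex_lock(&priv->ptp_tx_lock);
        enetc_start_xmit_ptp(priv, skb);
    }

    /* clean_tx_ring() (Tx confirmation): */
    mutex_unlock(&priv->ptp_tx_lock);

Note the unlock happens in a different context than the lock, which is
what I get at below.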

In my suggestion I was trying to allow the first frame to be sent
directly, without going via the queue and without requiring the worker
to be scheduled in.
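
Roughly this, as a sketch (ENETC_F_PTP_BUSY and enetc_start_xmit_ptp()
are again made-up names):

    /* xmit path */
    if (!test_and_set_bit_lock(ENETC_F_PTP_BUSY, &priv->flags)) {
        /* no PTP frame in flight, send directly */
        enetc_start_xmit_ptp(priv, skb);
        return NETDEV_TX_OK;
    }
    /* a PTP frame is already in flight, defer to the worker */
    skb_queue_tail(&priv->tx_skbs, skb);
    /*
     * The race from the diagram above sits here: if the worker saw an
     * empty queue and cleared the bit between the test above and this
     * enqueue, nothing ever sends this skb.
     */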

> So the way I understood your comments is that you'd rather use a spin 
> lock in the worker thread instead of a mutex.

Not exactly; my main objection was that the mutex was used as a wake-up
mechanism: the worker locks it and the completion path unlocks it.

Your suggestion of using a bit works well. Just, instead of a loop, the
worker needs to send a single skb, and the completion path needs to
schedule it again.
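
I.e. something along these lines (sketch only; ptp_work,
ENETC_F_PTP_BUSY, the struct enetc_ndev_priv layout and
enetc_start_xmit_ptp() are illustrative, not taken from the patch):

    /* xmit: always enqueue, then kick the worker */
    skb_queue_tail(&priv->tx_skbs, skb);
    schedule_work(&priv->ptp_work);

    /* worker: sends at most one PTP skb per run */
    static void enetc_ptp_work(struct work_struct *work)
    {
        struct enetc_ndev_priv *priv =
            container_of(work, struct enetc_ndev_priv, ptp_work);
        struct sk_buff *skb;

        if (test_and_set_bit_lock(ENETC_F_PTP_BUSY, &priv->flags))
            return; /* previous PTP frame still in flight */

        skb = skb_dequeue(&priv->tx_skbs);
        if (!skb) {
            clear_bit_unlock(ENETC_F_PTP_BUSY, &priv->flags);
            return;
        }
        enetc_start_xmit_ptp(priv, skb);
    }

    /* clean_tx_ring() (Tx confirmation): */
    clear_bit_unlock(ENETC_F_PTP_BUSY, &priv->flags);
    schedule_work(&priv->ptp_work); /* pick up any queued PTP skbs */

Nothing blocks in the worker, and the completion path only clears a
bit and schedules work, both of which are fine from napi context.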

> > Note that the skb queue already has a lock, so you'd just need to
> > make that lock protect the flag/bit as well; overall the number of
> > locks remains the same. Take the queue's lock, check the flag, use
> > __skb_queue_tail(), release, etc.
> 
> This is a good optimization idea indeed: reuse the priv->tx_skbs skb 
> list's spinlock instead of adding another lock.
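
E.g. (sketch; ptp_tx_busy is a made-up flag, tx_skbs.lock is the
sk_buff_head's own spinlock):

    unsigned long irqflags;
    bool busy;

    spin_lock_irqsave(&priv->tx_skbs.lock, irqflags);
    busy = priv->ptp_tx_busy;
    if (busy)
        __skb_queue_tail(&priv->tx_skbs, skb); /* worker will send it */
    else
        priv->ptp_tx_busy = true;
    spin_unlock_irqrestore(&priv->tx_skbs.lock, irqflags);

    if (!busy)
        enetc_start_xmit_ptp(priv, skb); /* fast path, no worker */

As long as the completion path also checks the queue and clears the
flag under the same lock, the enqueue-vs-clear race above goes away.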
