Message-ID: <a20290fa3448849e84d2d97b2978d4e05033cd80.camel@kernel.org>
Date: Fri, 04 Dec 2020 15:57:36 -0800
From: Saeed Mahameed <saeed@...nel.org>
To: Jakub Kicinski <kuba@...nel.org>
Cc: "David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org,
Eran Ben Elisha <eranbe@...dia.com>,
Tariq Toukan <tariqt@...dia.com>
Subject: Re: [net-next V2 08/15] net/mlx5e: Add TX PTP port object support
On Fri, 2020-12-04 at 15:17 -0800, Jakub Kicinski wrote:
> On Fri, 04 Dec 2020 13:57:49 -0800 Saeed Mahameed wrote:
> > > > option 2) route PTP traffic to special SQs, one per ring; these
> > > > SQs will be PTP port accurate. Normal traffic will continue
> > > > through the regular SQs.
> > > >
> > > > Pros: Regular non-PTP traffic is not affected.
> > > > Cons: High memory footprint for creating the special SQs.
> > > >
> > > > So we prefer (2) + a private flag to avoid the performance hit
> > > > and the redundant memory usage out of the box.
> > >
> > > Option 3 - have only one special PTP queue in the system. PTP
> > > traffic is rather low rate, so a queue per core doesn't seem
> > > necessary.
> >
> > We only forward PTP traffic to the new special queues, but we
> > create more than one to avoid internal locking, as we will utilize
> > the per-cpu tx softirq.
>
> In other words, to make the driver implementation simpler we'll have
> a pretty basic feature hidden behind an ethtool priv knob, and a
> number of queues which doesn't match the reality reported to user
> space. Hm.
I look at these queues as special HW objects that allow accurate PTP
stamping. They piggyback on the reported txqs, so they are
transparent; they just increase the memory footprint of each ring.
As for the priv flag, one of the floating ideas was to use the
hwtstamp_rx_filters flags:
https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/net_tstamp.h#L107
Our hardware timestamps all packets for free whether you request it or
not. Currently there is no option to set up "ALL_PTP" traffic in
ethtool -T, but we can add this flag, as it makes sense there. mlx5
could then check whether the user selected ALL_PTP, and if so, PTP
packets would go through this accurate special path.
This is not a workaround or an abuse of the new flag; it just means
that if you select ALL_PTP, a side effect is that our HW will be more
accurate for PTP traffic.
What do you think?
Regarding reducing to a single special queue, I will discuss it with
Eran and the team on Sunday.
Thanks,
Saeed.