Message-ID: <PH7PR11MB5885EAC3A3687F97267F072E8E272@PH7PR11MB5885.namprd11.prod.outlook.com>
Date: Mon, 18 Nov 2024 15:31:47 +0000
From: "Olech, Milena" <milena.olech@...el.com>
To: Willem de Bruijn <willemdebruijn.kernel@...il.com>,
	"intel-wired-lan@...ts.osuosl.org" <intel-wired-lan@...ts.osuosl.org>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>, "Nguyen, Anthony L"
	<anthony.l.nguyen@...el.com>, "Kitszel, Przemyslaw"
	<przemyslaw.kitszel@...el.com>, "Lobakin, Aleksander"
	<aleksander.lobakin@...el.com>
Subject: RE: [PATCH iwl-net 09/10] idpf: add support for Rx timestamping

On 11/14/2024 9:54 PM, Willem de Bruijn wrote:

> Milena Olech wrote:
> > Add an Rx timestamp function for the case where the Rx timestamp value
> > is read directly from the Rx descriptor. Add the supported Rx timestamp
> > modes.
> > 
> > Reviewed-by: Alexander Lobakin <aleksander.lobakin@...el.com>
> > Signed-off-by: Milena Olech <milena.olech@...el.com>
> > ---
> >  drivers/net/ethernet/intel/idpf/idpf_ptp.c  | 74 ++++++++++++++++++++-
> >  drivers/net/ethernet/intel/idpf/idpf_txrx.c | 30 +++++++++
> >  drivers/net/ethernet/intel/idpf/idpf_txrx.h |  7 +-
> >  3 files changed, 109 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/net/ethernet/intel/idpf/idpf_ptp.c b/drivers/net/ethernet/intel/idpf/idpf_ptp.c
> > index f34642d10768..f9f7613f2a6d 100644
> > --- a/drivers/net/ethernet/intel/idpf/idpf_ptp.c
> > +++ b/drivers/net/ethernet/intel/idpf/idpf_ptp.c
> > @@ -317,12 +317,41 @@ static int idpf_ptp_gettimex64(struct ptp_clock_info *info,
> >  	return 0;
> >  }
> >
> > +/**
> > + * idpf_ptp_update_phctime_rxq_grp - Update the cached PHC time for a given Rx
> > + *				     queue group.
> 
> Why does each receive group have a separate cached value?
> They're all caches of the same device clock.

That's correct - they all cache values of the same PHC. However, I would
like to have an efficient way to access this value in the hotpath, where
I'm extending the Rx timestamp value to 64 bits.

For the same reason I cached the Tx timestamp caps in
idpf_vport_init_fast_path_txqs.
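
For reference, the extension itself follows the usual cached-PHC-time
pattern. A rough sketch below - modeled on ice_ptp_extend_32b_ts() from
the ice driver, not the exact idpf code - shifts the cached 64-bit PHC
time by the 32-bit distance between its low word and the value latched
in the descriptor:

	/* Sketch: extend a 32-bit HW timestamp to 64 bits. Assumes the
	 * cached PHC time is refreshed often enough (well under half a
	 * 32-bit wraparound) that the delta stays unambiguous.
	 */
	static u64 example_extend_32b_to_64b(u64 cached_phc_time, u32 in_tstamp)
	{
		u32 phc_lo = (u32)cached_phc_time;
		u32 delta = in_tstamp - phc_lo;

		/* A large delta means the timestamp was latched before
		 * the cached time was written; convert it backwards.
		 */
		if (delta > U32_MAX / 2)
			return cached_phc_time - (phc_lo - in_tstamp);

		return cached_phc_time + delta;
	}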

>
> > + * @grp: receive queue group in which Rx timestamp is enabled
> > + * @split: Indicates whether the queue model is split or single queue
> > + * @systime: Cached system time
> > + */
> > +static void
> > +idpf_ptp_update_phctime_rxq_grp(const struct idpf_rxq_group *grp, bool split,
> > +				u64 systime)
> > +{
> > +	struct idpf_rx_queue *rxq;
> > +	u16 i;
> > +
> > +	if (!split) {
> > +		for (i = 0; i < grp->singleq.num_rxq; i++) {
> > +			rxq = grp->singleq.rxqs[i];
> > +			if (rxq)
> > +				WRITE_ONCE(rxq->cached_phc_time, systime);
> > +		}
> > +	} else {
> > +		for (i = 0; i < grp->splitq.num_rxq_sets; i++) {
> > +			rxq = &grp->splitq.rxq_sets[i]->rxq;
> > +			if (rxq)
> > +				WRITE_ONCE(rxq->cached_phc_time, systime);
> > +		}
> > +	}
> > +}
> > +
> 
> > +/**
> > + * idpf_ptp_set_rx_tstamp - Enable or disable Rx timestamping
> > + * @vport: Virtual port structure
> > + * @rx_filter: bool value for whether timestamps are enabled or disabled
> > + */
> > +static void idpf_ptp_set_rx_tstamp(struct idpf_vport *vport, int rx_filter)
> > +{
> > +	vport->tstamp_config.rx_filter = rx_filter;
> > +
> > +	if (rx_filter == HWTSTAMP_FILTER_NONE)
> > +		return;
> 
> Should this clear the bit if it was previously set, instead of returning immediately?
> > +
> > +	for (u16 i = 0; i < vport->num_rxq_grp; i++) {
> > +		struct idpf_rxq_group *grp = &vport->rxq_grps[i];
> > +		u16 j;
> > +
> > +		if (idpf_is_queue_model_split(vport->rxq_model)) {
> > +			for (j = 0; j < grp->singleq.num_rxq; j++)
> > +				idpf_queue_set(PTP, grp->singleq.rxqs[j]);
> > +		} else {
> > +			for (j = 0; j < grp->splitq.num_rxq_sets; j++)
> > +				idpf_queue_set(PTP,
> > +					       &grp->splitq.rxq_sets[j]->rxq);
> > +		}
> > +	}
> > +}
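
On the question above: agreed, returning early leaves the PTP flag set
on the queues if timestamping was previously enabled. An untested
sketch of what clearing could look like - assuming an
idpf_queue_assign() counterpart to idpf_queue_set(), and with the
split/singleq branches following idpf_ptp_update_phctime_rxq_grp()
above:

	bool en = rx_filter != HWTSTAMP_FILTER_NONE;

	vport->tstamp_config.rx_filter = rx_filter;

	for (u16 i = 0; i < vport->num_rxq_grp; i++) {
		struct idpf_rxq_group *grp = &vport->rxq_grps[i];

		if (idpf_is_queue_model_split(vport->rxq_model)) {
			/* set or clear PTP per the new filter */
			for (u16 j = 0; j < grp->splitq.num_rxq_sets; j++)
				idpf_queue_assign(PTP, &grp->splitq.rxq_sets[j]->rxq, en);
		} else {
			for (u16 j = 0; j < grp->singleq.num_rxq; j++)
				idpf_queue_assign(PTP, grp->singleq.rxqs[j], en);
		}
	}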
> 
> > +static void
> > +idpf_rx_hwtstamp(const struct idpf_rx_queue *rxq,
> > +		 const struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc,
> > +		 struct sk_buff *skb)
> > +{
> > +	u64 cached_time, ts_ns;
> > +	u32 ts_high;
> > +
> > +	if (!(rx_desc->ts_low & VIRTCHNL2_RX_FLEX_TSTAMP_VALID))
> > +		return;
> > +
> > +	cached_time = READ_ONCE(rxq->cached_phc_time);
> > +
> > +	ts_high = le32_to_cpu(rx_desc->ts_high);
> > +	ts_ns = idpf_ptp_tstamp_extend_32b_to_64b(cached_time, ts_high);
> > +
> > +	*skb_hwtstamps(skb) = (struct skb_shared_hwtstamps) {
> > +		.hwtstamp = ns_to_ktime(ts_ns),
> > +	};
> 
> Simpler: skb_hwtstamps(skb)->hwtstamp = ns_to_ktime(ts_ns);

Thanks,
Milena
