Message-ID: <20250114162648.GK5497@kernel.org>
Date: Tue, 14 Jan 2025 16:26:48 +0000
From: Simon Horman <horms@...nel.org>
To: Milena Olech <milena.olech@...el.com>
Cc: intel-wired-lan@...ts.osuosl.org, netdev@...r.kernel.org,
anthony.l.nguyen@...el.com, przemyslaw.kitszel@...el.com,
Josh Hay <joshua.a.hay@...el.com>
Subject: Re: [PATCH v4 iwl-next 08/10] idpf: add Tx timestamp flows
On Tue, Jan 14, 2025 at 01:11:13PM +0100, Milena Olech wrote:
> Add functions to request Tx timestamp for the PTP packets, read the Tx
> timestamp when the completion tag for that packet is being received,
> extend the Tx timestamp value and set the supported timestamping modes.
>
> Tx timestamp is requested for the PTP packets by setting a TSYN bit and
> index value in the Tx context descriptor. The driver assumption is that
> the Tx timestamp value is ready to be read when the completion tag is
> received. Then the driver schedules delayed work and the Tx timestamp
> value read is requested through virtchnl message. At the end, the Tx
> timestamp value is extended to 64-bit and provided back to the skb.
>
> Co-developed-by: Josh Hay <joshua.a.hay@...el.com>
> Signed-off-by: Josh Hay <joshua.a.hay@...el.com>
> Signed-off-by: Milena Olech <milena.olech@...el.com>
...
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_ptp.c b/drivers/net/ethernet/intel/idpf/idpf_ptp.c
...
> +/**
> + * idpf_ptp_request_ts - Request an available Tx timestamp index
> + * @tx_q: Transmit queue on which the Tx timestamp is requested
> + * @skb: The SKB to associate with this timestamp request
> + * @idx: Index of the Tx timestamp latch
> + *
> + * Request a Tx timestamp index, negotiated during PTP init, that will be set
> + * in the Tx descriptor.
> + *
> + * Return: 0 and the index that can be provided to Tx descriptor on success,
> + * -errno otherwise.
> + */
> +int idpf_ptp_request_ts(struct idpf_tx_queue *tx_q, struct sk_buff *skb,
> + u32 *idx)
> +{
> + struct idpf_ptp_tx_tstamp *ptp_tx_tstamp;
> + struct list_head *head;
> +
> + /* Get the index from the free latches list */
> + spin_lock_bh(&tx_q->cached_tstamp_caps->lock_free);
> +
> + head = &tx_q->cached_tstamp_caps->latches_free;
> + if (list_empty(head)) {
> + spin_unlock_bh(&tx_q->cached_tstamp_caps->lock_in_use);
Hi Milena and Josh,
Should the line above be:
spin_unlock_bh(&tx_q->cached_tstamp_caps->lock_free);
^^^^^^^^^
Flagged by Smatch.
> + return -ENOBUFS;
> + }
> +
> + ptp_tx_tstamp = list_first_entry(head, struct idpf_ptp_tx_tstamp,
> + list_member);
> + list_del(&ptp_tx_tstamp->list_member);
> + spin_unlock_bh(&tx_q->cached_tstamp_caps->lock_free);
> +
> + ptp_tx_tstamp->skb = skb_get(skb);
> + skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
> +
> + /* Move the element to the used latches list */
> + spin_lock_bh(&tx_q->cached_tstamp_caps->lock_in_use);
> + list_add(&ptp_tx_tstamp->list_member,
> + &tx_q->cached_tstamp_caps->latches_in_use);
> + spin_unlock_bh(&tx_q->cached_tstamp_caps->lock_in_use);
> +
> + *idx = ptp_tx_tstamp->idx;
> +
> + return 0;
> +}
...