Message-ID: <6b39d76a-b2be-4d09-a4b6-efb01c4be006@intel.com>
Date: Wed, 9 Apr 2025 11:08:46 -0700
From: Jacob Keller <jacob.e.keller@...el.com>
To: "Olech, Milena" <milena.olech@...el.com>,
"intel-wired-lan@...ts.osuosl.org" <intel-wired-lan@...ts.osuosl.org>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>, "Nguyen, Anthony L"
<anthony.l.nguyen@...el.com>, "Kitszel, Przemyslaw"
<przemyslaw.kitszel@...el.com>, "Lobakin, Aleksander"
<aleksander.lobakin@...el.com>, "Tantilov, Emil S"
<emil.s.tantilov@...el.com>, "Linga, Pavan Kumar"
<pavan.kumar.linga@...el.com>, "Salin, Samuel" <samuel.salin@...el.com>
Subject: Re: [Intel-wired-lan] [PATCH v10 iwl-next 09/11] idpf: add Tx
timestamp capabilities negotiation
On 4/9/2025 7:04 AM, Olech, Milena wrote:
> On 4/8/2025 11:23 PM, Jacob Keller wrote:
>
>> On 4/8/2025 3:31 AM, Milena Olech wrote:
>>> +static void idpf_ptp_release_vport_tstamp(struct idpf_vport *vport)
>>> +{
>>> + struct idpf_ptp_tx_tstamp *ptp_tx_tstamp, *tmp;
>>> + struct list_head *head;
>>> +
>>> + /* Remove list with free latches */
>>> + spin_lock(&vport->tx_tstamp_caps->lock_free);
>>> +
>>> + head = &vport->tx_tstamp_caps->latches_free;
>>> + list_for_each_entry_safe(ptp_tx_tstamp, tmp, head, list_member) {
>>> + list_del(&ptp_tx_tstamp->list_member);
>>> + kfree(ptp_tx_tstamp);
>>> + }
>>> +
>>> + spin_unlock(&vport->tx_tstamp_caps->lock_free);
>>> +
>>> + /* Remove list with latches in use */
>>> + spin_lock(&vport->tx_tstamp_caps->lock_in_use);
>>> +
>>> + head = &vport->tx_tstamp_caps->latches_in_use;
>>> + list_for_each_entry_safe(ptp_tx_tstamp, tmp, head, list_member) {
>>> + list_del(&ptp_tx_tstamp->list_member);
>>> + kfree(ptp_tx_tstamp);
>>> + }
>>> +
>>> + spin_unlock(&vport->tx_tstamp_caps->lock_in_use);
>>> +
>>> + kfree(vport->tx_tstamp_caps);
>>> + vport->tx_tstamp_caps = NULL;
>>> +}
>> Could you provide a summary and overview of the locking scheme used
>> here? I see you have separate spin locks for the free list and the
>> in-use list, and it's a bit hard to grasp the reasoning behind this. We
>> had a lot of issues getting locking for Tx timestamps correct in ice,
>> though most of that had to do with quirks in the hardware.
>>
>
> Ofc :) So the main idea is to have a list of free latches (indexes) and a
> list of latches that are in use - by 'in use' I mean that a timestamp for
> that index has been requested and is being processed.
>
> So at the beginning, the driver negotiates the list of latches with the CP
> and adds them to the free list. When a timestamp is requested, the driver
> takes the first item from the free list and moves it to the 'in-use' list.
> Similarly, when the timestamp is read, the driver moves the index back
> from 'in use' to 'free'.
>
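Just to check my understanding, the two hot paths then look roughly like
this (a rough sketch based on your description, with caps standing in for
vport->tx_tstamp_caps - not the actual patch code):

	/* timestamp request: pop a free latch, park it on the in-use list */
	spin_lock(&caps->lock_free);
	ptp_tx_tstamp = list_first_entry_or_null(&caps->latches_free,
						 struct idpf_ptp_tx_tstamp,
						 list_member);
	if (ptp_tx_tstamp)
		list_del(&ptp_tx_tstamp->list_member);
	spin_unlock(&caps->lock_free);

	if (!ptp_tx_tstamp)
		return -ENOBUFS;	/* no latch available right now */

	spin_lock(&caps->lock_in_use);
	list_add_tail(&ptp_tx_tstamp->list_member, &caps->latches_in_use);
	spin_unlock(&caps->lock_in_use);

	/* timestamp read: move the latch back to the free list */
	spin_lock(&caps->lock_in_use);
	list_del(&ptp_tx_tstamp->list_member);
	spin_unlock(&caps->lock_in_use);

	spin_lock(&caps->lock_free);
	list_add_tail(&ptp_tx_tstamp->list_member, &caps->latches_free);
	spin_unlock(&caps->lock_free);
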
Ok. Is there a reason these need separate locks instead of just sharing
the same lock?
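For instance, something like this (completely untested, and latches_lock
here is a made-up name for a single shared spinlock, not a field in the
patch):

	spin_lock(&caps->latches_lock);
	ptp_tx_tstamp = list_first_entry_or_null(&caps->latches_free,
						 struct idpf_ptp_tx_tstamp,
						 list_member);
	if (ptp_tx_tstamp)
		/* hand the latch from the free to the in-use list atomically */
		list_move_tail(&ptp_tx_tstamp->list_member,
			       &caps->latches_in_use);
	spin_unlock(&caps->latches_lock);

That way a latch is never off both lists in between, and the teardown in
idpf_ptp_release_vport_tstamp() only has one lock to take.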
> Regards,
> Milena
>
>> Thanks,
>> Jake
>>