Message-ID: <MW4PR11MB58897E22AD4DF1C410B9F62D8EB72@MW4PR11MB5889.namprd11.prod.outlook.com>
Date: Thu, 10 Apr 2025 14:11:49 +0000
From: "Olech, Milena" <milena.olech@...el.com>
To: "Keller, Jacob E" <jacob.e.keller@...el.com>,
"intel-wired-lan@...ts.osuosl.org" <intel-wired-lan@...ts.osuosl.org>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>, "Nguyen, Anthony L"
<anthony.l.nguyen@...el.com>, "Kitszel, Przemyslaw"
<przemyslaw.kitszel@...el.com>, "Lobakin, Aleksander"
<aleksander.lobakin@...el.com>, "Tantilov, Emil S"
<emil.s.tantilov@...el.com>, "Linga, Pavan Kumar"
<pavan.kumar.linga@...el.com>, "Salin, Samuel" <samuel.salin@...el.com>
Subject: RE: [Intel-wired-lan] [PATCH v10 iwl-next 09/11] idpf: add Tx
timestamp capabilities negotiation
On 4/9/2025 8:09 PM, Jacob Keller wrote:
>On 4/9/2025 7:04 AM, Olech, Milena wrote:
>> On 4/8/2025 11:23 PM, Jacob Keller wrote:
>>
>>> On 4/8/2025 3:31 AM, Milena Olech wrote:
>>>> +static void idpf_ptp_release_vport_tstamp(struct idpf_vport *vport)
>>>> +{
>>>> + struct idpf_ptp_tx_tstamp *ptp_tx_tstamp, *tmp;
>>>> + struct list_head *head;
>>>> +
>>>> + /* Remove list with free latches */
>>>> + spin_lock(&vport->tx_tstamp_caps->lock_free);
>>>> +
>>>> + head = &vport->tx_tstamp_caps->latches_free;
>>>> + list_for_each_entry_safe(ptp_tx_tstamp, tmp, head, list_member) {
>>>> + list_del(&ptp_tx_tstamp->list_member);
>>>> + kfree(ptp_tx_tstamp);
>>>> + }
>>>> +
>>>> + spin_unlock(&vport->tx_tstamp_caps->lock_free);
>>>> +
>>>> + /* Remove list with latches in use */
>>>> + spin_lock(&vport->tx_tstamp_caps->lock_in_use);
>>>> +
>>>> + head = &vport->tx_tstamp_caps->latches_in_use;
>>>> + list_for_each_entry_safe(ptp_tx_tstamp, tmp, head, list_member) {
>>>> + list_del(&ptp_tx_tstamp->list_member);
>>>> + kfree(ptp_tx_tstamp);
>>>> + }
>>>> +
>>>> + spin_unlock(&vport->tx_tstamp_caps->lock_in_use);
>>>> +
>>>> + kfree(vport->tx_tstamp_caps);
>>>> + vport->tx_tstamp_caps = NULL;
>>>> +}
>>> Could you provide a summary and overview of the locking scheme used
>>> here? I see you have multiple spin locks for both the free bits and the
>>> in-use bits, and it's a bit hard to grasp the reasoning behind this. We
>>> had a lot of issues getting locking for Tx timestamps correct in ice,
>>> though most of that had to do with quirks in the hardware.
>>>
>>
>> Ofc :) So the main idea is to have a list of free latches (indexes) and
>> a list of latches that are in use, where 'in use' means that a timestamp
>> for that index has been requested and is being processed.
>>
>> At the beginning, the driver negotiates the list of latches with the CP
>> and adds them to the free list. When a timestamp is requested, the
>> driver takes the first item from the free list and moves it to the
>> 'in use' list. Similarly, when the timestamp is read, the driver moves
>> the index from 'in use' back to 'free'.
>>
>
>Ok. Is there a reason these need separate locks instead of just sharing
>the same lock?
>
That's a very good question. In fact, in most places I need to move an
item from one list to the other, so I could use the same spinlock for
both.
The only place where just one of them is taken is when sending the
virtchnl message to get the Tx timestamp value, where I search for items
on the 'in use' list.
But that does not mean we cannot share a lock: while the 'in use' list
is being processed, it is not possible to request a new index, because
moving an entry from 'free' to 'in use' needs the same lock.
So to summarize, at the end of the day I don't see any specific reason
to have two. Let me know your thoughts, but I think it is safe to remove
one of the locks.
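Roughly, the request path could then look like this (untested sketch;
'latches_lock' is a hypothetical single lock replacing lock_free and
lock_in_use, and idpf_ptp_get_free_latch is a made-up helper name, the
rest matches the fields from the patch):

	static struct idpf_ptp_tx_tstamp *
	idpf_ptp_get_free_latch(struct idpf_vport *vport)
	{
		struct idpf_ptp_tx_tstamp *ptp_tx_tstamp;

		/* A single lock serializes both lists, so a latch is
		 * always on exactly one of them.
		 */
		spin_lock(&vport->tx_tstamp_caps->latches_lock);

		ptp_tx_tstamp = list_first_entry_or_null(&vport->tx_tstamp_caps->latches_free,
							 struct idpf_ptp_tx_tstamp,
							 list_member);
		if (ptp_tx_tstamp)
			list_move(&ptp_tx_tstamp->list_member,
				  &vport->tx_tstamp_caps->latches_in_use);

		spin_unlock(&vport->tx_tstamp_caps->latches_lock);

		return ptp_tx_tstamp;
	}

The same lock would then also cover the lookup on the 'in use' list in
the virtchnl path.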
Milena
>> Regards,
>> Milena
>>
>>> Thanks,
>>> Jake
>>>
>
>