Message-ID: <03b62ac1-7d7e-4267-b55c-1b57651f55be@linux.intel.com>
Date: Mon, 2 Sep 2024 19:50:20 +0800
From: Baolu Lu <baolu.lu@...ux.intel.com>
To: "Kumar, Sanjay K" <sanjay.k.kumar@...el.com>,
 Jacob Pan <jacob.jun.pan@...ux.intel.com>, "Tian, Kevin"
 <kevin.tian@...el.com>
Cc: baolu.lu@...ux.intel.com, "iommu@...ts.linux.dev"
 <iommu@...ts.linux.dev>, LKML <linux-kernel@...r.kernel.org>,
 "Liu, Yi L" <yi.l.liu@...el.com>, "Zhang, Tina" <tina.zhang@...el.com>
Subject: Re: [PATCH] iommu/vt-d: Fix potential soft lockup due to reclaim

On 2024/9/2 13:44, Kumar, Sanjay K wrote:
> Sorry, just catching up with emails.
> I noticed another thing.
> In the qi_submit_sync() input parameters, when count is 0, the expectation is that desc should be NULL, right?
> In that case, the below code will cause a problem.
> 
> type = desc->qw0 & GENMASK_ULL(3, 0);
> 
> The above line requires the caller (when calling with count = 0) to pass a fake non-NULL desc pointer. Should we fix this as well? Alternatively, we can fix it whenever we introduce a use case where a caller passes count = 0.

No worries, Sanjay. I will take care of this later. This is actually not
a fix, but rather an extension of the helper to support a new use case
(count=0).
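
For illustration only, the count == 0 path could avoid the dereference
along these lines (just a sketch, not tested; any later use of 'type'
in that path would still need a second look):

	/* Callers may pass desc == NULL when count == 0; don't dereference it. */
	type = count ? (desc->qw0 & GENMASK_ULL(3, 0)) : 0;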

Thanks,
baolu

> 
> Thanks,
> Sanjay
> 
> -----Original Message-----
> From: Jacob Pan <jacob.jun.pan@...ux.intel.com>
> Sent: Thursday, July 25, 2024 7:42 PM
> To: Tian, Kevin <kevin.tian@...el.com>
> Cc: iommu@...ts.linux.dev; LKML <linux-kernel@...r.kernel.org>; Lu Baolu <baolu.lu@...ux.intel.com>; Liu, Yi L <yi.l.liu@...el.com>; Zhang, Tina <tina.zhang@...el.com>; Kumar, Sanjay K <sanjay.k.kumar@...el.com>; jacob.jun.pan@...ux.intel.com
> Subject: Re: [PATCH] iommu/vt-d: Fix potential soft lockup due to reclaim
> 
> 
> On Fri, 26 Jul 2024 00:18:13 +0000, "Tian, Kevin" <kevin.tian@...el.com>
> wrote:
> 
>>> From: Jacob Pan <jacob.jun.pan@...ux.intel.com>
>>> Sent: Friday, July 26, 2024 5:11 AM
>>>>>>> @@ -1463,8 +1462,14 @@ int qi_submit_sync(struct intel_iommu *iommu, struct qi_desc *desc,
>>>>>>>   		raw_spin_lock(&qi->q_lock);
>>>>>>>   	}
>>>>>>>
>>>>>>> -	for (i = 0; i < count; i++)
>>>>>>> -		qi->desc_status[(index + i) % QI_LENGTH] = QI_DONE;
>>>>>>> +	/*
>>>>>>> +	 * The reclaim code can free descriptors from multiple submissions
>>>>>>> +	 * starting from the tail of the queue. When count == 0, the
>>>>>>> +	 * status of the standalone wait descriptor at the tail of the queue
>>>>>>> +	 * must be set to QI_TO_BE_FREED to allow the reclaim code to proceed.
>>>>>>> +	 */
>>>>>>> +	for (i = 0; i <= count; i++)
>>>>>>> +		qi->desc_status[(index + i) % QI_LENGTH] = QI_TO_BE_FREED;
>>>>>>
>>>>>> We don't really need a new flag. Just set them to QI_FREE and
>>>>>> then reclaim QI_FREE slots until hitting qi->head in
>>>>>> reclaim_free_desc().
>>>>> We do need to have a separate state for descriptors pending to
>>>>> be freed. Otherwise, the reclaim code will advance past the intended range.
>>>>>   
>>>>
>>>> The commit msg says that QI_DONE is currently used for conflicting
>>>> purposes.
>>>>
>>>> Using QI_FREE keeps it only for signaling that a wait desc is
>>>> completed.
>>>>
>>>> The key is that reclaim() should not change a desc's state before
>>>> it's consumed by the owner. Instead, we always let the owner change
>>>> the state, and reclaim() only scans and adjusts the tracking fields;
>>>> then the race condition disappears.
>>>>
>>>> In this example T2's slots are changed to QI_FREE by T2 after it
>>>> completes all the checks. Only at that point can those slots be
>>>> reclaimed.
>>>
>>> The problem is that without the TO_BE_FREED state, the reclaim code
>>> would have no way of knowing which ones are to be reclaimed and
>>> which ones are currently free. Therefore, it cannot track free_cnt.
>>>
>>> The current reclaim code is not aware of owners, nor of how many
>>> descriptors to reclaim.
>>>
>>> If I make the following changes and run, free_cnt keeps going up
>>> and the system cannot boot. Perhaps you meant another way?
>>>
>>> --- a/drivers/iommu/intel/dmar.c
>>> +++ b/drivers/iommu/intel/dmar.c
>>> @@ -1204,8 +1204,7 @@ static void free_iommu(struct intel_iommu *iommu)
>>>    */
>>>   static inline void reclaim_free_desc(struct q_inval *qi)
>>>   {
>>> -       while (qi->desc_status[qi->free_tail] == QI_TO_BE_FREED) {
>>> -               qi->desc_status[qi->free_tail] = QI_FREE;
>>> +       while (qi->desc_status[qi->free_tail] == QI_FREE) {
>>>                  qi->free_tail = (qi->free_tail + 1) % QI_LENGTH;
>>>                  qi->free_cnt++;
>>
>> A check is missing here to prevent reclaiming unused slots:
>>
>> 		if (qi->free_tail == qi->free_head)
>> 			break;
>>
>> In the example flow, reclaim_free_desc() in T1 will only reclaim slots
>> used by T1, as T2's slots are either QI_IN_USE or QI_DONE. T2's slots
>> will be reclaimed when T2 calls reclaim_free_desc() after setting them
>> to QI_FREE, and reclaim will stop at qi->free_head.
>>
>> If for some reason T2 completes earlier than T1, reclaim_free_desc()
>> in T2 does nothing, as the first slot at qi->free_tail belongs to T1
>> and is still QI_IN_USE. T2's slots then wait until reclaim is triggered
>> by T1 later.
>>
> This makes sense. Unlike the original code, we now only allow freeing the submitter's own descriptors.
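> 
> Just to make sure I read it right, the end result would be roughly
> the below (untested sketch):
> 
> 	static inline void reclaim_free_desc(struct q_inval *qi)
> 	{
> 		/*
> 		 * Only walk slots their owners have already marked QI_FREE,
> 		 * and never advance past the current free_head.
> 		 */
> 		while (qi->desc_status[qi->free_tail] == QI_FREE &&
> 		       qi->free_tail != qi->free_head) {
> 			qi->free_tail = (qi->free_tail + 1) % QI_LENGTH;
> 			qi->free_cnt++;
> 		}
> 	}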
> 
>>>          }
>>> @@ -1466,10 +1465,10 @@ int qi_submit_sync(struct intel_iommu *iommu, struct qi_desc *desc,
>>>           * The reclaim code can free descriptors from multiple submissions
>>>           * starting from the tail of the queue. When count == 0, the
>>>           * status of the standalone wait descriptor at the tail of the queue
>>> -        * must be set to QI_TO_BE_FREED to allow the reclaim code to proceed.
>>> +        * must be set to QI_FREE to allow the reclaim code to proceed.
>>>           */
>>>          for (i = 0; i <= count; i++)
>>> -               qi->desc_status[(index + i) % QI_LENGTH] = QI_TO_BE_FREED;
>>> +               qi->desc_status[(index + i) % QI_LENGTH] = QI_FREE;
>>>
>>>          reclaim_free_desc(qi);
>>>          raw_spin_unlock_irqrestore(&qi->q_lock, flags);
>>> diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
>>> index 1ab39f9145f2..eaf015b4353b 100644
>>> --- a/drivers/iommu/intel/iommu.h
>>> +++ b/drivers/iommu/intel/iommu.h
>>> @@ -382,8 +382,7 @@ enum {
>>>          QI_FREE,
>>>          QI_IN_USE,
>>>          QI_DONE,
>>> -       QI_ABORT,
>>> -       QI_TO_BE_FREED
>>> +       QI_ABORT
>>>   };
>>>
>>> Thanks,
>>>
>>> Jacob
>>
> 
> 
> Thanks,
> 
> Jacob
> 

