Date:   Wed, 2 Mar 2022 11:26:24 +0000
From:   Robin Murphy <robin.murphy@....com>
To:     Miles Chen <miles.chen@...iatek.com>
Cc:     wsd_upstream@...iatek.com, linux-kernel@...r.kernel.org,
        Libo.Kang@...iatek.com, yf.wang@...iatek.com,
        iommu@...ts.linux-foundation.org,
        linux-mediatek@...ts.infradead.org, Ning.Li@...iatek.com,
        matthias.bgg@...il.com, stable@...r.kernel.org, will@...nel.org,
        linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH v2] iommu/iova: Reset max32_alloc_size after cleaning

On 2022-03-01 23:29, Miles Chen via iommu wrote:
> Hi Yunfei,
> 
>>> When __alloc_and_insert_iova_range() fails, it records the current
>>> allocation size in max32_alloc_size (iovad->max32_alloc_size = size).
>>> When the retry then re-enters __alloc_and_insert_iova_range(), it is
>>> stopped by the check condition (size >= iovad->max32_alloc_size) and
>>> goes to iova32_full directly, so the regular allocation in
>>> __alloc_and_insert_iova_range() is never actually retried.
>>>
>>> Therefore, reset max32_alloc_size before retrying the allocation after
>>> a failure, i.e. set max32_alloc_size back to iovad's initial
>>> dma_32bit_pfn value, so that the retried allocation in
>>> __alloc_and_insert_iova_range() can actually be attempted.
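
For reference, the check in question looks roughly like this - a simplified
sketch of __alloc_and_insert_iova_range() from drivers/iommu/iova.c; the
exact details vary between kernel versions:

static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
		unsigned long size, unsigned long limit_pfn,
		struct iova *new, bool size_aligned)
{
	unsigned long flags;

	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
	/*
	 * A previous failure recorded its size in max32_alloc_size, so any
	 * equal-or-larger 32-bit-limited request bails out immediately -
	 * including the retry after flushing the caches, unless something
	 * has reset max32_alloc_size in the meantime.
	 */
	if (limit_pfn <= iovad->dma_32bit_pfn &&
	    size >= iovad->max32_alloc_size)
		goto iova32_full;

	/* ... walk the rbtree; on success, insert, unlock and return 0 ... */

iova32_full:
	iovad->max32_alloc_size = size;
	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
	return -ENOMEM;
}
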
>>
>> Have you observed this making any difference in practice?
>>
>> Given that both free_cpu_cached_iovas() and free_global_cached_iovas()
>> call iova_magazine_free_pfns(), which calls remove_iova(), which calls
>> __cached_rbnode_delete_update(), I'm thinking no...
>>
>> Robin.
>>
> 
> As Robin pointed out, if some cached iovas are freed by
> free_global_cached_iovas()/free_cpu_cached_iovas(),
> max32_alloc_size will already have been reset to iovad->dma_32bit_pfn.
> 
> If no cached iova is freed, resetting max32_alloc_size before the
> retry only buys us one more attempt. Is it possible that other users
> free their iovas during that additional retry?

No, it's not possible, since everyone's serialised by iova_rbtree_lock. 
If the caches were already empty and the retry gets the lock first, it 
will still fail again - forcing a reset of max32_alloc_size only means 
it has to take the slow path to that failure. If another caller *did* 
manage to get in and free something between free_global_cached_iovas() 
dropping the lock and alloc_iova() re-taking it, then that would have 
legitimately reset max32_alloc_size anyway.
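
For reference, that reset lives in __cached_rbnode_delete_update(), which
runs for every IOVA removed from the rbtree - in simplified form (the exact
condition guarding the reset differs between kernel versions):

static void
__cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
{
	/* ... move cached32_node / cached_node past the node being freed ... */

	/*
	 * Freeing anything below the 32-bit boundary means space may be
	 * available there again, so forget the previously recorded
	 * failure size.
	 */
	if (free->pfn_lo < iovad->dma_32bit_pfn)
		iovad->max32_alloc_size = iovad->dma_32bit_pfn;
}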

Thanks,
Robin.

> alloc_iova_fast()
>   retry:
>     alloc_iova() // failed, iovad->max32_alloc_size = size
>     free_cpu_cached_iovas()
>       iova_magazine_free_pfns()
>         remove_iova()
>           __cached_rbnode_delete_update()
>             iovad->max32_alloc_size = iovad->dma_32bit_pfn // reset
>     free_global_cached_iovas()
>       iova_magazine_free_pfns()
>         remove_iova()
>           __cached_rbnode_delete_update()
>             iovad->max32_alloc_size = iovad->dma_32bit_pfn // reset
>     goto retry;
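
The flow quoted above corresponds roughly to alloc_iova_fast(); a simplified
sketch, with details differing between kernel versions:

unsigned long alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
			      unsigned long limit_pfn, bool flush_rcache)
{
	unsigned long iova_pfn;
	struct iova *new_iova;

	/* Fast path: try the per-CPU and global rcaches first. */
	iova_pfn = iova_rcache_get(iovad, size, limit_pfn + 1);
	if (iova_pfn)
		return iova_pfn;

retry:
	new_iova = alloc_iova(iovad, size, limit_pfn, true);
	if (!new_iova) {
		unsigned int cpu;

		if (!flush_rcache)
			return 0;

		/*
		 * Flush the rcaches back into the rbtree (which also resets
		 * max32_alloc_size via __cached_rbnode_delete_update() if
		 * anything was actually cached) and retry exactly once.
		 */
		flush_rcache = false;
		for_each_online_cpu(cpu)
			free_cpu_cached_iovas(cpu, iovad);
		free_global_cached_iovas(iovad);
		goto retry;
	}

	return new_iova->pfn_lo;
}
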
> 
> thanks,
> Miles
> _______________________________________________
> iommu mailing list
> iommu@...ts.linux-foundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/iommu
