Message-ID: <2f256bce-0c37-4940-9218-9545daa46169@huawei.com>
Date: Wed, 6 Nov 2024 18:56:34 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Jesper Dangaard Brouer <hawk@...nel.org>,
	Toke Høiland-Jørgensen <toke@...hat.com>,
	<davem@...emloft.net>, <kuba@...nel.org>, <pabeni@...hat.com>
CC: <zhangkun09@...wei.com>, <fanghaiqing@...wei.com>,
	<liuyonglong@...wei.com>, Robin Murphy <robin.murphy@....com>, Alexander
 Duyck <alexander.duyck@...il.com>, IOMMU <iommu@...ts.linux.dev>, Andrew
 Morton <akpm@...ux-foundation.org>, Eric Dumazet <edumazet@...gle.com>, Ilias
 Apalodimas <ilias.apalodimas@...aro.org>, <linux-mm@...ck.org>,
	<linux-kernel@...r.kernel.org>, <netdev@...r.kernel.org>, kernel-team
	<kernel-team@...udflare.com>, Christoph Hellwig <hch@....de>,
	<m.szyprowski@...sung.com>
Subject: Re: [PATCH net-next v3 3/3] page_pool: fix IOMMU crash when driver
 has already unbound

+cc Christoph & Marek

On 2024/11/6 4:11, Jesper Dangaard Brouer wrote:

...

>>>
>>>> I am not sure I understand the reasoning behind the above suggestion to 'wait
>>>> and see if this actually turns out to be a problem' when we already know that there
>>>> are some cases which need cache kicking/flushing for the waiting to work, that such
>>>> kicking/flushing may not be easy and may take an indefinite amount of time, and that
>>>> there might be other cases needing kicking/flushing that we don't know about yet.
>>>>
>>>> Is there any reason not to consider recording the inflight pages, so that unmapping
>>>> can be done for them before the driver is unbound, supposing a dynamic number of
>>>> inflight pages can be supported?
>>>>
>>>> IOW, is there any reason you and Jesper take it as axiomatic that recording the
>>>> inflight pages is bad, supposing the inflight pages can be unlimited and the
>>>> recording can be done with the least performance overhead?
>>>
>>> Well, page pool is a memory allocator, and it already has a mechanism to
>>> handle returning of memory to it. You're proposing to add a second,
>>> orthogonal, mechanism to do this, one that adds both overhead and
>>
>> I would call it a replacement/improvement for the old one rather than 'a second,
>> orthogonal' mechanism, as the old one doesn't really exist after this patch.
>>
> 
> Yes, you are proposing a very radical change to the page_pool design.
> And it is being proposed as a fix patch for an IOMMU crash.
> 
> It is a very radical change that page_pool needs to keep track of *ALL* in-flight pages.

I agree that it is a radical change; that is why it is targeting the net-next
tree instead of the net tree even though there is a Fixes tag for it.

If there were a proper, non-radical way to fix this, I would prefer the
non-radical way too.

> 
> The DMA issue is a life-time issue of the DMA object associated with the
> struct device.  Then, why are you not looking at extending the life-time

It seems it is not really about the life-time of the DMA mapping being tied to
the life-time of the 'struct device'; according to the opinion of the IOMMU/DMA
experts in [1] & [2], it is the life-time of DMA API usage that is tied to the
life-time of the driver bound to that 'struct device'.

I am not sure about the reasoning behind that, but the implementation does seem
to work that way, as mentioned in [3]:
__device_release_driver -> device_unbind_cleanup -> arch_teardown_dma_ops

1. https://lkml.org/lkml/2024/8/6/632
2. https://lore.kernel.org/all/20240923175226.GC9634@ziepe.ca/
3. https://lkml.org/lkml/2024/10/15/686
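
To make the ordering concrete, below is a much-simplified sketch of that unbind
path. It is not the actual kernel code (the real __device_release_driver() in
drivers/base/dd.c does a lot more, and device_unbind_cleanup() is internal to
the driver core); it only illustrates that the driver's remove callback runs
before the per-device DMA setup is torn down, so a DMA unmap issued for a
still-inflight page after unbind runs against a device whose DMA ops are
already gone:

#include <linux/device.h>

/* Much-simplified sketch, NOT the actual kernel code; only the ordering
 * matters here: the driver's remove callback runs first, and the
 * per-device DMA ops are torn down right afterwards.
 */
static void sketch_device_release_driver(struct device *dev)
{
	struct device_driver *drv = dev->driver;

	/* ... bus notifiers, sysfs cleanup, etc. elided ... */

	if (dev->bus && dev->bus->remove)
		dev->bus->remove(dev);	/* driver stops using the device here */
	else if (drv->remove)
		drv->remove(dev);

	/* device_unbind_cleanup() ends up calling arch_teardown_dma_ops(),
	 * after which DMA unmaps for pages still in flight are problematic. */
	device_unbind_cleanup(dev);
}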

> of the DMA object, or at least detect when the DMA object goes away, such
> that we can change a setting in page_pool to stop calling DMA unmap for
> the in-flight pages once they get returned (which we have an existing
> mechanism for).

To be honest, I have mostly been relying on the opinion of the IOMMU/DMA
experts for correct DMA API usage, as mentioned above. So I am not sure
whether skipping DMA unmapping for the inflight pages is correct DMA API
usage. If it is, how would we detect whether the DMA unmapping can be
skipped?

From the previous discussion, skipping DMA unmapping may cause some resource
leaks, such as the IOVA resource behind the IOMMU and the bounce buffer
memory behind the swiotlb.
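
To make the leak concern concrete, here is a minimal, hypothetical sketch of
the usual map/unmap pairing (my_rx_map()/my_rx_unmap() are made-up names; only
the DMA API calls are real). The mapping may take an IOVA from the IOMMU
and/or a swiotlb bounce buffer, and both stay allocated until the matching
unmap, so skipping the unmap for in-flight pages leaks them:

#include <linux/dma-mapping.h>
#include <linux/mm.h>

/* Hypothetical helpers, only to illustrate the pairing; the DMA API calls
 * (dma_map_page_attrs/dma_unmap_page_attrs/dma_mapping_error) are real.
 */
static dma_addr_t my_rx_map(struct device *dev, struct page *page)
{
	/* Mapping may allocate an IOVA from the IOMMU and/or a swiotlb
	 * bounce buffer; both stay in use until the matching unmap. */
	dma_addr_t dma = dma_map_page_attrs(dev, page, 0, PAGE_SIZE,
					    DMA_FROM_DEVICE,
					    DMA_ATTR_SKIP_CPU_SYNC);
	if (dma_mapping_error(dev, dma))
		return DMA_MAPPING_ERROR;
	return dma;
}

static void my_rx_unmap(struct device *dev, dma_addr_t dma)
{
	/* Skipping this for in-flight pages means the IOVA/bounce buffer
	 * behind the mapping is never given back. */
	dma_unmap_page_attrs(dev, dma, PAGE_SIZE, DMA_FROM_DEVICE,
			     DMA_ATTR_SKIP_CPU_SYNC);
}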

Anyway, I may be wrong; CC'ing more experts to see if we can get some
clarification from them.

> 
> 
>>> complexity, yet doesn't handle all cases (cf your comment about devmem).
>>
>> I am not sure yet whether unmapping only needs to be done using devmem's own
>> version of the DMA API, but it seems waiting might also need its own version
>> of kicking/flushing for devmem, as devmem may be held from user space?
>>
>>>
>>> And even if it did handle all cases, force-releasing pages in this way
>>> really feels like it's just papering over the issue. If there are pages
>>> being leaked (or that are outstanding forever, which basically amounts
>>> to the same thing), that is something we should be fixing the root cause
>>> of, not just working around it like this series does.
>>
>> If there were a definite time bound for the waiting, I would probably agree
>> with the above. From the previous discussion, it seems the time needed for
>> the kicking/flushing would be indefinite, depending on how much cache has to
>> be scanned/flushed.
>>
>> As for the 'papering over' part, it seems to come down to whether we paper
>> over the different kinds of kicking/flushing, or paper over unmapping using
>> the different DMA APIs.
>>
>> Also, page_pool is not really an allocator; it is more like a pool built on
>> top of a different allocator, such as the buddy allocator or the devmem
>> allocator. I am not sure it makes much sense to do the flushing when
>> page_pool_destroy() is called if the buddy allocator behind the page_pool is
>> not under memory pressure yet.
>>
> 
> I still see page_pool as an allocator like the SLUB/SLAB allocators,
> where slab caches are created (and can be destroyed again) and slab
> objects can be allocated from them.  Slab allocators also use the buddy
> allocator as their backing allocator.

I am not sure SLUB/SLAB is that similar to page_pool for the specific problem
here. At least SLUB/SLAB doesn't seem to support DMA mapping in its core, and
it doesn't seem to allow inflight objects when kmem_cache_destroy() is called:
its alloc API doesn't seem to take a reference on s->refcount, and it doesn't
do the inflight accounting that page_pool does.
https://elixir.bootlin.com/linux/v6.12-rc6/source/mm/slab_common.c#L512
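
For reference, page_pool's in-flight accounting is roughly just a pair of
counters; below is a simplified sketch of the idea behind it (the real code is
in net/core/page_pool.c and also handles counter wrap-around and stats). This
is the kind of bookkeeping a kmem_cache does not have:

#include <linux/types.h>
#include <linux/atomic.h>

/* Simplified sketch of page_pool-style in-flight accounting, not the real
 * struct page_pool: every page handed out bumps a hold counter, every page
 * returned bumps a release counter, and the difference is the number of
 * pages currently in flight outside the pool.
 */
struct pp_sketch {
	u32		pages_state_hold_cnt;	 /* bumped on the alloc side */
	atomic_t	pages_state_release_cnt; /* bumped on the return side */
};

static s32 pp_sketch_inflight(const struct pp_sketch *pool)
{
	s32 inflight = pool->pages_state_hold_cnt -
		       atomic_read(&pool->pages_state_release_cnt);

	/* kmem_cache_destroy() expects all objects to already be freed,
	 * while page_pool_destroy() tolerates (and waits for) a non-zero
	 * inflight count like this one. */
	return inflight;
}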


> 
> The page_pool is of course evolving with the addition of the devmem
> allocator as a different "backing" allocator type.


