Date:   Wed, 15 Nov 2023 17:33:13 +0800
From:   Yunsheng Lin <linyunsheng@...wei.com>
To:     Jakub Kicinski <kuba@...nel.org>
CC:     Mina Almasry <almasrymina@...gle.com>, <davem@...emloft.net>,
        <pabeni@...hat.com>, <netdev@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>,
        Willem de Bruijn <willemb@...gle.com>,
        Kaiyuan Zhang <kaiyuanz@...gle.com>,
        Jesper Dangaard Brouer <hawk@...nel.org>,
        Ilias Apalodimas <ilias.apalodimas@...aro.org>,
        Eric Dumazet <edumazet@...gle.com>,
        Christian König <christian.koenig@....com>,
        Jason Gunthorpe <jgg@...dia.com>,
        Matthew Wilcox <willy@...radead.org>,
        Linux-MM <linux-mm@...ck.org>
Subject: Re: [PATCH RFC 3/8] memory-provider: dmabuf devmem memory provider

On 2023/11/15 6:25, Jakub Kicinski wrote:
> On Tue, 14 Nov 2023 16:23:29 +0800 Yunsheng Lin wrote:
>> I would expect net stack, page pool, driver still see the 'struct page',
>> only memory provider see the specific struct for itself, for the above,
>> devmem memory provider sees the 'struct page_pool_iov'.
> 
> You can't lie to the driver that an _iov is a page either.

Yes, agreed about that.

As a matter of fact, the driver should be aware of what kind of
memory provider is in use when it calls page_pool_create() during
the init process.

> The driver must explicitly "opt-in" to using the _iov variant,
> by calling the _iov set of APIs.
> 
> Only drivers which can support header-data split can reasonably
> use the _iov API, for data pages.

But those drivers can still allow allocating normal memory, right?
Sometimes for both the data and header parts, and sometimes only for
the header part.

Do those drivers need to support two sets of APIs: one with _iov for
devmem, and one without _iov for normal memory? Supporting two sets
of APIs seems somewhat unnecessary from the driver's point of view.
The driver already knows which type of page to expect when calling
page_pool_alloc() with a specific page_pool instance.
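For concreteness, the two-API shape being questioned would look
roughly like this (page_pool_alloc_iov() is an assumed name for the
_iov variant under discussion; only page_pool_alloc_pages() exists
today):

	/* normal memory path, existing API */
	page = page_pool_alloc_pages(pool, GFP_ATOMIC);

	/* devmem path, assumed parallel _iov API returning
	 * a struct page_pool_iov
	 */
	ppiov = page_pool_alloc_iov(pool, GFP_ATOMIC);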

Or do we use the API with _iov to allocate both devmem and normal
memory in a new driver that supports devmem pages? If that is the
case, does it really matter whether the API has the _iov suffix or
not?
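If the single-API direction were taken, a refill path could stay
provider-agnostic, as in the sketch below (my_rx_ring and the refill
helper are hypothetical, and the sketch deliberately follows the
questioning above while glossing over the point that an _iov is not
really a struct page):

	static int rx_refill_one(struct my_rx_ring *ring)
	{
		struct page *page;

		/* the pool was created with a specific memory provider,
		 * so this may hand back a normal page or a devmem-backed
		 * one; the driver code does not change either way
		 */
		page = page_pool_alloc_pages(ring->pool, GFP_ATOMIC);
		if (!page)
			return -ENOMEM;

		ring->bufs[ring->head++] = page;
		return 0;
	}

which is exactly why the _iov suffix on the allocation API looks
redundant once the provider is fixed at page_pool_create() time.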

