Date:   Fri, 13 Jan 2023 10:19:14 +0800
From:   Yunsheng Lin <linyunsheng@...wei.com>
To:     Jesper Dangaard Brouer <jbrouer@...hat.com>,
        Matthew Wilcox <willy@...radead.org>
CC:     <brouer@...hat.com>, Jesper Dangaard Brouer <hawk@...nel.org>,
        Ilias Apalodimas <ilias.apalodimas@...aro.org>,
        <netdev@...r.kernel.org>, <linux-mm@...ck.org>,
        Shakeel Butt <shakeelb@...gle.com>
Subject: Re: [PATCH v3 00/26] Split netmem from struct page

On 2023/1/12 18:15, Jesper Dangaard Brouer wrote:
> On 11/01/2023 14.21, Matthew Wilcox wrote:
>> On Wed, Jan 11, 2023 at 04:25:46PM +0800, Yunsheng Lin wrote:
>>> On 2023/1/11 12:21, Matthew Wilcox (Oracle) wrote:
>>>> The MM subsystem is trying to reduce struct page to a single pointer.
>>>> The first step towards that is splitting struct page by its individual
>>>> users, as has already been done with folio and slab.  This patchset does
>>>> that for netmem which is used for page pools.
>>> As the page pool is only used for the rx side of the net stack (depending
>>> on the driver), and a lot more memory for the net stack comes from
>>> page_frag_alloc_align(), kmem cache, etc., naming it netmem seems a little
>>> overkill. Perhaps a more specific name for the page pool, such as pp_cache?
>>>
>>> @Jesper & Ilias
>>> Any better idea?
> 
> I like the 'netmem' name.

Fair enough.
I just pointed out why netmem might not be appropriate when we have not yet
figured out how netmem will work through the whole networking stack.
It is eventually your and David/Jakub's call to decide the naming anyway.
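
(For concreteness, the split in the cover letter gives page_pool its own view
of the memory, roughly along these lines; this is only a sketch to anchor the
discussion, field names are illustrative and the exact layout is in Matthew's
patches:)

        struct netmem {
                unsigned long flags;        /* must line up with page->flags */
                unsigned long pp_magic;
                struct page_pool *pp;
                unsigned long dma_addr;
                atomic_long_t pp_frag_count;
                atomic_t _refcount;         /* must line up with page->_refcount */
        };

Each field has to sit at the same offset as its counterpart in struct page so
the two can be converted back and forth, the same trick folio and slab use.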

> 
>>> And it seems some APIs may need changing too, as we are not pooling 'pages'
>>> now.
> 
> IMHO it would be overkill to rename the page_pool to e.g. netmem_pool,
> as it would generate too much churn and would be hard to follow in git,
> since the source file page_pool.c would also have to be renamed.
> I guess we keep page_pool for historical reasons ;-)

I think this is a conflict between backward and forward maintainability.
IMHO we should prefer forward maintainability over backward maintainability.

And Greg offered a possible way to fix the backport problem:
https://www.spinics.net/lists/kernel/msg4648826.html

As for git history, I suppose that is a price we have to pay for future
maintainability.

> 
>> I raised the question of naming in v1, six weeks ago, and nobody had
>> any better names.  Seems a little unfair to ignore the question at first
>> and then bring it up now.  I'd hate to miss the merge window because of
>> a late-breaking major request like this.
>>
>> https://lore.kernel.org/netdev/20221130220803.3657490-1-willy@infradead.org/
>>
>> I'd like to understand what we think we'll do in networking when we trim
>> struct page down to a single pointer.  All these usages that aren't from
>> page_pool -- what information does networking need to track per-allocation?
>> Would it make sense for the netmem to describe all memory used by the
>> networking stack, and have allocators other than page_pool also return
>> netmem, 
> 
> This is also how I see the future, that other netstack "allocators" can
> return and work with 'netmem' objects.   IMHO we are already cramming

I am not sure how "other netstack 'allocators' can return and work with
'netmem' objects" would work. I suppose it means putting a different union
member for each allocator in struct netmem, like struct page does? Doesn't
that bring back the same problem Matthew is trying to fix in this patchset?
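
To make the concern concrete, here is a rough sketch (purely illustrative, not
code from this patchset; the pp_* fields just echo the page-pool fields that
already live in struct page, and the tx_* fields are made up) of where that
seems to lead once several allocators share the type:

        struct netmem {
                unsigned long flags;
                union {
                        struct {                /* page_pool user */
                                unsigned long pp_magic;
                                struct page_pool *pp;
                                unsigned long dma_addr;
                                atomic_long_t pp_frag_count;
                        };
                        struct {                /* hypothetical TX allocator */
                                void *tx_owner;
                                unsigned long tx_cookie;
                        };
                };
                atomic_t _refcount;
        };

That is, we would be back to multiplexing per-user state in one struct, which
is exactly what splitting struct page is supposed to get rid of.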


> too many use-cases into page_pool (like the frag support Yunsheng
> added).  IMHO there is room for other netstack "allocators" that can

I do not understand why frag support is viewed as "cramming use-cases into
the page pool".
In my defence, the frag support for rx fits within the page pool; it just
extends the page pool to return smaller buffers than before. If I created
another allocator for that, I would be reinventing a lot of wheels the page
pool has already invented.
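
For reference, the frag path is just another call into the same pool. A driver
rx path uses it roughly like this (sketch only; 'pool' and 'rx_buf_size' are
assumed to come from the driver, and the pool is assumed to have been created
with PP_FLAG_PAGE_FRAG set in page_pool_params.flags):

        unsigned int offset;
        struct page *page;
        void *buf;

        /* hand out a sub-page chunk from the same page_pool */
        page = page_pool_dev_alloc_frag(pool, &offset, rx_buf_size);
        if (!page)
                return -ENOMEM;
        buf = page_address(page) + offset;  /* rx buffer of rx_buf_size bytes */

So the frag case reuses the pool's existing recycling and DMA-mapping
machinery instead of duplicating it in a separate allocator.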

> utilize netmem.  The page_pool is optimized for RX-NAPI workloads, using
> it for other purposes is a mistake IMHO.  People should create other
> netstack "allocators" that solves their specific use-cases.  E.g. The TX
> path likely needs another "allocator" optimized for this TX use-case.
> 
