Message-ID: <68d28743-1f07-4985-8fc5-9f5558879ac2@huawei.com>
Date: Wed, 17 Apr 2024 21:18:20 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Alexander Duyck <alexander.duyck@...il.com>
CC: <davem@...emloft.net>, <kuba@...nel.org>, <pabeni@...hat.com>,
<netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>, Andrew Morton
<akpm@...ux-foundation.org>, Eric Dumazet <edumazet@...gle.com>, David
Howells <dhowells@...hat.com>, Marc Dionne <marc.dionne@...istor.com>,
<linux-mm@...ck.org>, <linux-afs@...ts.infradead.org>
Subject: Re: [PATCH net-next v2 06/15] mm: page_frag: change page_frag_alloc_*
API to accept align param
On 2024/4/17 0:08, Alexander Duyck wrote:
> On Mon, Apr 15, 2024 at 6:22 AM Yunsheng Lin <linyunsheng@...wei.com> wrote:
>>
>> When page_frag_alloc_* API doesn't need data alignment, the
>> ALIGN() operation is unnecessary, so change page_frag_alloc_*
>> API to accept align param instead of align_mask param, and do
>> the ALIGN()'ing in the inline helper when needed.
>>
>> Signed-off-by: Yunsheng Lin <linyunsheng@...wei.com>
>
> The vast majority of callers are using this aligned one way or
> another. If anything with your recent changes we should probably be
> making sure to align the fragsz as well as the offset since most
> callers were only using the alignment of the fragsz in order to get
> their alignment.
>
> My main concern is that this change implies that most are using an
> unaligned setup when it is in fact quite the opposite.
I think the above depends on whether we are talking about 'offset unaligned'
or 'fragsz unaligned'.
'offset unaligned' seems like the most common case here.
>
>> ---
>> include/linux/page_frag_cache.h | 20 ++++++++++++--------
>> include/linux/skbuff.h | 12 ++++++------
>> mm/page_frag_cache.c | 9 ++++-----
>> net/core/skbuff.c | 12 +++++-------
>> net/rxrpc/txbuf.c | 5 +++--
>> 5 files changed, 30 insertions(+), 28 deletions(-)
>>
>> diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
>> index 04810d8d6a7d..cc0ede0912f3 100644
>> --- a/include/linux/page_frag_cache.h
>> +++ b/include/linux/page_frag_cache.h
>> @@ -25,21 +25,25 @@ struct page_frag_cache {
>>
>> void page_frag_cache_drain(struct page_frag_cache *nc);
>> void __page_frag_cache_drain(struct page *page, unsigned int count);
>> -void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
>> - gfp_t gfp_mask, unsigned int align_mask);
>> +void *page_frag_alloc(struct page_frag_cache *nc, unsigned int fragsz,
>> + gfp_t gfp_mask);
>> +
>> +static inline void *__page_frag_alloc_align(struct page_frag_cache *nc,
>> + unsigned int fragsz, gfp_t gfp_mask,
>> + unsigned int align)
>> +{
>> + nc->offset = ALIGN(nc->offset, align);
>> +
>> + return page_frag_alloc(nc, fragsz, gfp_mask);
>> +}
>>
>
> I would rather not have us breaking up the alignment into another
> function. It makes this much more difficult to work with. In addition
> you are adding offsets without actually adding to the pages which
> makes this seem exploitable. Basically just pass an alignment value of
> 32K and you are forcing a page eviction regardless.
Yes, as you mentioned in patch 9:
> The "align >= PAGE_SIZE" fix should probably go with your change that
> reversed the direction.
>
>> static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
>> unsigned int fragsz, gfp_t gfp_mask,
>> unsigned int align)
>> {
>> WARN_ON_ONCE(!is_power_of_2(align));
>> - return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
>> -}
>>
>> -static inline void *page_frag_alloc(struct page_frag_cache *nc,
>> - unsigned int fragsz, gfp_t gfp_mask)
>> -{
>> - return page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
>> + return __page_frag_alloc_align(nc, fragsz, gfp_mask, align);
>> }
>>
...
>> /*
>> * Frees a page fragment allocated out of either a compound or order 0 page.
>> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
>> index ea052fa710d8..676e2d857f02 100644
>> --- a/net/core/skbuff.c
>> +++ b/net/core/skbuff.c
>> @@ -306,18 +306,17 @@ void napi_get_frags_check(struct napi_struct *napi)
>> local_bh_enable();
>> }
>>
>> -void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
>> +void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align)
>> {
>> struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
>>
>> fragsz = SKB_DATA_ALIGN(fragsz);
>>
>
> So this is a perfect example. This caller is aligning the size by
> SMP_CACHE_BYTES. This is the most typical case. Either this or
> L1_CACHE_BYTES. As such all requests should be aligned to at least
> that. I would prefer it if we didn't strip the alignment code out of
> our main allocating function. If anything, maybe we should make it
> more specific that the expectation is that fragsz is a multiple of the
> alignment.
Let's discuss the above in patch 5.