Message-ID: <42fecf9d-70c9-b686-d2f7-080b299060d9@suse.cz>
Date: Tue, 26 Jan 2021 17:19:54 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Kevin Hao <haokexin@...il.com>,
"David S . Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: netdev@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH net-next 1/4] mm: page_frag: Introduce
page_frag_alloc_align()
On 1/23/21 12:59 PM, Kevin Hao wrote:
> The current implementation of page_frag_alloc() does not guarantee
> any alignment of the returned buffer address. But some hardware does
> require the DMA buffer to be correctly aligned, so we would have to
> use a workaround like the one below if buffers allocated by
> page_frag_alloc() are used by such hardware for DMA.
> buf = page_frag_alloc(really_needed_size + align);
> buf = PTR_ALIGN(buf, align);
>
> This code looks ugly and can waste a lot of memory if the buffers
> are used in a network driver for TX/RX. So introduce
> page_frag_alloc_align() to make sure that an aligned buffer address is
> returned.
>
> Signed-off-by: Kevin Hao <haokexin@...il.com>
Acked-by: Vlastimil Babka <vbabka@...e.cz>
Agree with Jakub about static inline.
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5135,8 +5135,8 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
> }
> EXPORT_SYMBOL(__page_frag_cache_drain);
>
> -void *page_frag_alloc(struct page_frag_cache *nc,
> - unsigned int fragsz, gfp_t gfp_mask)
> +void *page_frag_alloc_align(struct page_frag_cache *nc,
> + unsigned int fragsz, gfp_t gfp_mask, int align)
> {
> unsigned int size = PAGE_SIZE;
> struct page *page;
> @@ -5188,10 +5188,18 @@ void *page_frag_alloc(struct page_frag_cache *nc,
> }
>
> nc->pagecnt_bias--;
> + offset = align ? ALIGN_DOWN(offset, align) : offset;
We don't change offset if align == 0, so I'd go with the simpler
if (align)
offset = ...
> nc->offset = offset;
>
> return nc->va + offset;
> }
> +EXPORT_SYMBOL(page_frag_alloc_align);
> +
> +void *page_frag_alloc(struct page_frag_cache *nc,
> + unsigned int fragsz, gfp_t gfp_mask)
> +{
> + return page_frag_alloc_align(nc, fragsz, gfp_mask, 0);
> +}
> EXPORT_SYMBOL(page_frag_alloc);
>
> /*
>