Message-ID: <ZDlP2fevtfD5gMPd@casper.infradead.org>
Date: Fri, 14 Apr 2023 14:06:33 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Pankaj Raghav <p.raghav@...sung.com>
Cc: brauner@...nel.org, viro@...iv.linux.org.uk,
akpm@...ux-foundation.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, mcgrof@...nel.org,
gost.dev@...sung.com, hare@...e.de
Subject: Re: [RFC 2/4] buffer: add alloc_folio_buffers() helper
On Fri, Apr 14, 2023 at 01:08:19PM +0200, Pankaj Raghav wrote:
> Folio version of alloc_page_buffers() helper. This is required to convert
> create_page_buffers() to create_folio_buffers() later in the series.
>
> It removes one call to compound_head() compared to alloc_page_buffers().
I would convert alloc_page_buffers() to folio_alloc_buffers() and add

static inline struct buffer_head *alloc_page_buffers(struct page *page,
		unsigned long size, bool retry)
{
	return folio_alloc_buffers(page_folio(page), size, retry);
}

in buffer_head.h
(there are only five callers, so this feels like a better tradeoff
than creating a new function)
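
That way, the folio-native path added later in the series
(create_folio_buffers()) calls folio_alloc_buffers() directly and skips
the compound_head(), while the existing callers compile unchanged.
Roughly (untested, variable names just for illustration):

	/* folio-native path: no hidden compound_head() */
	bh = folio_alloc_buffers(folio, blocksize, true);

	/* legacy callers such as grow_dev_page() stay as they are;
	 * the static inline wrapper does the page_folio() conversion */
	bh = alloc_page_buffers(page, blocksize, true);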
> Signed-off-by: Pankaj Raghav <p.raghav@...sung.com>
> ---
> fs/buffer.c | 59 +++++++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 59 insertions(+)
>
> diff --git a/fs/buffer.c b/fs/buffer.c
> index 44380ff3a31f..0f9c2127543d 100644
> --- a/fs/buffer.c
> +++ b/fs/buffer.c
> @@ -900,6 +900,65 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size,
> }
> EXPORT_SYMBOL_GPL(alloc_page_buffers);
>
> +/*
> + * Create the appropriate buffers when given a folio for data area and
> + * the size of each buffer. Use the bh->b_this_page linked list to
> + * follow the buffers created. Return NULL if unable to create more
> + * buffers.
> + *
> + * The retry flag is used to differentiate async IO (paging, swapping)
> + * which may not fail from ordinary buffer allocations.
> + */
> +struct buffer_head *alloc_folio_buffers(struct folio *folio, unsigned long size,
> + bool retry)
> +{
> + struct buffer_head *bh, *head;
> + gfp_t gfp = GFP_NOFS | __GFP_ACCOUNT;
> + long offset;
> + struct mem_cgroup *memcg, *old_memcg;
> +
> + if (retry)
> + gfp |= __GFP_NOFAIL;
> +
> + /* The folio lock pins the memcg */
> + memcg = folio_memcg(folio);
> + old_memcg = set_active_memcg(memcg);
> +
> + head = NULL;
> + offset = folio_size(folio);
> + while ((offset -= size) >= 0) {
> + bh = alloc_buffer_head(gfp);
> + if (!bh)
> + goto no_grow;
> +
> + bh->b_this_page = head;
> + bh->b_blocknr = -1;
> + head = bh;
> +
> + bh->b_size = size;
> +
> + /* Link the buffer to its folio */
> + set_bh_folio(bh, folio, offset);
> + }
> +out:
> + set_active_memcg(old_memcg);
> + return head;
> +/*
> + * In case anything failed, we just free everything we got.
> + */
> +no_grow:
> + if (head) {
> + do {
> + bh = head;
> + head = head->b_this_page;
> + free_buffer_head(bh);
> + } while (head);
> + }
> +
> + goto out;
> +}
> +EXPORT_SYMBOL_GPL(alloc_folio_buffers);
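
Aside, since the comment above mentions the b_this_page list: the chain
returned here is NULL-terminated and only becomes circular once
something like link_dev_buffers() below attaches it, so callers can
walk it with a plain loop, e.g. (illustrative only):

	struct buffer_head *bh;

	for (bh = head; bh != NULL; bh = bh->b_this_page)
		set_buffer_uptodate(bh);	/* example per-buffer setup */
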
> +
> static inline void
> link_dev_buffers(struct page *page, struct buffer_head *head)
> {
> --
> 2.34.1
>