Message-ID: <20151119155525.GB13055@bfoster.bfoster>
Date: Thu, 19 Nov 2015 10:55:25 -0500
From: Brian Foster <bfoster@...hat.com>
To: Octavian Purdila <octavian.purdila@...el.com>
Cc: xfs@....sgi.com, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] xfs: support for non-mmu architectures
On Wed, Nov 18, 2015 at 12:46:21AM +0200, Octavian Purdila wrote:
> Naive implementation for non-mmu architectures: allocate physically
> contiguous xfs buffers with alloc_pages. Terribly inefficient in
> memory use and prone to fragmentation under high I/O loads, but it may
> be good enough for basic usage (which is all most non-mmu
> architectures need).
>
> This patch was tested with lklfuse [1] and basic operations seem to
> work even with 16MB allocated for LKL.
>
> [1] https://github.com/lkl/linux
>
> Signed-off-by: Octavian Purdila <octavian.purdila@...el.com>
> ---
Interesting, though this makes me wonder why we couldn't have a new
_XBF_VMEM (for example) buffer type that uses vmalloc(). I'm not
familiar with mmu-less contexts, but I see that mm/nommu.c has a
__vmalloc() interface that looks like it ultimately translates into an
alloc_pages() call. Would that accomplish what this patch is currently
trying to do?
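Roughly what I have in mind, purely as a sketch (the _XBF_VMEM flag and
the exact call site are hypothetical, and I'm assuming the 3-argument
__vmalloc() prototype):

	/*
	 * Hypothetical sketch: back the buffer with one virtually
	 * contiguous allocation instead of per-page alloc_page() calls.
	 * On nommu kernels __vmalloc() ends up physically contiguous
	 * anyway, so both cases would be covered by the same code.
	 */
	bp->b_addr = __vmalloc(BBTOB(bp->b_length), xb_to_gfp(flags),
			       PAGE_KERNEL);
	if (!bp->b_addr)
		goto fail_free_buf;
	bp->b_flags |= _XBF_VMEM;	/* hypothetical buffer type flag */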
I ask because it seems like that would help clean up the code a bit, for
one. It might also facilitate some degree of testing of the XFS bits
(even if only used sparingly in DEBUG mode, were it not suitable for
generic/mmu use). We currently allocate and map the buffer pages
separately, and I'm not sure there's any particular reason for doing
that beyond some congestion handling in the allocation code and
XBF_UNMAPPED buffers, the latter probably being irrelevant for nommu.
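For reference, the two steps today look roughly like this (paraphrased
from xfs_buf_allocate_memory() and _xfs_buf_map_pages(), not a verbatim
quote):

	/* step 1: allocate the backing pages one at a time */
	for (i = 0; i < bp->b_page_count; i++)
		bp->b_pages[i] = alloc_page(gfp_mask);

	/* step 2: map them into a virtually contiguous range */
	bp->b_addr = vm_map_ram(bp->b_pages, bp->b_page_count,
				-1, PAGE_KERNEL);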
Any other thoughts on that?
>  fs/xfs/xfs_buf.c | 27 +++++++++++++++++++++++++++
>  1 file changed, 27 insertions(+)
>
> diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> index 8ecffb3..50b5246 100644
> --- a/fs/xfs/xfs_buf.c
> +++ b/fs/xfs/xfs_buf.c
...
> @@ -816,11 +835,19 @@ xfs_buf_get_uncached(
>  	if (error)
>  		goto fail_free_buf;
>  
> +#ifdef CONFIG_MMU
>  	for (i = 0; i < page_count; i++) {
>  		bp->b_pages[i] = alloc_page(xb_to_gfp(flags));
>  		if (!bp->b_pages[i])
>  			goto fail_free_mem;
>  	}
> +#else
> +	bp->b_pages[0] = alloc_pages(xb_to_gfp(flags), order_base_2(page_count));
> +	if (!bp->b_pages[0])
> +		goto fail_free_buf;
> +	for (i = 1; i < page_count; i++)
> +		bp->b_pages[i] = bp->b_pages[i - 1] + 1;
> +#endif
We still have a path into __free_page() further down in this function if
_xfs_buf_map_pages() fails. Granted, _xfs_buf_map_pages() should
probably never fail in this case, but it still looks like a land mine at
the very least.
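If the high-order allocation is kept, the error path probably needs the
same #ifdef treatment as the allocation hunk. Again just a sketch of one
way to plug that hole:

 fail_free_mem:
#ifdef CONFIG_MMU
	while (--i >= 0)
		__free_page(bp->b_pages[i]);
#else
	/* all pages came from a single high-order allocation */
	__free_pages(bp->b_pages[0], order_base_2(page_count));
#endif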
Brian
>  	bp->b_flags |= _XBF_PAGES;
>  
>  	error = _xfs_buf_map_pages(bp, 0);
> --
> 1.9.1
>