Message-ID: <CAE1zotJE1-qkHnSrJ068-7D5a5vYRr9cSy-MfXy4DjJ8d+YekQ@mail.gmail.com>
Date: Fri, 20 Nov 2015 16:09:06 +0200
From: Octavian Purdila <octavian.purdila@...el.com>
To: Dave Chinner <david@...morbit.com>
Cc: Brian Foster <bfoster@...hat.com>, xfs@....sgi.com,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
lkml <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH] xfs: support for non-mmu architectures
On Fri, Nov 20, 2015 at 1:35 AM, Dave Chinner <david@...morbit.com> wrote:
> On Thu, Nov 19, 2015 at 10:55:25AM -0500, Brian Foster wrote:
>> On Wed, Nov 18, 2015 at 12:46:21AM +0200, Octavian Purdila wrote:
>> > Naive implementation for non-mmu architectures: allocate physically
>> > contiguous xfs buffers with alloc_pages. Terribly inefficient in terms
>> > of memory use and fragmentation under high I/O loads, but it may be
>> > good enough for the basic usage that most non-mmu architectures need.
>> >
>> > This patch was tested with lklfuse [1] and basic operations seem to
>> > work even with 16MB allocated for LKL.
>> >
>> > [1] https://github.com/lkl/linux
>> >
>> > Signed-off-by: Octavian Purdila <octavian.purdila@...el.com>
>> > ---
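For reference, the approach quoted above boils down to roughly the
following; the helper name is made up and this is a simplification of
what the patch does, not the patch itself:

#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Naive nommu sketch: grab one physically contiguous, page-aligned
 * chunk for the whole buffer instead of building a page array and
 * mapping it afterwards.
 */
static void *xb_alloc_contig(size_t size, gfp_t gfp)
{
        unsigned int order = get_order(size);   /* smallest order with 2^order pages >= size */
        struct page *page;

        page = alloc_pages(gfp, order);         /* physically contiguous run of pages */
        if (!page)
                return NULL;                    /* likely under fragmentation / high I/O load */

        return page_address(page);              /* usable directly, no vmap step needed */
}

Freeing is the mirror image, __free_pages(virt_to_page(addr), order),
which is why fragmentation hurts: every buffer needs a single run of
2^order free pages.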
>>
>> Interesting, though this makes me wonder why we couldn't have a new
>> _XBF_VMEM (for example) buffer type that uses vmalloc(). I'm not
>> familiar with mmu-less context, but I see that mm/nommu.c has a
>> __vmalloc() interface that looks like it ultimately translates into an
>> alloc_pages() call. Would that accomplish what this patch is currently
>> trying to do?
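If I understand the suggestion correctly, it would look roughly like
the sketch below; the flag value, struct and helper names are all made
up, only the _XBF_VMEM name comes from your example:

#include <linux/vmalloc.h>
#include <linux/errno.h>

#define _XBF_VMEM       (1u << 22)      /* hypothetical flag bit */

struct vmem_buf {                       /* stand-in for struct xfs_buf */
        void            *b_addr;
        unsigned int    b_flags;
};

static int vmem_buf_allocate(struct vmem_buf *bp, size_t size)
{
        /*
         * vzalloc() allocates the pages and maps them in one step; on
         * !CONFIG_MMU the mm/nommu.c implementation falls back to a
         * physically contiguous (kmalloc-style) allocation, so the same
         * path would cover both configurations.
         */
        bp->b_addr = vzalloc(size);
        if (!bp->b_addr)
                return -ENOMEM;
        bp->b_flags |= _XBF_VMEM;       /* so teardown knows to vfree() */
        return 0;
}

static void vmem_buf_free(struct vmem_buf *bp)
{
        if (bp->b_flags & _XBF_VMEM)
                vfree(bp->b_addr);      /* unmaps and frees in one step */
}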
>
> vmalloc is always a last resort. vmalloc space on 32 bit systems is
> extremely limited and it is easy to exhaust with XFS.
Doesn't vm_map_ram() use the same virtual address space as vmalloc()?
>
> Also, vmalloc limits the control we have over allocation context
> (e.g. the hoops we jump through in kmem_alloc_large() to maintain
> GFP_NOFS contexts), so just using vmalloc doesn't make things much
> simpler from an XFS perspective.
>
I have zero experience with XFS, so apologies if I'm asking obvious
questions. AFAICS there are no memalloc_noio_save() hoops in the page
allocation part, only in the vm_map_ram() part. Could we keep the
memalloc_noio_save() bracketing and use vmalloc() instead of
vm_map_ram()?
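To make sure we are talking about the same thing, what I have in mind
is roughly the sketch below (written from memory; the function name,
parameters and GFP details are illustrative, not the actual xfs_buf
code):

#include <linux/sched.h>        /* memalloc_noio_save()/restore() */
#include <linux/vmalloc.h>
#include <asm/pgtable.h>        /* PAGE_KERNEL */

/*
 * Keep the NOIO bracketing that the vm_map_ram() path uses today, but
 * let vmalloc do the page allocation and the mapping in a single step.
 */
static void *buf_alloc_mapped(size_t size, bool nofs_context)
{
        unsigned int noio_flag = 0;
        void *addr;

        if (nofs_context)
                noio_flag = memalloc_noio_save();       /* allocations here can't recurse into fs/IO reclaim */

        addr = __vmalloc(size, GFP_KERNEL | __GFP_ZERO, PAGE_KERNEL);

        if (nofs_context)
                memalloc_noio_restore(noio_flag);

        return addr;
}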
>> I ask because it seems like that would help clean up the code a bit, for
>> one. It might also facilitate some degree of testing of the XFS bits
>> (even if utilized sparingly in DEBUG mode if it weren't suitable enough
>> for generic/mmu use). We currently allocate and map the buffer pages
>> separately and I'm not sure if there's any particular reasons for doing
>> that outside of some congestion handling in the allocation code and
>> XBF_UNMAPPED buffers, the latter probably being irrelevant for nommu.
>> Any other thoughts on that?
>
> We could probably clean the code up more (the allocation logic
> is now largely a historic relic) but I'm not convinced yet that we
> should be spending any time trying to specifically support mmu-less
> hardware.
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@...morbit.com