Message-ID: <20170616131554.GD11676@redhat.com>
Date: Fri, 16 Jun 2017 15:15:54 +0200
From: Andrea Arcangeli <aarcange@...hat.com>
To: Prakash Sangappa <prakash.sangappa@...cle.com>
Cc: Christoph Hellwig <hch@...radead.org>,
Dave Hansen <dave.hansen@...el.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Mike Rapoport <rppt@...ux.vnet.ibm.com>,
Mike Kravetz <mike.kravetz@...cle.com>
Subject: Re: [PATCH RFC] hugetlbfs 'noautofill' mount option

Hello Prakash,

On Tue, May 09, 2017 at 01:59:34PM -0700, Prakash Sangappa wrote:
>
>
> On 5/9/17 1:58 AM, Christoph Hellwig wrote:
> > On Mon, May 08, 2017 at 03:12:42PM -0700, prakash.sangappa wrote:
> >> Regarding #3 as a general feature, do we want to
> >> consider this and the complexity associated with the
> >> implementation?
> > We have to. Given that no one has exclusive access to hugetlbfs
> > a mount option is fundamentally the wrong interface.
>
>
> A hugetlbfs filesystem may need to be mounted for exclusive use by
> an application. Note, recently the 'min_size' mount option was added
> to hugetlbfs, which reserves a minimum number of huge pages for that
> filesystem for use by an application. If the filesystem with min_size
> specified is not set up for exclusive use by an application, then the
> purpose of reserving huge pages is defeated. The min_size option was
> meant for use by applications like the database.
>
> Also, I am investigating enabling hugetlbfs mounts within a user
> namespace's mount namespace. That would allow an application to mount
> a hugetlbfs filesystem inside a namespace exclusively for its use,
> running as a non-root user. For this it seems like 'min_size' should
> be subject to some user limits. Anyway, mounting inside user
> namespaces is a different discussion.
>
> So, if a filesystem has to be set up for exclusive use by an
> application, then different mount options can be used for that
> filesystem.

Before userfaultfd I used a madvise that triggered SIGBUS. Aside from
the performance being much lower than userfaultfd (return to userland,
SIGBUS handling, and a new kernel entry to communicate through a pipe
with a memory manager), it couldn't work reliably: you're not going to
get exact information on the virtual address that triggered the fault
if the fault happens in a copy-user inside some random syscall, where
depending on the syscall some random error is returned instead. So it
couldn't work transparently to the app as far as syscalls and
get_user_pages drivers were concerned.
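
Just to illustrate the old scheme (untested sketch; the pipe fd to the
manager and the wire format are made up for illustration):

/* Pre-userfaultfd approach sketched above: catch SIGBUS, recover the
 * faulting address from si_addr and forward it to a memory manager
 * over a pipe.  Only direct userland accesses deliver SIGBUS with a
 * usable si_addr; faults taken in copy-user inside a syscall just
 * make the syscall fail with some error instead.
 */
#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

static int mgr_pipe_fd;	/* pipe to the memory manager (assumption) */

static void sigbus_handler(int sig, siginfo_t *si, void *ucontext)
{
	uint64_t addr = (uint64_t)(uintptr_t)si->si_addr;

	/* write(2) is async-signal-safe; the manager is expected to
	 * fill the page and signal back before we return and retry. */
	write(mgr_pipe_fd, &addr, sizeof(addr));
}

static void install_sigbus_handler(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_sigaction = sigbus_handler;
	sa.sa_flags = SA_SIGINFO;
	sigemptyset(&sa.sa_mask);
	sigaction(SIGBUS, &sa, NULL);
}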

With your solution, if you pass a corrupted pointer to a random read()
syscall you're going to get an error, but presumably you already handle
any syscall error and stop the app.

This is a special case because you don't care about performance and you
don't mind syscalls like read() returning random EFAULT errors.
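
For completeness, this is roughly what the application side would look
like with the proposed option (untested; the mount point, sizes and the
exact option string are just for illustration, and the exact errno
depends on the syscall):

/* Mount a private hugetlbfs instance with the proposed 'noautofill'
 * option and poke at an unpopulated page.  Needs privileges to mount. */
#define _GNU_SOURCE
#include <sys/mount.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
	size_t huge_sz = 2UL << 20;	/* one 2M huge page (assumption) */

	mount("none", "/mnt/huge", "hugetlbfs", 0, "noautofill,min_size=64M");

	int fd = open("/mnt/huge/seg", O_CREAT | O_RDWR, 0600);
	ftruncate(fd, huge_sz);
	char *p = mmap(NULL, huge_sz, PROT_READ | PROT_WRITE,
		       MAP_SHARED, fd, 0);

	/* A direct touch of the hole would raise SIGBUS instead of
	 * silently allocating a zeroed huge page:
	 *	p[0] = 1;
	 * while using it as a syscall buffer fails with an error: */
	int hfd = open("/etc/hostname", O_RDONLY);
	if (read(hfd, p, 16) < 0)
		perror("read into unpopulated noautofill page");
	return 0;
}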

This mount option seems non-intrusive enough and hugetlbfs is quite
special already, so I'm not particularly concerned that it's one more
special tweak.

If it were enough to convert the SIGBUS into a (killable) process
hang, you could still use uffd and there would be no need to send the
uffd to a manager: you'd find the corrupting buggy process stuck in
handle_userfault().
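
Something like this (untested sketch, assuming a kernel with hugetlbfs
userfaultfd support; error handling trimmed):

/* Register the hugetlbfs range with userfaultfd in MISSING mode and
 * never service the fd: any buggy access to a hole then sits
 * (killably) in handle_userfault() instead of raising SIGBUS. */
#define _GNU_SOURCE
#include <linux/userfaultfd.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <unistd.h>

static int uffd_block_holes(void *addr, size_t len)
{
	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC);
	struct uffdio_api api = { .api = UFFD_API, .features = 0 };
	struct uffdio_register reg = {
		.range	= { .start = (unsigned long)addr, .len = len },
		.mode	= UFFDIO_REGISTER_MODE_MISSING,
	};

	if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api) ||
	    ioctl(uffd, UFFDIO_REGISTER, &reg)) {
		close(uffd);
		return -1;
	}
	/* Intentionally no read()/poll() on uffd: faults on holes in
	 * [addr, addr + len) hang until the process is killed. */
	return uffd;
}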

As an alternative to the mount option we could consider adding
UFFD_FEATURE_SIGBUS, which would tell handle_userfault() to simply
return VM_FAULT_SIGBUS in the presence of a page fault event. You'd
still get weird EFAULT or erratic retvals from syscalls, so it would
only be usable for your robustness feature. Then you could also use
UFFDIO_COPY to fill the memory atomically, which runs faster than a
page fault (a fallocate punch hole is still required to zap it).
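
i.e. something along these lines (untested sketch; the
UFFD_FEATURE_SIGBUS bit is only the proposal above, so its value here
is a placeholder, and the range still has to be registered as in the
previous snippet):

#define _GNU_SOURCE
#include <linux/userfaultfd.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <unistd.h>

#ifndef UFFD_FEATURE_SIGBUS
#define UFFD_FEATURE_SIGBUS	(1 << 7)	/* placeholder, assumption */
#endif

/* Ask for plain VM_FAULT_SIGBUS on missing pages instead of queueing
 * fault events to a manager. */
static int uffd_open_sigbus(void)
{
	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC);
	struct uffdio_api api = {
		.api		= UFFD_API,
		.features	= UFFD_FEATURE_SIGBUS,
	};

	if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api))
		return -1;
	return uffd;
}

/* Fill one huge page atomically, faster than taking the fault ... */
static int uffd_fill(int uffd, void *dst, void *src, size_t huge_sz)
{
	struct uffdio_copy copy = {
		.dst = (unsigned long)dst,
		.src = (unsigned long)src,
		.len = huge_sz,
	};

	return ioctl(uffd, UFFDIO_COPY, &copy);
}

/* ... and zap it again with a punch hole on the hugetlbfs file. */
static int zap(int fd, off_t off, size_t huge_sz)
{
	return fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			 off, huge_sz);
}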

Adding a single "if (ctx->features & UFFD_FEATURE_SIGBUS) goto out;"
branch to handle_userfault() for this corner case isn't great, while
the hugetlbfs mount option is absolutely zero cost to
handle_userfault(), which is primarily why I'm not against it. That
said, the branch isn't going to be measurable, so it would also be ok
to add such a feature.

Thanks,
Andrea