Date:	Wed, 28 Oct 2015 17:21:59 -0700
From:	Mike Kravetz <mike.kravetz@...cle.com>
To:	Hugh Dickins <hughd@...gle.com>,
	Andrew Morton <akpm@...ux-foundation.org>
Cc:	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	Dave Hansen <dave.hansen@...ux.intel.com>,
	Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
	Davidlohr Bueso <dave@...olabs.net>,
	Andrea Arcangeli <aarcange@...hat.com>
Subject: Re: [PATCH v2 0/4] hugetlbfs fallocate hole punch race with page
 faults

On 10/28/2015 02:13 PM, Mike Kravetz wrote:
> On 10/28/2015 02:00 PM, Hugh Dickins wrote:
>> On Wed, 28 Oct 2015, Mike Kravetz wrote:
>>> On 10/27/2015 08:34 PM, Hugh Dickins wrote:
>>>
>>> Thanks for the detailed response, Hugh.  I will try to address your questions
>>> and provide more reasoning behind the use case and need for this code.
>>
>> And thank you for your detailed response, Mike: that helped a lot.
>>
>>> Ok, here is a bit more explanation of the proposed use case.  It all
>>> revolves around a DB's use of hugetlbfs and the desire for more control
>>> over the underlying memory.  This additional control is achieved by
>>> adding existing fallocate and userfaultfd semantics to hugetlbfs.
>>>
>>> In this use case there is a single process that manages hugetlbfs files
>>> and the underlying memory resources.  It pre-allocates/initializes these
>>> files.
>>>
>>> In addition, there are many other processes which access (rw mode) these
>>> files.  They will simply mmap the files.  It is expected that they will
>>> not fault in any new pages.  Rather, all pages would have been pre-allocated
>>> by the management process.
>>>
>>> At some time, the management process determines that specific ranges of
>>> pages within the hugetlbfs files are no longer needed.  It will then punch
>>> holes in the files.  These 'free' pages within the holes may then be used
>>> for other purposes.  For applications like this (sophisticated DBs), huge
>>> pages are reserved at system init time and closely managed by the
>>> application.
>>> Hence, the desire for this additional control.
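
To make the hole punch side concrete, the management process would do
something roughly like the sketch below.  This is illustrative only (not
code from the series); fd is assumed to be an open hugetlbfs file, and
offset/len hugepage aligned values.

#define _GNU_SOURCE
#include <fcntl.h>

/* deallocate a hugepage aligned range from an open hugetlbfs file */
static int punch_hole(int fd, off_t offset, off_t len)
{
	/* PUNCH_HOLE must be combined with KEEP_SIZE */
	return fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			 offset, len);
}
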
>>>
>>> So, when a hole containing N huge pages is punched, the management process
>>> wants to know that it really has N huge pages for other purposes.  Ideally,
>>> none of the other processes mapping this file/area would access the hole.
>>> This is an application error, and it can be 'caught' with userfaultfd.
>>>
>>> Since these other (non-management) processes will never fault in pages,
>>> they would simply set up userfaultfd to catch any page faults immediately
>>> after mmaping the hugetlbfs file.
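
For reference, that setup would look roughly like the sketch below once the
"in development" hugetlbfs userfaultfd support is in place.  It is only a
sketch: error handling and the thread that reads fault events from the uffd
descriptor are omitted, and watch_mapping is just an illustrative name.

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/userfaultfd.h>

static int watch_mapping(int fd, size_t len)
{
	struct uffdio_api api = { .api = UFFD_API };
	struct uffdio_register reg = { };
	void *addr;
	int uffd;

	addr = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (addr == MAP_FAILED)
		return -1;

	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api) < 0)
		return -1;

	/* report (do not fill) any fault on a not-present page in the range */
	reg.range.start = (unsigned long)addr;
	reg.range.len = len;
	reg.mode = UFFDIO_REGISTER_MODE_MISSING;
	return ioctl(uffd, UFFDIO_REGISTER, &reg);
}
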
>>>
>>>>
>>>> But it sounds to me more as if the holes you want punched are not
>>>> quite like on other filesystems, and you want to be able to police
>>>> them afterwards with userfaultfd, to prevent them from being refilled.
>>>
>>> I am not sure if they are any different.
>>>
>>> One could argue that a hole punch operation must always result in all
>>> pages within the hole being deallocated.  As you point out, this could
>>> race with a fault.  Previously, there would be no way to determine if
>>> all pages had been deallocated because user space could not detect this
>>> race.  Now, userfaultfd allows user space to catch page faults.  So, it
>>> is now possible to catch such accesses and depend on hole punch
>>> deallocating all pages within the hole.
>>>
>>>>
>>>> Can't userfaultfd be used just slightly earlier, to prevent them from
>>>> being filled while doing the holepunch?  Then no need for this patchset?
>>>
>>> I do not think so, at least with current userfaultfd semantics.  The hole
>>> needs to be punched before accesses to it can be caught with
>>> UFFDIO_REGISTER_MODE_MISSING.
>>
>> Great, that makes sense.
>>
>> I was worried that you needed some kind of atomic treatment of the whole
>> extent punched, but all you need is to close the hole/fault race one
>> hugepage at a time.
>>
>> Throw away all of 1/4, 2/4, 3/4: I think all you need is your 4/4
>> (plus i_mmap_lock_write around the hugetlb_vmdelete_list of course).
>>
>> There you already do the single hugepage hugetlb_vmdelete_list()
>> under mutex_lock(&hugetlb_fault_mutex_table[hash]).
>>
>> And it should come as no surprise that hugetlb_fault() does most
>> of its work under that same mutex.
>>
>> So once remove_inode_hugepages() unlocks the mutex, that page is gone
>> from the file, and userfaultfd UFFDIO_REGISTER_MODE_MISSING will do
>> what you want, won't it?
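
For anyone following along, the per-hugepage serialization being described
looks roughly like this (heavily simplified sketch of the hole punch path,
not the literal kernel code):

	/* in remove_inode_hugepages(), for each huge page in the hole */
	hash = hugetlb_fault_mutex_hash(...);	/* keyed by mapping + index */
	mutex_lock(&hugetlb_fault_mutex_table[hash]);

	i_mmap_lock_write(mapping);
	hugetlb_vmdelete_list(&mapping->i_mmap, ...);	/* unmap this page */
	i_mmap_unlock_write(mapping);

	/* ... remove the page from the file's page cache ... */

	mutex_unlock(&hugetlb_fault_mutex_table[hash]);
	/*
	 * hugetlb_fault() takes the same mutex, so once it is dropped the
	 * page is gone from the file and any later access to this index is
	 * a "missing" fault that UFFDIO_REGISTER_MODE_MISSING will report.
	 */
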
>>
>> I don't think "my" code buys you anything at all: you're not in danger of
>> shmem's starvation livelock issue, partly because remove_inode_hugepages()
>> uses the simple loop from start to end, and partly because hugetlb_fault()
>> already takes the serializing mutex (no equivalent in shmem_fault()).
>>
>> Or am I dreaming?
> 
> I don't think you are dreaming.
> 
> I should have stepped back and thought about this more before pulling
> in the shmem code.  It really is only a 'page at a time' operation, and we
> can use the fault mutex table for that.
> 
> I'll code it up with just the changes needed for 4/4 and put it through some
> stress testing.

Thanks again, Hugh.  Testing was successful: the current hugetlbfs fallocate
stress tests, and testing with the "in development" hugetlbfs userfaultfd code.

Andrew, would you like a single patch that combines 4/4 of the series
with the i_mmap_lock_write change?  You could then throw away the previous
patches and the log would look nicer.

-- 
Mike Kravetz
