Message-ID: <ae24c722-0b4a-def4-8cfe-e8b3b48a22c6@huawei.com>
Date: Thu, 28 Jul 2022 10:02:40 +0800
From: Miaohe Lin <linmiaohe@...wei.com>
To: Mike Kravetz <mike.kravetz@...cle.com>
CC: <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
Muchun Song <songmuchun@...edance.com>,
Michal Hocko <mhocko@...e.com>, Peter Xu <peterx@...hat.com>,
Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
David Hildenbrand <david@...hat.com>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.vnet.ibm.com>,
Andrea Arcangeli <aarcange@...hat.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Davidlohr Bueso <dave@...olabs.net>,
Prakash Sangappa <prakash.sangappa@...cle.com>,
James Houghton <jthoughton@...gle.com>,
Mina Almasry <almasrymina@...gle.com>,
Pasha Tatashin <pasha.tatashin@...een.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Ray Fucillo <Ray.Fucillo@...ersystems.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC PATCH v4 4/8] hugetlbfs: catch and handle truncate racing
with page faults
On 2022/7/28 3:00, Mike Kravetz wrote:
> On 07/27/22 17:20, Miaohe Lin wrote:
>> On 2022/7/7 4:23, Mike Kravetz wrote:
>>> Most hugetlb fault handling code checks for faults beyond i_size.
>>> While there are early checks in the code paths, the most difficult
>>> to handle are those discovered after taking the page table lock.
>>> At this point, we have possibly allocated a page and consumed
>>> associated reservations and possibly added the page to the page cache.
>>>
>>> When discovering a fault beyond i_size, be sure to:
>>> - Remove the page from page cache, else it will sit there until the
>>> file is removed.
>>> - Do not restore any reservation for the page consumed. Otherwise
>>> there will be an outstanding reservation for an offset beyond the
>>> end of file.
>>>
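(To make sure I understand the intended cleanup, I imagine the backout path ends up doing roughly the
following. This is only an untested sketch with approximate names, not the actual patch.)

	if (beyond_i_size) {
		/* drop the page-cache entry this fault just added ... */
		if (new_pagecache_page)
			remove_huge_page(page);
		/*
		 * ... and do NOT restore the reservation, otherwise an
		 * outstanding reservation would be left for an offset
		 * beyond end of file.
		 */
	}
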
>>> The 'truncation' code in remove_inode_hugepages must deal with fault
>>> code potentially removing a page/folio from the cache after the page was
>>> returned by filemap_get_folios and before locking the page. This can be
>>> discovered by a change in folio_mapping() after taking folio lock. In
>>> addition, this code must deal with fault code potentially consuming
>>> and returning reservations. To synchronize this, remove_inode_hugepages
>>> will now take the fault mutex for ALL indices in the hole or truncated
>>> range. In this way, it KNOWS fault code has finished with the page/index
>>> OR fault code will see the updated file size.
>>>
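(Again just restating my understanding: the removal loop described above would then look roughly like
the untested sketch below. Helper names are approximate, and the real code batches lookups with
filemap_get_folios rather than looking up one folio at a time.)

	for (index = start; index < end; index++) {
		u32 hash = hugetlb_fault_mutex_hash(mapping, index);

		mutex_lock(&hugetlb_fault_mutex_table[hash]);
		folio = filemap_lock_folio(mapping, index);
		if (folio) {
			/* fault code may have removed it after the lookup */
			if (folio_mapping(folio) == mapping)
				remove_huge_page(&folio->page);
			folio_unlock(folio);
			folio_put(folio);
		}
		mutex_unlock(&hugetlb_fault_mutex_table[hash]);
	}
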
>>> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
>>> ---
>>
>> <snip>
>>
>>> @@ -5606,8 +5610,10 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
>>>
>>> ptl = huge_pte_lock(h, mm, ptep);
>>> size = i_size_read(mapping->host) >> huge_page_shift(h);
>>> - if (idx >= size)
>>> + if (idx >= size) {
>>> + beyond_i_size = true;
>>
>> Thanks for your patch. There is one question:
>>
>> Since races between hugetlb pagefault and truncate is guarded by hugetlb_fault_mutex,
>> do we really need to check it again after taking the page table lock?
>>
>
> Well, the fault mutex can only guard a single hugetlb page. The fault mutex
> is actually an array/table of mutexes hashed by mapping address and file index.
> So, during truncation we take the mutex for each page as they are
> unmapped and removed. So, the fault mutex only synchronizes operations
> on one specific page. The idea with this patch is to coordinate the fault
> code and truncate code when operating on the same page.
>
> In addition, changing the file size happens early in the truncate process
> before taking any locks/mutexes.
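So, if I read it correctly, both the fault path and the truncate path serialize on the same hashed
mutex for a given (mapping, index), roughly like this (untested sketch):

	hash = hugetlb_fault_mutex_hash(mapping, idx);
	mutex_lock(&hugetlb_fault_mutex_table[hash]);
	/* hugetlb_no_page() and its i_size checks run under this mutex */
	...
	mutex_unlock(&hugetlb_fault_mutex_table[hash]);
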
I wonder whether we can live with that race to keep the code simpler. If the file size change happens
after hugetlb_fault checks i_size but before it takes the page table lock, wouldn't the truncate code
remove the hugetlb page from the page cache for us after hugetlb_fault finishes, even if we do not roll
back on the second i_size check under the page table lock?

In other words: if hugetlb_fault sees a truncated inode, back out early; if not, let the truncate code
do its work. That way we would not need to complicate the already complicated error path. Or am I
missing something?
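Something along these lines (very rough, untested sketch of the idea):

	ptl = huge_pte_lock(h, mm, ptep);
	size = i_size_read(mapping->host) >> huge_page_shift(h);
	if (idx >= size)
		goto backout;	/*
				 * Just drop the locks and the page ref.
				 * Truncate takes this index's fault mutex
				 * later and removes the page from the page
				 * cache itself, so no rollback is needed
				 * here.
				 */
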
Thanks.