Message-ID: <367db2b5-5242-9be6-1d5f-d13e35f84167@huawei.com>
Date:   Wed, 7 Sep 2022 10:11:16 +0800
From:   Miaohe Lin <linmiaohe@...wei.com>
To:     Mike Kravetz <mike.kravetz@...cle.com>
CC:     <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
        Muchun Song <songmuchun@...edance.com>,
        David Hildenbrand <david@...hat.com>,
        Michal Hocko <mhocko@...e.com>, Peter Xu <peterx@...hat.com>,
        Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
        "Aneesh Kumar K . V" <aneesh.kumar@...ux.vnet.ibm.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
        Davidlohr Bueso <dave@...olabs.net>,
        Prakash Sangappa <prakash.sangappa@...cle.com>,
        James Houghton <jthoughton@...gle.com>,
        Mina Almasry <almasrymina@...gle.com>,
        Pasha Tatashin <pasha.tatashin@...een.com>,
        Axel Rasmussen <axelrasmussen@...gle.com>,
        Ray Fucillo <Ray.Fucillo@...ersystems.com>,
        <linux-s390@...r.kernel.org>, <hca@...ux.ibm.com>,
        <gor@...ux.ibm.com>, Alexander Gordeev <agordeev@...ux.ibm.com>,
        Sven Schnelle <svens@...ux.ibm.com>,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 4/8] hugetlb: handle truncate racing with page faults

On 2022/9/7 7:08, Mike Kravetz wrote:
> On 09/06/22 11:05, Mike Kravetz wrote:
>> On 09/06/22 09:48, Mike Kravetz wrote:
>>> On 09/06/22 15:57, Sven Schnelle wrote:
>>>> Hi Mike,
>>>>
>>>> Mike Kravetz <mike.kravetz@...cle.com> writes:
>>>>
>>>>> When page fault code needs to allocate and instantiate a new hugetlb
>>>>> page (hugetlb_no_page), it checks early to determine if the fault is
>>>>> beyond i_size.  When discovered early, it is easy to abort the fault and
>>>>> return an error.  However, it becomes much more difficult to handle when
>>>>> discovered later after allocating the page and consuming reservations
>>>>> and adding to the page cache.  Backing out changes in such instances
>>>>> becomes difficult and error prone.
>>>>>
>>>>> Instead of trying to catch and backout all such races, use the hugetlb
>>>>> fault mutex to handle truncate racing with page faults.  The most
>>>>> significant change is modification of the routine remove_inode_hugepages
>>>>> such that it will take the fault mutex for EVERY index in the truncated
>>>>> range (or hole in the case of hole punch).  Since remove_inode_hugepages
>>>>> is called in the truncate path after updating i_size, we can experience
>>>>> races as follows.
>>>>> - truncate code updates i_size and takes fault mutex before a racing
>>>>>   fault.  After fault code takes mutex, it will notice fault beyond
>>>>>   i_size and abort early.
>>>>> - fault code obtains mutex, and truncate updates i_size after early
>>>>>   checks in fault code.  fault code will add page beyond i_size.
>>>>>   When truncate code takes mutex for page/index, it will remove the
>>>>>   page.
>>>>> - truncate updates i_size, but fault code obtains mutex first.  If
>>>>>   fault code sees updated i_size it will abort early.  If fault code
>>>>>   does not see updated i_size, it will add page beyond i_size and
>>>>>   truncate code will remove page when it obtains fault mutex.
>>>>>
>>>>> Note, for performance reasons remove_inode_hugepages will still use
>>>>> filemap_get_folios for bulk folio lookups.  For indices not returned in
>>>>> the bulk lookup, it will need to lookup individual folios to check for
>>>>> races with page fault.
>>>>>
>>>>> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
>>>>> ---
>>>>>  fs/hugetlbfs/inode.c | 184 +++++++++++++++++++++++++++++++------------
>>>>>  mm/hugetlb.c         |  41 +++++-----
>>>>>  2 files changed, 152 insertions(+), 73 deletions(-)
>>>>
>>>> With linux-next starting from next-20220831 I see hangs with this
>>>> patch applied while running the glibc test suite. The patch doesn't
>>>> revert cleanly on top, so I checked out the commit before it, and
>>>> with that revision everything works.
>>>>
>>>> It looks like the malloc test suite in glibc triggers this. I cannot
>>>> identify a single test causing it; rather, it is the combination of
>>>> multiple tests. Running the test suite on a single CPU works. Given the
>>>> subject of the patch, that's likely not a surprise.
>>>>
>>>> This is on s390, and the warning i get from RCU is:
>>>>
>>>> [ 1951.906997] rcu: INFO: rcu_sched self-detected stall on CPU
>>>> [ 1951.907009] rcu:     60-....: (6000 ticks this GP) idle=968c/1/0x4000000000000000 softirq=43971/43972 fqs=2765
>>>> [ 1951.907018]  (t=6000 jiffies g=116125 q=1008072 ncpus=64)
>>>> [ 1951.907024] CPU: 60 PID: 1236661 Comm: ld64.so.1 Not tainted 6.0.0-rc3-next-20220901 #340
>>>> [ 1951.907027] Hardware name: IBM 3906 M04 704 (z/VM 7.1.0)
>>>> [ 1951.907029] Krnl PSW : 0704e00180000000 00000000003d9042 (hugetlb_fault_mutex_hash+0x2a/0xd8)
>>>> [ 1951.907044]            R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:2 PM:0 RI:0 EA:3
>>>> [ 1951.907095] Call Trace:
>>>> [ 1951.907098]  [<00000000003d9042>] hugetlb_fault_mutex_hash+0x2a/0xd8
>>>> [ 1951.907101] ([<00000000005845a6>] fault_lock_inode_indicies+0x8e/0x128)
>>>> [ 1951.907107]  [<0000000000584876>] remove_inode_hugepages+0x236/0x280
>>>> [ 1951.907109]  [<0000000000584a7c>] hugetlbfs_evict_inode+0x3c/0x60
>>>> [ 1951.907111]  [<000000000044fe96>] evict+0xe6/0x1c0
>>>> [ 1951.907116]  [<000000000044a608>] __dentry_kill+0x108/0x1e0
>>>> [ 1951.907119]  [<000000000044ac64>] dentry_kill+0x6c/0x290
>>>> [ 1951.907121]  [<000000000044afec>] dput+0x164/0x1c0
>>>> [ 1951.907123]  [<000000000042a4d6>] __fput+0xee/0x290
>>>> [ 1951.907127]  [<00000000001794a8>] task_work_run+0x88/0xe0
>>>> [ 1951.907133]  [<00000000001f77a0>] exit_to_user_mode_prepare+0x1a0/0x1a8
>>>> [ 1951.907137]  [<0000000000d0e42e>] __do_syscall+0x11e/0x200
>>>> [ 1951.907142]  [<0000000000d1d392>] system_call+0x82/0xb0
>>>> [ 1951.907145] Last Breaking-Event-Address:
>>>> [ 1951.907146]  [<0000038001d839c0>] 0x38001d839c0
>>>>
>>>> One of the hanging test cases is usually malloc/tst-malloc-too-large-malloc-hugetlb2.
>>>>
>>>> Any thoughts?
>>>
>>> Thanks for the report, I will take a look.
>>>
>>> My first thought is that this fix may not have been applied:
>>> https://lore.kernel.org/linux-mm/Ywepr7C2X20ZvLdn@monkey/
>>> However, I see that it is in next-20220831.
>>>
>>> Hopefully, this will recreate on x86.
>>
>> One additional thought ...
>>
>> With this patch, we will take the hugetlb fault mutex for EVERY index in the
>> range being truncated or hole punched.  In the case of a very large file, that
>> is no different than code today where we take the mutex when removing pages
>> from the file.  What is different is taking the mutex for indices that are
>> part of holes in the file.  Consider a very large file with only one page, at
>> a very large offset.  We would then take the mutex for each index in that
>> very large hole.  Depending on the size of the hole, this could appear to be
>> a hang.
>>
>> For the above locking scheme to work, we need to take the mutex for indices
>> in holes in case there would happen to be a racing page fault.  However, there
>> are only a limited number of fault mutexes (it is a table).  So, we only really
>> need to take at most num_fault_mutexes mutexes.  We could keep track of
>> these with a bitmap.
>>
>> I am not sure this is the issue you are seeing, but a test named
>> tst-malloc-too-large-malloc-hugetlb2 may be doing this.
>>
>> In any case, I think this issue needs to be addressed before this series can
>> move forward.
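
For the bitmap idea above, I imagine something like the below untested
sketch (the function name is made up); each of the num_fault_mutexes
mutexes would then be cycled at most once, no matter how large the hole:

/*
 * Untested sketch, made-up name: cycle each fault mutex at most once
 * while walking a hole, so at most num_fault_mutexes locks are taken
 * regardless of the size of the hole.
 */
static void fault_lock_hole_sketch(struct address_space *mapping,
				   pgoff_t start, pgoff_t end)
{
	unsigned long *taken = bitmap_zalloc(num_fault_mutexes, GFP_KERNEL);
	pgoff_t idx;
	u32 hash;

	if (!taken)
		return;		/* a real version would need a fallback */

	for (idx = start; idx < end; idx++) {
		hash = hugetlb_fault_mutex_hash(mapping, idx);
		if (test_and_set_bit(hash, taken))
			continue;	/* this mutex was already cycled */
		mutex_lock(&hugetlb_fault_mutex_table[hash]);
		/* re-check for a page added by a racing fault here */
		mutex_unlock(&hugetlb_fault_mutex_table[hash]);
	}
	bitmap_free(taken);
}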
> 
> Well, even if we address the issue of taking the same mutex multiple times,

Could we instead take all of the hugetlb fault mutexes at the same time, to ensure that
every possible future hugetlb page fault sees the truncated i_size? Then we could drop all
of the fault mutexes before doing any heavy work. It seems the mutexes can be dropped as
soon as the new i_size is guaranteed to be visible to any future hugetlb page fault user.
But I might be missing something...
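
In rough, untested form, the barrier I have in mind would be something
like this (the function name is made up):

/*
 * Untested sketch, made-up name: with the new i_size already visible,
 * grab every fault mutex once and then release them all.  Any fault
 * that raced with the i_size update either finished first or
 * serializes here; any later fault sees the new i_size and aborts
 * early.  Everything is dropped before the heavy per-page work.
 */
static void hugetlb_fault_barrier(void)
{
	u32 i;

	for (i = 0; i < num_fault_mutexes; i++)
		mutex_lock(&hugetlb_fault_mutex_table[i]);
	for (i = 0; i < num_fault_mutexes; i++)
		mutex_unlock(&hugetlb_fault_mutex_table[i]);
}

(Holding all of the mutexes at once may upset lockdep; cycling them one
at a time, lock then unlock, should give the same barrier.)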

> this new synchronization scheme requires a folio lookup for EVERY index in
> the truncated or hole punched range.  This can easily 'stall' a CPU if there

If the above thought holds, we could still do batch folio lookups instead. Hope my thoughts help. ;)
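
That is, something like the below untested sketch, with the per-index
lookups gone because faults were already fenced off:

/*
 * Untested sketch: with faults already fenced off, walk the truncated
 * range with bulk lookups only, never probing individual hole indices.
 */
static void remove_range_folios_sketch(struct address_space *mapping,
				       pgoff_t start, pgoff_t end)
{
	struct folio_batch fbatch;
	pgoff_t next = start;
	unsigned int i;

	folio_batch_init(&fbatch);
	while (filemap_get_folios(mapping, &next, end - 1, &fbatch)) {
		for (i = 0; i < folio_batch_count(&fbatch); i++) {
			struct folio *folio = fbatch.folios[i];

			folio_lock(folio);
			/* hugetlbfs page removal and accounting here */
			folio_unlock(folio);
		}
		folio_batch_release(&fbatch);
		cond_resched();
	}
}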

Thanks,
Miaohe Lin


> is a really big hole in a file.  One can recreate this easily with fallocate
> to add a single page to a file at a really big offset, and then remove the file.
> 
> I am trying to come up with another algorithm to make this work.
> 
> Andrew, I wanted to give you a heads up that this series may need to be
> pulled if I can not come up with something quickly.
> 
