Message-ID: <5a565d5a-0540-4041-ce63-a8fd5d1bb340@redhat.com>
Date:   Wed, 26 Jan 2022 17:58:41 +0100
From:   David Hildenbrand <david@...hat.com>
To:     Yang Shi <shy828301@...il.com>
Cc:     Jann Horn <jannh@...gle.com>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        Matthew Wilcox <willy@...radead.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linux MM <linux-mm@...ck.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        stable <stable@...r.kernel.org>
Subject: Re: [v2 PATCH] fs/proc: task_mmu.c: don't read mapcount for migration
 entry

On 26.01.22 17:53, Yang Shi wrote:
> On Wed, Jan 26, 2022 at 3:57 AM David Hildenbrand <david@...hat.com> wrote:
>>
>> On 26.01.22 12:48, Jann Horn wrote:
>>> On Wed, Jan 26, 2022 at 12:38 PM David Hildenbrand <david@...hat.com> wrote:
>>>> On 26.01.22 12:29, Jann Horn wrote:
>>>>> On Wed, Jan 26, 2022 at 11:51 AM David Hildenbrand <david@...hat.com> wrote:
>>>>>> On 20.01.22 21:28, Yang Shi wrote:
>>>>>>> The syzbot reported the below BUG:
>>>>>>>
>>>>>>> kernel BUG at include/linux/page-flags.h:785!
>>> [...]
>>>>>>> RIP: 0010:PageDoubleMap include/linux/page-flags.h:785 [inline]
>>>>>>> RIP: 0010:__page_mapcount+0x2d2/0x350 mm/util.c:744
>>> [...]
>>>>>> Does this point at the bigger issue that reading the mapcount without
>>>>>> having the page locked is completely unstable?
>>>>>
>>>>> (See also https://lore.kernel.org/all/CAG48ez0M=iwJu=Q8yUQHD-+eZDg6ZF8QCF86Sb=CN1petP=Y0Q@mail.gmail.com/
>>>>> for context.)
>>>>
>>>> Thanks for the pointer.
>>>>
>>>>>
>>>>> I'm not sure what you mean by "unstable". Do you mean "the result is
>>>>> not guaranteed to still be valid when the call returns", "the result
>>>>> might not have ever been valid", or "the call might crash because the
>>>>> page's state as a compound page is unstable"?
>>>>
>>>> A little bit of everything :)
>>> [...]
>>>>> In case you mean "the result might not have ever been valid":
>>>>> Yes, even with this patch applied, in theory concurrent THP splits
>>>>> could cause us to count some page mappings twice. Arguably that's not
>>>>> entirely correct.
>>>>
>>>> Yes, the snapshot is not atomic and therefore unreliable. That's what
>>>> I mostly meant by "unstable".
>>>>
>>>>>
>>>>> In case you mean "the call might crash because the page's state as a
>>>>> compound page could concurrently change":
>>>>
>>>> I think that's just a by-product of the snapshot not being "correct",
>>>> right?
>>>
>>> I guess you could see it that way? The way I look at it is that
>>> page_mapcount() is designed to return a number that's at least as high
>>> as the number of mappings (rarely higher due to races), and using
>>> page_mapcount() on an unlocked page is legitimate if you're fine with
>>> the rare double-counting of references. In my view, the problem here
>>> is:
>>>
>>> There are different types of references to "struct page" - some of
>>> them allow you to call page_mapcount(), some don't. And in particular,
>>> get_page() doesn't give you a reference that can be used with
>>> page_mapcount(), but locking a (real, non-migration) PTE pointing to
>>> the page does give you such a reference.
>>
>> I assume the point is that as long as you block the page from getting
>> unmapped (PT lock), the compound page cannot get split. As long as the
>> page cannot get unmapped from that page table, you should have a
>> mapcount of at least 1.
> 
> If you mean that holding the ptl could prevent the THP from splitting,
> that is not true, since you may be in the middle of a THP split,
> exactly like the race condition solved by this patch.

While you hold the PT lock and discover a mapped page, unmap_page()
cannot continue and unmap the page. That's what I meant by "as long as
the page cannot be unmapped".

What doesn't work is if you hold the PT lock and discover a migration
entry, because then you're already past unmap_page(). That's the issue
you're fixing.
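
To make that concrete, here is a rough sketch of the distinction at the
PTE level. This is not the actual patch; walk_pte_sketch() is a made-up
name and I'm going from memory on the helper signatures:

#include <linux/mm.h>
#include <linux/printk.h>
#include <linux/swapops.h>

/* Sketch only: shows why a present PTE and a migration entry differ. */
static void walk_pte_sketch(struct vm_area_struct *vma, pmd_t *pmd,
			    unsigned long addr)
{
	spinlock_t *ptl;
	pte_t *pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);

	if (pte_present(*pte)) {
		struct page *page = vm_normal_page(vma, addr, *pte);

		/*
		 * The PT lock keeps unmap_page() from unmapping this
		 * PTE, so the THP split cannot complete under us and
		 * page_mapcount() is safe to call (if imprecise).
		 */
		if (page)
			pr_info("mapcount=%d\n", page_mapcount(page));
	} else if (is_swap_pte(*pte) &&
		   is_migration_entry(pte_to_swp_entry(*pte))) {
		/*
		 * We are already past unmap_page(): the split may be in
		 * progress and the PT lock does not stop it, so calling
		 * page_mapcount() on pfn_swap_entry_to_page() here would
		 * be exactly the bug being fixed.
		 */
	}
	pte_unmap_unlock(pte, ptl);
}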

> 
> Only the page lock or an elevated page refcount can serialize against
> a THP split, AFAIK.
> 
>>
>> But yeah, using the mapcount of a page that is not even mapped
>> (migration entry) is clearly wrong.
>>
>> To summarize: reading the mapcount on an unlocked page will easily
>> return a wrong result, and that result should not be relied upon.
>> Reading the mapcount of a migration entry is dangerous and certainly
>> wrong.
> 
> It depends on your use case. Some callers just want to get a snapshot,
> like smaps, and they don't care.

Right, but as discussed, even the snapshot might be slightly wrong. That
might be just fine for smaps (and I would have enjoyed a comment in the
code stating that :) ).
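
Purely as an illustration of the kind of comment I mean (this is not the
real smaps code; account_page_sketch() and its arguments are made up):

#include <linux/mm.h>

/* Illustration only, not fs/proc/task_mmu.c. */
static void account_page_sketch(struct page *page, unsigned long *pss)
{
	/*
	 * We read the mapcount without holding the page lock, so a
	 * concurrent THP split can make the value slightly off. That
	 * is acceptable here: smaps only reports a best-effort
	 * snapshot anyway.
	 */
	int mapcount = page_mapcount(page);

	if (mapcount > 1)
		*pss += PAGE_SIZE / mapcount;
	else
		*pss += PAGE_SIZE;
}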


-- 
Thanks,

David / dhildenb
