lists.openwall.net - Open Source and information security mailing list archives
Date:   Thu, 24 Nov 2022 12:06:56 +1100
From:   Alistair Popple <apopple@...dia.com>
To:     David Hildenbrand <david@...hat.com>
Cc:     Hugh Dickins <hughd@...gle.com>, Gavin Shan <gshan@...hat.com>,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        akpm@...ux-foundation.org, william.kucharski@...cle.com,
        ziy@...dia.com, kirill.shutemov@...ux.intel.com,
        zhenyzha@...hat.com, shan.gavin@...il.com, riel@...riel.com,
        willy@...radead.org
Subject: Re: [PATCH] mm: migrate: Fix THP's mapcount on isolation


David Hildenbrand <david@...hat.com> writes:

> On 23.11.22 06:14, Hugh Dickins wrote:
>> On Wed, 23 Nov 2022, Gavin Shan wrote:
>> 
>>> The issue is reported when removing memory through a virtio_mem
>>> device. A transparent huge page that has experienced a copy-on-write
>>> fault is wrongly regarded as pinned, so it escapes isolation in
>>> isolate_migratepages_block(). The transparent huge page then can't be
>>> migrated and the corresponding memory block can't be taken offline.
>>>
>>> Fix it by replacing page_mapcount() with total_mapcount(). With this,
>>> the transparent huge page can be isolated and migrated, and the memory
>>> block can be taken offline.
>>>
>>> Fixes: 3917c80280c9 ("thp: change CoW semantics for anon-THP")
>>> Cc: stable@...r.kernel.org   # v5.8+
>>> Reported-by: Zhenyu Zhang <zhenyzha@...hat.com>
>>> Suggested-by: David Hildenbrand <david@...hat.com>
>>> Signed-off-by: Gavin Shan <gshan@...hat.com>
>> Interesting, good catch, looked right to me: except for the Fixes line
>> and mention of v5.8.  That CoW change may have added a case which easily
>> demonstrates the problem, but it would have been the wrong test on a THP
>> for long before then - but only in v5.7 were compound pages allowed
>> through at all to reach that test, so I think it should be
>> Fixes: 1da2f328fa64 ("mm,thp,compaction,cma: allow THP migration for CMA allocations")
>> Cc: stable@...r.kernel.org   # v5.7+
>> Oh, no, stop: this is not so easy, even in the latest tree.
>> Because at the time of that "admittedly racy check", we have no hold
>> at all on the page in question: and if it's PageLRU or PageCompound
>> at one instant, it may be different the next instant.  Which leaves it
>> vulnerable to whatever BUG_ON()s there may be in the total_mapcount()
>> path - needs research.  *Perhaps* there are no more BUG_ON()s in the
>> total_mapcount() path than in the existing page_mapcount() path.
>> I suspect that for this to be safe (before your patch and more so after),
>> it will be necessary to shift the "admittedly racy check" down after the
>> get_page_unless_zero() (and check the sequence of operations when a
>> compound page is initialized).
>
> Grabbing a reference first sounds like the right approach to me.

I think you're right. Without a page reference I don't think it is even
safe to look at the struct page, at least not without synchronisation
against memory hot-unplug, which could remove the struct page itself.
From a quick glance I didn't see anything here that obviously provides
that, though.

>> The races I'm talking about are much much rarer than the condition
>> you are trying to avoid, so it's frustrating; but such races are real,
>> and increasing stable's exposure to them is not so good.
>
> Such checks are always racy and the code has to be able to deal with
> false negatives/positives (we're not even holding the page lock); as
> you state, we just don't want to trigger undefined behavior/BUG.
>
>
> I'm also curious how that migration code handles a THP that's in the
> swapcache. It better should handle such pages correctly, for example,
> by removing them from the swapcache first, otherwise that could block
> migration.
>
>
> For example, in mm/ksm.c:write_protect_page() we have
>
> "page_mapcount(page) + 1 + swapped != page_count(page)"
>
> page_mapcount() and "swapped==0/1" makes sense to me, because KSM only
> cares about order-0 pages, so no need for THP games.
>
>
> But we do have an even better helper in place already:
> mm/huge_memory.c:can_split_folio()
>
> Which cares about
>
> a) Swapcache for THP: each subpage could be in the swapcache
> b) Requires the caller to hold one reference to be safe
>
> But I am a bit confused about the "extra_pins" for !anon. Where do the
> folio_nr_pages() references come from?
>
> So *maybe* it makes sense to factor out can_split_folio() and call it
> something like: "folio_maybe_additionally_referenced"  [to clearly
> distinguish it from "folio_maybe_dma_pinned" that cares about actual
> page pinning (read/write page content)].
>
> Such a function could return false positives/negatives due to races
> and the caller would have to hold one reference and be able to deal
> with the semantics.
