Message-ID: <6b7a43a6-e364-87c0-66ba-5d1f1ee68bf8@redhat.com>
Date: Thu, 24 Nov 2022 17:44:30 +0800
From: Gavin Shan <gshan@...hat.com>
To: David Hildenbrand <david@...hat.com>, Hugh Dickins <hughd@...gle.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	akpm@...ux-foundation.org, william.kucharski@...cle.com,
	ziy@...dia.com, kirill.shutemov@...ux.intel.com,
	zhenyzha@...hat.com, shan.gavin@...il.com, riel@...riel.com,
	willy@...radead.org, apopple@...dia.com
Subject: Re: [PATCH] mm: migrate: Fix THP's mapcount on isolation

On 11/24/22 4:46 PM, David Hildenbrand wrote:
> On 24.11.22 01:14, Gavin Shan wrote:
>> On 11/23/22 4:56 PM, David Hildenbrand wrote:
>>> On 23.11.22 06:14, Hugh Dickins wrote:
>>>> On Wed, 23 Nov 2022, Gavin Shan wrote:
>>>>
>>>>> The issue is reported when removing memory through a virtio_mem device.
>>>>> A transparent huge page that has experienced a copy-on-write fault is
>>>>> wrongly regarded as pinned and escapes isolation in
>>>>> isolate_migratepages_block(). As a result, the transparent huge page
>>>>> can't be migrated and the corresponding memory block can't be put
>>>>> into the offline state.
>>>>>
>>>>> Fix it by replacing page_mapcount() with total_mapcount(). With this,
>>>>> the transparent huge page can be isolated and migrated, and the memory
>>>>> block can be put into the offline state.
>>>>>
>>>>> Fixes: 3917c80280c9 ("thp: change CoW semantics for anon-THP")
>>>>> Cc: stable@...r.kernel.org # v5.8+
>>>>> Reported-by: Zhenyu Zhang <zhenyzha@...hat.com>
>>>>> Suggested-by: David Hildenbrand <david@...hat.com>
>>>>> Signed-off-by: Gavin Shan <gshan@...hat.com>
>>>>
>>>> Interesting, good catch, looked right to me: except for the Fixes line
>>>> and mention of v5.8. That CoW change may have added a case which easily
>>>> demonstrates the problem, but it would have been the wrong test on a THP
>>>> for long before then - but only in v5.7 were compound pages allowed
>>>> through at all to reach that test, so I think it should be
>>>>
>>>> Fixes: 1da2f328fa64 ("mm,thp,compaction,cma: allow THP migration for CMA allocations")
>>>> Cc: stable@...r.kernel.org # v5.7+
>>>>
>>
>> Right, commit 1da2f328fa64 looks more accurate in this particular
>> case, I will fix it up in the next revision.
>>
>>>> Oh, no, stop: this is not so easy, even in the latest tree.
>>>>
>>>> Because at the time of that "admittedly racy check", we have no hold
>>>> at all on the page in question: and if it's PageLRU or PageCompound
>>>> at one instant, it may be different the next instant. Which leaves it
>>>> vulnerable to whatever BUG_ON()s there may be in the total_mapcount()
>>>> path - needs research. *Perhaps* there are no more BUG_ON()s in the
>>>> total_mapcount() path than in the existing page_mapcount() path.
>>>>
>>>> I suspect that for this to be safe (before your patch and more so after),
>>>> it will be necessary to shift the "admittedly racy check" down after the
>>>> get_page_unless_zero() (and check the sequence of operations when a
>>>> compound page is initialized).
>>>
>>> Grabbing a reference first sounds like the right approach to me.
>>>
>>
>> Yeah, it sounds reasonable to me to grab a page->__refcount in the
>> first place. Looking at isolate_migratepages_block(), the page's refcount
>> is increased by get_page_unless_zero(), but it's too late. Increasing
>> the page's refcount at the start of the function would conflict with
>> the handling of hugetlb pages and non-LRU pages.
>> I mean there will be a series to refactor the code so that the page's
>> refcount can be grabbed in the first place.
>>
>> So I plan to post a followup series to refactor the code and grab
>> the page's refcount in the first place. In this way, the fix can be
>> merged as soon as possible. David and Hugh, please let me know if
>> this is a reasonable plan? :)
>
> Can't you just temporarily grab the refcount and drop it again? I mean,
> it's all racy either way and the code has to be able to cope with such
> races.
>

Well, we can do this by moving the hunk of code, which increases the
page's refcount, ahead of the check:

	if (unlikely(!get_page_unless_zero(page)))
		goto isolate_fail;

	if (!mapping && (page_count(page) - 1) > total_mapcount(page))
		goto isolate_fail_put;

Thanks,
Gavin
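For readers less familiar with the mapcount accounting involved, the
following is a minimal userspace sketch of the false positive being fixed
above. It is not kernel code: the struct, the constants and the reference
bookkeeping are simplified stand-ins for what page_count(), page_mapcount()
and total_mapcount() report on a real compound page. The scenario modelled
is a 2MB anonymous THP whose PMD mapping was split by a CoW fault, leaving
one subpage (assume the head, purely for illustration) unmapped while the
other 511 subpages remain mapped by PTEs.

/*
 * Toy model (userspace, not kernel code) of the "pinned" heuristic in
 * isolate_migratepages_block().  NR_SUBPAGES, the struct and the numbers
 * below are illustrative only.
 */
#include <stdio.h>
#include <stdbool.h>

#define NR_SUBPAGES	512	/* subpages of a 2MB THP with 4KB pages */

struct toy_thp {
	int refcount;			/* stands in for page_count()   */
	int mapcount[NR_SUBPAGES];	/* per-subpage mapcounts        */
};

/* Mapcount of one subpage only, like page_mapcount() on that subpage. */
static int subpage_mapcount(const struct toy_thp *thp, int idx)
{
	return thp->mapcount[idx];
}

/* Sum of all subpage mapcounts, like total_mapcount() on the THP. */
static int toy_total_mapcount(const struct toy_thp *thp)
{
	int i, total = 0;

	for (i = 0; i < NR_SUBPAGES; i++)
		total += thp->mapcount[i];
	return total;
}

/*
 * The isolation heuristic: references beyond the one the isolation code
 * just took and beyond the mappings suggest the page is pinned.
 */
static bool looks_pinned(const struct toy_thp *thp, int mapcount)
{
	return (thp->refcount - 1) > mapcount;
}

int main(void)
{
	struct toy_thp thp = { 0 };
	int i;

	/*
	 * After the CoW fault the faulting subpage (index 0 here) has been
	 * replaced and is unmapped; the other 511 subpages are still mapped
	 * once each, and each mapping holds one reference.  Add one more
	 * reference for the get_page_unless_zero() taken by isolation.
	 */
	for (i = 1; i < NR_SUBPAGES; i++)
		thp.mapcount[i] = 1;
	thp.refcount = (NR_SUBPAGES - 1) + 1;

	printf("per-subpage mapcount: pinned=%d (false positive)\n",
	       looks_pinned(&thp, subpage_mapcount(&thp, 0)));
	printf("total mapcount:       pinned=%d\n",
	       looks_pinned(&thp, toy_total_mapcount(&thp)));
	return 0;
}

With per-subpage accounting the references held by the still-mapped tail
pages look like extra pins, which matches the symptom described in the
patch; summing the mapcounts makes the comparison come out even and the
THP is isolated as expected.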