Date:   Thu, 24 Nov 2022 14:20:38 +0100
From:   David Hildenbrand <david@...hat.com>
To:     Zi Yan <ziy@...dia.com>
Cc:     Gavin Shan <gshan@...hat.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
        william.kucharski@...cle.com, kirill.shutemov@...ux.intel.com,
        zhenyzha@...hat.com, apopple@...dia.com, hughd@...gle.com,
        willy@...radead.org, shan.gavin@...il.com,
        Huang Ying <ying.huang@...el.com>
Subject: Re: [PATCH v2] mm: migrate: Fix THP's mapcount on isolation

On 24.11.22 13:38, Zi Yan wrote:
> 
> On 24 Nov 2022, at 5:43, David Hildenbrand wrote:
> 
>> On 24.11.22 11:21, Gavin Shan wrote:
>>> On 11/24/22 6:09 PM, David Hildenbrand wrote:
>>>> On 24.11.22 10:55, Gavin Shan wrote:
>>>>> The issue is reported when removing memory through a virtio_mem
>>>>> device. A transparent huge page that has experienced a copy-on-write
>>>>> fault is wrongly regarded as pinned, so it escapes isolation in
>>>>> isolate_migratepages_block(). The transparent huge page can't be
>>>>> migrated and the corresponding memory block can't be put into the
>>>>> offline state.
>>>>>
>>>>> Fix it by replacing page_mapcount() with total_mapcount(). With this,
>>>>> the transparent huge page can be isolated and migrated, and the memory
>>>>> block can be put into the offline state. Besides, the page's refcount
>>>>> is increased a bit earlier to avoid the page being released while the
>>>>> check is executed.
>>>>
>>>> Did you look into handling pages that are in the swapcache as well?
>>>>
>>>> See is_refcount_suitable() in mm/khugepaged.c.
>>>>
>>>> Should be easy to reproduce, let me know if you need inspiration.
>>>>
>>>
>>> Nope, I didn't look into that case. Please elaborate on the details so
>>> that I can reproduce it first.
>>
>>
>> A simple reproducer would be (on a system with ordinary swap (not zram))
>>
>> 1) mmap a region (MAP_ANON|MAP_PRIVATE) that can hold a THP
>>
>> 2) Enable THP for that region (MADV_HUGEPAGE)
>>
>> 3) Populate a THP (e.g., write access)
>>
>> 4) PTE-map the THP, for example, using MADV_FREE on the last subpage
>>
>> 5) Trigger swapout of the THP, for example, using MADV_PAGEOUT
> 
> Added the original THP swapout code author, Ying.
> 
> At this step, the THP will be split, right?
> 
> https://elixir.bootlin.com/linux/latest/source/mm/vmscan.c#L1786
> 
> Even if a THP has a PMD mapping, IIRC, it is split in add_to_swap() and
> then swapped out. But I cannot find that split code now.

I recall there was some sequence to achieve it. Maybe it was swapping
out the PMD-mapped THP directly, without triggering a PTE-mapping first.

mm/vmscan.c:shrink_folio_list()

if (folio_test_large(folio)) {
	/* cannot split folio, skip it */
	if (!can_split_folio(folio, NULL))
		goto activate_locked;
	/*
	 * Split folios without a PMD map right
	 * away. Chances are some or all of the
	 * tail pages can be freed without IO.
	 */
	if (!folio_entire_mapcount(folio) &&
	    split_folio_to_list(folio, folio_list))
		goto activate_locked;
}

So the sequence might have to be

1) mmap a region (MAP_ANON|MAP_PRIVATE) that can hold a THP

2) Enable THP for that region (MADV_HUGEPAGE)

3) Populate a THP (e.g., write access)

4) Trigger swapout of the THP, for example, using MADV_PAGEOUT

5) Access some subpage

As we don't have PMD swap entries, we will PTE-map the
THP during try_to_unmap() IIRC.
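
FWIW, a minimal userspace sketch of that sequence (untested; it assumes
a 2 MiB PMD size and MADV_PAGEOUT, i.e. Linux 5.4+, with ordinary swap
configured, not zram):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define THP_SIZE (2UL * 1024 * 1024)	/* assumed PMD size */

int main(void)
{
	char *map, *thp;

	/* 1) Map a region large enough to hold an aligned THP. */
	map = mmap(NULL, 2 * THP_SIZE, PROT_READ | PROT_WRITE,
		   MAP_ANON | MAP_PRIVATE, -1, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	thp = (char *)(((unsigned long)map + THP_SIZE - 1) &
		       ~(THP_SIZE - 1));

	/* 2) Enable THP for that region. */
	if (madvise(thp, THP_SIZE, MADV_HUGEPAGE)) {
		perror("madvise(MADV_HUGEPAGE)");
		return 1;
	}

	/* 3) Populate a THP via write access. */
	memset(thp, 1, THP_SIZE);

	/* 4) Trigger swapout of the THP. */
	if (madvise(thp, THP_SIZE, MADV_PAGEOUT)) {
		perror("madvise(MADV_PAGEOUT)");
		return 1;
	}

	/* 5) Access some subpage to fault it back in. */
	printf("subpage byte: %d\n", thp[4096]);

	return 0;
}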



Independent of that, the check we have here also doesn't consider
ordinary order-0 pages that might be in the swapcache.
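
For comparison, is_refcount_suitable() in mm/khugepaged.c (as of a
~6.0-era tree) reads roughly:

static bool is_refcount_suitable(struct page *page)
{
	int expected_refcount;

	expected_refcount = total_mapcount(page);
	if (PageSwapCache(page))
		expected_refcount += compound_nr(page);

	return page_count(page) == expected_refcount;
}

That is, a page in the swapcache holds one extra reference per subpage,
so a heuristic that compares the refcount only against mapcounts will
treat such pages as pinned.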

-- 
Thanks,

David / dhildenb
