Message-ID: <58D53B8A.9040508@cs.rutgers.edu>
Date:   Fri, 24 Mar 2017 10:30:18 -0500
From:   Zi Yan <zi.yan@...rutgers.edu>
To:     "Kirill A. Shutemov" <kirill@...temov.name>
CC:     Zi Yan <zi.yan@...t.com>, <linux-kernel@...r.kernel.org>,
        <linux-mm@...ck.org>, <kirill.shutemov@...ux.intel.com>,
        <akpm@...ux-foundation.org>, <minchan@...nel.org>,
        <vbabka@...e.cz>, <mgorman@...hsingularity.net>,
        <mhocko@...nel.org>, <n-horiguchi@...jp.nec.com>,
        <khandual@...ux.vnet.ibm.com>, <dnellans@...dia.com>
Subject: Re: [PATCH v4 05/11] mm: thp: enable thp migration in generic path

Hi Kirill,

Kirill A. Shutemov wrote:
> On Mon, Mar 13, 2017 at 11:45:01AM -0400, Zi Yan wrote:
>> From: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
>>
>> This patch adds thp migration's core code, including conversions
>> between a PMD entry and a swap entry, setting PMD migration entry,
>> removing PMD migration entry, and waiting on PMD migration entries.
>>
>> This patch makes it possible to support thp migration.
>> If you fail to allocate a destination page as a thp, you just split
>> the source thp as we do now, and then enter the normal page migration.
>> If you succeed to allocate destination thp, you enter thp migration.
>> Subsequent patches actually enable thp migration for each caller of
>> page migration by allowing its get_new_page() callback to
>> allocate thps.
>>
>> ChangeLog v1 -> v2:
>> - support pte-mapped thp, doubly-mapped thp
>>
>> Signed-off-by: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
>>
>> ChangeLog v2 -> v3:
>> - use page_vma_mapped_walk()
>>
>> ChangeLog v3 -> v4:
>> - factor out the code of removing pte pgtable page in zap_huge_pmd()
>>
>> Signed-off-by: Zi Yan <zi.yan@...rutgers.edu>
> 
> See few questions below.
> 
> It would be nice to split it into few patches. Probably three or four.

This patch was two separate ones in v2:
1. introduce set_pmd_migration_entry(), remove_migration_pmd(), and other
auxiliary functions;
2. enable THP migration in the migration path.

But the first of these two patches would be dead code on its own, since
nothing else uses those functions. Michal also suggested merging the two
patches into one when he reviewed v2.

If you have a suggestion, I am fine with splitting this patch to make it
smaller.

<snip>

>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index cda4c2778d04..0bbad6dcf95a 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -211,6 +211,12 @@ static int remove_migration_pte(struct page *page, struct vm_area_struct *vma,
>>  		new = page - pvmw.page->index +
>>  			linear_page_index(vma, pvmw.address);
>>  
>> +		/* PMD-mapped THP migration entry */
>> +		if (!PageHuge(page) && PageTransCompound(page)) {
>> +			remove_migration_pmd(&pvmw, new);
>> +			continue;
>> +		}
>> +
> 
> Any reason not to share PTE handling of non-THP with THP?

You mean PTE-mapped THPs? I mostly reused Naoya's patches here. But at
first look, the PTE-mapped THP handling code appears to be the same as
the existing PTE handling code.

This part of the code can be changed to:

+		/* PMD-mapped THP migration entry */
+		if (!pvmw.pte && pvmw.page) {
+			VM_BUG_ON_PAGE(!PageTransCompound(page), page);
+			remove_migration_pmd(&pvmw, new);
+			continue;
+		}
+

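For context, here is roughly how that check would sit inside the
remove_migration_pte() loop, based on the hunk quoted above (a sketch
only; it relies on page_vma_mapped_walk() leaving pvmw.pte NULL when
the walk matched a PMD-level entry):

	while (page_vma_mapped_walk(&pvmw)) {
		new = page - pvmw.page->index +
			linear_page_index(vma, pvmw.address);

		/* no PTE was found, only a PMD slot: PMD-mapped THP */
		if (!pvmw.pte && pvmw.page) {
			VM_BUG_ON_PAGE(!PageTransCompound(page), page);
			remove_migration_pmd(&pvmw, new);
			continue;
		}

		/* existing PTE path is unchanged */
		get_page(new);
		...
	}
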
> 
>>  		get_page(new);
>>  		pte = pte_mkold(mk_pte(new, READ_ONCE(vma->vm_page_prot)));
>>  		if (pte_swp_soft_dirty(*pvmw.pte))

<snip>

>> diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
>> index 4ed5908c65b0..9d550a8a0c71 100644
>> --- a/mm/pgtable-generic.c
>> +++ b/mm/pgtable-generic.c
>> @@ -118,7 +118,8 @@ pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
>>  {
>>  	pmd_t pmd;
>>  	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
>> -	VM_BUG_ON(!pmd_trans_huge(*pmdp) && !pmd_devmap(*pmdp));
>> +	VM_BUG_ON(pmd_present(*pmdp) && !pmd_trans_huge(*pmdp) &&
>> +		  !pmd_devmap(*pmdp));
> 
> How does this? _flush doesn't make sense for !present.

Right. It should be:

-	VM_BUG_ON(!pmd_trans_huge(*pmdp) && !pmd_devmap(*pmdp));
+	VM_BUG_ON((pmd_present(*pmdp) && !pmd_trans_huge(*pmdp) &&
+		  !pmd_devmap(*pmdp)) || !pmd_present(*pmdp));

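Written out in full, with the condition simplified to an equivalent form,
pmdp_huge_clear_flush() would then read roughly as below; the rest of the
body is unchanged from the hunk quoted above:

	pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma,
				    unsigned long address, pmd_t *pmdp)
	{
		pmd_t pmd;

		VM_BUG_ON(address & ~HPAGE_PMD_MASK);
		/* the flush only makes sense for a present huge/devmap PMD */
		VM_BUG_ON(!pmd_present(*pmdp) ||
			  (!pmd_trans_huge(*pmdp) && !pmd_devmap(*pmdp)));
		pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
		flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
		return pmd;
	}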

> 
>>  	pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
>>  	flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
>>  	return pmd;
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 555cc7ebacf6..2c65abbd7a0e 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1298,6 +1298,7 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>>  	int ret = SWAP_AGAIN;
>>  	enum ttu_flags flags = (enum ttu_flags)arg;
>>  
>> +
>>  	/* munlock has nothing to gain from examining un-locked vmas */
>>  	if ((flags & TTU_MUNLOCK) && !(vma->vm_flags & VM_LOCKED))
>>  		return SWAP_AGAIN;
>> @@ -1308,6 +1309,14 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>>  	}
>>  
>>  	while (page_vma_mapped_walk(&pvmw)) {
>> +		/* THP migration */
>> +		if (flags & TTU_MIGRATION) {
>> +			if (!PageHuge(page) && PageTransCompound(page)) {
>> +				set_pmd_migration_entry(&pvmw, page);
> 
> Again, it would be nice share PTE handling. It should be rather similar,
> no?

At first look, it should work. I will change it, and if it works, it
will be included in the next version.

This should also shrink the patch size.
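
If the same pvmw-based test works in try_to_unmap_one() as well, the
check could look roughly like the following (a sketch mirroring the
remove_migration_pte() change above, not tested):

	while (page_vma_mapped_walk(&pvmw)) {
		/* PMD-mapped THP migration entry */
		if (!pvmw.pte && (flags & TTU_MIGRATION)) {
			VM_BUG_ON_PAGE(!PageTransCompound(page), page);
			set_pmd_migration_entry(&pvmw, page);
			continue;
		}

		/* existing PTE unmap path continues here */
		...
	}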

Thanks.


-- 
Best Regards,
Yan Zi

