Date:   Fri, 3 Jan 2020 21:05:54 +0800
From:   Wei Yang <richardw.yang@...ux.intel.com>
To:     Wei Yang <richardw.yang@...ux.intel.com>
Cc:     akpm@...ux-foundation.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, kirill.shutemov@...ux.intel.com,
        willy@...radead.org
Subject: Re: [Patch v2] mm/rmap.c: split huge pmd when it really is

On Fri, Jan 03, 2020 at 03:18:46PM +0800, Wei Yang wrote:
>On Tue, Dec 24, 2019 at 06:28:56AM +0800, Wei Yang wrote:
>>When page is not NULL, the function is called by try_to_unmap_one() with
>>TTU_SPLIT_HUGE_PMD set. There are two callers of try_to_unmap_one() that
>>set TTU_SPLIT_HUGE_PMD:
>>
>>  * unmap_page()
>>  * shrink_page_list()
>>
>>In both cases, the page passed to try_to_unmap_one() is the PageHead() of
>>the THP. If this page's mapping address in the process is not
>>HPAGE_PMD_SIZE aligned, the THP is not mapped as a PMD THP in this
>>process. This can happen when we mremap() a PMD-sized range to an
>>unaligned address.
>>
>>Currently, this case happens to be handled by the following check in
>>__split_huge_pmd():
>>
>>  page != pmd_page(*pmd)
>>
>>This patch checks the address up front, so this unnecessary work can be
>>skipped.
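
To illustrate the mremap() scenario above, here is a minimal userspace
sketch (the 2MB PMD size, the target address and the flags are my
assumptions, untested):

  #define _GNU_SOURCE
  #include <stdint.h>
  #include <sys/mman.h>

  #define PMD_SZ (2UL << 20)  /* assuming a 2MB PMD size (x86_64) */

  int main(void)
  {
          /* Over-allocate so a PMD-aligned source range can be carved
           * out; error handling is omitted for brevity. */
          char *raw = mmap(NULL, 3 * PMD_SZ, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          char *src = (char *)(((uintptr_t)raw + PMD_SZ - 1) &
                               ~(PMD_SZ - 1));

          madvise(src, PMD_SZ, MADV_HUGEPAGE);
          src[0] = 1;  /* fault in, hopefully as a PMD-mapped THP */

          /* Move the PMD-sized range to a page-aligned but not
           * PMD-aligned address; the THP is then PTE-mapped there. */
          mremap(src, PMD_SZ, PMD_SZ, MREMAP_MAYMOVE | MREMAP_FIXED,
                 src + PMD_SZ + 4096);
          return 0;
  }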
>
>Sorry, I forgot to address Kirill's comments on the first version.
>
>The first one is the performance difference after this change for a
>PTE-mapped THP.
>
>Here is the result (in cycles):
>
>        Before     Patched
>
>        963        195
>        988        40
>        895        78
>
>Average 948        104
>
>So the change reduces the time spent in split_huge_pmd_address() by about
>90%.
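
A per-call cycle count like this can be collected with a get_cycles()
pair around the call, e.g. (sketch only, not necessarily the exact
instrumentation):

  	cycles_t t0, t1;	/* get_cycles() from <linux/timex.h> */

  	t0 = get_cycles();
  	split_huge_pmd_address(vma, address, false, page);
  	t1 = get_cycles();
  	pr_info("split_huge_pmd_address: %llu cycles\n",
  		(unsigned long long)(t1 - t0));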
>
>For the 2nd comment, the vma check, let me take a further look and
>analyze it.
>
>Thanks to Kirill for the suggestions.
>

For the 2nd comment, checking whether the vma could hold a huge page:

You mean doing this check?

  vma->vm_start <= address && vma->vm_end >= address + HPAGE_PMD_SIZE

This happens after munmap() of part of the THP range? After doing so, we can
skip the pmd split for this case.
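
That is, something like this on top of the alignment check (untested
sketch):

  	if (page && (!IS_ALIGNED(address, HPAGE_PMD_SIZE) ||
  		     vma->vm_start > address ||
  		     vma->vm_end < address + HPAGE_PMD_SIZE))
  		return;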

>>
>>Signed-off-by: Wei Yang <richardw.yang@...ux.intel.com>
>>
>>---
>>v2: move the check into split_huge_pmd_address().
>>---
>> mm/huge_memory.c | 16 ++++++++++++++++
>> 1 file changed, 16 insertions(+)
>>
>>diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>index 893fecd5daa4..2b9c2f412b32 100644
>>--- a/mm/huge_memory.c
>>+++ b/mm/huge_memory.c
>>@@ -2342,6 +2342,22 @@ void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
>> 	pud_t *pud;
>> 	pmd_t *pmd;
>> 
>>+	/*
>>+	 * When page is not NULL, the function is called by
>>+	 * try_to_unmap_one() with TTU_SPLIT_HUGE_PMD set. There are two
>>+	 * places that set TTU_SPLIT_HUGE_PMD:
>>+	 *
>>+	 *     unmap_page()
>>+	 *     shrink_page_list()
>>+	 *
>>+	 * In both cases, the "page" here is the PageHead() of a THP.
>>+	 *
>>+	 * If the page is not a PMD-mapped huge page, e.g. after mremap(),
>>+	 * there is no need to split the pmd.
>>+	 */
>>+	if (page && !IS_ALIGNED(address, HPAGE_PMD_SIZE))
>>+		return;
>>+
>> 	pgd = pgd_offset(vma->vm_mm, address);
>> 	if (!pgd_present(*pgd))
>> 		return;
>>-- 
>>2.17.1
>
>-- 
>Wei Yang
>Help you, Help me

-- 
Wei Yang
Help you, Help me
