Message-ID: <20200103132650.jlyd37k6fcvycmy7@box>
Date:   Fri, 3 Jan 2020 16:26:50 +0300
From:   "Kirill A. Shutemov" <kirill@...temov.name>
To:     Wei Yang <richardw.yang@...ux.intel.com>
Cc:     akpm@...ux-foundation.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, kirill.shutemov@...ux.intel.com,
        willy@...radead.org
Subject: Re: [Patch v2] mm/rmap.c: split huge pmd when it really is

On Fri, Jan 03, 2020 at 09:05:54PM +0800, Wei Yang wrote:
> On Fri, Jan 03, 2020 at 03:18:46PM +0800, Wei Yang wrote:
> >On Tue, Dec 24, 2019 at 06:28:56AM +0800, Wei Yang wrote:
> >>When page is not NULL, the function is called by try_to_unmap_one()
> >>with TTU_SPLIT_HUGE_PMD set. There are two callers of
> >>try_to_unmap_one() with TTU_SPLIT_HUGE_PMD set:
> >>
> >>  * unmap_page()
> >>  * shrink_page_list()
> >>
> >>In both cases, the page passed to try_to_unmap_one() is the head page
> >>of the THP. If this page's mapping address in the process is not
> >>HPAGE_PMD_SIZE aligned, the THP is not PMD-mapped in this process.
> >>This can happen when we mremap() a PMD-sized range to an unaligned
> >>address.
> >>
> >>Currently, this case happens to be handled by the following check in
> >>__split_huge_pmd():
> >>
> >>  page != pmd_page(*pmd)
> >>
> >>This patch checks the address explicitly so we can skip that work.
> >
> >I am sorry I forgot to address Kirill's comments on the 1st version.
> >
> >The first one is the performance difference after this change for a
> >PTE-mapped THP.
> >
> >Here is the result (in cycles):
> >
> >        Before     Patched
> >
> >        963        195
> >        988        40
> >        895        78
> >
> >Average 948        104
> >
> >So the change reduces the time spent in split_huge_pmd_address() by
> >about 90%.

Right.

But do we have a scenario where the performance of
split_huge_pmd_address() matters? I mean, it is called as part of the
rmap walk; an attempt to split a huge PMD where we don't have one should
be within the noise.
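
For reference, the early-out being discussed would look roughly like the
following in the TTU_SPLIT_HUGE_PMD path of try_to_unmap_one() (a sketch
only, not necessarily the exact diff):

	/* Only bother splitting if the THP can actually be PMD-mapped
	 * at this address in this process. */
	if ((flags & TTU_SPLIT_HUGE_PMD) &&
	    IS_ALIGNED(address, HPAGE_PMD_SIZE))
		split_huge_pmd_address(vma, address,
				       flags & TTU_SPLIT_FREEZE, page);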

> >For the 2nd comment, the vma check: let me take a further look and
> >analyse it.
> >
> >Thanks for Kirill's suggestion.
> >
> 
> For the 2nd comment, checking whether the vma could hold a huge page.
> 
> You mean doing this check?
> 
>   vma->vm_start <= address && vma->vm_end >= address + HPAGE_PMD_SIZE
> 
> This happens after munmap()ing part of the THP range? After doing so,
> we can skip the PMD split for this case.

Okay, you are right. This kind of check would not be safe, as we call
split_huge_pmd_address() after adjusting the VMA, with the expectation of
splitting the PMD on the boundary of the VMA.
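
For instance (an illustrative example of that scenario): if the VMA is
being shrunk so that its new boundary falls in the middle of a PMD-mapped
THP, the split is requested for an address at that boundary. The range
[address, address + HPAGE_PMD_SIZE) then necessarily extends past the
boundary, so

	vma->vm_start <= address && vma->vm_end >= address + HPAGE_PMD_SIZE

evaluates false, and the check would skip exactly the split we rely on.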

-- 
 Kirill A. Shutemov
