Date:   Mon, 19 Sep 2016 16:11:53 +0900
From:   Minchan Kim <minchan@...nel.org>
To:     "Chen, Tim C" <tim.c.chen@...el.com>
Cc:     "Huang, Ying" <ying.huang@...el.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        "Hansen, Dave" <dave.hansen@...el.com>,
        "Kleen, Andi" <andi.kleen@...el.com>,
        "Lu, Aaron" <aaron.lu@...el.com>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Hugh Dickins <hughd@...gle.com>, Shaohua Li <shli@...nel.org>,
        Rik van Riel <riel@...hat.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
        Vladimir Davydov <vdavydov@...tuozzo.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Michal Hocko <mhocko@...nel.org>
Subject: Re: [PATCH -v3 00/10] THP swap: Delay splitting THP during swapping
 out

Hi Tim,

On Tue, Sep 13, 2016 at 11:52:27PM +0000, Chen, Tim C wrote:
> >>
> >> - Avoid CPU time for splitting, collapsing THP across swap out/in.
> >
> >Yes, if you want, please give us how bad it is.
> >
> 
> It could be pretty bad.  In an experiment with THP turned on where we
> entered swap, 50% of the CPU time was spent in the page compaction path.

That's page compaction overhead, and pageblock_pfn_to_page in particular.
How is that related to the overhead of splitting THPs for swapout?
I don't understand.

> So if we could deal with swap in units of large pages, the overhead of
> splitting them, and of compacting ordinary pages back into large pages,
> could be avoided.
> 
>    51.89%    51.89%            :1688  [kernel.kallsyms]   [k] pageblock_pfn_to_page                       
>                       |
>                       --- pageblock_pfn_to_page
>                          |          
>                          |--64.57%-- compaction_alloc
>                          |          migrate_pages
>                          |          compact_zone
>                          |          compact_zone_order
>                          |          try_to_compact_pages
>                          |          __alloc_pages_direct_compact
>                          |          __alloc_pages_nodemask
>                          |          alloc_pages_vma
>                          |          do_huge_pmd_anonymous_page
>                          |          handle_mm_fault
>                          |          __do_page_fault
>                          |          do_page_fault
>                          |          page_fault
>                          |          0x401d9a
>                          |          
>                          |--34.62%-- compact_zone
>                          |          compact_zone_order
>                          |          try_to_compact_pages
>                          |          __alloc_pages_direct_compact
>                          |          __alloc_pages_nodemask
>                          |          alloc_pages_vma
>                          |          do_huge_pmd_anonymous_page
>                          |          handle_mm_fault
>                          |          __do_page_fault
>                          |          do_page_fault
>                          |          page_fault
>                          |          0x401d9a
>                           --0.81%-- [...]
> 
> Tim
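
For readers following along, here is a minimal, purely illustrative C sketch
of the idea Tim describes above: swap a THP out as one unit, and only split
it as a fallback.  The helpers swap_alloc_cluster(), add_thp_to_swap_cache(),
swap_out_base_pages() and swap_writepage_huge() are hypothetical names used
for illustration and are not the functions from this patch series;
split_huge_page() and HPAGE_PMD_NR are real kernel symbols.

/*
 * Illustrative sketch only -- not the actual patch series code.
 */
static int swap_out_huge_page(struct page *thp)
{
	swp_entry_t entry;

	/* Try to reserve HPAGE_PMD_NR contiguous swap slots in one go. */
	entry = swap_alloc_cluster(HPAGE_PMD_NR);
	if (!entry.val) {
		/*
		 * No contiguous cluster available: fall back to the old
		 * behaviour of splitting the THP and swapping out the
		 * base pages one by one.
		 */
		if (split_huge_page(thp))
			return -EBUSY;
		return swap_out_base_pages(thp);
	}

	/* Keep the THP intact: one swap-cache entry, one large writeout. */
	add_thp_to_swap_cache(thp, entry);
	return swap_writepage_huge(thp, entry);
}

The point of the sketch is that the split (and any later re-compaction into
huge pages) only happens on the fallback path, rather than unconditionally on
every swapout.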
