Message-ID: <20160215024220.GA30918@js1304-P5Q-DELUXE>
Date:	Mon, 15 Feb 2016 11:42:20 +0900
From:	Joonsoo Kim <iamjoonsoo.kim@....com>
To:	zhong jiang <zhongjiang@...wei.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Vlastimil Babka <vbabka@...e.cz>,
	Aaron Lu <aaron.lu@...el.com>, Mel Gorman <mgorman@...e.de>,
	Rik van Riel <riel@...hat.com>,
	David Rientjes <rientjes@...gle.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Linux Memory Management List <linux-mm@...ck.org>,
	Xishi Qiu <qiuxishi@...wei.com>
Subject: Re: [PATCH v2 3/3] mm/compaction: speed up pageblock_pfn_to_page()
 when zone is contiguous

On Sun, Feb 14, 2016 at 06:21:03PM +0800, zhong jiang wrote:
> On 2016/2/6 0:11, Joonsoo Kim wrote:
> > 2016-02-05 9:49 GMT+09:00 Andrew Morton <akpm@...ux-foundation.org>:
> >> On Thu,  4 Feb 2016 15:19:35 +0900 Joonsoo Kim <js1304@...il.com> wrote:
> >>
> >>> There is a performance drop report caused by hugepage allocation, in
> >>> which half of the cpu time is spent in pageblock_pfn_to_page() during
> >>> compaction [1]. In that workload, compaction is triggered to make
> >>> hugepages, but most pageblocks are unavailable for compaction due to the
> >>> pageblock type and skip bit, so compaction usually fails. The most costly
> >>> operation in this case is finding a valid pageblock while scanning the
> >>> whole zone range. To check whether a pageblock is valid to compact, a
> >>> valid pfn within the pageblock is required, and we can obtain it by
> >>> calling pageblock_pfn_to_page(). This function checks whether the
> >>> pageblock lies in a single zone and returns a valid pfn if possible. The
> >>> problem is that we need to repeat this check every time before scanning a
> >>> pageblock, even when we re-visit it, and this turns out to be very
> >>> expensive in this workload.
> >>>
> >>> Although there is no way to skip this pageblock check on a system where
> >>> holes exist at arbitrary positions, we can cache the zone's continuity
> >>> and just do pfn_to_page() on a system where no hole exists. This
> >>> optimization considerably speeds up the above workload.
> >>>
> >>> Before vs After
> >>> Max: 1096 MB/s vs 1325 MB/s
> >>> Min: 635 MB/s vs 1015 MB/s
> >>> Avg: 899 MB/s vs 1194 MB/s
> >>>
> >>> Avg is improved by roughly 30% [2].
> >>>
> >>> [1]: http://www.spinics.net/lists/linux-mm/msg97378.html
> >>> [2]: https://lkml.org/lkml/2015/12/9/23
> >>>
> >>> ...
> >>>
> >>> --- a/include/linux/memory_hotplug.h
> >>> +++ b/include/linux/memory_hotplug.h
> >>> @@ -196,6 +196,9 @@ void put_online_mems(void);
> >>>  void mem_hotplug_begin(void);
> >>>  void mem_hotplug_done(void);
> >>>
> >>> +extern void set_zone_contiguous(struct zone *zone);
> >>> +extern void clear_zone_contiguous(struct zone *zone);
> >>> +
> >>>  #else /* ! CONFIG_MEMORY_HOTPLUG */
> >>>  /*
> >>>   * Stub functions for when hotplug is off
> >>
> >> Was it really intended that these declarations only exist if
> >> CONFIG_MEMORY_HOTPLUG?  Seems unrelated.
> > 
> > These are called to cache whether the memory layout is contiguous
> > or not, so they are always called during memory initialization.
> > Memory hotplug can change the memory layout, so they need to be
> > called there, too. That's why they are defined in page_alloc.c but
> > declared in the header only if CONFIG_MEMORY_HOTPLUG is enabled.
> > 
> >> The i386 allnoconfig build fails in predictable ways, so I fixed that up
> >> as below, but it seems wrong.
> > 
> > Yeah, it seems wrong to me. :)
> > Here goes the fix.
> > 
> > ----------->8------------
> > From ed6add18bc361e00a7ac6746de6eeb62109e6416 Mon Sep 17 00:00:00 2001
> > From: Joonsoo Kim <iamjoonsoo.kim@....com>
> > Date: Thu, 10 Dec 2015 17:03:54 +0900
> > Subject: [PATCH] mm/compaction: speed up pageblock_pfn_to_page() when zone is
> >  contiguous
> > 
> > There is a performance drop report caused by hugepage allocation, in
> > which half of the cpu time is spent in pageblock_pfn_to_page() during
> > compaction [1]. In that workload, compaction is triggered to make
> > hugepages, but most pageblocks are unavailable for compaction due to the
> > pageblock type and skip bit, so compaction usually fails. The most costly
> > operation in this case is finding a valid pageblock while scanning the
> > whole zone range. To check whether a pageblock is valid to compact, a
> > valid pfn within the pageblock is required, and we can obtain it by
> > calling pageblock_pfn_to_page(). This function checks whether the
> > pageblock lies in a single zone and returns a valid pfn if possible. The
> > problem is that we need to repeat this check every time before scanning a
> > pageblock, even when we re-visit it, and this turns out to be very
> > expensive in this workload.
> > 
> > Although there is no way to skip this pageblock check on a system where
> > holes exist at arbitrary positions, we can cache the zone's continuity
> > and just do pfn_to_page() on a system where no hole exists. This
> > optimization considerably speeds up the above workload.
> > 
> > Before vs After
> > Max: 1096 MB/s vs 1325 MB/s
> > Min: 635 MB/s vs 1015 MB/s
> > Avg: 899 MB/s vs 1194 MB/s
> > 
> > Avg is improved by roughly 30% [2].
> > 
> > [1]: http://www.spinics.net/lists/linux-mm/msg97378.html
> > [2]: https://lkml.org/lkml/2015/12/9/23
> > 
> > v3
> > o remove the pfn_valid_within() check for all pages in the pageblock,
> > because pageblock_pfn_to_page() is only called with a pageblock-aligned pfn.
> 
> I have a question about the zone continuity: a hole can exist at an
> arbitrary position within a pageblock. Therefore pageblock_pfn_to_page()
> alone is insufficient; whether the pfn is pageblock aligned or not, the
> pfn_valid_within() check is still necessary.
> 
> e.g.: 120M-122M is the range of a pageblock, but 120.5M-121.5M is a hole;
> concluding the result only from pageblock_pfn_to_page() is inaccurate.

"contiguous" may be a misleading word. It doesn't mean that there are no
holes. It only means that all pageblocks within the zone span belong to the
corresponding zone and that the validity of every pageblock-aligned pfn has
been checked. So, if the flag is set, we can safely call pfn_to_page() for a
pageblock-aligned pfn in that zone without checking pfn_valid().
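
To make it concrete, here is a simplified sketch of the approach (details
such as clearing and re-setting the flag around memory hotplug are omitted,
and __pageblock_pfn_to_page() stands for the original, slow per-pageblock
check that the patch keeps as a fallback):

static void set_zone_contiguous(struct zone *zone)
{
	unsigned long block_start_pfn = zone->zone_start_pfn;
	unsigned long block_end_pfn;

	block_end_pfn = ALIGN(block_start_pfn + 1, pageblock_nr_pages);
	for (; block_start_pfn < zone_end_pfn(zone);
			block_start_pfn = block_end_pfn,
			block_end_pfn += pageblock_nr_pages) {

		block_end_pfn = min(block_end_pfn, zone_end_pfn(zone));

		/* Slow check: pageblock-aligned pfn is valid and the
		 * whole pageblock belongs to this zone. */
		if (!__pageblock_pfn_to_page(block_start_pfn,
					     block_end_pfn, zone))
			return;
	}

	/* Every pageblock-aligned pfn in the zone span checked out. */
	zone->contiguous = true;
}

static inline struct page *pageblock_pfn_to_page(unsigned long start_pfn,
				unsigned long end_pfn, struct zone *zone)
{
	/* Fast path: cached flag lets us skip the per-pageblock check. */
	if (zone->contiguous)
		return pfn_to_page(start_pfn);

	return __pageblock_pfn_to_page(start_pfn, end_pfn, zone);
}

So the per-pageblock cost is paid once at initialization (and again after
hotplug changes the layout), and the compaction scan itself only does
pfn_to_page() when the zone is contiguous in this sense.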

Thanks.
