Message-ID: <0201b67f-e6a7-623c-77e1-f080d5bf30b5@microsoft.com>
Date:   Wed, 31 Oct 2018 16:06:17 +0000
From:   Pasha Tatashin <Pavel.Tatashin@...rosoft.com>
To:     Alexander Duyck <alexander.h.duyck@...ux.intel.com>,
        Pasha Tatashin <Pavel.Tatashin@...rosoft.com>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>
CC:     "mhocko@...e.com" <mhocko@...e.com>,
        "dave.jiang@...el.com" <dave.jiang@...el.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "willy@...radead.org" <willy@...radead.org>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "yi.z.zhang@...ux.intel.com" <yi.z.zhang@...ux.intel.com>,
        "khalid.aziz@...cle.com" <khalid.aziz@...cle.com>,
        "rppt@...ux.vnet.ibm.com" <rppt@...ux.vnet.ibm.com>,
        "vbabka@...e.cz" <vbabka@...e.cz>,
        "sparclinux@...r.kernel.org" <sparclinux@...r.kernel.org>,
        "dan.j.williams@...el.com" <dan.j.williams@...el.com>,
        "ldufour@...ux.vnet.ibm.com" <ldufour@...ux.vnet.ibm.com>,
        "mgorman@...hsingularity.net" <mgorman@...hsingularity.net>,
        "mingo@...nel.org" <mingo@...nel.org>,
        "kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>
Subject: Re: [mm PATCH v4 3/6] mm: Use memblock/zone specific iterator for
 handling deferred page init



On 10/31/18 12:05 PM, Alexander Duyck wrote:
> On Wed, 2018-10-31 at 15:40 +0000, Pasha Tatashin wrote:
>>
>> On 10/17/18 7:54 PM, Alexander Duyck wrote:
>>> This patch introduces a new iterator for_each_free_mem_pfn_range_in_zone.
>>>
>>> This iterator will take care of making sure a given memory range provided
>>> is in fact contained within a zone. It takes care of all the bounds checking
>>> we were doing in deferred_grow_zone and deferred_init_memmap. In addition
>>> it should help to speed up the search a bit by iterating until the end of a
>>> range is greater than the start of the zone pfn range, and will exit
>>> completely if the start is beyond the end of the zone.
>>>
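
Just to make sure I read this right, the sketch below is how I picture the new
iterator being used; the argument order and the init helper are my assumptions
from the description above, not copied from the patch:

	u64 i;
	unsigned long spfn, epfn, nr_pages = 0;

	/*
	 * Walk only the free memblock ranges that intersect this zone;
	 * the clamping to the zone's pfn span happens inside the
	 * iterator, so the callers no longer need their own bounds
	 * checks.
	 */
	for_each_free_mem_pfn_range_in_zone(i, zone, &spfn, &epfn) {
		/* [spfn, epfn) is already trimmed to the zone */
		nr_pages += deferred_init_pages(zone, spfn, epfn);
	}
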
>>> This patch adds yet another iterator called
>>> for_each_free_mem_range_in_zone_from and then uses it to support
>>> initializing and freeing pages in groups no larger than MAX_ORDER_NR_PAGES.
>>> By doing this we can greatly improve the cache locality of the pages while
>>> we do several loops over them in the init and freeing process.
>>>
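
My mental model of the chunked init/free, with illustrative names only (the
chunk math is an assumption on my side):

	while (spfn < epfn) {
		/* Never cross a MAX_ORDER_NR_PAGES-aligned boundary. */
		unsigned long chunk = min(epfn,
				ALIGN(spfn + 1, MAX_ORDER_NR_PAGES));

		/*
		 * Init the struct pages for the chunk, then free them
		 * while they are still cache-hot from the init pass.
		 */
		nr_pages += deferred_init_pages(zone, spfn, chunk);
		deferred_free_pages(spfn, chunk);
		spfn = chunk;
	}
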
>>> We are able to tighten the loops as a result since we only really need the
>>> checks for first_init_pfn in our first iteration and after that we can
>>> assume that all future values will be greater than this. So I have added a
>>> function called deferred_init_mem_pfn_range_in_zone that primes the
>>> iterators and if it fails we can just exit.
>>>
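
If I understand the priming correctly, it is a one-shot setup that makes the
later iterations monotonic; the exact signature here is my guess from the
wording above:

	/*
	 * Advance the iterator to the first free range at or above
	 * first_init_pfn; if there is none, there is nothing left to
	 * initialize in this zone.
	 */
	if (!deferred_init_mem_pfn_range_in_zone(&i, zone, &spfn, &epfn,
						 first_init_pfn))
		return 0;

	/*
	 * From here on every range the iterator returns starts at or
	 * above first_init_pfn, so the per-iteration check goes away.
	 */
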
>>> On my x86_64 test system with 384GB of memory per node I saw a reduction in
>>> initialization time from 1.85s to 1.38s as a result of this patch.
>>>
>>> Signed-off-by: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
>>
>> Hi Alex,
>>
>> Could you please split this patch into two parts:
>>
>> 1. Add deferred_init_maxorder()
>> 2. Add memblock iterator?
>>
>> This would allow better bisecting in case of problems. Changing the two
>> loops into deferred_init_maxorder(), while a good idea, is still
>> non-trivial and might lead to bugs.
>>
>> Thank you,
>> Pavel
> 
> I can do that, but I will need to flip the order. I will add the new
> iterator first and then deferred_init_maxorder. Otherwise the
> intermediate step ends up being too much throw-away code.

That sounds good.

Thank you,
Pavel

> 
> - Alex
> 
