Date:	Sun, 19 Jun 2016 10:49:26 +0800
From:	Wenwei Tao <ww.tao0320@...il.com>
To:	Vlastimil Babka <vbabka@...e.cz>
Cc:	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Fwd: [RFC PATCH 1/3] mm, page_alloc: free HIGHATOMIC page directly to
 the allocator

Hi,
The original message was somehow determined to be junk mail and
rejected by the system, so I am forwarding it here.

---------- Forwarded message ----------
From: Wenwei Tao <ww.tao0320@...il.com>
Date: 2016-06-19 10:40 GMT+08:00
Subject: Re: [RFC PATCH 1/3] mm, page_alloc: free HIGHATOMIC page
directly to the allocator
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Wenwei Tao <wwtao0320@....com>, akpm@...ux-foundation.org,
mgorman@...hsingularity.net, mhocko@...e.com, rientjes@...gle.com,
kirill.shutemov@...ux.intel.com, iamjoonsoo.kim@....com,
izumi.taku@...fujitsu.com, Johannes Weiner <hannes@...xchg.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org


2016-06-18 18:14 GMT+08:00 Vlastimil Babka <vbabka@...e.cz>:
> On 06/18/2016 11:34 AM, Wenwei Tao wrote:
>> From: Wenwei Tao <ww.tao0320@...il.com>
>>
>> Some pages might have already been allocated before the pageblock
>> was reserved as HIGHATOMIC. When freeing these pages, put them
>> directly back to the allocator instead of onto the pcp lists, since
>> they then have a chance to be merged into high-order pages.
>
> Are there some data showing the improvement, or just theoretical?
>

It's just theoretical. I was reading the mm code to understand it, and
thought this might be an optimization.
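
To make the merging argument concrete, here is a toy, self-contained
sketch (this is not kernel code; the names, the free-page bitmap and the
numbers are made up for illustration). A page parked on a pcp list stays
order-0 until the list is drained, while a page returned straight to the
buddy free lists can coalesce with an already-free buddy and move up to
a higher order:

#include <stdbool.h>
#include <stdio.h>

#define NR_PAGES 16UL
static bool page_free[NR_PAGES];        /* toy order-0 "is this page free?" map */

/* The buddy of a 2^order-sized block differs only in bit 'order' of its pfn. */
static unsigned long buddy_of(unsigned long pfn, unsigned int order)
{
        return pfn ^ (1UL << order);
}

/* Toy notion of a free block: every page in the 2^order range is free. */
static bool block_free(unsigned long pfn, unsigned int order)
{
        for (unsigned long i = 0; i < (1UL << order); i++)
                if (!page_free[pfn + i])
                        return false;
        return true;
}

/* Free one page into the "allocator" and merge upward while its buddy is free. */
static unsigned int free_and_merge(unsigned long pfn)
{
        unsigned int order = 0;

        page_free[pfn] = true;
        while ((1UL << (order + 1)) <= NR_PAGES &&
               block_free(buddy_of(pfn, order), order)) {
                pfn &= ~(1UL << order); /* merged block starts at the lower buddy */
                order++;
        }
        return order;                   /* order the freed page ended up at */
}

int main(void)
{
        page_free[1] = true;    /* pfn 1 is already free */
        /* Freeing pfn 0 directly merges it with pfn 1 into an order-1 block;
         * had pfn 0 gone to a pcp list instead, no merge would happen yet. */
        printf("freed page ends up at order %u\n", free_and_merge(0));
        return 0;
}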

>> Signed-off-by: Wenwei Tao <ww.tao0320@...il.com>
>> ---
>>  mm/page_alloc.c | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 6903b69..19f9e76 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -2412,7 +2412,8 @@ void free_hot_cold_page(struct page *page, bool cold)
>
> The full comment that's here for context:
>
> /*
>  * We only track unmovable, reclaimable and movable on pcp lists.
>  * Free ISOLATE pages back to the allocator because they are being
>  * offlined but treat RESERVE as movable pages so we can get those
>  * areas back if necessary. Otherwise, we may have to free
>  * excessively into the page allocator
>  */
>
> That comment looks outdated as it refers to RESERVE, which was replaced
> by HIGHATOMIC. But there's some reasoning why these pages go to
> pcplists. I'd expect the "free excessively" part isn't that bad, as
> highatomic reserves are quite limited. They also shouldn't be used for
> order-0 allocations, which is what this function is about, so I would
> expect both the impact on "free excessively" and the improvement from
> merging to be minimal?
>
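
To put a rough number on "quite limited": kernels of that era cap the
highatomic reserve at roughly 1% of a zone's managed pages plus one
pageblock (see reserve_highatomic_pageblock(); the exact expression
should be verified against the tree). A back-of-envelope sketch with
assumed sizes, not kernel code:

#include <stdio.h>

int main(void)
{
        /* Assumed example sizes, not taken from any real system. */
        unsigned long managed_pages = 4UL << 20;        /* a 16 GiB zone of 4 KiB pages */
        unsigned long pageblock_nr_pages = 512;         /* 2 MiB pageblocks on x86-64 */
        unsigned long max_highatomic = managed_pages / 100 + pageblock_nr_pages;

        printf("at most %lu of %lu pages (~%.1f%%) can be reserved as HIGHATOMIC\n",
               max_highatomic, managed_pages,
               100.0 * max_highatomic / managed_pages);
        return 0;
}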
>>        * excessively into the page allocator
>>        */
>>       if (migratetype >= MIGRATE_PCPTYPES) {
>> -             if (unlikely(is_migrate_isolate(migratetype))) {
>> +             if (unlikely(is_migrate_isolate(migratetype) ||
>> +                             migratetype == MIGRATE_HIGHATOMIC)) {
>>                       free_one_page(zone, page, pfn, 0, migratetype);
>>                       goto out;
>>               }
>
> In any case your patch highlighted that this code could be imho
> optimized like below.
>
> if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
>     if (is_migrate_cma(migratetype)) {
>         migratetype = MIGRATE_MOVABLE;
>     } else {
>         free_one_page(zone, page, pfn, 0, migratetype);
>         goto out;
>     }
> }
>
> That's fewer branches than your patch, and even fewer than the original
> if CMA is not enabled (or with ZONE_CMA).

Yeah, this looks better.
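
The suggested restructuring leans on the migratetype layout: only the
first MIGRATE_PCPTYPES types live on the pcp lists, while HIGHATOMIC,
CMA and ISOLATE sort at or above MIGRATE_PCPTYPES, and with CONFIG_CMA
disabled is_migrate_cma() is a compile-time constant, so that test
disappears and only the free_one_page() path remains. Roughly as in
include/linux/mmzone.h of that era (paraphrased for illustration, not a
verbatim copy; check the actual tree):

enum {
        MIGRATE_UNMOVABLE,
        MIGRATE_MOVABLE,
        MIGRATE_RECLAIMABLE,
        MIGRATE_PCPTYPES,       /* number of types kept on the pcp lists */
        MIGRATE_HIGHATOMIC = MIGRATE_PCPTYPES,
#ifdef CONFIG_CMA
        MIGRATE_CMA,
#endif
#ifdef CONFIG_MEMORY_ISOLATION
        MIGRATE_ISOLATE,
#endif
        MIGRATE_TYPES
};

#ifdef CONFIG_CMA
# define is_migrate_cma(migratetype) unlikely((migratetype) == MIGRATE_CMA)
#else
/* Compile-time false: the is_migrate_cma() branch in the snippet above
 * is optimized away entirely, leaving a single test and free_one_page(). */
# define is_migrate_cma(migratetype) false
#endif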
