Message-ID: <296cb740-f04d-6e2b-6480-4a426d2e57ce@huawei.com>
Date:   Mon, 13 Mar 2017 10:16:18 +0800
From:   Yisheng Xie <xieyisheng1@...wei.com>
To:     Vlastimil Babka <vbabka@...e.cz>, <linux-mm@...ck.org>,
        Johannes Weiner <hannes@...xchg.org>
CC:     Joonsoo Kim <iamjoonsoo.kim@....com>,
        David Rientjes <rientjes@...gle.com>,
        Mel Gorman <mgorman@...hsingularity.net>,
        <linux-kernel@...r.kernel.org>, <kernel-team@...com>,
        Hanjun Guo <guohanjun@...wei.com>
Subject: Re: [RFC v2 10/10] mm, page_alloc: introduce MIGRATE_MIXED
 migratetype

Hi, Vlastimil,

On 2017/3/8 15:07, Vlastimil Babka wrote:
> On 03/08/2017 03:16 AM, Yisheng Xie wrote:
>> Hi Vlastimil ,
>>
>> On 2017/2/11 1:23, Vlastimil Babka wrote:
>>> @@ -1977,7 +1978,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>>>  	unsigned int current_order = page_order(page);
>>>  	struct free_area *area;
>>>  	int free_pages, good_pages;
>>> -	int old_block_type;
>>> +	int old_block_type, new_block_type;
>>>  
>>>  	/* Take ownership for orders >= pageblock_order */
>>>  	if (current_order >= pageblock_order) {
>>> @@ -1991,11 +1992,27 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>>>  	if (!whole_block) {
>>>  		area = &zone->free_area[current_order];
>>>  		list_move(&page->lru, &area->free_list[start_type]);
>>> -		return;
>>> +		free_pages = 1 << current_order;
>>> +		/* TODO: We didn't scan the block, so be pessimistic */
>>> +		good_pages = 0;
>>> +	} else {
>>> +		free_pages = move_freepages_block(zone, page, start_type,
>>> +							&good_pages);
>>> +		/*
>>> +		 * good_pages is now the number of movable pages, but if we
>>> +		 * want UNMOVABLE or RECLAIMABLE, we consider all non-movable
>>> +		 * as good (but we can't fully distinguish them)
>>> +		 */
>>> +		if (start_type != MIGRATE_MOVABLE)
>>> +			good_pages = pageblock_nr_pages - free_pages -
>>> +								good_pages;
>>>  	}
>>>  
>>>  	free_pages = move_freepages_block(zone, page, start_type,
>>>  						&good_pages);
>> It seems this move_freepages_block() call should be removed: if we can steal the whole block,
>> we just do it; if not, we can check whether we can set it as a mixed migratetype, right?
>> Please let me know if I missed something.
> 
> Right. My results suggested this patch was buggy, so this might be the
> bug (or one of the bugs), thanks for pointing it out. I've reposted v3
> without the RFC patches 9 and 10 and will return to them later.
Yes, I have also tested this patch on v4.1, but could not get better performance.
It would be much appreciated if you could Cc me when you send patches 9 and 10 later.
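
In case it helps, the change I had in mind is simply dropping the leftover call
after the if/else, since both branches already account for free_pages and
good_pages. An untested sketch against your RFC v2 (context lines approximate):

@@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 		if (start_type != MIGRATE_MOVABLE)
 			good_pages = pageblock_nr_pages - free_pages -
 								good_pages;
 	}
 
-	free_pages = move_freepages_block(zone, page, start_type,
-						&good_pages);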

Thanks
Yisheng Xie.
