Date:	Wed, 30 Jul 2014 11:39:55 +0200
From:	Vlastimil Babka <vbabka@...e.cz>
To:	David Rientjes <rientjes@...gle.com>
CC:	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	Minchan Kim <minchan@...nel.org>,
	Joonsoo Kim <iamjoonsoo.kim@....com>,
	Michal Nazarewicz <mina86@...a86.com>,
	Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
	Christoph Lameter <cl@...ux.com>,
	Rik van Riel <riel@...hat.com>, Mel Gorman <mgorman@...e.de>,
	Zhang Yanfei <zhangyanfei@...fujitsu.com>
Subject: Re: [PATCH v5 05/14] mm, compaction: move pageblock checks up from
 isolate_migratepages_range()

On 07/30/2014 01:02 AM, David Rientjes wrote:
>>>>
>>>>    /*
>>>> - * Isolate all pages that can be migrated from the block pointed to by
>>>> - * the migrate scanner within compact_control.
>>>> + * Isolate all pages that can be migrated from the first suitable block,
>>>> + * starting at the block pointed to by the migrate scanner pfn within
>>>> + * compact_control.
>>>>     */
>>>>    static isolate_migrate_t isolate_migratepages(struct zone *zone,
>>>>    					struct compact_control *cc)
>>>>    {
>>>>    	unsigned long low_pfn, end_pfn;
>>>> +	struct page *page;
>>>> +	const isolate_mode_t isolate_mode =
>>>> +		(cc->mode == MIGRATE_ASYNC ? ISOLATE_ASYNC_MIGRATE : 0);
>>>>
>>>> -	/* Do not scan outside zone boundaries */
>>>> -	low_pfn = max(cc->migrate_pfn, zone->zone_start_pfn);
>>>> +	/*
>>>> +	 * Start at where we last stopped, or beginning of the zone as
>>>> +	 * initialized by compact_zone()
>>>> +	 */
>>>> +	low_pfn = cc->migrate_pfn;
>>>>
>>>>    	/* Only scan within a pageblock boundary */
>>>>    	end_pfn = ALIGN(low_pfn + 1, pageblock_nr_pages);
>>>>
>>>> -	/* Do not cross the free scanner or scan within a memory hole */
>>>> -	if (end_pfn > cc->free_pfn || !pfn_valid(low_pfn)) {
>>>> -		cc->migrate_pfn = end_pfn;
>>>> -		return ISOLATE_NONE;
>>>> -	}
>>>> +	/*
>>>> +	 * Iterate over whole pageblocks until we find the first suitable.
>>>> +	 * Do not cross the free scanner.
>>>> +	 */
>>>> +	for (; end_pfn <= cc->free_pfn;
>>>> +			low_pfn = end_pfn, end_pfn += pageblock_nr_pages) {
>>>> +
>>>> +		/*
>>>> +		 * This can potentially iterate a massively long zone with
>>>> +		 * many pageblocks unsuitable, so periodically check if we
>>>> +		 * need to schedule, or even abort async compaction.
>>>> +		 */
>>>> +		if (!(low_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages))
>>>> +						&& compact_should_abort(cc))
>>>> +			break;
>>>> +
>>>> +		/* Skip whole pageblock in case of a memory hole */
>>>> +		if (!pfn_valid(low_pfn))
>>>> +			continue;
>>>> +
>>>> +		page = pfn_to_page(low_pfn);
>>>> +
>>>> +		/* If isolation recently failed, do not retry */
>>>> +		if (!isolation_suitable(cc, page))
>>>> +			continue;
>>>> +
>>>> +		/*
>>>> +		 * For async compaction, also only scan in MOVABLE blocks.
>>>> +		 * Async compaction is optimistic to see if the minimum amount
>>>> +		 * of work satisfies the allocation.
>>>> +		 */
>>>> +		if (cc->mode == MIGRATE_ASYNC &&
>>>> +		    !migrate_async_suitable(get_pageblock_migratetype(page)))
>>>> +			continue;
>>>> +
>>>> +		/* Perform the isolation */
>>>> +		low_pfn = isolate_migratepages_block(cc, low_pfn, end_pfn,
>>>> +								isolate_mode);
>>>
>>> Hmm, why would we want to unconditionally set pageblock_skip if no pages
>>> could be isolated from a pageblock when
>>> isolate_mode == ISOLATE_ASYNC_MIGRATE?  It seems like it erroneously skips
>>> pageblocks for cases when isolate_mode == 0.
>>
>> Well, pageblock_skip is a single bit and you don't know whether the next
>> attempt will be async or sync. As it stands, you might skip needlessly if
>> the next attempt is sync. If we changed that, you wouldn't skip if the
>> next attempt is async again. One way could be better than the other, but
>> I'm not sure, and I would consider it separately.
>> The former patch 15 (quickly skip pageblocks that won't be fully migrated)
>> could perhaps change the balance here.
>>
>
> That's why we have two separate per-zone cached start pfns, though, right?
> The next call to async compaction should start from where the previous
> caller left off so there would be no need to set pageblock skip in that
> case until we have checked all memory.  Or are you considering the case of
> concurrent async compaction?

Ah, the lifecycle of the cached pfns and the pageblock_skip bits is not 
generally in sync. The cached pfns may be reset while the pageblock_skip 
bits remain, so one async pass would still be setting hints for the next 
async pass.

But maybe we've already reduced the impact of sync compaction enough 
that it could now ignore pageblock_skip completely, and leave those 
hints only for async compaction.

