Date:	Tue, 10 Jan 2012 16:04:10 +0100
From:	"Michal Nazarewicz" <mina86@...a86.com>
To:	"Marek Szyprowski" <m.szyprowski@...sung.com>,
	"Mel Gorman" <mel@....ul.ie>
Cc:	linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
	linux-media@...r.kernel.org, linux-mm@...ck.org,
	linaro-mm-sig@...ts.linaro.org,
	"Kyungmin Park" <kyungmin.park@...sung.com>,
	"Russell King" <linux@....linux.org.uk>,
	"Andrew Morton" <akpm@...ux-foundation.org>,
	"KAMEZAWA Hiroyuki" <kamezawa.hiroyu@...fujitsu.com>,
	"Daniel Walker" <dwalker@...eaurora.org>,
	"Arnd Bergmann" <arnd@...db.de>,
	"Jesse Barker" <jesse.barker@...aro.org>,
	"Jonathan Corbet" <corbet@....net>,
	"Shariq Hasnain" <shariq.hasnain@...aro.org>,
	"Chunsang Jeong" <chunsang.jeong@...aro.org>,
	"Dave Hansen" <dave@...ux.vnet.ibm.com>,
	"Benjamin Gaignard" <benjamin.gaignard@...aro.org>
Subject: Re: [PATCH 02/11] mm: compaction: introduce
 isolate_{free,migrate}pages_range().

On Tue, 10 Jan 2012 14:43:51 +0100, Mel Gorman <mel@....ul.ie> wrote:

> On Thu, Dec 29, 2011 at 01:39:03PM +0100, Marek Szyprowski wrote:
>> From: Michal Nazarewicz <mina86@...a86.com>

>> +static unsigned long
>> +isolate_freepages_range(struct zone *zone,
>> +			unsigned long start_pfn, unsigned long end_pfn,
>> +			struct list_head *freelist)
>>  {
>> -	unsigned long zone_end_pfn, end_pfn;
>> -	int nr_scanned = 0, total_isolated = 0;
>> -	struct page *cursor;
>> -
>> -	/* Get the last PFN we should scan for free pages at */
>> -	zone_end_pfn = zone->zone_start_pfn + zone->spanned_pages;
>> -	end_pfn = min(blockpfn + pageblock_nr_pages, zone_end_pfn);
>> +	unsigned long nr_scanned = 0, total_isolated = 0;
>> +	unsigned long pfn = start_pfn;
>> +	struct page *page;
>>
>> -	/* Find the first usable PFN in the block to initialse page cursor */
>> -	for (; blockpfn < end_pfn; blockpfn++) {
>> -		if (pfn_valid_within(blockpfn))
>> -			break;
>> -	}
>> -	cursor = pfn_to_page(blockpfn);
>> +	VM_BUG_ON(!pfn_valid(pfn));
>> +	page = pfn_to_page(pfn);
>
> The existing code is able to find the first usable PFN in a pageblock
> with pfn_valid_within(). It's allowed to do that because it knows
> the pageblock is valid so calling pfn_valid() is unnecessary.
>
> It is curious to change this to something that can sometimes BUG_ON
> !pfn_valid(pfn) instead of having a PFN walker that knows how to
> handle pfn_valid().

Actually, this walker seems redundant since one is already present in
isolate_freepages(), i.e. if !pfn_valid(pfn) then the loop in
isolate_freepages() will “continue;” and this function will never get
called.
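
For reference, the relevant loop in isolate_freepages() goes roughly like
this (a paraphrased sketch, not the exact source):

	for (; pfn > low_pfn; pfn -= pageblock_nr_pages) {
		/* skip pageblocks whose starting PFN is not valid */
		if (!pfn_valid(pfn))
			continue;

		/*
		 * Only here, with pfn_valid(pfn) already checked, is the
		 * block/range isolation helper called.
		 */
		...
	}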

>> +cleanup:
>> +	/*
>> +	 * Undo what we have done so far, and return.  We know all pages from
>> +	 * [start_pfn, pfn) are free because we have just freed them.  If one of
>> +	 * the page in the range was not freed, we would end up here earlier.
>> +	 */
>> +	for (; start_pfn < pfn; ++start_pfn)
>> +		__free_page(pfn_to_page(start_pfn));
>> +	return 0;
>
> This returns 0 even though you could have isolated some pages.

The loop's intention is to “unisolate” (i.e. free) anything that got
isolated, so at the end the number of isolated pages is in fact zero.

> Overall, there would have been less contortion if you
> implemented isolate_freepages_range() in terms of the existing
> isolate_freepages_block. Something that looked kinda like this
> not-compiled, untested illustration.
>
> static unsigned long isolate_freepages_range(struct zone *zone,
>                         unsigned long start_pfn, unsigned long end_pfn,
>                         struct list_head *list)
> {
>         unsigned long pfn = start_pfn;
>         unsigned long isolated = 0;
>
>         for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages) {
>                 if (!pfn_valid(pfn))
>                         continue;
>                 isolated += isolate_freepages_block(zone, pfn, list);
>         }
>
>         return isolated;
> }
>
> I neglected to update isolate_freepages_block() to take the end_pfn
> meaning it will in fact isolate too much but that would be easy to
> address.

This is not a problem, since my isolate_freepages_range() implementation
can go beyond end_pfn anyway.

Of course, having the function take end_pfn is an optimisation, since it
does not have to compute zone_end each time.
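
That is, without taking end_pfn, each call would have to recompute the
bound itself, roughly what the removed lines above did:

	zone_end_pfn = zone->zone_start_pfn + zone->spanned_pages;
	end_pfn = min(pfn + pageblock_nr_pages, zone_end_pfn);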

> It would be up to yourself whether to shuffle the tracepoint
> around because with this example it will be triggered once per
> pageblock. You'd still need the cleanup code and freelist check in
> isolate_freepages_block() of course.
>
> The point is that it would minimise the disruption to the existing
> code and easier to get details like pfn_valid() right without odd
> contortions, more gotos than should be necessary and trying to keep
> pfn and page straight.

Even though I'd personally go with my approach, I see merit in your
point, so I will change it.

>>  }
>>
>>  /* Returns true if the page is within a block suitable for migration to */

>> @@ -365,17 +403,49 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
>>  		nr_isolated++;
>>
>>  		/* Avoid isolating too much */
>> -		if (cc->nr_migratepages == COMPACT_CLUSTER_MAX)
>> +		if (cc->nr_migratepages == COMPACT_CLUSTER_MAX) {
>> +			++low_pfn;
>>  			break;
>> +		}
>>  	}
>
> This minor optimisation is unrelated to the implementation of
> isolate_migratepages_range() and should be in its own patch.

It's not actually a mere minor optimisation; it makes the function work
according to the documentation added, i.e. that it returns the “PFN of the
first page that was not scanned”.
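
In other words, the intent in isolate_migratepages_range() is roughly the
following (a sketch, not the exact code):

	/*
	 * low_pfn must end up one past the last page actually scanned,
	 * so the early break has to advance it first.
	 */
	if (cc->nr_migratepages == COMPACT_CLUSTER_MAX) {
		++low_pfn;
		break;
	}
	...
	/* PFN of the first page that was not scanned */
	return low_pfn;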

-- 
Best regards,                                         _     _
.o. | Liege of Serenely Enlightened Majesty of      o' \,=./ `o
..o | Computer Science,  Michał “mina86” Nazarewicz    (o o)
ooo +----<email/xmpp: mpn@...gle.com>--------------ooO--(_)--Ooo--
