Message-ID: <545375B2.6050800@suse.cz>
Date:	Fri, 31 Oct 2014 12:42:42 +0100
From:	Vlastimil Babka <vbabka@...e.cz>
To:	Joonsoo Kim <iamjoonsoo.kim@....com>,
	Andrew Morton <akpm@...ux-foundation.org>
CC:	David Rientjes <rientjes@...gle.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, Minchan Kim <minchan@...nel.org>,
	Michal Nazarewicz <mina86@...a86.com>,
	Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
	Christoph Lameter <cl@...ux.com>,
	Rik van Riel <riel@...hat.com>, Mel Gorman <mgorman@...e.de>,
	Zhang Yanfei <zhangyanfei@...fujitsu.com>
Subject: Re: [PATCH for v3.18] mm/compaction: skip the range until proper
 target pageblock is met

On 10/31/2014 08:23 AM, Joonsoo Kim wrote:
> commit 7d49d8868336 ("mm, compaction: reduce zone checking frequency in
> the migration scanner") has the side effect of changing the iteration
> range calculation. Before that change, block_end_pfn was calculated
> from start_pfn; now, pageblock_nr_pages is blindly added to the
> previous value.
>
> This causes isolate_start_pfn to be larger than block_end_pfn when we
> isolate a page of more than pageblock order. In this case, isolation
> fails due to the invalid range parameter.
>
> To prevent this, this patch skips the range until the proper target
> pageblock is met. Without this patch, CMA with more than pageblock
> order always fails; with this patch, it succeeds.
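
For concreteness, if I read the failure correctly: with, say,
pageblock_nr_pages = 512, isolating an order-10 (1024-page) buddy page
advances pfn by 1024 while block_end_pfn only advances by 512, so the
next iteration hands isolate_freepages_block() an empty or inverted
range, it returns 0, and strict mode fails the whole isolation.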

Well, that's a shame: this is the third fix you've sent for my series... 
And only the first was caught before going mainline. I guess the -rcX 
phase is intended for this, but how could we do better and catch this 
in -next?
Anyway, thanks!

> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>
> ---
>   mm/compaction.c |    6 ++++--
>   1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index ec74cf0..212682a 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -472,18 +472,20 @@ isolate_freepages_range(struct compact_control *cc,
>   	pfn = start_pfn;
>   	block_end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
>
> -	for (; pfn < end_pfn; pfn += isolated,
> -				block_end_pfn += pageblock_nr_pages) {
> +	for (; pfn < end_pfn; block_end_pfn += pageblock_nr_pages) {
>   		/* Protect pfn from changing by isolate_freepages_block */
>   		unsigned long isolate_start_pfn = pfn;
>
>   		block_end_pfn = min(block_end_pfn, end_pfn);
> +		if (pfn >= block_end_pfn)
> +			continue;

Without any comment, this will surely confuse anyone reading the code.
I also wonder whether just recalculating block_end_pfn from pfn wouldn't 
be cheaper CPU-wise (not that it matters much?) and easier to understand 
than the conditional. IIRC, backward jumps (i.e. continue) are predicted 
as taken by default when there's no history in the branch predictor 
cache, but this branch is rather unlikely, no?
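
Something along these lines is what I have in mind (an untested sketch
of the loop, assuming nothing else depends on the incremented value):

	for (; pfn < end_pfn; pfn += isolated) {
		/* Protect pfn from changing by isolate_freepages_block */
		unsigned long isolate_start_pfn = pfn;

		/*
		 * Recalculate the end of the pageblock containing pfn
		 * instead of blindly advancing it, so an isolation that
		 * crossed a pageblock boundary needs no special case.
		 */
		block_end_pfn = min(ALIGN(pfn + 1, pageblock_nr_pages),
				    end_pfn);

		if (!pageblock_pfn_to_page(pfn, block_end_pfn, cc->zone))
			break;

		isolated = isolate_freepages_block(cc, &isolate_start_pfn,
						block_end_pfn, &freelist, true);
		...
	}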

>   		if (!pageblock_pfn_to_page(pfn, block_end_pfn, cc->zone))
>   			break;
>
>   		isolated = isolate_freepages_block(cc, &isolate_start_pfn,
>   						block_end_pfn, &freelist, true);
> +		pfn += isolated;

Moving the "pfn += isolated" here doesn't change anything, or does it? 
Do you just find it nicer?

>   		/*
>   		 * In strict mode, isolate_freepages_block() returns 0 if
>
