Message-ID: <4FF25ED9.5070504@kernel.org>
Date:	Tue, 03 Jul 2012 11:54:17 +0900
From:	Minchan Kim <minchan@...nel.org>
To:	Rik van Riel <riel@...hat.com>
CC:	Sasha Levin <levinsasha928@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, Mel Gorman <mel@....ul.ie>,
	jaschut@...dia.gov, kamezawa.hiroyu@...fujitsu.com,
	Dave Jones <davej@...hat.com>
Subject: Re: [PATCH -mm v2] mm: have order > 0 compaction start off where
 it left

On 07/03/2012 09:57 AM, Rik van Riel wrote:

> On 07/02/2012 01:42 PM, Sasha Levin wrote:
>> On Thu, 2012-06-28 at 14:35 -0700, Andrew Morton wrote:
>>> On Thu, 28 Jun 2012 17:24:25 -0400 Rik van Riel <riel@...hat.com> wrote:
>>>>
>>>>>> @@ -463,6 +474,8 @@ static void isolate_freepages(struct zone *zone,
>>>>>>              */
>>>>>>             if (isolated)
>>>>>>                     high_pfn = max(high_pfn, pfn);
>>>>>> +          if (cc->order > 0)
>>>>>> +                  zone->compact_cached_free_pfn = high_pfn;
>>>>>
>>>>> Is high_pfn guaranteed to be aligned to pageblock_nr_pages here?  I
>>>>> assume so, if lots of code in other places is correct but it's
>>>>> unobvious from reading this function.
>>>>
>>>> Reading the code a few more times, I believe that it is
>>>> indeed aligned to pageblock size.
>>>
>>> I'll slip this into -next for a while.
>>>
>>> ---
>>> a/mm/compaction.c~isolate_freepages-check-that-high_pfn-is-aligned-as-expected
>>>
>>> +++ a/mm/compaction.c
>>> @@ -456,6 +456,7 @@ static void isolate_freepages(struct zon
>>>                  }
>>>                  spin_unlock_irqrestore(&zone->lock, flags);
>>>
>>> +               WARN_ON_ONCE(high_pfn & (pageblock_nr_pages - 1));
>>>                  /*
>>>                   * Record the highest PFN we isolated pages from. When next
>>>                   * looking for free pages, the search will restart here as
>>
>> I've triggered the following with today's -next:
> 
> I've been staring at the migrate code for most of the afternoon,
> and am not sure how this is triggered.
> 
> At this point, I'm going to focus my attention on addressing
> Minchan's comments on my code, and hoping someone who is more
> familiar with the migrate code knows how high_pfn ends up
> being not pageblock_nr_pages aligned...
> 


migrate_pfn does not necessarily start out pageblock-aligned.

        /* Setup to move all movable pages to the end of the zone */
        cc->migrate_pfn = zone->zone_start_pfn;

In isolate_freepages(), high_pfn = low_pfn = cc->migrate_pfn + pageblock_nr_pages,
so if migrate_pfn isn't pageblock-aligned, high_pfn isn't either.

-- 
Kind regards,
Minchan Kim
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
