Message-ID: <20181220155247.qbyptzk35xr7ey72@d104.suse.de>
Date: Thu, 20 Dec 2018 16:52:51 +0100
From: Oscar Salvador <osalvador@...e.de>
To: Wei Yang <richard.weiyang@...il.com>
Cc: Michal Hocko <mhocko@...nel.org>, akpm@...ux-foundation.org,
vbabka@...e.cz, pavel.tatashin@...rosoft.com,
rppt@...ux.vnet.ibm.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] mm, page_alloc: Fix has_unmovable_pages for HugePages
On Thu, Dec 20, 2018 at 03:32:37PM +0000, Wei Yang wrote:
> Now let's go back to see how to calculate new_iter. From the chart
> above, we can see this formula stands for all three cases:
>
> new_iter = round_up(iter + 1, page_size(HugePage))
>
> So it looks the first version is correct.
Let us assume:
* iter = 0 (page first of the pageblock)
* page is a tail
* hugepage is 2MB
So we have the following:
iter = round_up(iter + 1, 1<<compound_order(head)) - 1;
which translates to:
iter = round_up(1, 512) - 1 = 511;
Then iter will be incremented to 512, and we break the loop.
The outcome of this is that out of 512 pages, we only scanned 1,
and we skipped all the other 511 pages by mistake.
--
Oscar Salvador
SUSE L3