Message-ID: <20161011041916.GA30973@bbox>
Date:   Tue, 11 Oct 2016 13:19:16 +0900
From:   Minchan Kim <minchan@...nel.org>
To:     Vlastimil Babka <vbabka@...e.cz>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Sangseok Lee <sangseok.lee@....com>
Subject: Re: [PATCH 1/4] mm: adjust reserved highatomic count

Hi Vlastimil,

On Mon, Oct 10, 2016 at 08:57:40AM +0200, Vlastimil Babka wrote:
> On 10/07/2016 04:29 PM, Minchan Kim wrote:
> >>>In that case, we should adjust nr_reserved_highatomic.
> >>>Otherwise, the VM cannot reserve highatomic pageblocks any more
> >>>even though it hasn't reached the 1% limit, which means high-order
> >>>atomic allocation failures become more likely.
> >>>
> >>>So, this patch decreases the count, as well as changing the
> >>>migratetype, if the pageblock was MIGRATE_HIGHATOMIC.
> >>>
> >>>Signed-off-by: Minchan Kim <minchan@...nel.org>
> >>
> >>Hm, wouldn't it be simpler just to prevent the pageblock's migratetype from
> >>being changed if it's highatomic? Possibly also not do move_freepages_block() in
> >
> >It could be. Actually, I did it by modifying can_steal_fallback to return
> >false if it found the pageblock was highatomic, but changed to this
> >approach because I didn't have any justification for preventing the
> >pageblock change.
> >If you can give a concrete justification that others won't object to, I am
> >happy to do what you suggested.
> 
> Well, MIGRATE_HIGHATOMIC is not listed in the fallbacks array at all, so we
> are not supposed to steal from it in the first place. Stealing will only
> happen due to races, which would be too costly to close, so we allow them
> and expect them to be rare. But we shouldn't allow them to break the accounting.
> 

Fair enough.
How about this?

From 4a0b6a74ebf1af7f90720b0028da49e2e2a2b679 Mon Sep 17 00:00:00 2001
From: Minchan Kim <minchan@...nel.org>
Date: Thu, 6 Oct 2016 13:38:35 +0900
Subject: [PATCH] mm: don't steal highatomic pageblock

In the page freeing path, migratetype is racy, so a highatomic page can
be freed into a non-highatomic free list. If that page is later
allocated, the VM can change the pageblock from highatomic to another
type. In that case, highatomic pageblock accounting is broken and no
longer works (e.g., the VM cannot reserve more highatomic pageblocks
even though it hasn't reached the 1% limit).

So, this patch prohibits changing a pageblock from highatomic to any
other type. This is safe because MIGRATE_HIGHATOMIC is not listed in
the fallbacks array, so stealing will only happen due to unexpected
races, which are really rare. Also, this prohibition keeps highatomic
pageblocks around longer, which should help high-order atomic page
allocations.

Signed-off-by: Minchan Kim <minchan@...nel.org>
---
 mm/page_alloc.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 55ad0229ebf3..79853b258211 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2154,7 +2154,8 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
 
 		page = list_first_entry(&area->free_list[fallback_mt],
 						struct page, lru);
-		if (can_steal)
+		if (can_steal &&
+			get_pageblock_migratetype(page) != MIGRATE_HIGHATOMIC)
 			steal_suitable_fallback(zone, page, start_migratetype);
 
 		/* Remove the page from the freelists */
@@ -2555,7 +2556,8 @@ int __isolate_free_page(struct page *page, unsigned int order)
 		struct page *endpage = page + (1 << order) - 1;
 		for (; page < endpage; page += pageblock_nr_pages) {
 			int mt = get_pageblock_migratetype(page);
-			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt))
+			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt)
+				&& mt != MIGRATE_HIGHATOMIC)
 				set_pageblock_migratetype(page,
 							  MIGRATE_MOVABLE);
 		}
-- 
2.7.4
