Date:   Wed, 8 Feb 2017 15:22:00 +0000
From:   Mel Gorman <mgorman@...hsingularity.net>
To:     Andrew Morton <akpm@...ux-foundation.org>
Cc:     Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        Michal Hocko <mhocko@...nel.org>,
        Vlastimil Babka <vbabka@...e.cz>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...nel.org>
Subject: [PATCH] mm, page_alloc: only use per-cpu allocator for irq-safe
 requests -fix v2

preempt_enable_no_resched() was used on the basis of review feedback to
which I had no strong objection at the time. The thinking was that it
avoided adding a preemption point where one did not exist before, so the
feedback was applied. That reasoning was wrong.

As Thomas Gleixner explained, there was an indirect preemption point: an
interrupt can set_need_resched() during the critical section, which makes
the subsequent preempt_enable() a preemption point that matters. Using
preempt_enable_no_resched() there is bad from both a mainline and an RT
perspective and a violation of the preemption mechanism. Peter Zijlstra
noted that "the only acceptable use of preempt_enable_no_resched() is if
the next statement is a schedule() variant".

The usage was outright broken and I should have stuck with
preempt_enable() as originally developed. Previous tests showed no
detectable performance difference from using
preempt_enable_no_resched().

This is a fix to the mmotm patch
mm-page_alloc-only-use-per-cpu-allocator-for-irq-safe-requests.patch

Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>
---
 mm/page_alloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index eaecb4b145e6..2a36dad03dac 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2520,7 +2520,7 @@ void free_hot_cold_page(struct page *page, bool cold)
 	}
 
 out:
-	preempt_enable_no_resched();
+	preempt_enable();
 }
 
 /*
@@ -2686,7 +2686,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
 		__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
 		zone_statistics(preferred_zone, zone);
 	}
-	preempt_enable_no_resched();
+	preempt_enable();
 	return page;
 }
 

-- 
Mel Gorman
SUSE Labs
