Date:	Wed, 30 Mar 2016 10:22:07 +0000
From:	He Kuang <hekuang@...wei.com>
To:	<akpm@...ux-foundation.org>, <mgorman@...hsingularity.net>,
	<mhocko@...e.com>, <vbabka@...e.cz>, <rientjes@...gle.com>,
	<cody@...ux.vnet.ibm.com>
CC:	<gilad@...yossef.com>, <kosaki.motohiro@...il.com>,
	<mgorman@...e.de>, <penberg@...nel.org>, <lizefan@...wei.com>,
	<wangnan0@...wei.com>, <hekuang@...wei.com>, <linux-mm@...ck.org>,
	<linux-kernel@...r.kernel.org>
Subject: [PATCH] Revert "mm/page_alloc: protect pcp->batch accesses with ACCESS_ONCE"

This reverts commit 998d39cb236fe464af86a3492a24d2f67ee1efc2.

When local IRQs are disabled, a percpu variable does not change, so we
can remove the access macros and let the compiler optimize the code
safely.

Signed-off-by: He Kuang <hekuang@...wei.com>
---
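Note for reviewers unfamiliar with the macro being removed: below is a
minimal standalone sketch (GNU C, userspace) of what ACCESS_ONCE() /
READ_ONCE() provide, namely a single volatile load so the compiler
cannot re-read the variable between uses. The names shared_batch and
drain() are hypothetical and are not from page_alloc.c; only the
ACCESS_ONCE() definition matches the kernel's classic one.

/*
 * Sketch only: illustrates the volatile-load semantics of
 * ACCESS_ONCE()/READ_ONCE(). Not kernel code.
 */
#include <stdio.h>

/* Classic kernel definition of ACCESS_ONCE (uses GNU C typeof). */
#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

static int shared_batch = 31;		/* stands in for pcp->batch */

static void drain(int count)
{
	/*
	 * One volatile load: 'batch' is read exactly once, so every
	 * later use of it within this function sees the same value
	 * even if another context updates shared_batch meanwhile.
	 * This patch argues the macro is unneeded at the two call
	 * sites below because they run with local IRQs disabled.
	 */
	int batch = ACCESS_ONCE(shared_batch);
	int to_drain = count < batch ? count : batch;

	printf("draining %d pages (count=%d, batch=%d)\n",
	       to_drain, count, batch);
}

int main(void)
{
	drain(100);
	return 0;
}
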
 mm/page_alloc.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 59de90d..4575b82 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2015,11 +2015,10 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
 {
 	unsigned long flags;
-	int to_drain, batch;
+	int to_drain;
 
 	local_irq_save(flags);
-	batch = READ_ONCE(pcp->batch);
-	to_drain = min(pcp->count, batch);
+	to_drain = min(pcp->count, pcp->batch);
 	if (to_drain > 0) {
 		free_pcppages_bulk(zone, to_drain, pcp);
 		pcp->count -= to_drain;
@@ -2217,9 +2216,8 @@ void free_hot_cold_page(struct page *page, bool cold)
 		list_add_tail(&page->lru, &pcp->lists[migratetype]);
 	pcp->count++;
 	if (pcp->count >= pcp->high) {
-		unsigned long batch = READ_ONCE(pcp->batch);
-		free_pcppages_bulk(zone, batch, pcp);
-		pcp->count -= batch;
+		free_pcppages_bulk(zone, pcp->batch, pcp);
+		pcp->count -= pcp->batch;
 	}
 
 out:
-- 
1.8.5.2
