Message-ID: <20171009054434.GA1798@intel.com>
Date:   Mon, 9 Oct 2017 13:44:35 +0800
From:   Aaron Lu <aaron.lu@...el.com>
To:     linux-mm <linux-mm@...ck.org>, lkml <linux-kernel@...r.kernel.org>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Andi Kleen <ak@...ux.intel.com>,
        Dave Hansen <dave.hansen@...el.com>,
        Huang Ying <ying.huang@...el.com>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        Kemi Wang <kemi.wang@...el.com>
Subject: [PATCH] page_alloc.c: inline __rmqueue()

__rmqueue() is called by rmqueue_bulk() and rmqueue() under zone->lock,
and that lock can be heavily contended by memory-intensive applications.
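For reference, a minimal userspace sketch of the effect (illustrative
only, not kernel code: a pthread spinlock stands in for zone->lock and
the hypothetical helpers stand in for __rmqueue()). The call/return
overhead of an out-of-line helper is paid while the lock is held, so
inlining a small helper shortens the critical section:

/* Build with: gcc -O2 -pthread inline_demo.c */
#include <pthread.h>
#include <stdio.h>

static pthread_spinlock_t lock;
/* volatile keeps -O2 from collapsing the increment loops below */
static volatile long counter;

/* Stands in for the out-of-line __rmqueue() before this patch. */
static __attribute__((noinline)) void helper_noinline(void)
{
	counter++;
}

/* Stands in for the inlined __rmqueue() after this patch. */
static __attribute__((always_inline)) inline void helper_inline(void)
{
	counter++;
}

int main(void)
{
	long i;

	pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);

	pthread_spin_lock(&lock);
	for (i = 0; i < 100000000; i++)
		helper_noinline();	/* call overhead paid inside the critical section */
	pthread_spin_unlock(&lock);

	pthread_spin_lock(&lock);
	for (i = 0; i < 100000000; i++)
		helper_inline();	/* body folded into the loop, no call */
	pthread_spin_unlock(&lock);

	printf("counter = %ld\n", counter);
	return 0;
}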

Since __rmqueue() is a small function, inlining it can save us some time.
With the will-it-scale/page_fault1/process benchmark, using nr_cpu
processes to stress the buddy allocator:

On a 2-socket Intel Skylake machine:
      base          %change       head
     77342            +6.3%      82203        will-it-scale.per_process_ops

On a 4-socket Intel Skylake machine:
      base          %change       head
     75746            +4.6%      79248        will-it-scale.per_process_ops

This patch adds the inline keyword to __rmqueue().

Signed-off-by: Aaron Lu <aaron.lu@...el.com>
---
 mm/page_alloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0e309ce4a44a..c9605c7ebaf6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2291,7 +2291,7 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype)
  * Do the hard work of removing an element from the buddy allocator.
  * Call me with the zone->lock already held.
  */
-static struct page *__rmqueue(struct zone *zone, unsigned int order,
+static inline struct page *__rmqueue(struct zone *zone, unsigned int order,
 				int migratetype)
 {
 	struct page *page;
-- 
2.13.6
