Date:	Tue, 03 Feb 2009 15:29:05 +0800
From:	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
To:	Pekka Enberg <penberg@...helsinki.fi>
Cc:	Hugh Dickins <hugh@...itas.com>, Nick Piggin <npiggin@...e.de>,
	Linux Memory Management List <linux-mm@...ck.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Lin Ming <ming.m.lin@...el.com>,
	Christoph Lameter <cl@...ux-foundation.org>
Subject: Re: [patch] SLQB slab allocator

On Mon, 2009-02-02 at 11:00 +0200, Pekka Enberg wrote:
> Hi Yanmin,
> 
> On Mon, 2009-02-02 at 11:38 +0800, Zhang, Yanmin wrote:
> > Can we add a check of the free page count/percentage in function
> > allocate_slab so that we can bypass the first alloc_pages attempt when
> > memory is tight?
> 
> If the check isn't too expensive, I don't see any reason not to. How would
> you go about checking how many free pages there are, though? Is there
> something in the page allocator that we can use for this?

We can use nr_free_pages(), totalram_pages and hugetlb_total_pages(). The
patch below is an attempt. I tested it with hackbench and tbench on my stoakley
(2 quad-core processors) and tigerton (4 quad-core processors) machines. There
is almost no regression.

Besides this patch, I have another patch that tries to reduce the repeated
calculation of "totalram_pages - hugetlb_total_pages()", but it touches many
files, so I'm posting just this first, simple patch here for review.


Hugh,

Would you like to test it on your machines?

Thanks,
Yanmin


---

--- linux-2.6.29-rc2/mm/slub.c	2009-01-20 14:20:45.000000000 +0800
+++ linux-2.6.29-rc2_slubfreecheck/mm/slub.c	2009-02-03 14:40:52.000000000 +0800
@@ -23,6 +23,8 @@
 #include <linux/debugobjects.h>
 #include <linux/kallsyms.h>
 #include <linux/memory.h>
+#include <linux/swap.h>
+#include <linux/hugetlb.h>
 #include <linux/math64.h>
 #include <linux/fault-inject.h>
 
@@ -1076,14 +1078,18 @@ static inline struct page *alloc_slab_pa
 
 static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 {
-	struct page *page;
+	struct page *page = NULL;
 	struct kmem_cache_order_objects oo = s->oo;
+	unsigned long free_pages = nr_free_pages();
+	unsigned long total_pages = totalram_pages - hugetlb_total_pages();
 
 	flags |= s->allocflags;
 
-	page = alloc_slab_page(flags | __GFP_NOWARN | __GFP_NORETRY, node,
-									oo);
-	if (unlikely(!page)) {
+	if (free_pages > total_pages >> 3) {
+		page = alloc_slab_page(flags | __GFP_NOWARN | __GFP_NORETRY,
+				node, oo);
+	}
+	if (!page) {
 		oo = s->min;
 		/*
 		 * Allocation may have failed due to fragmentation.


