Message-ID: <6d6a94c50609130021q5e9071a1n5c9b6036f6037b28@mail.gmail.com>
Date:	Wed, 13 Sep 2006 15:21:47 +0800
From:	Aubrey <aubreylee@...il.com>
To:	"Nick Piggin" <nickpiggin@...oo.com.au>
Cc:	"David Howells" <dhowells@...hat.com>,
	"Matt Mackall" <mpm@...enic.com>, linux-kernel@...r.kernel.org,
	davidm@...pgear.com, gerg@...pgear.com
Subject: Re: kernel BUGs when removing largish files with the SLOB allocator

I've updated my patch to incorporate some of your suggestions. In any case, my system
works properly with this patch applied, while the current kernel does not.
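
For reference, the idea of the patch below is that SLOB's big-block path hands out
pages from the page allocator without setting PG_slab, and the fix is to set that
flag on every page of a block at allocation time and clear it again on free,
presumably so that code elsewhere that tests PageSlab() treats these pages
consistently. A minimal sketch of that pattern is below; the helper names
(slob_mark_pages, slob_unmark_pages, slob_big_alloc, slob_big_free) are
illustrative only and do not appear in the patch itself.

/*
 * Illustrative sketch only, not part of the patch below: mark every page of
 * an order-N block with PG_slab when it is handed out, and clear the flag
 * again before the pages go back to the page allocator, mirroring what the
 * kmalloc()/kfree() hunks in the patch do.
 */
#include <linux/mm.h>
#include <linux/page-flags.h>

static void slob_mark_pages(struct page *page, int order)
{
	int i;

	for (i = 0; i < (1 << order); i++)
		SetPageSlab(page + i);
}

static void slob_unmark_pages(struct page *page, int order)
{
	int i;

	for (i = 0; i < (1 << order); i++)
		ClearPageSlab(page + i);
}

static void *slob_big_alloc(gfp_t gfp, int order)
{
	struct page *page = alloc_pages(gfp, order);

	if (!page)
		return NULL;
	slob_mark_pages(page, order);
	return page_address(page);
}

static void slob_big_free(void *block, int order)
{
	slob_unmark_pages(virt_to_page(block), order);
	free_pages((unsigned long)block, order);
}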

From: Aubrey.Li <aubrey@...ux-suse.ADI>
Date: Wed, 13 Sep 2006 15:01:23 +0800
Subject: [PATCH] Make the SLOB allocator mark its pages with PG_slab.
Signed-off-by: Aubrey.Li <aubrey.adi@...il.com>
---
 mm/slob.c |   20 ++++++++++++++------
 1 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/mm/slob.c b/mm/slob.c
index 7b52b20..05f0d16 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -109,6 +109,7 @@ static void *slob_alloc(size_t size, gfp

 			slob_free(cur, PAGE_SIZE);
 			spin_lock_irqsave(&slob_lock, flags);
+			SetPageSlab(virt_to_page(cur));
 			cur = slobfree;
 		}
 	}
@@ -173,12 +174,17 @@ void *kmalloc(size_t size, gfp_t gfp)
 		return 0;

 	bb->order = find_order(size);
-	bb->pages = (void *)__get_free_pages(gfp, bb->order);
+	page = alloc_pages(gfp, bb->order);
+	bb->pages = page_address(page);

 	if (bb->pages) {
 		spin_lock_irqsave(&block_lock, flags);
 		bb->next = bigblocks;
 		bigblocks = bb;
+		for (i = 0; i < (1 << bb->order); i++) {
+			SetPageSlab(page);
+			page++;
+		}
 		spin_unlock_irqrestore(&block_lock, flags);
 		return bb->pages;
 	}
@@ -202,8 +208,15 @@ void kfree(const void *block)
 		spin_lock_irqsave(&block_lock, flags);
 		for (bb = bigblocks; bb; last = &bb->next, bb = bb->next) {
 			if (bb->pages == block) {
+				struct page *page = virt_to_page(bb->pages);
+				int i;
+
 				*last = bb->next;
 				spin_unlock_irqrestore(&block_lock, flags);
+				for (i = 0; i < (1 << bb->order); i++) {
+					ClearPageSlab(page);
+					page++;
+				}
 				free_pages((unsigned long)block, bb->order);
 				slob_free(bb, sizeof(bigblock_t));
 				return;
@@ -332,11 +345,6 @@ static struct timer_list slob_timer = TI

 void kmem_cache_init(void)
 {
-	void *p = slob_alloc(PAGE_SIZE, 0, PAGE_SIZE-1);
-
-	if (p)
-		free_page((unsigned long)p);
-
 	mod_timer(&slob_timer, jiffies + HZ);
 }

-- 
1.4.0

