Open Source and information security mailing list archives
Date: Wed, 1 Oct 2008 13:31:00 +0100
From: Andy Whitcroft <apw@...dowen.org>
To: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Christoph Lameter <cl@...ux-foundation.org>,
	Rik van Riel <riel@...hat.com>,
	Mel Gorman <mel@....ul.ie>,
	Andy Whitcroft <apw@...dowen.org>,
	Nick Piggin <nickpiggin@...oo.com.au>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: [PATCH 3/4] buddy: explicitly identify buddy field use in struct page

Explicitly define the struct page fields which buddy uses when it owns
pages.  Defines a new anonymous struct to allow additional fields to be
defined in a later patch.

Signed-off-by: Andy Whitcroft <apw@...dowen.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
Reviewed-by: Rik van Riel <riel@...hat.com>
Reviewed-by: Christoph Lameter <cl@...ux-foundation.org>
---
 include/linux/mm_types.h |    3 +++
 mm/internal.h            |    2 +-
 mm/page_alloc.c          |    4 ++--
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 995c588..906d8e0 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -70,6 +70,9 @@ struct page {
 #endif
 		struct kmem_cache *slab;	/* SLUB: Pointer to slab */
 		struct page *first_page;	/* Compound tail pages */
+		struct {
+			unsigned long buddy_order;	/* buddy: free page order */
+		};
 	};
 	union {
 		pgoff_t index;		/* Our offset within mapping. */
diff --git a/mm/internal.h b/mm/internal.h
index c0e4859..fcedcd0 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -58,7 +58,7 @@ extern void __free_pages_bootmem(struct page *page, unsigned int order);
 static inline unsigned long page_order(struct page *page)
 {
 	VM_BUG_ON(!PageBuddy(page));
-	return page_private(page);
+	return page->buddy_order;
 }
 
 extern int mlock_vma_pages_range(struct vm_area_struct *vma,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 921c435..3a646e3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -331,7 +331,7 @@ static inline void prep_zero_page(struct page *page, int order, gfp_t gfp_flags)
 
 static inline void set_page_order(struct page *page, int order)
 {
-	set_page_private(page, order);
+	page->buddy_order = order;
 	__SetPageBuddy(page);
 #ifdef CONFIG_PAGE_OWNER
 	page->order = -1;
@@ -341,7 +341,7 @@ static inline void set_page_order(struct page *page, int order)
 static inline void rmv_page_order(struct page *page)
 {
 	__ClearPageBuddy(page);
-	set_page_private(page, 0);
+	page->buddy_order = 0;
 }
 
 /*
-- 
1.6.0.1.451.gc8d31

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/