lists.openwall.net: Open Source and information security mailing list archives
Date: Tue, 6 Jul 2010 11:00:27 -0500 (CDT)
From: Christoph Lameter <cl@...ux-foundation.org>
To: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
cc: Andi Kleen <andi@...stfloor.org>, Andrew Morton <akpm@...ux-foundation.org>,
	Mel Gorman <mel@....ul.ie>, Wu Fengguang <fengguang.wu@...el.com>,
	"Jun'ichi Nomura" <j-nomura@...jp.nec.com>, linux-mm <linux-mm@...ck.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 6/7] hugetlb: hugepage migration core

On Fri, 2 Jul 2010, Naoya Horiguchi wrote:

> --- v2.6.35-rc3-hwpoison/mm/migrate.c
> +++ v2.6.35-rc3-hwpoison/mm/migrate.c
> @@ -32,6 +32,7 @@
>  #include <linux/security.h>
>  #include <linux/memcontrol.h>
>  #include <linux/syscalls.h>
> +#include <linux/hugetlb.h>
>  #include <linux/gfp.h>
>
>  #include "internal.h"
> @@ -74,6 +75,8 @@ void putback_lru_pages(struct list_head *l)
>  	struct page *page2;
>
>  	list_for_each_entry_safe(page, page2, l, lru) {
> +		if (PageHuge(page))
> +			break;
>  		list_del(&page->lru);

Argh. Hugepages in putback_lru_pages()? Huge pages are not on the LRU.
Come up with something cleaner here.

> @@ -267,7 +284,14 @@ static int migrate_page_move_mapping(struct address_space *mapping,
>  	 * Note that anonymous pages are accounted for
>  	 * via NR_FILE_PAGES and NR_ANON_PAGES if they
>  	 * are mapped to swap space.
> +	 *
> +	 * Do not account hugepages here for now because hugepages
> +	 * have a separate accounting rule.
>  	 */
> +	if (PageHuge(newpage)) {
> +		spin_unlock_irq(&mapping->tree_lock);
> +		return 0;
> +	}
>  	__dec_zone_page_state(page, NR_FILE_PAGES);
>  	__inc_zone_page_state(newpage, NR_FILE_PAGES);
>  	if (PageSwapBacked(page)) {

This looks wrong here. Too much special-casing added to basic migration
functionality.
> @@ -284,7 +308,17 @@ static int migrate_page_move_mapping(struct address_space *mapping,
>   */
>  static void migrate_page_copy(struct page *newpage, struct page *page)
>  {
> -	copy_highpage(newpage, page);
> +	int i;
> +	struct hstate *h;
> +
> +	if (!PageHuge(newpage))
> +		copy_highpage(newpage, page);
> +	else {
> +		h = page_hstate(newpage);
> +		for (i = 0; i < pages_per_huge_page(h); i++) {
> +			cond_resched();
> +			copy_highpage(newpage + i, page + i);
> +		}
> +	}
>
>  	if (PageError(page))
>  		SetPageError(newpage);

Could you generalize this for migrating an order-N page?

> @@ -718,6 +752,11 @@ unlock:
>  	put_page(page);
>
>  	if (rc != -EAGAIN) {
> +		if (PageHuge(newpage)) {
> +			put_page(newpage);
> +			goto out;
> +		}
> +

I don't like this kind of inconsistency with the refcounting. Page
migration is complicated enough already.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/