Message-ID: <Pine.LNX.4.64.0611241924110.17508@blonde.wat.veritas.com>
Date: Fri, 24 Nov 2006 19:57:59 +0000 (GMT)
From: Hugh Dickins <hugh@...itas.com>
To: Mel Gorman <mel@...net.ie>
cc: Linus Torvalds <torvalds@...l.org>,
Christoph Lameter <clameter@....com>, linux-mm@...ck.org,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/11] Add __GFP_MOVABLE flag and update callers
On Fri, 24 Nov 2006, Mel Gorman wrote:
>
> This is what the (compile-tested-only on x86) patch looks like for
> GFP_HIGH_MOVABLE. The remaining in-tree GFP_HIGHUSER users are infiniband,
> kvm, ncpfs, nfs, pipes (possibly the most frequent user), m68knommu, hugepages
> and kexec.
>
> Signed-off-by: Mel Gorman <mel@....ul.ie>
You need to add in something like the patch below (mutatis mutandis
for whichever approach you end up taking): tmpfs uses highmem pages
for its swap vector blocks, noting where on swap the data pages are,
and allocates them with mapping_gfp_mask(inode->i_mapping); but we
don't have any mechanism in place for reclaiming or migrating those.
(We could add that; but there might be better things to do instead.
I've often wanted to remove that whole layer from tmpfs, and note
swap entries in the pagecache's radix-tree slots instead - but that
does then lock them into low memory. Hum, haw, never decided.)
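[For context, the layering in question looks roughly like this; field
names as in 2.6.19's shmem_inode_info, other members elided:

/*
 * Sketch of the tmpfs swap vector: each shmem inode notes where its
 * data pages went on swap, first in a small direct array, overflowing
 * into indirect blocks kept in the highmem pages which
 * shmem_dir_alloc() allocates.
 */
struct shmem_inode_info {
	spinlock_t	lock;
	/* ... */
	struct page	*i_indirect;	/* top indirect blocks page */
	swp_entry_t	i_direct[SHMEM_NR_DIRECT];	/* first blocks */
	/* ... */
};
]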
You can certainly be forgiven for missing that, and may well wonder
why it doesn't just use GFP_HIGHUSER explicitly: because the loop
driver may be on top of that tmpfs file, masking off __GFP_IO and
__GFP_FS: the swap vector blocks should be allocated with the same
restrictions as the data pages.
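[Concretely, this is roughly what loop_set_fd() in drivers/block/loop.c
does at bind time; a sketch from the 2.6-era source, variable names as
there:

/* Save the backing mapping's gfp mask, then clear __GFP_IO and
 * __GFP_FS from it: page allocations for that mapping must never
 * recurse into I/O or filesystem code beneath the loop device. */
lo->old_gfp_mask = mapping_gfp_mask(mapping);
mapping_set_gfp_mask(mapping, lo->old_gfp_mask & ~(__GFP_IO|__GFP_FS));
]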
Excuse me for moving the __GFP_ZERO too: I think it's tidier to
do them both within the little helper function.
Hugh
--- 2.6.19-rc5-mm2/mm/shmem.c	2006-11-14 09:58:21.000000000 +0000
+++ linux/mm/shmem.c	2006-11-24 19:22:30.000000000 +0000
@@ -94,7 +94,8 @@ static inline struct page *shmem_dir_all
 	 * BLOCKS_PER_PAGE on indirect pages, assume PAGE_CACHE_SIZE:
 	 * might be reconsidered if it ever diverges from PAGE_SIZE.
 	 */
-	return alloc_pages(gfp_mask, PAGE_CACHE_SHIFT-PAGE_SHIFT);
+	return alloc_pages((gfp_mask & ~__GFP_MOVABLE) | __GFP_ZERO,
+			   PAGE_CACHE_SHIFT-PAGE_SHIFT);
 }
 
 static inline void shmem_dir_free(struct page *page)
@@ -372,7 +373,7 @@ static swp_entry_t *shmem_swp_alloc(stru
 	}
 	spin_unlock(&info->lock);
 
-	page = shmem_dir_alloc(mapping_gfp_mask(inode->i_mapping) | __GFP_ZERO);
+	page = shmem_dir_alloc(mapping_gfp_mask(inode->i_mapping));
 	if (page)
 		set_page_private(page, 0);
 	spin_lock(&info->lock);
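[For reference, the whole helper as it reads with this patch applied;
body reconstructed from 2.6.19 mm/shmem.c plus the hunk above, second
comment added here by way of explanation:

static inline struct page *shmem_dir_alloc(gfp_t gfp_mask)
{
	/*
	 * The above definition of ENTRIES_PER_PAGE, and the use of
	 * BLOCKS_PER_PAGE on indirect pages, assume PAGE_CACHE_SIZE:
	 * might be reconsidered if it ever diverges from PAGE_SIZE.
	 */
	/* Swap vector blocks are never reclaimed or migrated, so strip
	 * __GFP_MOVABLE; zero them here rather than at each caller. */
	return alloc_pages((gfp_mask & ~__GFP_MOVABLE) | __GFP_ZERO,
			   PAGE_CACHE_SHIFT-PAGE_SHIFT);
}
]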