Message-Id: <20070607214706.3efc5870.kamezawa.hiroyu@jp.fujitsu.com>
Date: Thu, 7 Jun 2007 21:47:06 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-kernel@...r.kernel.org, clameter@....com,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: 2.6.22-rc4-mm1
Question.
While writing memory unplug support, I noticed this code.
==
static int
fixup_anon_page(pte_t *pte, unsigned long start, unsigned long end, void *priv)
{
	struct vm_area_struct *vma = priv;
	struct page *page = vm_normal_page(vma, start, *pte);

	if (page && PageAnon(page))
		page->index = linear_page_index(vma, start);
	return 0;
}

static int fixup_anon_pages(struct vm_area_struct *vma)
{
	struct mm_walk walk = {
		.pte_entry = fixup_anon_page,
	};

	return walk_page_range(vma->vm_mm,
			       vma->vm_start, vma->vm_end, &walk, vma);
}
==
I think the 'pte' passed to fixup_anon_page() by walk_page_range()
is not guaranteed to be present.
If a non-present pte gets through, vm_normal_page() will end up calling print_bad_pte().
If this never occurs today, I'll add my own check here for kernel-driven memory migration.
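
For reference, this is a minimal sketch of the kind of guard I have in mind
(untested, and assuming the pte_entry callback signature quoted above):

==
/*
 * Sketch only: skip non-present ptes (none/swap/migration entries)
 * before handing them to vm_normal_page().
 */
static int
fixup_anon_page(pte_t *pte, unsigned long start, unsigned long end, void *priv)
{
	struct vm_area_struct *vma = priv;
	struct page *page;

	if (!pte_present(*pte))
		return 0;

	page = vm_normal_page(vma, start, *pte);
	if (page && PageAnon(page))
		page->index = linear_page_index(vma, start);
	return 0;
}
==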
(Sorry, I could not figure out who should be CCed.)
-Kame