Message-ID: <20090211142004.GB25799@csn.ul.ie>
Date: Wed, 11 Feb 2009 14:20:04 +0000
From: Mel Gorman <mel@....ul.ie>
To: Andy Whitcroft <apw@...onical.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Hugh Dickins <hugh@...itas.com>,
Lee Schermerhorn <Lee.Schermerhorn@...com>,
Greg KH <gregkh@...e.de>,
Maksim Yevmenkin <maksim.yevmenkin@...il.com>,
Nick Piggin <npiggin@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>,
will@...wder-design.com, Rik van Riel <riel@...hat.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Miklos Szeredi <miklos@...redi.hu>, wli@...ementarian.org
Subject: Re: [PATCH] Do not account for the address space used by hugetlbfs
using VM_ACCOUNT V2 (Was Linus 2.6.29-rc4)
On Wed, Feb 11, 2009 at 12:03:17PM +0000, Andy Whitcroft wrote:
> > <SNIP>
> >
> > Yes, and this was a mistake. For noreserve mappings, we may now be taking
> > twice the amount of quota and probably leaking it. This is wrong and I need
> > to move the check for quota below the check for VM_NORESERVE. Good spot.
>
> Thanks.
>
How about this?
=====
[PATCH] Do not account for hugetlbfs quota at mmap() time if mapping *_NORESERVE
Commit 5a6fe125950676015f5108fb71b2a67441755003 brought hugetlbfs more in line
with the core VM by obeying VM_NORESERVE and not reserving hugepages for both
shared and private mappings when [SHM|MAP]_NORESERVE are specified. However,
it still takes the filesystem quota unconditionally at mmap() time, which
leads to double accounting.
At fault time, if there are no reserves, an attempt is made to allocate the
page and to account for filesystem quota. If either fails, the fault fails.
This patch prevents quota from being taken when [SHM|MAP]_NORESERVE is
specified.
To help prevent this mistake from happening again, the patch also improves the
documentation of hugetlb_reserve_pages(). A rough userspace sketch that
exercises the NORESERVE path is appended after the patch.
Reported-by: Andy Whitcroft <apw@...onical.com>
Signed-off-by: Mel Gorman <mel@....ul.ie>
---
mm/hugetlb.c | 29 +++++++++++++++++++++++------
1 file changed, 23 insertions(+), 6 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2074642..b0b63cd 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2289,24 +2289,41 @@ int hugetlb_reserve_pages(struct inode *inode,
if (chg < 0)
return chg;
- if (hugetlb_get_quota(inode->i_mapping, chg))
- return -ENOSPC;
-
/*
- * Only apply hugepage reservation if asked. We still have to
- * take the filesystem quota because it is an upper limit
- * defined for the mount and not necessarily memory as a whole
+ * Only apply hugepage reservation if asked. At fault time, an
+ * attempt will be made for VM_NORESERVE to allocate a page
+ * and filesystem quota without using reserves
*/
if (acctflag & VM_NORESERVE) {
reset_vma_resv_huge_pages(vma);
return 0;
}
+ /* There must be enough filesystem quota for the mapping */
+ if (hugetlb_get_quota(inode->i_mapping, chg))
+ return -ENOSPC;
+
+ /*
+ * Check enough hugepages are available for the reservation.
+ * Hand back the quota if there are not
+ */
ret = hugetlb_acct_memory(h, chg);
if (ret < 0) {
hugetlb_put_quota(inode->i_mapping, chg);
return ret;
}
+
+ /*
+ * Account for the reservations made. Shared mappings record regions
+ * that have reservations as they are shared by multiple VMAs.
+ * When the last VMA disappears, the region map says how much
+ * the reservation was and the page cache tells how much of
+ * the reservation was consumed. Private mappings are per-VMA and
+ * only the consumed reservations are tracked. When the VMA
+ * disappears, the original reservation is the VMA size and the
+ * consumed reservations are stored in the map. Here, we just need
+ * to allocate the region map.
+ */
if (!vma || vma->vm_flags & VM_SHARED)
region_add(&inode->i_mapping->private_list, from, to);
else {
--
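For reference, the NORESERVE path described above can be exercised from
userspace with something along these lines. This is only a rough sketch and
not part of the patch; the hugetlbfs mount point, file name and huge page
size are assumptions and need adjusting for the test machine.

/*
 * Rough sketch: map one huge page SHARED|NORESERVE from a hugetlbfs file
 * and fault it in. With the fix, filesystem quota should only be charged
 * at fault time, not again at mmap() time.
 *
 * Assumptions: hugetlbfs mounted at /mnt/huge and 2M huge pages.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE_SIZE	(2UL * 1024 * 1024)	/* assumed huge page size */

int main(void)
{
	const char *path = "/mnt/huge/noreserve-test";	/* assumed mount */
	char *addr;
	int fd;

	fd = open(path, O_CREAT | O_RDWR, 0600);
	if (fd < 0) {
		perror("open");
		return EXIT_FAILURE;
	}

	/* No hugepage reservation is made for this mapping at mmap() time */
	addr = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
		    MAP_SHARED | MAP_NORESERVE, fd, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");
		close(fd);
		unlink(path);
		return EXIT_FAILURE;
	}

	/*
	 * Fault the page in. Quota is charged here rather than at mmap()
	 * time; if it cannot be charged, the fault fails (SIGBUS).
	 */
	memset(addr, 0, HPAGE_SIZE);

	munmap(addr, HPAGE_SIZE);
	close(fd);
	unlink(path);
	return EXIT_SUCCESS;
}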