Message-ID: <alpine.DEB.2.00.1009011801050.24694@chino.kir.corp.google.com>
Date: Wed, 1 Sep 2010 18:03:20 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
cc: Anton Altaparmakov <aia21@...tab.net>,
linux-ntfs-dev@...ts.sourceforge.net, linux-kernel@...r.kernel.org
Subject: [patch v2 5/5] ntfs: remove dependency on __GFP_NOFAIL
Reimplement ntfs_malloc_nofs_nofail() to loop forever calling
ntfs_malloc_nofs() until the allocation succeeds.
If an allocation attempt fails for a request large enough that the page
allocator does not implicitly loop (i.e. its order exceeds
PAGE_ALLOC_COSTLY_ORDER), a warning, including a call trace, is emitted
once; the warning is suppressed on subsequent failures.
This was added as a helper function for documentation and auditability.
No future callers should be added.
Signed-off-by: David Rientjes <rientjes@...gle.com>
---
fs/ntfs/malloc.h | 12 ++++++++++--
1 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/fs/ntfs/malloc.h b/fs/ntfs/malloc.h
--- a/fs/ntfs/malloc.h
+++ b/fs/ntfs/malloc.h
@@ -76,11 +76,19 @@ static inline void *ntfs_malloc_nofs(unsigned long size)
* This function guarantees that the allocation will succeed. It will sleep
* for as long as it takes to complete the allocation.
*
- * If there was insufficient memory to complete the request, return NULL.
+ * NOTE: no new callers of this function should be implemented!
+ * All memory allocations should be failable whenever possible.
*/
static inline void *ntfs_malloc_nofs_nofail(unsigned long size)
{
- return __ntfs_malloc(size, GFP_NOFS | __GFP_HIGHMEM | __GFP_NOFAIL);
+ void *ret;
+
+ for (;;) {
+ ret = ntfs_malloc_nofs(size);
+ if (ret)
+ return ret;
+ WARN_ON_ONCE(get_order(size) > PAGE_ALLOC_COSTLY_ORDER);
+ }
}
static inline void ntfs_free(void *addr)
--