Message-ID: <Pine.LNX.4.64.0904062151250.21044@blonde.anvils>
Date: Mon, 6 Apr 2009 21:54:54 +0100 (BST)
From: Hugh Dickins <hugh@...itas.com>
To: Andrew Morton <akpm@...ux-foundation.org>
cc: Yuri Tikhonov <yur@...raft.com>, linux-kernel@...r.kernel.org
Subject: [PATCH 1/3] shmem: fix division by zero
From: Yuri Tikhonov <yur@...raft.com>
Fix a division by zero hit in shmem_truncate_range() and
shmem_unuse_inode() when using large PAGE_SIZE values (e.g. 256kB on ppc44x).

With a 256kB PAGE_SIZE, the ENTRIES_PER_PAGEPAGE constant becomes too large
for 32 bits (0x1.0000.0000), so it wraps to zero on a 32-bit kernel and the
later divisions by it fault. This patch just widens its arithmetic
from 'unsigned long' to 'unsigned long long'.
Hugh: reverted its unsigned long longs in shmem_truncate_range() and
shmem_getpage(): the pagecache index cannot exceed an unsigned long, so
those divisions by zero occurred only in unreachable code. It's a pity we
need any ULL arithmetic here, but I found no pretty way to avoid it.
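
For reference, a minimal user-space sketch of the overflow (not part of the
patch; the 4-byte 'unsigned long' and the 256kB PAGE_CACHE_SIZE are assumed
here to model a 32-bit ppc44x kernel):

#include <stdio.h>
#include <stdint.h>

/* Model a 32-bit kernel, where sizeof(unsigned long) == 4 */
#define PAGE_CACHE_SIZE		(256u * 1024)		/* assumed: 256kB pages (ppc44x) */
#define ENTRIES_PER_PAGE	(PAGE_CACHE_SIZE / 4)	/* 65536 entries per page */

int main(void)
{
	/* Old macro: 32-bit by 32-bit multiply, 0x1.0000.0000 wraps to 0 */
	uint32_t broken = (uint32_t)ENTRIES_PER_PAGE * (uint32_t)ENTRIES_PER_PAGE;

	/* Patched macro: one operand widened, so the multiply is done in 64 bits */
	uint64_t fixed = (uint64_t)ENTRIES_PER_PAGE * ENTRIES_PER_PAGE;

	printf("32-bit product: %u\n", (unsigned)broken);	/* 0: later /ENTRIES_PER_PAGEPAGE faults */
	printf("64-bit product: %llu\n", (unsigned long long)fixed);	/* 4294967296 */
	return 0;
}
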
Signed-off-by: Yuri Tikhonov <yur@...raft.com>
Signed-off-by: Hugh Dickins <hugh@...itas.com>
---
mm/shmem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- 2.6.29-git13/mm/shmem.c 2009-04-06 11:48:55.000000000 +0100
+++ 2.6.29-git13+/mm/shmem.c 2009-04-06 15:33:09.000000000 +0100
@@ -66,7 +66,7 @@ static struct vfsmount *shm_mnt;
 #include <asm/pgtable.h>
 
 #define ENTRIES_PER_PAGE (PAGE_CACHE_SIZE/sizeof(unsigned long))
-#define ENTRIES_PER_PAGEPAGE (ENTRIES_PER_PAGE*ENTRIES_PER_PAGE)
+#define ENTRIES_PER_PAGEPAGE ((unsigned long long)ENTRIES_PER_PAGE*ENTRIES_PER_PAGE)
 #define BLOCKS_PER_PAGE (PAGE_CACHE_SIZE/512)
 
 #define SHMEM_MAX_INDEX (SHMEM_NR_DIRECT + (ENTRIES_PER_PAGEPAGE/2) * (ENTRIES_PER_PAGE+1))
--