Date:	Tue, 23 Dec 2008 00:35:12 +0000 (GMT)
From:	Hugh Dickins <hugh@...itas.com>
To:	Yuri Tikhonov <yur@...raft.com>
cc:	linux-kernel@...r.kernel.org, linuxppc-dev@...abs.org,
	akpm@...ux-foundation.org, paulus@...ba.org, wd@...x.de,
	dzu@...x.de, yanok@...raft.com, miltonm@....com,
	viro@...iv.linux.org.uk, Geert.Uytterhoeven@...ycom.com
Subject: Re: [PATCH] mm/shmem.c: fix division by zero

On Fri, 19 Dec 2008, Yuri Tikhonov wrote:
> 
> The following patch fixes a division by zero which we hit in
> shmem_truncate_range() and shmem_unuse_inode() when big
> PAGE_SIZE values are used (e.g. 256KB on ppc44x).
> 
> With a 256KB PAGE_SIZE the ENTRIES_PER_PAGEPAGE constant becomes
> too large for a 32-bit unsigned long (0x1.0000.0000), so this patch
> just changes the types from 'ulong' to 'ullong' where necessary.
> 
> Signed-off-by: Yuri Tikhonov <yur@...raft.com>
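
For context, the overflow that zeroes ENTRIES_PER_PAGEPAGE can be
reproduced with the minimal userspace sketch below; it is an
illustration, not the kernel code itself.  The macro definitions mirror
those in mm/shmem.c of that era, and the uint32_t "kulong" stands in
for a 32-bit unsigned long so the wrap shows up even when built on a
64-bit host.

/* Minimal sketch: reproduce the ENTRIES_PER_PAGEPAGE wrap seen with
 * 256KB pages on a 32-bit build.  "kulong" stands in for a 32-bit
 * unsigned long. */
#include <stdio.h>
#include <stdint.h>

typedef uint32_t kulong;

#define PAGE_CACHE_SIZE      ((kulong)(256 * 1024))  /* 256KB pages, as on ppc44x */
#define ENTRIES_PER_PAGE     (PAGE_CACHE_SIZE / (kulong)sizeof(kulong))
#define ENTRIES_PER_PAGEPAGE (ENTRIES_PER_PAGE * ENTRIES_PER_PAGE)

int main(void)
{
	printf("ENTRIES_PER_PAGE     = %u\n", (unsigned)ENTRIES_PER_PAGE);      /* 65536 */
	printf("ENTRIES_PER_PAGEPAGE = %u\n", (unsigned)ENTRIES_PER_PAGEPAGE);  /* 65536 * 65536 wraps to 0 */
	/* shmem then divides indices by ENTRIES_PER_PAGEPAGE, hence the
	 * division by zero in shmem_truncate_range() and
	 * shmem_unuse_inode(). */
	return 0;
}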

Sorry for the slow reply, but I'm afraid I don't like spattering
around an increasing number of unsigned long longs to fix that division
by zero on an unusual configuration: I doubt that's the right solution.

It's ridiculous for shmem.c to be trying to support a wider address
range than the page cache itself can support, and it's wasteful for
it to be using 256KB pages for its index blocks (not to mention its
data blocks! but we'll agree to differ on that).

Maybe it should be doing a kmalloc(4096) instead of using alloc_pages();
though admittedly that's not a straightforward change, since we do make
use of highmem and page->private.  Maybe I should use this as stimulus
to switch shmem over to storing its swap entries in the pagecache radix
tree.  Maybe we should simply disable its use of swap in such an
extreme configuration.

But I need to understand more about your ppc44x target to make
the right decision.  What's very strange to me is this: since
unsigned long long is the same size as unsigned long on 64-bit,
this change appears to be for a 32-bit machine with 256KB pages.
I wonder what market segment that is targeted at?
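
To make the 64-bit point concrete: on an LP64 build (the usual 64-bit
Linux ABI) unsigned long is already 64 bits wide, the same size as
unsigned long long, so the ulong-to-ullong change can only alter
behaviour on a 32-bit build.  A trivial check, for illustration only:

#include <stdio.h>

int main(void)
{
	/* LP64 (64-bit Linux): both lines print 8, so the type change is
	 * a no-op there.  ILP32 (32-bit): 4 and 8, which is the only case
	 * where the wrap to zero can happen. */
	printf("sizeof(unsigned long)      = %zu\n", sizeof(unsigned long));
	printf("sizeof(unsigned long long) = %zu\n", sizeof(unsigned long long));
	return 0;
}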

Hugh
