Message-ID: <Pine.LNX.4.64.0706250607260.5191@schroedinger.engr.sgi.com>
Date:	Mon, 25 Jun 2007 07:07:53 -0700 (PDT)
From:	Christoph Lameter <clameter@....com>
To:	Hugh Dickins <hugh@...itas.com>
cc:	Russell King <rmk+lkml@....linux.org.uk>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Nicolas Ferre <nicolas.ferre@....atmel.com>,
	ARM Linux Mailing List 
	<linux-arm-kernel@...ts.arm.linux.org.uk>,
	Linux Kernel list <linux-kernel@...r.kernel.org>,
	Marc Pignat <marc.pignat@...s.ch>,
	Andrew Victor <andrew@...people.com>,
	Pierre Ossman <drzeus@...eus.cx>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: Oops in a driver while using SLUB as a SLAB allocator

On Mon, 25 Jun 2007, Hugh Dickins wrote:

> And I now rather think that needs to stay, not be replaced by the
> VM_BUG_ON Christoph was proposing for 2.6.23 (which earlier I acked).
> 
> Christoph responded to my page_mapping patch by looking at arch/arm,
> and there finding a kmalloc in dma_alloc_coherent which he didn't
> like; but you're right, it's entirely irrelevant to Nicolas' oops.
> 
> The slub allocation which gives rise to Nicolas' oops is not in
> ARM, but (I'm guessing) in drivers/mmc/core/sd.c: one of those
> 	status = kmalloc(64, GFP_KERNEL);
> where status is passed down for the response from mmc_sd_switch.
>
> And what is wrong with using kmalloc there?
> Why should that be changed to allocate a whole page?
> How many other such cases might there be?

So someone effectively does a flush_dcache_page(virt_to_page(status))?

> In the kmalloc case it's not mapped into userspace: flush_dcache_page
> should detect that and do nothing, as it does with slab; but slub was
> reusing page->mapping for something else, so we oopsed.

If that is the case, then what we really want is a flush_dcache_range,
not the above. flush_dcache_range does not take a page struct as an
argument, and it will work on memory that has no struct page backing it.
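
Roughly, what I have in mind for the guessed call site is something like
this (untested sketch; the flush_dcache_range(start, end) signature
assumed here is the powerpc-style one, and the surrounding code is only
illustrative, not the actual mmc code):

	u8 *status = kmalloc(64, GFP_KERNEL);
	if (!status)
		return -ENOMEM;

	/* ... the card writes the switch response into the buffer ... */

	/* Old way, oopses on SLUB because page->mapping is reused: */
	/* flush_dcache_page(virt_to_page(status)); */

	/* Flush exactly the object instead, no struct page involved: */
	flush_dcache_range((unsigned long)status,
			   (unsigned long)status + 64);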

Is flush_dcache_range available on all platforms? I see some drivers
using it:

drivers/net/fec.c
drivers/serial/mpsc.c
drivers/char/agp/uninorth-agp.c


flush_dcache_page is implemented by

sparc64		Uses mapping
sh		Ok. Only uses PG_mapped
arm		Uses mapping in the mmu case
frv		Does a kmap_atomic ?? Otherwise looks ok.
ppc		Clears PG_arch_1
mips		Uses mapping
sh64		No page struct use
parisc		Uses mapping
xtensa		Uses mapping
powerpc		Handles page flags PG_arch_1
ia64		Clears PG_arch_1
sparc		Calculates address based on page struct addr.
blackfin	Does an immediate page_address(page)
m68k		Does an immediate page_address(page)

In many situations the page struct passed to flush_dcache_page is
simply used to calculate the virtual address, so it is mostly harmless.
Trouble starts when page attributes such as the mapping are used.
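
To make the distinction concrete, the two styles look roughly like this
(sketch only; the low-level helpers and the PG_dcache_dirty handling are
placeholders modelled on the arm implementation, not copied from any
tree):

	/* Harmless style: page struct only used to get the address. */
	void flush_dcache_page(struct page *page)
	{
		/* flush_dcache_virt() stands in for the arch's
		   address-based flush primitive. */
		flush_dcache_virt(page_address(page), PAGE_SIZE);
	}

	/* Problem style: interprets page->mapping, which SLUB reuses. */
	void flush_dcache_page(struct page *page)
	{
		struct address_space *mapping = page_mapping(page);

		if (mapping && !mapping_mapped(mapping))
			set_bit(PG_dcache_dirty, &page->flags);
		else
			__flush_dcache_page(mapping, page);
	}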

So the problem platforms are

sparc64 arm mips parisc xtensa

If we indeed do these weird things, then I think the general fix should
be to use flush_dcache_range(), but that is too late for 2.6.22. The
VM_BUG_ON will be useful to detect these scenarios. Maybe we need
to replace it with a WARN_ON or something if the usage is frequent?
There are a large number of platforms on which flush_dcache_range has
no effect or an effect that is negligible.
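
For illustration (from memory, so the surrounding code may not match the
tree exactly), the softer check in page_mapping() would be along these
lines:

	static inline struct address_space *page_mapping(struct page *page)
	{
		struct address_space *mapping = page->mapping;

		/* Warn instead of BUG when a slab page is handed in. */
		WARN_ON(PageSlab(page));

		if (unlikely(PageSwapCache(page)))
			mapping = &swapper_space;
		else if (unlikely((unsigned long)mapping & PAGE_MAPPING_ANON))
			mapping = NULL;
		return mapping;
	}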

A kmalloc slab object (even a 64 byte one) may cross a page boundary
with an ARCH_KMALLOC_MINALIGN of 4 or 8. So I think that
flush_dcache_range *must* be used rather than flush_dcache_page.
flush_dcache_page(virt_to_page(object)) takes the starting address of
the object and flushes the page in which the object starts, which may
not cover the complete object. This usually works fine with 64 byte
objects because they fit neatly into a slab page. However, if
CONFIG_SLAB_DEBUG is enabled, for example, the alignment will no longer
be to a 64 byte boundary but only to the alignment guaranteed by
ARCH_KMALLOC_MINALIGN. If this trick is used on a non-kmalloc cache
with a non-power-of-two size then the chance of trouble occurring is
even larger.
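
To put numbers on it (made up, assuming a PAGE_SIZE of 4096 and an
ARCH_KMALLOC_MINALIGN of 8):

	/*
	 * With debugging padding in front, a 64 byte object may start at
	 * page_base + 4064 and end at page_base + 4128, i.e. the last 32
	 * bytes live on the next page.  flush_dcache_page(virt_to_page(object))
	 * flushes only the first page and misses those 32 bytes; a range
	 * flush covers the whole object:
	 */
	flush_dcache_range((unsigned long)object,
			   (unsigned long)object + size);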

For 2.6.22 the easiest solution may be to check for PageSlab in the
flush_dcache_page implementations of the affected platforms and then
count on users not enabling any slab debugging. It's then simply the
same state as before.
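
As a sketch, on one of the affected platforms that would amount to
something like this (the rest of the function elided):

	void flush_dcache_page(struct page *page)
	{
		struct address_space *mapping;

		/*
		 * SLUB reuses page->mapping for slab pages, so do not
		 * interpret it.  Treating the mapping as NULL restores the
		 * pre-SLUB state, in which slab pages had a NULL mapping.
		 */
		if (PageSlab(page))
			mapping = NULL;
		else
			mapping = page_mapping(page);

		/* ... existing mapping-based flush logic ... */
	}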