Date:	Fri, 1 Jun 2007 15:41:48 -0700 (PDT)
From:	Christoph Lameter <clameter@....com>
To:	Andrew Morton <akpm@...ux-foundation.org>
cc:	Jeremy Fitzhardinge <jeremy@...p.org>,
	Srinivasa Ds <srinivasa@...ibm.com>,
	linux-kernel@...r.kernel.org,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Srivatsa Vaddagiri <vatsa@...ibm.com>,
	Dinakar Guniguntala <dino@...ibm.com>, pj@....com,
	simon.derr@...l.net, clameter@...ulhu.engr.sgi.com,
	rientjes@...gle.com
Subject: Re: [RFC] [PATCH] cpuset operations causes Badness at mm/slab.c:777
 warning

On Fri, 1 Jun 2007, Andrew Morton wrote:

> > I should make SLUB put poisoning values in unused areas of a kmalloced 
> > object?
> 
> hm, I hadn't thought of it that way actually.  I was thinking it was
> specific to kmalloc(0) but as you point out, the situation is
> generalisable.

Right, it could catch a lot of other bugs as well.

> Yes, if someone does kmalloc(42) and we satisfy the allocation from the
> size-64 slab, we should poison and then check the allegedly-unused 22
> bytes.
> 
> Please ;)
> 
> (vaguely stunned that we didn't think of doing this years ago).

Well, there are architectural problems. We determine the power-of-two 
slab at compile time. The object size information is currently not 
available in the binary :=).
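
The check itself would be simple enough. Roughly something like this 
(illustration only, made-up names, not SLUB code; the hard part is 
knowing "size" at free time at all):

#include <string.h>     /* memset(), size_t */

#define POISON_INUSE 0x5a

/* poison the slack between the requested size and the slab object size */
void poison_slack(void *object, size_t size, size_t objsize)
{
        memset((char *)object + size, POISON_INUSE, objsize - size);
}

/* return nonzero if someone scribbled past the requested size */
int check_slack(const void *object, size_t size, size_t objsize)
{
        const unsigned char *p = object;
        size_t i;

        for (i = size; i < objsize; i++)
                if (p[i] != POISON_INUSE)
                        return 1;
        return 0;
}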
 
> It'll be a large patch, I expect?

Ummm... Yes. We need to switch off the compile-time power-of-two slab 
calculation. Then I need some way of storing the object size in the 
metadata of each object. That changes a lot of function calls.
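
To give an idea of what I mean by per-object metadata, here is a toy 
userspace model (made-up names and layout; the real thing would live in 
SLUB's own per-object debug area rather than behind malloc):

#include <stdlib.h>     /* malloc(), free() */
#include <string.h>     /* memset(), size_t */

#define OBJSIZE         64      /* pretend everything comes from size-64 */
#define POISON_INUSE    0x5a

/* made-up layout: [ OBJSIZE usable bytes ][ requested size ] */
void *toy_alloc(size_t size)
{
        char *object = malloc(OBJSIZE + sizeof(size_t));

        *(size_t *)(object + OBJSIZE) = size;
        /* poison the tail the caller did not ask for */
        memset(object + size, POISON_INUSE, OBJSIZE - size);
        return object;
}

/* return nonzero if the poisoned tail was dirtied before the free */
int toy_free(void *p)
{
        char *object = p;
        size_t size = *(size_t *)(object + OBJSIZE);
        size_t i;
        int bad = 0;

        for (i = size; i < OBJSIZE; i++)
                if ((unsigned char)object[i] != POISON_INUSE)
                        bad = 1;
        free(object);
        return bad;
}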

> Actually, I have this vague memory that slab would take that kmalloc(42)
> and would then return kmalloc(64)+22, so the returned memory is
> "right-aligned".  This way the existing overrun-detection is applicable to
> all kmallocs.  Maybe I dreamed it.

Yes, that would be possible by simply adding a compile-time-generated 
offset to the compile-time-generated call to kmem_cache_alloc. But then 
kfree() would have a difficult time figuring out which object to free. 
Hmmm... But I can get to the slab via the page struct, which allows me 
to figure out the power-of-two size. That would mean kfree() works with 
an arbitrary pointer anywhere into the object.
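
Roughly this kind of pointer arithmetic (toy userspace sketch, made-up 
names; it assumes 4K pages packed with naturally aligned power-of-two 
objects, which glosses over the real layout and the page struct lookup):

#include <stdint.h>     /* uintptr_t */
#include <stddef.h>     /* size_t */

#define PAGE_SIZE       4096UL
#define OBJSIZE         64UL    /* power-of-two size of this kmalloc cache */

/* hand out a pointer so that the end of the caller's area coincides
   with the end of the slab object ("right-aligned" kmalloc) */
static inline void *right_align(void *object, size_t size)
{
        return (char *)object + (OBJSIZE - size);
}

/* recover the object start from any pointer into the object; the real
   code would find the page and its power-of-two size via the page struct */
static inline void *object_start(void *p)
{
        uintptr_t addr = (uintptr_t)p;
        uintptr_t page = addr & ~(PAGE_SIZE - 1);

        return (void *)(page + ((addr - page) & ~(OBJSIZE - 1)));
}

So kmalloc(42) would hand out the object start plus 22, and kfree() 
could take any pointer into those 64 bytes and still find the object.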