Message-ID: <alpine.DEB.2.02.1208091406590.20908@greybox.home>
Date:	Thu, 9 Aug 2012 14:08:10 -0500 (CDT)
From:	"Christoph Lameter (Open Source)" <cl@...ux.com>
To:	Shuah Khan <shuah.khan@...com>
cc:	penberg@...nel.org, glommer@...allels.com, js1304@...il.com,
	David Rientjes <rientjes@...gle.com>, linux-mm@...ck.org,
	LKML <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	shuahkhan@...il.com
Subject: Re: [PATCH v2] mm: Restructure kmem_cache_create() to move debug
 cache integrity checks into a new function

On Thu, 9 Aug 2012, Shuah Khan wrote:

> Moving these checks into kmem_cache_sanity_check() would mean the return
> path handling changes. The first block of sanity checks (name, size,
> etc.) is done before taking slab_mutex, and the second block, which
> checks the slab lists, is done while holding the mutex. Depending on
> which one fails, the return handling differs: if the second block fails,
> the mutex needs to be unlocked, whereas if the first block fails there is
> no need to do that. Nothing too complex to solve, just something that
> needs to be handled.

Right. Taking the mutex etc. does not depend on the parameters at all. So
it's possible, and it's rather simple.
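
To make that concrete, here is a rough sketch (not the submitted patch;
the checks and error codes are only illustrative) of option 2, with both
blocks folded into a single kmem_cache_sanity_check() that is called with
slab_mutex already held, so there is only one failure path for the caller
to unwind:

#ifdef CONFIG_DEBUG_VM
static int kmem_cache_sanity_check(const char *name, size_t size)
{
	struct kmem_cache *s;

	/* First block: parameter checks, no slab list access needed. */
	if (!name || in_interrupt() || size < sizeof(void *) ||
	    size > KMALLOC_MAX_SIZE) {
		pr_err("kmem_cache_create(%s) integrity check failed\n", name);
		return -EINVAL;
	}

	/* Second block: walk slab_caches, which requires slab_mutex. */
	list_for_each_entry(s, &slab_caches, list) {
		if (!strcmp(s->name, name)) {
			pr_err("kmem_cache_create(%s): cache already exists\n",
			       name);
			return -EEXIST;
		}
	}
	return 0;
}
#else
static inline int kmem_cache_sanity_check(const char *name, size_t size)
{
	return 0;
}
#endif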

> Comments/thoughts on:
>
> 1. just removing size from the kmem_cache_sanity_check() parameters,
> or
> 2. moving the first block of sanity checks into kmem_cache_sanity_check()?
>
> Personally I prefer the first option, to avoid complexity in the return
> path handling. I would like to hear what others think.

We already have to deal with the return path handling for other failure
cases.
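
For example, the caller could look roughly like this (again a sketch, not
the actual patch), with a sanity-check failure taking the same unlock path
that other failure cases already use:

struct kmem_cache *kmem_cache_create(const char *name, size_t size,
				     size_t align, unsigned long flags,
				     void (*ctor)(void *))
{
	struct kmem_cache *s = NULL;

	get_online_cpus();
	mutex_lock(&slab_mutex);

	if (kmem_cache_sanity_check(name, size))
		goto out_unlock;

	/* ... actual cache creation goes here ... */

out_unlock:
	mutex_unlock(&slab_mutex);
	put_online_cpus();
	return s;
}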

