Date:	Fri, 11 Feb 2011 11:01:55 +0100
From:	Milan Broz <mbroz@...hat.com>
To:	Jesper Juhl <jj@...osbits.net>
CC:	linux-kernel@...r.kernel.org, Alexander Kjeldaas <astor@...t.no>,
	David Woodhouse <David.Woodhouse@...el.com>,
	Herbert Xu <herbert@...dor.hengli.com.au>,
	Pekka Enberg <penberg@...helsinki.fi>
Subject: Re: NULL deref in drivers/md/dm-crypt.c:crypt_convert()

On 02/11/2011 10:26 AM, Jesper Juhl wrote:
> On Fri, 11 Feb 2011, Milan Broz wrote:
> But, is that really where it says the problem is? That's not how I read 
> it.
>
> The problem is the second time through the while loop, not the first :
> ...
> 776             while(ctx->idx_in < ctx->bio_in->bi_vcnt &&
> 777                   ctx->idx_out < ctx->bio_out->bi_vcnt) {
> 778
> 779                     crypt_alloc_req(cc, ctx);
> 780
> 781                     atomic_inc(&ctx->pending);
> 782
> 783                     r = crypt_convert_block(cc, ctx, this_cc->req);
>
> first time through the loop this is fine, but if we then subsequently hit 
> the -EINPROGRESS case in the switch below, we'll explicitly assign NULL to 
> this_cc->req, and then the 'continue' ensures that we do one more trip 
> around the loop, and on that second pass we pass a NULL this_cc->req to 
> crypt_convert_block()
>
Sigh. Did you read my first email? It is a false positive.

this_cc->req is allocated in crypt_alloc_req(cc, ctx).

this_cc is a simple per-cpu struct, common to both functions.

The code here simply tries to support both sync and async crypto API operation.

In sync mode, we can reuse this_cc->req immediately (this is the common case).

In async mode (the call returns -EBUSY or -EINPROGRESS) we must not use it again (because it is
still being processed), so we explicitly set it to NULL here, and on the NEXT iteration
crypt_alloc_req(cc, ctx) allocates a new this_cc->req from the pool.

crypt_alloc_req could probably take this_cc as an argument directly instead of calculating
it again, but the compiler will inline and optimise the code anyway.

You can easily test the async path: just apply this in crypt_ctr and use some crypt mapping:
-                      "%s(%s)", chainmode, cipher);
+                      "cryptd(%s(%s-generic))", chainmode, cipher);

To make Coverity happy, see the patch below.
-
Tidy code to reuse the per-cpu pointer

Signed-off-by: Milan Broz <mbroz@...hat.com>
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index ad2a6df..a2312e3 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -748,9 +748,9 @@ static void kcryptd_async_done(struct crypto_async_request *async_req,
 			       int error);
 
 static void crypt_alloc_req(struct crypt_config *cc,
-			    struct convert_context *ctx)
+			    struct convert_context *ctx,
+			    struct crypt_cpu *this_cc)
 {
-	struct crypt_cpu *this_cc = this_crypt_config(cc);
 	unsigned key_index = ctx->sector & (cc->tfms_count - 1);
 
 	if (!this_cc->req)
@@ -776,7 +776,7 @@ static int crypt_convert(struct crypt_config *cc,
 	while(ctx->idx_in < ctx->bio_in->bi_vcnt &&
 	      ctx->idx_out < ctx->bio_out->bi_vcnt) {
 
-		crypt_alloc_req(cc, ctx);
+		crypt_alloc_req(cc, ctx, this_cc);
 
 		atomic_inc(&ctx->pending);
 

Milan

