Date:	Wed, 1 Jul 2015 19:44:08 +0100
From:	Al Viro <viro@...IV.linux.org.uk>
To:	Andrey Ryabinin <a.ryabinin@...sung.com>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: [git pull] vfs part 2

On Wed, Jul 01, 2015 at 02:25:43PM +0300, Andrey Ryabinin wrote:

> I've attached gdb instead.
> So, after message "bogus RWRITE: 93 -> 4096"
> I've got this:
> 
> (gdb) p *req->rc
> $11 = {size = 11, id = 119 'w', tag = 3, offset = 11, capacity = 8192, sdata = 0xffff8802347b8020 "\v"}
> (gdb) p *req->tc
> $12 = {size = 116, id = 118 'v', tag = 3, offset = 0, capacity = 8192, sdata = 0xffff88023479c020 "t"}

*grumble*

Request: id = P9_TWRITE, tag = 3.  Response: id = P9_RWRITE, tag = 3.
Size of request is reasonable: it should be 32bit request size + 8bit id
(TWRITE) + 16bit tag + 32bit fid + 64bit offset + 32bit count + payload,
i.e. 23 + payload size.  23 + 93 == 116, which is what we have there.
Success response is a 32bit size field + 8bit id (RWRITE) + 16bit tag +
32bit count, i.e. 11 bytes, and that seems to be what we are getting here.
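
For reference, that matches the 9P wire layout of those two messages (the
leading size field covers the entire message, all integers little-endian):

    size[4] Twrite tag[2] fid[4] offset[8] count[4] data[count]
    size[4] Rwrite tag[2] count[4]

i.e. 4+1+2+4+8+4 = 23 bytes of TWRITE header and 4+1+2+4 = 11 bytes of
RWRITE.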

That would appear to exclude the possibility of a bogus request - even if we
had somehow ended up with count == 4096 in the TWRITE arguments, the server
wouldn't have anywhere to get that much data from, so either things are
*really* badly fucked on the server, or it should've replied with RERROR.

To exclude it completely we could check the 4 bytes at req->tc->sdata + 19
(that's where count lives), but I don't believe that this is where the
problem is.
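
If anyone wants to actually look, a sketch of that check (count sits at
offset 19, right after size[4] + id[1] + tag[2] + fid[4] + offset[8], and
9p integers are little-endian on the wire; get_unaligned_le32() is from
asm/unaligned.h):

	/* sketch: compare the count parsed out of the RWRITE with the
	 * count we had put into the TWRITE we'd sent */
	u32 sent_count = get_unaligned_le32(req->tc->sdata + 19);
	if (count > sent_count)
		pr_warn("9p: RWRITE count %d > TWRITE count %u\n",
			count, sent_count);

with count being the value just parsed out of the RWRITE.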

The thing is, looking through qemu hw/9pfs/virtio-9p.c:v9fs_write() and the
stuff it calls, I don't see any way for that kind of crap to happen there...
Just in case - after the do-while loop in qemu v9fs_write(), just prior
to
    offset = 7;
    err = pdu_marshal(pdu, offset, "d", total);
    if (err < 0) {
        goto out;
    }

could you slap something like
	if (total > count)
		*(char *)0 = 0;

and see if it dumps core on your test?  Or use some more humane form of
debugging - it's been a while since I last ran qemu under gdb and I'd
need more coffee to bring that memory up...
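
(A plain assert() at the same spot would do just as well, if poking a NULL
pointer feels too crude - qemu is full of those anyway:

    assert(total <= count);

either way, the core dump is the interesting part.)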

A mismatched reply could also be a possibility, but only if we ended up
sending more than one request with the same tag without waiting for the
response to the first one.
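
That one would be easy enough to catch with a paranoid check on the client
side - sketch only, both tags sit right there in struct p9_fcall:

	/* a reply must carry the tag of the request it completes */
	WARN_ON(req->rc->tag != req->tc->tag);

done once the reply header has been parsed.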

The reason why I don't want to let it go with just "cap count with rsize in
a couple of places in net/9p/client.c" (which we'll definitely need - we
can't assume that the server is sane, not compromised, etc.) is that it
smells like a symptom of something very fishy, and I'd prefer to get to the
root of it...
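
The clamp itself is trivial - something like this in p9_client_write() once
count has been parsed (sketch; same treatment for RREAD in p9_client_read()):

	/* never trust the server to claim more than we asked to write */
	if (count > rsize)
		count = rsize;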

Oh, lovely - I do see one bug in there that could've led to bogosities around
request lifetimes, but it's more recent than 3.19, so we have something else
going on.  In any case, the patch below is needed to fix a fuckup introduced
in 4.0 - getting a truncated RWRITE packet from the server (as in "badly
malformed response", not "short write") ends up with a double-free: the error
path frees the request, then falls through to the normal-path p9_free_req()
at the bottom of the loop and frees it again.  Almost certainly not what we
are hitting here, though.

diff --git a/net/9p/client.c b/net/9p/client.c
index 6f4c4c8..ca3b342 100644
--- a/net/9p/client.c
+++ b/net/9p/client.c
@@ -1647,6 +1647,7 @@ p9_client_write(struct p9_fid *fid, u64 offset, struct iov_iter *from, int *err)
 		if (*err) {
 			trace_9p_protocol_dump(clnt, req->rc);
 			p9_free_req(clnt, req);
+			break;
 		}
 
 		p9_debug(P9_DEBUG_9P, "<<< RWRITE count %d\n", count);