Date:   Sat, 19 Nov 2022 11:31:41 +0900
From:   Dominique Martinet <asmadeus@...ewreck.org>
To:     Stefano Stabellini <sstabellini@...nel.org>
Cc:     GUO Zihua <guozihua@...wei.com>, linux_oss@...debyte.com,
        v9fs-developer@...ts.sourceforge.net, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] 9p/xen: check logical size for buffer size

Thanks a lot, that was fast!

Stefano Stabellini wrote on Fri, Nov 18, 2022 at 05:51:46PM -0800:
> On Fri, 18 Nov 2022, Dominique Martinet wrote:
> > trans_xen did not check that the data fits into the buffer before
> > copying from the xen ring, but we probably should.
> > Add a check that just skips the request and returns an error to
> > userspace if it does not fit.
> > 
> > Signed-off-by: Dominique Martinet <asmadeus@...ewreck.org>
> > ---
> > 
> > This comes more or less as a follow up of a fix for trans_fd:
> > https://lkml.kernel.org/r/20221117091159.31533-1-guozihua@huawei.com
> > where the msize check should be replaced by a capacity check, except
> > trans_xen did not actually check that the size fits at all.
> > 
> > While we normally trust the hypervisor (they can probably do whatever
> > they want with our memory), a bug in the 9p server is always possible so
> > sanity checks never hurt, especially now that buffers got drastically
> > smaller with a recent patch.
> > 
> > My setup for xen is unfortunately long dead so I cannot test this:
> > Stefano, you've tested v9fs xen patches in the past, would you mind
> > verifying this works as well?
> > 
> >  net/9p/trans_xen.c | 9 +++++++++
> >  1 file changed, 9 insertions(+)
> > 
> > diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
> > index b15c64128c3e..66ceb3b3ae30 100644
> > --- a/net/9p/trans_xen.c
> > +++ b/net/9p/trans_xen.c
> > @@ -208,6 +208,14 @@ static void p9_xen_response(struct work_struct *work)
> >  			continue;
> >  		}
> >  
> > +		if (h.size > req->rc.capacity) {
> > +			dev_warn(&priv->dev->dev,
> > +				 "requested packet size too big: %d for tag %d with capacity %zd\n",
> > +		                 h.size, h.tag, rreq->rc.capacity);
> 
> "req" instead of "rreq"

Ugh, sorry about that. I didn't realize I had NET_9P_XEN=n on this
machine... /facepalm

I'm lazy so won't bother sending a v2:
https://github.com/martinetd/linux/commit/ebd09c8245aa15f15b273e9733e8ed8991db3682
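
(For reference, the v2 amounts to the quoted hunk with the typo fixed;
this is just a sketch, the linked commit is authoritative:)

	if (h.size > req->rc.capacity) {
		dev_warn(&priv->dev->dev,
			 "requested packet size too big: %d for tag %d with capacity %zd\n",
			 h.size, h.tag, req->rc.capacity);
		/* error out the request and skip it, as described above */
	}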

I'll add your Tested-by tomorrow unless you don't want to :)


> I made this change and tried the two patches together. Unfortunately I
> get the following error as soon as I try to write a file:
> 
> /bin/sh: can't create /mnt/file: Input/output error
> 
> 
> Next I reverted the second patch and only kept this patch. With that, it
> worked as usual. It looks like the second patch is the problem. I have
> not investigated further.

Thanks -- it's now obvious I shouldn't send patches without testing
before bedtime...
I could reproduce easily with virtio as well; this one was silly too
(>= instead of >). There is another problem when zc requests get
involved, as we don't actually allocate more than 4k for the rpc itself.
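
(Concretely, the silly part was of this shape -- names as in the xen
hunk quoted above, the actual virtio code differs in detail:)

	if (h.size >= req->rc.capacity)	/* wrong: also rejects a reply that exactly fills the buffer */

where it should only reject a reply that would actually overflow:

	if (h.size > req->rc.capacity)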

If I adjust it to also check against the zc 'inlen' as follows, it
appears to work:
https://github.com/martinetd/linux/commit/162015a0dac40eccc9e8311a5eb031596ad35e82
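
(Roughly what the adjustment amounts to -- a sketch, not the literal
commit; 'inlen' here is the zero-copy read length passed down from
p9_client_zc_rpc(), and the rc buffer for zc requests is only
P9_ZC_HDR_SZ, i.e. 4k:)

	/* zc payloads land directly in caller-provided pages, so only
	 * the protocol header has to fit in rc; allow the zc input
	 * length on top of the buffer capacity.
	 */
	if (h.size > req->rc.capacity + inlen) {
		/* oversized reply: error out the request */
	}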
But that inlen isn't actually precise, and trans_virtio (the only
transport implementing zc rpc) takes some liberty with the actual sg
size to better fit the hardware, so that check doesn't really make
sense either; we should probably just trust trans_virtio at this
point?

This isn't obvious, so I'll just drop this patch for now.
Checking with msize isn't any good, but it can wait till we sort this
out, as all transports now check it one way or another; I'd like to
get the actual fixes out first.

(Christian, if you have time to look at it and take over I'd appreciate
it, but there's no hurry.)


Thanks again and sorry for the bad patches!
-- 
Dominique
