Date:   Thu, 30 Nov 2017 12:45:23 +0800
From:   Wei Xu <wexu@...hat.com>
To:     Jason Wang <jasowang@...hat.com>
Cc:     virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
        linux-kernel@...r.kernel.org, mst@...hat.com,
        mjrosato@...ux.vnet.ibm.com
Subject: Re: [PATCH net,stable v2] vhost: fix skb leak in handle_rx()

On Wed, Nov 29, 2017 at 10:43:33PM +0800, Jason Wang wrote:
> 
> 
> On 2017-11-29 22:23, wexu@...hat.com wrote:
> > From: Wei Xu <wexu@...hat.com>
> > 
> > Matthew found a roughly 40% TCP throughput regression with commit
> > c67df11f ("vhost_net: try batch dequing from skb array") as discussed
> > in the following thread:
> > https://www.mail-archive.com/netdev@vger.kernel.org/msg187936.html
> > 
> > Eventually we figured out that it was an skb leak in handle_rx()
> > when sending packets to the VM. This usually happens when the guest
> > cannot drain the vq as fast as vhost fills it; once that traffic jam
> > sets in, skbs are leaked because there is no headcount left on the
> > vq for vhost to send them.
> > 
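[For reference, a condensed editorial sketch of the pre-patch flow in
handle_rx() that produces the leak just described; names follow
drivers/vhost/net.c of that era, with the surrounding receive loop and
overrun handling elided, so this is a flow illustration rather than a
compilable unit:

	headcount = get_rx_bufs(vq, vq->heads, vhost_len, &in,
				vq_log, &log,
				likely(mergeable) ? UIO_MAXIOV : 1);
	if (unlikely(headcount < 0))
		goto out;
	/* The skb is dequeued from the batched rx array here ... */
	if (nvq->rx_array)
		msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
	/* ... but if the guest posted no rx buffers, we bail out
	 * without ever passing it to recvmsg(), so nobody frees it. */
	if (!headcount) {
		/* enable notification, recheck, etc. */
		goto out;	/* leaks the skb held in msg.msg_control */
	}

Moving the headcount check ahead of the consume, as the patch below
does, means no skb is taken off the array until it is certain to be
sent.]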
> > This can be avoided by making sure we have enough headcount before
> > actually consuming an skb from the batched rx array while
> > transmitting, which is done simply by moving the zero-headcount
> > check a bit ahead.
> > 
> > Also plug the small possibility of a leak when recvmsg() fails by
> > freeing the skb.
> > 
> > Signed-off-by: Wei Xu <wexu@...hat.com>
> > Reported-by: Matthew Rosato <mjrosato@...ux.vnet.ibm.com>
> > ---
> >   drivers/vhost/net.c | 23 +++++++++++++----------
> >   1 file changed, 13 insertions(+), 10 deletions(-)
> > 
> > v2:
> > - add Matthew as the reporter, thanks Matthew.
> > - move the zero-headcount check ahead instead of deferring skb
> >   consumption, per Jason's and MST's comments.
> > - free the skb when recvmsg() fails.
> > 
> > diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> > index 8d626d7..e302e08 100644
> > --- a/drivers/vhost/net.c
> > +++ b/drivers/vhost/net.c
> > @@ -778,16 +778,6 @@ static void handle_rx(struct vhost_net *net)
> >   		/* On error, stop handling until the next kick. */
> >   		if (unlikely(headcount < 0))
> >   			goto out;
> > -		if (nvq->rx_array)
> > -			msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
> > -		/* On overrun, truncate and discard */
> > -		if (unlikely(headcount > UIO_MAXIOV)) {
> > -			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
> > -			err = sock->ops->recvmsg(sock, &msg,
> > -						 1, MSG_DONTWAIT | MSG_TRUNC);
> > -			pr_debug("Discarded rx packet: len %zd\n", sock_len);
> > -			continue;
> > -		}
> >   		/* OK, now we need to know about added descriptors. */
> >   		if (!headcount) {
> >   			if (unlikely(vhost_enable_notify(&net->dev, vq))) {
> > @@ -800,6 +790,18 @@ static void handle_rx(struct vhost_net *net)
> >   			 * they refilled. */
> >   			goto out;
> >   		}
> > +		if (nvq->rx_array)
> > +			msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
> > +		/* On overrun, truncate and discard */
> > +		if (unlikely(headcount > UIO_MAXIOV)) {
> > +			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
> > +			err = sock->ops->recvmsg(sock, &msg,
> > +						 1, MSG_DONTWAIT | MSG_TRUNC);
> > +			if (unlikely(err != 1))
> > +				kfree_skb((struct sk_buff *)msg.msg_control);
> 
> I think we'd better fix this in tun/tap (better in another patch);
> otherwise it leads to an odd API: in some cases the skb is freed inside
> recvmsg(), but in the remaining cases the caller still has to deal with it.

Right, it is better to handle it in recvmsg().

Wei
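
[Editorially, as a rough illustration of Jason's suggestion rather than
the actual follow-up patch: one way for tun's recvmsg() to own the skb
on every error path might look like the sketch below. It is written
against the tun.c of that time (tun_get(), tun_do_read() with the skb
argument added by the batching commit), with the MSG_ERRQUEUE branch
elided:

	static int tun_recvmsg(struct socket *sock, struct msghdr *m,
			       size_t total_len, int flags)
	{
		struct tun_file *tfile = container_of(sock, struct tun_file,
						      socket);
		struct tun_struct *tun = tun_get(tfile);
		struct sk_buff *skb = m->msg_control; /* batched skb, may be NULL */
		int ret;

		/* Free the skb on early errors so callers never have to. */
		if (!tun) {
			ret = -EBADFD;
			goto out_free_skb;
		}
		if (flags & ~(MSG_DONTWAIT | MSG_TRUNC | MSG_ERRQUEUE)) {
			ret = -EINVAL;
			goto out_put_tun;
		}
		/* (MSG_ERRQUEUE handling elided) */
		ret = tun_do_read(tun, tfile, &m->msg_iter,
				  flags & MSG_DONTWAIT, skb);
		if (ret > (ssize_t)total_len) {
			m->msg_flags |= MSG_TRUNC;
			ret = flags & MSG_TRUNC ? ret : total_len;
		}
		tun_put(tun);
		return ret;

	out_put_tun:
		tun_put(tun);
	out_free_skb:
		if (skb)
			kfree_skb(skb);
		return ret;
	}

With ownership settled this way, the kfree_skb() added in v2 of the
patch above becomes unnecessary, and callers of recvmsg() never have to
guess which side frees the skb.]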
