Message-ID: <52172395.9000400@redhat.com>
Date:	Fri, 23 Aug 2013 16:55:49 +0800
From:	Jason Wang <jasowang@...hat.com>
To:	"Michael S. Tsirkin" <mst@...hat.com>
CC:	kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org,
	netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 6/6] vhost_net: remove the max pending check

On 08/20/2013 10:48 AM, Jason Wang wrote:
> On 08/16/2013 06:02 PM, Michael S. Tsirkin wrote:
>> On Fri, Aug 16, 2013 at 01:16:30PM +0800, Jason Wang wrote:
>>> We used to limit the max pending DMAs to prevent the guest from pinning too many
>>> pages. But this limit can be removed since:
>>>
>>> - We have the sk_wmem_alloc check in both tun/macvtap to do the same work
>>> - This max pending check was almost useless since it was only done when there were
>>>   no new buffers coming from the guest. The guest can easily exceed the limit.
>>> - We already check upend_idx != done_idx and switch to non-zerocopy then. So
>>>   even if all vq->heads are used, we can still do the packet transmission.
>> We can, but performance will suffer.
> The check was in fact only done when no new buffers were submitted by the
> guest. So if the guest keeps sending, the check won't be done.
>
> If we really want to do this, we should do it unconditionally. Anyway, I
> will run tests to see the result.

There's a bug in PATCH 5/6: the check

nvq->upend_idx != nvq->done_idx

makes zerocopy always disabled, since we initialize both
upend_idx and done_idx to zero. So I changed it to:

(nvq->upend_idx + 1) % UIO_MAXIOV != nvq->done_idx

With this change on top, I didn't see a performance difference with and
without this patch.
