Date:	Thu, 4 Nov 2010 11:30:45 +0200
From:	"Michael S. Tsirkin" <mst@...hat.com>
To:	Shirley Ma <mashirle@...ibm.com>
Cc:	David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
	kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 1/1] vhost: TX used buffer guest signal accumulation

On Wed, Nov 03, 2010 at 10:38:46PM -0700, Shirley Ma wrote:
> On Wed, 2010-11-03 at 12:48 +0200, Michael S. Tsirkin wrote:
> > I mean in practice, you see a benefit from this patch?
> 
> Yes, I tested it, and it does improve performance.
> 
> > > My concern here is whether checking only in set up would be
> > sufficient
> > > for security?
> > 
> > It had better be sufficient, because the checks that put_user does
> > are not effective when run from the kernel thread anyway.
> > 
> > > Would be there is a case guest could corrupt the ring
> > > later? If not, that's OK.
> > 
> > You mean change the pointer after it's checked?
> > If you see such a case, please holler.
> 
> I wondered about it, but I don't have such a case in mind.
> 
> > To clarify: the combination of __put_user and separate
> > signalling is giving the same performance benefit as your
> > patch?
> 
> Yes, it has similar performance, but I haven't finished comparing all
> message sizes yet.
> 
> > I am mostly concerned with adding code that seems to help
> > speed for reasons we don't completely understand, because
> > then we might break the optimization easily without noticing.
> 
> I don't think the patch I submitted would break anything.

No, I just meant that when a patch gives some benefit, I'd like
to understand where the benefit comes from so that I don't
break it later.

> It just
> reduces the cost of the three put_user() calls per used buffer, and
> changes guest signaling from one-to-one to many-to-one.

One thing to note is that deferred signalling needs to be
benchmarked with old guests which don't orphan skbs on xmit
(or disable orphaning in both networking stack and virtio-net).

> 
> Thanks
> Shirley

OK, so I guess I'll queue the __put_user etc. patches for net-next. On top
of this, I think a patch which defers signalling would be nice to have;
then we can figure out whether a separate heads array still has benefits
for the non-zero-copy case: if yes, what they are; if no, whether it
should be used for zero copy only or for both non zero copy and zero copy.

Makes sense?

-- 
MST
