Date:	Thu, 6 Dec 2012 10:13:20 +0200
From:	"Michael S. Tsirkin" <mst@...hat.com>
To:	Ben Hutchings <bhutchings@...arflare.com>
Cc:	Jason Wang <jasowang@...hat.com>, rusty@...tcorp.com.au,
	virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
	kvm@...r.kernel.org
Subject: Re: [PATCHv5] virtio-spec: virtio network device RFS support

On Wed, Dec 05, 2012 at 08:39:26PM +0000, Ben Hutchings wrote:
> On Mon, 2012-12-03 at 12:58 +0200, Michael S. Tsirkin wrote:
> > Add RFS support to virtio network device.
> > Add a new feature flag VIRTIO_NET_F_RFS for this feature, a new
> > configuration field max_virtqueue_pairs to detect supported number of
> > virtqueues as well as a new command VIRTIO_NET_CTRL_RFS to program
> > packet steering for unidirectional protocols.
> [...]
> > +Programming of the receive flow classifier is implicit.
> > + Transmitting a packet of a specific flow on transmitqX will cause incoming
> > + packets for this flow to be steered to receiveqX.
> > + For uni-directional protocols, or where no packets have been transmitted
> > + yet, the device will steer a packet to a random queue out of the specified
> > + receiveq0..receiveqn.
> [...]
> 
> It doesn't seem like this is usable to implement accelerated RFS in the
> guest, though perhaps that doesn't matter.

What is the issue? Could you be more explicit, please?

It seems to work pretty well: if we have
# of queues >= # of cpus, an incoming TCP_STREAM into the
guest scales very nicely without manual tweaks in the guest.

The way it works is: when the guest sends a packet, the driver
selects the rx queue that we want to use for incoming
packets for this flow, and transmits on the matching tx queue.
This is exactly what the text above suggests, no?
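
To make the pairing concrete, here is a small self-contained C sketch of
the steering rule. This is purely illustrative - the names, table layout
and sizes are made up, not the actual virtio or tun code: a table maps a
flow hash to the queue index last used to transmit that flow, and flows
never transmitted fall back to a random queue in receiveq0..receiveqn,
as the spec text says.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_QUEUE_PAIRS 4
#define FLOW_TABLE_SIZE 256

/* flow hash -> rx queue index; -1 means "no packet of this flow
 * has been transmitted yet". */
static int flow_table[FLOW_TABLE_SIZE];

/* Transmit path: remember which queue pair was used for this flow. */
static void record_tx(uint32_t rxhash, int txq)
{
	flow_table[rxhash % FLOW_TABLE_SIZE] = txq;
}

/* Receive path: steer to the paired rx queue, or to a random queue
 * for flows we have never transmitted on (uni-directional protocols). */
static int steer_rx(uint32_t rxhash)
{
	int q = flow_table[rxhash % FLOW_TABLE_SIZE];

	return q >= 0 ? q : rand() % NUM_QUEUE_PAIRS;
}

int main(void)
{
	for (int i = 0; i < FLOW_TABLE_SIZE; i++)
		flow_table[i] = -1;

	record_tx(0x1234, 2);             /* guest transmits a flow on txq2 */
	printf("%d\n", steer_rx(0x1234)); /* its replies land on rxq2 */
	printf("%d\n", steer_rx(0x9999)); /* unseen flow: random queue */
	return 0;
}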

>  On the host side, presumably
> you'll want vhost_net to do the equivalent of sock_rps_record_flow() -
> only without a socket?  But in any case, that requires an rxhash, so I
> don't see how this is supposed to work.
> 
> Ben.

The host should just do what the guest tells it to.
On the host side we build up the steering table as we get packets
to transmit. See the code in drivers/net/tun.c in recent
kernels.
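
Roughly, that host-side table works like the stripped-down sketch below
(illustrative only - the real tun.c table is hashed and RCU-protected,
and ages entries in jiffies; the constants and names here are made up):
each packet the guest transmits refreshes a (rxhash, queue) entry with a
timestamp, and lookups on the way back in ignore entries that have gone
stale.

#include <stdint.h>
#include <time.h>

#define FLOW_ENTRIES  1024
#define FLOW_AGE_SECS 3   /* hypothetical aging interval */

struct flow_entry {
	uint32_t rxhash;
	uint16_t queue;
	time_t updated;
};

static struct flow_entry flows[FLOW_ENTRIES];

/* Host tx path (guest -> host): note which queue the guest used
 * for this flow, in the spirit of tun_flow_update(). */
static void flow_update(uint32_t rxhash, uint16_t queue)
{
	struct flow_entry *e = &flows[rxhash % FLOW_ENTRIES];

	e->rxhash = rxhash;
	e->queue = queue;
	e->updated = time(NULL);
}

/* Host rx path (host -> guest): pick the recorded queue if the
 * entry is fresh, otherwise fall back to hashing the flow. */
static uint16_t flow_select(uint32_t rxhash, uint16_t numqueues)
{
	struct flow_entry *e = &flows[rxhash % FLOW_ENTRIES];

	if (e->rxhash == rxhash && time(NULL) - e->updated < FLOW_AGE_SECS)
		return e->queue;
	return rxhash % numqueues;
}

int main(void)
{
	flow_update(0xabcd, 3);	/* guest sent this flow on queue 3 */
	return flow_select(0xabcd, 4) == 3 ? 0 : 1;
}

The aging is the interesting design point: if the guest moves a flow to
another queue, the stale entry expires and steering simply follows the
new transmit queue.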

Again, this actually works fine - what are the problems that you see?
Could you give an example, please?

> -- 
> Ben Hutchings, Staff Engineer, Solarflare
> Not speaking for my employer; that's the marketing department's job.
> They asked us to note that Solarflare product names are trademarked.