Message-Id: <1179494148.14376.7.camel@moonstone.uk.level5networks.com>
Date:	Fri, 18 May 2007 14:15:48 +0100
From:	Kieran Mansley <kmansley@...arflare.com>
To:	xen-devel@...ts.xensource.com
Cc:	netdev@...r.kernel.org, muli@...ibm.com,
	herbert@...dor.apana.org.au
Subject: [PATCH 0/4] [Net] Support Xen accelerated network plugin modules

This is a repost of some earlier patches to the xen-devel mailing list,
with a number of changes thanks to some useful suggestions from others.
Apologies for the short delay in getting this next version ready.

I've also CC'd netdev@...r.kernel.org as some of the files being patched
may be merged into upstream Linux soon, and so folks there may have
opinions too.

This set of patches provides the hooks and support necessary for
accelerated network plugin modules to attach to Xen's netback and
netfront.  These modules provide a fast path for network traffic where
hardware support is available for the netfront driver to send and
receive packets directly to and from a NIC (such as those available
from Solarflare).
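
To give a rough idea of the shape of these hooks, a frontend plugin
might register a small set of callbacks along the following lines.
This is only a sketch: the names below are invented for illustration
and are not the actual interface in the patches.

/*
 * Purely illustrative: the structure and function names here
 * (netfront_accel_hooks, netfront_accelerator_register/unregister)
 * are invented for this overview and are not the exact interface in
 * the patches; see the attached dummies for the real one.
 */
#include <linux/netdevice.h>
#include <linux/skbuff.h>

struct netfront_accel_hooks {
	/* Called when netfront brings up a device the plugin may accelerate. */
	int  (*new_device)(struct net_device *dev);
	/* Called when that device goes away, so the plugin can clean up. */
	void (*remove_device)(struct net_device *dev);
	/* Offered every packet netfront is about to send; return nonzero
	 * if the plugin takes it for the fast path, zero to decline. */
	int  (*start_xmit)(struct sk_buff *skb, struct net_device *dev);
};

/* A plugin registers its hooks with netfront at module load time. */
int netfront_accelerator_register(struct netfront_accel_hooks *hooks);
void netfront_accelerator_unregister(struct netfront_accel_hooks *hooks);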

As there are currently no available plugins, I've attached a couple of
dummy ones to illustrate how the hooks could be used.  These are
incomplete (and clearly wouldn't even compile) in that they only include
code to show the interface between the accelerated module and
netfront/netback.  A lot of the comments hint at what code should go
where.  They don't show any interface between the accelerated frontend
and accelerated backend, or hardware access, for example, as those would
both be specific to the implementation.  I hope they help illustrate
this, but if you have any questions I'm happy to provide more
information.
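
For a sense of how little such a dummy needs to contain, a minimal
frontend skeleton in the same spirit (again written against the
invented names from the sketch above, not the real interface) would
just register its hooks at load time and decline every packet:

/*
 * Minimal dummy frontend skeleton, in the same spirit as the attached
 * dummy_accel_frontend.c but using the invented names from the sketch
 * above rather than the real interface: it registers its hooks at
 * module load time and declines every packet.
 */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static int dummy_new_device(struct net_device *dev)
{
	/* Map the VI granted by the accelerated backend, set up rings. */
	return 0;
}

static void dummy_remove_device(struct net_device *dev)
{
	/* Tear down the VI mapping and free any resources. */
}

static int dummy_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	/* Decline everything: let netfront use the normal path. */
	return 0;
}

static struct netfront_accel_hooks dummy_hooks = {
	.new_device    = dummy_new_device,
	.remove_device = dummy_remove_device,
	.start_xmit    = dummy_start_xmit,
};

static int __init dummy_accel_init(void)
{
	return netfront_accelerator_register(&dummy_hooks);
}

static void __exit dummy_accel_exit(void)
{
	netfront_accelerator_unregister(&dummy_hooks);
}

module_init(dummy_accel_init);
module_exit(dummy_accel_exit);
MODULE_LICENSE("GPL");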

A brief overview of the operation of the plugins: when the accelerated
modules are loaded, the accelerated backend creates a VI (virtual
interface) to allow the accelerated frontend to safely access portions
of the NIC.  For RX, when the accelerated backend receives packets it
examines them and, if appropriate, inserts filters into the NIC so that
future packets for that address are delivered directly to the
accelerated frontend's VI.  For TX, netfront offers each packet to the
accelerated frontend, which can accept it (if it wants to send it
directly to the hardware) or decline it (if it thinks the packet is
better sent via the normal network path).
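
Reduced to (heavily simplified, invented-name) code, the TX decision
and the backend's RX filter insertion amount to something like this:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/*
 * Invented stand-ins for whatever a real plugin's hardware layer would
 * provide; none of these names exist in the patches.
 */
extern int  accel_vi_can_reach(struct net_device *dev, struct sk_buff *skb);
extern void accel_vi_post_tx(struct net_device *dev, struct sk_buff *skb);
extern int  frontend_has_vi(struct net_device *dev);
extern int  filter_installed(struct net_device *dev, struct sk_buff *skb);
extern void nic_install_filter_for(struct net_device *dev, struct sk_buff *skb);

/* TX, frontend side: accept the packet only if the VI can reach the
 * destination directly; otherwise decline and let netfront send it. */
int example_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	if (!accel_vi_can_reach(dev, skb))
		return 0;               /* decline: normal netfront path */
	accel_vi_post_tx(dev, skb);     /* hand it straight to the NIC */
	return 1;                       /* accept: the plugin owns the skb */
}

/* RX, backend side: on seeing traffic for an accelerated frontend,
 * install a NIC filter so later packets bypass netback entirely;
 * this packet itself still travels the normal path. */
void example_backend_rx(struct net_device *dev, struct sk_buff *skb)
{
	if (frontend_has_vi(dev) && !filter_installed(dev, skb))
		nic_install_filter_for(dev, skb);
}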

We have tried to ensure that the hooks are hardware-agnostic, i.e.
relevant to hardware other than our own, without trying to provide
every possible way of doing each task (though extensions from others
who need them would be welcome).

We have found that, using this approach to accelerating network
traffic, domU-to-domU connections (across the network) can achieve
close to the performance of dom0-to-dom0 connections over 10Gbps
Ethernet.  This is roughly double the bandwidth seen with unmodified
Xen.

Kieran

Attachment: dummy_accel_backend.c (text/x-csrc, 3325 bytes)

Attachment: dummy_accel_frontend.c (text/x-csrc, 9228 bytes)
