Date:	Mon, 16 Jan 2012 10:14:33 +0000
From:	Paul Durrant <Paul.Durrant@...rix.com>
To:	"Wei Liu (Intern)" <wei.liu2@...rix.com>,
	Ian Campbell <Ian.Campbell@...rix.com>,
	"konrad.wilk@...cle.com" <konrad.wilk@...cle.com>,
	"xen-devel@...ts.xensource.com" <xen-devel@...ts.xensource.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>
CC:	"Wei Liu (Intern)" <wei.liu2@...rix.com>
Subject: RE: [Xen-devel] [RFC PATCH 3/6] netback: switch to NAPI + kthread
 model

> -----Original Message-----
> From: xen-devel-bounces@...ts.xensource.com [mailto:xen-devel-
> bounces@...ts.xensource.com] On Behalf Of Wei Liu
> Sent: 13 January 2012 16:59
> To: Ian Campbell; konrad.wilk@...cle.com; xen-
> devel@...ts.xensource.com; netdev@...r.kernel.org
> Cc: Wei Liu (Intern)
> Subject: [Xen-devel] [RFC PATCH 3/6] netback: switch to NAPI + kthread
> model
> 
> This patch implements the 1:1 model netback. We utilize NAPI and a
> kthread to do the heavy lifting:
> 
>   - NAPI is used for guest side TX (host side RX)
>   - kthread is used for guest side RX (host side TX)
> 
> This model provides better scheduling fairness among vifs. It also lays the
> foundation for future work.
> 
> The major defect of the current implementation is that the NAPI poll
> handler does not actually disable the interrupt. Xen differs from real
> hardware here and needs some additional tuning of the ring macros.
> 
> Signed-off-by: Wei Liu <wei.liu2@...rix.com>
> ---
[snip]
> 
>  	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
>  	struct gnttab_copy tx_copy_ops[MAX_PENDING_REQS];
> @@ -100,42 +91,14 @@ struct xen_netbk {
>  	struct netbk_rx_meta meta[2*XEN_NETIF_RX_RING_SIZE];
>  };
> 
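
To make the quoted model concrete for readers skimming the archive, here is a minimal sketch of the per-vif NAPI + kthread split the patch describes. The xenvif structure and function names below are illustrative placeholders, not the identifiers used in the actual series, and all ring processing is elided:

#include <linux/err.h>
#include <linux/interrupt.h>
#include <linux/kthread.h>
#include <linux/netdevice.h>
#include <linux/wait.h>

struct xenvif {
	struct net_device *dev;
	struct napi_struct napi;	/* guest TX / host RX path */
	struct task_struct *task;	/* guest RX / host TX path */
	wait_queue_head_t wq;
};

/* NAPI poll handler: process up to 'budget' guest TX requests. */
static int xenvif_tx_poll(struct napi_struct *napi, int budget)
{
	struct xenvif *vif = container_of(napi, struct xenvif, napi);
	int work_done = 0;

	/* ... consume up to 'budget' requests from the TX ring,
	 * incrementing work_done for each, using vif's ring state ... */
	(void)vif;

	if (work_done < budget) {
		/* On real hardware the IRQ was disabled in the interrupt
		 * handler and is re-enabled here; with Xen the event
		 * channel / ring macros need equivalent handling, which
		 * is the gap the quoted description mentions. */
		napi_complete(napi);
	}
	return work_done;
}

/* Per-vif kthread: pushes packets into the guest RX ring. */
static int xenvif_rx_kthread(void *data)
{
	struct xenvif *vif = data;

	while (!kthread_should_stop()) {
		wait_event_interruptible(vif->wq, kthread_should_stop()
					 /* || rx work pending */);
		/* ... fill the guest RX ring and notify the frontend ... */
	}
	return 0;
}

static int xenvif_start(struct xenvif *vif)
{
	init_waitqueue_head(&vif->wq);

	/* Four-argument form matches the kernels this thread targets. */
	netif_napi_add(vif->dev, &vif->napi, xenvif_tx_poll, 64);
	napi_enable(&vif->napi);

	vif->task = kthread_run(xenvif_rx_kthread, vif, "vif-rx");
	if (IS_ERR(vif->task))
		return PTR_ERR(vif->task);
	return 0;
}

The intent of the split, as described, is that the guest TX side is scheduled under the network stack's NAPI budgeting while the guest RX side is scheduled as an ordinary kernel thread, which is where the per-vif fairness comes from.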

Keeping these big inline arrays might cause scalability issues. pending_tx_info should arguably be more closely tied in, and possibly implemented within, your page pool code.
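
Purely as an illustration of that suggestion (the page_pool_entry / xenvif_pool names and layout below are hypothetical; the real shape depends on the page pool patch earlier in this series), tying the per-request TX state to the pool entries could look something like:

#include <linux/slab.h>

struct page_pool_entry {
	struct page *page;
	struct pending_tx_info tx_info;	/* lives with the page it tracks */
};

struct xenvif_pool {
	unsigned int size;
	struct page_pool_entry *entries;	/* allocated per vif at setup */
};

static int xenvif_pool_init(struct xenvif_pool *pool, unsigned int size)
{
	pool->entries = kcalloc(size, sizeof(*pool->entries), GFP_KERNEL);
	if (!pool->entries)
		return -ENOMEM;
	pool->size = size;
	return 0;
}

That way the per-request state scales with the pool each vif actually owns rather than sitting as a fixed inline blob in struct xen_netbk.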

  Paul
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
