Message-ID: <1326709915.17210.410.camel@zakaz.uk.xensource.com>
Date: Mon, 16 Jan 2012 10:31:55 +0000
From: Ian Campbell <Ian.Campbell@...rix.com>
To: Paul Durrant <Paul.Durrant@...rix.com>
CC: "Wei Liu (Intern)" <wei.liu2@...rix.com>,
"konrad.wilk@...cle.com" <konrad.wilk@...cle.com>,
"xen-devel@...ts.xensource.com" <xen-devel@...ts.xensource.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: RE: [Xen-devel] [RFC PATCH 3/6] netback: switch to NAPI + kthread
model
On Mon, 2012-01-16 at 10:14 +0000, Paul Durrant wrote:
> > -----Original Message-----
> > From: xen-devel-bounces@...ts.xensource.com [mailto:xen-devel-
> > bounces@...ts.xensource.com] On Behalf Of Wei Liu
> > Sent: 13 January 2012 16:59
> > To: Ian Campbell; konrad.wilk@...cle.com; xen-
> > devel@...ts.xensource.com; netdev@...r.kernel.org
> > Cc: Wei Liu (Intern)
> > Subject: [Xen-devel] [RFC PATCH 3/6] netback: switch to NAPI + kthread
> > model
> >
> > This patch implements the 1:1 model netback. We utilize NAPI and a
> > kthread to do the heavy lifting:
> >
> > - NAPI is used for guest side TX (host side RX)
> > - kthread is used for guest side RX (host side TX)
> >
> > This model provides better scheduling fairness among vifs. It also lays the
> > foundation for future work.
> >
> > The major defect of the current implementation is that we don't
> > actually disable interrupts in the NAPI poll handler. Xen differs from
> > real hardware here and requires some further tuning of the ring macros.
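
(Restating the model for context -- a hand-wavy sketch with made-up
names, not the actual patch code: each vif gets its own NAPI instance
for guest TX and its own kthread for guest RX, both kicked from the
vif's interrupt.)

	struct xenvif {
		struct net_device  *dev;
		unsigned int        irq;

		/* Guest TX (host RX), processed in NAPI context. */
		struct napi_struct  napi;

		/* Guest RX (host TX), processed by a dedicated kthread. */
		struct task_struct *task;
		wait_queue_head_t   wq;
	};

	static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
	{
		struct xenvif *vif = dev_id;

		napi_schedule(&vif->napi);	/* guest TX work */
		wake_up(&vif->wq);		/* guest RX work */
		return IRQ_HANDLED;
	}
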
> >
> > Signed-off-by: Wei Liu <wei.liu2@...rix.com>
> > ---
> [snip]
> >
> > struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
> > 	struct gnttab_copy tx_copy_ops[MAX_PENDING_REQS];
> > @@ -100,42 +91,14 @@ struct xen_netbk {
> > 	struct netbk_rx_meta meta[2*XEN_NETIF_RX_RING_SIZE];
> > };
> >
>
> Keeping these big inline arrays might cause scalability issues.
> pending_tx_info should arguably be more closely tied in and possibly
> implemented within your page pool code.
For pending_tx_info that probably makes sense since there is a 1:1
mapping between page pool entries and pending_tx_info.
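Something like folding it straight into the pool entry, perhaps. Purely
an illustrative sketch (made-up names, not a concrete proposal):

	struct page_pool_entry {
		struct page            *page;
		/* Owning vif, so completions can find their way back. */
		struct xenvif          *vif;
		/* Folded in: one pending_tx_info per pool entry. */
		struct pending_tx_info  tx_info;
	};
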
For some of the others the arrays are the runtime scratch space used by
the netback during each processing pass. Since, regardless of the number
of VIFs, there can only ever be nr_online_cpus netbacks active at once,
perhaps per-CPU scratch space (with appropriate locking etc.) is the way
to go, e.g. something like the sketch below.
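
Very roughly, and only to illustrate the idea (the softirq vs. kthread
exclusion and preemption details would still need thinking about):

	/* Per-CPU scratch shared by whichever netback instance happens
	 * to be running on this CPU during a processing pass. */
	struct netbk_scratch {
		struct gnttab_copy   tx_copy_ops[MAX_PENDING_REQS];
		struct netbk_rx_meta meta[2*XEN_NETIF_RX_RING_SIZE];
	};

	static DEFINE_PER_CPU(struct netbk_scratch, netbk_scratch);

	/* In the TX/RX processing path: */
	struct netbk_scratch *scratch = &get_cpu_var(netbk_scratch);
	/* ... build scratch->tx_copy_ops / scratch->meta for this pass ... */
	put_cpu_var(netbk_scratch);
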
Ian.