Message-ID: <20120129213746.GA7164@phenom.dumpdata.com>
Date: Sun, 29 Jan 2012 16:37:46 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To: Wei Liu <wei.liu2@...rix.com>
Cc: Ian Campbell <Ian.Campbell@...rix.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"xen-devel@...ts.xensource.com" <xen-devel@...ts.xensource.com>,
David Vrabel <david.vrabel@...rix.com>,
Paul Durrant <Paul.Durrant@...rix.com>
Subject: Re: [RFC PATCH V2] New Xen netback implementation
On Sun, Jan 29, 2012 at 01:42:41PM +0000, Wei Liu wrote:
> On Fri, 2012-01-27 at 19:22 +0000, Konrad Rzeszutek Wilk wrote:
> > On Tue, Jan 17, 2012 at 01:46:56PM +0000, Wei Liu wrote:
> > > A new netback implementation which includes three major features:
> > >
> > > - Global page pool support
> > > - NAPI + kthread 1:1 model
> > > - Netback internal name changes
> > >
> > > Changes in V2:
> > > - Fix minor bugs in V1
> > > - Embed pending_tx_info into page pool
> > > - Per-cpu scratch space
> > > - Notification code path clean up
> > >
> > > This patch series is the foundation of future work, so it is better
> > > to get it right first. Patches 1 and 3 have the real meat.
> >
> > I've been playing with these patches and a couple of things
> > came to mind:
> > - would it make sense to also register to the shrinker API? This way
> > if the host is running low on memory it can squeeze it out of the
> > pool code. Perhaps a future TODO..
> > - I like the pool code. I was thinking that perhaps (in the future)
> >   it could be used by blkback as well, as it runs into "not enough
> >   request structures" with the default setting. And making this dynamic
> >   would be pretty sweet.
>
> Interesting thoughts, worth adding to the TODO list. But I'm focusing on
> multi-page ring support and split event channels at the moment, which
> should help improve performance on 10G networks. Hopefully I can submit
> RFC patch V3 in a few days. ;-)
>
> > - This patch set solves the CPU utilization problem I've seen with the
> >   older netback. With the older one I could see X netback threads eating
> >   80% of CPU. With this one, the number is down to 13-14%.
> >
> > So you can definitely stick 'Tested-by: Konrad..' on them, and definitely
> > Reviewed-by on the first two - I haven't had a chance to look at the rest.
> >
>
> Thanks for your extensive testing and review.
Sure. I also did some testing with a limited number of CPUs and found
that 'xl vcpu-set 0 N' makes netback stop working :-(
>
>
> Wei.