Message-ID: <20120523131242.GA15406@phenom.dumpdata.com>
Date: Wed, 23 May 2012 09:12:42 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To: Simon Graham <simon.graham@...rix.com>
Cc: Ian Campbell <Ian.Campbell@...rix.com>,
Ben Hutchings <bhutchings@...arflare.com>,
"xen-devel@...ts.xensource.com" <xen-devel@...ts.xensource.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"davem@...emloft.net" <davem@...emloft.net>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Adnan Misherfi <adnan.misherfi@...cle.com>
Subject: Re: [PATCH] xen/netback: calculate correctly the SKB slots.
On Tue, May 22, 2012 at 03:01:55PM -0400, Simon Graham wrote:
> > >
> > > > > int i, copy_off;
> > > > >
> > > > > 	count = DIV_ROUND_UP(
> > > > > -		offset_in_page(skb->data)+skb_headlen(skb), PAGE_SIZE);
> > > > > +		offset_in_page(skb->data + skb_headlen(skb)), PAGE_SIZE);
> > > >
> > > > The new version would be equivalent to:
> > > > count = offset_in_page(skb->data + skb_headlen(skb)) != 0;
> > > > which is not right, as netbk_gop_skb() will use one slot per page.
> > >
> > > Just outside the context of this patch we separately count the frag
> > > pages.
> > >
> > > However I think you are right if skb->data covers > 1 page, since the
> > > new version can only ever return 0 or 1. I expect this patch papers over
> > > the underlying issue by not stopping often enough, rather than actually
> > > fixing the underlying issue.
> >
> > Ah, any thoughts? Have you guys seen this behavior as well?
>
> We ran into this same problem and the fix we've been running with for a while now (been meaning to submit it!) is:
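As an aside, for anyone following along: the arithmetic Ian describes above
is easy to see with a standalone sketch. The values below are hypothetical
and this is not the fix Simon is referring to; it only shows why the patched
expression can ever yield just 0 or 1, while the original counts every page
the linear area spans.

	/* Sketch only -- hypothetical values, not the actual netback fix.
	 * offset_in_page() reduces its argument modulo PAGE_SIZE, so dividing
	 * its result by PAGE_SIZE (rounding up) can only give 0 or 1.
	 */
	#include <stdio.h>

	#define PAGE_SIZE		4096UL
	#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
	#define offset_in_page(p)	((unsigned long)(p) & (PAGE_SIZE - 1))

	int main(void)
	{
		unsigned long data = 0x1f00;	/* pretend skb->data */
		unsigned long headlen = 6000;	/* pretend skb_headlen(skb) */

		/* old: linear area spans three pages -> 3 slots */
		printf("old: %lu\n",
		       DIV_ROUND_UP(offset_in_page(data) + headlen, PAGE_SIZE));
		/* new: 0 or 1 no matter how many pages the header covers */
		printf("new: %lu\n",
		       DIV_ROUND_UP(offset_in_page(data + headlen), PAGE_SIZE));
		return 0;
	}

For those values it prints "old: 3" and "new: 1", which matches Ian's
observation that the patched expression under-counts the slots.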
Where is the patch queue with these patches? Is it only in the src.rpm, or
is it in some nice mercurial tree? Asking because if we run into other trouble
it would also be time-saving for us (and I presume other companies
too) to check that. Thanks!