Message-ID: <1351095739.18035.83.camel@zakaz.uk.xensource.com>
Date: Wed, 24 Oct 2012 17:22:19 +0100
From: Ian Campbell <Ian.Campbell@...rix.com>
To: Eric Dumazet <eric.dumazet@...il.com>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Eric Dumazet <edumazet@...gle.com>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
"xen-devel@...ts.xen.org" <xen-devel@...ts.xen.org>
Subject: Re: [PATCH] net: allow configuration of the size of page in
__netdev_alloc_frag
On Wed, 2012-10-24 at 16:21 +0100, Eric Dumazet wrote:
> On Wed, 2012-10-24 at 15:02 +0100, Ian Campbell wrote:
> > On Wed, 2012-10-24 at 14:30 +0100, Eric Dumazet wrote:
> > > It seems to me it's a driver issue; for example,
> > > drivers/net/xen-netfront.c has assumptions that can be easily fixed.
> >
> > The netfront ->head thing is a separate (although perhaps related)
> > issue, which I intended to fix along the same lines as the previous
> > netback fix, except that for some unfathomable reason I haven't been
> > able to reproduce the problem with netfront -- I've no idea why,
> > since it seems like it should be a no-brainer!
> >
> > > Why can skb->head be on order-1 or order-2 pages and still work?
> >
> > skb->head being order-1 or order-2 isn't working for me. The driver
> > I'm having issues with, and which caused me to create this particular
> > patch, is the tg3 driver (although I don't think this is by any means
> > specific to tg3).
> >
> > For the ->head the tg3 driver does:
> >     mapping = pci_map_single(tp->pdev, skb->data, len, PCI_DMA_TODEVICE);
> > while for the frags it does:
> >     mapping = skb_frag_dma_map(&tp->pdev->dev, frag, 0, len, DMA_TO_DEVICE);
> >
> > This ought to do the Right Thing but doesn't seem to be working.
> > Konrad suspected an issue with the swiotlb's handling of order>0
> > pages in some cases. As I said in the commit message, he is looking
> > into this issue.
> >
> > My concern, however, was that even once the swiotlb is fixed to work
> > correctly, the effect of pci_map_single on an order>0 page is going
> > to be that the data gets bounced into contiguous memory -- that is,
> > a memcpy which would undo the benefit of having allocated large pages
> > in the first place. So I figured that in such cases we'd be better
> > off just using order-0 allocations to start with.
>
> I am really confused.
>
> If you really have such problems, why doesn't locally generated TCP
> traffic also have them?
I think it does. The reason I noticed the original problem was that ssh
to the machine was virtually (no pun intended) unusable.
> Your patch doesn't touch sk_page_frag_refill(), does it?
That's right. It doesn't. When is (sk->sk_allocation & __GFP_WAIT) true?
Is it possible I'm just not hitting that case?
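(My understanding is that an ordinary socket gets sk_allocation ==
GFP_KERNEL, which does include __GFP_WAIT. Below is just a stand-alone
user-space sketch I put together to check my reading of the flags -- the
values mirror what I see in include/linux/gfp.h, but it is obviously not
the real kernel code:)

    /* Toy check of the (sk->sk_allocation & __GFP_WAIT) test; flag values
     * copied by hand from include/linux/gfp.h, so treat as illustrative. */
    #include <stdio.h>

    #define __GFP_WAIT   0x10u   /* allocation may sleep */
    #define __GFP_HIGH   0x20u   /* may use emergency pools */
    #define __GFP_IO     0x40u
    #define __GFP_FS     0x80u

    #define GFP_ATOMIC   (__GFP_HIGH)                      /* no __GFP_WAIT */
    #define GFP_KERNEL   (__GFP_WAIT | __GFP_IO | __GFP_FS)

    int main(void)
    {
            printf("GFP_KERNEL & __GFP_WAIT = %#x\n", GFP_KERNEL & __GFP_WAIT);
            printf("GFP_ATOMIC & __GFP_WAIT = %#x\n", GFP_ATOMIC & __GFP_WAIT);
            return 0;
    }

If that reading is right then locally generated TCP presumably does pass
the __GFP_WAIT test and gets the higher-order pages as well.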
Is it possible that this only affects certain traffic patterns (I only
really tried ssh/scp and ping)? Or perhaps it's just that the swiotlb is
only broken in one corner case and not the other.
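To make the trade-off concrete, here is a rough stand-alone sketch (again
user-space; it is not the real __netdev_alloc_frag, and the constants and
names are mine) of carving fragments out of one big chunk versus falling
back to single pages, which is roughly the choice the patch makes
configurable:

    /* Toy model of the frag-cache trade-off; not kernel code. */
    #include <stdio.h>
    #include <stdlib.h>

    #define PAGE_SIZE  4096
    #define FRAG_ORDER 3        /* 32KB chunk */
    #define FRAG_SZ    1536     /* a typical aligned rx fragment */

    /* Try the big chunk first, fall back to a single page. */
    static void *get_frag_chunk(size_t *sz)
    {
            void *p;

            *sz = (size_t)PAGE_SIZE << FRAG_ORDER;
            p = malloc(*sz);
            if (!p) {
                    *sz = PAGE_SIZE;
                    p = malloc(*sz);
            }
            return p;
    }

    int main(void)
    {
            size_t sz;
            void *chunk = get_frag_chunk(&sz);

            /* One 32KB chunk amortises the allocation over ~21 frags, a
             * single page over only 2 -- and that amortisation is exactly
             * what a bounce-buffer memcpy per mapping would eat back up. */
            printf("chunk of %zu bytes -> %zu frags of %d bytes\n",
                   sz, sz / FRAG_SZ, FRAG_SZ);
            free(chunk);
            return 0;
    }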
Ian.