Message-ID: <20170222065737.GA1201@felix.cavium.com>
Date: Tue, 21 Feb 2017 22:57:37 -0800
From: Felix Manlunas <felix.manlunas@...ium.com>
To: Tom Herbert <tom@...bertland.com>
Cc: "David S. Miller" <davem@...emloft.net>,
Linux Kernel Network Developers <netdev@...r.kernel.org>,
raghu.vatsavayi@...ium.com, derek.chickles@...ium.com,
satananda.burla@...ium.com,
VSR Burru <veerasenareddy.burru@...ium.com>
Subject: Re: [PATCH v2 net-next] liquidio: improve UDP TX performance
Tom Herbert <tom@...bertland.com> wrote on Tue [2017-Feb-21 15:27:54 -0800]:
> On Tue, Feb 21, 2017 at 1:09 PM, Felix Manlunas
> <felix.manlunas@...ium.com> wrote:
> > From: VSR Burru <veerasenareddy.burru@...ium.com>
> >
> > Improve UDP TX performance by:
> > * reducing the ring size from 2K to 512
>
> It looks like liquidio supports BQL. Is that not effective here?
Response from our colleague, VSR:
That's right, BQL is not effective here. We reduced the ring size because
of heavy periodic overhead from dma_map_single. With iommu=on,
dma_map_single in the PF Tx data path was intermittently taking a long
time (~700 usec), roughly once every ~250 packets. We debugged the
intel_iommu code and found that the PF driver uses too many static IO
virtual address mapping entries (for gather list entries and info
buffers): about 100K entries for two PFs, each using 8 rings. Also,
finding an empty entry (in the rbtree of the device domain's IOVA
mappings in the kernel) during the Tx path periodically becomes a
bottleneck; the loop that searches for an empty entry can run through
over 40K iterations, which is too costly and was the major overhead.
Overhead is low when this loop quits quickly.
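
To illustrate where that cost lands (not the actual liquidio code), here is a
minimal sketch of a per-packet dma_map_single() call in a hypothetical PF Tx
path; the struct and function names (my_tx_ring, my_xmit_one) are made up,
while dma_map_single()/dma_mapping_error() are the standard kernel DMA API.
Every outstanding ring entry holds an IOVA mapping until its completion is
processed, so a larger ring keeps more static mappings alive and makes the
IOVA rbtree search slower:

	#include <linux/dma-mapping.h>
	#include <linux/netdevice.h>
	#include <linux/skbuff.h>

	struct my_tx_ring {		/* hypothetical per-queue Tx state */
		struct device *dev;
		unsigned int size;	/* 512 after the patch, 2048 before */
		/* ... descriptor array, head/tail indices, etc. ... */
	};

	static int my_xmit_one(struct my_tx_ring *ring, struct sk_buff *skb)
	{
		dma_addr_t dma;

		/*
		 * With iommu=on this call allocates an IO virtual address
		 * from the device domain's rbtree; when that tree is crowded
		 * with mappings held for every outstanding ring entry, the
		 * search for a free IOVA can dominate the Tx cost (the
		 * ~700 usec spikes mentioned above).
		 */
		dma = dma_map_single(ring->dev, skb->data, skb_headlen(skb),
				     DMA_TO_DEVICE);
		if (dma_mapping_error(ring->dev, dma))
			return -ENOMEM;

		/* post 'dma' to the hardware descriptor; unmap on completion */
		return 0;
	}

With a 512-entry ring the number of mappings pinned at any moment is much
smaller, so the IOVA allocator's search stays short and the periodic stalls
go away.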