Message-ID: <CAHn8xc=1g8bzV-uxaJAYpJ114rR7MLzth=4jyDG329ZwEG+kpg@mail.gmail.com>
Date: Thu, 3 Jun 2021 14:32:35 +0200
From: Jussi Maki <joamaki@...il.com>
To: Robin Murphy <robin.murphy@....com>
Cc: Daniel Borkmann <daniel@...earbox.net>, jroedel@...e.de,
netdev@...r.kernel.org, bpf <bpf@...r.kernel.org>,
intel-wired-lan@...ts.osuosl.org, davem@...emloft.net,
anthony.l.nguyen@...el.com, jesse.brandeburg@...el.com, hch@....de,
iommu@...ts.linux-foundation.org, suravee.suthikulpanit@....com,
gregkh@...uxfoundation.org
Subject: Re: Regression 5.12.0-rc4 net: ice: significant throughput drop

On Wed, Jun 2, 2021 at 2:49 PM Robin Murphy <robin.murphy@....com> wrote:
> >> Thanks for the quick response & patch. I tried it out and indeed it
> >> does solve the issue:
>
> Cool, thanks Jussi. May I infer a Tested-by tag from that?
Of course!
> Given that the race looks to have been pretty theoretical until now, I'm
> not convinced it's worth the bother of digging through the long history
> of default domain and DMA ops movement to figure where it started, much
> less attempt invasive backports. The flush queue change which made it
> apparent only landed in 5.13-rc1, so as long as we can get this in as a
> fix in the current cycle we should be golden - in the meantime, note
> that booting with "iommu.strict=0" should also restore the expected
> behaviour.
>
> FWIW I do still plan to resend the patch "properly" soon (in all honesty
> it wasn't even compile-tested!)

BTW, even with the patch there is still quite a bit of spinlock contention
coming from ice_xmit_xdp_ring->dma_map_page_attrs->...->alloc_iova.
CPU load drops from 85% to 20% (~80 Mpps, 64-byte UDP) when the IOMMU is
disabled. Is this kind of overhead to be expected?
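
(For anyone following along, here is a rough, simplified sketch of where that
per-frame cost comes from. The function and variable names below are made up
and this is not the actual ice TX path, but dma_map_single() does end up in
dma_map_page_attrs() internally, and with an IOMMU domain active each mapping
needs an IOVA from the domain's shared allocator, i.e. alloc_iova():)

#include <linux/dma-mapping.h>
#include <linux/device.h>
#include <linux/errno.h>

/*
 * Illustrative only: each transmitted XDP frame gets its own streaming
 * DMA mapping. With the IOMMU enabled, every dma_map_single() call has
 * to allocate an IOVA from the domain's allocator, which is what shows
 * up as alloc_iova spinlock contention at high packet rates.
 */
static int sketch_xmit_xdp_frame(struct device *dev, void *data, size_t len)
{
	dma_addr_t dma;

	/* One IOVA allocation per frame when an IOMMU domain is in use. */
	dma = dma_map_single(dev, data, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma))
		return -ENOMEM;

	/* ... post 'dma' to the hardware TX ring here ... */

	/* On TX completion the mapping is torn down again. */
	dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);
	return 0;
}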