Message-ID: <20170427151855.GW5077@suse.de>
Date: Thu, 27 Apr 2017 17:18:55 +0200
From: Joerg Roedel <jroedel@...e.de>
To: Shaohua Li <shli@...com>
Cc: Ingo Molnar <mingo@...nel.org>, linux-kernel@...r.kernel.org,
gang.wei@...el.com, hpa@...ux.intel.com, kernel-team@...com,
ning.sun@...el.com, srihan@...com, alex.eydelberg@...el.com
Subject: Re: [PATCH V2] x86/tboot: add an option to disable iommu force on
On Thu, Apr 27, 2017 at 07:49:02AM -0700, Shaohua Li wrote:
> This is exactly the usage for us. And please note, not everybody has to
> sacrifice DMA security. That is only required when the PCIe device hits
> an IOMMU hardware limitation. In our environment, normal network
> workloads (as high as 60k pps) are completely fine with the IOMMU
> enabled. Only the XDP workload, which can do around 200k pps, suffers
> from the problem. So forcing the IOMMU off even for workloads that
> don't have the performance issue isn't good, because it needlessly
> gives up DMA security.
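(For context, the patch under discussion makes the tboot-forced IOMMU
enablement optional via a kernel command-line switch rather than a
compile-time choice, so the trade-off can be made per machine. Assuming
the option name from this series ends up as intel_iommu=tboot_noforce,
an affected host would boot with something like

    intel_iommu=tboot_noforce

so that only the machines running the problematic XDP workload give up
the tboot-forced DMA protection, while everything else keeps the IOMMU
forced on.)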
How big are the packets in your XDP workload? I also run pps tests for
performance measurement on older desktop-class hardware
(Xeon E5-1620 v2 and AMD FX 6100) and 10GBit network
hardware, and easily get over the 200k pps mark with IOMMU enabled. The
Intel system can receive >900k pps and the AMD system is still at
~240k pps.
But my tests only send IPv4/UDP packets with 8 bytes of payload, so that
is probably different from your setup.
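In case it helps to compare setups: a minimal userspace sender along the
lines below would produce that kind of traffic (IPv4/UDP, 8 bytes of
payload, sent as fast as possible so the receiver's pps rate can be
measured). The destination address, port and the once-per-second rate
print are placeholders for illustration, and a real test setup would
more likely use pktgen or an XDP-based generator, so treat this only as
a sketch of the traffic pattern:

/*
 * udp_flood.c - send IPv4/UDP packets with an 8-byte payload in a
 * tight loop and print how many were sent each second.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <time.h>

int main(int argc, char **argv)
{
	const char *dst_ip = argc > 1 ? argv[1] : "192.0.2.1"; /* placeholder */
	int dst_port = argc > 2 ? atoi(argv[2]) : 9;           /* discard port */
	char payload[8] = "01234567";                          /* 8-byte payload */
	struct sockaddr_in dst = { 0 };
	unsigned long sent = 0;
	time_t last = time(NULL);
	int fd;

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	dst.sin_family = AF_INET;
	dst.sin_port = htons(dst_port);
	inet_pton(AF_INET, dst_ip, &dst.sin_addr);

	for (;;) {
		if (sendto(fd, payload, sizeof(payload), 0,
			   (struct sockaddr *)&dst, sizeof(dst)) == sizeof(payload))
			sent++;

		time_t now = time(NULL);
		if (now != last) {
			printf("%lu pps sent\n", sent);
			sent = 0;
			last = now;
		}
	}
}

Build with "gcc -O2 udp_flood.c -o udp_flood" and point it at the
receiver under test; the interesting number is what the receiving side
actually processes, with and without the IOMMU enabled.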
Joerg