Message-ID: <20170427154119.GA26498@dhcp-172-26-110-153.dhcp.thefacebook.com>
Date:   Thu, 27 Apr 2017 08:41:20 -0700
From:   Shaohua Li <shli@...com>
To:     Joerg Roedel <jroedel@...e.de>
CC:     Ingo Molnar <mingo@...nel.org>, <linux-kernel@...r.kernel.org>,
        <gang.wei@...el.com>, <hpa@...ux.intel.com>, <kernel-team@...com>,
        <ning.sun@...el.com>, <srihan@...com>, <alex.eydelberg@...el.com>
Subject: Re: [PATCH V2] x86/tboot: add an option to disable iommu force on

On Thu, Apr 27, 2017 at 05:18:55PM +0200, Joerg Roedel wrote:
> On Thu, Apr 27, 2017 at 07:49:02AM -0700, Shaohua Li wrote:
> > This is exactly our use case. And please note, not everybody has to
> > sacrifice DMA security. It is only required when the PCIe device hits an
> > IOMMU hardware limitation. In our environment, normal network workloads (as
> > high as 60k pps) are completely fine with the IOMMU enabled. Only the XDP
> > workload, which can do around 200k pps, suffers from the problem. So forcing
> > the IOMMU off entirely, even for workloads that don't have the performance
> > issue, isn't good for DMA security.
> 
> How big are the packets in your XDP workload? I also run pps tests for
> performance measurement on older desktop-class hardware
> (Xeon E5-1620 v2 and AMD FX 6100) and 10GBit network
> hardware, and easily get over the 200k pps mark with IOMMU enabled. The
> Intel system can receive >900k pps and the AMD system is still at
> ~240k pps.
> 
> But my tests only send IPv4/UDP packets with 8 bytes of payload, so that
> is probably different from your setup.

Sorry, I gave the wrong numbers. With the IOMMU the rate is 6M pps, and without
it we can get around 20M pps. XDP is much faster than normal network workloads.
The test uses 64-byte packets. We tried other sizes on the machine (not 8 bytes
though), but the pps doesn't change significantly. Across the different packet
sizes, the peak pps is around 7M with the IOMMU, at which point the NIC starts
to drop packets. CPU utilization is very low, as I said before. Without the
IOMMU, the peak pps is around 22M.
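
For reference, the point of the option is to make this a per-machine,
boot-time choice instead of disabling the IOMMU everywhere. Assuming it is
exposed as an intel_iommu= kernel parameter (the name below is only an
assumption, not necessarily what the patch ends up using), the few XDP hosts
that hit the limit would boot with something like:

    # /etc/default/grub on the affected hosts only; parameter name assumed
    GRUB_CMDLINE_LINUX="intel_iommu=tboot_noforce"

while every other host keeps the tboot-forced IOMMU for DMA protection.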

Thanks,
Shaohua
