Message-ID: <Ywi6qA7EsBJwEa5x@nvidia.com>
Date:   Fri, 26 Aug 2022 09:20:56 -0300
From:   Jason Gunthorpe <jgg@...dia.com>
To:     Leon Romanovsky <leon@...nel.org>
Cc:     Saeed Mahameed <saeed@...nel.org>,
        Jakub Kicinski <kuba@...nel.org>,
        Steffen Klassert <steffen.klassert@...unet.com>,
        "David S . Miller" <davem@...emloft.net>,
        Herbert Xu <herbert@...dor.apana.org.au>,
        netdev@...r.kernel.org, Raed Salem <raeds@...dia.com>,
        ipsec-devel <devel@...ux-ipsec.org>
Subject: Re: [PATCH xfrm-next v2 0/6] Extend XFRM core to allow full offload
 configuration

On Tue, Aug 23, 2022 at 07:48:37AM +0300, Leon Romanovsky wrote:
> On Mon, Aug 22, 2022 at 02:27:16PM -0700, Saeed Mahameed wrote:
> > On 22 Aug 09:33, Jakub Kicinski wrote:
> > > On Mon, 22 Aug 2022 11:54:42 +0300 Leon Romanovsky wrote:
> > > > On Mon, Aug 22, 2022 at 10:41:05AM +0200, Steffen Klassert wrote:
> > > > > On Fri, Aug 19, 2022 at 10:53:56AM -0700, Jakub Kicinski wrote:
> > > > > > Yup, that's what I thought you'd say. Can't argue with that use case
> > > > > > if Steffen is satisfied with the technical aspects.
> > > > >
> > > > > Yes, anything that helps overcome the performance problems is
> > > > > welcome, and I'm interested in this type of offload. But we need to
> > > > > make sure the API is usable by the whole community, so I don't
> > > > > want an API for some special case that only one of the NIC vendors
> > > > > is interested in.
> > > > 
> > > > BTW, we have performance data. I planned to send it as part of the
> > > > cover letter for v3, but it is worth sharing now.
> > > > 
> > > >  ================================================================================
> > > >  Performance results:
> > > > 
> > > >  TCP multi-stream, using iperf3 instance per-CPU.
> > > >  +----------------------+--------+--------+--------+--------+---------+---------+
> > > >  |                      | 1 CPU  | 2 CPUs | 4 CPUs | 8 CPUs | 16 CPUs | 32 CPUs |
> > > >  |                      +--------+--------+--------+--------+---------+---------+
> > > >  |                      |                       BW (Gbps)                       |
> > > >  +----------------------+--------+--------+--------+--------+---------+---------+
> > > >  | Baseline             | 27.9   | 59     | 93.1   | 92.8   | 93.7    | 94.4    |
> > > >  +----------------------+--------+--------+--------+--------+---------+---------+
> > > >  | Software IPsec       | 6      | 11.9   | 23.3   | 45.9   | 83.8    | 91.8    |
> > > >  +----------------------+--------+--------+--------+--------+---------+---------+
> > > >  | IPsec crypto offload | 15     | 29.7   | 58.5   | 89.6   | 90.4    | 90.8    |
> > > >  +----------------------+--------+--------+--------+--------+---------+---------+
> > > >  | IPsec full offload   | 28     | 57     | 90.7   | 91     | 91.3    | 91.9    |
> > > >  +----------------------+--------+--------+--------+--------+---------+---------+
> > > > 
> > > >  IPsec full offload mode behaves like the baseline and reaches line rate
> > > >  with the same number of CPUs.
> > > > 
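[Editor's note: the cover letter does not spell out the exact benchmark invocation beyond "iperf3 instance per-CPU". The following is only a minimal sketch of one way to drive such a run; the server address, port base, duration and CPU count are assumed parameters, and one iperf3 server per port is assumed to be listening on the remote side (e.g. "iperf3 -s -p <port>").]

#!/usr/bin/env python3
# Sketch: launch one CPU-pinned iperf3 client per CPU and save JSON output.
import subprocess

SERVER = "192.0.2.1"   # assumed peer address
BASE_PORT = 5201       # one port per instance, starting at the iperf3 default
DURATION = 60          # seconds per run (assumed)
NUM_CPUS = 8           # e.g. 1/2/4/8/16/32 as in the table above

procs = []
for cpu in range(NUM_CPUS):
    # -A pins the client to a CPU; -J emits JSON so the per-instance BW can
    # be summed afterwards.
    cmd = ["iperf3", "-c", SERVER, "-p", str(BASE_PORT + cpu),
           "-A", str(cpu), "-t", str(DURATION), "-J"]
    out = open(f"iperf3-cpu{cpu}.json", "w")
    procs.append((subprocess.Popen(cmd, stdout=out), out))

for proc, out in procs:
    proc.wait()
    out.close()

Summing the end.sum_received.bits_per_second field of each JSON file then gives an aggregate BW figure comparable to one cell of the table above.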
> > 
> > Just making sure: Baseline == "Clear text TCP" ?
> 
> Yes, baseline is plain TCP without any encryption.
> 
> We can get higher numbers with Tariq's improvements, but reaching the
> maximum was not important here, as we are interested in the differences
> between the various modes.

BW is only part of the goal here; a significant metric is how much
hypervisor CPU power is consumed by the ESP operation.

It is no different from VLAN or other encapsulation offloads, where
the ideal is to not interrupt the hypervisor CPU at all.

Jason
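
[Editor's note: to put a rough number on that second metric, one could sample aggregate CPU time on the host around the traffic run and compare it across the software, crypto-offload and full-offload modes. The sketch below is only an illustration, not part of the posted series; it diffs the first line of /proc/stat over an assumed test window and reports the busy share.]

#!/usr/bin/env python3
# Sketch: estimate host CPU busy share over a test window from /proc/stat.
import time

def cpu_times():
    # First line of /proc/stat: "cpu user nice system idle iowait irq softirq ..."
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]  # idle + iowait
    return sum(fields), idle

DURATION = 60  # seconds, assumed to match the traffic run

total0, idle0 = cpu_times()
time.sleep(DURATION)
total1, idle1 = cpu_times()

busy = (total1 - total0) - (idle1 - idle0)
print(f"CPU busy share over the window: {100.0 * busy / (total1 - total0):.1f}%")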
