Message-ID: <YxTWKatwm5vuBovt@unreal>
Date: Sun, 4 Sep 2022 19:45:29 +0300
From: Leon Romanovsky <leon@...nel.org>
To: Steffen Klassert <steffen.klassert@...unet.com>
Cc: "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Herbert Xu <herbert@...dor.apana.org.au>,
Jakub Kicinski <kuba@...nel.org>, netdev@...r.kernel.org,
Paolo Abeni <pabeni@...hat.com>, Raed Salem <raeds@...dia.com>,
Saeed Mahameed <saeedm@...dia.com>
Subject: Re: [PATCH xfrm-next v3 0/6] Extend XFRM core to allow full offload
configuration
On Mon, Aug 29, 2022 at 09:54:03AM +0200, Steffen Klassert wrote:
> On Tue, Aug 23, 2022 at 04:31:57PM +0300, Leon Romanovsky wrote:
> > From: Leon Romanovsky <leonro@...dia.com>
> >
> > Changelog:
> > v3:
> > * I didn't hear any suggestions for a term to use instead of
> > "full offload", so I left it as is. It is used only in commit messages
> > and documentation and is easy to rename.
> > * Added performance data and background info to cover letter
> > * Reused xfrm_output_resume() function to support multiple XFRM transformations
> > * Added a PMTU check in addition to the driver .xdo_dev_offload_ok validation
> > * Documentation is in progress, but not part of this series yet.
> > v2: https://lore.kernel.org/all/cover.1660639789.git.leonro@nvidia.com
> > * Rebased to latest 6.0-rc1
> > * Added an extra check in the TX datapath patch to validate packets
> > before forwarding them to HW.
> > * Added policy cleanup logic in case of netdev down event
> > v1: https://lore.kernel.org/all/cover.1652851393.git.leonro@nvidia.com
> > * Moved comment to be before if (...) in third patch.
> > v0: https://lore.kernel.org/all/cover.1652176932.git.leonro@nvidia.com
> > -----------------------------------------------------------------------
> >
> > The following series extends XFRM core code to handle a new type of IPsec
> > offload - full offload.
> >
> > In this mode, the HW is going to be responsible for the whole data path,
> > so both policy and state should be offloaded.
> >
> > IPsec full offload is an improved version of IPsec crypto mode.
> > In full mode, the HW is responsible for trimming/adding headers in
> > addition to decryption/encryption. In this mode, the packet arrives
> > at the stack already decrypted, and vice versa for TX (it exits to
> > the HW unencrypted).
> >
> > Devices that implement IPsec full offload mode offload policies too.
> > In the RX path this means the HW cannot correctly handle a mix of SW
> > and HW policies unless users make sure that the HW-offloaded
> > policies have higher priorities.
> >
> > To make sure that users have a coherent picture, we require that
> > HW-offloaded policies always have higher priorities (both RX and TX)
> > than SW ones.
> >
> > To not over-engineer the code, HW policies are treated as SW ones and
> > don't take the netdev into account, which allows the same priorities
> > to be reused for different devices.
> >
> > There are several deliberate limitations:
> > * No software fallback
> > * Fragments are dropped, both in RX and TX
> > * No sockets policies
> > * Only IPsec transport mode is implemented
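
As an illustration of the intended usage, an offloaded state/policy pair might be configured from userspace roughly as follows. This is a sketch only: the `offload packet` keyword, addresses, key, SPI and device name are assumptions for illustration, since the series does not finalize the iproute2 syntax.

```shell
# Hypothetical sketch: offload both the state and the policy to the NIC.
# Keyword names and values are assumptions, not the final UAPI.
ip xfrm state add src 192.168.1.1 dst 192.168.1.2 \
    proto esp spi 0x1000 reqid 1 mode transport \
    aead 'rfc4106(gcm(aes))' 0x1111111111111111111111111111111111111111 128 \
    offload packet dev eth0 dir out

# HW-offloaded policies must have a higher priority than any SW policy.
ip xfrm policy add src 192.168.1.1 dst 192.168.1.2 dir out \
    tmpl src 192.168.1.1 dst 192.168.1.2 proto esp reqid 1 mode transport \
    priority 1 offload packet dev eth0
```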
>
> ... and you still have not answered the fundamental questions:
>
> As implemented, the software does not hold any state.
> I.e. there is no sync between hardware and software
> regarding stats, lifetime, lifebyte, packet counts
> and replay window. IKE rekeying and auditing are based
> on these; how should this be done?
I hope that the patch added in v4 clarifies it. There is a sync between
the HW and the core regarding packet counts: the HW generates an event
and calls xfrm_state_check_expire() to make sure that the already
existing logic does the rekeying.
The replay window will be handled in a similar way: the HW will
generate an event.
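
The mechanism described above could be sketched on the driver side roughly like this. Only xfrm_state_check_expire() is a real XFRM core function here; the event structure and all other names are hypothetical, invented for illustration.

```c
/* Hypothetical driver-side sketch; only xfrm_state_check_expire() is a
 * real XFRM core helper. The event struct and function names below are
 * made up for illustration.
 */
#include <net/xfrm.h>

struct nic_ipsec_event {
	struct xfrm_state *xs;	/* state the HW counters belong to */
	u64 hw_packets;		/* packets seen by HW since last sync */
};

static void nic_handle_ipsec_event(struct nic_ipsec_event *ev)
{
	struct xfrm_state *x = ev->xs;

	/* Push the HW packet count into the SW state so the existing
	 * soft/hard lifetime logic sees up-to-date counters...
	 */
	spin_lock_bh(&x->lock);
	x->curlft.packets += ev->hw_packets;
	spin_unlock_bh(&x->lock);

	/* ...and let the core decide whether a soft/hard expire event
	 * must be sent to userspace (the IKE daemon), triggering rekey.
	 */
	xfrm_state_check_expire(x);
}
```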
>
> How can tunnel mode work with this offload?
I don't see any issues here. The same rules will apply.
>
> I want to see the full picture before I consider to
> apply this.