Message-ID: <ZkZ62F_iCCOf4nmM@nataraja>
Date: Thu, 16 May 2024 23:30:00 +0200
From: Harald Welte <laforge@...monks.org>
To: Michal Swiatkowski <michal.swiatkowski@...ux.intel.com>
Cc: Alexander Lobakin <aleksander.lobakin@...el.com>,
	Wojciech Drewek <wojciech.drewek@...el.com>, netdev@...r.kernel.org,
	intel-wired-lan@...ts.osuosl.org, linux-kernel@...r.kernel.org,
	David Miller <davem@...emloft.net>
Subject: Re: [Intel-wired-lan] [PATCH net-next v6 00/21] ice: add PFCP filter
 support

Hi Michal,

thanks for your response.

On Thu, May 16, 2024 at 12:44:13PM +0200, Michal Swiatkowski wrote:

> > I'm curious to understand why *pfcp* packets are hardware offloaded.
> > PFCP is just the control plane, similar to how you can consider netlink
> > the control plane by which userspace programs control the data plane.
> > 
> > I can fully understand that GTP-U packets are offloaded to kernel space or
> > hardware, and that then some control plane mechanism like PFCP is needed
> > to control that data plane.  But offloading packets of that control
> > protocol?
> 
> It is hard for me to answer your concerns, because, opposite to you, I
> don't have any experience with telco implementations. We had a client
> that wanted to add an offload rule for PFCP in the same way as for GTP.

I've meanwhile done some reading, and there are indeed some papers
suggesting that in specific implementations of control/data plane
splits, the transaction rate between control and data plane (to set up /
tear down / modify tunnels) is too low.  As a work-around, the entire
PFCP parsing is then off-loaded into (e.g. P4-capable) hardware.

So there does appear to be equipment where PFCP offloading is useful to
significantly increase performance.

For those curious, https://faculty.iiitd.ac.in/~rinku/resources/slides/2022-sosr-accelupf-slides.pdf
covers one such configuration, where offloading the processing of the
control-plane protocol into a P4 hardware switch massively improved the
previously poor PFCP processing rate.

> Intel
> hardware supports matching on specific PFCP packet parts. We spent some
> time looking at possible implementations. As you said, it is a little
> odd to follow the same scheme for GTP and PFCP, but it looks to me like
> a reasonable solution.

Based on what I understand, I am not sure I would agree with the
"reasonable solution" part.  But then of course, it is a subjective
point of view.

I understand and appreciate the high-level goal of giving the user some
way to configure a specific feature of an Intel NIC.

However, I really don't think that this should come at the expense of
introducing tunnel devices (or other net-devs) for things that are not
tunnels, and of calling things PFCP which are not an implementation of
PFCP.

You are introducing something called "pfcp" into the kernel,
which is not PFCP.  What if somebody else at some point wanted to
introduce some actual PFCP support in some form?  What should they call
their sub-systems / Kconfigs / etc.?  They could no longer call it simply
"pfcp", as you have already used this very generic term for (from the
perspective of PFCP) a specific niche use case of configuring a NIC to
handle all of PFCP.

> Do you have a better idea for that?

I am not familiar with the use cases and the Intel NICs and what kind of
tooling or third-party software might be out there wanting to configure
it.  It's really not my domain of expertise, and as such I have no
specific proposal, sorry.

It just seems mind-boggling to me that we would introduce
* a net-device for something that's not a net-device
* a tunnel for something that does no tunneling whatsoever
* code mentioning "PFCP encapsulated traffic" when in fact it is
  impossible to encapsulate any traffic in PFCP, and the code does
  not - to the best of my understanding - do any such encapsulation
  or even configure hardware to perform any such non-existent PFCP
  encapsulation

[and possibly more] *just* so that a user can use 'tc' to configure a
hardware offloading feature in their NIC.
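
For concreteness, the kind of invocation this all exists to enable looks
roughly like the following (my approximation based on the examples in
the patch set - device names and the type/seid values are made up, and
I may be misremembering the exact flower syntax):

	ip link add pfcp0 type pfcp
	tc filter add dev eth0 ingress prio 1 protocol ip flower \
		ip_proto udp dst_port 8805 pfcp_opts 1:123 \
		action mirred egress redirect dev pfcp0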

IMHO, there must be a better way.

> > I also see the following in the patch:
> > 
> > > +MODULE_DESCRIPTION("Interface driver for PFCP encapsulated traffic");
> > 
> > PFCP is not an encapsulation protocol for user plane traffic.  It is not
> > a tunneling protocol.  GTP-U is the tunneling protocol, whose
> > implementations (typically UPFs) are remote-controlled by PFCP.
> > 
> 
> Agree, it is done like that to simplify implementation and reuse kernel
> software stack.

My comment was about the module description.  It claims something that
makes no sense and that the code does not actually implement.  Unless I'm
misunderstanding what the code does, it is outright wrong and misleading.

Likewise, the comment on top of the drivers/net/pfcp.c file says:

> * PFCP according to 3GPP TS 29.244

while the code in fact does *not* implement any part of TS 29.244.  The
code is using the udp_tunnel infrastructure for something that's not a
tunnel.  From what I can tell, it is creating an in-kernel UDP socket
(which can be done without any relation to udp_tunnel) and it is
configuring hardware offload features in a NIC.  The fact that the
payload of those UDP packets may be PFCP is circumstantial - nothing in
the code actually implements even the tiniest bit of PFCP protocol
parsing.
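
To illustrate that point: a plain in-kernel UDP socket on the PFCP port
needs nothing from udp_tunnel.  A minimal sketch (function and variable
names are mine, error handling trimmed):

	#include <linux/net.h>
	#include <linux/in.h>
	#include <net/net_namespace.h>

	/* Bind a plain in-kernel UDP socket to the IANA-assigned PFCP
	 * port (8805), with no udp_tunnel involvement whatsoever. */
	static struct socket *pfcp_sock;

	static int pfcp_open_socket(struct net *net)
	{
		struct sockaddr_in addr = {
			.sin_family      = AF_INET,
			.sin_addr.s_addr = htonl(INADDR_ANY),
			.sin_port        = htons(8805),
		};
		int err;

		err = sock_create_kern(net, AF_INET, SOCK_DGRAM,
				       IPPROTO_UDP, &pfcp_sock);
		if (err)
			return err;

		return kernel_bind(pfcp_sock, (struct sockaddr *)&addr,
				   sizeof(addr));
	}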

> This is the way to allow the user to steer PFCP packets based on
> specific opts (type and seid) using tc flower. If you have a better
> solution for that, I will probably agree with you and will be willing
> to help you with a better implementation.

Sadly, neither tc nor Intel NIC hardware offload capabilities are my
domain of expertise, so I am unable to propose details of a better
solution.

> I assume the biggest problem here is with treating PFCP as a tunnel
> (done for simplification and reuse) and the lack of any functionality
> in the PFCP device

I don't think the kernel should introduce a net-device (here
specifically a tunnel device) for something that is not a tunnel.  This
device [and the hardware accelerator it controls] will never encapsulate
or decapsulate any PFCP packet, as PFCP is not a tunnel / encapsulation
protocol.
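
To make the distinction concrete, compare the two wire formats (field
layout per TS 29.281 and TS 29.244; the struct names are mine, for
illustration only):

	/* GTP-U (TS 29.281): an actual encapsulation header - the
	 * bytes following it are the tunnelled user IP packet. */
	struct gtpu_hdr {
		u8	flags;		/* version, PT, E/S/PN flags */
		u8	type;		/* 0xff = G-PDU, carries user data */
		__be16	length;
		__be32	teid;		/* tunnel endpoint identifier */
		/* payload: the encapsulated user IP packet */
	};

	/* PFCP (TS 29.244): a signalling message header - the bytes
	 * following it are Information Elements of a control message,
	 * never an encapsulated user packet. */
	struct pfcp_hdr {
		u8	flags;		/* version, MP and S flags */
		u8	type;		/* e.g. Session Establishment Request */
		__be16	length;
		/* if S=1: 64-bit SEID, then a 24-bit sequence number,
		 * followed by IEs - signalling only, no user traffic */
	};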

> (moving the PFCP specification implementation into the kernel
> probably isn't a good idea and may never be accepted).

I would agree, but that is a completely separate discussion.  Even if
one ignores this, the hypothetical kernel PFCP implementation would not
be a tunnel, and would not be a net-device at all.  It would simply be some
kind of in-kernel UDP socket parsing and responding to packets, similar
to the in-kernel nfs daemon or whatever other in-kernel UDP users exist.

Also: the fact that you or we think an actual PFCP implementation would
not be accepted into the kernel should *not* be an argument in favor of
accepting something else into the kernel, calling it PFCP, and creating
tunnel devices for things that are not tunnels :)

> Offloading doesn't sound problematic. If there is a user that wants to
> use it (to offload for better performance, or even to steer between
> VFs based on PFCP-specific parts), why not allow that?

The papers I read convinced me that there is a use case.  I very much
believe the use case is a work-around for a different problem (the
inefficiency of the control->data plane protocol in this case), but
my opinion on that doesn't matter here.  I do agree with you that there
are apparently people who would want to make use of such a feature, and
that there is nothing wrong with providing them the means to do this.

However, the *how* is what I strongly object to.  Once again, I may be
missing some part of your architecture, and I am happy to be proven wrong.

But if all of this is *just* to configure hardware offloading in a NIC,
I don't think there should be net-devices or tunnels that never
encap/decap a single packet, for protocols / use cases that clearly do
not encap or decap packets.

I also think this sets a very bad precedent.  What about other protocols
in the future?  Will we see new tunnels in the kernel for things that
are not tunnels at all, every time there is some new protocol that gains
hardware offloading capability in some NIC? Surely this kind of
proliferation is not the kind of software architecture we want to have?

Once again, I do apologize for raising my concerns at such a late stage.
I am not a kernel developer these days, and I do not follow any
of the related mailing lists.  It was pure coincidence that the net-next
merge of some GTP improvements I was involved in specifying also
contained the PFCP code, and I started digging into what this was all about.

Regards,
	Harald

-- 
- Harald Welte <laforge@...monks.org>          https://laforge.gnumonks.org/
============================================================================
"Privacy in residential applications is a desirable marketing option."
                                                  (ETSI EN 300 175-7 Ch. A6)
