Message-ID: <CAOuuhY8_W-Evd1=1020bv4oP4pK2mCyyCuDLAu1ZU51hTefZrQ@mail.gmail.com>
Date:   Mon, 30 Jan 2023 14:41:23 -0800
From:   Tom Herbert <tom@...anda.io>
To:     John Fastabend <john.fastabend@...il.com>
Cc:     Toke Høiland-Jørgensen <toke@...hat.com>,
        Jamal Hadi Salim <hadi@...atatu.com>,
        Jamal Hadi Salim <jhs@...atatu.com>,
        Jiri Pirko <jiri@...nulli.us>,
        Willem de Bruijn <willemb@...gle.com>,
        Stanislav Fomichev <sdf@...gle.com>,
        Jakub Kicinski <kuba@...nel.org>, netdev@...r.kernel.org,
        kernel@...atatu.com, deb.chatterjee@...el.com,
        anjali.singhai@...el.com, namrata.limaye@...el.com,
        khalidm@...dia.com, pratyush@...anda.io, xiyou.wangcong@...il.com,
        davem@...emloft.net, edumazet@...gle.com, pabeni@...hat.com,
        vladbu@...dia.com, simon.horman@...igine.com, stefanc@...vell.com,
        seong.kim@....com, mattyk@...dia.com, dan.daly@...el.com,
        john.andy.fingerhut@...el.com
Subject: Re: [PATCH net-next RFC 00/20] Introducing P4TC

On Mon, Jan 30, 2023 at 1:10 PM John Fastabend <john.fastabend@...il.com> wrote:
>
> Toke Høiland-Jørgensen wrote:
> > Jamal Hadi Salim <hadi@...atatu.com> writes:
> >
> > > On Mon, Jan 30, 2023 at 12:04 PM Toke Høiland-Jørgensen <toke@...hat.com> wrote:
> > >>
> > >> Jamal Hadi Salim <jhs@...atatu.com> writes:
> > >>
> > >> > So I don't have to respond to each email individually, I will respond
> > >> > here in no particular order. First let me provide some context; if
> > >> > that was already clear, please skip it. Hopefully providing the context
> > >> > will help us to focus, otherwise that bikeshed's color and shape will
> > >> > take forever to settle on.
> > >> >
> > >> > __Context__
> > >> >
> > >> > I hope we all agree that when you have a 2x100G NIC (and I have seen
> > >> > people asking for 2x800G NICs) no XDP or DPDK is going to save you. To
> > >> > visualize: one 25G port is 35Mpps unidirectional. So "software stack"
> > >> > is not the answer. You need to offload.
> > >>
> > >> I'm not disputing the need to offload, and I'm personally delighted that
> > >> P4 is breaking open the vendor black boxes to provide a standardised
> > >> interface for this.
> > >>
> > >> However, while it's true that software can't keep up at the high end,
> > >> not everything runs at the high end, and today's high end is tomorrow's
> > >> mid end, in which XDP can very much play a role. So being able to move
> > >> smoothly between the two, and even implement functions that split
> > >> processing between them, is an essential feature of a programmable
> > >> networking path in Linux. Which is why I'm objecting to implementing the
> > >> P4 bits as something that's hanging off the side of the stack in its own
> > >> thing and is not integrated with the rest of the stack. You were touting
> > >> this as a feature ("being self-contained"). I consider it a bug.
> > >>
> > >> > Scriptability is not a new idea in TC (see u32 and pedit and others in
> > >> > TC).
> > >>
> > >> u32 is notoriously hard to use. The others are neat, but obviously
> > >> limited to particular use cases.
> > >
> > > Despite my love for u32, I admit its user interface is cryptic. I just
> > > wanted to point to existing examples of scriptable and offloadable
> > > TC objects.
> > >
> > >> Do you actually expect anyone to use P4
> > >> by manually entering TC commands to build a pipeline? I really find that
> > >> hard to believe...
> > >
> > > You don't have to manually hand-code anything - it's the compiler's job.
> >
> > Right, that was kinda my point: in that case the compiler could just as
> > well generate a (set of) BPF program(s) instead of this TC script thing.
> >
> > >> > IOW, we are reusing and plugging into a proven and deployed mechanism
> > >> > with a built-in policy driven, transparent symbiosis between hardware
> > >> > offload and software that has matured over time. You can take a
> > >> > pipeline or a table or actions and split them between hardware and
> > >> > software transparently, etc.
> > >>
> > >> That's a control plane feature though, it's not an argument for adding
> > >> another interpreter to the kernel.
> > >
> > > I am not sure what you mean by control, but what I described is built
> > > into the kernel. Of course I could do more complex things from user
> > > space (if that is what you mean by control).
> >
> > "Control plane" as in SDN parlance. I.e., the bits that keep track of
> > configuration of the flow/pipeline/table configuration.
> >
> > There's no reason you can't have all that infrastructure and use BPF as
> > the datapath language. I.e., instead of:
> >
> > tc p4template create pipeline/aP4proggie numtables 1
> > ... + all the other stuff to populate it
> >
> > you could just do:
> >
> > tc p4 create pipeline/aP4proggie obj_file aP4proggie.bpf.o
> >
> > and still have all the management infrastructure without the new
> > interpreter and associated complexity in the kernel.
> >
> > >> > This hammer already meets our goals.
> > >>
> > >> That 60k+ line patch submission of yours says otherwise...
> > >
> > > This is pretty much covered in the cover letter and a few responses in
> > > the thread since.
> >
> > The only argument I've seen you make for why your current approach
> > makes sense is "I don't want to rewrite it in BPF". Which is not a
> > technical argument.
> >
> > I'm not trying to be disingenuous here, BTW: I really don't see the
> > technical argument for why the P4 data plane has to be implemented as
> > its own interpreter instead of integrating with what we have already
> > (i.e., BPF).
> >
> > -Toke
> >
>
> I'll just take this here because I think it's mostly related.
>
> Still not convinced that P4TC has any value for sw. From the
> slide you say vendors prefer, you have roughly this picture.
>
>
>    [ P4 compiler ] ------ [ P4TC backend ] ----> TC API
>         |
>         |
>    [ P4 Vendor backend ]
>         |
>         |
>         V
>    [ Devlink ]
>
>
> Now just replace P4TC backend with P4C and your only work is to
> replace devlink with the current hw-specific bits, and you have
> sw and hw components. Then you get XDP-BPF pretty easily from a
> P4XDP backend if you like. The compat piece is handled by the
> compiler, where it should be. My CPU is not a MAT, so pretending it
> is seems not ideal to me; I don't have a TCAM on my cores.
>
> For runtime, get those vendors to write their SDKs over Devlink,
> and there is no need for this software thing. The runtime for P4C
> should already work over BPF, giving this picture:
>
>    [ P4 compiler ] ------ [ P4C backend ] ----> BPF
>         |
>         |
>    [ P4 Vendor backend ]
>         |
>         |
>         V
>    [ Devlink ]
>

John, that's a good direction. If we go one step further and define a
common Intermediate Representation for programmable datapaths, we can
create a general solution that gives the user maximum flexibility and
freedom on both the frontend and the backend. For the frontend, they
can use whatever language they want as long as it supports an API that
can be compiled into the common IR (this is what PANDA does for
defining datapaths in C). Similarly, for the backend we want to
support multiple targets, both hardware and software. This is "write
once, run anywhere, run well": the developer writes their program
once, the same program runs on different targets, and on any
particular target the program runs as fast as possible given the
capabilities of the target.
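
To make "common IR" a little more concrete, here is a toy sketch of the
kind of artifact a frontend could emit. The schema and field names are
invented purely for illustration; this is not the PANDA or P4 format.

# Toy frontend output: a hypothetical common IR serialized as JSON.
# The schema here is illustrative only.
import json

ir = {
    "version": 1,
    "parser": [
        {"header": "ethernet", "fields": ["dst", "src", "ethertype"]},
        {"header": "ipv4", "fields": ["src_addr", "dst_addr", "protocol"]},
    ],
    "tables": [
        {
            "name": "l3_forward",
            "keys": ["ipv4.dst_addr"],
            "actions": ["forward", "drop"],
            "size": 1024,
        },
    ],
}

with open("datapath_ir.json", "w") as f:
    json.dump(ir, f, indent=2)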

There is another problem that a common IR addresses. The salient
requirement of kernel offload is that the offloaded functionality is
precisely equivalent to the kernel functionality that is being
offloaded. The traditional way this has been done is that the kernel
has to manage the bits offloaded to the device and provide all the
API. The problem is that this doesn't scale and quickly leads to
complexities like callouts to a JIT compiler. My proposal is that we
compute an MD5 hash of the IR and tag both the program compiled from
it for the kernel (e.g. eBPF bytecode) and the executable compiled for
the hardware (e.g. the P4 runtime) with that hash. At run time, the
kernel would query the device to see what program it's running; if the
reported hash is equal to that of the loaded eBPF program, then the
device is running a functionally equivalent program and the offload
can safely be performed (via whatever datapath interfaces are needed).
This means that the device can be managed through a side channel, but
the kernel retains the necessary transparency to instantiate the
offload.
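
As a rough sketch of that check (the function names below are made up
for illustration; the real comparison would live in the kernel and the
driver, not in userspace Python):

# Illustrative only: compute a digest over the canonicalized IR and
# compare the tag on the loaded kernel program with the tag the device
# reports for the program it is running.
import hashlib
import json

def ir_digest(ir):
    # Canonicalize the IR before hashing so equivalent IR always yields
    # the same tag regardless of key order or whitespace.
    canonical = json.dumps(ir, sort_keys=True, separators=(",", ":"))
    return hashlib.md5(canonical.encode()).hexdigest()

def offload_allowed(kernel_prog_tag, device_reported_tag):
    # Equal tags mean both artifacts were compiled from the same IR, so
    # the datapaths are functionally equivalent and the offload can be
    # instantiated safely.
    return kernel_prog_tag == device_reported_tag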

Here is a diagram of what this might look like:

[ P4 program ] ----------- [ P4 compiler ] ------------+
                                                       |
[ PANDA-C program ] ------ [ LLVM ] -------------------+
                                                       |
[ PANDA-Python program ] - [ Python compiler ] --------+
                                                       |
[ PANDA-Rust program ] --- [ Rust compiler ] ----------+
                                                       |
[ GUI ] ------------------ [ GUI to IR ] --------------+
                                                       |
[ CLI ] ------------------ [ CLI to IR ] --------------+
                                                       |
                                                       V
                                          [ Common IR (.json) ]
                                                       |
        +----------------------------------------------+
        |
        +---- [ P4 Vendor Backend ] ----------- [ Devlink ]
        |
        +---- [ IR to eBPF backend compiler ] - [ eBPF bytecode ]
        |
        +---- [ IR to CPU instructions ] ------ [ Executable Binary ]
        |
        +---- [ IR to P4TC CLI ] -------------- [ Script of commands ]

> And much less work for us to maintain.

+1

>
> .John
