Message-ID: <CAHo-OoyFjsOxmjmsAA8FjiQf7DLbUHtW8z=JB1JRwJNaHsEWCg@mail.gmail.com>
Date: Tue, 1 Nov 2016 22:06:27 -0700
From: Maciej Żenczykowski <zenczykowski@...il.com>
To: Daniel Borkmann <daniel@...earbox.net>
Cc: Jiri Pirko <jiri@...nulli.us>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
Thomas Graf <tgraf@...g.ch>,
John Fastabend <john.fastabend@...il.com>,
Jakub Kicinski <kubakici@...pl>,
Linux NetDev <netdev@...r.kernel.org>,
David Miller <davem@...emloft.net>,
Jamal Hadi Salim <jhs@...atatu.com>,
roopa@...ulusnetworks.com, simon.horman@...ronome.com,
ast@...nel.org, prem@...efootnetworks.com,
Hannes Frederic Sowa <hannes@...essinduktion.org>,
Jiri Benc <jbenc@...hat.com>,
Tom Herbert <tom@...bertland.com>, mattyk@...lanox.com,
idosch@...lanox.com, eladr@...lanox.com, yotamg@...lanox.com,
nogahf@...lanox.com, ogerlitz@...lanox.com,
"John W. Linville" <linville@...driver.com>,
Andy Gospodarek <andy@...yhouse.net>,
Florian Fainelli <f.fainelli@...il.com>,
dsa@...ulusnetworks.com, vivien.didelot@...oirfairelinux.com,
andrew@...n.ch, ivecera@...hat.com
Subject: Re: Let's do P4
> Sorry for jumping into the middle, and for the delay (plumbers this week). My
> question would be: if the main target is p4 *offloading* anyway, who
> would use this sw fallback path? Mostly for testing purposes?
>
> I'm not sure about compilerB here and the complexity that needs to be
> pushed into the kernel along with it. I would assume this would result
> in slower code than what the existing P4 -> eBPF front ends for LLVM
> would generate, since LLVM can perform all kinds of optimizations there
> that might not be feasible inside the kernel. Thus, if I wanted to do
> that in sw, I'd just use the existing LLVM facilities instead and go
> via cls_bpf in that case.
>
> What is your compilerA? Is that part of tc in user space? Maybe linked
> against an LLVM lib, for example? If you really want some sw path, can't tc
> do this transparently from user space instead, when it gets a netlink error
> saying the program cannot be offloaded (and thus switch internally to
> f_bpf's loader)?
Since we're jumping in the middle ;-)

Ideally we'd have an interface where some generic program is loaded
into the kernel, the kernel core fetches some sort of generic
description of the hardware's capabilities, translates the program,
and fits as much of it as it can into the hardware, possibly all of
it, emulating/executing the rest in software.
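
To make that a bit more concrete, here's roughly the shape of the
interface I'm imagining. Every name below is made up, it's just a
sketch of "driver describes its capabilities, core splits the program":

#include <stdbool.h>

/* Purely hypothetical sketch, none of these names exist anywhere.
 * The driver fills in a generic description of what the pipeline can
 * do, and the core decides how much of the program fits into it. */
struct hw_match_caps {
	unsigned int max_headers;	/* distinct headers it can match on */
	unsigned int max_match_len;	/* bytes per header it can inspect */
	bool can_count;			/* per-rule packet/byte counters? */
	bool can_encap;			/* encap/decap support (simplified) */
};

struct prog;				/* the generic program representation */

struct prog_split {
	struct prog *hw_part;		/* programmed into the pipeline */
	struct prog *sw_part;		/* remainder, emulated by the kernel */
};

/* Offload as much of the program as the caps allow, and return the
 * rest as a software part to emulate/execute; 0 on success. */
int prog_split_for_hw(const struct prog *p,
		      const struct hw_match_caps *caps,
		      struct prog_split *out);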
E.g. if the hardware can only match on 5 different 10-byte headers,
but we need to match on 7 different 12-byte headers, we can still use
the hardware to help us dispatch straight into the "check the last 2
bytes, then the last 2 headers" software emulation code. Or maybe the
hardware can match, but can't count packets, so we need to implement
counting in sw. Or it can't do all types of encap/decap, so we need
to do the encap in sw in certain cases...
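
Purely illustrative again: once the hardware has classified the packet
but can't count (or can't do the encap), the software side only has to
finish the job. All names hypothetical:

#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical: the hw hands us the packet together with the rule it
 * matched; sw does only the parts the hw couldn't. */
struct rule {
	uint64_t pkts, bytes;	/* counters the hw lacks */
	bool needs_sw_encap;	/* an encap type the hw can't do */
};

static void sw_encap(const void *pkt, size_t len)
{
	/* ... perform the encapsulation the hw couldn't ... */
	(void)pkt; (void)len;
}

static void sw_fixup(struct rule *r, const void *pkt, size_t len)
{
	r->pkts++;		/* counting emulated in sw */
	r->bytes += len;
	if (r->needs_sw_encap)
		sw_encap(pkt, len);
}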
Doing this by extracting such information out of a bpf program seems
pretty hard. Or maybe I'm overestimating the true difficulty of
taking a bpf program and mapping it onto a TCAM... Maybe if the bpf
program has a more "standard" layout (i.e. a tree doing packet
parsing/matching, with "actions" in the leaves) then it's not so hard?
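
Something with roughly this shape, say: a tree that matches header
fields on the way down and only carries actions in the leaves.
Hypothetical types once more:

#include <stddef.h>
#include <stdint.h>

/* A program constrained to this "standard layout" maps onto TCAM rows
 * far more directly than arbitrary bpf bytecode does: each root-to-leaf
 * path is a conjunction of field matches, i.e. roughly one TCAM entry
 * (with wildcards) plus the leaf's action. */
enum action { ACT_DROP, ACT_FORWARD, ACT_ENCAP };

struct match_node {
	size_t off, len;		/* header field to compare */
	uint8_t value[16];		/* expected bytes at off */
	struct match_node *hit;		/* next match on success... */
	struct match_node *miss;	/* ...or on failure; NULL = leaf */
	enum action act;		/* meaningful only at a leaf */
};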
Obviously real hardware has significantly more capabilities than just
a TCAM at the front of the pipeline... I'm afraid I lack the knowledge
of what the real capabilities of current (and future...) hardware are.
But maybe we could come up with some sufficiently generic description
of *what* we want accomplished, instead of the precise specifics of how.