Message-ID: <2148cc16-4988-5866-cb64-0a4f3d290a23@gmail.com>
Date:   Fri, 15 May 2020 17:15:20 -0600
From:   David Ahern <dsahern@...il.com>
To:     John Fastabend <john.fastabend@...il.com>,
        Toke Høiland-Jørgensen <toke@...hat.com>,
        David Ahern <dsahern@...nel.org>, netdev@...r.kernel.org
Cc:     davem@...emloft.net, kuba@...nel.org,
        prashantbhole.linux@...il.com, brouer@...hat.com,
        daniel@...earbox.net, ast@...nel.org, kafai@...com,
        songliubraving@...com, yhs@...com, andriin@...com,
        David Ahern <dahern@...italocean.com>
Subject: Re: [PATCH v5 bpf-next 00/11] net: Add support for XDP in egress path

On 5/15/20 4:54 PM, John Fastabend wrote:
> Hi David,
> 
> Another way to set up egress programs that I had been thinking about is to
> build a prog_array map with a slot per interface; then, after doing the
> redirect (or I guess the tail call program can do the redirect), do the
> tail call into the "egress" program.
> 
> From the programming side, this would look like:
> 
> 
>   ---> ingress xdp bpf                BPF_MAP_TYPE_PROG_ARRAY
>          redirect(ifindex)            +---------+
>          tail_call(ifindex)           |         |
>                       |               +---------+
>                       +-------------> | ifindex | 
>                                       +---------+
>                                       |         |
>                                       +---------+
> 
> 
>          return XDP_REDIRECT
>                         |
>                         +-------------> xdp_xmit
> 
> 
> The controller would then update the BPF_MAP_TYPE_PROG_ARRAY instead of
> attaching to egress interface itself as in the series here. I think it
> would only require that tail call program return XDP_REDIRECT so the
> driver knows to follow through with the redirect. OTOH the egress program
> can decide to DROP or PASS as well. The DROP case is straightforward,
> packet gets dropped. The PASS case is interesting because it will cause
> the packet to go to the stack. Which may or may not be expected I guess.
> We could always lint the programs or force the programs to return only
> XDP_REDIRECT/XDP_PASS from libbpf side.
> 
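To make sure I'm reading the proposal right, the ingress side would be
roughly the sketch below (rough sketch only - the map sizing and the
ifindex lookup are made up, and I've skipped the license boilerplate):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
        __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
        __uint(max_entries, 1024);        /* one slot per possible ifindex */
        __uint(key_size, sizeof(__u32));
        __uint(value_size, sizeof(__u32));
} egress_progs SEC(".maps");

/* placeholder - real code would pick this from a lookup on the packet */
static __always_inline __u32 pick_egress_ifindex(struct xdp_md *ctx)
{
        return 2;
}

SEC("xdp")
int ingress_demux(struct xdp_md *ctx)
{
        __u32 ifindex = pick_egress_ifindex(ctx);

        /* queue up the redirect first ... */
        bpf_redirect(ifindex, 0);

        /* ... then tail call into the per-interface "egress" program,
         * which is expected to return XDP_REDIRECT (or DROP / PASS)
         */
        bpf_tail_call(ctx, &egress_progs, ifindex);

        /* fall through if no egress program is installed for ifindex */
        return XDP_REDIRECT;
}

and the control plane just does bpf_map_update_elem() on the prog array
with the per-interface program fd instead of attaching to the tap.
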
> Would there be any differences between my example and your series from the
> datapath side? I think from the BPF program side the only difference
> would be return codes XDP_REDIRECT vs XDP_PASS. The control plane is
> different however. I don't have a good sense of one being better than
> the other. Do you happen to see some reason to prefer native xdp egress
> program types over prog array usage?

host ingress to VM is one use case; VM to VM on the same host is another.

> 
> From the performance side I suspect they will be more or less equivalent.
> 
> On the positive side using a PROG_ARRAY doesn't require a new attach
> point. A con might be right-sizing the PROG_ARRAY to map to interfaces?
> Do you have 1000's of interfaces here? Or some unknown number of

1000ish is probably the right ballpark - up to 500 VMs on a host, each
with a public and a private network connection. From there, each
interface can have its own firewall (ingress and egress; most likely
VM-unique data, but to be flexible potentially different programs,
e.g., blacklist vs whitelist). Each VM will definitely have its own
network data - MAC and network addresses - and since VMs are untrusted,
packet validation in both directions is a requirement.

With respect to lifecycle management of the programs and the data,
putting VM-specific programs and maps on VM-specific taps simplifies
management: the VM terminates, its taps are deleted, and the programs
and maps disappear with them. No validator thread is needed to clean up
the stray data and programs that inevitably accumulate when everything
is lumped into one program / map, or even an array of programs and maps.

To me the distributed approach is the simplest and best. The program on
the host NICs can be stupid simple: no packet parsing beyond the
Ethernet header. Its job is just to demux traffic, very much like a
switch. All VM logic and data are local to the VM's interfaces.
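
Concretely, the host-nic program would be little more than the sketch
below (map name, sizing and key layout are made up):

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>

struct mac_key {
        __u8 addr[ETH_ALEN];
        __u8 pad[2];
};

struct {
        __uint(type, BPF_MAP_TYPE_HASH);
        __uint(max_entries, 1024);      /* ~2 taps per VM, 500 VMs */
        __type(key, struct mac_key);
        __type(value, __u32);           /* tap ifindex */
} vm_macs SEC(".maps");

SEC("xdp")
int host_nic_demux(struct xdp_md *ctx)
{
        void *data = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;
        struct ethhdr *eth = data;
        struct mac_key key = {};
        __u32 *ifindex;

        if ((void *)(eth + 1) > data_end)
                return XDP_DROP;

        __builtin_memcpy(key.addr, eth->h_dest, ETH_ALEN);

        ifindex = bpf_map_lookup_elem(&vm_macs, &key);
        if (!ifindex)
                return XDP_PASS;    /* not for a VM, let the stack have it */

        /* all per-VM policy lives in the programs on the tap itself */
        return bpf_redirect(*ifindex, 0);
}

char _license[] SEC("license") = "GPL";

All the per-VM firewalling and validation then hangs off the programs
attached to the tap, not off this one.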


> interfaces? I've had building resizable hash/array maps on my todo list
> for a while, so I could add that for other use cases as well if that
> was the only problem.
> 
> Sorry for the late reply; it took me a bit of time to mull over the
> patches.
> 
> Thanks,
> John
> 
