Date:   Wed, 13 Feb 2019 21:36:57 -0800
From:   Peter Oskolkov <posk@...gle.com>
To:     Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc:     David Ahern <dsahern@...il.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        netdev <netdev@...r.kernel.org>, Peter Oskolkov <posk@...k.io>,
        Willem de Bruijn <willemb@...gle.com>
Subject: Re: [PATCH bpf-next v11 0/7] bpf: add BPF_LWT_ENCAP_IP option to bpf_lwt_push_encap

On Wed, Feb 13, 2019 at 8:21 PM Alexei Starovoitov
<alexei.starovoitov@...il.com> wrote:
>
> On Wed, Feb 13, 2019 at 08:44:51PM -0700, David Ahern wrote:
> > On 2/13/19 7:39 PM, Alexei Starovoitov wrote:
> > > On Wed, Feb 13, 2019 at 05:46:26PM -0700, David Ahern wrote:
> > >> On 2/13/19 12:53 PM, Peter Oskolkov wrote:
> > >>> This patchset implements BPF_LWT_ENCAP_IP mode in bpf_lwt_push_encap
> > >>> BPF helper. It enables BPF programs (specifically, BPF_PROG_TYPE_LWT_IN
> > >>> and BPF_PROG_TYPE_LWT_XMIT prog types) to add IP encapsulation headers
> > >>> to packets (e.g. IP/GRE, GUE, IPIP).
> > >>>
> > >>> This is useful when thousands of different short-lived flows should be
> > >>> encapped, each with a different, dynamically determined destination.
> > >>> Although lwtunnels can be used in some of these scenarios, the ability
> > >>> to dynamically generate encap headers adds more flexibility, e.g.
> > >>> when routing depends on the state of the host (reflected in global bpf
> > >>> maps).
> > >>>
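
[Editorial note: the following sketch is not part of the patchset. With
BPF_LWT_ENCAP_IP, the BPF program passes a complete, ready-made outer header
to bpf_lwt_push_encap(). Purely as an illustration of what such a header
looks like on the wire, here is the byte layout of an IPv4+GRE encap header
built in Python; field values (TTL 64, zero checksum, no GRE flags) are
assumed typical, not taken from the patches.]

```python
import socket
import struct

def build_ipv4_gre_header(src: str, dst: str, inner_len: int) -> bytes:
    """Sketch: outer IPv4 header (20 bytes) followed by a basic GRE header
    (4 bytes) - the kind of buffer a BPF_LWT_ENCAP_IP program would hand to
    bpf_lwt_push_encap(). Checksum left at zero for brevity."""
    total_len = 20 + 4 + inner_len          # IPv4 + GRE + inner packet
    ver_ihl = (4 << 4) | 5                  # IPv4, 5 x 32-bit header words
    ipv4 = struct.pack("!BBHHHBBH4s4s",
                       ver_ihl, 0, total_len,
                       0, 0,                # id, flags/frag offset
                       64,                  # TTL (assumed)
                       47,                  # IP protocol 47 = GRE
                       0,                   # header checksum (left zero here)
                       socket.inet_aton(src),
                       socket.inet_aton(dst))
    gre = struct.pack("!HH", 0, 0x0800)     # no GRE flags, proto = IPv4
    return ipv4 + gre

hdr = build_ipv4_gre_header("10.0.0.1", "10.0.0.2", 100)
assert len(hdr) == 24                       # 20-byte IPv4 + 4-byte GRE
```

The BPF program itself would build an equivalent buffer on its stack and
call bpf_lwt_push_encap(skb, BPF_LWT_ENCAP_IP, &hdr, sizeof(hdr)).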
> > >>
> > >>
> > >> For the set:
> > >> Reviewed-by: David Ahern <dsahern@...il.com>
> > >
> > > Applied. Thanks everyone!
> > >
> >
> > Looks like a cleanup round is needed.
> >
> > I changed the routes to fail with unreachable:
> >
> > @@ -179,16 +175,16 @@
> >       ip -netns ${NS3} tunnel add gre_dev mode gre remote ${IPv4_1} local ${IPv4_GRE} ttl 255
> >       ip -netns ${NS3} link set gre_dev up
> >       ip -netns ${NS3} addr add ${IPv4_GRE} dev gre_dev
> > -     ip -netns ${NS1} route add ${IPv4_GRE}/32 dev veth5 via ${IPv4_6}
> > -     ip -netns ${NS2} route add ${IPv4_GRE}/32 dev veth7 via ${IPv4_8}
> > +     ip -netns ${NS1} route add unreachable ${IPv4_GRE}/32
> > +     ip -netns ${NS2} route add unreachable ${IPv4_GRE}/32
> >
> >       # configure IPv6 GRE device in NS3, and a route to it via the "bottom" route
> >       ip -netns ${NS3} -6 tunnel add name gre6_dev mode ip6gre remote ${IPv6_1} local ${IPv6_GRE} ttl 255
> >       ip -netns ${NS3} link set gre6_dev up
> >       ip -netns ${NS3} -6 addr add ${IPv6_GRE} nodad dev gre6_dev
> > -     ip -netns ${NS1} -6 route add ${IPv6_GRE}/128 dev veth5 via ${IPv6_6}
> > -     ip -netns ${NS2} -6 route add ${IPv6_GRE}/128 dev veth7 via ${IPv6_8}
> > +     ip -netns ${NS1} -6 route add unreachable ${IPv6_GRE}/128
> > +     ip -netns ${NS2} -6 route add unreachable ${IPv6_GRE}/128
> >
> >       # rp_filter gets confused by what these tests are doing, so disable it
> >       ip netns exec ${NS1} sysctl -wq net.ipv4.conf.all.rp_filter=0
> > @@ -220,7 +216,6 @@
> >
> >
> > and then removed all of the set -e and exit 1's in the script (really
> > should let all of the tests run versus bailing on the first failure).
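
[Editorial note: the pattern David describes - run every case to completion
and summarize, rather than `set -e`/`exit 1` on the first failure - can be
sketched roughly as below. This is an illustration only, not the actual
selftest script; the case names are hypothetical stand-ins.]

```shell
#!/bin/sh
# Sketch: count failures instead of bailing on the first one.
failures=0

run_test() {
    name="$1"; shift
    if "$@" >/dev/null 2>&1; then
        echo "PASS: $name"
    else
        echo "FAIL: $name"
        failures=$((failures + 1))
    fi
}

# Hypothetical cases standing in for the real GRE encap tests.
run_test "ping_via_encap"   true
run_test "ping_unreachable" false

echo "summary: $failures failure(s)"
```

Every case runs, and the summary (or a nonzero `$failures`) still tells the
caller whether the whole suite passed.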
> >
> > With kmemleak enabled I see a lot of suspected memory leaks - some may
> > not be related to this change but it is triggering the suspected leak:
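
[Editorial note: the suspected-leak report itself is not reproduced here.
For reference, kmemleak reports like the one David mentions are pulled from
the standard debugfs interface; a rough sketch, assuming a kernel built with
CONFIG_DEBUG_KMEMLEAK and debugfs mounted at /sys/kernel/debug:]

```shell
#!/bin/sh
# Sketch: trigger an immediate kmemleak scan and dump suspected leaks.
KMEMLEAK=/sys/kernel/debug/kmemleak
if [ -w "$KMEMLEAK" ]; then
    echo scan > "$KMEMLEAK"    # force a scan now instead of waiting
    cat "$KMEMLEAK"            # print suspected leaks with backtraces
else
    echo "kmemleak not available (need CONFIG_DEBUG_KMEMLEAK and root)"
fi
```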
>
> argh. Thanks a lot for catching it.
> Let's figure out the fix quickly.

Reproduced. Looking.

> If it's too intrusive we can revert and reapply.
> I'm not going to send a pull-req to Dave with a known issue like this.
>
