Message-ID: <20200921182638.5d8343fd@carbon>
Date:   Mon, 21 Sep 2020 18:26:38 +0200
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     Daniel Borkmann <daniel@...earbox.net>
Cc:     Lorenz Bauer <lmb@...udflare.com>,
        Maciej Żenczykowski <maze@...gle.com>,
        Saeed Mahameed <saeed@...nel.org>,
        Daniel Borkmann <borkmann@...earbox.net>,
        Alexei Starovoitov <alexei.starovoitov@...il.com>,
        BPF-dev-list <bpf@...r.kernel.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        Lorenzo Bianconi <lorenzo.bianconi@...hat.com>,
        John Fastabend <john.fastabend@...il.com>,
        Jakub Kicinski <kuba@...nel.org>,
        Shaun Crampton <shaun@...era.io>,
        David Miller <davem@...emloft.net>,
        Marek Majkowski <marek@...udflare.com>, brouer@...hat.com
Subject: Re: BPF redirect API design issue for BPF-prog MTU feedback?

On Mon, 21 Sep 2020 17:08:17 +0200
Daniel Borkmann <daniel@...earbox.net> wrote:

> On 9/21/20 2:49 PM, Jesper Dangaard Brouer wrote:
> > On Mon, 21 Sep 2020 11:37:18 +0100
> > Lorenz Bauer <lmb@...udflare.com> wrote:  
> >> On Sat, 19 Sep 2020 at 00:06, Maciej Żenczykowski <maze@...gle.com> wrote:  
> >>>     
> >>>> This is a good point.  Since bpf_skb_adjust_room() can simply be run
> >>>> after the bpf_redirect() call, an MTU check in bpf_redirect() doesn't
> >>>> make much sense, as a clever/bad BPF program can avoid the MTU check
> >>>> anyway.  This basically means that we have to do the MTU check (again)
> >>>> on the kernel side anyhow to catch such clever/bad BPF programs.
> >>>> (And I don't like wasting cycles on doing the same check twice.)
> >>>
> >>> If you get rid of the check in bpf_redirect() you might as well get
> >>> rid of *all* the checks for excessive MTU in all the helpers that
> >>> adjust packet size one way or another.  They *all* then become
> >>> useless overhead.
> >>>
> >>> I don't like that.  There may be something the BPF program could do to
> >>> react to the error condition (for example in my case, not modify
> >>> things and just let the core stack deal with it - which will
> >>> probably just generate a Packet Too Big ICMP error).
> >>>
> >>> Btw, right now our forwarding programs first adjust the packet size,
> >>> then call bpf_redirect(), and almost immediately return what it
> >>> returned.
> >>>
> >>> But this could, I think, easily be changed to reverse the ordering, so
> >>> that we wouldn't increase the packet size before the core stack was
> >>> informed that we would be forwarding via a different interface.
> >>
> >> We do the same, except that we also use XDP_TX when appropriate. This
> >> complicates the matter, because there is no helper call we could
> >> return an error from.  
> > 
> > Do note that my MTU work is focused on TC-BPF.  For XDP-redirect the
> > MTU check is done in xdp_ok_fwd_dev() via __xdp_enqueue(), which also
> > happens too late to give the BPF-prog knowledge/feedback.  For XDP_TX I
> > audited the drivers when I implemented xdp_buff.frame_sz, and they
> > handled (or I added handling for) the max HW MTU.  E.g. mlx5 [1].
> > 
> > [1] https://elixir.bootlin.com/linux/v5.9-rc6/source/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c#L267
> >   
> >> My preference would be to have three helpers: get MTU for a device,
> >> redirect ctx to a device (with MTU check), and resize ctx (without MTU
> >> check), but that doesn't work with XDP_TX.  Your idea of doing checks
> >> in redirect and adjust_room is pragmatic and seems easier to
> >> implement.
> >   
> > I do like this plan/proposal (with 3 helpers), but it is not possible
> > with the current API.  The main problem is that the current bpf_redirect
> > API doesn't provide the ctx, so we cannot do the check in the BPF-helper.
> > 
> > Are you saying we should create a new bpf_redirect API (that includes
> > the packet ctx)?
> 
> Sorry for jumping in late here... one thing that is not clear to me
> is this: if we are fully sure that the skb is dropped by the stack anyway
> due to an invalid MTU (redirect to ingress does this via dev_forward_skb(),

Yes, TC-redirecting to *INGRESS* has a slightly relaxed MTU check via
is_skb_forwardable(), called via ____dev_forward_skb().  This MTU check
seems redundant, as the netstack will do MTU checks anyhow.
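
For reference, the relaxed check boils down to roughly the following
(a simplified sketch of the is_skb_forwardable() logic, with my own
function name -- not a verbatim copy of net/core/dev.c):

/* Paraphrase of the in-kernel logic; usual skbuff/netdevice headers
 * assumed.  The frame is accepted if it fits within MTU + L2 header
 * (+ VLAN), or if it is a GSO skb that gets segmented later anyway.
 */
static bool mtu_ok_for_forward(const struct net_device *dev,
                               const struct sk_buff *skb)
{
        unsigned int len;

        if (!(dev->flags & IFF_UP))
                return false;

        /* Slightly relaxed: allow MTU + L2 header + VLAN tag */
        len = dev->mtu + dev->hard_header_len + VLAN_HLEN;
        if (skb->len <= len)
                return true;

        /* GSO skbs will be segmented to fit the MTU later */
        if (skb_is_gso(skb))
                return true;

        return false;
}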

> it's not fully clear to me whether that's also the case for
> dev_queue_xmit()),

This seems to be the problematic case: TC-ingress redirect to netdev
egress, which basically calls dev_queue_xmit().  I tried to follow the
code all the way into the ixgbe driver, and I didn't see any MTU checks.

We might have to add an MTU check here, as it could be considered a
bug/problematic that we allow this (e.g. a netdev with a large MTU can
redirect frames larger than the MTU of the egress netdev).
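
If we do add a check, I imagine something along the lines of the sketch
below in the egress redirect path (the function name and the exact
placement, e.g. around skb_do_redirect()/__bpf_redirect(), are just for
illustration):

/* Hypothetical sketch: reject oversized non-GSO frames before handing
 * them to dev_queue_xmit() on the egress netdev.
 */
static int check_egress_mtu(struct net_device *dev, struct sk_buff *skb)
{
        /* GSO skbs will be segmented to fit the MTU later */
        if (skb_is_gso(skb))
                return 0;

        if (unlikely(skb->len > dev->mtu + dev->hard_header_len)) {
                kfree_skb(skb);
                return -EMSGSIZE;
        }

        return 0;
}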


> then why not drop all the MTU checks aside
> from the SKB_MAX_ALLOC sanity check for BPF helpers

I agree, and I think the MTU checks in the BPF helpers make little
sense, as we have found ways to circumvent these checks (as discussed
in this thread).
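
Just to illustrate the circumvention discussed earlier in the thread, a
minimal TC-BPF sketch (the ifindex and the 64-byte grow are made-up
values):

#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

SEC("classifier")
int tc_redir_then_grow(struct __sk_buff *skb)
{
        int ret;

        /* The redirect decision is taken first ... */
        ret = bpf_redirect(42 /* made-up egress ifindex */, 0);

        /* ... and the packet is grown afterwards, so any MTU check done
         * inside the redirect helper never sees the final length.
         */
        if (bpf_skb_adjust_room(skb, 64, BPF_ADJ_ROOM_MAC, 0) < 0)
                return TC_ACT_SHOT;

        return ret;
}

char _license[] SEC("license") = "GPL";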

> and have something like a device object (similar to e.g. TCP sockets)
> exposed to the BPF prog, where we can retrieve the object and read
> dev->mtu from the prog, so the BPF program could then do the
> "exception" handling internally w/o an extra prog needed (we also
> already expose whether the skb is GSO or not).

I do think we need some BPF helper that allows the BPF-prog to look up
the MTU of a netdev, so it can do proper ICMP exception handling.
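
I was thinking of something along these lines (purely a hypothetical
signature to illustrate the idea -- nothing like this exists today, and
the helper name, egress_ifindex and handle_pkt_too_big() are all made up):

/*
 * long bpf_netdev_mtu_lookup(struct __sk_buff *skb, u32 ifindex,
 *                            u32 *mtu, u64 flags);
 *
 * Fill *mtu with the MTU of the netdev identified by ifindex.
 * Return 0 on success or a negative error.
 */

u32 mtu = 0;

if (bpf_netdev_mtu_lookup(skb, egress_ifindex, &mtu, 0) == 0 &&
    skb->len > mtu)
        return handle_pkt_too_big(skb); /* see the dispatch sketch below */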

I looked at doing the ICMP exception handling on the kernel side, but
realized that this is not possible at the TC-redirect layer, as we have
not decoded the L3 protocol at this point (e.g. we cannot know whether
we need to call icmp_send() or icmp6_send()).
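
The dispatch itself is trivial on the BPF side, since the prog sees
skb->protocol.  A rough sketch (send_icmp4_frag_needed() and
send_icmp6_pkt_too_big() are placeholders for prog-built replies, not
existing helpers):

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

static __always_inline int handle_pkt_too_big(struct __sk_buff *skb)
{
        if (skb->protocol == bpf_htons(ETH_P_IP))
                return send_icmp4_frag_needed(skb);  /* ICMP Frag Needed */
        if (skb->protocol == bpf_htons(ETH_P_IPV6))
                return send_icmp6_pkt_too_big(skb);  /* ICMPv6 Pkt Too Big */

        return TC_ACT_SHOT;
}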

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
