Message-ID: <20190206004714.pz44evow5uwgvt4x@ast-mbp.dhcp.thefacebook.com>
Date: Tue, 5 Feb 2019 16:47:16 -0800
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Stanislav Fomichev <sdf@...ichev.me>
Cc: Willem de Bruijn <willemdebruijn.kernel@...il.com>,
Stanislav Fomichev <sdf@...gle.com>,
Network Development <netdev@...r.kernel.org>,
David Miller <davem@...emloft.net>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
simon.horman@...ronome.com, Willem de Bruijn <willemb@...gle.com>
Subject: Re: [RFC bpf-next 0/7] net: flow_dissector: trigger BPF hook when
called from eth_get_headlen
On Tue, Feb 05, 2019 at 12:40:03PM -0800, Stanislav Fomichev wrote:
> On 02/05, Willem de Bruijn wrote:
> > On Tue, Feb 5, 2019 at 12:57 PM Stanislav Fomichev <sdf@...gle.com> wrote:
> > >
> > > Currently, when eth_get_headlen calls the flow dissector, it doesn't
> > > pass any skb. Because we use the passed skb to look up the associated
> > > networking namespace to find whether a BPF program is attached or not,
> > > we always fall back to the C-based flow dissector in this case.
> > >
> > > The goal of this patch series is to add a new networking namespace
> > > argument to eth_get_headlen and to make BPF flow dissector programs
> > > work in the skb-less case.
> > >
> > > The series goes like this:
> > > 1. introduce __init_skb and __init_skb_shinfo; those will be used to
> > > initialize the temporary skb
> > > 2. introduce skb_net, which can be used to get the networking namespace
> > > associated with an skb
> > > 3. add a new optional network namespace argument to __skb_flow_dissect and
> > > plumb it through the callers
> > > 4. add a new __flow_bpf_dissect, which constructs a temporary on-stack skb
> > > (using __init_skb) and calls the BPF flow dissector program
> >
> > The main concern I see with this series is the cost of zeroing the
> > on-stack skb for every packet in the device driver receive routine,
> > *independent* of the real skb allocation and zeroing that will likely
> > happen later.
> Yes, plus ~200 bytes on the stack for the callers.
>
> Not sure how visible this zeroing is, though. I can probably try to get
> some numbers from BPF_PROG_TEST_RUN (running the current version vs
> running with an on-stack skb).
imo an extra 256-byte memset for every packet is a non-starter.
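[Editorial note: the per-packet cost being objected to can be modeled in userspace. `FAKE_SKB_SIZE` and `zero_per_packet` below are illustrative names, not kernel symbols; the point is that the zeroed byte count scales linearly with packet rate, so at millions of packets per second the memset sits on the hot path of every receive.]

```c
#include <string.h>

/* Userspace model of the concern: a fixed-size on-stack area
 * zeroed once per packet in the receive path. */
#define FAKE_SKB_SIZE 256

/* Zero a FAKE_SKB_SIZE stack buffer once per packet and return the
 * total number of bytes zeroed, to make the linear scaling explicit. */
static unsigned long zero_per_packet(unsigned long npackets)
{
	unsigned char stack_skb[FAKE_SKB_SIZE];
	unsigned long bytes = 0;

	for (unsigned long i = 0; i < npackets; i++) {
		/* happens before (and in addition to) the real skb
		 * allocation and zeroing later in the stack */
		memset(stack_skb, 0, sizeof(stack_skb));
		bytes += sizeof(stack_skb);
	}
	return bytes;
}
```

At 10 Mpps this model zeroes 2.56 GB/s of stack memory on top of the normal skb lifecycle, which is the scale behind the objection.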