Message-ID: <20190206005931.GF10769@mini-arch>
Date:   Tue, 5 Feb 2019 16:59:31 -0800
From:   Stanislav Fomichev <sdf@...ichev.me>
To:     Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc:     Willem de Bruijn <willemdebruijn.kernel@...il.com>,
        Stanislav Fomichev <sdf@...gle.com>,
        Network Development <netdev@...r.kernel.org>,
        David Miller <davem@...emloft.net>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        simon.horman@...ronome.com, Willem de Bruijn <willemb@...gle.com>
Subject: Re: [RFC bpf-next 0/7] net: flow_dissector: trigger BPF hook when
 called from eth_get_headlen

On 02/05, Alexei Starovoitov wrote:
> On Tue, Feb 05, 2019 at 12:40:03PM -0800, Stanislav Fomichev wrote:
> > On 02/05, Willem de Bruijn wrote:
> > > On Tue, Feb 5, 2019 at 12:57 PM Stanislav Fomichev <sdf@...gle.com> wrote:
> > > >
> > > > Currently, when eth_get_headlen calls the flow dissector, it doesn't pass
> > > > any skb. Because we use the passed skb to look up the associated networking
> > > > namespace (to find whether a BPF program is attached or not), we always
> > > > fall back to the C-based flow dissector in this case.
> > > >
> > > > The goal of this patch series is to add a new networking namespace
> > > > argument to eth_get_headlen and to make BPF flow dissector programs
> > > > work in the skb-less case.
> > > >
> > > > The series goes like this:
> > > > 1. introduce __init_skb and __init_skb_shinfo; those will be used to
> > > >    initialize a temporary skb
> > > > 2. introduce skb_net, which can be used to get the networking namespace
> > > >    associated with an skb
> > > > 3. add a new optional network namespace argument to __skb_flow_dissect and
> > > >    plumb it through the callers
> > > > 4. add a new __flow_bpf_dissect which constructs a temporary on-stack skb
> > > >    (using __init_skb) and calls the BPF flow dissector program
> > > 
> > > The main concern I see with this series is the cost of skb zeroing
> > > for every packet in the device driver receive routine, *independent*
> > > of the real skb allocation and zeroing which will likely happen
> > > later.
> > Yes, plus ~200 bytes on the stack for the callers.
> > 
> > Not sure how visible this zeroing is, though. I can probably try to get
> > some numbers from BPF_PROG_TEST_RUN (running the current version vs
> > running with an on-stack skb).
> 
> imo an extra 256-byte memset for every packet is a non-starter.
We can put pre-allocated/initialized skbs without data into a percpu variable,
or even use pcpu_freelist_pop/pcpu_freelist_push to make sure we don't have to
maintain separate percpu copies for irq/softirq/process contexts.
Any concerns with that approach?
Any other possible concerns with the overall series?
