Date: Mon, 4 Apr 2016 11:27:27 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Brenden Blanco <bblanco@...mgrid.com>
Cc: Jesper Dangaard Brouer <brouer@...hat.com>, davem@...emloft.net,
netdev@...r.kernel.org, tom@...bertland.com, ogerlitz@...lanox.com,
daniel@...earbox.net, john.fastabend@...il.com
Subject: Re: [RFC PATCH 4/5] mlx4: add support for fast rx drop bpf program
On Sat, Apr 02, 2016 at 11:11:52PM -0700, Brenden Blanco wrote:
> On Sat, Apr 02, 2016 at 10:23:31AM +0200, Jesper Dangaard Brouer wrote:
> [...]
> >
> > I think you need to DMA sync the RX page before you can safely access
> > packet data in the page (on all archs).
> >
> Thanks, I will give that a try in the next spin.
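> Probably something along these lines for the first frag, if I'm reading
> mlx4_en_rx.c right (just a sketch, the exact dma/size field names may be
> off):
>
> 	/* sketch: sync the first frag for the CPU before the prog reads it */
> 	dma_sync_single_for_cpu(priv->ddev,
> 				frags[0].dma + frags[0].page_offset,
> 				priv->frag_info[0].frag_size,
> 				DMA_FROM_DEVICE);
>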
> > > +		ethh = (struct ethhdr *)(page_address(frags[0].page) +
> > > +					 frags[0].page_offset);
> > > +		if (mlx4_call_bpf(prog, ethh, length)) {
> >
> > AFAIK length here covers all the frags[n].page, thus potentially
> > causing the BPF program to access memory out of bounds (crash).
> >
> > Having several page fragments is AFAIK an optimization for jumbo-frames
> > on PowerPC (which is a bit annoying for your use-case ;-)).
> >
> Yeah, this needs some more work. I can think of some options:
> 1. limit pseudo skb.len to first frag's length only, and signal to
> program that the packet is incomplete
> 2. for nfrags>1 skip bpf processing, but this could be functionally
> incorrect for some use cases
> 3. run the program for each frag
> 4. reject ndo_bpf_set when frags are possible (large mtu?)
>
> My preference is to go with 1, thoughts?
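> Something like this for 1, roughly (sketch only; how to signal
> "incomplete" to the program is the open question, the flag below
> doesn't exist yet):
>
> 	/* expose only what is actually contiguous in the first frag */
> 	len = min_t(unsigned int, length, priv->frag_info[0].frag_size);
> 	incomplete = length > priv->frag_info[0].frag_size;
>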
hmm, and what will the program do with an 'incomplete' packet?
imo option 4 is the only way here. If a phys_dev bpf program is already
attached to the netdev, then mlx4_en_change_mtu() can reject jumbo mtus.
My understanding of mlx4_en_calc_rx_buf is that mtu < 1514
will have num_frags==1. That's the common case and the one we
want to optimize for.
If we can later find a way to change the mlx4 driver to support
phys_dev bpf programs with jumbo mtus, great.
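Something like this in both spots (sketch only; the prog pointer name
and the exact mtu cutoff would need checking against
mlx4_en_calc_rx_buf):

	/* mlx4_en_change_mtu(): refuse mtus that need multi-frag rx
	 * while a phys_dev program is attached
	 */
	if (priv->prog && new_mtu > ETH_DATA_LEN)
		return -EOPNOTSUPP;

	/* ndo_bpf_set handler: refuse to attach when rx rings already
	 * use more than one frag per packet
	 */
	if (priv->num_frags > 1)
		return -EOPNOTSUPP;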