lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <CAJ3xEMiGaCg8P08kt=WCe8MYsvBb=WiTZUnt21Kb+=vRi-g84A@mail.gmail.com>
Date:	Sun, 10 Jul 2016 00:37:00 +0300
From:	Or Gerlitz <gerlitz.or@...il.com>
To:	Saeed Mahameed <saeedm@....mellanox.co.il>
Cc:	Brenden Blanco <bblanco@...mgrid.com>,
	"David S. Miller" <davem@...emloft.net>,
	Linux Netdev List <netdev@...r.kernel.org>,
	Martin KaFai Lau <kafai@...com>,
	Jesper Dangaard Brouer <brouer@...hat.com>,
	Ari Saha <as754m@....com>,
	Alexei Starovoitov <alexei.starovoitov@...il.com>,
	john fastabend <john.fastabend@...il.com>,
	Hannes Frederic Sowa <hannes@...essinduktion.org>,
	Thomas Graf <tgraf@...g.ch>, Tom Herbert <tom@...bertland.com>,
	Daniel Borkmann <daniel@...earbox.net>
Subject: Re: [PATCH v6 04/12] net/mlx4_en: add support for fast rx drop bpf program

On Sat, Jul 9, 2016 at 10:58 PM, Saeed Mahameed
<saeedm@....mellanox.co.il> wrote:
> On Fri, Jul 8, 2016 at 5:15 AM, Brenden Blanco <bblanco@...mgrid.com> wrote:
>> Add support for the BPF_PROG_TYPE_XDP hook in mlx4 driver.
>>
>> In tc/socket bpf programs, helpers linearize skb fragments as needed
>> when the program touchs the packet data. However, in the pursuit of
>> speed, XDP programs will not be allowed to use these slower functions,
>> especially if it involves allocating an skb.
>>
>> Therefore, disallow MTU settings that would produce a multi-fragment
>> packet that XDP programs would fail to access. Future enhancements could
>> be done to increase the allowable MTU.
>>
>> Signed-off-by: Brenden Blanco <bblanco@...mgrid.com>
>> ---
>>  drivers/net/ethernet/mellanox/mlx4/en_netdev.c | 38 ++++++++++++++++++++++++++
>>  drivers/net/ethernet/mellanox/mlx4/en_rx.c     | 36 +++++++++++++++++++++---
>>  drivers/net/ethernet/mellanox/mlx4/mlx4_en.h   |  5 ++++
>>  3 files changed, 75 insertions(+), 4 deletions(-)
>>
> [...]
>> +               /* A bpf program gets first chance to drop the packet. It may
>> +                * read bytes but not past the end of the frag.
>> +                */
>> +               if (prog) {
>> +                       struct xdp_buff xdp;
>> +                       dma_addr_t dma;
>> +                       u32 act;
>> +
>> +                       dma = be64_to_cpu(rx_desc->data[0].addr);
>> +                       dma_sync_single_for_cpu(priv->ddev, dma,
>> +                                               priv->frag_info[0].frag_size,
>> +                                               DMA_FROM_DEVICE);
>
> In case of XDP_PASS we will dma_sync again in the normal path,

yep, correct

> this can be improved by doing the dma_sync as soon as we can and once and
> for all, regardless of the path the packet is going to take
> (XDP_DROP/mlx4_en_complete_rx_desc/mlx4_en_rx_skb).

how you would envision this can be done in a not very ugly way?


>> +
>> +                       xdp.data = page_address(frags[0].page) +
>> +                                                       frags[0].page_offset;
>> +                       xdp.data_end = xdp.data + length;
>> +
>> +                       act = bpf_prog_run_xdp(prog, &xdp);
>> +                       switch (act) {
>> +                       case XDP_PASS:
>> +                               break;
>> +                       default:
>> +                               bpf_warn_invalid_xdp_action(act);
>> +                       case XDP_DROP:
>> +                               goto next;
>
> The drop action here (goto next) will release the current rx_desc
> buffers and use new ones to refill, I know that the mlx4 rx scheme
> will release/allocate new pages once every ~32 packet, but one
> improvement can really help here especially for XDP_DROP benchmarks is
> to reuse the current rx_desc buffers in case it is going to be
> dropped.

> Considering if mlx4 rx buffer scheme doesn't allow gaps, Maybe this
> can be added later as future improvement for the whole mlx4 rx data
> path drop decisions.

yes, I think it makes sense to look on this as future improvement.

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ