Date:   Tue, 14 Feb 2017 14:28:24 -0800
From:   Tom Herbert <tom@...bertland.com>
To:     Edward Cree <ecree@...arflare.com>
Cc:     Jesper Dangaard Brouer <brouer@...hat.com>,
        Linux Kernel Network Developers <netdev@...r.kernel.org>,
        Kernel Team <kernel-team@...com>
Subject: Re: [PATCH RFC v2 1/8] xdp: Infrastructure to generalize XDP

On Tue, Feb 14, 2017 at 2:08 PM, Edward Cree <ecree@...arflare.com> wrote:
> On 14/02/17 21:07, Tom Herbert wrote:
>> Off the top of my head... I'd say we might be able to have a
>> minimally invasive interface with something like:
>>
>> XDP_RUN(hook, xdp, drv_xdp_handle_action)
>>
>> This replaces xdp_run; the return codes are processed in the called
>> functions. It's a macro so that xdp_handle_action can be inlined.
> I don't see why callbacks are needed, since XDP programs (I assume)
>  aren't supposed to block.  This XDP_RUN ends up looking a lot like
>  NF_HOOK, for no good reason that I can see (unlike NF hooks, we never
>  do things like NF_QUEUE).
>> Batching could then be done in the backend XDP so that it would be
>> transparent to the driver.
> I also don't see how you can transparently batch and still allow the
>  handler to be inlined - you'd have to stash a function pointer that
>  you could call later when you decide to dispatch a batch of packets.
>
That would be handled either in XDP_RUN or in an XDP flush call. There
should be no need to save a pointer.
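
Roughly what I have in mind -- a sketch only, all of these names are
made up, the point is just the shape of the interface:

#define XDP_BATCH_MAX   64      /* arbitrary for the example */

struct xdp_batch {
        struct xdp_buff *bufs[XDP_BATCH_MAX];
        unsigned int count;
};

/* Run the hooks over everything queued so far.  The driver's action
 * handler is expanded inline right here, so nothing has to stash a
 * function pointer for later.
 */
#define XDP_FLUSH(napi, batch, drv_xdp_handle_action)                   \
do {                                                                    \
        unsigned int __i;                                               \
        for (__i = 0; __i < (batch)->count; __i++) {                    \
                unsigned int __act =                                    \
                        xdp_hook_run(napi, (batch)->bufs[__i]);         \
                drv_xdp_handle_action((batch)->bufs[__i], __act);       \
        }                                                               \
        (batch)->count = 0;                                             \
} while (0)

/* Driver calls this per packet; buffers are just queued until flush. */
#define XDP_RUN(napi, batch, xdp, drv_xdp_handle_action)                \
do {                                                                    \
        (batch)->bufs[(batch)->count++] = (xdp);                        \
        if ((batch)->count == XDP_BATCH_MAX)                            \
                XDP_FLUSH(napi, batch, drv_xdp_handle_action);          \
} while (0)

The driver only ever calls XDP_RUN per packet and XDP_FLUSH once at the
end of its poll loop; whether the backend actually batches is invisible
to it.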

> To me, the sensible interface (which makes the batching explicit to
>  the driver, which I think is necessary) is to have an int (or maybe
>  unsigned int, which is the return type of xdp_hookfn, I'm not sure
>  which is intended) member in struct xdp_buff.
> Then the driver can call something like
>         XDP_RUN_ARRAY(napi, xdp_array, array_len);
> which is semantically equivalent to
>         unsigned int i;
>         for (i = 0; i < array_len; i++)
>                 xdp_array[i].ret = xdp_hook_run(napi, xdp_array + i);
> except that it may run the hooks in 'row-major order'.
> No callbacks needed, the driver can just loop over xdp_array reading
>  the .ret and applying the relevant action to each packet.
>
> This also has the advantage that the driver knows how many packets it
>  might have to process in a single batch (i.e. NAPI_POLL_WEIGHT) and
>  can allocate the array statically, whereas an XDP hook that tried to
>  transparently be 'helpful' would have to guess and/or use kmalloc.
>
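Just so we're looking at the same thing, the driver side of that
explicit interface would be roughly (a sketch only, using the .ret
member and XDP_RUN_ARRAY names from above -- neither exists today):

        struct xdp_buff xdp_array[NAPI_POLL_WEIGHT];
        unsigned int i, n = 0;

        /* ... fill xdp_array[0..n-1] from the RX ring, counting n ... */

        XDP_RUN_ARRAY(napi, xdp_array, n);

        for (i = 0; i < n; i++) {
                switch (xdp_array[i].ret) {
                case XDP_PASS:
                        /* build an skb, hand the packet to the stack */
                        break;
                case XDP_TX:
                        /* send the frame back out the same device */
                        break;
                case XDP_DROP:
                default:
                        /* recycle the buffer */
                        break;
                }
        }
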
But that has the disadvantage of requiring every driver to implement
yet another loop, each choosing its own bounds, and it makes the
complexity of batching explicit in the driver. Probably the biggest
issue with XDP is the potential impact on the core data path of every
driver for something that is a narrow use-case feature. Minimizing the
impact on drivers is a high-order goal, I believe, even if it comes at
the expense of complete flexibility.

Tom

> -Ed
