Message-ID: <877dyelb0p.fsf@toke.dk>
Date: Fri, 17 Apr 2020 11:25:10 +0200
From: Toke Høiland-Jørgensen <toke@...hat.com>
To: David Ahern <dsahern@...il.com>, David Ahern <dsahern@...nel.org>,
netdev@...r.kernel.org
Cc: davem@...emloft.net, kuba@...nel.org,
prashantbhole.linux@...il.com, jasowang@...hat.com,
brouer@...hat.com, toshiaki.makita1@...il.com,
daniel@...earbox.net, john.fastabend@...il.com, ast@...nel.org,
kafai@...com, songliubraving@...com, yhs@...com, andriin@...com,
David Ahern <dahern@...italocean.com>
Subject: Re: [PATCH RFC-v5 bpf-next 09/12] dev: Support xdp in the Tx path for xdp_frames
David Ahern <dsahern@...il.com> writes:
> On 4/16/20 8:02 AM, Toke Høiland-Jørgensen wrote:
>>> diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
>>> index 58bdca5d978a..bedecd07d898 100644
>>> --- a/kernel/bpf/devmap.c
>>> +++ b/kernel/bpf/devmap.c
>>> @@ -322,24 +322,33 @@ static int bq_xmit_all(struct xdp_dev_bulk_queue *bq, u32 flags)
>>> {
>>> struct net_device *dev = bq->dev;
>>> int sent = 0, drops = 0, err = 0;
>>> + unsigned int count = bq->count;
>>> int i;
>>>
>>> - if (unlikely(!bq->count))
>>> + if (unlikely(!count))
>>> return 0;
>>>
>>> - for (i = 0; i < bq->count; i++) {
>>> + for (i = 0; i < count; i++) {
>>> struct xdp_frame *xdpf = bq->q[i];
>>>
>>> prefetch(xdpf);
>>> }
>>>
>>> - sent = dev->netdev_ops->ndo_xdp_xmit(dev, bq->count, bq->q, flags);
>>> + if (static_branch_unlikely(&xdp_egress_needed_key)) {
>>> + count = do_xdp_egress_frame(dev, bq->q, &count);
>>
>> nit: seems a bit odd to pass a pointer to count, then reassign count
>> with the return value?
>
> Thanks for noticing that. Leftover from the evolution of this. Changed to:
> count = do_xdp_egress_frame(dev, bq->q, count);
Thought it might be. Great!
>>> diff --git a/net/core/dev.c b/net/core/dev.c
>>> index 1bbaeb8842ed..f23dc6043329 100644
>>> --- a/net/core/dev.c
>>> +++ b/net/core/dev.c
>>> @@ -4720,6 +4720,76 @@ u32 do_xdp_egress_skb(struct net_device *dev, struct sk_buff *skb)
>>> }
>>> EXPORT_SYMBOL_GPL(do_xdp_egress_skb);
>>>
>>> +static u32 __xdp_egress_frame(struct net_device *dev,
>>> + struct bpf_prog *xdp_prog,
>>> + struct xdp_frame *xdp_frame,
>>> + struct xdp_txq_info *txq)
>>> +{
>>> + struct xdp_buff xdp;
>>> + u32 act;
>>> +
>>> + xdp.data_hard_start = xdp_frame->data - xdp_frame->headroom;
>>> + xdp.data = xdp_frame->data;
>>> + xdp.data_end = xdp.data + xdp_frame->len;
>>> + xdp_set_data_meta_invalid(&xdp);
>>
>> Why invalidate the metadata? On the contrary we'd want metadata from the
>> RX side to survive, wouldn't we?
>
> right, replaced with:
> xdp.data_meta = xdp.data - metasize;
OK.
>>
>>> + xdp.txq = txq;
>>> +
>>> + act = bpf_prog_run_xdp(xdp_prog, &xdp);
>>> + act = handle_xdp_egress_act(act, dev, xdp_prog);
>>> +
>>> + /* if not dropping frame, readjust pointers in case
>>> + * program made changes to the buffer
>>> + */
>>> + if (act != XDP_DROP) {
>>> + int headroom = xdp.data - xdp.data_hard_start;
>>> + int metasize = xdp.data - xdp.data_meta;
>>> +
>>> + metasize = metasize > 0 ? metasize : 0;
>>> + if (unlikely((headroom - metasize) < sizeof(*xdp_frame)))
>>> + return XDP_DROP;
>>> +
>>> + xdp_frame = xdp.data_hard_start;
>>> + xdp_frame->data = xdp.data;
>>> + xdp_frame->len = xdp.data_end - xdp.data;
>>> + xdp_frame->headroom = headroom - sizeof(*xdp_frame);
>>> + xdp_frame->metasize = metasize;
>>> + /* xdp_frame->mem is unchanged */
>>> + }
>>> +
>>> + return act;
>>> +}
>>> +
>>> +unsigned int do_xdp_egress_frame(struct net_device *dev,
>>> + struct xdp_frame **frames,
>>> + unsigned int *pcount)
>>> +{
>>> + struct bpf_prog *xdp_prog;
>>> + unsigned int count = *pcount;
>>> +
>>> + xdp_prog = rcu_dereference(dev->xdp_egress_prog);
>>> + if (xdp_prog) {
>>> + struct xdp_txq_info txq = { .dev = dev };
>>
>> Do you have any thoughts on how to populate this for the redirect case?
>
> Not sure I understand. This is the redirect case, i.e., on RX a program
> is run, XDP_REDIRECT is returned and the packet is queued. Once the
> queue fills or a flush is done, bq_xmit_all() is called to send the
> frames.
I just meant that eventually we'd want to populate xdp_txq_info with a
TX HWQ index (and possibly other stuff), right? So how do you figure
we'd get that information at this call site?
-Toke