Message-ID: <871r1laqg6.fsf@toke.dk>
Date: Thu, 06 Jan 2022 15:28:41 +0100
From: Toke Høiland-Jørgensen <toke@...hat.com>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc: Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <kafai@...com>,
Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...nel.org>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Jesper Dangaard Brouer <hawk@...nel.org>,
netdev@...r.kernel.org, bpf@...r.kernel.org
Subject: Re: [PATCH bpf-next v5 6/7] bpf: Add "live packet" mode for XDP in
bpf_prog_run()

Alexei Starovoitov <alexei.starovoitov@...il.com> writes:

> On Mon, Jan 03, 2022 at 04:08:11PM +0100, Toke Høiland-Jørgensen wrote:
>> +static void xdp_test_run_init_page(struct page *page, void *arg)
>> +{
>> + struct xdp_page_head *head = phys_to_virt(page_to_phys(page));
>> + struct xdp_buff *new_ctx, *orig_ctx;
>> + u32 headroom = XDP_PACKET_HEADROOM;
>> + struct xdp_test_data *xdp = arg;
>> + size_t frm_len, meta_len;
>> + struct xdp_frame *frm;
>> + void *data;
>> +
>> + orig_ctx = xdp->orig_ctx;
>> + frm_len = orig_ctx->data_end - orig_ctx->data_meta;
>> + meta_len = orig_ctx->data - orig_ctx->data_meta;
>> + headroom -= meta_len;
>> +
>> + new_ctx = &head->ctx;
>> + frm = &head->frm;
>> + data = &head->data;
>> + memcpy(data + headroom, orig_ctx->data_meta, frm_len);
>> +
>> + xdp_init_buff(new_ctx, TEST_XDP_FRAME_SIZE, &xdp->rxq);
>> + xdp_prepare_buff(new_ctx, data, headroom, frm_len, true);
>> + new_ctx->data_meta = new_ctx->data + meta_len;
>
> data vs data_meta is the other way around, no?
>
> Probably needs a selftest to make sure.

Yup, you're right; nice catch! Will fix and add a test for it.

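Concretely, I think the fix is just to flip the assignment so that data is
derived from data_meta instead of the other way around; something like this
(untested sketch, will verify with the selftest in the respin):

	xdp_init_buff(new_ctx, TEST_XDP_FRAME_SIZE, &xdp->rxq);
	xdp_prepare_buff(new_ctx, data, headroom, frm_len, true);
	/* data_meta stays at the start of the copied area (where
	 * xdp_prepare_buff() left it); data is advanced past the metadata
	 */
	new_ctx->data = new_ctx->data_meta + meta_len;
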
>> +static int xdp_recv_frames(struct xdp_frame **frames, int nframes,
>> + struct net_device *dev)
>> +{
>> + gfp_t gfp = __GFP_ZERO | GFP_ATOMIC;
>> + void *skbs[TEST_XDP_BATCH];
>> + int i, n;
>> + LIST_HEAD(list);
>> +
>> + n = kmem_cache_alloc_bulk(skbuff_head_cache, gfp, nframes, skbs);
>> + if (unlikely(n == 0)) {
>> + for (i = 0; i < nframes; i++)
>> + xdp_return_frame(frames[i]);
>> + return -ENOMEM;
>> + }
>> +
>> + for (i = 0; i < nframes; i++) {
>> + struct xdp_frame *xdpf = frames[i];
>> + struct sk_buff *skb = skbs[i];
>> +
>> + skb = __xdp_build_skb_from_frame(xdpf, skb, dev);
>> + if (!skb) {
>> + xdp_return_frame(xdpf);
>> + continue;
>> + }
>> +
>> + list_add_tail(&skb->list, &list);
>> + }
>> + netif_receive_skb_list(&list);
>
> Does it need local_bh_disable() like cpumap does?

Yes, I think it probably does. At the very least it should improve
performance, since it makes sure the whole batch is processed at once.
Will add!

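I.e., roughly the same pattern as cpumap uses, wrapping the receive in a
BH-disabled section; something like this (untested sketch):

	/* disable softirqs around the receive so the whole batch is
	 * flushed in one go, as cpumap does; could also be extended to
	 * cover the skb-building loop above
	 */
	local_bh_disable();
	netif_receive_skb_list(&list);
	local_bh_enable();
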
> I've applied patches 1 - 5.

Thanks! Will respin this and the selftest :)