Message-ID: <87ilsly6db.fsf@toke.dk>
Date: Thu, 10 Mar 2022 20:06:40 +0100
From: Toke Høiland-Jørgensen <toke@...hat.com>
To: Lorenzo Bianconi <lorenzo.bianconi@...hat.com>
Cc: Lorenzo Bianconi <lorenzo@...nel.org>, bpf@...r.kernel.org,
netdev@...r.kernel.org, davem@...emloft.net, kuba@...nel.org,
ast@...nel.org, daniel@...earbox.net, brouer@...hat.com,
pabeni@...hat.com, echaudro@...hat.com, toshiaki.makita1@...il.com,
andrii@...nel.org
Subject: Re: [PATCH v4 bpf-next 2/3] veth: rework veth_xdp_rcv_skb in order
to accept non-linear skb
Lorenzo Bianconi <lorenzo.bianconi@...hat.com> writes:
>> Lorenzo Bianconi <lorenzo@...nel.org> writes:
>>
>> > Introduce the veth_convert_xdp_buff_from_skb routine in order to
>> > convert a non-linear skb into an xdp buffer. If the received skb
>> > is cloned or shared, veth_convert_xdp_buff_from_skb will copy it
>> > into a new skb composed of order-0 pages for the linear and the
>> > fragmented areas. Moreover, veth_convert_xdp_buff_from_skb
>> > guarantees we have enough headroom for xdp.
>> > This is a preliminary patch to allow attaching xdp programs with
>> > frags support on veth devices.
>> >
>> > Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>
>>
>> It's cool that we can do this! A few comments below:
>
> Hi Toke,
>
> thx for the review :)
>
> [...]
>
>> > +static int veth_convert_xdp_buff_from_skb(struct veth_rq *rq,
>> > + struct xdp_buff *xdp,
>> > + struct sk_buff **pskb)
>> > +{
>>
>> nit: It's not really "converting" an skb into an xdp_buff, since the
>> xdp_buff lives on the stack; so maybe 'veth_init_xdp_buff_from_skb()'?
>
> I kept the naming convention previously used for
> xdp_convert_frame_to_buff() (my goal would be to move it into xdp.c and
> reuse this routine for the generic-xdp use case), but I am fine with
> veth_init_xdp_buff_from_skb().
Consistency is probably good, but right now we have functions of the
form 'xdp_convert_X_to_Y()' and 'xdp_update_Y_from_X()'. So to follow
that you'd have either 'veth_update_xdp_buff_from_skb()' or
'veth_convert_skb_to_xdp_buff()' :)
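I.e., keeping the prototype from your patch and just changing the name
(a sketch, not a tested diff):

	static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
						struct xdp_buff *xdp,
						struct sk_buff **pskb);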
>> > + struct sk_buff *skb = *pskb;
>> > + u32 frame_sz;
>> >
>> > if (skb_shared(skb) || skb_head_is_locked(skb) ||
>> > - skb_is_nonlinear(skb) || headroom < XDP_PACKET_HEADROOM) {
>> > + skb_shinfo(skb)->nr_frags) {
>>
>> So this always clones the skb if it has frags? Is that really needed?
>
> if we look at skb_cow_data(), the paged area is always considered
> non-writable
Ah, right, did not know that. Seems a bit odd, but OK.
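Just to spell out the resulting invariant for myself (hypothetical
helper, not something from your patch):

	/* An skb can only be handed to XDP in place if nothing else may
	 * hold a reference to its head or to its page frags. */
	static bool veth_skb_is_xdp_writable(struct sk_buff *skb)
	{
		return !skb_shared(skb) && !skb_head_is_locked(skb) &&
		       !skb_shinfo(skb)->nr_frags;
	}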
>> Also, there's a lot of memory allocation and copying going on here; have
>> you measured the performance?
>
> even in the previous implementation we always reallocate the skb if the
> conditions above are met, so I do not expect any difference in the
> single-buffer use case, but I will run some performance tests.
No, I wouldn't expect any difference for the single-buffer case, but I
would also be interested in how big the overhead of having to copy a
whole jumbo frame turns out to be.
BTW, I just noticed one other change. Before we had:
> - headroom = skb_headroom(skb) - mac_len;
> if (skb_shared(skb) || skb_head_is_locked(skb) ||
> - skb_is_nonlinear(skb) || headroom < XDP_PACKET_HEADROOM) {
And in your patch that becomes:
> + } else if (skb_headroom(skb) < XDP_PACKET_HEADROOM &&
> + pskb_expand_head(skb, VETH_XDP_HEADROOM, 0, GFP_ATOMIC)) {
> + goto drop;
So the mac_len subtraction disappeared; that seems wrong?
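I.e., I would have expected something like this instead (untested
sketch, keeping the old mac_len handling):

	/* Headroom for XDP is counted from the mac header, which sits
	 * mac_len bytes before skb->data, so subtract it as the old
	 * code did: */
	} else if (skb_headroom(skb) - mac_len < XDP_PACKET_HEADROOM &&
		   pskb_expand_head(skb, VETH_XDP_HEADROOM, 0, GFP_ATOMIC)) {
		goto drop;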
>> > +
>> > + if (xdp_buff_has_frags(&xdp))
>> > + skb->data_len = skb_shinfo(skb)->xdp_frags_size;
>> > + else
>> > + skb->data_len = 0;
>>
>> We can remove entire frags using xdp_adjust_tail, right? Will that get
>> propagated to the skb frags in the right way, given the dual use of
>> skb_shared_info?
>
> bpf_xdp_frags_shrink_tail() can remove entire frags, and it will modify
> the metadata contained in the skb_shared_info (e.g. nr_frags or the
> frag size of the given page). We should take the data_len field into
> account in this case. Agree?
Right, that's what I assumed; makes sense. But adding a comment
mentioning this above the update of data_len might be helpful? :)
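Something like this, maybe (a rough, untested sketch reusing your hunk
above):

	/* The XDP program may have shrunk (or grown) the frags, e.g. via
	 * bpf_xdp_adjust_tail(); since the frags live in the shared
	 * skb_shared_info, sync data_len with xdp_frags_size before
	 * passing the skb on. */
	if (xdp_buff_has_frags(&xdp))
		skb->data_len = skb_shinfo(skb)->xdp_frags_size;
	else
		skb->data_len = 0;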
-Toke