Message-ID: <1c3e478c-5000-1726-6ce9-9b0a3ccfe1e5@gmail.com>
Date: Fri, 4 Sep 2020 09:15:04 -0600
From: David Ahern <dsahern@...il.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>,
Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc: Lorenzo Bianconi <lorenzo@...nel.org>, netdev@...r.kernel.org,
bpf@...r.kernel.org, davem@...emloft.net,
lorenzo.bianconi@...hat.com, echaudro@...hat.com,
sameehj@...zon.com, kuba@...nel.org, john.fastabend@...il.com,
daniel@...earbox.net, ast@...nel.org, shayagr@...zon.com,
David Ahern <dsahern@...nel.org>
Subject: Re: [PATCH v2 net-next 1/9] xdp: introduce mb in xdp_buff/xdp_frame
On 9/4/20 1:19 AM, Jesper Dangaard Brouer wrote:
> On Thu, 3 Sep 2020 18:07:05 -0700
> Alexei Starovoitov <alexei.starovoitov@...il.com> wrote:
>
>> On Thu, Sep 03, 2020 at 10:58:45PM +0200, Lorenzo Bianconi wrote:
>>> Introduce multi-buffer bit (mb) in xdp_frame/xdp_buffer to specify
>>> if shared_info area has been properly initialized for non-linear
>>> xdp buffers
>>>
>>> Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>
>>> ---
>>> include/net/xdp.h | 8 ++++++--
>>> net/core/xdp.c | 1 +
>>> 2 files changed, 7 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/include/net/xdp.h b/include/net/xdp.h
>>> index 3814fb631d52..42f439f9fcda 100644
>>> --- a/include/net/xdp.h
>>> +++ b/include/net/xdp.h
>>> @@ -72,7 +72,8 @@ struct xdp_buff {
>>> void *data_hard_start;
>>> struct xdp_rxq_info *rxq;
>>> struct xdp_txq_info *txq;
>>> - u32 frame_sz; /* frame size to deduce data_hard_end/reserved tailroom*/
>>> + u32 frame_sz:31; /* frame size to deduce data_hard_end/reserved tailroom*/
>>> + u32 mb:1; /* xdp non-linear buffer */
>>> };
>>>
>>> /* Reserve memory area at end-of data area.
>>> @@ -96,7 +97,8 @@ struct xdp_frame {
>>> u16 len;
>>> u16 headroom;
>>> u32 metasize:8;
>>> - u32 frame_sz:24;
>>> + u32 frame_sz:23;
>>> + u32 mb:1; /* xdp non-linear frame */
>>
>> Hmm. Last time I checked compilers were generating ugly code with bitfields.
>> Not performant and not efficient.
>> frame_sz is used in the fast path.
>> I suspect the first hunk alone will cause performance degradation.
>> Could you use normal u8 or u32 flag field?
>
> For struct xdp_buff, sure, we can do this. For struct xdp_frame, I'm
> not so sure, as it is a state-compressed version of xdp_buff plus extra
> information. The xdp_frame has been called skb-light, and I know
> people (e.g. Ahern) want to add more info to it (vlan, RX-hash, csum),
> and we must keep it to one cache line for performance reasons.
>
> You do make a good point that these bit-fields might hurt performance
> even more. I guess we need to test this, as I constantly worry that we
> will slowly kill XDP performance with a thousand paper cuts.
>
That struct is tight on space, and we have to be very smart about
additions. dev_rx, for example, seems like it could be the netdev
index rather than a pointer, or perhaps could be removed completely. I
believe it is only used for one use case (redirects to CPUMAP); maybe
that code can be refactored to handle the dev outside of xdp_frame.
xdp_mem_info is two u32's; the type in that struct really only needs a
u8. Reclaiming the space would mean dropping the struct in favor of
separate elements, but as we approach the 64B limit this is a place to
change: e.g., make it a single u32 with the id limited to 24 bits; the
rhashtable key can stay a u32, now carrying the combined type + id.
As for frame_sz, why does it need to be larger than a u16?
If it really needs to be larger than a u16, there are several examples
of using a bit (or bits) in the data path. dst metrics, for example,
use the lowest 4 bits of the dst pointer as flag bits, accessed with a
mask and helper accessors rather than a C bitfield. Perhaps that is
the way to go here.