Message-ID: <pj41zlmty6lvzl.fsf@u68c7b5b1d2d758.ant.amazon.com>
Date: Mon, 21 Dec 2020 22:55:42 +0200
From: Shay Agroskin <shayagr@...zon.com>
To: Lorenzo Bianconi <lorenzo.bianconi@...hat.com>
CC: Saeed Mahameed <saeed@...nel.org>,
Lorenzo Bianconi <lorenzo@...nel.org>,
BPF-dev-list <bpf@...r.kernel.org>,
"Network Development" <netdev@...r.kernel.org>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
"Alexei Starovoitov" <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
"Jubran, Samih" <sameehj@...zon.com>,
John Fastabend <john.fastabend@...il.com>,
David Ahern <dsahern@...nel.org>,
"Jesper Brouer" <brouer@...hat.com>,
Eelco Chaudron <echaudro@...hat.com>,
"Jason Wang" <jasowang@...hat.com>
Subject: Re: [PATCH v5 bpf-next 03/14] xdp: add xdp_shared_info data structure
Lorenzo Bianconi <lorenzo.bianconi@...hat.com> writes:
>>
>>
>> Lorenzo Bianconi <lorenzo.bianconi@...hat.com> writes:
>>
>> >> On Mon, 2020-12-07 at 17:32 +0100, Lorenzo Bianconi wrote:
>> >> > Introduce the xdp_shared_info data structure to contain info about
>> >> > "non-linear" xdp frames. xdp_shared_info will alias skb_shared_info,
>> >> > allowing most of the frags to be kept in the same cache-line.
>> [...]
>> >>
>> >> > + u16 nr_frags;
>> >> > + u16 data_length; /* paged area length */
>> >> > + skb_frag_t frags[MAX_SKB_FRAGS];
>> >>
>> >> why MAX_SKB_FRAGS? Just use a flexible array member
>> >> skb_frag_t frags[];
>> >>
>> >> and enforce the size via nr_frags and on the construction of the
>> >> tailroom-preserved buffer, which is already being done.
>> >>
>> >> This is a waste of space, at least by definition of the struct.
>> >> In your use case you do:
>> >> memcpy(frag_list, xdp_sinfo->frags, sizeof(skb_frag_t) * num_frags);
>> >> and the tailroom space was already preserved for a full skb_shinfo,
>> >> so I don't see why you need this array to be of a fixed
>> >> MAX_SKB_FRAGS size.
>> >
>> > In order to avoid cache-misses, xdp_shared_info is built as a variable
>> > on the mvneta_rx_swbm() stack and it is written to the "shared_info"
>> > area only on the last fragment in mvneta_swbm_add_rx_fragment(). I used
>> > MAX_SKB_FRAGS to be aligned with the skb_shared_info struct, but we can
>> > probably use even a smaller value.
>> > Another approach would be to define two different structs, e.g.
>> >
>> > struct xdp_frag_metadata {
>> > 	u16 nr_frags;
>> > 	u16 data_length; /* paged area length */
>> > };
>> >
>> > struct xdp_frags {
>> > 	skb_frag_t frags[MAX_SKB_FRAGS];
>> > };
>> >
>> > and then define xdp_shared_info as
>> >
>> > struct xdp_shared_info {
>> > 	struct xdp_frag_metadata meta;
>> > 	skb_frag_t frags[];
>> > };
>> >
>> > In this way we can probably optimize the space. What do you think?
>>
>> We're still reserving ~sizeof(skb_shared_info) bytes at the end of
>> the first buffer, and it seems like in the mvneta code you keep
>> updating all three fields (frags, nr_frags and data_length).
>> Can you explain how the space is optimized by splitting the
>> structs, please?
>
> Using the xdp_shared_info struct we will have the first 3 fragments in
> the same cacheline as nr_frags, while using the skb_shared_info struct
> only the first fragment will be in the same cacheline as nr_frags.
> Moreover, skb_shared_info has multiple fields unused by xdp.
>
> Regards,
> Lorenzo
>
Thanks for your reply. I was actually referring to your suggestion to
Saeed, namely defining

struct xdp_shared_info {
	struct xdp_frag_metadata meta;
	skb_frag_t frags[];
};

I don't see what benefits there are to this scheme compared to the
original patch; see the sketch below for how I compare the two.
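
For comparison, this is roughly the layout from the original patch (my
reading; the offset comments assume 64-byte cachelines, 64-bit pointers
and sizeof(skb_frag_t) == 16, so take them as an illustration rather
than exact numbers):

struct xdp_shared_info {
	u16 nr_frags;			/* offset 0 */
	u16 data_length;		/* offset 2, paged area length */
	skb_frag_t frags[MAX_SKB_FRAGS];/* offset 8: skb_frag_t holds a
					 * page pointer, so the array is
					 * 8-byte aligned. frags[0..2] end
					 * at byte 56 and share the first
					 * cacheline with nr_frags;
					 * frags[3] spills into the next.
					 */
};

With the split variant above, the buffer still has to reserve
sizeof(skb_shared_info) bytes of tailroom, mvneta still writes
meta.nr_frags, meta.data_length and the frags[] entries, and the first
fragments still start at the same offset right after the metadata, so
both the reserved space and the cacheline behaviour look the same to me
in the two variants.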
Thanks,
Shay
>>
>> >>
>> >> > +};
>> >> > +
[...]
>>
>> Saeed, the stack receives skb_shared_info when the frames are
>> passed up to it (skb_add_rx_frag is used to add the whole
>> information to the skb's shared info), and for the XDP_REDIRECT
>> use case it doesn't seem like all drivers check the page's
>> tailroom for more information anyway (ena doesn't, at least).
>> Can you please explain what you mean by "break the stack"?
>>
>> Thanks, Shay
>>