Message-ID: <874jtweo56.fsf@toke.dk>
Date: Thu, 15 Dec 2022 15:29:25 +0100
From: Toke Høiland-Jørgensen <toke@...hat.com>
To: Larysa Zaremba <larysa.zaremba@...el.com>,
Stanislav Fomichev <sdf@...gle.com>
Cc: bpf@...r.kernel.org, ast@...nel.org, daniel@...earbox.net,
andrii@...nel.org, martin.lau@...ux.dev, song@...nel.org,
yhs@...com, john.fastabend@...il.com, kpsingh@...nel.org,
haoluo@...gle.com, jolsa@...nel.org,
David Ahern <dsahern@...il.com>,
Jakub Kicinski <kuba@...nel.org>,
Willem de Bruijn <willemb@...gle.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Anatoly Burakov <anatoly.burakov@...el.com>,
Alexander Lobakin <alexandr.lobakin@...el.com>,
Magnus Karlsson <magnus.karlsson@...il.com>,
Maryam Tahhan <mtahhan@...hat.com>, xdp-hints@...-project.net,
netdev@...r.kernel.org
Subject: Re: [xdp-hints] Re: [RFC bpf-next v2 10/14] ice: Support rx
timestamp metadata for xdp
Larysa Zaremba <larysa.zaremba@...el.com> writes:
> On Thu, Nov 03, 2022 at 08:25:28PM -0700, Stanislav Fomichev wrote:
>> + /* if (r5 == NULL) return; */
>> + BPF_JMP_IMM(BPF_JNE, BPF_REG_5, 0, S16_MAX),
>
> The S16_MAX jump crashes my system, and I do not see such jumps used very
> often in in-tree bpf code; setting a fixed jump length worked for me.
> Also, I think BPF_JEQ, not BPF_JNE, is the correct condition in this case.
>
> But the main reason for my reply is that I have implemented RX hash hint
> for ice both as unrolled bpf code and with BPF_EMIT_CALL [0].
> Both bpf_xdp_metadata_rx_hash() and bpf_xdp_metadata_rx_hash_supported()
> are implemented in those 2 ways.
>
> RX hash is the easiest hint to read, so the performance difference
> should be more visible than when reading the timestamp.
>
> Counting packets in an rxdrop XDP program on a single queue
> gave me the following numbers:
>
> - unrolled: 41264360 pps
> - BPF_EMIT_CALL: 40370651 pps
>
> So, reading a single hint in an unrolled way instead of calling 2 driver
> functions in a row gives us a 2.2% performance boost.
> Surely, the difference will increase if we read more than a single hint.
> Therefore, it would be great to implement at least some simple hint
> functions as unrolled.
>
> [0] https://github.com/walking-machine/linux/tree/ice-kfunc-hints-clean
Right, so this corresponds to ~0.5ns function call overhead, which is a
bit less than what I was seeing[0], but you're also getting 41 Mpps
where I was getting 25, so I assume your hardware is newer :)
And yeah, I agree that ideally we really should inline these functions.
However, seeing as that may be a ways off[1], I suppose we'll have to
live with the function call overhead for now. As long as we're
reasonably confident that inlining can be added later without disruptive
API breaks, I am OK with proceeding without inlining for now, though.
That way, inlining will just be a nice performance optimisation once it
does land, and who knows, maybe this will provide the impetus for
someone to land it sooner rather than later...
-Toke
[0] https://lore.kernel.org/r/875yellcx6.fsf@toke.dk
[1] https://lore.kernel.org/r/CAADnVQ+MyE280Q-7iw2Y-P6qGs4xcDML-tUrXEv_EQTmeESVaQ@mail.gmail.com