Date:   Thu, 15 Dec 2022 15:29:25 +0100
From:   Toke Høiland-Jørgensen <>
To:     Larysa Zaremba <>,
        Stanislav Fomichev <>,
        David Ahern <>,
        Jakub Kicinski <>,
        Willem de Bruijn <>,
        Jesper Dangaard Brouer <>,
        Anatoly Burakov <>,
        Alexander Lobakin <>,
        Magnus Karlsson <>,
        Maryam Tahhan <>
Subject: Re: [xdp-hints] Re: [RFC bpf-next v2 10/14] ice: Support rx
 timestamp metadata for xdp

Larysa Zaremba <> writes:

> On Thu, Nov 03, 2022 at 08:25:28PM -0700, Stanislav Fomichev wrote:
>> +			/* if (r5 == NULL) return; */
>> +			BPF_JMP_IMM(BPF_JNE, BPF_REG_5, 0, S16_MAX),
> A jump of S16_MAX crashes my system, and I do not see such jumps used
> very often in in-tree bpf code; setting a fixed jump length worked for me.
> Also, I think BPF_JEQ is the correct condition in this case, not BPF_JNE.
> But the main reason for my reply is that I have implemented RX hash hint
> for ice both as unrolled bpf code and with BPF_EMIT_CALL [0].
> Both bpf_xdp_metadata_rx_hash() and bpf_xdp_metadata_rx_hash_supported()
> are implemented in both of those ways.
> RX hash is the easiest hint to read, so the performance difference
> should be more visible than when reading the timestamp.
> Counting packets in an rxdrop XDP program on a single queue
> gave me the following numbers:
> - unrolled:		41264360 pps
> - BPF_EMIT_CALL:	40370651 pps
> So, reading a single hint in an unrolled way instead of calling 2 driver
> functions in a row gives us a 2.2% performance boost.
> Surely, the difference will increase if we read more than a single hint.
> Therefore, it would be great to implement at least some of the simple
> hint functions as unrolled.
> [0]

Right, so this corresponds to ~0.5ns function call overhead, which is a
bit less than what I was seeing[0], but you're also getting 41 Mpps
where I was getting 25, so I assume your hardware is newer :)

And yeah, I agree that ideally we really should inline these functions.
However, seeing as that may be a ways off[1], I suppose we'll have to
live with the function call overhead for now. As long as we're
reasonably confident that inlining can be added later without disruptive
API breaks I am OK with proceeding without inlining for now, though.
That way, inlining will just be a nice performance optimisation once it
does land, and who knows, maybe this will provide the impetus for
someone to land it sooner rather than later...


