Message-ID: <d97e33e0-0b23-4f58-b1a4-5e171defe732@intel.com>
Date: Thu, 13 Nov 2025 10:40:02 -0800
From: Dave Hansen <dave.hansen@...el.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
x86@...nel.org, "H . Peter Anvin" <hpa@...or.com>,
"David S . Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, linux-kernel
<linux-kernel@...r.kernel.org>, Simon Horman <horms@...nel.org>,
Kuniyuki Iwashima <kuniyu@...gle.com>, netdev@...r.kernel.org,
Eric Dumazet <eric.dumazet@...il.com>
Subject: Re: [PATCH] x86_64: inline csum_ipv6_magic()

On 11/13/25 10:18, Eric Dumazet wrote:
> So it would seem the patch saves 371 bytes for this config.
>
>> Or, is there a discrete, measurable performance gain from doing this?
> IPv6 incoming TCP/UDP paths call this function twice per packet, which is sad...
> One call per TX packet.
>
> Depending on the cpus I can see csum_ipv6_magic() using up to 0.75 %
> of cpu cycles.
> Then there is the cost in the callers, harder to measure...
Oh, wow. That's more than I was expecting. But it does make sense.

Thanks for the info. I'll stick this in the queue to apply in a month or
so after the next -rc1, unless it needs more urgency.

Acked-by: Dave Hansen <dave.hansen@...ux.intel.com>
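For readers less familiar with the function under discussion: csum_ipv6_magic() computes the IPv6 upper-layer pseudo-header checksum (RFC 8200, Section 8.1) over the source and destination addresses, the upper-layer length, and the next-header value, and returns the folded one's-complement result. Below is a minimal portable C sketch of the equivalent computation, not the kernel's code; names and the address representation are hypothetical, and the actual x86_64 version being inlined by the patch is architecture-optimized.

```c
#include <stdint.h>

/* Hypothetical stand-in for struct in6_addr: the 128-bit address
 * viewed as eight 16-bit words (assumed already split for summing). */
struct ip6_addr {
	uint16_t words[8];
};

/* Fold a wide one's-complement accumulator down to 16 bits and
 * return its complement, as the final checksum. */
static uint16_t csum_fold(uint64_t sum)
{
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}

/* Sketch of the IPv6 pseudo-header checksum: sum the source and
 * destination addresses, the upper-layer packet length, and the
 * next-header (protocol) value on top of any partial checksum
 * already accumulated over the payload, then fold. */
static uint16_t csum_ipv6_magic_sketch(const struct ip6_addr *saddr,
				       const struct ip6_addr *daddr,
				       uint32_t len, uint8_t proto,
				       uint32_t partial)
{
	uint64_t sum = partial;
	int i;

	for (i = 0; i < 8; i++)
		sum += saddr->words[i] + daddr->words[i];
	sum += len;	/* upper-layer length */
	sum += proto;	/* next-header value, zero-extended */
	return csum_fold(sum);
}
```

Because the one's-complement fold absorbs carries from any position, the 32-bit length and the protocol byte can be added as single wide addends rather than word by word, which is also why a hand-tuned assembly version with add-with-carry chains pays off on hot RX/TX paths.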