Message-ID: <CANn89iK6hwMo_i3F8pnCUQmCJ+wWq8HJOu-dGz94REZr+2oSGQ@mail.gmail.com>
Date: Tue, 23 Dec 2025 06:03:33 +0100
From: Eric Dumazet <edumazet@...gle.com>
To: Dave Hansen <dave.hansen@...el.com>
Cc: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H . Peter Anvin" <hpa@...or.com>, "David S . Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, linux-kernel <linux-kernel@...r.kernel.org>,
Simon Horman <horms@...nel.org>, Kuniyuki Iwashima <kuniyu@...gle.com>, netdev@...r.kernel.org,
Eric Dumazet <eric.dumazet@...il.com>
Subject: Re: [PATCH] x86_64: inline csum_ipv6_magic()
On Thu, Nov 13, 2025 at 7:40 PM Dave Hansen <dave.hansen@...el.com> wrote:
>
> On 11/13/25 10:18, Eric Dumazet wrote:
> > So it would seem the patch saves 371 bytes for this config.
> >
> >> Or, is there a discrete, measurable performance gain from doing this?
> > IPv6 incoming TCP/UDP paths call this function twice per packet, which is sad...
> > There is also one call per TX packet.
> >
> > Depending on the CPU, I can see csum_ipv6_magic() using up to 0.75%
> > of CPU cycles.
> > Then there is the cost in the callers, which is harder to measure...
>
> Oh, wow. That's more than I was expecting. But it does make sense.
> Thanks for the info. I'll stick this in the queue to apply in a month or
> so after the next -rc1, unless it needs more urgency.
>
> Acked-by: Dave Hansen <dave.hansen@...ux.intel.com>
Gentle ping, I have not seen this patch reach the tip tree yet.
Thanks a lot!
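
For archive readers, below is a minimal, portable sketch of the arithmetic
csum_ipv6_magic() performs: the one's-complement sum of the IPv6
pseudo-header (source address, destination address, upper-layer length,
next-header value) plus an already-computed partial sum of the payload,
folded to 16 bits. It is an illustration only, not the kernel's code; the
names ipv6_pseudo_csum() and sum_be16() are made up for this sketch, and
the x86_64 implementation discussed in the patch does the same math with
add-with-carry asm and kernel types.

#include <stdint.h>
#include <stddef.h>

/* Add a buffer to the running sum, 16 bits at a time, big-endian. */
static uint64_t sum_be16(const uint8_t *p, size_t len, uint64_t sum)
{
	size_t i;

	for (i = 0; i + 1 < len; i += 2)
		sum += ((uint32_t)p[i] << 8) | p[i + 1];
	return sum;
}

static uint16_t ipv6_pseudo_csum(const uint8_t saddr[16],
				 const uint8_t daddr[16],
				 uint32_t len, uint8_t proto,
				 uint64_t partial)
{
	uint64_t sum = partial;

	sum = sum_be16(saddr, 16, sum);	/* 128-bit source address */
	sum = sum_be16(daddr, 16, sum);	/* 128-bit destination address */
	sum += len >> 16;		/* 32-bit upper-layer length ... */
	sum += len & 0xffff;
	sum += proto;			/* ... and next-header value */

	/* Fold carries back in (end-around carry), then complement. */
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);

	return (uint16_t)~sum;
}

The work itself is tiny, which is why the call/return and register
shuffling overhead is worth removing by inlining when, as noted above, the
function runs twice per received IPv6 TCP/UDP packet and once per
transmitted one.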