Message-ID: <CANn89iKjUZjw-9ACNWrEd_H+o79Uwkw9NVuujQ3w=c2pGRFotg@mail.gmail.com>
Date: Sat, 6 Jan 2024 11:26:06 +0100
From: Eric Dumazet <edumazet@...gle.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: David Laight <David.Laight@...lab.com>, Noah Goldstein <goldstein.w.n@...il.com>,
kernel test robot <lkp@...el.com>, "x86@...nel.org" <x86@...nel.org>,
"oe-kbuild-all@...ts.linux.dev" <oe-kbuild-all@...ts.linux.dev>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, "tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>, "bp@...en8.de" <bp@...en8.de>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>, "hpa@...or.com" <hpa@...or.com>
Subject: Re: x86/csum: Remove unnecessary odd handling
On Sat, Jan 6, 2024 at 1:19 AM Linus Torvalds
<torvalds@...ux-foundation.org> wrote:
>
> On Fri, 5 Jan 2024 at 15:53, David Laight <David.Laight@...lab.com> wrote:
> >
> > I'd have to fix his benchmark code first :-)
> > You can't use the TSC unless you lock the cpu frequency.
> > The longer the test runs for the faster the cpu will run.
>
> They'll stabilize, it has some cycle result aging code.
>
> But yes, set the CPU policy to 'performance' or do performance
> counters if you care deeply.
>
> > On a related point, do you remember what the 'killer app'
> > was for doing the checksum in copy_to/from_user?
>
> No. It's a long time ago, and many things have changed since.
>
> It's possible the copy-and-csum is not worth it any more, simply
> because all modern network cards will do the csum for us, and I think
> loopback sets a flag saying "no need to checksum" too.
>
> But I do have a strong memory of it being a big deal back when. A
> _loong_ time ago.
>
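
(Side note on the benchmarking/TSC point above: a minimal user-space
timing sketch along those lines is below, purely illustrative and not
the benchmark being discussed. The dummy_csum() helper is hypothetical;
as David says, the TSC deltas are only comparable if the cpufreq
governor is pinned to "performance", otherwise the core clock drifts up
while the test runs.)

/* Illustrative only: time a trivial checksum loop with the TSC. */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>	/* __rdtsc() */

/* Hypothetical stand-in for the routine being measured. */
static uint32_t dummy_csum(const uint8_t *buf, size_t len)
{
	uint64_t sum = 0;

	for (size_t i = 0; i < len; i++)
		sum += buf[i];
	return (uint32_t)(sum + (sum >> 32));
}

int main(void)
{
	static uint8_t buf[1500];
	volatile uint32_t sink = 0;
	const int iters = 1000000;
	uint64_t start, stop;

	for (size_t i = 0; i < sizeof(buf); i++)
		buf[i] = (uint8_t)i;

	start = __rdtsc();
	for (int i = 0; i < iters; i++)
		sink += dummy_csum(buf, sizeof(buf));
	stop = __rdtsc();

	printf("~%llu TSC ticks per call\n",
	       (unsigned long long)((stop - start) / iters));
	return 0;
}
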
On a related note, at least with clang, I found that csum_ipv6_magic()
needlessly uses temporary on-stack storage, causing a stall on Cascade
Lake, unless I patch add32_with_carry():
diff --git a/arch/x86/include/asm/checksum_64.h b/arch/x86/include/asm/checksum_64.h
index 407beebadaf45a748f91a36b78bd1d023449b132..c3d6f47626c70d81f0c2ba401d85050b09a39922 100644
--- a/arch/x86/include/asm/checksum_64.h
+++ b/arch/x86/include/asm/checksum_64.h
@@ -171,7 +171,7 @@ static inline unsigned add32_with_carry(unsigned a, unsigned b)
 	asm("addl %2,%0\n\t"
 	    "adcl $0,%0"
 	    : "=r" (a)
-	    : "0" (a), "rm" (b));
+	    : "0" (a), "r" (b));
 	return a;
 }
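
To look at this outside the kernel, a minimal user-space sketch like
the one below (the fold_rm()/fold_r() helpers are hypothetical, they
only mimic the 64->32 fold at the tail of csum_ipv6_magic()) can be
compiled with "clang -O2 -S" to compare the code generated for the two
constraints: with "rm" clang is free to put b in a stack slot and read
it back right away, with "r" it must keep b in a register.

/* Illustrative sketch, not the kernel header itself. */
#include <stdint.h>

static inline unsigned add32_with_carry_rm(unsigned a, unsigned b)
{
	asm("addl %2,%0\n\t"
	    "adcl $0,%0"
	    : "=r" (a)
	    : "0" (a), "rm" (b));	/* "rm": b may live in memory */
	return a;
}

static inline unsigned add32_with_carry_r(unsigned a, unsigned b)
{
	asm("addl %2,%0\n\t"
	    "adcl $0,%0"
	    : "=r" (a)
	    : "0" (a), "r" (b));	/* "r": b must stay in a register */
	return a;
}

/* Hypothetical helpers mimicking the fold at the end of csum_ipv6_magic(). */
unsigned fold_rm(uint64_t sum)
{
	return add32_with_carry_rm((unsigned)sum, (unsigned)(sum >> 32));
}

unsigned fold_r(uint64_t sum)
{
	return add32_with_carry_r((unsigned)sum, (unsigned)(sum >> 32));
}
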
Before patch:
ffffffff81b9b600 <csum_ipv6_magic>:
ffffffff81b9b600: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
ffffffff81b9b601: R_X86_64_NONE __fentry__-0x4
ffffffff81b9b605: 55 push %rbp
ffffffff81b9b606: 48 89 e5 mov %rsp,%rbp
ffffffff81b9b609: 48 83 ec 04 sub $0x4,%rsp // This will be removed after patch
ffffffff81b9b60d: 0f ca bswap %edx
ffffffff81b9b60f: 89 c9 mov %ecx,%ecx
ffffffff81b9b611: 48 c1 e1 08 shl $0x8,%rcx
ffffffff81b9b615: 44 89 c0 mov %r8d,%eax
ffffffff81b9b618: 48 01 d0 add %rdx,%rax
ffffffff81b9b61b: 48 01 c8 add %rcx,%rax
ffffffff81b9b61e: 48 03 07 add (%rdi),%rax
ffffffff81b9b621: 48 13 47 08 adc 0x8(%rdi),%rax
ffffffff81b9b625: 48 13 06 adc (%rsi),%rax
ffffffff81b9b628: 48 13 46 08 adc 0x8(%rsi),%rax
ffffffff81b9b62c: 48 83 d0 00 adc $0x0,%rax
ffffffff81b9b630: 48 89 c1 mov %rax,%rcx
ffffffff81b9b633: 48 c1 e9 20 shr $0x20,%rcx
ffffffff81b9b637: 89 4d fc mov %ecx,-0x4(%rbp) // Store ecx on the stack
ffffffff81b9b63a: 03 45 fc add -0x4(%rbp),%eax // reads the value right away, stalling some Intel cpus.
ffffffff81b9b63d: 83 d0 00 adc $0x0,%eax
ffffffff81b9b640: 89 c1 mov %eax,%ecx // Note that ecx content was scratch anyway
ffffffff81b9b642: c1 e1 10 shl $0x10,%ecx
ffffffff81b9b645: 25 00 00 ff ff and $0xffff0000,%eax
ffffffff81b9b64a: 01 c8 add %ecx,%eax
ffffffff81b9b64c: 15 ff ff 00 00 adc $0xffff,%eax
ffffffff81b9b651: f7 d0 not %eax
ffffffff81b9b653: c1 e8 10 shr $0x10,%eax
ffffffff81b9b656: 48 83 c4 04 add $0x4,%rsp
ffffffff81b9b65a: 5d pop %rbp
ffffffff81b9b65b: c3 ret
ffffffff81b9b65c: cc int3
ffffffff81b9b65d: 00 00 add %al,(%rax)
ffffffff81b9b65f: cc int3
After patch:
00000000000000a0 <csum_ipv6_magic>:
a0: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
a1: R_X86_64_NONE __fentry__-0x4
a5: 55 push %rbp
a6: 48 89 e5 mov %rsp,%rbp
a9: 0f ca bswap %edx
ab: 89 c9 mov %ecx,%ecx
ad: 48 c1 e1 08 shl $0x8,%rcx
b1: 44 89 c0 mov %r8d,%eax
b4: 48 01 d0 add %rdx,%rax
b7: 48 01 c8 add %rcx,%rax
ba: 48 03 07 add (%rdi),%rax
bd: 48 13 47 08 adc 0x8(%rdi),%rax
c1: 48 13 06 adc (%rsi),%rax
c4: 48 13 46 08 adc 0x8(%rsi),%rax
c8: 48 83 d0 00 adc $0x0,%rax
cc: 48 89 c1 mov %rax,%rcx
cf: 48 c1 e9 20 shr $0x20,%rcx
d3: 01 c8 add %ecx,%eax // just better !
d5: 83 d0 00 adc $0x0,%eax
d8: 89 c1 mov %eax,%ecx
da: c1 e1 10 shl $0x10,%ecx
dd: 25 00 00 ff ff and $0xffff0000,%eax
e2: 01 c8 add %ecx,%eax
e4: 15 ff ff 00 00 adc $0xffff,%eax
e9: f7 d0 not %eax
eb: c1 e8 10 shr $0x10,%eax
ee: 5d pop %rbp
ef: c3 ret
f0: cc int3