Message-ID: <CAHk-=wiZbMaEFFftonkjDGMtFDzOEzUyhbkSzE9Th21zNGaRtA@mail.gmail.com>
Date: Sat, 14 Nov 2020 10:02:10 -0800
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: David Laight <David.Laight@...lab.com>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"x86@...nel.org" <x86@...nel.org>
Subject: Re: load_unaligned_zeropad() on x86-64
On Sat, Nov 14, 2020 at 7:53 AM David Laight <David.Laight@...lab.com> wrote:
>
> The change e419b4cc585680940bc42f8ca8a071d6023fb1bb added
> asm code for load_unaligned_zeropad().
>
> However it doesn't look right for 64bit.
> It masks the address with ~3 not ~7 so the second
> access could still cross a page boundary and fault.
Can you explain more what you think is wrong?
It uses
"and %3,%1\n\t"
for the masking, but note how that's a "%3", not a "$3".
And %3 is this asm argument
"i" (-sizeof(unsigned long)),
which is -4 or -8 (i.e. the same as ~3 or ~7).
The other masking is to get the byte offset within the unsigned long,
to do the shifting. Again, that uses '%4', which is
"i" (sizeof(unsigned long)-1));
so 3 or 7.
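For reference, this is roughly what the function looks like in
arch/x86/include/asm/word-at-a-time.h after that commit (quoting from
memory, so the tree is authoritative):

  static inline unsigned long load_unaligned_zeropad(const void *addr)
  {
          unsigned long ret, dummy;

          asm(
                  "1:\tmov %2,%0\n"
                  "2:\n"
                  ".section .fixup,\"ax\"\n"
                  "3:\t"
                  "lea %2,%1\n\t"
                  "and %3,%1\n\t"          /* mask with -sizeof(long) */
                  "mov (%1),%0\n\t"        /* aligned re-load */
                  "leal %2,%%ecx\n\t"
                  "andl %4,%%ecx\n\t"      /* byte offset in the word */
                  "shll $3,%%ecx\n\t"      /* ... in bits */
                  "shr %%cl,%0\n\t"
                  "jmp 2b\n"
                  ".previous\n"
                  _ASM_EXTABLE(1b, 3b)
                  :"=&r" (ret),"=&c" (dummy)
                  :"m" (*(unsigned long *)addr),
                   "i" (-sizeof(unsigned long)),
                   "i" (sizeof(unsigned long)-1));
          return ret;
  }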
So on my build, the code expands to
1: mov (%rsi),%rdx # MEM[(long unsigned int *)ct_58], ret
2:
.section .fixup,"ax"
3: lea (%rsi),%rcx # MEM[(long unsigned int *)ct_58], dummy
and $-8,%rcx #, dummy
mov (%rcx),%rdx # dummy, ret
leal (%rsi),%ecx # MEM[(long unsigned int *)ct_58]
andl $7,%ecx #
shll $3,%ecx
shr %cl,%rdx # ret
jmp 2b
.previous
which looks ok to me.
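In C terms, the fixup path computes something like this (just a sketch
of the semantics for a little-endian 64-bit machine, not actual kernel
code, and the function name is made up):

  #include <stdint.h>

  /* What the fixup does when the unaligned load at 'addr' faults:
   * redo the load from the aligned word containing 'addr' (which is
   * entirely inside the first page) and shift the result down so the
   * bytes at 'addr' end up at the bottom, zero-padded at the top. */
  static uint64_t zeropad_fixup(uintptr_t addr)
  {
          /* "and $-8": round down to the containing aligned word */
          uint64_t word = *(const uint64_t *)(addr & ~(uintptr_t)7);

          /* "andl $7; shll $3": byte offset within the word, in bits */
          unsigned int shift = (unsigned int)(addr & 7) * 8;

          /* "shr %cl": on little-endian this discards the bytes below
           * 'addr' and zero-fills from the top */
          return word >> shift;
  }

so masking with -8 (not -4) is exactly what the 64-bit case needs: the
aligned load can never cross a page boundary.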
It's possible that it's buggy: that page crossing basically never
happens (only with DEBUG_PAGEALLOC, and even then only in some really
odd and unlikely situations), so the fixup path gets basically zero
test coverage, which is never a good thing. But if it's buggy, it's not
obvious to me, and I don't see any ~3 issue.
Linus