Date:   Sun, 3 Apr 2022 00:36:53 -0700
From:   Andrew Pinski <pinskia@...il.com>
To:     Mark Rutland <mark.rutland@....com>
Cc:     Jeremy Linton <jeremy.linton@....com>,
        GCC Mailing List <gcc@....gnu.org>, f.fainelli@...il.com,
        maz@...nel.org, marcan@...can.st,
        LKML <linux-kernel@...r.kernel.org>, opendmb@...il.com,
        Catalin Marinas <catalin.marinas@....com>, will@...nel.org,
        linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH] arm64/io: Remind compiler that there is a memory side effect

On Fri, Apr 1, 2022 at 10:24 AM Mark Rutland via Gcc <gcc@....gnu.org> wrote:
>
> Hi Jeremy,
>
> Thanks for raising this.
>
> On Fri, Apr 01, 2022 at 11:44:06AM -0500, Jeremy Linton wrote:
> > The relaxed variants of the read/write macros are only declared
> > as `asm volatile()`, which forces the compiler to generate the
> > instruction in the code path as intended. The only problem
> > is that this doesn't also tell the compiler that there may
> > be memory side effects. This means that if a function is composed
> > entirely of relaxed io operations, the compiler may think that
> > it only has register side effects and doesn't need to be called.
>
> As I mentioned in a private mail, I don't think the reasoning above is
> correct, and I think this is a miscompilation (i.e. a compiler bug).
>
> The important thing is that any `asm volatile` may have side effects
> beyond memory or GPRs, and whether the assembly contains a memory
> load/store is immaterial. We should not need to add a memory clobber in order
> to retain the volatile semantics.
>
> See:
>
>   https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html#Volatile
>
> ... and consider the x86 example that reads rdtsc, or an arm64 sequence like:
>
> | void do_sysreg_thing(void)
> | {
> |       unsigned long tmp;
> |
> |       tmp = read_sysreg(some_reg);
> |       tmp |= SOME_BIT;
> |       write_sysreg(tmp, some_reg);
> | }
>
> ... where there's no memory that we should need to hazard against.
>
> This patch might work around the issue, but I don't believe it is a correct fix.

It might not be the narrowest fix, but it is a fix.
The best fix is to tell the compiler that you are writing to that
location of memory. `asm volatile` alone does not do what you think it does.
You didn't read further down, about memory clobbers:
https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html#Clobbers-and-Scratch-Registers
Specifically this part:

| The "memory" clobber tells the compiler that the assembly code
| performs memory reads or writes to items other than those listed in
| the input and output operands.
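
To make that concrete, here is a minimal sketch of the two options
(illustrative names, not the kernel's actual accessors): the blanket
"memory" clobber from the patch, versus describing the written location
precisely with a memory output operand (assuming AArch64's "Q"
constraint, a single-base-register memory operand):

#include <stdint.h>

/* Option 1: blanket "memory" clobber, as in the patch under discussion.
 * The compiler must assume the asm may read or write any memory. */
static inline void sketch_writeb_clobber(uint8_t val, volatile uint8_t *addr)
{
	asm volatile("strb %w0, [%1]"
		     : : "rZ" (val), "r" (addr) : "memory");
}

/* Option 2: a memory output operand tells the compiler exactly which
 * object the asm writes, i.e. "you are writing to that location". */
static inline void sketch_writeb_operand(uint8_t val, volatile uint8_t *addr)
{
	asm volatile("strb %w1, %0"
		     : "=Q" (*addr)
		     : "rZ" (val));
}

With option 2 the compiler may still keep unrelated memory cached across
the accessor, whereas the "memory" clobber forces it to assume everything
may have changed.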

>
> > For an example function look at bcmgenet_enable_dma(), before the
> > relaxed variants were removed. When built with gcc12 the code
> > contains the asm blocks as expected, but then the function is
> > never called.
>
> So it sounds like this is a regression in GCC 12, which IIUC isn't released yet
> per:

It is NOT a bug in GCC 12. You just depended on behavior which
accidentally worked in the cases you were looking at. GCC 12 did not
even change in this area.
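
For reference, a minimal, hypothetical reproducer of the pattern being
relied on (illustrative names only, not the actual bcmgenet code): a
function whose body consists solely of asm volatile stores with no
memory clobber and no memory operand, so nothing in its interface tells
the compiler that memory is touched:

#include <stdint.h>

static inline void toy_writel_relaxed(uint32_t val, volatile void *addr)
{
	/* No "memory" clobber and no memory operand here. */
	asm volatile("str %w0, [%1]" : : "rZ" (val), "r" (addr));
}

void toy_enable_dma(volatile uint8_t *base)
{
	/* Nothing here tells the compiler that memory is written. */
	toy_writel_relaxed(0x1, base + 0x00);
	toy_writel_relaxed(0x3, base + 0x04);
}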

Thanks,
Andrew Pinski

>
>   https://gcc.gnu.org/gcc-12/changes.html
>
> ... which says:
>
> | Note: GCC 12 has not been released yet
>
> Surely we can fix it prior to release?
>
> Thanks,
> Mark.
>
> >
> > Signed-off-by: Jeremy Linton <jeremy.linton@....com>
> > ---
> >  arch/arm64/include/asm/io.h | 8 ++++----
> >  1 file changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
> > index 7fd836bea7eb..3cceda7948a0 100644
> > --- a/arch/arm64/include/asm/io.h
> > +++ b/arch/arm64/include/asm/io.h
> > @@ -24,25 +24,25 @@
> >  #define __raw_writeb __raw_writeb
> >  static inline void __raw_writeb(u8 val, volatile void __iomem *addr)
> >  {
> > -     asm volatile("strb %w0, [%1]" : : "rZ" (val), "r" (addr));
> > +     asm volatile("strb %w0, [%1]" : : "rZ" (val), "r" (addr) : "memory");
> >  }
> >
> >  #define __raw_writew __raw_writew
> >  static inline void __raw_writew(u16 val, volatile void __iomem *addr)
> >  {
> > -     asm volatile("strh %w0, [%1]" : : "rZ" (val), "r" (addr));
> > +     asm volatile("strh %w0, [%1]" : : "rZ" (val), "r" (addr) : "memory");
> >  }
> >
> >  #define __raw_writel __raw_writel
> >  static __always_inline void __raw_writel(u32 val, volatile void __iomem *addr)
> >  {
> > -     asm volatile("str %w0, [%1]" : : "rZ" (val), "r" (addr));
> > +     asm volatile("str %w0, [%1]" : : "rZ" (val), "r" (addr) : "memory");
> >  }
> >
> >  #define __raw_writeq __raw_writeq
> >  static inline void __raw_writeq(u64 val, volatile void __iomem *addr)
> >  {
> > -     asm volatile("str %x0, [%1]" : : "rZ" (val), "r" (addr));
> > +     asm volatile("str %x0, [%1]" : : "rZ" (val), "r" (addr) : "memory");
> >  }
> >
> >  #define __raw_readb __raw_readb
> > --
> > 2.35.1
> >
