Date:   Wed, 4 Sep 2019 15:11:10 +0100
From:   Will Deacon <will@...nel.org>
To:     Joel Fernandes <joel@...lfernandes.org>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Alessio Balsini <balsini@...roid.com>, mingo@...nel.org,
        juri.lelli@...hat.com, linux-kernel@...r.kernel.org,
        dietmar.eggemann@....com, luca.abeni@...tannapisa.it,
        bristot@...hat.com, dvyukov@...gle.com, tglx@...utronix.de,
        vpillai@...italocean.com, kernel-team@...roid.com, will@...nel.org
Subject: Re: [RFC][PATCH 01/13] sched/deadline: Impose global limits on
 sched_attr::sched_period

Hi Joel,

On Wed, Sep 04, 2019 at 09:24:18AM -0400, Joel Fernandes wrote:
> On Wed, Sep 04, 2019 at 01:30:38PM +0200, Peter Zijlstra wrote:
> > On Wed, Sep 04, 2019 at 06:16:16AM -0400, Steven Rostedt wrote:
> > > On Mon, 2 Sep 2019 11:16:23 +0200
> > > Peter Zijlstra <peterz@...radead.org> wrote:
> > > 
> > > > in sched_dl_period_handler(). And do:
> > > > 
> > > > +	preempt_disable();
> > > > 	max = (u64)READ_ONCE(sysctl_sched_dl_period_max) * NSEC_PER_USEC;
> > > > 	min = (u64)READ_ONCE(sysctl_sched_dl_period_min) * NSEC_PER_USEC;
> > > > +	preempt_enable();
> > > 
> > > Hmm, I'm curious. Doesn't the preempt_disable/enable() also add
> > > compiler barriers which would remove the need for the READ_ONCE()s here?
> > 
> > They do add compiler barriers; but they do not avoid the compiler
> > tearing stuff up.
> 
> Neither does WRITE_ONCE() on some possibly buggy but currently circulating
> compilers :(

Hmm. The example above is using READ_ONCE, which is a different kettle of
frogs.
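
Roughly speaking, the two macros boil down to a volatile load vs. a
volatile store of the access width (the real definitions in
include/linux/compiler.h have extra machinery for odd sizes, so this is
only a sketch):

	/* Simplified, for illustration only; not the kernel's actual macros. */
	#define READ_ONCE_ISH(x)	(*(const volatile typeof(x) *)&(x))
	#define WRITE_ONCE_ISH(x, v)	(*(volatile typeof(x) *)&(x) = (v))

A compiler that tears the volatile store doesn't necessarily tear the
volatile load, and vice versa.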

> As Will said in:
> https://lore.kernel.org/lkml/20190821103200.kpufwtviqhpbuv2n@willie-the-truck/
> 
> void bar(u64 *x)
> {
> 	*(volatile u64 *)x = 0xabcdef10abcdef10;
> }
> 
> gives:
> 
> bar:
> 	mov	w1, 61200
> 	movk	w1, 0xabcd, lsl 16
> 	str	w1, [x0]
> 	str	w1, [x0, 4]
> 	ret
> 
> Speaking of which, Will, is there a plan to have compiler folks address this
> tearing issue and are bugs filed somewhere? I believe aarch64 gcc is buggy,
> and clang is better but is still buggy?

Well, it depends on your point of view. Personally, I think tearing a
volatile access (e.g. WRITE_ONCE) is buggy and it seems as though the GCC
developers agree:

https://gcc.gnu.org/ml/gcc-patches/2019-08/msg01500.html

so it's likely this will be fixed for AArch64 GCC. I couldn't persuade
clang to break the volatile case, so I think we're good there too.
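
If anyone wants to check a local toolchain, a standalone version of the
snippet above is easy enough to eyeball (uint64_t standing in for the
kernel's u64; compile with e.g. "aarch64-linux-gnu-gcc -O2 -S" and see
whether the store comes out as a single 64-bit str or a pair of 32-bit
stores):

	#include <stdint.h>

	/*
	 * 64-bit volatile store; both 32-bit halves of the constant are the
	 * same, which is what tempts a compiler into using two cheaper
	 * 32-bit stores instead of building the full 64-bit immediate.
	 */
	void bar(uint64_t *x)
	{
		*(volatile uint64_t *)x = 0xabcdef10abcdef10ULL;
	}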

For the non-volatile case, I don't actually consider it to be a bug,
although I sympathise with the desire to avoid a retrospective tree-wide
sweep adding random WRITE_ONCE invocations to stores that look like they
might be concurrent. In other words, I think I'd suggest:

  * The use of WRITE_ONCE in new code (probably with a comment justifying it)
  * The introduction of WRITE_ONCE to existing code where it can be shown to
    be fixing a real bug (e.g. by demonstrating that a compiler actually
    gets it wrong)
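
To make the first bullet concrete, the sort of thing I have in mind is a
lockless flag where the justification lives next to the access (function
and variable names here are made up for the example):

	static int foo_stop_requested;

	void foo_request_stop(void)
	{
		/* Read locklessly in foo_poll(); avoid store tearing. */
		WRITE_ONCE(foo_stop_requested, 1);
	}

	int foo_poll(void)
	{
		/* Matches the WRITE_ONCE() in foo_request_stop(). */
		return READ_ONCE(foo_stop_requested);
	}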

For the /vast/ majority of cases, the compiler will do the right thing
even without WRITE_ONCE, simply because that's going to be the most
performant choice as well.

Will
