Message-ID: <20180705122300.42164577@gandalf.local.home>
Date:   Thu, 5 Jul 2018 12:23:00 -0400
From:   Steven Rostedt <rostedt@...dmis.org>
To:     Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc:     Joe Korty <joe.korty@...current-rt.com>,
        Julia Cartwright <julia@...com>, tglx@...utronix.de,
        linux-rt-users@...r.kernel.org, linux-kernel@...r.kernel.org,
        Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH RT] sched/migrate_disable: fallback to preempt_disable()
 instead of barrier()

[ Added Peter ]

On Thu, 5 Jul 2018 17:50:34 +0200
Sebastian Andrzej Siewior <bigeasy@...utronix.de> wrote:

> migrate_disable() does nothing for !SMP && !RT. This is bad for two reasons:
> - The futex code relies on the fact that migrate_disable() is part of
>   spin_lock(). There is a workaround for the !in_atomic() case in
>   migrate_disable() which works around the different ordering (non-atomic
>   lock and atomic unlock).

But isn't it only part of spin_lock() in the RT case?
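
For readers outside the -rt tree, the distinction in question is roughly
the following (a sketch from memory, not the real headers; the name
spin_lock_sketch() is made up, and the exact call ordering in the -rt
tree differs between versions):

#ifdef CONFIG_PREEMPT_RT_FULL
/* RT: spin_lock() becomes an rtmutex-based sleeping lock which pins
 * the task to its CPU via migrate_disable(). */
static inline void spin_lock_sketch(spinlock_t *lock)
{
	migrate_disable();	/* stay on this CPU ...               */
	rt_spin_lock(lock);	/* ... but possibly sleep on the lock */
}
#else
/* !RT: spin_lock() only disables preemption; migrate_disable() never
 * enters the picture. */
static inline void spin_lock_sketch(spinlock_t *lock)
{
	preempt_disable();
	/* ... acquire the arch spinlock, spinning if contended ... */
}
#endif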

> 
> - we have a few instances where preempt_disable() is replaced with
>   migrate_disable().

What? Really? I thought we only replace preempt_disable() with a
local_lock(), which documents why the preempt_disable() exists in the
first place. And on non-RT, local_lock() is just preempt_disable().
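
For reference, the local_lock() pattern from the -rt tree's locallock.h
looks roughly like this (a from-memory sketch with the per-CPU accessor
details elided, not the literal macros):

#ifdef CONFIG_PREEMPT_RT_FULL
/* RT: a named per-CPU lock that documents what the section protects. */
# define local_lock(lvar)	spin_lock(this_cpu_ptr(&(lvar).lock))
# define local_unlock(lvar)	spin_unlock(this_cpu_ptr(&(lvar).lock))
#else
/* !RT: collapses to exactly the old behaviour. */
# define local_lock(lvar)	preempt_disable()
# define local_unlock(lvar)	preempt_enable()
#endif

That keeps the !RT build exactly what it was, while the RT build gets a
real lock plus the annotation of why preemption was being disabled there.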

> 
> In both cases it is bad if migrate_disable() ends up as barrier() instead
> of preempt_disable(). Let migrate_disable() fall back to preempt_disable().
> 

I still don't understand exactly what is "bad" about it.

IIRC, Peter didn't want any open coded "migrate_disable()" calls at all.
It was meant for internal use only, and specifically, only for RT.

Personally, I think making migrate_disable() into preempt_disable() on
NON_RT is incorrect too.

-- Steve



> Cc: stable-rt@...r.kernel.org
> Reported-by: joe.korty@...current-rt.com
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
> ---
>  include/linux/preempt.h | 4 ++--
>  kernel/sched/core.c     | 2 ++
>  2 files changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/preempt.h b/include/linux/preempt.h
> index 043e431a7e8e..d46688d521e6 100644
> --- a/include/linux/preempt.h
> +++ b/include/linux/preempt.h
> @@ -241,8 +241,8 @@ static inline int __migrate_disabled(struct task_struct *p)
>  }
>  
>  #else
> -#define migrate_disable()		barrier()
> -#define migrate_enable()		barrier()
> +#define migrate_disable()		preempt_disable()
> +#define migrate_enable()		preempt_enable()
>  static inline int __migrate_disabled(struct task_struct *p)
>  {
>  	return 0;
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index ac3fb8495bd5..626a62218518 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -7326,6 +7326,7 @@ void migrate_disable(void)
>  #endif
>  
>  	p->migrate_disable++;
> +	preempt_disable();
>  }
>  EXPORT_SYMBOL(migrate_disable);
>  
> @@ -7349,6 +7350,7 @@ void migrate_enable(void)
>  
>  	WARN_ON_ONCE(p->migrate_disable <= 0);
>  	p->migrate_disable--;
> +	preempt_enable();
>  }
>  EXPORT_SYMBOL(migrate_enable);
>  #endif
