Date:   Wed, 23 May 2018 08:47:45 +0200
From:   Juri Lelli <juri.lelli@...hat.com>
To:     "Joel Fernandes (Google)" <joelaf@...gle.com>
Cc:     linux-kernel@...r.kernel.org,
        "Joel Fernandes (Google)" <joel@...lfernandes.org>,
        "Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Patrick Bellasi <patrick.bellasi@....com>,
        Luca Abeni <luca.abeni@...tannapisa.it>,
        Todd Kjos <tkjos@...gle.com>, claudio@...dence.eu.com,
        kernel-team@...roid.com, linux-pm@...r.kernel.org
Subject: Re: [PATCH RFC] schedutil: Address the r/w ordering race in kthread

Hi Joel,

On 22/05/18 16:50, Joel Fernandes (Google) wrote:
> Currently there is a race in the schedutil code on slow-switch single-CPU
> systems. Fix it by enforcing that next_freq is read before
> work_in_progress is cleared.
> 
> Kthread                                       Sched update
> 
> sugov_work()				      sugov_update_single()
> 
>       lock();
>       // The CPU is free to reorder the
>       // two operations below, so it may
>       // clear the flag first and then read
>       // next_freq. Let's assume it does.
>       work_in_progress = false
> 
>                                                if (work_in_progress)
>                                                      return;
> 
>                                                sg_policy->next_freq = 0;
>       freq = sg_policy->next_freq;
>                                                sg_policy->next_freq = real-freq;
>       unlock();
> 
> Reported-by: Viresh Kumar <viresh.kumar@...aro.org>
> CC: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
> CC: Peter Zijlstra <peterz@...radead.org>
> CC: Ingo Molnar <mingo@...hat.com>
> CC: Patrick Bellasi <patrick.bellasi@....com>
> CC: Juri Lelli <juri.lelli@...hat.com>
> Cc: Luca Abeni <luca.abeni@...tannapisa.it>
> CC: Todd Kjos <tkjos@...gle.com>
> CC: claudio@...dence.eu.com
> CC: kernel-team@...roid.com
> CC: linux-pm@...r.kernel.org
> Signed-off-by: Joel Fernandes (Google) <joel@...lfernandes.org>
> ---
> I split this out into a separate patch because this race can also happen
> in mainline.
> 
>  kernel/sched/cpufreq_schedutil.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 5c482ec38610..ce7749da7a44 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -401,6 +401,13 @@ static void sugov_work(struct kthread_work *work)
>  	 */
>  	raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
>  	freq = sg_policy->next_freq;
> +
> +	/*
> +	 * sugov_update_single can access work_in_progress without update_lock,
> +	 * make sure next_freq is read before work_in_progress is set.

s/set/reset/

> +	 */
> +	smp_mb();
> +
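
So IIUC the kthread side ends up doing roughly the below (just a sketch;
I'm assuming work_in_progress gets cleared right after the barrier, still
under update_lock, as in your previous patch):

	raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
	freq = sg_policy->next_freq;

	/* Order the read of next_freq before the clearing of the flag. */
	smp_mb();

	sg_policy->work_in_progress = false;
	raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);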

Also, doesn't this need a corresponding barrier on the other side (I guess
in sugov_should_update_freq)? That one being a wmb and this one an rmb?
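
Something like the below is what I had in mind (only a sketch; I'm
assuming the lockless work_in_progress check in sugov_should_update_freq
is the read in question, and that a full smp_mb() would be needed rather
than a wmb, since the ordering required there is a read followed by a
later write):

	if (sg_policy->work_in_progress)
		return false;

	/*
	 * Pairs with the smp_mb() in sugov_work(): order the read of
	 * work_in_progress above before the subsequent write of
	 * sg_policy->next_freq on this path.
	 */
	smp_mb();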

Best,

- Juri
