[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Message-ID: <CAJZ5v0hKB7s52K+=0Gk-_10tLzORN+VXnJuVe9odyEdwgK-PnQ@mail.gmail.com>
Date: Wed, 23 May 2018 10:23:56 +0200
From: "Rafael J. Wysocki" <rafael@...nel.org>
To: "Joel Fernandes (Google)" <joelaf@...gle.com>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"Joel Fernandes (Google)" <joel@...lfernandes.org>,
"Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Patrick Bellasi <patrick.bellasi@....com>,
Juri Lelli <juri.lelli@...hat.com>,
Luca Abeni <luca.abeni@...tannapisa.it>,
Todd Kjos <tkjos@...gle.com>,
Claudio Scordino <claudio@...dence.eu.com>,
kernel-team@...roid.com, Linux PM <linux-pm@...r.kernel.org>
Subject: Re: [PATCH RFC] schedutil: Address the r/w ordering race in kthread
On Wed, May 23, 2018 at 1:50 AM, Joel Fernandes (Google)
<joelaf@...gle.com> wrote:
> Currently there is a race in schedutil code for slow-switch single-CPU
> systems. Fix it by enforcing ordering the write to work_in_progress to
> happen before the read of next_freq.
>
> Kthread Sched update
>
> sugov_work() sugov_update_single()
>
> lock();
> // The CPU is free to rearrange below
> // two in any order, so it may clear
> // the flag first and then read next
> // freq. Lets assume it does.
> work_in_progress = false
>
> if (work_in_progress)
> return;
>
> sg_policy->next_freq = 0;
> freq = sg_policy->next_freq;
> sg_policy->next_freq = real-freq;
> unlock();
>
> Reported-by: Viresh Kumar <viresh.kumar@...aro.org>
> CC: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
> CC: Peter Zijlstra <peterz@...radead.org>
> CC: Ingo Molnar <mingo@...hat.com>
> CC: Patrick Bellasi <patrick.bellasi@....com>
> CC: Juri Lelli <juri.lelli@...hat.com>
> Cc: Luca Abeni <luca.abeni@...tannapisa.it>
> CC: Todd Kjos <tkjos@...gle.com>
> CC: claudio@...dence.eu.com
> CC: kernel-team@...roid.com
> CC: linux-pm@...r.kernel.org
> Signed-off-by: Joel Fernandes (Google) <joel@...lfernandes.org>
> ---
> I split this into separate patch, because this race can also happen in
> mainline.
>
> kernel/sched/cpufreq_schedutil.c | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 5c482ec38610..ce7749da7a44 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -401,6 +401,13 @@ static void sugov_work(struct kthread_work *work)
> */
> raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
> freq = sg_policy->next_freq;
> +
> + /*
> + * sugov_update_single can access work_in_progress without update_lock,
> + * make sure next_freq is read before work_in_progress is set.
> + */
> + smp_mb();
> +
This requires a corresponding barrier somewhere else.
> sg_policy->work_in_progress = false;
> raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);
>
> --
Also, as I said I actually would prefer to use the spinlock in the
one-CPU case when the kthread is used.
I'll have a patch for that shortly.
Powered by blists - more mailing lists