Message-Id: <20180522235028.80564-1-joel@joelfernandes.org>
Date:   Tue, 22 May 2018 16:50:28 -0700
From:   "Joel Fernandes (Google)" <joelaf@...gle.com>
To:     linux-kernel@...r.kernel.org
Cc:     "Joel Fernandes (Google)" <joel@...lfernandes.org>,
        "Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Patrick Bellasi <patrick.bellasi@....com>,
        Juri Lelli <juri.lelli@...hat.com>,
        Luca Abeni <luca.abeni@...tannapisa.it>,
        Todd Kjos <tkjos@...gle.com>, claudio@...dence.eu.com,
        kernel-team@...roid.com, linux-pm@...r.kernel.org
Subject: [PATCH RFC] schedutil: Address the r/w ordering race in kthread

Currently there is a race in the schedutil code on slow-switch single-CPU
systems. Fix it by enforcing that the read of next_freq is ordered before
the write that clears work_in_progress, as illustrated below (a standalone
sketch of the fixed ordering follows the diagram).

Kthread                                       Sched update

sugov_work()				      sugov_update_single()

      lock();
      // The CPU is free to reorder the
      // two operations below, so it may
      // clear the flag first and then
      // read next_freq. Let's assume it does.
      work_in_progress = false

                                               if (work_in_progress)
                                                     return;

                                               sg_policy->next_freq = 0;
      freq = sg_policy->next_freq;
                                               sg_policy->next_freq = real-freq;
      unlock();
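
To make the required ordering concrete, here is a minimal standalone
user-space sketch of the same pattern, using a C11 fence in place of
smp_mb(). The names (kthread_side, update_side) mirror the diagram and
are illustrative only; this is not the actual schedutil code:

#include <stdatomic.h>
#include <stdbool.h>

static _Atomic unsigned int next_freq;
static atomic_bool work_in_progress;

/* Kthread side: consume the request, then publish completion. */
static unsigned int kthread_side(void)
{
	unsigned int freq;

	freq = atomic_load_explicit(&next_freq, memory_order_relaxed);
	/* Full fence: the read above must not be reordered past the store. */
	atomic_thread_fence(memory_order_seq_cst);
	atomic_store_explicit(&work_in_progress, false, memory_order_relaxed);
	return freq;
}

/* Update side: only post a new request when no work is in flight. */
static void update_side(unsigned int freq)
{
	if (atomic_load_explicit(&work_in_progress, memory_order_relaxed))
		return;
	atomic_store_explicit(&next_freq, freq, memory_order_relaxed);
	/* Release: next_freq must be visible before the flag is raised. */
	atomic_store_explicit(&work_in_progress, true, memory_order_release);
}

With the fence in place, by the time the update side observes
work_in_progress == false, the kthread's read of next_freq has already
completed, so the subsequent rewrite of next_freq can no longer race
with it.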

Reported-by: Viresh Kumar <viresh.kumar@...aro.org>
Cc: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Patrick Bellasi <patrick.bellasi@....com>
Cc: Juri Lelli <juri.lelli@...hat.com>
Cc: Luca Abeni <luca.abeni@...tannapisa.it>
Cc: Todd Kjos <tkjos@...gle.com>
Cc: claudio@...dence.eu.com
Cc: kernel-team@...roid.com
Cc: linux-pm@...r.kernel.org
Signed-off-by: Joel Fernandes (Google) <joel@...lfernandes.org>
---
I split this out into a separate patch because this race can also happen
in mainline.
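
For anyone who wants to check the ordering mechanically, the race can be
written as an LKMM litmus test along these lines (my own sketch, not part
of the patch; with herd7 and tools/memory-model, I'd expect the exists
clause to be unreachable with the smp_mb() and reachable without it):

C sugov-next-freq-vs-work-in-progress

{
	next_freq=0;
	work_in_progress=1;
}

P0(int *next_freq, int *work_in_progress)
{
	int r0;

	r0 = READ_ONCE(*next_freq);
	smp_mb();	/* the barrier added by this patch */
	WRITE_ONCE(*work_in_progress, 0);
}

P1(int *next_freq, int *work_in_progress)
{
	int r1;

	r1 = READ_ONCE(*work_in_progress);
	if (r1 == 0)
		WRITE_ONCE(*next_freq, 1);
}

exists (0:r0=1 /\ 1:r1=0)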

 kernel/sched/cpufreq_schedutil.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 5c482ec38610..ce7749da7a44 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -401,6 +401,13 @@ static void sugov_work(struct kthread_work *work)
 	 */
 	raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
 	freq = sg_policy->next_freq;
+
+	/*
+	 * sugov_update_single() reads work_in_progress without update_lock,
+	 * so make sure next_freq is read before work_in_progress is cleared.
+	 */
+	smp_mb();
+
 	sg_policy->work_in_progress = false;
 	raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);
 
-- 
2.17.0.441.gb46fe60e1d-goog
