Message-Id: <20190416012001.5338-2-xiyou.wangcong@gmail.com>
Date:   Mon, 15 Apr 2019 18:20:01 -0700
From:   Cong Wang <xiyou.wangcong@...il.com>
To:     linux-kernel@...r.kernel.org
Cc:     linux-edac@...r.kernel.org, Cong Wang <xiyou.wangcong@...il.com>,
        Tony Luck <tony.luck@...el.com>,
        Borislav Petkov <bp@...en8.de>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: [PATCH 2/2] ras: close the race condition with timer

cec_timer_fn() is a timer callback which reads ce_arr.array[]
and updates its decay values. Elements can be added to or
removed from this global array in parallel, even though the
array itself never grows or shrinks. del_lru_elem_unlocked()
uses FULL_COUNT() as the key to find the right element to
remove, so it can be affected by the timer running in parallel.

Fix this by converting the mutex to a spinlock and taking it
inside the timer callback as well.

Fixes: 011d82611172 ("RAS: Add a Corrected Errors Collector")
Cc: Tony Luck <tony.luck@...el.com>
Cc: Borislav Petkov <bp@...en8.de>
Cc: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Cong Wang <xiyou.wangcong@...il.com>
---
 drivers/ras/cec.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/drivers/ras/cec.c b/drivers/ras/cec.c
index 61332c9aab5a..a82c9d08d47a 100644
--- a/drivers/ras/cec.c
+++ b/drivers/ras/cec.c
@@ -117,7 +117,7 @@ static struct ce_array {
 	};
 } ce_arr;
 
-static DEFINE_MUTEX(ce_mutex);
+static DEFINE_SPINLOCK(ce_lock);
 static u64 dfs_pfn;
 
 /* Amount of errors after which we offline */
@@ -171,7 +171,9 @@ static void cec_mod_timer(struct timer_list *t, unsigned long interval)
 
 static void cec_timer_fn(struct timer_list *unused)
 {
+	spin_lock(&ce_lock);
 	do_spring_cleaning(&ce_arr);
+	spin_unlock(&ce_lock);
 
 	cec_mod_timer(&cec_timer, timer_interval);
 }
@@ -265,9 +267,9 @@ static u64 __maybe_unused del_lru_elem(void)
 	if (!ca->n)
 		return 0;
 
-	mutex_lock(&ce_mutex);
+	spin_lock_bh(&ce_lock);
 	pfn = del_lru_elem_unlocked(ca);
-	mutex_unlock(&ce_mutex);
+	spin_unlock_bh(&ce_lock);
 
 	return pfn;
 }
@@ -288,7 +290,7 @@ int cec_add_elem(u64 pfn)
 
 	ca->ces_entered++;
 
-	mutex_lock(&ce_mutex);
+	spin_lock_bh(&ce_lock);
 
 	if (ca->n == MAX_ELEMS)
 		WARN_ON(!del_lru_elem_unlocked(ca));
@@ -349,7 +351,7 @@ int cec_add_elem(u64 pfn)
 		do_spring_cleaning(ca);
 
 unlock:
-	mutex_unlock(&ce_mutex);
+	spin_unlock_bh(&ce_lock);
 
 	return ret;
 }
@@ -406,7 +408,7 @@ static int array_dump(struct seq_file *m, void *v)
 	u64 prev = 0;
 	int i;
 
-	mutex_lock(&ce_mutex);
+	spin_lock_bh(&ce_lock);
 
 	seq_printf(m, "{ n: %d\n", ca->n);
 	for (i = 0; i < ca->n; i++) {
@@ -431,7 +433,7 @@ static int array_dump(struct seq_file *m, void *v)
 
 	seq_printf(m, "Action threshold: %d\n", count_threshold);
 
-	mutex_unlock(&ce_mutex);
+	spin_unlock_bh(&ce_lock);
 
 	return 0;
 }
-- 
2.20.1
