Date:   Fri, 21 Jul 2017 15:45:00 -0700
From:   Tim Chen <tim.c.chen@...ux.intel.com>
To:     Andrew Morton <akpm@...ux-foundation.org>
Cc:     Tim Chen <tim.c.chen@...ux.intel.com>,
        Ying Huang <ying.huang@...el.com>,
        Wenwei Tao <wenwei.tww@...baba-inc.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, Minchan Kim <minchan@...nel.org>,
        Rik van Riel <riel@...hat.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Michal Hocko <mhocko@...nel.org>,
        Hillf Danton <hillf.zj@...baba-inc.com>
Subject: [PATCH 1/2] mm/swap: Fix race conditions in swap_slots cache init

Memory allocations can happen before the swap_slots cache initialization
is completed during CPU bring-up.  If we are low on memory, we could call
get_swap_page() and access swap_slots_cache before it is fully initialized.

Add a check in get_swap_page() for an initialized swap_slots_cache to
prevent this condition.  A similar check already exists in
free_swap_slot().  Also annotate both checks with likely() to indicate
the expected condition.

Also add a memory barrier to make sure that the lock initialization is
done before the assignment of the cache->slots and cache->slots_ret
pointers.  This upholds the assumption that it is safe to acquire the
slots cache locks and use the slots cache whenever the corresponding
cache->slots or cache->slots_ret pointer is non-NULL.
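
In other words, the barrier pairs the producer's lock initialization
with the consumer's non-NULL pointer check: the pointer assignment acts
as the publish step.  A minimal sketch of that ordering (illustrative
only, not the exact kernel source; it reuses the names from the patch
below):

	/* producer side, alloc_swap_slot_cache() */
	mutex_init(&cache->alloc_lock);
	spin_lock_init(&cache->free_lock);
	...
	mb();				/* lock init visible before publish */
	cache->slots = slots;		/* publish: cache may now be used */
	cache->slots_ret = slots_ret;

	/* consumer side, get_swap_page() / free_swap_slot() */
	if (likely(check_cache_active() && cache->slots)) {
		/* non-NULL slots implies alloc_lock is initialized */
		mutex_lock(&cache->alloc_lock);
		...
	}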

Reported-by: Wenwei Tao <wenwei.tww@...baba-inc.com>
Acked-by: Ying Huang <ying.huang@...el.com>
Signed-off-by: Tim Chen <tim.c.chen@...ux.intel.com>
---
 mm/swap_slots.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/mm/swap_slots.c b/mm/swap_slots.c
index 58f6c78..4c5457c 100644
--- a/mm/swap_slots.c
+++ b/mm/swap_slots.c
@@ -148,6 +148,14 @@ static int alloc_swap_slot_cache(unsigned int cpu)
 	cache->nr = 0;
 	cache->cur = 0;
 	cache->n_ret = 0;
+	/*
+	 * We initialized alloc_lock and free_lock earlier.
+	 * We use !cache->slots or !cache->slots_ret
+	 * to know if it is safe to acquire the corresponding
+	 * lock and use the cache.  The memory barrier
+	 * below ensures this assumption holds.
+	 */
+	mb();
 	cache->slots = slots;
 	slots = NULL;
 	cache->slots_ret = slots_ret;
@@ -273,7 +281,7 @@ int free_swap_slot(swp_entry_t entry)
 	struct swap_slots_cache *cache;
 
 	cache = &get_cpu_var(swp_slots);
-	if (use_swap_slot_cache && cache->slots_ret) {
+	if (likely(use_swap_slot_cache && cache->slots_ret)) {
 		spin_lock_irq(&cache->free_lock);
 		/* Swap slots cache may be deactivated before acquiring lock */
 		if (!use_swap_slot_cache) {
@@ -318,7 +326,7 @@ swp_entry_t get_swap_page(void)
 	cache = raw_cpu_ptr(&swp_slots);
 
 	entry.val = 0;
-	if (check_cache_active()) {
+	if (likely(check_cache_active() && cache->slots)) {
 		mutex_lock(&cache->alloc_lock);
 		if (cache->slots) {
 repeat:
-- 
2.9.4
