Message-Id: <20160301234530.006909801@linuxfoundation.org>
Date:	Tue, 01 Mar 2016 23:54:04 +0000
From:	Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To:	<linux-kernel@...r.kernel.org>
Cc:	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	<stable@...r.kernel.org>, Zheng Liu <wenqing.lz@...bao.com>,
	Joshua Schmid <jschmid@...e.com>,
	Eric Wheeler <bcache@...ux.ewheeler.net>,
	Zhu Yanhai <zhu.yanhai@...il.com>,
	Kent Overstreet <kmo@...erainc.com>, Jens Axboe <axboe@...com>
Subject: [PATCH 4.4 062/342] bcache: fix a livelock when we cause a huge number of cache misses

4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Zheng Liu <gnehzuil.liu@...il.com>

commit 2ef9ccbfcb90cf84bdba320a571b18b05c41101b upstream.

Subject :	[PATCH v2] bcache: fix a livelock in btree lock
Date :	Wed, 25 Feb 2015 20:32:09 +0800 (02/25/2015 04:32:09 AM)

This commit tries to fix a livelock in bcache.  This livelock might
happen when we cause a huge number of cache misses simultaneously.

When we get a cache miss, bcache will execute the following path.

->cached_dev_make_request()
  ->cached_dev_read()
    ->cache_lookup()
      ->bch_btree_map_keys()
        ->btree_root()  <------------------------
          ->bch_btree_map_keys_recurse()        |
            ->cache_lookup_fn()                 |
              ->cached_dev_cache_miss()         |
                ->bch_btree_insert_check_key() -|
                  [If btree->seq is not equal to seq + 1, we should return
                   EINTR and traverse btree again.]

In the bch_btree_insert_check_key() function we first need to check the
upgrade flag (op->lock == -1), and when this flag is set we need to
release the read btree->lock and try to take the write btree->lock.
While the write lock is being taken and released, btree->seq is
monotonically increased so that other threads modifying the node during
the cache miss can be detected (see btree.h:74).  But if there are many
cache misses caused by concurrent requests, we can hit a livelock
because btree->seq is always being changed by others, so no one can
make progress.
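
For readers unfamiliar with this locking scheme, here is a standalone
userspace sketch of the read-to-write upgrade described above.  It is an
approximation using pthreads; struct node, node_write_lock() and
upgrade_and_check() are illustrative names, not the actual bcache code:

#include <errno.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

struct node {
	pthread_rwlock_t lock;
	uint32_t seq;		/* bumped on every write-lock acquisition,
				 * like btree->seq (btree.h:74) */
};

static void node_write_lock(struct node *n)
{
	pthread_rwlock_wrlock(&n->lock);
	n->seq++;
}

/*
 * Called with the read lock held.  Drop it, take the write lock, and only
 * proceed if nobody else write-locked the node in between (seq advanced by
 * exactly one, i.e. our own acquisition).  Otherwise return -EINTR so the
 * caller retraverses the btree -- the step that livelocks when many
 * threads miss the cache at once and keep bumping each other's seq.
 */
static int upgrade_and_check(struct node *n)
{
	uint32_t seq = n->seq;

	pthread_rwlock_unlock(&n->lock);	/* release read btree->lock */
	node_write_lock(n);			/* take write btree->lock */

	if (n->seq != seq + 1) {
		pthread_rwlock_unlock(&n->lock);
		return -EINTR;
	}
	return 0;
}

int main(void)
{
	struct node n = { .seq = 0 };

	pthread_rwlock_init(&n.lock, NULL);
	pthread_rwlock_rdlock(&n.lock);

	if (upgrade_and_check(&n) == 0) {
		printf("upgrade succeeded, seq=%u\n", n.seq);
		pthread_rwlock_unlock(&n.lock);
	}
	return 0;
}

(Compile with cc -pthread.)  With a single thread the upgrade always
succeeds; with many threads each one keeps invalidating the others' seq,
so every call returns -EINTR and nobody makes progress.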

This commit takes the write btree->lock if it encounters a race while
traversing the btree.  Although this sacrifices some scalability, it
ensures that only one thread at a time can modify the btree.
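
The patch below implements this by recording the node level in op->lock
before returning.  The following minimal sketch (struct op and
want_write_lock() are hypothetical, loosely modeled on insert_lock() in
btree.h) shows how that value changes which lock the retry takes:

#include <stdbool.h>
#include <stdio.h>

struct op {
	int lock;	/* -1 means "read locks are enough", as in btree_op */
};

/* loosely modeled on insert_lock(): write-lock nodes at or below op->lock */
static bool want_write_lock(const struct op *op, int node_level)
{
	return node_level <= op->lock;
}

int main(void)
{
	struct op op = { .lock = -1 };

	/* first traversal: the leaf (level 0) is only read-locked */
	printf("first pass: write lock? %s\n",
	       want_write_lock(&op, 0) ? "yes" : "no");

	/* the patch sets op->lock = b->level before returning -EINTR ... */
	op.lock = 0;

	/* ... so the retry write-locks the leaf up front, no upgrade needed */
	printf("retry:      write lock? %s\n",
	       want_write_lock(&op, 0) ? "yes" : "no");

	return 0;
}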

Signed-off-by: Zheng Liu <wenqing.lz@...bao.com>
Tested-by: Joshua Schmid <jschmid@...e.com>
Tested-by: Eric Wheeler <bcache@...ux.ewheeler.net>
Cc: Joshua Schmid <jschmid@...e.com>
Cc: Zhu Yanhai <zhu.yanhai@...il.com>
Cc: Kent Overstreet <kmo@...erainc.com>
Signed-off-by: Jens Axboe <axboe@...com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>

---
 drivers/md/bcache/btree.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -2162,8 +2162,10 @@ int bch_btree_insert_check_key(struct bt
 		rw_lock(true, b, b->level);
 
 		if (b->key.ptr[0] != btree_ptr ||
-		    b->seq != seq + 1)
+                   b->seq != seq + 1) {
+                       op->lock = b->level;
 			goto out;
+               }
 	}
 
 	SET_KEY_PTRS(check_key, 1);
