Message-ID: <356ccc29-31af-4274-b372-c8fd1a9b10cb@default>
Date: Wed, 25 Jan 2012 14:32:51 -0800 (PST)
From: Dan Magenheimer <dan.magenheimer@...cle.com>
To: Konrad Wilk <konrad.wilk@...cle.com>, linux-kernel@...r.kernel.org,
devel@...verdev.osuosl.org, Greg Kroah-Hartman <gregkh@...e.de>
Cc: Seth Jennings <sjenning@...ux.vnet.ibm.com>,
Nitin Gupta <ngupta@...are.org>
Subject: [PATCH] zcache: fix deadlock condition

I discovered this deadlock condition a while ago while working on
RAMster, but it affects zcache as well. The list spinlock must be
locked prior to the page spinlock and released after it. As a
result, the page copy must also be done while both locks are held.

Applies to 3.2. Konrad, please push (via GregKH?)... this is
definitely a bug fix, so it need not wait for a merge (-rc0)
window.

Signed-off-by: Dan Magenheimer <dan.magenheimer@...cle.com>
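
For illustration, not part of the patch: this is a classic AB-BA
ordering bug. zbud_create() took zbpg->lock before
zbud_budlists_spinlock, while the required order nests the page lock
inside the list lock, so two CPUs could each hold one lock and spin
forever on the other. Below is a minimal sketch of the ordering rule
the patch enforces, using hypothetical stand-in locks and a
hypothetical insert_page() rather than the real zcache code:

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(list_lock);	/* stands in for zbud_budlists_spinlock */
	static DEFINE_SPINLOCK(page_lock);	/* stands in for zbpg->lock */

	/* Correct: every path takes list_lock first, then page_lock. */
	static void insert_page(void)
	{
		spin_lock(&list_lock);
		spin_lock(&page_lock);
		/* ... add to the list and copy data while both locks are held ... */
		spin_unlock(&page_lock);
		spin_unlock(&list_lock);
	}

	/*
	 * Deadlock: if another path did spin_lock(&page_lock) and then
	 * spin_lock(&list_lock), CPU 0 could hold list_lock while waiting
	 * for page_lock as CPU 1 holds page_lock while waiting for
	 * list_lock, and neither would ever make progress.
	 */

Note that the unlock order (page lock before list lock) mirrors the
lock order, which is what the second hunk below fixes.
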
diff --git a/drivers/staging/zcache/zcache-main.c b/drivers/staging/zcache/zcache-main.c
index 56c1f9c..5b9f74e 100644
--- a/drivers/staging/zcache/zcache-main.c
+++ b/drivers/staging/zcache/zcache-main.c
@@ -358,8 +358,8 @@ static struct zbud_hdr *zbud_create(uint16_t client_id, uint16_t pool_id,
 	if (unlikely(zbpg == NULL))
 		goto out;
 	/* ok, have a page, now compress the data before taking locks */
-	spin_lock(&zbpg->lock);
 	spin_lock(&zbud_budlists_spinlock);
+	spin_lock(&zbpg->lock);
 	list_add_tail(&zbpg->bud_list, &zbud_unbuddied[nchunks].list);
 	zbud_unbuddied[nchunks].count++;
 	zh = &zbpg->buddy[0];
@@ -389,12 +389,11 @@ init_zh:
 	zh->oid = *oid;
 	zh->pool_id = pool_id;
 	zh->client_id = client_id;
-	/* can wait to copy the data until the list locks are dropped */
-	spin_unlock(&zbud_budlists_spinlock);
-
 	to = zbud_data(zh, size);
 	memcpy(to, cdata, size);
 	spin_unlock(&zbpg->lock);
+	spin_unlock(&zbud_budlists_spinlock);
+
 	zbud_cumul_chunk_counts[nchunks]++;
 	atomic_inc(&zcache_zbud_curr_zpages);
 	zcache_zbud_cumul_zpages++;
--