Date:	Tue, 29 Jan 2013 08:21:45 +0900
From:	Minchan Kim <minchan@...nel.org>
To:	Pekka Enberg <penberg@...nel.org>
Cc:	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	Dan Magenheimer <dan.magenheimer@...cle.com>,
	Nitin Gupta <ngupta@...are.org>,
	Konrad Rzeszutek Wilk <konrad@...nok.org>,
	Seth Jennings <sjenning@...ux.vnet.ibm.com>,
	stable@...r.kernel.org, Jerome Marchand <jmarchan@...hat.com>
Subject: Re: [RESEND PATCH v5 1/4] zram: Fix deadlock bug in partial write

On Mon, Jan 28, 2013 at 09:16:35AM +0200, Pekka Enberg wrote:
> On Mon, Jan 28, 2013 at 2:38 AM, Minchan Kim <minchan@...nel.org> wrote:
> > Now zram allocates new page with GFP_KERNEL in zram I/O path
> > if IO is partial. Unfortunately, It may cuase deadlock with
> 
> s/cuase/cause/g

Thanks!

> 
> > reclaim path so this patch solves the problem.
> 
> It'd be nice to know about the problem in more detail. I'm also
> curious why you decided on GFP_ATOMIC for the read path and
> GFP_NOIO for the write path.

In the read path, we have already called kmap_atomic, so the allocation
there must not sleep.
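(Illustrative sketch only, not the zram code itself: kmap_atomic()
disables pagefaults and preemption, so until the matching
kunmap_atomic() any allocation in between must be non-sleeping.)

	mem = kmap_atomic(page);		/* enter atomic context */
	buf = kmalloc(PAGE_SIZE, GFP_ATOMIC);	/* GFP_KERNEL could sleep here */
	/* ... use buf ... */
	kfree(buf);
	kunmap_atomic(mem);			/* leave atomic context */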

How about this?
------------------------- >8 -------------------------------

From 9f8756ae0b0f2819f93cb94dcd38da372843aa12 Mon Sep 17 00:00:00 2001
From: Minchan Kim <minchan@...nel.org>
Date: Mon, 21 Jan 2013 13:58:52 +0900
Subject: [RESEND PATCH v5 1/4] zram: Fix deadlock bug in partial read/write

Currently zram allocates a new page with GFP_KERNEL in the zram I/O
path when the IO is partial. Unfortunately, this may cause a deadlock
with the reclaim path, as shown below.

write_page from fs
fs_lock
allocation(GFP_KERNEL)
reclaim
pageout
				write_page from fs
				fs_lock <-- deadlock

This patch fixes it by using GFP_ATOMIC and GFP_NOIO.
In the read path we are already under kmap_atomic, so the allocation
must not sleep and needs GFP_ATOMIC; the write path has no such
constraint, but needs GFP_NOIO so that reclaim cannot recurse into I/O.

Cc: stable@...r.kernel.org
Cc: Jerome Marchand <jmarchan@...hat.com>
Acked-by: Nitin Gupta <ngupta@...are.org>
Signed-off-by: Minchan Kim <minchan@...nel.org>
---
We could use GFP_IO instead of GFP_ATOMIC in zram_bvec_read, with
some modification to how the buffer is allocated in the partial-IO
case (a rough sketch follows these notes). But that means more churn
and would prevent merging this patch into stable, if we should send
it to stable, so I'd like to keep it as simple as possible. The
GFP_IO usage could be a separate patch after we merge this one.
Thanks.
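
(Not part of the patch -- a rough, hypothetical sketch of that
alternative: hoist the temporary-buffer allocation above
kmap_atomic() so a sleeping flag becomes legal; the exact flag and
error handling here are illustrative only.)

	if (is_partial_io(bvec)) {
		/* not yet in atomic context, so sleeping is allowed */
		uncmem = kmalloc(PAGE_SIZE, GFP_NOIO);
		if (!uncmem)
			return -ENOMEM;
	}
	user_mem = kmap_atomic(page);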
 drivers/staging/zram/zram_drv.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/staging/zram/zram_drv.c b/drivers/staging/zram/zram_drv.c
index 61fb8f1..b285b3a 100644
--- a/drivers/staging/zram/zram_drv.c
+++ b/drivers/staging/zram/zram_drv.c
@@ -220,7 +220,7 @@ static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
 	user_mem = kmap_atomic(page);
 	if (is_partial_io(bvec))
 		/* Use  a temporary buffer to decompress the page */
-		uncmem = kmalloc(PAGE_SIZE, GFP_KERNEL);
+		uncmem = kmalloc(PAGE_SIZE, GFP_ATOMIC);
 	else
 		uncmem = user_mem;
 
@@ -268,7 +268,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 		 * This is a partial IO. We need to read the full page
 		 * before to write the changes.
 		 */
-		uncmem = kmalloc(PAGE_SIZE, GFP_KERNEL);
+		uncmem = kmalloc(PAGE_SIZE, GFP_NOIO);
 		if (!uncmem) {
 			pr_info("Error allocating temp memory!\n");
 			ret = -ENOMEM;
-- 
1.7.9.5
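
For reference, the gfp flags discussed in the commit message compose
like this in kernels of this era (include/linux/gfp.h; later kernels
renamed __GFP_WAIT, so treat this as a sketch of the semantics):

	#define GFP_ATOMIC	(__GFP_HIGH)	/* never sleeps, no direct reclaim */
	#define GFP_NOIO	(__GFP_WAIT)	/* may sleep; reclaim may not start I/O */
	#define GFP_KERNEL	(__GFP_WAIT | __GFP_IO | __GFP_FS)
			/* reclaim may write pages back through the fs --
			   the re-entry shown in the deadlock diagram */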

-- 
Kind regards,
Minchan Kim
