Message-Id: <20100330224953.284932305@linux.site>
Date:	Tue, 30 Mar 2010 15:48:40 -0700
From:	Greg KH <gregkh@...e.de>
To:	linux-kernel@...r.kernel.org, stable@...nel.org
Cc:	stable-review@...nel.org, torvalds@...ux-foundation.org,
	akpm@...ux-foundation.org, alan@...rguk.ukuu.org.uk,
	Salman Qazi <sqazi@...gle.com>,
	Nick Piggin <nickpiggin@...oo.com.au>,
	Greg Kroah-Hartman <gregkh@...e.de>
Subject: [20/45] drivers/char/mem.c: avoid OOM lockup during large reads from /dev/zero

2.6.27-stable review patch.  If anyone has any objections, please let us know.

------------------

From: Salman Qazi <sqazi@...gle.com>

commit 730c586ad5228c339949b2eb4e72b80ae167abc4 upstream.

While running 20 parallel instances of dd as follows:

  #!/bin/bash
  for i in `seq 1 20`; do
           dd if=/dev/zero of=/export/hda3/dd_$i bs=1073741824 count=1 &
  done
  wait

on a 16 GB machine, we noticed that rather than the offending processes
simply being killed, the entire kernel went down.  Stracing dd reveals
that it first does an mmap2, which creates 1 GB worth of zero page
mappings.  It then performs a read from /dev/zero into those pages, and
finally it performs a write.
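
For reference, that sequence corresponds roughly to the simplified C
sketch below (the output path, the single 1 GB read/write, and the
minimal error handling are illustrative only, not taken from dd's
source):

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #define CHUNK (1024UL * 1024 * 1024)  /* 1 GB, matching bs=1073741824 */

  int main(void)
  {
      /* mmap2: anonymous mapping, initially backed by shared zero pages */
      char *buf = mmap(NULL, CHUNK, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (buf == MAP_FAILED) {
          perror("mmap");
          return 1;
      }

      /* Read from /dev/zero: after "remove ZERO_PAGE", read_zero()
       * clears the user buffer, which faults in a private physical
       * page for every page of the mapping.  (A single read may be
       * short in general; ignored here for brevity.) */
      int zfd = open("/dev/zero", O_RDONLY);
      if (zfd < 0 || read(zfd, buf, CHUNK) < 0) {
          perror("read /dev/zero");
          return 1;
      }

      /* Write the buffer out to the destination file. */
      int ofd = open("/tmp/dd_out", O_WRONLY | O_CREAT | O_TRUNC, 0644);
      if (ofd < 0 || write(ofd, buf, CHUNK) < 0) {
          perror("write");
          return 1;
      }
      return 0;
  }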

The machine died during the reads.  Looking at the code, we noticed
that /dev/zero's read operation had been changed by
557ed1fa2620dc119adb86b34c614e152a629a80 ("remove ZERO_PAGE") from
giving out zero page mappings to actually zeroing the pages.

Zeroing the pages causes physical pages to be allocated to the process.
But once the process has exhausted all the memory it can get, the
kernel cannot kill it, because it is still in kernel mode allocating
more memory.  Consequently, the kernel eventually crashes.

To fix this, I propose that when a fatal signal is pending during a
/dev/zero read operation, we simply return and let the user process die.

Signed-off-by: Salman Qazi <sqazi@...gle.com>
Cc: Nick Piggin <nickpiggin@...oo.com.au>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
[ Modified error return and comment trivially.  - Linus]
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...e.de>

---
 drivers/char/mem.c |    3 +++
 1 file changed, 3 insertions(+)

--- a/drivers/char/mem.c
+++ b/drivers/char/mem.c
@@ -724,6 +724,9 @@ static ssize_t read_zero(struct file * f
 		written += chunk - unwritten;
 		if (unwritten)
 			break;
+		/* Consider changing this to just 'signal_pending()' with lots of testing */
+		if (fatal_signal_pending(current))
+			return written ? written : -EINTR;
 		buf += chunk;
 		count -= chunk;
 		cond_resched();

