Date:	Sat, 30 May 2009 20:32:57 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	"Larry H." <research@...reption.com>
Cc:	Rik van Riel <riel@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Alan Cox <alan@...rguk.ukuu.org.uk>, pageexec@...email.hu,
	Arjan van de Ven <arjan@...radead.org>,
	linux-kernel@...r.kernel.org, Linus Torvalds <torvalds@...l.org>,
	linux-mm@...ck.org, Ingo Molnar <mingo@...hat.com>
Subject: Re: [patch 0/5] Support for sanitization flag in low-level page
	allocator


* Larry H. <research@...reption.com> wrote:

> Done. I just tested with different 'leak' sizes on a kernel 
> patched with the latest memory sanitization patch and the 
> kfree/kmem_cache_free one:
> 
> 	10M	- no occurrences with immediate scanmem
> 	40M	- no occurrences with immediate scanmem
> 	80M	- no occurrences with immediate scanmem
> 	160M	- no occurrences with immediate scanmem
> 	250M	- no occurrences with immediate scanmem
> 	300M	- no occurrences with immediate scanmem
> 	500M	- no occurrences with immediate scanmem
> 	600M	- with immediate zeromem 600 and scanmem afterwards,
> 		 no occurrences.
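
( For reference, a scan of this shape presumably boils down to writing
a known marker pattern into the 'leaked' buffers and then walking
physical memory looking for that pattern after the free. A minimal
user-space sketch - the marker value and the use of /dev/mem are
assumptions here, this is not the actual scanmem tool, and
CONFIG_STRICT_DEVMEM restricts which ranges are readable: )

/* pattern_scan.c - hypothetical stand-in for a scanmem-style check */
#define _GNU_SOURCE		/* for memmem() */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define CHUNK	(1 << 20)	/* scan physical memory 1 MB at a time */

int main(void)
{
	static const char marker[] = "LEAKTESTLEAKTEST"; /* assumed pattern */
	static unsigned char buf[CHUNK];
	off_t off = 0;
	long hits = 0;
	ssize_t n;
	int fd;

	fd = open("/dev/mem", O_RDONLY);
	if (fd < 0) {
		perror("open /dev/mem");
		return 1;
	}

	/* Reads of non-RAM or protected ranges fail and end the loop -
	 * good enough for a sketch. */
	while ((n = pread(fd, buf, sizeof(buf), off)) > 0) {
		unsigned char *p = buf;

		while ((p = memmem(p, buf + n - p, marker,
				   sizeof(marker) - 1))) {
			hits++;
			p++;
		}
		off += n;
	}

	printf("%ld occurrences of marker\n", hits);
	close(fd);
	return 0;
}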

Is the sensitive data (or portions/transformations of it) copied to 
the kernel stack and used there?

If not, then this isn't a complete/sufficient/fair test of how 
sensitive data like crypto keys gets used by the kernel.

In reality sensitive data, if it's relied upon by the kernel, can 
(and does) make it to the kernel stack. We see it happen every day 
with function return values. Let me quote the example I mentioned 
earlier today:

[   96.138788]  [<ffffffff810ab62e>] perf_counter_exit_task+0x10e/0x3f3
[   96.145464]  [<ffffffff8104cf46>] do_exit+0x2e7/0x722
[   96.150837]  [<ffffffff810630cf>] ? up_read+0x9/0xb
[   96.156036]  [<ffffffff8151cc0b>] ? do_page_fault+0x27d/0x2a5
[   96.162141]  [<ffffffff8104d3f4>] do_group_exit+0x73/0xa0
[   96.167860]  [<ffffffff8104d433>] sys_exit_group+0x12/0x16
[   96.173665]  [<ffffffff8100bb2b>] system_call_fastpath+0x16/0x1b

This is a real stack dump and the 'ffffffff8151cc0b' 64-bit word is 
actually a leftover from a previous system entry. ( And this is at 
the bottom of the stack, which gets cleared all the time - the top of 
the kernel stack is a lot more persistent in practice, and crypto 
calls tend to have a healthy stack footprint. )

Similarly, other sensitive data can be leaked via the kernel stack 
too.
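
( The mechanics are easy to reproduce even in user-space. Below is a
minimal sketch with made-up function names - not actual kernel code,
and whether the bytes really line up is compiler-dependent, so build
it with -O0: one function spills "key" material into a local buffer
and returns without clearing it, and a later call at a similar stack
depth finds the leftovers in its own uninitialized locals: )

/* stack_leftover.c - illustrative only; gcc -O0 stack_leftover.c */
#include <stdio.h>
#include <string.h>

static void use_key(void)
{
	unsigned char tmp[64];		/* stack copy of key material */

	memset(tmp, 0xa5, sizeof(tmp));	/* stand-in for real key bytes */
	/* ... "crypto" work on tmp ... */
	/* returns without clearing tmp - the bytes stay on the stack */
}

static void unrelated_call(void)
{
	unsigned char local[64];	/* deliberately left uninitialized */

	/* At a similar stack depth 'local' tends to overlap the old
	 * use_key() frame, so the stale 0xa5 bytes show through.
	 * (Reading uninitialized memory is undefined behaviour - this
	 * is a demonstration, nothing more.) */
	printf("leftover: %02x %02x %02x %02x\n",
	       local[0], local[1], local[2], local[3]);
}

int main(void)
{
	use_key();
	unrelated_call();
	return 0;
}

Sanitizing the original (kmalloc()ed) key buffer on free does nothing 
about stack copies like 'tmp' above.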

So IMO the GFP_SENSITIVE facility (beyond being a technical misnomer 
- it should be something like GFP_NON_PERSISTENT instead) actually 
results in subtly _worse_ security in the end, because people (and 
organizations) 'think' that their keys are safe against information 
leaks via this space, while they are not.

The kernel stack can be freed, partially reused by something else 
and then written out to disk (say as part of hibernation), where 
it's recoverable from the disk image.

Furthermore, there's no guarantee at all that a task won't stay 
around for a long time - with sensitive data still on its kernel 
stack.

	Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
