Message-ID: <54AFF8B5.8050305@suse.cz>
Date: Fri, 09 Jan 2015 16:50:13 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: One Thousand Gnomes <gnomes@...rguk.ukuu.org.uk>,
Andy Lutomirski <luto@...capital.net>
CC: "Kirill A. Shutemov" <kirill@...temov.name>,
Pavel Machek <pavel@....cz>,
Mark Seaborn <mseaborn@...omium.org>,
kernel list <linux-kernel@...r.kernel.org>
Subject: Re: DRAM unreliable under specific access pattern
On 01/08/2015 02:03 PM, One Thousand Gnomes wrote:
> On Mon, 5 Jan 2015 18:26:07 -0800
> Andy Lutomirski <luto@...capital.net> wrote:
>
> That's less of a concern, I think. As far as I can tell, what actually
> gets hit would depend on how the memory is wired. I'm not clear whether
> it's within the range or not.
>
>> > When I read the paper I thought that the vdso would be an interesting
>> > target for the attack, but with all these constraints in place, it's hard
>> > to aim the attack at anything widely used.
>> >
>>
>> The vdso and the vvar page are both at probably-well-known physical
>> addresses, so you can at least target the kernel a little bit. I
>> *think* that kASLR helps a little bit here.
>
> SMEP likewise, if you were able to use 1GB pages to corrupt matching lines
> elsewhere in RAM (e.g. the syscall table), but that would, I think, depend
> on how the RAM is physically configured.
>
> That's why the large TLB case worries me. With 4K pages, and to an extent
> with 2MB pages, it's actually quite hard to line up an attack even if you
> know something about the target. With 1GB hugepages you control the lower
> bits of the physical address precisely. The question is whether that merely
> enables you to decide where to shoot yourself, or whether it goes beyond that?
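Just to illustrate that point: getting such a mapping from userspace is roughly
the following (only a sketch, assuming 1GB hugepages are supported and reserved,
e.g. hugepagesz=1G hugepages=1 on the kernel command line):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#ifndef MAP_HUGE_1GB
#define MAP_HUGE_1GB	(30 << 26)	/* 30 << MAP_HUGE_SHIFT */
#endif

int main(void)
{
	size_t len = 1UL << 30;
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_1GB,
		       -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* The low 30 bits of the physical address of p + off are simply off,
	 * which is what gives the precise control mentioned above. */
	printf("1GB hugepage mapped at %p\n", p);
	return 0;
}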
I haven't read the details yet to judge whether it's feasible in this case, but
even without hugepages it's possible (albeit laboriously) to control the
physical mapping from userspace. I've done this in the past to get an optimal
mapping (basically page coloring) with respect to the L2/L3 caches. It was done
by allocating a bunch of memory, determining its physical addresses from
/proc/self/pagemap, and then rearranging it via mremap().
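For illustration, the pagemap + mremap trick looks roughly like this (a sketch
from memory, not the actual code; the "color" taken from the low PFN bits is an
assumption about how the cache set index is formed, and it relies on pagemap
exposing PFNs to unprivileged processes, which it does today):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define PAGE_SIZE 4096UL

/* Translate a virtual page to its physical frame number via pagemap;
 * returns 0 if the page is not present or the read fails. */
static uint64_t vaddr_to_pfn(int pagemap_fd, void *vaddr)
{
	uint64_t entry;
	off_t off = ((uintptr_t)vaddr / PAGE_SIZE) * sizeof(entry);

	if (pread(pagemap_fd, &entry, sizeof(entry), off) != sizeof(entry))
		return 0;
	if (!(entry & (1ULL << 63)))		/* bit 63: page present */
		return 0;
	return entry & ((1ULL << 55) - 1);	/* bits 0-54: PFN */
}

int main(void)
{
	size_t npages = 1024, placed = 0;
	int fd = open("/proc/self/pagemap", O_RDONLY);

	/* Over-allocate a pool to pick pages from, plus a destination area
	 * that will be filled only with pages of the wanted "color". */
	char *pool = mmap(NULL, npages * PAGE_SIZE, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	char *dest = mmap(NULL, npages * PAGE_SIZE, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (fd < 0 || pool == MAP_FAILED || dest == MAP_FAILED)
		return 1;

	for (size_t i = 0; i < npages && placed < npages / 8; i++) {
		char *page = pool + i * PAGE_SIZE;

		page[0] = 1;	/* fault the page in so a PFN exists */

		uint64_t pfn = vaddr_to_pfn(fd, page);

		/* Assumed color function: the low 6 PFN bits. Move each
		 * matching page so the wanted color ends up contiguous
		 * in the destination virtual range. */
		if (pfn && (pfn & 0x3f) == 0) {
			if (mremap(page, PAGE_SIZE, PAGE_SIZE,
				   MREMAP_MAYMOVE | MREMAP_FIXED,
				   dest + placed * PAGE_SIZE) == MAP_FAILED)
				return 1;
			placed++;
		}
	}
	printf("placed %zu pages of the wanted color contiguously\n", placed);
	return 0;
}

Once the physical layout is known, you can place pages so that their physical
addresses collide in whatever structure you care about (cache sets in my case,
potentially rows/banks here).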
Then it's also quite trivial to induce cache misses without clflush, using just
a few addresses that map to the same cache set, without having to cycle through
more memory than the cache can hold.
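Roughly like this (again only a sketch; the associativity is a made-up value,
and the same-set addresses are assumed to have been found with the pagemap
trick above):

#define WAYS 16		/* assumed associativity of the targeted cache level */

/* Cycle through WAYS + 1 addresses whose physical addresses fall into the
 * same cache set. With one more line than the set can hold, they keep
 * evicting each other (given an LRU-like replacement policy), so every
 * access misses - no clflush and no large memory footprint needed. */
static void hammer_same_set(volatile char *addrs[WAYS + 1], long rounds)
{
	for (long r = 0; r < rounds; r++)
		for (int i = 0; i <= WAYS; i++)
			(void)addrs[i][0];
}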
But as I said, I haven't read the details here to see if the required access
pattern to corrupt RAM can be combined with these kinds of tricks...
> (Outside HPC anyway: for HPC cases it bites both ways, I suspect - you've
> got the ability to ensure you don't hit those access patterns while using
> 1GB pages, but also nothing to randomise things to make them unlikely if
> you happen to have worst-case-aligned data.)
>