Date:	Sun, 24 Jul 2011 22:44:50 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	Borislav Petkov <bp@...64.org>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	"H. Peter Anvin" <hpa@...or.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	LKML <linux-kernel@...r.kernel.org>,
	"Przywara, Andre" <Andre.Przywara@....com>,
	"Pohlack, Martin" <Martin.Pohlack@....com>
Subject: Re: [PATCH] x86, AMD: Correct F15h IC aliasing issue


* Borislav Petkov <bp@...64.org> wrote:

> On Sun, Jul 24, 2011 at 02:30:46PM -0400, Ingo Molnar wrote:
> > > > So I really think that you might be *much* better off just changing
> > > > mmap_rnd(), and nothing else. Just make *that* mask off the three low
> > > > bits of the random address, ie something like
> > > > 
> > > >   diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
> > > >   index 1dab5194fd9d..6b62ab5a5ae1 100644
> > > >   --- a/arch/x86/mm/mmap.c
> > > >   +++ b/arch/x86/mm/mmap.c
> > > >   @@ -90,6 +90,9 @@ static unsigned long mmap_rnd(void)
> > > >                           rnd = (long)get_random_int() % (1<<8);
> > > >                   else
> > > >                           rnd = (long)(get_random_int() % (1<<28));
> > > >   +
> > > >   +               if (avoid_aliasing_in_bits_14_12)
> > > >   +                       rnd &= ~7;
> > > >           }
> > > >           return rnd << PAGE_SHIFT;
> > > >    }
> > > > 
> > > > would be fundamentally very safe - it would already take all our
> > > > current anti-randomization code into account.
> > > > 
> > > > No?
> > > 
> > > Hehe, we had that idea initially. However, the special 1% case I was
> > > hinting at is this:
> > > 
> > > process P0, mapping libraries A, B, C
> > > 
> > > and
> > > 
> > > process P1, mapping libraries A, C
> > > 
> > > Library C ends up possibly with aliasing VAs and there's the 
> > > problem again. [...]
> > 
> > Well, since all library positions are randomized, and the quirk masks 
> > out bits 12,13,14, all libraries that are not explicitly fix-mapped 
> > will end up on a 32K granular VA address.
> 
> Right, but IIUC, mmap_rnd() is used to determine mm->mmap_base so 
> the mmap starting address will have [14:12] cleared but the initial 
> address of library C's mapping in the example above will possibly 
> differ in those bits due to different linking order, right?

Indeed, because only the mmap base is randomized, not the individual 
vmas.
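
For context, roughly how that base gets picked (a from-memory sketch
of the arch/x86/mm/mmap.c logic of the time, not the exact tree
state):

	/*
	 * The random offset is applied once, to the mmap base; every
	 * subsequent library mapping is then laid out top-down from
	 * that base, so the relative placement of the libraries is
	 * determined by load order, not by the randomization.
	 */
	static unsigned long mmap_base(void)
	{
		unsigned long gap = rlimit(RLIMIT_STACK);

		if (gap < MIN_GAP)
			gap = MIN_GAP;
		else if (gap > MAX_GAP)
			gap = MAX_GAP;

		/* mmap_rnd() is the only source of randomness here */
		return PAGE_ALIGN(TASK_SIZE - gap - mmap_rnd());
	}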

I'd still suggest ignoring this link-order case at first and getting 
the 99% fix in place ... that one is also obviously backportable, the 
big patch not so much.

Then send in a patch on top of that which solves the remaining 1% as 
well, with numbers attached, so that we can see the speed/complexity 
tradeoff ...

> > Also, in practice on most distros most libraries will be 
> > prelinked to the same address in all processes.
> 
> I think at least on RHEL there's a daemon doing prelinking every 
> two weeks or so...

I'd suggest running a script that extracts the actual mapped offsets 
from /proc/*/maps files and measures how much of an issue this all is 
in practice.
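
Something like the below would do as a starting point (an untested 
sketch: it walks /proc/*/maps and histograms bits 14:12 of the start 
address of every executable shared-library mapping; to measure the 
actual aliasing you would then compare the buckets of the *same* 
library across processes):

	#include <ctype.h>
	#include <dirent.h>
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		unsigned long hist[8] = { 0 };
		struct dirent *de;
		DIR *proc = opendir("/proc");

		if (!proc)
			return 1;

		while ((de = readdir(proc)) != NULL) {
			char path[64], line[512];
			FILE *maps;

			if (!isdigit((unsigned char)de->d_name[0]))
				continue;

			snprintf(path, sizeof(path), "/proc/%s/maps",
				 de->d_name);
			maps = fopen(path, "r");
			if (!maps)
				continue;

			while (fgets(line, sizeof(line), maps)) {
				unsigned long start;

				/* executable, file-backed .so mappings only */
				if (!strstr(line, " r-xp ") ||
				    !strstr(line, ".so"))
					continue;
				if (sscanf(line, "%lx-", &start) != 1)
					continue;

				hist[(start >> 12) & 7]++;
			}
			fclose(maps);
		}
		closedir(proc);

		for (int i = 0; i < 8; i++)
			printf("bits 14:12 = %d: %lu mappings\n",
			       i, hist[i]);
		return 0;
	}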

Thanks,

	Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
