Date:	Wed, 27 Jul 2011 07:14:26 +0300
From:	Avi Kivity <avi@...hat.com>
To:	Andre Przywara <andre.przywara@....com>
CC:	"H. Peter Anvin" <hpa@...or.com>, Borislav Petkov <bp@...64.org>,
	Ingo Molnar <mingo@...e.hu>,
	Thomas Gleixner <tglx@...utronix.de>,
	LKML <linux-kernel@...r.kernel.org>,
	"Pohlack, Martin" <Martin.Pohlack@....com>
Subject: Re: [PATCH] x86, AMD: Correct F15h IC aliasing issue

On 07/26/2011 10:42 PM, Andre Przywara wrote:
>
> There is no need to determine it by calculation, because it is caused
> by the specific design of the BD L1 cache and is thus fixed.
> And a calculation would be even more confusing:
>
> The L1I is virtually indexed, but physically tagged.
> 64 KB L1I cache / 64 bytes per cache line = 1024 cache lines
> 1024 lines / 2-way associative = 512 indexes
> 64 bytes per cache line (6 bits) + 512 indexes (9 bits) = bits [14:0]
> Virtual and physical addresses are the same in bits [11:0], which
> leaves the remaining bits [14:12] susceptible to aliasing.
>
> So bit 12 comes from PAGESIZE, and yes, bit 14 could be derived from
> the CPUID cache info, but I don't see much value in breaking it down
> that way.
> But I agree that there should be some comment in the patch which at
> least notes that bits [14:12] are due to the L1I design; maybe we can
> copy a nicer version of the above math into the commit message for
> reference.
>
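
The arithmetic quoted above generalizes. Here is a minimal userspace
sketch of the same derivation, assuming a hypothetical helper
va_alias_mask() (the patch under discussion hardcodes the Bulldozer
result rather than computing it):

#include <stdio.h>

/* Hypothetical helper: compute which virtual-address bits of a
 * virtually indexed, physically tagged cache can alias, given the
 * cache geometry. */
static unsigned long va_alias_mask(unsigned long cache_bytes,
				   unsigned long line_bytes,
				   unsigned long ways,
				   unsigned int page_shift)
{
	unsigned long sets = cache_bytes / (ways * line_bytes);
	/* Offset bits + index bits = highest bit used to index the cache. */
	unsigned int index_end = __builtin_ctzl(line_bytes) +
				 __builtin_ctzl(sets);

	/* Bits below page_shift are identical in VA and PA, so only
	 * index bits at or above page_shift can alias. */
	if (index_end <= page_shift)
		return 0;
	return ((1UL << index_end) - 1) & ~((1UL << page_shift) - 1);
}

int main(void)
{
	/* Bulldozer L1I: 64 KB, 64-byte lines, 2-way, 4 KB pages. */
	printf("mask = %#lx\n", va_alias_mask(64 * 1024, 64, 2, 12));
	/* -> mask = 0x7000, i.e. bits [14:12], matching the quoted math. */
	return 0;
}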

If among the 12,432.8 cpuid leaves exposed by the cpu we had a bit that 
said L1I was shared, and another that said it was virtually indexed, and 
others describing the cache size, cache line size, and number of ways, 
then we could perform the arithmetic at runtime, yes?

That way, if the caches grow or increase their associativity, we don't 
need to patch the kernel again.
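
For what it's worth, the geometry half of that arithmetic is already
discoverable today: AMD's CPUID leaf 0x80000005 reports the L1I size,
associativity, and line size in EDX. What's missing are the "shared"
and "virtually indexed" bits. A rough userspace sketch (field layout
as I recall it from AMD's CPUID documentation; verify against the APM
before relying on it):

#include <cpuid.h>	/* GCC/Clang __get_cpuid() */
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* Leaf 0x80000005: AMD L1 cache and TLB information.
	 * EDX describes the L1 instruction cache. */
	if (!__get_cpuid(0x80000005, &eax, &ebx, &ecx, &edx))
		return 1;

	unsigned int size_kb   = (edx >> 24) & 0xff; /* size in KB */
	unsigned int assoc     = (edx >> 16) & 0xff; /* ways */
	unsigned int line_size =  edx        & 0xff; /* bytes per line */

	printf("L1I: %u KB, %u-way, %u-byte lines\n",
	       size_kb, assoc, line_size);
	return 0;
}

Feeding those three values into the mask calculation above reproduces
0x7000 on Bulldozer; what CPUID still can't tell us is whether the
index is virtual or the cache shared, which is exactly the gap.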

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
