Message-ID: <20241024125947.GDZxpEw9BLJ46B5VKC@fat_crate.local>
Date: Thu, 24 Oct 2024 14:59:47 +0200
From: Borislav Petkov <bp@...en8.de>
To: Neeraj Upadhyay <Neeraj.Upadhyay@....com>
Cc: Dave Hansen <dave.hansen@...el.com>, linux-kernel@...r.kernel.org,
	tglx@...utronix.de, mingo@...hat.com, dave.hansen@...ux.intel.com,
	Thomas.Lendacky@....com, nikunj@....com, Santosh.Shukla@....com,
	Vasant.Hegde@....com, Suravee.Suthikulpanit@....com,
	David.Kaplan@....com, x86@...nel.org, hpa@...or.com,
	peterz@...radead.org, seanjc@...gle.com, pbonzini@...hat.com,
	kvm@...r.kernel.org
Subject: Re: [RFC 02/14] x86/apic: Initialize Secure AVIC APIC backing page

On Thu, Oct 24, 2024 at 06:01:16PM +0530, Neeraj Upadhyay wrote:
> With Secure AVIC enabled, the source vCPU writes directly to the Interrupt
> Request Register (IRR) offset in the target CPU's backing page. So the IPI
> is requested directly in the target vCPU's backing page by the source
> vCPU's context and not by the HV.

So the source vCPU will fault the target vCPU's backing page translation into
its TLB if it is not there anymore. And if the page is part of a 2M
translation, the likelihood that the translation is still cached is higher.
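
To make the access pattern concrete, here's a hypothetical sketch of the
write path - not the actual patch code, the names are made up; only the 0x200
IRR base and the 0x10 register stride are from the xAPIC register layout:

/* Hypothetical sketch, not the actual Secure AVIC code: request an IPI
 * by setting the vector's bit in the IRR of the *target's* APIC backing
 * page. This store is what pulls the target's backing page translation
 * into the *source* vCPU's TLB. */
#define SAVIC_IRR_OFF   0x200   /* IRR base in the xAPIC register layout */

static void savic_request_ipi(void *target_backing_page, u8 vector)
{
        /* IRR is 256 bits, kept as eight 32-bit registers 0x10 apart. */
        unsigned long *irr = target_backing_page + SAVIC_IRR_OFF +
                             (vector / 32) * 0x10;

        set_bit(vector % 32, irr);
}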

> As I clarified above, it's the source vCPU which needs to load each backing
> page.

So if we have 4K backing pages, the source vCPU will fault the target's
respective backing page translation into its TLB and send the IPI. And if it
is an IPI to multiple vCPUs, then it will have to fault in each vCPU's
backing page in succession.
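
I.e., something like this hypothetical multicast path - one store, and thus
potentially one TLB fill, per target (per_cpu_backing_page() is made up):

/* Hypothetical sketch: an IPI to a cpumask touches one backing page,
 * i.e. potentially one TLB fill, per target vCPU. */
static void savic_send_ipi_mask(const struct cpumask *mask, u8 vector)
{
        int cpu;

        for_each_cpu(cpu, mask)
                savic_request_ipi(per_cpu_backing_page(cpu), vector);
}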

However, when the target vCPU gets to VMRUN, the backing page will have to be
faulted into the target vCPU's TLB too.

And this is the same with a 2M backing page - the target vCPUs will have to
fault in that 2M page translation too.

But then if the target vCPU wants to send IPIs itself, the 2M backing pages
will be there already. Hmmm.

> I don't have the data at this point. That is why I will send this
> contiguous allocation as a separate patch (if required) once I have data on
> some workloads which are impacted by this.

Yes, that would clarify whether something more involved than simply using 4K
pages is needed.
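
FWIW, if it ever turns out to be needed, the contiguous variant could start
out as simple as this hypothetical sketch - an order-9 allocation is
naturally 2M-aligned, so the direct map can back it with a single 2M
translation; the carving out of the per-CPU pages is omitted:

/* Hypothetical sketch: one physically contiguous, 2M-aligned chunk for
 * all per-vCPU backing pages, so that a single 2M TLB entry can cover
 * every target's IRR. */
static struct page *savic_backing_pages;

static int __init savic_alloc_backing_pages(void)
{
        savic_backing_pages = alloc_pages(GFP_KERNEL | __GFP_ZERO,
                                          get_order(SZ_2M));
        return savic_backing_pages ? 0 : -ENOMEM;
}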

> For smp_call_function_many(), where a source CPU sends IPIs to multiple
> CPUs, the source CPU writes to the backing pages of the different target
> CPUs within this function. So the accesses have temporal locality. For
> other use cases, I need to enable perf with Secure AVIC to collect the TLB
> misses on an IPI benchmark and get back with the numbers.

Right, I can see some TLB walks being avoided if you have a single 2M page,
but without actually measuring it, I don't know. If I had to venture a guess,
it probably won't show any difference, but who knows...
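
Something along these lines should be enough for a first look, assuming the
dTLB PMU events are exposed to the SNP guest - the benchmark is just an
example of something IPI-heavy:

  # guest side
  perf stat -e dTLB-loads,dTLB-load-misses -a -- \
          perf bench sched messaging -g 20 -l 1000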

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
