Message-ID: <8734ng0wb8.ffs@tglx>
Date: Thu, 08 Aug 2024 01:04:11 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: Sean Christopherson <seanjc@...gle.com>
Cc: "Xin Li (Intel)" <xin@...or.com>, linux-kernel@...r.kernel.org,
 mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com,
 x86@...nel.org, hpa@...or.com, peterz@...radead.org,
 andrew.cooper3@...rix.com
Subject: Re: [PATCH v1 3/3] x86/entry: Set FRED RSP0 on return to userspace
 instead of context switch

On Wed, Aug 07 2024 at 14:55, Sean Christopherson wrote:
> On Wed, Aug 07, 2024, Thomas Gleixner wrote:
>> Though I wonder if this should not have a per CPU cache for that.
>> 
>>	if (cpu_feature_enabled(X86_FEATURE_FRED)) {
>>		unsigned long rsp0 = (unsigned long) task_stack_page(current) + THREAD_SIZE;
>>
>>		if (__this_cpu_read(cpu_fred_rsp0) != rsp0) {
>>			wrmsrns(MSR_IA32_FRED_RSP0, rsp0);
>>			this_cpu_write(cpu_fred_rsp0, rsp0);
>>		}
>>	}
>> 
>> Hmm?
>
> A per-CPU cache would work for KVM too, KVM would just need to stuff the cache
> with the guest's value instead of setting _TIF_LOAD_USER_STATES.
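
Right. Stuffing the cache would be something like this (sketch only,
both names made up):

	/*
	 * Record what the MSR holds after running the guest so that the
	 * next exit to user space sees the mismatch and restores the
	 * host task's RSP0.
	 */
	this_cpu_write(cpu_fred_rsp0, guest_fred_rsp0);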

There are two options for using the cache:

1) Use a cache only

static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
						  unsigned long ti_work)
{
	if (IS_ENABLED(CONFIG_X86_DEBUG_FPU) || unlikely(ti_work))
		arch_exit_work(ti_work);

	fred_rsp0_magic();
}

2) Use both the TIF bit and the cache

static inline void arch_exit_work(unsigned long ti_work)
{
        if (....)
        ...
        fred_rsp0_magic();
}
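
In both variants fred_rsp0_magic() would be the compare and write from
the snippet quoted above, i.e. something like this sketch (cpu_fred_rsp0
is a made up per CPU variable):

static DEFINE_PER_CPU(unsigned long, cpu_fred_rsp0);

static __always_inline void fred_rsp0_magic(void)
{
	/* Top of the task stack is the user space RSP0 of the task */
	unsigned long rsp0 = (unsigned long) task_stack_page(current) + THREAD_SIZE;

	if (cpu_feature_enabled(X86_FEATURE_FRED) &&
	    __this_cpu_read(cpu_fred_rsp0) != rsp0) {
		wrmsrns(MSR_IA32_FRED_RSP0, rsp0);
		this_cpu_write(cpu_fred_rsp0, rsp0);
	}
}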

#1 has the charm that it avoids arch_exit_work() completely when ti_work
   is 0, but has the unconditional check on the per CPU cache variable
   and the extra conditional on every return.

#2 has the charm that it avoids touching the per CPU cache variable on
   return when the TIF bit is not set, but comes with the downside of
   going through the ti_work chain to update RSP0.
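
In the #2 variant the TIF bit would be set at context switch instead of
doing the MSR write there, i.e. something like this (hypothetical, with
the _TIF_LOAD_USER_STATES bit Sean mentioned):

	/* In __switch_to(): defer the RSP0 update to return to user space */
	if (cpu_feature_enabled(X86_FEATURE_FRED))
		set_tsk_thread_flag(next_p, TIF_LOAD_USER_STATES);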

I'm inclined to claim that #1 wins. Why?

The probability for a RSP0 update is higher than the probability for
ti_work != 0, and for syscall heavy workloads the extra per CPU read and
the conditional are not really relevant when we stick this into struct
pcpu_hot. That cacheline is hot anyway in that code path due to
'current' and other members there.
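
Concretely something like this (sketch, existing members elided, field
name made up):

	struct pcpu_hot {
		struct task_struct	*current_task;
		/* ... other hot members ... */
		unsigned long		fred_rsp0;	/* cached MSR_IA32_FRED_RSP0 */
	};

accessed via __this_cpu_read(pcpu_hot.fred_rsp0) in fred_rsp0_magic().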

But that's a problem for micro benchmark experts to figure out. I'm not
one of them :)

In any case the cache should be a win over an unconditional MSR write,
independent of WRMSRNS.

Thanks,

        tglx
