Open Source and information security mailing list archives
 
Date:   Tue, 5 Oct 2021 09:50:24 +0200
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     Thomas Gleixner <tglx@...utronix.de>,
        "Chang S. Bae" <chang.seok.bae@...el.com>, bp@...e.de,
        luto@...nel.org, mingo@...nel.org, x86@...nel.org
Cc:     len.brown@...el.com, lenb@...nel.org, dave.hansen@...el.com,
        thiago.macieira@...el.com, jing2.liu@...el.com,
        ravi.v.shankar@...el.com, linux-kernel@...r.kernel.org,
        kvm@...r.kernel.org
Subject: Re: [PATCH v10 10/28] x86/fpu/xstate: Update the XSTATE save function
 to support dynamic states

On 02/10/21 23:31, Thomas Gleixner wrote:
> You have two options:
> 
>    1) Always allocate the large buffer size which is required to
>       accommodate all possible features.
> 
>       Trivial, but waste of memory.
> 
>    2) Make the allocation dynamic which seems to be trivial to do in
>       kvm_load_guest_fpu() at least for vcpu->user_fpu.
> 
>       The vcpu->guest_fpu handling can probably be postponed to the
>       point where AMX is actually exposed to guests, but it's probably
>       not the worst idea to think about the implications now.
> 
> Paolo, any opinions?

Unless we're missing something, dynamic allocation should not be hard to 
do for both guest_fpu and user_fpu; either near the call sites of 
kvm_save_current_fpu, or in the function itself.  Basically adding 
something like

	struct kvm_fpu {
		struct fpu *state;
		unsigned size;
	} user_fpu, guest_fpu;

to struct kvm_vcpu.  Since the size can vary, the allocation can be done 
simply with kzalloc instead of the x86_fpu_cache that KVM has now.

The only small complication is that kvm_save_current_fpu is called 
within fpregs_lock; the allocation has to be outside so that you can use 
GFP_KERNEL even on RT kernels.  If the code looks better with 
fpregs_lock moved within kvm_save_current_fpu, go ahead and do it like that.

Paolo
