Date: Fri, 16 Feb 2024 09:03:31 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Oliver Upton <oliver.upton@...ux.dev>
Cc: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org, linux-kernel@...r.kernel.org, 
	David Matlack <dmatlack@...gle.com>, Pasha Tatashin <tatashin@...gle.com>, 
	Michael Krebs <mkrebs@...gle.com>
Subject: Re: [PATCH 2/2] KVM: selftests: Test forced instruction emulation in
 dirty log test (x86 only)

On Fri, Feb 16, 2024, Oliver Upton wrote:
> On Thu, Feb 15, 2024 at 04:26:02PM -0800, Sean Christopherson wrote:
> > On Thu, Feb 15, 2024, Oliver Upton wrote:
> > > On Thu, Feb 15, 2024 at 01:33:48PM -0800, Sean Christopherson wrote:
> > > 
> > > [...]
> > > 
> > > > +/* TODO: Expand this madness to also support u8, u16, and u32 operands. */
> > > > +#define vcpu_arch_put_guest(mem, val, rand) 						\
> > > > +do {											\
> > > > +	if (!is_forced_emulation_enabled || !(rand & 1)) {				\
> > > > +		*mem = val;								\
> > > > +	} else if (rand & 2) {								\
> > > > +		__asm__ __volatile__(KVM_FEP "movq %1, %0"				\
> > > > +				     : "+m" (*mem)					\
> > > > +				     : "r" (val) : "memory");				\
> > > > +	} else {									\
> > > > +		uint64_t __old = READ_ONCE(*mem);					\
> > > > +											\
> > > > +		__asm__ __volatile__(KVM_FEP LOCK_PREFIX "cmpxchgq %[new], %[ptr]"	\
> > > > +				     : [ptr] "+m" (*mem), [old] "+a" (__old)		\
> > > > +				     : [new]"r" (val) : "memory", "cc");		\
> > > > +	}										\
> > > > +} while (0)
> > > > +
> > > 
> > > Last bit of bikeshedding then I'll go... Can you just use a C function
> > > and #define it so you can still do ifdeffery to slam in a default
> > > implementation?
> > 
> > Yes, but the macro shenanigans aren't to create a default, they're to set the
> > stage for expanding to other sizes without having to do:
> > 
> >   vcpu_arch_put_guest{8,16,32,64}()
> > 
> > or if we like bytes instead of bits:
> > 
> >   vcpu_arch_put_guest{1,2,4,8}()
> > 
> > I'm not completely against that approach; it's not _that_ much copy+paste
> > boilerplate, but it's enough that I think that macros would be a clear win,
> > especially if we want to expand what instructions are used.
> 
> Oh, I see what you're after. Yeah, macro shenanigans are the only way
> out then. Wasn't clear to me if the interface you wanted w/ the selftest
> was a u64 write that you cracked into multiple writes behind the
> scenes.
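
(For concreteness, the per-size alternative discussed above would look roughly
like this; a hypothetical sketch, one near-verbatim copy per operand width:)

	static inline void vcpu_arch_put_guest64(uint64_t *mem, uint64_t val)
	{
		if (!is_forced_emulation_enabled) {
			*mem = val;
			return;
		}
		/* Force KVM to emulate the store via the magic prefix. */
		__asm__ __volatile__(KVM_FEP "movq %1, %0"
				     : "+m" (*mem)
				     : "r" (val) : "memory");
	}

	/* ...repeated near-verbatim for the 8-, 16-, and 32-bit variants. */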

I don't want to split u64 into multiple writes, as that would really violate the
principle of least surprise.  Even the RMW of the CMPXCHG is pushing things.

What I want is to provide an API that can be used by tests to generate guest writes
for the native/common sizes.  E.g. so that xen_shinfo_test can write 8-bit fields
using the APIs (don't ask me how long it took me to find a decent example that
wasn't using a 64-bit value :-) ).

	struct vcpu_info {
		uint8_t evtchn_upcall_pending;
		uint8_t evtchn_upcall_mask;
		unsigned long evtchn_pending_sel;
		struct arch_vcpu_info arch;
		struct pvclock_vcpu_time_info time;
	}; /* 64 bytes (x86) */

	vcpu_arch_put_guest(vi->evtchn_upcall_pending, 0);
	vcpu_arch_put_guest(vi->evtchn_pending_sel, 0);

And of course fleshing that out poked a bunch of holes in my plan, so after a
bit of scope creep...

---
#define vcpu_arch_put_guest(mem, __val) 						\
do {											\
	const typeof(mem) val = (__val);						\
											\
	if (!is_forced_emulation_enabled || guest_random_bool(&guest_rng)) {		\
		(mem) = val;								\
	} else if (guest_random_bool(&guest_rng)) {					\
		__asm__ __volatile__(KVM_FEP "mov %1, %0"				\
				     : "+m" (mem)					\
				     : "r" (val) : "memory");				\
	} else {									\
		uint64_t __old = READ_ONCE(mem);					\
											\
		__asm__ __volatile__(KVM_FEP LOCK_PREFIX "cmpxchg %[new], %[ptr]"	\
				     : [ptr] "+m" (mem), [old] "+a" (__old)		\
				     : [new]"r" (val) : "memory", "cc");		\
	}										\
} while (0)
---
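
(For reference, the macro leans on a few bits of selftests infrastructure;
quoting them from memory, so treat the exact definitions as approximate:)

	/*
	 * x86's forced emulation prefix; KVM emulates the instruction that
	 * follows when the force_emulation_prefix module param is enabled.
	 */
	#define KVM_FEP	"ud2; .byte 'k', 'v', 'm';"

	extern bool is_forced_emulation_enabled;

Note that because val is declared with typeof(mem) and the mov/cmpxchg
mnemonics carry no size suffix, the assembler derives the operand size from
the register chosen for "r" (val), which is what lets a single macro cover
8/16/32/64-bit stores.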

Where guest_rng is a global pRNG instance

	struct guest_random_state {
		uint32_t seed;
	};

	extern uint32_t guest_random_seed;
	extern struct guest_random_state guest_rng;

that's configured with a completely random seed by default, but can be overridden
by tests for determinism, e.g. in dirty_log_perf_test

  void __attribute((constructor)) kvm_selftest_init(void)
  {
	/* Tell stdout not to buffer its content. */
	setbuf(stdout, NULL);

	guest_random_seed = random();

	kvm_selftest_arch_init();
  }

and automatically configured for each VM.

	pr_info("Random seed: 0x%x\n", guest_random_seed);
	guest_rng = new_guest_random_state(guest_random_seed);
	sync_global_to_guest(vm, guest_rng);

	kvm_arch_vm_post_create(vm);
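
(guest_random_bool(), as used in the macro above, would be a thin wrapper over
the library's existing guest_random_u32() helper; a minimal sketch, assuming
the Park-Miller LCG that test_util.h already uses, quoted from memory:)

	static inline uint32_t guest_random_u32(struct guest_random_state *state)
	{
		/* "Minimal standard" Park-Miller LCG. */
		state->seed = (uint64_t)state->seed * 48271 % ((uint32_t)(1 << 31) - 1);
		return state->seed;
	}

	static inline bool guest_random_bool(struct guest_random_state *state)
	{
		return guest_random_u32(state) & 1;
	}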

Long term, I want to get to the point where the library code supports specifying
a seed for every test, i.e. so that every test that uses the pRNG can be as
deterministic as possible.  But that's definitely a future problem :-)
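
(A common override hook could be as small as the below; hypothetical sketch,
where the "-s" flag and the parse_common_args() helper are made up for
illustration, while guest_random_seed is the extern from above:)

	#include <stdlib.h>
	#include <unistd.h>

	/* Let any test pin the pRNG for reproducibility, e.g. -s 0x12345678. */
	static void parse_common_args(int argc, char *argv[])
	{
		int opt;

		while ((opt = getopt(argc, argv, "s:")) != -1) {
			if (opt == 's')
				guest_random_seed = strtoul(optarg, NULL, 0);
		}
	}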
