Message-ID: <Z2OBYYQq6cwptSws@google.com>
Date: Wed, 18 Dec 2024 18:13:53 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Maxim Levitsky <mlevitsk@...hat.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org, linux-kernel@...r.kernel.org, 
	Peter Xu <peterx@...hat.com>
Subject: Re: [PATCH 14/20] KVM: selftests: Collect *all* dirty entries in each
 dirty_log_test iteration

On Tue, Dec 17, 2024, Maxim Levitsky wrote:
> On Fri, 2024-12-13 at 17:07 -0800, Sean Christopherson wrote:
> > Collect all dirty entries during each iteration of dirty_log_test by
> > doing a final collection after the vCPU has been stopped.  To deal with
> > KVM's destructive approach to getting the dirty bitmaps, use a second
> > bitmap for the post-stop collection.
> > 
> > Collecting all entries that were dirtied during an iteration simplifies
> > the verification logic *and* improves test coverage.
> > 
> >   - If a page is written during iteration X, but not seen as dirty until
> >     X+1, the test can get a false pass if the page is also written during
> >     X+1.
> > 
> >   - If a dirty page used a stale value from a previous iteration, the test
> >     would grant a false pass.
> > 
> >   - If a missed dirty log occurs in the last iteration, the test would fail
> >     to detect the issue.
> > 
> > E.g. modifying mark_page_dirty_in_slot() to dirty an unwritten gfn:
> > 
> > 	if (memslot && kvm_slot_dirty_track_enabled(memslot)) {
> > 		unsigned long rel_gfn = gfn - memslot->base_gfn;
> > 		u32 slot = (memslot->as_id << 16) | memslot->id;
> > 
> > 		if (!vcpu->extra_dirty &&
> > 		    gfn_to_memslot(kvm, gfn + 1) == memslot) {
> > 			vcpu->extra_dirty = true;
> > 			mark_page_dirty_in_slot(kvm, memslot, gfn + 1);
> > 		}
> > 		if (kvm->dirty_ring_size && vcpu)
> > 			kvm_dirty_ring_push(vcpu, slot, rel_gfn);
> > 		else if (memslot->dirty_bitmap)
> > 			set_bit_le(rel_gfn, memslot->dirty_bitmap);
> > 	}
> > 
> > isn't detected with the current approach, even with an interval of 1ms
> > (when running nested in a VM; bare metal would be even *less* likely to
> > detect the bug due to the vCPU being able to dirty more memory).  Whereas
> > collecting all dirty entries consistently detects failures with an
> > interval of 700ms or more (the longer interval means a higher probability
> > of an actual write to the prematurely-dirtied page).
> 
> While this patch might improve coverage for this particular case,
> I think that this patch will make the test much more deterministic,

The verification will be more deterministic, but the actual testcase itself is
just as random as it was before.

> and thus have less chance of catching various races that can happen in the kernel.
> 
> In fact, in my opinion, I'd prefer moving this test in the other direction by
> verifying the dirty ring while the *vCPU runs* as well, in other words, not
> stopping the vCPU at all unless its dirty ring is full.

I don't see how letting verification be coincident with the vCPU running is at
all interesting for dirty logging.  Host userspace reading guest memory while
it's being written by the guest doesn't stress KVM's dirty logging in any
meaningful way.  E.g. it exercises hardware far more than anything else.  If we
want to stress that boundary, then we should spin up another vCPU or host
thread to randomly read while the test is in progress, and also to write to
bytes 4095:8 (assuming a 4KiB page size), e.g. to ensure that dueling writes to
a cacheline that trigger false sharing are handled correctly.
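
As a rough illustration of that stressor idea, the sketch below spawns a host
thread that randomly reads guest memory and scribbles on bytes 4095:8 of each
page.  All names here are hypothetical, and the assumption that the vCPU under
test writes an 8-byte value at offset 0 of each page is an illustration for the
example, not something taken from the selftest:

/*
 * Hypothetical stressor: a host thread that races with the guest by
 * randomly reading guest memory and writing to bytes 4095:8 of each
 * 4KiB page.  Assumes (for illustration only) that the vCPU-under-test
 * writes an 8-byte value at offset 0, so host and guest dirty the same
 * cacheline without their writes overlapping, i.e. false sharing on
 * the first line of the page.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define PAGE_SIZE		4096
#define GUEST_WRITE_SIZE	8

struct stressor_args {
	uint8_t *mem;		/* host mapping of the memory under test */
	uint64_t nr_pages;
	atomic_bool stop;
};

static void *host_stressor(void *opaque)
{
	struct stressor_args *args = opaque;
	unsigned int seed = 0;

	while (!atomic_load(&args->stop)) {
		uint64_t page = rand_r(&seed) % args->nr_pages;
		volatile uint8_t *base = args->mem + page * PAGE_SIZE;

		/* Racy read of the qword the guest is writing. */
		(void)*(volatile uint64_t *)base;

		/* Write somewhere in the 4095:8 range of the page. */
		base[GUEST_WRITE_SIZE +
		     rand_r(&seed) % (PAGE_SIZE - GUEST_WRITE_SIZE)] = 0xaa;
	}
	return NULL;
}

int main(void)
{
	struct stressor_args args = { .nr_pages = 512 };
	pthread_t thread;

	/* Stand-in for the selftest's real memslot mapping. */
	args.mem = mmap(NULL, args.nr_pages * PAGE_SIZE,
			PROT_READ | PROT_WRITE,
			MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	pthread_create(&thread, NULL, host_stressor, &args);
	sleep(1);	/* the real test would run its iterations here */
	atomic_store(&args.stop, true);
	pthread_join(&thread, NULL);
	return 0;
}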

But letting the vCPU-under-test keep changing the memory while it's being
validated would add significant complexity, without any benefit as far as I can
see.  As evidenced by the bug the current approach can't detect, heavily
stressing the system is meaningless if it's impossible to separate the signal
from the noise.
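
To make the two-bitmap scheme from the quoted changelog concrete, here is a toy
illustration; the bitmap helpers and sizes are made up for the example and are
not the actual dirty_log_test code.  Because KVM's dirty log retrieval is
destructive (it clears the bits it hands back), the final post-stop collection
has to land in a second bitmap, and verification then checks the union of the
two:

#include <limits.h>
#include <stdio.h>

#define BITS_PER_LONG	(sizeof(unsigned long) * CHAR_BIT)
#define NR_PAGES	128
#define NR_LONGS	((NR_PAGES + BITS_PER_LONG - 1) / BITS_PER_LONG)

static void set_dirty(unsigned long *bmap, unsigned long pfn)
{
	bmap[pfn / BITS_PER_LONG] |= 1ul << (pfn % BITS_PER_LONG);
}

static int test_dirty(const unsigned long *bmap, unsigned long pfn)
{
	return !!(bmap[pfn / BITS_PER_LONG] & (1ul << (pfn % BITS_PER_LONG)));
}

int main(void)
{
	unsigned long running[NR_LONGS] = {0};   /* reaped while the vCPU ran */
	unsigned long post_stop[NR_LONGS] = {0}; /* final, post-stop reap */
	unsigned long i;

	/* A page whose dirty bit was harvested before the vCPU stopped... */
	set_dirty(running, 3);
	/* ...and one that only shows up in the final collection. */
	set_dirty(post_stop, 42);

	/* Verify against the union, so no dirty entry is lost. */
	for (i = 0; i < NR_LONGS; i++)
		running[i] |= post_stop[i];

	printf("page 3: %d, page 42: %d\n",
	       test_dirty(running, 3), test_dirty(running, 42));
	return 0;
}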
