Message-ID: <874jxzvxak.wl-maz@kernel.org>
Date:   Fri, 26 Aug 2022 16:28:51 +0100
From:   Marc Zyngier <maz@...nel.org>
To:     Paolo Bonzini <pbonzini@...hat.com>
Cc:     Oliver Upton <oliver.upton@...ux.dev>,
        Gavin Shan <gshan@...hat.com>, kvmarm@...ts.cs.columbia.edu,
        linux-arm-kernel@...ts.infradead.org, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
        linux-kselftest@...r.kernel.org, peterx@...hat.com, corbet@....net,
        james.morse@....com, alexandru.elisei@....com,
        suzuki.poulose@....com, catalin.marinas@....com, will@...nel.org,
        shuah@...nel.org, seanjc@...gle.com, drjones@...hat.com,
        dmatlack@...gle.com, bgardon@...gle.com, ricarkol@...gle.com,
        zhenyzha@...hat.com, shan.gavin@...il.com
Subject: Re: [PATCH v1 1/5] KVM: arm64: Enable ring-based dirty memory tracking

On Fri, 26 Aug 2022 11:58:08 +0100,
Paolo Bonzini <pbonzini@...hat.com> wrote:
> 
> On 8/23/22 22:35, Marc Zyngier wrote:
> >> Heh, yeah I need to get that out the door. I'll also note that Gavin's
> >> changes are still relevant without that series, as we do write unprotect
> >> in parallel at PTE granularity after commit f783ef1c0e82 ("KVM: arm64:
> >> Add fast path to handle permission relaxation during dirty logging").
> > 
> > Ah, true. Now if only someone could explain how the whole
> > producer-consumer thing works without a trace of a barrier, that'd be
> > great...
> 
> Do you mean this?
>
> void kvm_dirty_ring_push(struct kvm_dirty_ring *ring, u32 slot, u64 offset)

Of course not. I mean this:

static int kvm_vm_ioctl_reset_dirty_pages(struct kvm *kvm)
{
	unsigned long i;
	struct kvm_vcpu *vcpu;
	int cleared = 0;

	if (!kvm->dirty_ring_size)
		return -EINVAL;

	mutex_lock(&kvm->slots_lock);

	kvm_for_each_vcpu(i, vcpu, kvm)
		cleared += kvm_dirty_ring_reset(vcpu->kvm, &vcpu->dirty_ring);
[...]
}

and this

int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring)
{
	u32 cur_slot, next_slot;
	u64 cur_offset, next_offset;
	unsigned long mask;
	int count = 0;
	struct kvm_dirty_gfn *entry;
	bool first_round = true;

	/* This is only needed to make compilers happy */
	cur_slot = cur_offset = mask = 0;

	while (true) {
		entry = &ring->dirty_gfns[ring->reset_index & (ring->size - 1)];

		if (!kvm_dirty_gfn_harvested(entry))
			break;
[...]

}

which provides no ordering whatsoever when a ring is updated from one
CPU and reset from another.
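
(Sketching the sort of thing I would expect to see here, purely as an
illustration and not what is in the tree; apart from
kvm_dirty_gfn_harvested(), the names and the 'flags' layout below are
my assumptions: an acquire load of the entry flags on the reset side,
paired with a release store from whichever side flips them, plus a
release when the entry is handed back to the producer:

/*
 * Illustration only: assumes struct kvm_dirty_gfn carries a 'flags'
 * word, with KVM_DIRTY_GFN_F_RESET set by the harvester, and a
 * kvm_dirty_gfn_set_invalid() helper used by the reset path.
 */
static inline bool kvm_dirty_gfn_harvested(struct kvm_dirty_gfn *gfn)
{
	/*
	 * Acquire: pairs with the harvester's release store of the
	 * RESET bit, so the reset path cannot read a stale entry.
	 */
	return smp_load_acquire(&gfn->flags) & KVM_DIRTY_GFN_F_RESET;
}

static inline void kvm_dirty_gfn_set_invalid(struct kvm_dirty_gfn *gfn)
{
	/*
	 * Release: make sure we are done reading slot/offset before
	 * the entry becomes visible to the producer side again.
	 */
	smp_store_release(&gfn->flags, 0);
}

As quoted above, neither side currently has anything of that sort.)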

	M.

-- 
Without deviation from the norm, progress is not possible.
