Open Source and information security mailing list archives
 
Date:   Thu, 12 Apr 2018 16:33:08 +0200
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     KarimAllah Ahmed <karahmed@...zon.de>, x86@...nel.org,
        linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc:     hpa@...or.com, jmattson@...gle.com, mingo@...hat.com,
        rkrcmar@...hat.com, tglx@...utronix.de
Subject: Re: [PATCH 04/10] KVM: Introduce a new guest mapping API

Coming back to this (nice) series.

On 21/02/2018 18:47, KarimAllah Ahmed wrote:
> +bool kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
> +{
> +	kvm_pfn_t pfn;
> +	void *kaddr = NULL;

Can we s/kaddr/hva/ since that's the standard nomenclature?

> +	struct page *page = NULL;
> +
> +	if (map->kaddr && map->gfn == gfn)
> +		/* If the mapping is valid and guest memory is already mapped */
> +		return true;

Please return 0/-EINVAL instead.  More important, this optimization is
problematic because:

1) the underlying memslots array could have changed.  You'd also need to
store the generation count (see kvm_read_guest_cached for an example)

2) worse, the memslots array could have switched between the SMM and
non-SMM address spaces.  This is by the way the reason why there is no
kvm_vcpu_read_guest_cached API.

However, all the places you're changing in patches 4-10 are already doing
kvm_vcpu_gpa_to_page, so I suggest just dropping this optimization.

> +	else if (map->kaddr)
> +		/* If the mapping is valid but trying to map a different guest pfn */
> +		kvm_vcpu_unmap(map);
> +
> +	pfn = kvm_vcpu_gfn_to_pfn(vcpu, gfn);

Please change the API to:

static int __kvm_map_gfn(struct kvm_memory_slot *slot, gfn_t gfn,
		         struct kvm_host_map *map)

	calling gfn_to_pfn_memslot(slot, gfn)

int kvm_vcpu_map_gfn(struct kvm_vcpu *vcpu, gfn_t gfn,
		     struct kvm_host_map *map)

	calling kvm_vcpu_gfn_to_memslot + __kvm_map_gfn

void kvm_unmap_gfn(struct kvm_host_map *map)

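Filled out, the split might look roughly like this (sketch only; the
vmap()/memremap() body and the map-> assignments from the patch are
elided):

```c
/* Works on a memslot, so vcpu-level (and later vm-level) callers can
 * share the mapping logic. */
static int __kvm_map_gfn(struct kvm_memory_slot *slot, gfn_t gfn,
			 struct kvm_host_map *map)
{
	kvm_pfn_t pfn = gfn_to_pfn_memslot(slot, gfn);

	if (is_error_noslot_pfn(pfn))
		return -EINVAL;

	/* ... vmap()/memremap() and filling in *map, as in the patch ... */
	return 0;
}

int kvm_vcpu_map_gfn(struct kvm_vcpu *vcpu, gfn_t gfn,
		     struct kvm_host_map *map)
{
	return __kvm_map_gfn(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn, map);
}
```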

> +	if (is_error_pfn(pfn))
> +		return false;

Should this be is_error_noslot_pfn?

> +	if (pfn_valid(pfn)) {
> +		page = pfn_to_page(pfn);
> +		kaddr = vmap(&page, 1, VM_MAP, PAGE_KERNEL);
> +	} else {
> +		kaddr = memremap(pfn_to_hpa(pfn), PAGE_SIZE, MEMREMAP_WB);
> +	}
> +
> +	if (!kaddr)
> +		return false;
> +
> +	map->page = page;
> +	map->kaddr = kaddr;
> +	map->pfn = pfn;
> +	map->gfn = gfn;
> +
> +	return true;
> +}

> 
> +void kvm_vcpu_unmap(struct kvm_host_map *map)
> +{
> +	if (!map->kaddr)
> +		return;
> +
> +	if (map->page)
> +		kunmap(map->page);
> +	else
> +		memunmap(map->kaddr);
> +
> +	kvm_release_pfn_dirty(map->pfn);
> +	memset(map, 0, sizeof(*map));

This can clear just map->kaddr (map->hva after the above review).
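Something like this (sketch):

```c
	kvm_release_pfn_dirty(map->pfn);
	/* A NULL hva is what marks the mapping invalid; the remaining
	 * fields are overwritten on the next successful map. */
	map->hva = NULL;
```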

Thanks,

Paolo

> +}
> +
