Date:   Sun, 13 May 2018 10:47:12 +0200
From:   Vitaly Kuznetsov <vkuznets@...hat.com>
To:     Radim Krčmář <rkrcmar@...hat.com>
Cc:     kvm@...r.kernel.org, x86@...nel.org,
        Paolo Bonzini <pbonzini@...hat.com>,
        Roman Kagan <rkagan@...tuozzo.com>,
        "K. Y. Srinivasan" <kys@...rosoft.com>,
        Haiyang Zhang <haiyangz@...rosoft.com>,
        Stephen Hemminger <sthemmin@...rosoft.com>,
        "Michael Kelley \(EOSG\)" <Michael.H.Kelley@...rosoft.com>,
        Mohammed Gamal <mmorsy@...hat.com>,
        Cathy Avery <cavery@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 4/6] KVM: x86: hyperv: simplistic HVCALL_FLUSH_VIRTUAL_ADDRESS_{LIST,SPACE} implementation

Radim Krčmář <rkrcmar@...hat.com> writes:

> 2018-04-16 13:08+0200, Vitaly Kuznetsov:
...
>
>> +		/*
>> +		 * vcpu->arch.cr3 may not be up-to-date for running vCPUs so we
>> +		 * can't analyze it here, flush TLB regardless of the specified
>> +		 * address space.
>> +		 */
>> +		kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
>> +
>> +		/*
>> +		 * It is possible that vCPU will migrate and we will kick wrong
>> +		 * CPU but vCPU's TLB will anyway be flushed upon migration as
>> +		 * we already made KVM_REQ_TLB_FLUSH request.
>> +		 */
>> +		cpu = vcpu->cpu;
>> +		if (cpu != -1 && cpu != me && cpu_online(cpu) &&
>> +		    kvm_arch_vcpu_should_kick(vcpu))
>> +			cpumask_set_cpu(cpu, &hv_current->tlb_lush);
>> +	}
>> +
>> +	if (!cpumask_empty(&hv_current->tlb_lush))
>> +		smp_call_function_many(&hv_current->tlb_lush, ack_flush,
>> +				       NULL, true);
>
> Hm, quite a lot of code duplication with EX hypercall and also
> kvm_make_all_cpus_request ... I'm thinking about making something like
>
>   kvm_make_some_cpus_request(struct kvm *kvm, unsigned int req,
>                              bool (*predicate)(struct kvm_vcpu *vcpu))
>
> or to implement a vp_index -> vcpu mapping and using
>
>   kvm_vcpu_request_mask(struct kvm *kvm, unsigned int req, long *vcpu_bitmap)
>
> The latter would probably simplify logic of the EX hypercall.

We really want to avoid memory allocation for the cpumask on this path,
and that's exactly what kvm_make_all_cpus_request() currently does (when
CPUMASK_OFFSTACK is set). A vcpu bitmap is probably OK, as KVM_MAX_VCPUS
is much lower than NR_CPUS.
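To make the size argument concrete, here is a minimal userspace sketch of a stack-allocated vcpu bitmap (the 288 value for KVM_MAX_VCPUS and the helper names are assumptions for illustration, not the kernel's bitmap API):

```c
#include <assert.h>
#include <limits.h>

/* Assumed KVM_MAX_VCPUS value for x86 of that era -- an assumption here. */
#define KVM_MAX_VCPUS 288
#define BITS_PER_LONG (sizeof(long) * CHAR_BIT)
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* 288 bits -> 5 longs (40 bytes) on 64-bit: cheap enough to live on the
 * stack, so the hypercall path needs no heap allocation at all. */
static void vcpu_bitmap_set(long *map, unsigned int bit)
{
	map[bit / BITS_PER_LONG] |= 1UL << (bit % BITS_PER_LONG);
}

static int vcpu_bitmap_test(const long *map, unsigned int bit)
{
	return !!(map[bit / BITS_PER_LONG] & (1UL << (bit % BITS_PER_LONG)));
}
```

By contrast, an NR_CPUS-sized cpumask on a large-NR_CPUS config is exactly what forces CPUMASK_OFFSTACK and the allocation we want to avoid.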

Avoiding the cpumask allocation leads us to the following API:

bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
				 long *vcpu_bitmap, cpumask_var_t tmp);

or, if we want to prettify this a little bit, we may end up with the
following pair:

bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
				 long *vcpu_bitmap);

bool __kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
  				   long *vcpu_bitmap, cpumask_var_t tmp);

and from the hyperv code we'd use the latter. With this, no code
duplication is required.
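Not the kernel code itself, but a userspace sketch of how the proposed pair could fit together (all struct and function names here are hypothetical stand-ins): the double-underscore variant takes caller-provided scratch storage so the hot path never allocates, while the plain wrapper supplies its own.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the kernel types; sizes are assumptions
 * chosen so a single unsigned long can serve as both bitmap and cpumask. */
#define MAX_VCPUS 64

struct vcpu { int cpu; unsigned int requests; };
struct vm   { struct vcpu vcpus[MAX_VCPUS]; int nr_vcpus; };

/* Core variant: the caller passes the scratch cpumask, so no allocation
 * happens here -- the point of splitting the API in two. */
static bool __make_vcpus_request_mask(struct vm *vm, unsigned int req,
				      unsigned long vcpu_bitmap,
				      unsigned long *tmp_cpumask)
{
	bool kicked = false;
	int i;

	*tmp_cpumask = 0;
	for (i = 0; i < vm->nr_vcpus; i++) {
		if (!(vcpu_bitmap & (1UL << i)))
			continue;
		vm->vcpus[i].requests |= req;	/* kvm_make_request() analogue */
		if (vm->vcpus[i].cpu >= 0) {	/* running: remember its CPU */
			*tmp_cpumask |= 1UL << vm->vcpus[i].cpu;
			kicked = true;
		}
	}
	/* here the kernel would smp_call_function_many(tmp_cpumask, ...) */
	return kicked;
}

/* Convenience wrapper: supplies its own scratch mask for callers that
 * don't mind the (stack) cost. */
static bool make_vcpus_request_mask(struct vm *vm, unsigned int req,
				    unsigned long vcpu_bitmap)
{
	unsigned long tmp;

	return __make_vcpus_request_mask(vm, req, vcpu_bitmap, &tmp);
}
```

The hyperv flush path would call the __ variant with the per-vcpu_hv scratch mask it already owns, which is what removes the duplication with kvm_make_all_cpus_request().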

Does this look acceptable?

-- 
  Vitaly
