Date:   Fri, 18 Aug 2017 17:22:30 +0200
From:   Denys Vlasenko <dvlasenk@...hat.com>
To:     Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krčmář <rkrcmar@...hat.com>,
        Suravee Suthikulpanit <Suravee.Suthikulpanit@....com>
Cc:     Joerg Roedel <joro@...tes.org>, tglx@...utronix.de,
        mingo@...hat.com, hpa@...or.com, x86@...nel.org,
        kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM: SVM: refactor avic VM ID allocation

On 08/17/2017 04:33 PM, Paolo Bonzini wrote:
> On 15/08/2017 22:12, Radim Krčmář wrote:
>> 2017-08-11 22:11+0200, Denys Vlasenko:
>>> With lightly tweaked defconfig:
>>>
>>>     text    data     bss      dec     hex filename
>>> 11259661 5109408 2981888 19350957 12745ad vmlinux.before
>>> 11259661 5109408  884736 17253805 10745ad vmlinux.after
>>>
>>> Only compile-tested.
>>>
>>> Signed-off-by: Denys Vlasenko <dvlasenk@...hat.com>
>>> Cc: Joerg Roedel <joro@...tes.org>
>>> Cc: pbonzini@...hat.com
>>> Cc: rkrcmar@...hat.com
>>> Cc: tglx@...utronix.de
>>> Cc: mingo@...hat.com
>>> Cc: hpa@...or.com
>>> Cc: x86@...nel.org
>>> Cc: kvm@...r.kernel.org
>>> Cc: linux-kernel@...r.kernel.org
>>> ---
>>
>> Ah, I suspected my todo wasn't this short;  thanks for the patch!
>>
>>> @@ -1468,6 +1433,22 @@ static int avic_vm_init(struct kvm *kvm)
>>>  	clear_page(page_address(l_page));
>>>
>>>  	spin_lock_irqsave(&svm_vm_data_hash_lock, flags);
>>> + again:
>>> +	vm_id = next_vm_id = (next_vm_id + 1) & AVIC_VM_ID_MASK;
>>> +	if (vm_id == 0) { /* id is 1-based, zero is not okay */
>>
>> Suravee, did the reserved zero mean something?
>>
>>> +		next_vm_id_wrapped = 1;
>>> +		goto again;
>>> +	}
>>> +	/* Is it still in use? Only possible if wrapped at least once */
>>> +	if (next_vm_id_wrapped) {
>>> +		hash_for_each_possible(svm_vm_data_hash, ka, hnode, vm_id) {
>>> +			struct kvm *k2 = container_of(ka, struct kvm, arch);
>>> +			struct kvm_arch *vd2 = &k2->arch;
>>> +			if (vd2->avic_vm_id == vm_id)
>>> +				goto again;
>>
>> Although hitting the case where all 2^24 ids are already used is
>> practically impossible, I don't like the loose end ...
>
> I think even the case where 2^16 ids are used deserves a medal.  Why
> don't we just make the bitmap 8 KiB and call it a day? :)

Well, RAM is cheap, but a 4-byte variable is still thousands of times
smaller than an 8 KiB bitmap.
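
For comparison, a minimal sketch of what the 8 KiB bitmap variant could look
like (not what this patch does; the define and lock names below are made up,
loosely modelled on the existing AVIC ones):

    /* 2^16 ids -> 64 Ki bits -> 8 KiB of .bss */
    #define AVIC_VM_ID_BITS_SMALL   16
    #define AVIC_VM_ID_NR_SMALL     (1 << AVIC_VM_ID_BITS_SMALL)
    static DECLARE_BITMAP(avic_vm_id_bitmap, AVIC_VM_ID_NR_SMALL);
    static DEFINE_SPINLOCK(avic_vm_id_lock);

    static int avic_get_next_vm_id(void)
    {
            int id;

            spin_lock(&avic_vm_id_lock);
            /* id 0 stays reserved, as in the current code */
            id = find_next_zero_bit(avic_vm_id_bitmap, AVIC_VM_ID_NR_SMALL, 1);
            if (id < AVIC_VM_ID_NR_SMALL)
                    __set_bit(id, avic_vm_id_bitmap);
            else
                    id = -EAGAIN;
            spin_unlock(&avic_vm_id_lock);

            return id;
    }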

Since a 256-entry hash table is used here, you need roughly a hundred live VMs
before you start seeing a (very small) degradation in the speed of creating
new VMs compared to the bitmap approach, and even then only after 16777216
(2^24) VMs have been created since reboot.

If you want to spend RAM on speeding this up, you can increase the hash table
size instead. That would speed up avic_ga_log_notifier() too.
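
(For reference, the table is sized by a bits define in svm.c; the exact name
below is from memory and may be off, but bumping it is all that is needed.
Going from 8 to e.g. 12 bits means 4096 hlist_head pointers, i.e. 32 KiB on
64-bit, still tiny:)

    #define SVM_VM_DATA_HASH_BITS   12      /* currently 8 -> 256 buckets */
    static DEFINE_HASHTABLE(svm_vm_data_hash, SVM_VM_DATA_HASH_BITS);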
