Message-ID: <54D3C314.3070903@redhat.com>
Date:	Thu, 05 Feb 2015 14:23:00 -0500
From:	Rik van Riel <riel@...hat.com>
To:	Paolo Bonzini <pbonzini@...hat.com>,
	Jan Kiszka <jan.kiszka@...mens.com>,
	linux-kernel@...r.kernel.org, kvm@...r.kernel.org
CC:	rkrcmar@...hat.com, mtosatti@...hat.com
Subject: Re: [PATCH RFC] kvm: x86: add halt_poll module parameter

On 02/05/2015 02:20 PM, Paolo Bonzini wrote:
> 
> 
> On 05/02/2015 19:55, Jan Kiszka wrote:
>>> This patch introduces a new module parameter for the KVM module; when it
>>> is present, KVM attempts a bit of polling on every HLT before scheduling
>>> itself out via kvm_vcpu_block.
>>
>> Wouldn't it be better to tune this on a per-VM basis? Think of mixed
>> workloads with some latency-sensitive and some standard VMs.
> 
> Yes, but:
> 
> 1) this turned out to be very cheap, so a per-host tunable is not too bad;
> 
> 2) it affects only very few workloads (for example, network
> workloads can already poll in the guest), so it matters to few
> people;
> 
> 3) long term we want it to auto-tune anyway, which is better than
> tuning it per VM.

We may want to auto-tune it per VM.

However, if auto-tuning works well, I do not think
we want to expose a user-visible per-VM tunable and
commit to keeping that kind of interface around
forever.

