Date: Mon, 25 Mar 2024 20:12:04 +0000
From: Colton Lewis <coltonlewis@...gle.com>
To: Quentin Perret <qperret@...gle.com>
Cc: kvm@...r.kernel.org, maz@...nel.org, oliver.upton@...ux.dev, 
	james.morse@....com, suzuki.poulose@....com, yuzenghui@...wei.com, 
	catalin.marinas@....com, will@...nel.org, pbonzini@...hat.com, 
	mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com, 
	vincent.guittot@...aro.org, dietmar.eggemann@....com, rostedt@...dmis.org, 
	bsegall@...gle.com, mgorman@...e.de, bristot@...hat.com, vschneid@...hat.com, 
	linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev, 
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] KVM: arm64: Add KVM_CAP to control WFx trapping

Thanks for the feedback.

Quentin Perret <qperret@...gle.com> writes:

> On Friday 22 Mar 2024 at 14:24:35 (+0000), Quentin Perret wrote:
>> On Tuesday 19 Mar 2024 at 16:43:41 (+0000), Colton Lewis wrote:
>> > Add a KVM_CAP to control WFx (WFI or WFE) trapping based on scheduler
>> > runqueue depth. This allows the instructions to be passed through if
>> > the runqueue is shallow or the CPU has support for direct interrupt
>> > injection. They can always be trapped by setting this value to 0.
>> > Technically this means traps will be cleared when the runqueue depth
>> > is 0, but that implies nothing is running anyway, so there is no
>> > reason to care. The default value is 1 to preserve the behavior from
>> > before this option was added.
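
For anyone reading along without the patch handy: today
kvm_arch_vcpu_load() keys this off single_task_running(), and the series
generalizes that to a threshold comparison. A simplified sketch, not the
literal v2 diff; nr_running_this_cpu() and wfx_trap_threshold are
illustrative names, not real helpers:

        /* current upstream behavior in kvm_arch_vcpu_load(), simplified */
        if (single_task_running())
                vcpu_clear_wfx_traps(vcpu);  /* alone on this CPU: pass WFx through */
        else
                vcpu_set_wfx_traps(vcpu);    /* contended CPU: trap WFI/WFE */

        /* with the proposed threshold (illustrative helper names) */
        if (nr_running_this_cpu() > wfx_trap_threshold)
                vcpu_set_wfx_traps(vcpu);
        else
                vcpu_clear_wfx_traps(vcpu);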

>> I recently discovered that this was enabled by default, but it's not
>> obvious to me that everyone will want this enabled, so I'm in favour of
>> figuring out a way to turn it off (in fact we might want to make this
>> feature opt-in, as the status quo used to be to always trap).

Setting the introduced threshold to zero causes WFx to be trapped
whenever anything is running. Is there a problem with doing it that way?

I'd also be interested to get more input before changing the current
default behavior.
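
For concreteness, from the userspace side I'd expect enabling always-trap
to look roughly like this (the cap name below is illustrative, since the
final name/number isn't settled; needs <linux/kvm.h> and <sys/ioctl.h>):

        struct kvm_enable_cap cap = {
                /* illustrative name; final cap name/number TBD */
                .cap = KVM_CAP_ARM_WFX_TRAP_THRESHOLD,
                .args = { 0 },  /* threshold 0: trap whenever anything runs */
        };

        if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap) < 0)
                perror("KVM_ENABLE_CAP");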


>> There are a few potential issues I see with having this enabled:

>>   - a lone vcpu thread on a CPU will completely screw up the host
>>     scheduler's load tracking metrics if the vCPU actually spends a
>>     significant amount of time in WFI (the PELT signal will no longer
>>     be a good proxy for "how much CPU time does this task need");

>>   - the scheduler's decisions will massively impact the behaviour of
>>     the vcpu task itself. Co-scheduling another task with a vcpu task
>>     (or not) will massively change the vcpu's perceived behaviour in a
>>     way that is entirely unpredictable to the scheduler;

>>   - while the above problems might be OK for some users, I don't think
>>     this will always be true, e.g. when running on big.LITTLE systems the
>>     above sounds nightmare-ish;

>>   - the guest spending long periods of time in WFI prevents the host from
>>     being able to enter deeper idle states, which will impact power very
>>     negatively;

>> And probably a whole bunch of other things.

>> > Think about this option as a threshold. The instruction will be
>> > trapped if the runqueue depth is higher than the threshold.

>> So talking about the exact interface, I'm not sure exposing this to
>> userspace is really appropriate. The current rq depth is next to
>> impossible for userspace to control well.

Using runqueue depth follows a suggestion from Oliver [1], with whom I've
also talked about this internally at Google a few times.

But hearing your comment makes me lean more towards an enumeration of
behaviors like TRAP_ALWAYS, TRAP_NEVER, TRAP_IF_MULTIPLE_TASKS.
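
If we went that route, the UAPI could be as simple as the following
(names taken from above, numbering obviously provisional):

        /* provisional sketch; values arbitrary at this point */
        enum kvm_wfx_trap_policy {
                KVM_WFX_TRAP_ALWAYS            = 0, /* always trap WFI/WFE */
                KVM_WFX_TRAP_NEVER             = 1, /* always pass through */
                KVM_WFX_TRAP_IF_MULTIPLE_TASKS = 2, /* current default */
        };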

>> My gut feeling tells me we might want to gate all of this on
>> PREEMPT_FULL instead, since PREEMPT_FULL is pretty much a way to say
>> "I'm willing to give up scheduler tracking accuracy to gain throughput
>> when I've got a task running alone on a CPU". Thoughts?

> And obviously I meant s/PREEMPT_FULL/NOHZ_FULL, but hopefully that was
> clear :-)

Sounds good to me, but I've not touched anything scheduling-related before.
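
If I'm reading the suggestion right, the gate would be something like the
below in vcpu_load. tick_nohz_full_cpu() is the existing predicate from
<linux/tick.h>; whether it's the right condition here is exactly what I'd
want scheduler folks to confirm:

        /* sketch: only drop WFx traps on NOHZ_FULL CPUs, where the user
         * has already opted out of precise scheduler accounting */
        if (tick_nohz_full_cpu(smp_processor_id()) && single_task_running())
                vcpu_clear_wfx_traps(vcpu);
        else
                vcpu_set_wfx_traps(vcpu);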

[1] https://lore.kernel.org/kvmarm/Zbgx8hZgWCmtzMjH@linux.dev/
