Message-ID: <54EADBB6.4020005@freescale.com>
Date:	Mon, 23 Feb 2015 09:50:14 +0200
From:	Purcareata Bogdan <b43198@...escale.com>
To:	Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
	Paolo Bonzini <pbonzini@...hat.com>,
	Alexander Graf <agraf@...e.de>,
	Bogdan Purcareata <bogdan.purcareata@...escale.com>,
	<linuxppc-dev@...ts.ozlabs.org>, <linux-rt-users@...r.kernel.org>
CC:	<linux-kernel@...r.kernel.org>, <scottwood@...escale.com>,
	<mihai.caraman@...escale.com>, Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 0/2] powerpc/kvm: Enable running guests on RT Linux

On 20.02.2015 17:06, Sebastian Andrzej Siewior wrote:
> On 02/20/2015 03:57 PM, Paolo Bonzini wrote:
>>
>>
>> On 20/02/2015 15:54, Sebastian Andrzej Siewior wrote:
>>> Usually you see "scheduling while atomic" splats on -RT and convert
>>> the offending locks to raw locks where appropriate.
>>>
>>> Bogdan wrote in 2/2 that he needs to limit the number of CPUs in order
>>> not to cause a DoS and large latencies in the host. I haven't seen an
>>> answer to my "why" question. If the conversion leads to large
>>> latencies in the host then it does not look right.
>>>
>>> Each host PIC has a rawlock and does mostly just mask/unmask and the
>>> raw lock makes sure the value written is not mixed up due to
>>> preemption.
>>> This hardly increases latencies because the "locked" path is very short.
>>> If this conversion leads to higher latencies then the locked path is
>>> too long and hardly suitable to become a raw lock.
>>
>> Yes, but large latencies just mean the code has to be rewritten (x86,
>> for example, no longer does event injection in atomic regions).
>> Until it is, using raw_spin_lock is correct.
>
> It does not sound like it. It sounds more like disabling interrupts to
> get things to run faster and then limiting it in a different corner so
> as not to blow up everything.
> The max latency decreased ("Max latency (us)  70        62") and that
> is why this is done? For 8 us and a possible DoS in case there are too
> many CPUs?
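
For context, the short mask/unmask critical section Sebastian describes
above is roughly of the following shape. This is a minimal sketch, not
the actual host PIC code; the structure layout and all names in it are
illustrative assumptions:

        #include <linux/bits.h>
        #include <linux/spinlock.h>
        #include <linux/types.h>

        /*
         * Hypothetical PIC state. The raw_spinlock_t guards only a
         * couple of stores, so the "locked" path stays very short even
         * though a raw lock disables preemption on PREEMPT_RT.
         */
        struct pic_state {
                raw_spinlock_t lock;
                u32 mask;       /* cached IRQ mask bits */
        };

        static void pic_mask_irq(struct pic_state *p, unsigned int irq)
        {
                unsigned long flags;

                raw_spin_lock_irqsave(&p->lock, flags);
                p->mask |= BIT(irq);    /* a handful of instructions */
                raw_spin_unlock_irqrestore(&p->lock, flags);
        }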

The main reason for this patch was to enable KVM guests to run on RT 
hosts in certain scenarios, such as delivering external interrupts to 
the guest while the guest is SMP. The cyclictest measurements were just 
a "sanity check" to make sure the latencies don't degrade too badly, 
albeit in a light scenario (guest with 1 VCPU), for a use case where 
the guest is not SMP and doesn't have any external interrupts 
delivered. This latter scenario works even without the kvm openpic 
lock being a raw_spinlock.
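
For reference, the sanity check was a cyclictest run on the host while 
the guest was running. The exact invocation isn't quoted in this 
thread, so the command below is only an illustrative example of such a 
measurement:

        cyclictest -m -n -q -p99 -t1 -l100000

(-m locks memory, -n uses clock_nanosleep, -p99 sets real-time 
priority 99, -t1 runs one measurement thread, -l100000 bounds the 
number of loops.)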

Prior to this patch, KVM was indeed blowing up on guest_enter [1], and 
making the openpic lock a raw_spinlock fixes that, without causing too 
much cyclictest damage when the guest doesn't have many VCPUs. I had a 
discussion with Scott Wood a while ago regarding delivering external 
interrupts to the guest, and he mentioned that the correct solution is 
to rework the entire interrupt delivery mechanism into multiple lock 
domains, minimizing the code on the EPR path and the locking involved. 
Until that can be achieved, converting the openpic lock to a 
raw_spinlock is acceptable, as long as we keep the number of guest 
VCPUs small, so as not to cause big host latencies.
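
For reference, the conversion under discussion is essentially of the 
following shape (an illustrative sketch in diff form, not a verbatim 
patch against arch/powerpc/kvm/mpic.c; the field and variable names 
are assumptions):

        -       spinlock_t lock;        /* sleeps on PREEMPT_RT */
        +       raw_spinlock_t lock;    /* truly spins, never sleeps */

        -       spin_lock_irqsave(&opp->lock, flags);
        +       raw_spin_lock_irqsave(&opp->lock, flags);

        -       spin_unlock_irqrestore(&opp->lock, flags);
        +       raw_spin_unlock_irqrestore(&opp->lock, flags);

Since the raw lock spins with interrupts disabled, every additional 
VCPU contending on it adds directly to host latency, which is why the 
guest VCPU count has to stay small until the lock domains are reworked.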

[1] http://lxr.free-electrons.com/source/include/linux/kvm_host.h#L762

Best regards,
Bogdan P.

>> Paolo
>>
>
> Sebastian
>
