Message-ID: <54EAD6C2.3080601@freescale.com>
Date:	Mon, 23 Feb 2015 09:29:06 +0200
From:	Purcareata Bogdan <b43198@...escale.com>
To:	Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
	Paolo Bonzini <pbonzini@...hat.com>,
	Alexander Graf <agraf@...e.de>,
	Bogdan Purcareata <bogdan.purcareata@...escale.com>,
	<linuxppc-dev@...ts.ozlabs.org>, <linux-rt-users@...r.kernel.org>
CC:	<linux-kernel@...r.kernel.org>, <scottwood@...escale.com>,
	<mihai.caraman@...escale.com>, Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 0/2] powerpc/kvm: Enable running guests on RT Linux

On 20.02.2015 16:54, Sebastian Andrzej Siewior wrote:
> On 02/20/2015 03:12 PM, Paolo Bonzini wrote:
>>> Thomas, what is the usual approach for patches like this? Do you take
>>> them into your rt tree or should they get integrated to upstream?
>>
>> Patch 1 is definitely suitable for upstream, that's the reason why we
>> have raw_spin_lock vs. spin_lock.
>
> Raw spinlocks were introduced in commit c2f21ce2e31286a0a32 ("locking:
> Implement new raw_spinlock"). They are used in contexts that run with
> IRQs off - especially on -RT. This usually includes interrupt
> controllers and related core-code pieces.
>
> Usually you see "scheduling while atomic" on -RT and convert the
> offending locks to raw locks if appropriate.
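
As a minimal sketch of that conversion pattern - a hypothetical irqchip,
not any in-tree driver - the mask path would look roughly like this:

#include <linux/spinlock.h>
#include <linux/io.h>
#include <linux/bits.h>

struct my_pic {
	raw_spinlock_t lock;		/* was spinlock_t before conversion */
	void __iomem *mask_reg;
};

static void my_pic_mask_irq(struct my_pic *pic, unsigned int hwirq)
{
	unsigned long flags;

	/*
	 * On -RT a plain spinlock_t turns into a sleeping rt_mutex, so
	 * taking it with IRQs off triggers "scheduling while atomic".
	 * raw_spin_lock_irqsave() keeps spinning and stays legal here.
	 */
	raw_spin_lock_irqsave(&pic->lock, flags);
	writel(readl(pic->mask_reg) | BIT(hwirq), pic->mask_reg);
	raw_spin_unlock_irqrestore(&pic->lock, flags);
}

The critical section is a single read-modify-write, which is why such a
conversion is normally latency-neutral.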
>
> Bogdan wrote in 2/2 that he needs to limit the number of CPUs in order
> not to cause a DoS and large latencies in the host. I haven't seen an
> answer to my question of why. If the conversion leads to large
> latencies in the host, then it does not look right.

What I did notice were bad cyclictest results when run in a guest with
24 VCPUs, with 24 netperf flows running in the guest. The max
cyclictest latencies got up to 15ms in the guest; however, I haven't
captured any host-side preempt/irqs-off statistics.

What I was planning to do over the past days was to rerun the test and
come up with the host preempt/irqs-off statistics (mainly the max
latency), so I could make a more reliable argument. I haven't had the
time or the setup to do that yet, and will come back with these numbers
as soon as I have them.
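
For reference, once the preemptirqsoff tracer is enabled with
"echo preemptirqsoff > /sys/kernel/debug/tracing/current_tracer" (this
assumes CONFIG_PREEMPT_TRACER/CONFIG_IRQSOFF_TRACER in the host kernel),
the worst-case window can be read back with something as small as this
userspace sketch:

#include <stdio.h>

int main(void)
{
	/* Longest preempt/irqs-off section seen so far, in microseconds. */
	FILE *f = fopen("/sys/kernel/debug/tracing/tracing_max_latency", "r");
	long max_us;

	if (!f || fscanf(f, "%ld", &max_us) != 1) {
		perror("tracing_max_latency");
		return 1;
	}
	printf("host max preempt/irqs-off latency: %ld us\n", max_us);
	fclose(f);
	return 0;
}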

> Each host PIC has a raw lock and does mostly just mask/unmask, and the
> raw lock makes sure the value written is not mixed up due to
> preemption.
> This hardly increases latencies because the "locked" path is very
> short. If this conversion leads to higher latencies, then the locked
> path is too long and hardly suitable to become a raw lock.

From my understanding, the kvm openpic emulation code does more than
just that - it needs to be atomic with respect to interrupt delivery.
This might mean that the bad cyclictest max latencies visible from the
guest side (15ms) may also correspond to how long that raw spinlock is
held, leading to an unresponsive host.
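
A toy model of that concern - all names here are made up, not the actual
arch/powerpc/kvm MPIC code - would be a delivery path that has to walk
the destinations while holding the raw lock:

#include <linux/spinlock.h>
#include <linux/bits.h>
#include <linux/types.h>

#define MY_NR_SRCS 256

struct my_vm_pic {
	raw_spinlock_t lock;
	int nr_vcpus;
	u64 dest[MY_NR_SRCS];	/* toy routing: bit n => deliver to VCPU n */
};

static void my_vm_pic_kick(struct my_vm_pic *pic, int vcpu)
{
	/* hypothetical: mark the irq pending and wake that VCPU */
}

static void my_vm_pic_deliver(struct my_vm_pic *pic, unsigned int src)
{
	unsigned long flags;
	int i;

	/*
	 * State update and delivery happen atomically under the raw
	 * lock, so the IRQs-off window grows with nr_vcpus (24 in the
	 * test above) instead of being one short register poke.
	 */
	raw_spin_lock_irqsave(&pic->lock, flags);
	for (i = 0; i < pic->nr_vcpus; i++)
		if (pic->dest[src] & BIT_ULL(i))
			my_vm_pic_kick(pic, i);
	raw_spin_unlock_irqrestore(&pic->lock, flags);
}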

Best regards,
Bogdan P.

>> Paolo
>>
>
> Sebastian
>