Date:	Mon, 20 Apr 2015 13:53:24 +0300
From:	Purcareata Bogdan <b43198@...escale.com>
To:	Scott Wood <scottwood@...escale.com>
CC:	Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
	Paolo Bonzini <pbonzini@...hat.com>,
	Alexander Graf <agraf@...e.de>,
	Bogdan Purcareata <bogdan.purcareata@...escale.com>,
	<linuxppc-dev@...ts.ozlabs.org>, <linux-rt-users@...r.kernel.org>,
	<linux-kernel@...r.kernel.org>, <mihai.caraman@...escale.com>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 0/2] powerpc/kvm: Enable running guests on RT Linux

On 10.04.2015 02:53, Scott Wood wrote:
> On Thu, 2015-04-09 at 10:44 +0300, Purcareata Bogdan wrote:
>> So at this point I was getting kind of frustrated, so I decided to measure
>> the time spent in kvm_mpic_write and kvm_mpic_read. I assumed these were
>> the main entry points into the in-kernel MPIC and were basically executed
>> while holding the spinlock. The scenario was the same - a 24-VCPU guest
>> with 24 virtio+vhost interfaces, only this time I ran 24 ping flood
>> threads to another board instead of netperf. I assumed this would impose
>> heavier stress.
>>
>> The latencies look pretty ok, around 1-2 us on average, with the max
>> shown below:
>>
>> .kvm_mpic_read	14.560
>> .kvm_mpic_write	12.608
>>
>> Those are also microseconds. This was run for about 15 mins.
>
> What about other entry points such as kvm_set_msi() and
> kvmppc_mpic_set_epr()?

Thanks for the pointers! I redid the measurements, this time for the functions 
run while holding the openpic lock:

.kvm_mpic_read_internal (.kvm_mpic_read)	1.664
.kvmppc_mpic_set_epr				6.880
.kvm_mpic_write_internal (.kvm_mpic_write)	7.840
.openpic_msi_write (.kvm_set_msi)		10.560

Same scenario, 15 mins, numbers are microseconds.
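
The pattern behind numbers like these is essentially: timestamp on entry, 
compute a delta on return, keep the maximum. A rough sketch of that pattern 
is below - the helper and counter names are illustrative only, not the exact 
code used for the figures above:

#include <linux/ktime.h>
#include <linux/atomic.h>

static atomic64_t mpic_max_ns = ATOMIC64_INIT(0);

/* Record how long one call took, keeping only the largest value seen. */
static void mpic_record_latency(ktime_t start)
{
	s64 delta = ktime_to_ns(ktime_sub(ktime_get(), start));
	s64 old = atomic64_read(&mpic_max_ns);

	while (delta > old) {
		s64 prev = atomic64_cmpxchg(&mpic_max_ns, old, delta);
		if (prev == old)
			break;
		old = prev;
	}
}

Each measured entry point then does start = ktime_get() on entry and 
mpic_record_latency(start) just before returning.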

There was a weird situation with .kvmppc_mpic_set_epr - its corresponding inner 
function is kvmppc_set_epr, which is a static inline. Removing the static inline 
yields a compiler crash (Segmentation fault (core dumped) - 
scripts/Makefile.build:441: recipe for target 'arch/powerpc/kvm/kvm.o' failed), 
but that's a different story, so I just let it be for now. The point is that the 
measured time may include other work done after the lock has been released, but 
before the function actually returned. I noticed this was the case for 
.kvm_set_msi, which could run for up to 90 ms, not actually under the lock. This 
made me change what I'm looking at.
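
One way to avoid counting that post-unlock work - again just a sketch, reusing 
the hypothetical mpic_record_latency() helper from above and the opp->lock 
naming from arch/powerpc/kvm/mpic.c - is to take the timestamps inside the 
critical section itself:

static void openpic_timed_op(struct openpic *opp)
{
	unsigned long flags;
	ktime_t start;

	spin_lock_irqsave(&opp->lock, flags);
	start = ktime_get();		/* start timing once the lock is held */

	/* ... the MPIC work actually done under the lock ... */

	mpic_record_latency(start);	/* stop timing before the unlock */
	spin_unlock_irqrestore(&opp->lock, flags);
}

That way, whatever an entry point does after dropping the lock (as with 
.kvm_set_msi above) no longer shows up in the measured maximum.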

So far it looks pretty decent. Are there any other MPIC entry points worthy of 
investigation? Or perhaps a different stress scenario involving a lot of VCPUs 
and external interrupts?

Thanks,
Bogdan P.
