Date: Thu, 22 Feb 2024 18:13:43 +0800
From: WANG Xuerui <kernel@...0n.name>
To: maobibo <maobibo@...ngson.cn>, Huacai Chen <chenhuacai@...nel.org>,
 Tianrui Zhao <zhaotianrui@...ngson.cn>, Juergen Gross <jgross@...e.com>,
 Paolo Bonzini <pbonzini@...hat.com>
Cc: loongarch@...ts.linux.dev, linux-kernel@...r.kernel.org,
 virtualization@...ts.linux.dev, kvm@...r.kernel.org
Subject: Re: [PATCH v4 0/6] LoongArch: Add pv ipi support on LoongArch VM

On 2/22/24 18:06, maobibo wrote:
> 
> 
>> On 2024/2/22 5:34 PM, WANG Xuerui wrote:
>> On 2/17/24 11:15, maobibo wrote:
>>>> On 2024/2/15 6:25 PM, WANG Xuerui wrote:
>>>> On 2/15/24 18:11, WANG Xuerui wrote:
>>>>> Sorry for the late reply (and Happy Chinese New Year), and thanks 
>>>>> for providing microbenchmark numbers! But it seems the more 
>>>>> comprehensive CoreMark results were omitted (that's also absent in 
>>>>> v3)? While the 
>>>>
>>>> Of course the benchmark suite should be UnixBench instead of 
>>>> CoreMark. Lesson: don't multi-task code reviews, especially not 
>>>> after consuming beer -- a cup of coffee won't fully cancel the 
>>>> influence. ;-)
>>>>
>>> Where is rule about benchmark choices like UnixBench/Coremark for ipi 
>>> improvement?
>>
>> Sorry for the late reply. The rules are mostly unwritten, but in 
>> general you can think of the preference for benchmark suites as a 
>> matter of "effectiveness" -- the closer one is to some real workload 
>> in the wild, the better. Micro-benchmarks are okay for illustrating 
>> a point, but without a demonstration of the impact on realistic 
>> workloads, a change could be "useless" in practice, or even degrade 
>> various performance metrics (be that throughput, latency or anything 
>> that matters in the case at hand), yet get accepted without notice.
> Yes, a micro-benchmark cannot represent the real world; however, that 
> does not mean UnixBench/CoreMark must be run. You need to point out 
> the negative effect of the code, or a possible real scenario that may 
> benefit, and name a reasonable benchmark that is sensitive to IPIs, 
> rather than blindly saying UnixBench/CoreMark.

I did not mean to argue with you, nor was I implying that your 
changes "must be regressing things even though I didn't check myself" -- 
my point is that *any* comparison on a realistic workload, showing 
performance mostly unaffected both inside and outside KVM, would give 
reviewers (and yourself too) much more confidence in accepting the 
change.

Personally, I think a micro-benchmark could be enough, because the 
only externally-visible change is the IPI mechanism's overhead, but 
please consider other reviewers who may not be familiar enough with 
LoongArch to notice the "triviality". Also, given the 6-patch size of 
the series, it can hardly be considered "trivial".

-- 
WANG "xen0n" Xuerui

Linux/LoongArch mailing list: https://lore.kernel.org/loongarch/

