Message-Id: <20220219234600.304774-1-21cnbao@gmail.com>
Date: Sun, 20 Feb 2022 12:46:00 +1300
From: Barry Song <21cnbao@...il.com>
To: maz@...nel.org
Cc: 21cnbao@...il.com, linux-arm-kernel@...ts.infradead.org,
	linux-kernel@...r.kernel.org, linuxarm@...wei.com,
	song.bao.hua@...ilicon.com, tglx@...utronix.de, will@...nel.org
Subject: Re: [PATCH] irqchip/gic-v3: use dsb(ishst) to synchronize data to smp before issuing ipi

>> +	dsb(ishst);
>>
>>  	for_each_cpu(cpu, mask) {
>>  		u64 cluster_id = MPIDR_TO_SGI_CLUSTER_ID(cpu_logical_map(cpu));
>
> I'm not opposed to that change, but I'm pretty curious whether this makes
> any visible difference in practice. Could you measure the effect of this
> change for any sort of IPI-heavy workload?
>
> Thanks,
>
> 	M.

In practice, at least I don't see much difference on the hardware I am
using, so the result probably depends on the implementation of the actual
hardware.

I wrote a micro-benchmark to measure the latency with and without the patch
on kunpeng920 with 96 cores (2 sockets; each socket has 2 dies, each die has
24 cores; cpu0-cpu47 belong to socket0, cpu48-cpu95 belong to socket1) by
sending IPIs to cpu0-cpu95 1000 times from a specified CPU:

#include <linux/ktime.h>
#include <linux/module.h>
#include <linux/smp.h>
#include <linux/timekeeping.h>

/* empty payload; we only measure the cost of delivering the IPI */
static void ipi_latency_func(void *val)
{
}

static int __init ipi_latency_init(void)
{
	ktime_t stime, etime, delta;
	int cpu, i;
	int start = smp_processor_id();

	stime = ktime_get();
	/* send a synchronous IPI to each of the 96 CPUs, 1000 times */
	for (i = 0; i < 1000; i++)
		for (cpu = 0; cpu < 96; cpu++)
			smp_call_function_single(cpu, ipi_latency_func, NULL, 1);
	etime = ktime_get();

	delta = ktime_sub(etime, stime);

	printk("%s ipi from cpu%d to cpu0-95 delta of 1000times:%lld\n",
	       __func__, start, delta);

	return 0;
}
module_init(ipi_latency_init);

static void __exit ipi_latency_exit(void)
{
}
module_exit(ipi_latency_exit);

MODULE_DESCRIPTION("IPI benchmark");
MODULE_LICENSE("GPL");

Run the below 10 times:
# taskset -c 0 insmod test.ko
# rmmod test

and then the below 10 times:
# taskset -c 48 insmod test.ko
# rmmod test

With taskset -c, I can change the source CPU sending the IPIs.
The result is as below:

vanilla kernel:

[ 103.391684] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:122237009
[ 103.537256] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:121364329
[ 103.681276] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:121420160
[ 103.826254] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:122392403
[ 103.970209] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:122371262
[ 104.113879] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:122041254
[ 104.257444] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:121594453
[ 104.402432] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:122592556
[ 104.561434] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:121601214
[ 104.705561] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:121732767
[ 124.592944] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:147048939
[ 124.779280] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:147467842
[ 124.958162] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:146448676
[ 125.129253] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:141537482
[ 125.298848] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:147161504
[ 125.471531] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:147833787
[ 125.643133] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:147438445
[ 125.814530] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:146806172
[ 125.989677] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:145971002
[ 126.159497] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:147780655

patched kernel:

[ 428.828167] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:122195849
[ 428.970822] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:122361042
[ 429.111058] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:122528494
[ 429.257704] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:121155045
[ 429.410186] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:121608565
[ 429.570171] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:121613673
[ 429.718181] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:121593737
[ 429.862615] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:121953875
[ 430.002796] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:122102741
[ 430.142741] ipi_latency_init ipi from cpu0 to cpu0-95 delta of 1000times:122005473
[ 516.642812] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:145610926
[ 516.817002] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:145878266
[ 517.004665] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:145602966
[ 517.188758] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:145658672
[ 517.372409] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:141329497
[ 517.557313] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:146323829
[ 517.733107] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:146015196
[ 517.921491] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:146439231
[ 518.093129] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:146106916
[ 518.264162] ipi_latency_init ipi from cpu48 to cpu0-95 delta of 1000times:145097868

So there is not much difference between the vanilla and patched kernels.
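Roughly speaking, each run issues 1000 * 96 = 96000 synchronous IPIs and the
printed delta is the difference of two ktime_get() values, i.e. nanoseconds,
so dividing a representative delta by the IPI count gives the average cost
per smp_call_function_single(). A trivial user-space sketch of that
back-of-envelope arithmetic (the two delta values are simply copied from the
first vanilla line of each group above):

#include <stdio.h>

int main(void)
{
	/* rough per-IPI cost; values copied from the first run of each group */
	const long long ipis = 1000LL * 96;	/* iterations * target CPUs */
	const long long from_cpu0  = 122237009;	/* vanilla, sent from cpu0  */
	const long long from_cpu48 = 147048939;	/* vanilla, sent from cpu48 */

	printf("cpu0 : ~%lld ns per IPI\n", from_cpu0 / ipis);  /* ~1273 ns */
	printf("cpu48: ~%lld ns per IPI\n", from_cpu48 / ipis); /* ~1531 ns */
	return 0;
}

That is roughly 1.27us per IPI when sending from socket0 and about 1.53us
when sending from socket1, so the patch itself is lost in the noise while
the cross-socket path costs around 20% more.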
What really worries me about my hardware is that IPIs sent from the second
socket always show worse performance than those sent from the first socket.
This seems to be a problem worth investigating.

Thanks
Barry