Message-ID: <CANRm+Cy+6drnvHrgKKdW_6TS7=5=r9_yv+nf=1gKfg+Cx3tWcQ@mail.gmail.com>
Date: Tue, 24 May 2016 15:05:54 +0800
From: Wanpeng Li <kernellwp@...il.com>
To: David Matlack <dmatlack@...gle.com>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	kvm list <kvm@...r.kernel.org>, Wanpeng Li <wanpeng.li@...mail.com>,
	Paolo Bonzini <pbonzini@...hat.com>, Radim Krčmář <rkrcmar@...hat.com>,
	Christian Borntraeger <borntraeger@...ibm.com>,
	Yang Zhang <yang.zhang.wz@...il.com>
Subject: Re: [PATCH v3] KVM: halt-polling: poll if emulated lapic timer will fire soon

2016-05-24 10:19 GMT+08:00 Wanpeng Li <kernellwp@...il.com>:
> 2016-05-24 2:01 GMT+08:00 David Matlack <dmatlack@...gle.com>:
>> On Sun, May 22, 2016 at 5:42 PM, Wanpeng Li <kernellwp@...il.com> wrote:
>>> From: Wanpeng Li <wanpeng.li@...mail.com>
>>
>> I'm ok with this patch, but I'd like to better understand the target
>> workloads. What type of workloads do you expect to benefit from this?
>
> I think dynticks guests are one of the workloads that can benefit:
> my feature captures lots of soon-to-fire timers there, even during
> TCP testing. Yang's workload benefits as well.
>
>>
>>>
>>> If an emulated lapic timer will fire soon (within 10us, which is the
>>> base of dynamic halt-polling and the lower end of message-passing
>>> workload latency, since TCP_RR's poll time is < 10us), we can treat
>>> it as a short halt and poll until it fires. The expiry callback
>>> apic_timer_fn() will set KVM_REQ_PENDING_TIMER, and this flag is
>>> checked during the busy poll. This avoids the context-switch
>>> overhead and the latency of waking up the vCPU.
>>>
>>> This feature is slightly different from the current advance-expiration
>>> approach. Advance expiration relies on the vCPU running (it polls
>>> before vmentry). But in some cases the timer interrupt may be blocked
>>> by another thread (i.e., the IF bit is clear) and the vCPU cannot be
>>> scheduled to run immediately, so even if the timer is advanced, the
>>> vCPU may still see the latency. Polling is different: it ensures the
>>> vCPU is aware of the timer expiration before it is scheduled out.
>>>
>>> iperf TCP gets a ~6% bandwidth improvement.
>>
>> I think my question got lost in the previous thread :). Can you
>> explain why TCP bandwidth improves with this patch?

Please forget the TCP stuff. I ran the lmbench context-switch benchmark
(with echo HRTICK > /sys/kernel/debug/sched_features) in dynticks guests:

Context switching - times in microseconds - smaller is better
-------------------------------------------------------------------------
Host                 OS  2p/0K 2p/16K 2p/64K 8p/16K 8p/64K 16p/16K 16p/64K
                         ctxsw  ctxsw  ctxsw  ctxsw  ctxsw   ctxsw   ctxsw
--------- ------------- ------ ------ ------ ------ ------ ------- -------
kernel    Linux 4.6.0+  7.9800   11.0   10.8   14.6 9.4300    13.0    10.2  vanilla
kernel    Linux 4.6.0+    15.3   13.6   10.7   12.5 9.0000    12.8 7.38000  poll

Regards,
Wanpeng Li
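
A minimal, self-contained sketch of the short-halt polling decision described
in the patch above. The names vcpu_sim, pending_timer, POLL_WINDOW_NS and
try_short_halt_poll are illustrative stand-ins, not the actual KVM code paths;
in KVM itself, apic_timer_fn() sets KVM_REQ_PENDING_TIMER and kvm_vcpu_block()
does the polling before deciding to schedule out:

/*
 * Simplified illustration: if the next (emulated) timer fires within the
 * halt-polling window, busy-poll for the pending-timer flag instead of
 * blocking the vCPU. All names here are made up for illustration.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define POLL_WINDOW_NS 10000ULL            /* ~10us, as in the patch description */

struct vcpu_sim {
    volatile bool pending_timer;           /* stands in for KVM_REQ_PENDING_TIMER */
    uint64_t next_timer_expiry_ns;         /* absolute expiry of the emulated timer */
};

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* Returns true if the halt was handled by polling (timer fired while polling). */
static bool try_short_halt_poll(struct vcpu_sim *vcpu)
{
    uint64_t now = now_ns();

    /* Timer not due within the poll window: fall back to a real block/schedule. */
    if (vcpu->next_timer_expiry_ns > now + POLL_WINDOW_NS)
        return false;

    /* Busy-poll until the (simulated) timer callback sets the pending flag. */
    while (!vcpu->pending_timer) {
        if (now_ns() >= vcpu->next_timer_expiry_ns)
            vcpu->pending_timer = true;    /* here apic_timer_fn() would do this */
    }
    return true;
}

int main(void)
{
    struct vcpu_sim vcpu = {
        .pending_timer = false,
        .next_timer_expiry_ns = now_ns() + 5000,   /* fires in 5us: worth polling */
    };

    printf("polled instead of blocking: %s\n",
           try_short_halt_poll(&vcpu) ? "yes" : "no");
    return 0;
}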