Date:   Sun, 24 Mar 2019 14:07:25 -0400
From:   Boris Ostrovsky <boris.ostrovsky@...cle.com>
To:     Ryan Thibodeaux <thibodux@...il.com>
Cc:     xen-devel@...ts.xenproject.org, linux-kernel@...r.kernel.org,
        oleksandr_andrushchenko@...m.com, tglx@...utronix.de,
        jgross@...e.com, ryan.thibodeaux@...rlab.io
Subject: Re: [PATCH] x86/xen: Add "xen_timer_slop" command line option

On Sat, Mar 23, 2019 at 08:00:52AM -0400, Ryan Thibodeaux wrote:
> On Fri, Mar 22, 2019 at 06:10:16PM -0400, Boris Ostrovsky wrote:
> > On 3/22/19 2:29 PM, thibodux@...il.com wrote:
> > > From: Ryan Thibodeaux <ryan.thibodeaux@...rlab.io>
> > >
> > > Add a new command-line option "xen_timer_slop=<INT>" that sets the
> > > minimum delta of virtual Xen timers. This commit does not change the
> > > default timer slop value for virtual Xen timers.
> > >
> > > Lowering the timer slop value should improve the accuracy of virtual
> > > timers (e.g., better process dispatch latency), but it will likely
> > > increase the number of virtual timer interrupts (relative to the
> > > original slop setting).
> > >
> > > The original timer slop value has not changed since the introduction
> > > of the Xen-aware Linux kernel code. This commit provides users an
> > > opportunity to tune timer performance given the refinements to
> > > hardware and the Xen event channel processing. It also mirrors
> > > a feature in the Xen hypervisor - the "timer_slop" Xen command line
> > > option.
> > 
> > Is there any data that shows effects of using this new parameter?
> > 
> > -boris
> > 
> 
> For our own testing using "cyclictest" from the rt-tests project,
> lowering the timer slop helped produce the best test runs, especially
> in terms of maximum process dispatch latency (PDL).
> 
> Here is the output from one such test that ran overnight. The Xen
> timer slop in this case was 10000 nanoseconds, i.e., 10 microseconds.
> 
> ...
> [root@...p1 ~]# cset shield -c 3
> [root@...p1 ~]# echo ; date ; echo ; \
> ./rt-tests-1.3/cyclictest -p95 -a3 -t1 -m; echo ; date
> 
> Thu Mar 14 19:45:36 UTC 2019
> 
> # /dev/cpu_dma_latency set to 0us
> policy: fifo: loadavg: 0.00 0.02 0.00 1/91 4260
> T: 0 ( 3212) P:95 I:1000 C:57077313 Min: 27 Act: 44 Avg: 43 Max: 145
> ^C
> Fri Mar 15 11:36:53 UTC 2019
> ...
> 
> This test system was configured to use a TSC clocksource, with
> C-states disabled and the timer slop lowered. I am not claiming the
> timer slop change was solely responsible for the best results.

How can we then be sure that the proposed change will indeed provide
some sort of benefit?

Were there any other changes between your tests that might suggest the
slop time modification was not responsible for the better results?

-boris


> In other
> testing with the default timer slop setting of 100000 nanoseconds
> (100 microseconds), the average PDL ran slightly higher, but the
> spikes were much higher and more frequent, often near 1000
> microseconds and occurring multiple times per 10 minutes of testing.
> 
> - Ryan
