Message-ID: <20190325173658.GA46618@centos-dev.localdomain>
Date:   Mon, 25 Mar 2019 13:36:58 -0400
From:   Ryan Thibodeaux <thibodux@...il.com>
To:     Boris Ostrovsky <boris.ostrovsky@...cle.com>
Cc:     luca abeni <luca.abeni@...tannapisa.it>,
        Dario Faggioli <dfaggioli@...e.com>,
        xen-devel@...ts.xenproject.org, linux-kernel@...r.kernel.org,
        oleksandr_andrushchenko@...m.com, tglx@...utronix.de,
        jgross@...e.com, ryan.thibodeaux@...rlab.io
Subject: Re: [PATCH] x86/xen: Add "xen_timer_slop" command line option

On Mon, Mar 25, 2019 at 10:11:38AM -0400, Ryan Thibodeaux wrote:
> > > [...]
> > >>>> Is there any data that shows effects of using this new parameter?
> > >>>> 

Continuing with the experimental data conversation (thanks to Luca and
Dario for being so generous), I am providing more results from quick
tests this morning.

I ran the same sequence of tests four times with the same hardware,
hypervisor, and Linux guest setup. The only change between runs was
the slop settings in Xen and Linux. This was on a build of Xen 4.10
and a Linux guest running the current Xen tip.git kernel with my
patch applied.

For each sequence, I ran two variations of cyclictest on an isolated
processor. The first test used an interval of 50 microseconds and the
second an interval of 1000 microseconds, passing "-i50" and "-i1000"
to cyclictest respectively.

The variations of the sequences are as follows (example boot lines
follow the list):
#1 - default slops:  Xen@50000, Linux@100000
#2 - lowering Linux: Xen@50000, Linux@5000
#3 - lowering Xen:   Xen@5000,  Linux@100000
#4 - lowering both:  Xen@5000,  Linux@5000
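
For concreteness, the Xen slop here is the hypervisor's documented
"timer_slop" command-line option (nanoseconds), and the Linux slop is
the "xen_timer_slop" guest option this patch adds. A GRUB-style entry
for sequence #4 might carry lines like the following, where the paths
and any other arguments are placeholders:

        multiboot /boot/xen.gz timer_slop=5000 ...
        module /boot/vmlinuz xen_timer_slop=5000 ...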

The cleaned-up test output is below, showing only the total stats for
each run and the number of spikes, i.e., samples that went over
100 microseconds. Unlike Luca and Dario, I did not record each sample
value, because I wanted to eliminate as many variables as possible,
e.g., the overhead of writing out raw results.

Looking at the results, you can see that lowering only the Linux slop
(with my proposed patch) reduces the overall latency stats for the
shorter interval, and it especially lowers the spike counts for both
interval configurations. Even in test #3, where the Xen slop was
lowered, the spikes remain a problem at the default Linux slop.
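
For anyone who has not read the patch, the general shape of a
boot-time slop override in the guest is small. The following is only a
minimal sketch, assuming the kernel's early_param() plumbing; the
names and details are illustrative, not a copy of the actual patch:

        /* illustrative sketch only -- not the actual patch */
        static u64 xen_timer_slop __read_mostly = TIMER_SLOP; /* 100000 ns */

        static int __init parse_xen_timer_slop(char *arg)
        {
                if (!arg)
                        return -EINVAL;
                xen_timer_slop = simple_strtoul(arg, NULL, 0);
                return 0;
        }
        early_param("xen_timer_slop", parse_xen_timer_slop);

The Xen clockevents device would then advertise the parsed value as
its min_delta_ns in place of the fixed TIMER_SLOP constant.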

Reiterating what Luca and Dario said, lowering both slops is the way
to get consistent results for both interval configurations.

Note: even better stats could likely be achieved with more tuning and
with the RT patchset. These results focus on a non-specialized
configuration.

...
##############################

# Timer Slop: Xen (default, 50000) | Guest (default, 100000)

# Cyclictest Interval (-i50)
Min: 62
Avg: 127
Max: 212
Spikes (over 100): 3892034

# Cyclictest Interval (-i1000)
Min: 24
Avg: 45
Max: 156
Spikes (over 100): 27


##############################

# Timer Slop: Xen (default, 50000) | Guest (5000)

# Cyclictest Interval (-i50)
Min: 25
Avg: 78
Max: 230
Spikes (over 100): 274549

# Cyclictest Interval (-i1000)
Min: 37
Avg: 45
Max: 82
Spikes (over 100): 0


##############################

# Timer Slop: Xen (5000) | Guest (default, 100000)

# Cyclictest Interval (-i50)
Min: 61
Avg: 126
Max: 226
Spikes (over 100): 3877860

# Cyclictest Interval (-i1000)
Min: 37
Avg: 45
Max: 74
Spikes (over 100): 0


##############################

# Timer Slop: Xen (5000) | Guest (5000)

# Cyclictest Interval (-i50)
Min: 13
Avg: 30
Max: 150
Spikes (over 100): 120

# Cyclictest Interval (-i1000)
Min: 37
Avg: 45
Max: 97
Spikes (over 100): 0
...

- Ryan
