Message-ID: <acq6wg6r63nhbxsl5xci3gsow2lwmrongn57l5642h4gnreiol@jz6a3jdiviov>
Date: Mon, 29 Jul 2024 11:32:37 +0200
From: Michal Koutný <mkoutny@...e.com>
To: Xavier <xavier_qy@....com>
Cc: mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com, 
	vincent.guittot@...aro.org, dietmar.eggemann@....com, rostedt@...dmis.org, 
	bsegall@...gle.com, mgorman@...e.de, bristot@...hat.com, vschneid@...hat.com, 
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH-RT sched v1 0/2] Optimize the RT group scheduling

On Fri, Jun 28, 2024 at 01:21:54AM GMT, Xavier <xavier_qy@....com> wrote:
> The first patch optimizes the enqueue and dequeue of rt_se, the strategy
> employs a bottom-up removal approach.

I haven't read the patches; I only have a remark about the numbers.

> The second patch provides validation for the efficiency improvements made
> by patch 1. The test case count the number of infinite loop executions for
> all threads.
> 
> 		original         optimized
> 
> 	   10242794134		10659512784
> 	   13650210798		13555924695
> 	   12953159254		13733609646
> 	   11888973428		11742656925
> 	   12791797633		13447598015
> 	   11451270205		11704847480
> 	   13335320346		13858155642
> 	   10682907328		10513565749
> 	   10173249704		10254224697
> 	    8309259793		 8893668653

^^^ This is fine, that's what you measured.

> avg      11547894262          11836376429

But providing averages with that many significant digits is nonsensical
(most of them are noise).

If I put your columns into a matrix D (in Octave) and estimate the errors:

(std(D)/sqrt(10)) ./ mean(D)
ans =

   0.046626   0.046755

the error itself rounds to ~5%, so the measured averages should be
rounded accordingly:

 avg    11500000000      11800000000

or even more conservatively

 avg    12000000000      12000000000
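For reference, the same error estimate can be reproduced without Octave; a
minimal sketch using only Python's standard library (the column data are the
numbers quoted above):

```python
# Equivalent of Octave's (std(D)/sqrt(10)) ./ mean(D): the relative
# standard error of the mean for each column of the 10-run data.
from math import sqrt
from statistics import mean, stdev  # stdev is the sample std, like Octave's std()

orig = [10242794134, 13650210798, 12953159254, 11888973428, 12791797633,
        11451270205, 13335320346, 10682907328, 10173249704, 8309259793]
opt  = [10659512784, 13555924695, 13733609646, 11742656925, 13447598015,
        11704847480, 13858155642, 10513565749, 10254224697, 8893668653]

for name, col in (("original", orig), ("optimized", opt)):
    m = mean(col)
    sem = stdev(col) / sqrt(len(col))   # standard error of the mean
    print(f"{name}: mean={m:.0f}  rel. error={sem / m:.6f}")
```

This prints relative errors of ~0.0466 and ~0.0468, matching the Octave
output above.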

> Run two QEMU emulators simultaneously, one running the original kernel and the
> other running the optimized kernel, and compare the average of the results over
> 10 runs. After optimizing, the number of iterations in the infinite loop increased
> by approximately 2.5%.

Notice that the measured change is on par with the noise in the data (i.e.
it may be accidental). You may need more runs to get a cleaner
(more convincing) result.
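As a rough back-of-the-envelope illustration (my numbers, not the author's):
the standard error of the mean shrinks like 1/sqrt(n), so resolving a ~2.5%
effect against the ~14.7% per-run spread implied by the 10-run estimate
(4.66% * sqrt(10)) takes considerably more than 10 runs:

```python
# Rough estimate: how many runs until the standard error of the mean
# drops well below the claimed ~2.5% effect?  Assumes the per-run
# relative std stays at ~14.7% (= 4.66% * sqrt(10), from the 10-run
# estimate above) -- an assumption, not a measurement.
from math import sqrt, ceil

per_run_rel_std = 0.0466 * sqrt(10)   # ~0.147, relative std of a single run
target_sem = 0.0125                   # aim for SEM at half of the 2.5% effect

# SEM(n) = per_run_rel_std / sqrt(n)  =>  n = (per_run_rel_std / target_sem)^2
n = ceil((per_run_rel_std / target_sem) ** 2)
print(n)   # prints 139 under these assumptions
```

So on the order of ~140 runs would be needed to pin the mean down to about
half the size of the claimed effect.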

HTH,
Michal
