Message-ID: <CANQmPXgqeERy55LtwO8a-bZAnSacW7bdL_o3jQkne+aGxUOVMg@mail.gmail.com>
Date:	Mon, 18 Jun 2012 14:28:08 +0800
From:	Chen <hi3766691@...il.com>
To:	Vikram Dhillon <opensolarisdev@...il.com>
Cc:	Mike Galbraith <efault@....de>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH][ANNOUNCE]RIFS-ES Scheduling V1 release.

On Mon, Jun 18, 2012 at 11:01 AM, Vikram Dhillon
<opensolarisdev@...il.com> wrote:
> On Sun, Jun 17, 2012 at 6:24 PM, Chen <hi3766691@...il.com> wrote:
>> Sorry, my fault.
>> The patch had a build issue; I have posted a new one.
>
> Hi Chen,
>
> It is always very interesting to see new scheduler designs; however, can
> you show us some benchmarks? How does this compare to CFS currently?
> Also, how well does it scale beyond desktops?
>
> - Vikram


Here are the benchmark results (latency) for RIFS-ES and CFS. (I ran
them on my computer and am posting them from my Android phone now.)
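
The runs below can be reproduced with a sweep like the following (a
sketch, assuming Jens Axboe's `latt` latency tester is built and on
PATH; the exact invocations match the transcripts below):

```shell
# Hypothetical reproduction sweep: measure wakeup/work latency for
# 1, 2, 4, 8, and 16 clients, each over the duration of "sleep 10".
for c in 1 2 4 8 16; do
    echo "== latt -c$c sleep 10 =="
    latt -c"$c" sleep 10 2>/dev/null || echo "(latt not installed)"
done
```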

RIFS-ES
========================================
[root@...alhost admin]# latt -c1 sleep 10

Parameters: min_wait=100ms, max_wait=500ms, clients=1
Entries logged: 27

Wakeup averages
-------------------------------------
	Max		      25 usec
	Avg		      10 usec
	Stdev		       3 usec
	Stdev mean	       1 usec

Work averages
-------------------------------------
	Max		   20265 usec
	Avg		   20100 usec
	Stdev		      82 usec
	Stdev mean	      16 usec
[root@...alhost admin]# latt -c2 sleep 10

Parameters: min_wait=100ms, max_wait=500ms, clients=2
Entries logged: 54

Wakeup averages
-------------------------------------
	Max		     824 usec
	Avg		      30 usec
	Stdev		     117 usec
	Stdev mean	      16 usec

Work averages
-------------------------------------
	Max		   21472 usec
	Avg		   20486 usec
	Stdev		     190 usec
	Stdev mean	      26 usec
[root@...alhost admin]# latt -c4 sleep 10

Parameters: min_wait=100ms, max_wait=500ms, clients=4
Entries logged: 104

Wakeup averages
-------------------------------------
	Max		   20423 usec
	Avg		    2343 usec
	Stdev		    3810 usec
	Stdev mean	     374 usec

Work averages
-------------------------------------
	Max		   41037 usec
	Avg		   36096 usec
	Stdev		    4247 usec
	Stdev mean	     416 usec
[root@...alhost admin]# latt -c8 sleep 10

Parameters: min_wait=100ms, max_wait=500ms, clients=8
Entries logged: 184

Wakeup averages
-------------------------------------
	Max		   71284 usec
	Avg		    6112 usec
	Stdev		    9640 usec
	Stdev mean	     711 usec

Work averages
-------------------------------------
	Max		   83043 usec
	Avg		   68615 usec
	Stdev		   10615 usec
	Stdev mean	     783 usec
[root@...alhost admin]# latt -c16 sleep 10

Parameters: min_wait=100ms, max_wait=500ms, clients=16
Entries logged: 320

Wakeup averages
-------------------------------------
	Max		   35669 usec
	Avg		    9690 usec
	Stdev		    8961 usec
	Stdev mean	     501 usec

Work averages
-------------------------------------
	Max		  166790 usec
	Avg		  136161 usec
	Stdev		   30660 usec
	Stdev mean	    1714 usec


RIFS-ES: summary of average wakeup latency:
-c1:
	Avg		      10 usec
-c2:
	Avg		      30 usec
-c4:
	Avg		    2343 usec
-c8:
	Avg		    6112 usec
-c16:
	Avg		    9690 usec

CFS
========================================
[root@...alhost ~]$ latt -c1 sleep 10

Parameters: min_wait=100ms, max_wait=500ms, clients=1
Entries logged: 27

Wakeup averages
-------------------------------------
	Max		      40 usec
	Avg		      21 usec
	Stdev		       8 usec
	Stdev mean	       2 usec

Work averages
-------------------------------------
	Max		   20648 usec
	Avg		   20262 usec
	Stdev		      97 usec
	Stdev mean	      19 usec
[root@...alhost ~]$ latt -c2 sleep 10

Parameters: min_wait=100ms, max_wait=500ms, clients=2
Entries logged: 54

Wakeup averages
-------------------------------------
	Max		      45 usec
	Avg		      19 usec
	Stdev		      10 usec
	Stdev mean	       1 usec

Work averages
-------------------------------------
	Max		   21549 usec
	Avg		   20495 usec
	Stdev		     320 usec
	Stdev mean	      43 usec
[root@...alhost ~]$ latt -c4 sleep 10

Parameters: min_wait=100ms, max_wait=500ms, clients=4
Entries logged: 104

Wakeup averages
-------------------------------------
	Max		   20430 usec
	Avg		    4900 usec
	Stdev		    6274 usec
	Stdev mean	     615 usec

Work averages
-------------------------------------
	Max		   50538 usec
	Avg		   30564 usec
	Stdev		    6492 usec
	Stdev mean	     637 usec
[root@...alhost ~]$ latt -c8 sleep 10

Parameters: min_wait=100ms, max_wait=500ms, clients=8
Entries logged: 184

Wakeup averages
-------------------------------------
	Max		   38195 usec
	Avg		    8068 usec
	Stdev		    8132 usec
	Stdev mean	     599 usec

Work averages
-------------------------------------
	Max		   83525 usec
	Avg		   67597 usec
	Stdev		   10037 usec
	Stdev mean	     740 usec
[root@...alhost ~]$ latt -c16 sleep 10

Parameters: min_wait=100ms, max_wait=500ms, clients=16
Entries logged: 320

Wakeup averages
-------------------------------------
	Max		   83711 usec
	Avg		   17497 usec
	Stdev		   21219 usec
	Stdev mean	    1186 usec

Work averages
-------------------------------------
	Max		  166547 usec
	Avg		  119376 usec
	Stdev		   31758 usec
	Stdev mean	    1775 usec


CFS: summary of average wakeup latency:
-c1:
	Avg		      21 usec
-c2:
	Avg		      19 usec
-c4:
	Avg		    4900 usec
-c8:
	Avg		    8068 usec
-c16:
	Avg		   17497 usec
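
For readability, the two wakeup-latency summaries can be tabulated side
by side with a short script (figures copied verbatim from the runs
above):

```python
# Average wakeup latency (usec) reported by latt, per client count.
rifs_es = {1: 10, 2: 30, 4: 2343, 8: 6112, 16: 9690}
cfs     = {1: 21, 2: 19, 4: 4900, 8: 8068, 16: 17497}

for c in sorted(rifs_es):
    ratio = cfs[c] / rifs_es[c]
    print(f"-c{c:<2}  RIFS-ES {rifs_es[c]:>6} usec  "
          f"CFS {cfs[c]:>6} usec  CFS/RIFS-ES = {ratio:.2f}x")
```

Note that CFS's average is actually lower at -c2; RIFS-ES shows the
lower average at the other client counts measured here.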