Message-ID: <20070427140520.GA27854@elte.hu>
Date:	Fri, 27 Apr 2007 16:05:20 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	hechacker1 <hechacker1@...il.com>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: "REPORT: sd-0.46 vs cfs-v6 vs mainline 2.6.21-rc7 Beryl + Video + Audio"


* hechacker1 <hechacker1@...il.com> wrote:

> "REPORT: sd-0.46 vs cfs-v6 vs mainline 2.6.21-rc7 Beryl + Video + Audio"

thanks for testing it out.

one immediate observation i have is that you used a 2 msec granularity 
setting on CFS, but even that did not produce a context-switch rate as 
high as SD's rr_interval==2 setting:

> cfs-v6:
> 700m kernel # cat sched_granularity_ns
> 2000000
> r  b   swpd   free   buff  cache    si   so    bi    bo   in   cs us sy id
> 1  0      0 100412     44 1519364    0    0     0     0 7426 7634 62  4 34  
> 4  0      0 100288     44 1519364    0    0     0     0 7039 7442 60  6 34  

> sd-0.46:
> 700m kernel # cat rr_interval
> 2
> r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id
> 5  0      0 918052    536 832840    0    0   411     0 2387 15242 89 11  0  
> 4  1      0 915600    536 834908    0    0   388     0 2283 15428 90 10  0  

so SD context-switched about twice as often and saturated the CPU fully, 
while under cfs-v6 there was 34% idle time left. That doubled 
context-switch rate and higher CPU utilization could easily be why the 
desktop (and video playback) felt 'smoother' under SD.

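(as a cross-check of vmstat's "cs" column, here is a minimal sketch, not 
from the original report: /proc/stat's "ctxt" line is the cumulative 
context-switch count since boot, so two samples taken a second apart give 
the per-second rate)

    # per-second context-switch rate, sampled straight from /proc/stat
    a=$(awk '/^ctxt/ { print $2 }' /proc/stat)
    sleep 1
    b=$(awk '/^ctxt/ { print $2 }' /proc/stat)
    echo "context switches/sec: $((b - a))"
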
could you try to maximize the preemption rate on CFS by setting 
sched_granularity_ns to 0? Does that result in a higher context-switch 
rate and better CPU utilization? Thanks,
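
(a minimal sketch of that test, assuming the cfs-v6 tunable is exported 
under /proc/sys/kernel as sched_granularity_ns; adjust the path to 
wherever it lives on your tree)

    # drop the CFS granularity to 0 for maximum preemption, then re-measure
    echo 0 > /proc/sys/kernel/sched_granularity_ns
    cat /proc/sys/kernel/sched_granularity_ns
    # watch the "cs" and "id" columns while the Beryl + video + audio load runs
    vmstat 1 10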

	Ingo
