Message-ID: <1343313869.6863.93.camel@marge.simpson.net>
Date:	Thu, 26 Jul 2012 16:44:29 +0200
From:	Mike Galbraith <efault@....de>
To:	Alexey Vlasov <renton@...ton.name>
Cc:	linux-kernel@...r.kernel.org, paulmck@...ux.vnet.ibm.com
Subject: Re: Attaching a process to cgroups

On Thu, 2012-07-26 at 17:02 +0400, Alexey Vlasov wrote: 
> On Wed, Jul 25, 2012 at 03:57:47PM +0200, Mike Galbraith wrote:
> > 
> > I'd profile it with perf, and expect to find a large pile of cycles.
> 
> I did it as follows:
> # perf stat cat /proc/self/cgroup 
> 
> 4:blkio:/
> 3:devices:/
> 2:memory:/
> 1:cpuacct:/
> 
>  Performance counter stats for 'cat /proc/self/cgroup':
> 
>           0.472513 task-clock                #    0.000 CPUs utilized          
>                  1 context-switches          #    0.002 M/sec                  
>                  1 CPU-migrations            #    0.002 M/sec                  
>                169 page-faults               #    0.358 M/sec                  
>            1111521 cycles                    #    2.352 GHz                    
>             784737 stalled-cycles-frontend   #   70.60% frontend cycles idle   
>             445520 stalled-cycles-backend    #   40.08% backend  cycles idle   
>             576622 instructions              #    0.52  insns per cycle        
>                                              #    1.36  stalled cycles per insn
>             120032 branches                  #  254.029 M/sec                  
>               6577 branch-misses             #    5.48% of all branches        
> 
>        9.114631804 seconds time elapsed

Sleepy box.  0.47 ms of task-clock against 9.1 seconds of wall time means
nearly all of that run was spent sleeping, not computing.

> # perf report --sort comm,dso

perf report --sort symbol,dso won't toss everything in one basket.
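
Something along these lines should split the samples out per kernel symbol
instead of lumping them all under [kernel.kallsyms] (the -g for call graphs
is optional):

# perf record -g cat /proc/self/cgroup
# perf report --sort symbol,dso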

> Kernel address maps (/proc/{kallsyms,modules}) were restricted.
> 
> Check /proc/sys/kernel/kptr_restrict before running 'perf record'.
> 
> If some relocation was applied (e.g. kexec) symbols may be misresolved.
> 
> Samples in kernel modules can't be resolved as well.
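
If that restriction is what's hiding your symbols, checking and clearing it
as root before recording should be enough, e.g.:

# cat /proc/sys/kernel/kptr_restrict
# echo 0 > /proc/sys/kernel/kptr_restrict
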
> 
> # ========
> # captured on: Thu Jul 26 16:23:06 2012
> # hostname : l24
> # os release : 3.3.3-1gb-c-s-m
> # perf version : 3.2
> # arch : x86_64
> # nrcpus online : 24
> # nrcpus avail : 24
> # cpudesc : Intel(R) Xeon(R) CPU E5645 @ 2.40GHz
> # total memory : 74181032 kB
> # cmdline : /usr/sbin/perf record cat /proc/self/cgroup
>  
> # event : name = cycles, type = 0, config = 0x0, config1 = 0x0, config2 = 0x0, excl_usr = 0, excl_kern = 0, id = { 1758, 1759, 1760, 1761, 1762, 1763, 1764, 1765, 1766, 1767, 1768, 1769, 1770, 1771, 1772, 1773, 1774, 1775, 1776, 1777, 1778, 1779, 1780, 1781 }
> # HEADER_CPU_TOPOLOGY info available, use -I to display
> # HEADER_NUMA_TOPOLOGY info available, use -I to display
> # ========
> #
> # Events: 21  cycles
> #
> # Overhead  Command      Shared Object
> # ........  .......  .................
> #
>    100.00%      cat  [kernel.kallsyms]
> 
> but I don't know what next unfortunately.

I'll have to pass.  I would just stop creating thousands of cgroups ;-)
  
They've become a lot more scalable fairly recently, but if you populate
those thousands of groups with frequently runnable tasks, I suspect you'll
still see huge truckloads of scheduler in the profiles... maybe even nothing else.
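
FWIW, the attach itself is nothing fancier than writing the PID into the
group's tasks file, something like this with a v1 hierarchy (mount point
assumed to be /sys/fs/cgroup):

# mkdir /sys/fs/cgroup/memory/some-group
# echo $$ > /sys/fs/cgroup/memory/some-group/tasks
# cat /proc/self/cgroup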

> I also checked the same thing on another server with the 2.6.37 kernel;
> there are some thousands of cgroups there too, and it somehow works immediately.




