Message-ID: <20101213135104.GE5407@ghostprotocols.net>
Date:	Mon, 13 Dec 2010 11:51:04 -0200
From:	Arnaldo Carvalho de Melo <acme@...stprotocols.net>
To:	Andres Freund <andres@...razel.de>
Cc:	Ralf Hildebrandt <Ralf.Hildebrandt@...rite.de>,
	linux-kernel@...r.kernel.org
Subject: Re: Costly Context Switches

On Sun, Dec 12, 2010 at 08:07:07PM +0100, Andres Freund wrote:
> On Sunday 12 December 2010 16:11:12 Ralf Hildebrandt wrote:
> > I recently made a parallel installation of dovecot-2.0 on my mailbox
> > server, which is running dovecot-1.2 without any problems whatsoever.

> > Using dovecot-2.0 on the same hardware, same kernel, with the same
> > users and same mailboxes and usage behaviour results in an immense
> > increase in the load numbers.

> > Switching back to 1.2 results in an immediate decrease of the load back
> > to "normal" numbers.

> > This is mainly due to a 10-20 fold increase in the number of context
> > switches. The same problem has been reported independently by Cor
> > Bosman of XS4All, on different hardware (64bit instead of 32bit,
> > real hardware instead of virtual hardware).

> > So, now the kernel related question: How can I find out WHY the
> > context switches are happening? Are there any "in kernel" statistics
> > I could look at?
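
(A quick answer to the "in kernel statistics" part: /proc already exposes raw
counters, both system wide and per task. Something along these lines should
show them; $PID below is illustrative, standing for one of your dovecot
worker processes:

  $ grep ctxt /proc/stat                  # total context switches since boot
  $ grep ctxt_switches /proc/$PID/status  # voluntary vs. nonvoluntary for one task

Those only give you counts, though, not the reason for the switches.)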

> "strace" or "perf trace syscall-counts" would be a good start.

Better to record just "cs" (context switch) events and also collect
callchains when those events take place:

[acme@...icio ~]$ perf record -e cs -g chromium-browser
^C
[acme@...icio ~]$ perf report
# Overhead          Command      Shared Object  Symbol
# ........  ...............  .................  ......
#
    91.32%  chromium-browse  [kernel.kallsyms]  [k] perf_event_task_sched_out
            |
            --- perf_event_task_sched_out
               |          
               |--37.80%-- sysret_careful
               |          |          
               |          |--85.11%-- 0x365300e42d
               |          |          
               |           --14.89%-- 0x365300e24a
               |          
               |--30.29%-- retint_careful
               |          |          
               |          |--29.20%-- 0x3652809658
               |          |          
               |          |--28.32%-- 0x7fcb90463603
               |          |          
               |          |--15.04%-- 0x7fcb8bca2e20
               |          |          
               |          |--14.16%-- 0x7fcb903ff36e
               |          |          
               |           --13.27%-- 0x3652809d2f
               |          
               |--23.86%-- schedule_timeout
               |          |          
               |          |--83.15%-- sys_epoll_wait
               |          |          system_call_fastpath
               |          |          0x3652ce5013
               |          |          
               |           --16.85%-- __skb_recv_datagram
               |                     skb_recv_datagram
               |                     unix_dgram_recvmsg
               |                     __sock_recvmsg
               |                     sock_recvmsg
               |                     __sys_recvmsg
               |                     sys_recvmsg
               |                     system_call_fastpath
               |                     __recvmsg
               |          
               |--4.02%-- __cond_resched
               |          _cond_resched
               |          might_fault
               |          memcpy_toiovec
               |          unix_stream_recvmsg
               |          __sock_recvmsg
               |          sock_aio_read
               |          do_sync_read
               |          vfs_read
               |          sys_read
               |          system_call_fastpath
               |          0x365300e48d
               |          (nil)
               |          
                --4.02%-- futex_wait_queue_me
                          futex_wait
                          do_futex
                          sys_futex
                          system_call_fastpath
                          __pthread_cond_timedwait

     7.05%   chrome-sandbox  [kernel.kallsyms]  [k] perf_event_task_sched_out
             |
             --- perf_event_task_sched_out
                 __cond_resched
                 _cond_resched
                 might_fault
                 filldir
                 proc_fill_cache
                 proc_readfd_common
                 proc_readfd
                 vfs_readdir
                 sys_getdents
                 system_call_fastpath
                 __getdents64

     1.34%      gconftool-2  [kernel.kallsyms]  [k] perf_event_task_sched_out
                |
                --- perf_event_task_sched_out
                    sysret_careful
                   |          
                   |--51.61%-- __recv
                   |          
                    --48.39%-- __recvmsg


- Arnaldo