Message-ID: <D7C42C27E6CB1E4D8CBDF2F81EA92A2603459877BD@azsmsx501.amr.corp.intel.com>
Date:	Mon, 27 Apr 2009 15:15:27 -0700
From:	"Styner, Douglas W" <douglas.w.styner@...el.com>
To:	Andi Kleen <andi@...stfloor.org>
CC:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"oprofile-list@...ts.sf.net" <oprofile-list@...ts.sf.net>,
	"Ma, Chinang" <chinang.ma@...el.com>,
	"willy@...ux.intel.com" <willy@...ux.intel.com>
Subject: RE: Discrepancies between Oprofile and vmstat II


> I believe so, but will confirm.
>
> > opcontrol -e=CPU_CLK_UNHALTED:80000 -e=LLC_MISSES:6000
> > 
> > Using another profiling tool to confirm, we see 74.784% user, 25.174%
> > kernel.
>
> Just verifying -- you also see it when you use a shorter period than 80000
> right?

Confirmed.  The changes in the profile appear to be due to the increased sampling done by oprofile at the lower count value; consistent with that, oprofile's ring-buffer consumption paths (rb_get_reader_page, ring_buffer_consume) climb into the top of the 20000 column below.
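
For reference, a minimal sketch of how the two runs can be set up, assuming
a standard OProfile 0.9.x opcontrol and an uncompressed vmlinux (the
vmlinux path is illustrative):

  # Baseline: one sample every 80000 unhalted core cycles.
  opcontrol --init
  opcontrol --setup --vmlinux=/boot/vmlinux --event=CPU_CLK_UNHALTED:80000
  opcontrol --start
  # ... run the OLTP workload ...
  opcontrol --stop

  # Second run: count 20000, i.e. 4x the sampling rate.
  opcontrol --reset
  opcontrol --setup --vmlinux=/boot/vmlinux --event=CPU_CLK_UNHALTED:20000
  opcontrol --start

A lower count means more sampling interrupts and more buffer traffic per
second, which is where the extra kernel-side cycles in the 20000 profile
come from.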

Linux OLTP Performance summary
Kernel#                   Speedup(x)  Intr/s  CtxSw/s  us%  sys%  idle%  iowait%
2.6.30-rc3 (count 80000)       1.000   30593    43976   74    25      0        1
2.6.30-rc3 (count 20000)       1.001   30534    43210   75    25      0        0

Server configurations:
Intel Xeon Quad-core 2.0GHz  2 cpus/8 cores/8 threads
64GB memory, 3 QLE2462 FC HBAs, 450 spindles (30 logical units)
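
The us%/sys% columns above are the vmstat-side numbers; a minimal sketch of
collecting the same averages during a run (field positions assume the
classic procps vmstat layout, us/sy/id/wa in fields 13-16, no "st" column):

  # 30 samples at 10s intervals; skip the two header lines and the first
  # data line (averages since boot), then average the remaining samples.
  vmstat 10 30 | awk 'NR > 3 { us+=$13; sy+=$14; id+=$15; wa+=$16; n++ }
      END { if (n) printf "us %.1f sys %.1f idle %.1f iowait %.1f\n",
                          us/n, sy/n, id/n, wa/n }'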

======oprofile CPU_CLK_UNHALTED for top 30 functions
-e=CPU_CLK_UNHALTED:80000          -e=CPU_CLK_UNHALTED:20000
Cycles% 2.6.30-rc3                 Cycles% 2.6.30-rc3
68.5544 <database>                 65.6876 <database>
1.1859 qla24xx_start_scsi          1.1033 kstat_irqs_cpu
0.9436 qla24xx_intr_handler        1.0575 rb_get_reader_page
0.8307 __schedule                  1.0034 qla24xx_start_scsi
0.7194 kmem_cache_alloc            0.9305 qla24xx_intr_handler
0.5026 __blockdev_direct_IO        0.8410 ring_buffer_consume
0.4319 __sigsetjmp                 0.8160 __schedule
0.4244 scsi_request_fn             0.5683 kmem_cache_alloc
0.3853 rb_get_reader_page          0.4517 __sigsetjmp
0.3777 __switch_to                 0.4413 unmap_vmas
0.3552 __list_add                  0.3809 __blockdev_direct_IO
0.3552 task_rq_lock                0.3726 __switch_to
0.3371 try_to_wake_up              0.3310 __list_add
0.3221 ring_buffer_consume         0.3206 task_rq_lock
0.2844 aio_complete                0.3123 scsi_request_fn
0.2588 memmove                     0.2977 aio_complete
0.2588 mod_timer                   0.2914 try_to_wake_up
0.2558 generic_make_request        0.2644 page_fault
0.2543 tcp_sendmsg                 0.2436 kmem_cache_free
0.2528 copy_user_generic_string    0.2415 scsi_device_unbusy
0.2468 lock_timer_base             0.2352 copy_user_generic_string
0.2468 memset_c                    0.2311 memmove
0.2288 blk_queue_end_tag           0.2227 lock_timer_base
0.2257 qla2x00_process_completed_re 0.2123 generic_make_request
0.2212 kref_get                    0.2103 kfree
0.2182 mempool_free                0.2040 find_vma
0.2137 sd_prep_fn                  0.2019 sd_prep_fn
0.2122 e1000_xmit_frame            0.2019 blk_queue_end_tag
0.2062 dequeue_rt_stack            0.2019 tcp_sendmsg
0.2047 scsi_device_unbusy          0.1998 qla2x00_process_completed_re
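
A per-symbol table like the one above is what opreport prints once the
daemon's buffers have been flushed; a minimal sketch (the 0.2% threshold
is illustrative):

  opcontrol --dump                     # flush current samples to disk
  opreport --symbols --threshold 0.2 | head -35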
