Message-ID: <55094E06.2070004@suse.com>
Date: Wed, 18 Mar 2015 10:05:58 +0000
From: Filipe Manana <fdmanana@...e.com>
To: Huang Ying <ying.huang@...el.com>
CC: Chris Mason <clm@...com>, LKML <linux-kernel@...r.kernel.org>,
LKP ML <lkp@...org>
Subject: Re: [LKP] [Btrfs] 3a8b36f3780: -62.6% fileio.requests_per_sec
On 03/18/2015 08:20 AM, Huang Ying wrote:
> FYI, we noticed the below changes on
>
> git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
> commit 3a8b36f378060d20062a0918e99fae39ff077bf0 ("Btrfs: fix data loss in the fast fsync path")
>
>
> testbox/testcase/testparams: lkp-sb02/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndwr-sync
>
> f5c0a122800c301e 3a8b36f378060d20062a0918e9
> ---------------- --------------------------
> %stddev %change %stddev
> \ | \
> 45.33 ± 0% -62.6% 16.94 ± 0% fileio.requests_per_sec
> 138983 ± 0% +15.1% 160000 ± 0% fileio.time.voluntary_context_switches
> 16035 ± 0% +13.0% 18124 ± 0% fileio.time.involuntary_context_switches
> 2504328 ± 0% -7.2% 2324488 ± 0% fileio.time.file_system_outputs
> 1.35 ± 1% +2.8% 1.38 ± 0% turbostat.CorWatt
> 0.77 ± 6% +34.6% 1.03 ± 3% turbostat.Pkg%pc3
> 7224199 ± 22% -26.7% 5298697 ± 12% cpuidle.C1-SNB.time
> 8377756 ± 1% +15.7% 9690687 ± 4% cpuidle.C3-SNB.time
> 16035 ± 0% +13.0% 18124 ± 0% time.involuntary_context_switches
> 138983 ± 0% +15.1% 160000 ± 0% time.voluntary_context_switches
> 45941 ± 0% +11.0% 50983 ± 0% softirqs.BLOCK
> 35635 ± 2% +13.7% 40524 ± 2% softirqs.RCU
> 26255 ± 1% +10.5% 29017 ± 0% softirqs.SCHED
> 50650 ± 2% +11.3% 56371 ± 0% softirqs.TIMER
> 3448 ± 0% +1.6% 3503 ± 0% vmstat.io.bo
> 4010 ± 0% +2.7% 4119 ± 0% vmstat.system.cs
> 294711 ± 1% -17.1% 244365 ± 0% meminfo.Active
> 275793 ± 2% -18.1% 225971 ± 0% meminfo.Active(file)
> 53614 ± 6% +27.6% 68412 ± 15% meminfo.DirectMap4k
> 3781 ± 0% -46.9% 2006 ± 0% meminfo.Dirty
> 47786 ± 0% -14.7% 40780 ± 0% meminfo.SReclaimable
> 66047 ± 0% -10.7% 58973 ± 0% meminfo.Slab
> 68947 ± 2% -18.1% 56492 ± 0% proc-vmstat.nr_active_file
> 337110 ± 0% -10.0% 303330 ± 0% proc-vmstat.nr_dirtied
> 944 ± 0% -46.9% 501 ± 0% proc-vmstat.nr_dirty
> 11946 ± 0% -14.7% 10195 ± 0% proc-vmstat.nr_slab_reclaimable
> 335424 ± 0% -9.7% 302754 ± 0% proc-vmstat.nr_written
> 55839 ± 3% -15.0% 47438 ± 0% proc-vmstat.pgactivate
> 1142 ± 5% -16.2% 957 ± 17% slabinfo.btrfs_delayed_ref_head.active_objs
> 1146 ± 5% -16.0% 962 ± 17% slabinfo.btrfs_delayed_ref_head.num_objs
> 1246 ± 6% -29.4% 880 ± 15% slabinfo.btrfs_delayed_tree_ref.active_objs
> 1246 ± 6% -29.4% 880 ± 15% slabinfo.btrfs_delayed_tree_ref.num_objs
> 2037 ± 2% +60.0% 3260 ± 1% slabinfo.btrfs_extent_buffer.num_objs
> 2023 ± 2% +60.7% 3250 ± 1% slabinfo.btrfs_extent_buffer.active_objs
> 13307 ± 0% -57.7% 5634 ± 0% slabinfo.btrfs_extent_state.num_objs
> 260 ± 0% -57.8% 110 ± 0% slabinfo.btrfs_extent_state.num_slabs
> 13292 ± 0% -57.6% 5634 ± 0% slabinfo.btrfs_extent_state.active_objs
> 260 ± 0% -57.8% 110 ± 0% slabinfo.btrfs_extent_state.active_slabs
> 713 ± 1% -51.2% 348 ± 1% slabinfo.btrfs_ordered_extent.active_objs
> 718 ± 1% -48.1% 373 ± 1% slabinfo.btrfs_ordered_extent.num_objs
> 26930 ± 0% -57.1% 11557 ± 0% slabinfo.btrfs_path.num_objs
> 961 ± 0% -57.1% 412 ± 0% slabinfo.btrfs_path.active_slabs
> 961 ± 0% -57.1% 412 ± 0% slabinfo.btrfs_path.num_slabs
> 26930 ± 0% -57.1% 11557 ± 0% slabinfo.btrfs_path.active_objs
> 789 ± 4% -48.5% 406 ± 0% slabinfo.ext4_extent_status.num_objs
> 789 ± 4% -48.5% 406 ± 0% slabinfo.ext4_extent_status.active_objs
> 26083 ± 0% -28.3% 18697 ± 0% slabinfo.radix_tree_node.num_objs
> 26083 ± 0% -28.3% 18697 ± 0% slabinfo.radix_tree_node.active_objs
> 931 ± 0% -28.3% 667 ± 0% slabinfo.radix_tree_node.active_slabs
> 931 ± 0% -28.3% 667 ± 0% slabinfo.radix_tree_node.num_slabs
> 4 ± 38% +129.4% 9 ± 31% sched_debug.cfs_rq[0]:/.runnable_load_avg
> 17 ± 32% -54.9% 8 ± 45% sched_debug.cfs_rq[3]:/.runnable_load_avg
> 385 ± 14% -25.3% 287 ± 17% sched_debug.cfs_rq[3]:/.load
> 51947 ± 3% +15.4% 59938 ± 3% sched_debug.cpu#0.nr_load_updates
> 200860 ± 5% +11.6% 224079 ± 4% sched_debug.cpu#1.ttwu_local
> 47218 ± 2% +7.4% 50701 ± 2% sched_debug.cpu#1.nr_load_updates
> 5 ± 37% +105.0% 10 ± 26% sched_debug.cpu#1.cpu_load[1]
> 226755 ± 4% +11.4% 252611 ± 4% sched_debug.cpu#1.ttwu_count
> 2500 ± 34% -45.6% 1360 ± 33% sched_debug.cpu#3.curr->pid
> 385 ± 14% -25.8% 285 ± 16% sched_debug.cpu#3.load
>
> testbox/testcase/testparams: bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
>
> f5c0a122800c301e 3a8b36f378060d20062a0918e9
> ---------------- --------------------------
> 62.17 ± 0% -64.6% 22.03 ± 0% fileio.requests_per_sec
> 712336 ± 0% -64.2% 255005 ± 0% fileio.time.file_system_inputs
> 712336 ± 0% -64.2% 255005 ± 0% time.file_system_inputs
> 0.73 ± 3% -21.1% 0.58 ± 3% time.user_time
> 46562 ± 0% +29.5% 60303 ± 1% softirqs.RCU
> 57662 ± 0% +16.7% 67299 ± 0% softirqs.SCHED
> 259 ± 0% -64.3% 92 ± 0% vmstat.io.bi
> 3638 ± 1% -10.1% 3272 ± 2% meminfo.Dirty
> 432 ± 8% -25.8% 320 ± 28% proc-vmstat.allocstall
> 253 ± 9% -21.9% 197 ± 19% proc-vmstat.compact_fail
> 506 ± 8% -17.8% 416 ± 14% proc-vmstat.compact_stall
> 11262 ± 3% -20.1% 8996 ± 2% proc-vmstat.kswapd_low_wmark_hit_quickly
> 910 ± 1% -10.5% 815 ± 0% proc-vmstat.nr_dirty
> 17652 ± 2% -16.0% 14833 ± 3% proc-vmstat.pageoutrun
> 59446 ± 0% -20.2% 47455 ± 0% proc-vmstat.pgactivate
> 169701 ± 0% +11.5% 189186 ± 5% proc-vmstat.pgmigrate_success
> 355946 ± 0% -64.2% 127593 ± 0% proc-vmstat.pgpgin
> 27402 ± 7% -23.9% 20844 ± 29% proc-vmstat.pgsteal_direct_dma32
> 4868 ± 0% -56.8% 2104 ± 1% proc-vmstat.workingset_refault
> 1624 ± 3% -5.7% 1530 ± 1% slabinfo.Acpi-ParseExt.active_objs
> 1624 ± 3% -5.7% 1530 ± 1% slabinfo.Acpi-ParseExt.num_objs
> 1009 ± 6% -28.6% 720 ± 17% slabinfo.btrfs_delayed_data_ref.active_objs
> 1016 ± 6% -28.5% 726 ± 16% slabinfo.btrfs_delayed_data_ref.num_objs
> 849 ± 0% -10.3% 761 ± 4% slabinfo.btrfs_delayed_ref_head.active_objs
> 851 ± 0% -9.9% 767 ± 4% slabinfo.btrfs_delayed_ref_head.num_objs
> 10883 ± 1% -47.5% 5709 ± 4% slabinfo.btrfs_extent_state.num_objs
> 213 ± 1% -47.6% 111 ± 4% slabinfo.btrfs_extent_state.num_slabs
> 10794 ± 1% -48.6% 5551 ± 3% slabinfo.btrfs_extent_state.active_objs
> 213 ± 1% -47.6% 111 ± 4% slabinfo.btrfs_extent_state.active_slabs
> 6596 ± 0% -58.5% 2735 ± 1% slabinfo.btrfs_path.num_objs
> 235 ± 0% -58.6% 97 ± 1% slabinfo.btrfs_path.active_slabs
> 235 ± 0% -58.6% 97 ± 1% slabinfo.btrfs_path.num_slabs
> 6592 ± 0% -58.6% 2731 ± 1% slabinfo.btrfs_path.active_objs
> 5055 ± 6% -9.0% 4601 ± 2% slabinfo.kmalloc-32.num_objs
> 5055 ± 6% -9.0% 4601 ± 2% slabinfo.kmalloc-32.active_objs
> 1512 ± 2% -12.2% 1328 ± 5% slabinfo.kmalloc-96.num_objs
> 1512 ± 2% -12.2% 1328 ± 5% slabinfo.kmalloc-96.active_objs
> 459 ± 2% -16.8% 382 ± 6% sched_debug.cfs_rq[0]:/.blocked_load_avg
> 478 ± 1% -15.6% 403 ± 5% sched_debug.cfs_rq[0]:/.tg_load_contrib
> 49 ± 7% -9.1% 45 ± 11% sched_debug.cfs_rq[3]:/.tg_runnable_contrib
> 2337 ± 6% -9.9% 2106 ± 11% sched_debug.cfs_rq[3]:/.avg->runnable_avg_sum
> 25079 ± 3% -10.1% 22542 ± 4% sched_debug.cfs_rq[3]:/.exec_clock
> 28 ± 17% -35.7% 18 ± 9% sched_debug.cfs_rq[3]:/.nr_spread_over
> 499222 ± 12% -59.7% 201330 ± 40% sched_debug.cpu#0.sched_goidle
> 1018389 ± 12% -58.2% 425747 ± 37% sched_debug.cpu#0.nr_switches
> 90529 ± 4% -17.6% 74603 ± 8% sched_debug.cpu#0.nr_load_updates
> 513982 ± 13% -56.9% 221386 ± 36% sched_debug.cpu#0.ttwu_count
> 1019018 ± 12% -58.2% 426387 ± 37% sched_debug.cpu#0.sched_count
> 320 ± 3% -14.3% 274 ± 13% sched_debug.cpu#0.load
> 323974 ± 21% +131.6% 750473 ± 12% sched_debug.cpu#1.sched_count
> 323373 ± 21% +131.9% 749837 ± 12% sched_debug.cpu#1.nr_switches
> 89555 ± 46% +230.0% 295518 ± 16% sched_debug.cpu#1.ttwu_local
> 13 ± 30% +59.0% 20 ± 8% sched_debug.cpu#1.cpu_load[2]
> 8 ± 41% +111.8% 18 ± 12% sched_debug.cpu#1.cpu_load[4]
> 68535 ± 4% +22.2% 83732 ± 1% sched_debug.cpu#1.nr_load_updates
> 10 ± 33% +84.1% 19 ± 9% sched_debug.cpu#1.cpu_load[3]
> 160382 ± 16% +122.7% 357238 ± 12% sched_debug.cpu#1.ttwu_count
> 151508 ± 22% +140.8% 364820 ± 12% sched_debug.cpu#1.sched_goidle
> 388481 ± 2% -42.8% 222376 ± 46% sched_debug.cpu#2.ttwu_local
> 87970 ± 3% -6.8% 81971 ± 6% sched_debug.cpu#2.nr_load_updates
> 452753 ± 3% -37.3% 283785 ± 36% sched_debug.cpu#2.ttwu_count
> 916511 ± 5% -36.7% 580526 ± 36% sched_debug.cpu#2.nr_switches
> 917118 ± 5% -36.6% 581168 ± 36% sched_debug.cpu#2.sched_count
> 448592 ± 5% -37.9% 278755 ± 38% sched_debug.cpu#2.sched_goidle
> 140376 ± 7% +179.3% 392097 ± 28% sched_debug.cpu#3.sched_goidle
> 68344 ± 2% +24.1% 84790 ± 8% sched_debug.cpu#3.nr_load_updates
> 78592 ± 33% +335.3% 342125 ± 33% sched_debug.cpu#3.ttwu_local
> 300663 ± 7% +168.1% 806077 ± 27% sched_debug.cpu#3.sched_count
> 149690 ± 12% +182.2% 422447 ± 26% sched_debug.cpu#3.ttwu_count
> 300054 ± 7% +168.5% 805498 ± 27% sched_debug.cpu#3.nr_switches
>
> testbox/testcase/testparams: bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndwr-sync
>
> f5c0a122800c301e 3a8b36f378060d20062a0918e9
> ---------------- --------------------------
> 1.32 ± 6% +8834.3% 117.71 ±171% fileio.request_latency_max_ms
> 44.70 ± 0% -56.7% 19.35 ± 0% fileio.requests_per_sec
> 156010 ± 0% +38.4% 215846 ± 0% fileio.time.voluntary_context_switches
> 2663864 ± 0% -6.2% 2499112 ± 0% fileio.time.file_system_outputs
> 156010 ± 0% +38.4% 215846 ± 0% time.voluntary_context_switches
> 39761 ± 1% +22.5% 48712 ± 2% softirqs.RCU
> 37048 ± 1% +17.1% 43380 ± 2% softirqs.SCHED
> 52147 ± 1% +15.3% 60140 ± 1% softirqs.TIMER
> 2142 ± 0% +4.6% 2239 ± 0% vmstat.system.in
> 4067 ± 0% +5.9% 4307 ± 0% vmstat.system.cs
> 315172 ± 0% -13.9% 271267 ± 0% meminfo.Active
> 296097 ± 0% -14.5% 253063 ± 0% meminfo.Active(file)
> 3678 ± 0% -42.1% 2131 ± 1% meminfo.Dirty
> 47204 ± 0% -14.5% 40351 ± 0% meminfo.SReclaimable
> 64571 ± 0% -10.7% 57634 ± 0% meminfo.Slab
> 74024 ± 0% -14.5% 63263 ± 0% proc-vmstat.nr_active_file
> 919 ± 0% -42.1% 532 ± 1% proc-vmstat.nr_dirty
> 11801 ± 0% -14.5% 10087 ± 0% proc-vmstat.nr_slab_reclaimable
> 59906 ± 0% -25.1% 44895 ± 0% proc-vmstat.pgactivate
> 1971 ± 5% -8.0% 1814 ± 3% slabinfo.anon_vma.active_objs
> 1971 ± 5% -8.0% 1814 ± 3% slabinfo.anon_vma.num_objs
> 1759 ± 7% -17.1% 1457 ± 5% slabinfo.btrfs_delayed_data_ref.active_objs
> 1768 ± 7% -17.2% 1464 ± 5% slabinfo.btrfs_delayed_data_ref.num_objs
> 1081 ± 15% -41.9% 628 ± 11% slabinfo.btrfs_delayed_tree_ref.active_objs
> 1082 ± 15% -41.9% 628 ± 11% slabinfo.btrfs_delayed_tree_ref.num_objs
> 2313 ± 1% -21.9% 1805 ± 0% slabinfo.btrfs_extent_buffer.num_objs
> 2301 ± 1% -22.1% 1792 ± 0% slabinfo.btrfs_extent_buffer.active_objs
> 13162 ± 0% -51.8% 6341 ± 0% slabinfo.btrfs_extent_state.num_objs
> 257 ± 0% -51.9% 123 ± 0% slabinfo.btrfs_extent_state.num_slabs
> 13152 ± 0% -51.8% 6341 ± 0% slabinfo.btrfs_extent_state.active_objs
> 257 ± 0% -51.9% 123 ± 0% slabinfo.btrfs_extent_state.active_slabs
> 715 ± 0% -46.2% 385 ± 5% slabinfo.btrfs_ordered_extent.active_objs
> 720 ± 0% -43.4% 408 ± 5% slabinfo.btrfs_ordered_extent.num_objs
> 26591 ± 0% -51.4% 12924 ± 0% slabinfo.btrfs_path.num_objs
> 949 ± 0% -51.4% 461 ± 0% slabinfo.btrfs_path.active_slabs
> 949 ± 0% -51.4% 461 ± 0% slabinfo.btrfs_path.num_slabs
> 26591 ± 0% -51.4% 12924 ± 0% slabinfo.btrfs_path.active_objs
> 670 ± 8% -39.2% 407 ± 0% slabinfo.ext4_extent_status.num_objs
> 670 ± 8% -39.2% 407 ± 0% slabinfo.ext4_extent_status.active_objs
> 503 ± 6% -13.2% 437 ± 8% slabinfo.mnt_cache.active_objs
> 522 ± 7% -13.4% 452 ± 6% slabinfo.mnt_cache.num_objs
> 26243 ± 0% -26.8% 19212 ± 0% slabinfo.radix_tree_node.num_objs
> 26243 ± 0% -26.8% 19212 ± 0% slabinfo.radix_tree_node.active_objs
> 937 ± 0% -26.8% 685 ± 0% slabinfo.radix_tree_node.active_slabs
> 937 ± 0% -26.8% 685 ± 0% slabinfo.radix_tree_node.num_slabs
>
> lkp-sb02: Sandy Bridge-EP
> Memory: 4G
>
> bay: Pentium D
> Memory: 2G
>
>
>
>
> fileio.requests_per_sec
>
> 50 ++---------------------------------------------------------------------+
> | |
> 45 *+.*..*...*..*..*..*...*..*..*..*...*..*..*..*..*...*..*..*..*...*..*..*
> 40 ++ |
> | |
> 35 ++ |
> | |
> 30 ++ |
> | |
> 25 ++ |
> 20 ++ |
> O O O O O |
> 15 ++ O O O O O O O O O O O O O |
> | |
> 10 ++--------------O------------------------------------------------------+
>
>
> fileio.time.file_system_outputs
>
> 2.6e+06 ++----------------------------------------------------------------+
> 2.5e+06 *+.*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*
> | |
> 2.4e+06 O+ O O O O O O O O O |
> 2.3e+06 ++ O O O O O O O O |
> 2.2e+06 ++ |
> 2.1e+06 ++ |
> | |
> 2e+06 ++ |
> 1.9e+06 ++ |
> 1.8e+06 ++ |
> 1.7e+06 ++ |
> | |
> 1.6e+06 ++ O |
> 1.5e+06 ++----------------------------------------------------------------+
>
>
> proc-vmstat.nr_active_file
>
> 75000 ++------------------------------------------------------------------+
> | |
> 70000 ++ .*..|
> *..*..*..*..*..*...*..*..*..*..*..*..*.. .*.. .*...*..*..*..*. *
> | *. *. |
> 65000 ++ |
> | |
> 60000 ++ |
> | O O |
> 55000 O+ O O O O O O O O O O O O O O O |
> | |
> | |
> 50000 ++ O |
> | |
> 45000 ++------------------------------------------------------------------+
>
>
> proc-vmstat.nr_dirty
>
> 1000 ++-------------------------------------------------------------------+
> *..*..*..*...*..*..*..*..*..*..*... .*.. .*.. .*..*..*...*..*..*..*
> 900 ++ *. *. *. |
> | |
> 800 ++ |
> | |
> 700 ++ |
> | |
> 600 ++ |
> | |
> 500 O+ O O O O O O O O O O O O O O O O O |
> | |
> 400 ++ O |
> | |
> 300 ++-------------------------------------------------------------------+
>
>
> proc-vmstat.nr_slab_reclaimable
>
> 12000 ++--------------------------*--*-----*-----*-----*---*-----*--*-----*
> | .. *. .. .. *. *. |
> *..*..*..*..*..*...*..*..* * * |
> 11500 ++ |
> | |
> | |
> 11000 ++ |
> | |
> 10500 ++ |
> | |
> | O O O O O O O O |
> 10000 ++ O O |
> O O O O O O O O |
> | |
> 9500 ++-------------O----------------------------------------------------+
>
>
> proc-vmstat.nr_dirtied
>
> 340000 *+-*--*--*--*--*--*--*--*--*--*---*--*--*--*--*--*--*--*--*--*--*--*
> | |
> 320000 ++ |
> | O |
> 300000 O+ O O O O O O O O O O O O O O O O |
> | |
> 280000 ++ |
> | |
> 260000 ++ |
> | |
> 240000 ++ |
> | |
> 220000 ++ |
> | |
> 200000 ++-------------O---------------------------------------------------+
>
>
> proc-vmstat.nr_written
>
> 340000 ++-*-----*--------*------------------*--*--*--*-----*--*--*--*-----*
> *. *. *..*. *..*..*..*...*. *. *. |
> 320000 ++ |
> | |
> 300000 O+ O O O O O O O O O O O O O O O O O |
> | |
> 280000 ++ |
> | |
> 260000 ++ |
> | |
> 240000 ++ |
> | |
> 220000 ++ |
> | |
> 200000 ++-------------O---------------------------------------------------+
>
>
> proc-vmstat.pgactivate
>
> 60000 ++---------------------------------------------------------------*--+
> | .. |
> 55000 *+.*..*..*..*..*...*..*..*..*..*..*..*..*..*..*..*...*..*..*..* *
> | |
> 50000 ++ |
> O O O O O O O O O O O O O O O O O |
> 45000 ++ O |
> | |
> 40000 ++ |
> | |
> 35000 ++ |
> | |
> 30000 ++ O |
> | |
> 25000 ++------------------------------------------------------------------+
>
>
> meminfo.Active
>
> 310000 ++-----------------------------------------------------------------+
> 300000 ++ *..|
> | .*..*..*..*.. .*... .*.. *.. .. *
> 290000 *+ *..*..*..*..*. *. .*.. .. *..*..*..* |
> 280000 ++ *. * |
> | |
> 270000 ++ |
> 260000 ++ |
> 250000 ++ |
> O O O O O O O O O O O O O O O O O |
> 240000 ++ O |
> 230000 ++ |
> | |
> 220000 ++ O |
> 210000 ++-----------------------------------------------------------------+
>
>
> meminfo.Active(file)
>
> 290000 ++-----------------------------------------------------------------+
> 280000 ++ *..|
> | .*..*..*..*..*..*..*..*..*..*... .*.. *..*.. .. *
> 270000 *+ *. *..*.. .. *..*..* |
> 260000 ++ * |
> | |
> 250000 ++ |
> 240000 ++ |
> 230000 ++ |
> O O O O O O O O O O O O O O O O O |
> 220000 ++ O |
> 210000 ++ |
> | |
> 200000 ++ O |
> 190000 ++-----------------------------------------------------------------+
>
>
> meminfo.Dirty
>
> 4000 ++-------------------------------------------------------------------+
> *..*..*..*...*..*..*..*..*..*..*... .*.. .*..*..*...*..*..*..*
> | *. *..*..*. |
> 3500 ++ |
> | |
> | |
> 3000 ++ |
> | |
> 2500 ++ |
> | |
> | |
> 2000 O+ O O O O O O O O O O O O O O O O O |
> | |
> | |
> 1500 ++--------------O----------------------------------------------------+
>
>
> meminfo.Slab
>
> 67000 ++------------------------------------------------------------------+
> 66000 ++ *..*.. .*.. *.. ..*.. .*.. .*..*
> | .. *. .. .*. *. *. |
> 65000 *+.*..*..*..*..*...*..*..* * *. |
> 64000 ++ |
> 63000 ++ |
> 62000 ++ |
> | |
> 61000 ++ |
> 60000 ++ |
> 59000 ++ O O O O O O O O O |
> 58000 ++ O O O O |
> O O O O O |
> 57000 ++ O |
> 56000 ++------------------------------------------------------------------+
>
>
> meminfo.SReclaimable
>
> 48000 ++--------------------------*--*-----*-----*-----*---*-----*--*-----*
> 47000 ++ .. *. .. .. *. *. |
> *..*..*..*..*..*...*..*..* * * |
> 46000 ++ |
> 45000 ++ |
> | |
> 44000 ++ |
> 43000 ++ |
> 42000 ++ |
> | |
> 41000 ++ O O O O O O O O |
> 40000 ++ O O |
> O O O O O O O O |
> 39000 ++ |
> 38000 ++-------------O----------------------------------------------------+
>
>
> [*] bisect-good sample
> [O] bisect-bad sample
>
> To reproduce:
>
> apt-get install ruby
> git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
> cd lkp-tests
> bin/setup-local job.yaml # the job file attached in this email
> bin/run-local job.yaml
Hi, thanks for this.

However, this doesn't make sense to me. That commit only touches btrfs'
fsync handler, and the test runs sysbench without passing
--file-fsync-freq to it, which means sysbench will never do fsyncs
(according to its man page, the default fsync frequency is 0). Or maybe
I missed something?
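For reference, here is a hypothetical sysbench invocation sketching what I mean. The flag names come from sysbench's fileio documentation; the file count, total size, and test mode are inferred from the job name above, not taken from the actual job.yaml, so treat this as an illustration rather than the real job:

```shell
# --file-fsync-freq sets how many write requests pass between fsync() calls;
# per the sysbench docs, 0 disables fsync() entirely, so with the default in
# effect the fast-fsync code path changed by the commit should never run.
sysbench --test=fileio \
         --file-num=1024 \
         --file-total-size=64G \
         --file-test-mode=rndwr \
         --file-fsync-freq=0 \
         run
```
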
thanks
>
>
> Disclaimer:
> Results have been estimated based on internal Intel analysis and are provided
> for informational purposes only. Any difference in system hardware or software
> design or configuration may affect actual performance.
>
>
> Thanks,
> Ying Huang
>
>
>
> _______________________________________________
> LKP mailing list
> LKP@...ux.intel.com
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/