Message-ID: <a0272b440907280251g2dd38df6ja1bf10f3fa38d333@mail.gmail.com>
Date: Tue, 28 Jul 2009 11:51:00 +0200
From: Ronald Moesbergen <intercommit@...il.com>
To: Vladislav Bolkhovitin <vst@...b.net>
Cc: fengguang.wu@...el.com, linux-kernel@...r.kernel.org,
akpm@...ux-foundation.org, kosaki.motohiro@...fujitsu.com,
Alan.Brunelle@...com, linux-fsdevel@...r.kernel.org,
jens.axboe@...cle.com, randy.dunlap@...cle.com,
Bart Van Assche <bart.vanassche@...il.com>
Subject: Re: [RESEND] [PATCH] readahead:add blk_run_backing_dev
2009/7/27 Vladislav Bolkhovitin <vst@...b.net>:
>
> Hmm, it's really weird that the case of 2 threads is faster. There must be
> some command reordering somewhere in SCST which I'm missing, like
> list_add() instead of list_add_tail().
>
> Can you apply the attached patch and repeat tests 5, 8 and 11 with 1 and 2
> threads, please? The patch enables forced command ordering, i.e. with it
> all commands will be executed in exactly the same order as they were
> received.
The patched source doesn't compile. I changed the code to this:
@ line 3184:
        case SCST_CMD_QUEUE_UNTAGGED:
#if 1 /* left for future performance investigations */
                goto ordered;
#endif
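For reference, here is a minimal userspace sketch (my own illustration, not
SCST code) of why list_add() versus list_add_tail() matters for ordering:
list_add() pushes a command at the head of the queue (LIFO), list_add_tail()
appends it (FIFO), so only the latter preserves the order in which commands
were received.

/*
 * Simplified re-implementation of the kernel's doubly linked list,
 * just to show the ordering difference between the two insert helpers.
 */
#include <stddef.h>
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static void __list_add(struct list_head *n, struct list_head *prev,
                       struct list_head *next)
{
        next->prev = n;
        n->next = next;
        n->prev = prev;
        prev->next = n;
}

static void list_add(struct list_head *n, struct list_head *head)
{
        __list_add(n, head, head->next);        /* insert right after head */
}

static void list_add_tail(struct list_head *n, struct list_head *head)
{
        __list_add(n, head->prev, head);        /* insert at the tail */
}

struct cmd { int tag; struct list_head entry; };

static void fill_and_walk(struct cmd *cmds, int n,
                          void (*add)(struct list_head *, struct list_head *),
                          const char *label)
{
        struct list_head queue, *pos;
        int i;

        INIT_LIST_HEAD(&queue);
        for (i = 0; i < n; i++)
                add(&cmds[i].entry, &queue);

        printf("%s:", label);
        for (pos = queue.next; pos != &queue; pos = pos->next) {
                struct cmd *c = (struct cmd *)((char *)pos -
                                offsetof(struct cmd, entry));
                printf(" cmd%d", c->tag);
        }
        printf("\n");
}

int main(void)
{
        struct cmd cmds[3] = { { .tag = 1 }, { .tag = 2 }, { .tag = 3 } };

        fill_and_walk(cmds, 3, list_add_tail, "list_add_tail (FIFO)"); /* cmd1 cmd2 cmd3 */
        fill_and_walk(cmds, 3, list_add, "list_add (LIFO)");           /* cmd3 cmd2 cmd1 */
        return 0;
}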
The results:
Overall performance seems lower.
client kernel: 2.6.26-15lenny3 (debian)
server kernel: 2.6.29.5 with readahead-context, blk_run_backing_dev
and io_context, forced_order
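The max_sectors_kb, RA and scheduler values in the test headings below are
block layer tunables; as an aside, a minimal sketch of how such settings are
typically applied through sysfs (the device name "sdb" and the exact values
are only an illustration, not taken from these tests):

/*
 * Illustration only: applying the block layer tunables named in the test
 * headings (max_sectors_kb, readahead, I/O scheduler) via sysfs.
 * Device "sdb" and the values are assumptions; needs root to run.
 */
#include <stdio.h>

static int write_sysfs(const char *path, const char *value)
{
        FILE *f = fopen(path, "w");

        if (!f) {
                perror(path);
                return -1;
        }
        fprintf(f, "%s\n", value);
        return fclose(f);
}

int main(void)
{
        /* e.g. "64 max_sectors_kb, RA 2MB (deadline)" */
        write_sysfs("/sys/block/sdb/queue/max_sectors_kb", "64");
        write_sysfs("/sys/block/sdb/queue/read_ahead_kb", "2048"); /* 2MB readahead */
        write_sysfs("/sys/block/sdb/queue/scheduler", "deadline");
        return 0;
}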
With one IO thread:
5) client: default, server: default (cfq)
blocksize(bytes)  R1(s)    R2(s)    R3(s)    R(avg, MB/s)  R(std, MB/s)  R(IOPS)
67108864 16.484 16.417 16.068 62.741 0.706 0.980
33554432 15.684 16.348 16.011 63.961 1.083 1.999
16777216 16.044 16.239 15.938 63.710 0.493 3.982
8) client: default, server: 64 max_sectors_kb, RA 2MB (cfq)
blocksize(bytes)  R1(s)    R2(s)    R3(s)    R(avg, MB/s)  R(std, MB/s)  R(IOPS)
67108864 16.127 15.784 16.210 63.847 0.740 0.998
33554432 16.103 16.072 16.106 63.627 0.061 1.988
16777216 16.637 16.058 16.154 62.902 0.970 3.931
11) client: 64 max_sectors_kb, RA 2MB, server: 64 max_sectors_kb, RA 2MB (cfq)
blocksize(bytes)  R1(s)    R2(s)    R3(s)    R(avg, MB/s)  R(std, MB/s)  R(IOPS)
67108864 13.417 15.219 13.912 72.405 3.785 1.131
33554432 13.868 13.789 14.110 73.558 0.718 2.299
16777216 13.691 13.784 10.280 82.898 11.822 5.181
11) client: 64 max_sectors_kb, RA 2MB, server: 64 max_sectors_kb, RA 2MB (deadline)
blocksize(bytes)  R1(s)    R2(s)    R3(s)    R(avg, MB/s)  R(std, MB/s)  R(IOPS)
67108864 13.604 13.532 13.978 74.733 1.055 1.168
33554432 13.523 13.166 13.504 76.443 0.945 2.389
16777216 13.434 13.409 13.632 75.902 0.557 4.744
With two IO threads:
5) client: default, server: default (cfq)
blocksize(bytes)  R1(s)    R2(s)    R3(s)    R(avg, MB/s)  R(std, MB/s)  R(IOPS)
67108864 16.206 16.001 15.908 63.851 0.493 0.998
33554432 16.927 16.033 15.991 62.799 1.631 1.962
16777216 16.566 15.968 16.212 63.035 0.950 3.940
8) client: default, server: 64 max_sectors_kb, RA 2MB (cfq)
blocksize(bytes)  R1(s)    R2(s)    R3(s)    R(avg, MB/s)  R(std, MB/s)  R(IOPS)
67108864 16.017 15.849 15.748 64.521 0.450 1.008
33554432 16.652 15.542 16.259 63.454 1.823 1.983
16777216 16.456 16.071 15.943 63.392 0.849 3.962
11) client: 64 max_sectors_kb, RA 2MB, server: 64 max_sectors_kb, RA 2MB (cfq)
blocksize(bytes)  R1(s)    R2(s)    R3(s)    R(avg, MB/s)  R(std, MB/s)  R(IOPS)
67108864 14.109 9.985 13.548 83.572 13.478 1.306
33554432 13.698 14.236 13.754 73.711 1.267 2.303
16777216 13.610 12.090 14.136 77.458 5.244 4.841
11) client: 64 max_sectors_kb, RA 2MB, server: 64 max_sectors_kb, RA 2MB (deadline)
blocksize(bytes)  R1(s)    R2(s)    R3(s)    R(avg, MB/s)  R(std, MB/s)  R(IOPS)
67108864 13.542 13.975 13.978 74.049 1.110 1.157
33554432 9.921 13.272 13.321 85.746 12.349 2.680
16777216 13.850 13.600 13.344 75.324 1.144 4.708
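For completeness, the derived columns appear consistent with each run reading
1 GB and the three per-run throughputs being averaged; a minimal sketch of
that calculation, using the first row of test 5 as input (my reconstruction,
not the original benchmark script):

/*
 * Reconstruction (not the original script): the tables' R(avg), R(std) and
 * R(IOPS) columns match averaging the per-run throughputs of three runs
 * that each read 1024 MB, with a population standard deviation.
 */
#include <math.h>
#include <stdio.h>

int main(void)
{
        const double total_mb = 1024.0;                 /* assumed data read per run */
        const double blocksize_mb = 64.0;               /* 67108864 bytes */
        const double runtime_s[3] = { 16.484, 16.417, 16.068 }; /* test 5, 1 thread */
        double mbps[3], avg = 0.0, var = 0.0;
        int i;

        for (i = 0; i < 3; i++) {
                mbps[i] = total_mb / runtime_s[i];
                avg += mbps[i] / 3.0;
        }
        for (i = 0; i < 3; i++)
                var += (mbps[i] - avg) * (mbps[i] - avg) / 3.0;

        /* prints roughly 62.741, 0.706 and 0.980, matching the table */
        printf("R(avg) %.3f MB/s  R(std) %.3f MB/s  R %.3f IOPS\n",
               avg, sqrt(var), avg / blocksize_mb);
        return 0;
}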
Ronald.