Message-ID: <48C15C02.7060504@hp.com>
Date: Fri, 05 Sep 2008 12:19:14 -0400
From: "Alan D. Brunelle" <Alan.Brunelle@...com>
To: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
CC: Jens Axboe <jens.axboe@...cle.com>
Subject: Benchmarking results: DSS elapsed time values w/ rq_affinity=0/1
- Jens' for-2.6.28 tree
Some DSS results from a 32-way ia64 machine set up to try to analyze
Oracle OLTP & DSS loads (128GB RAM, >200 disks). The data collected was
the elapsed time for DSS runs w/ 128 MBRs and 128 Readers, running on a
kernel built from Jens Axboe's origin/for-2.6.28 tree. I alternated
runs, setting rq_affinity to 0 and 1 for all disks at the beginning of
each run.
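
For reference, rq_affinity is a per-queue sysfs attribute; a minimal
sketch of flipping it for every disk before a run (assuming the usual
/sys/block/<dev>/queue/rq_affinity location, not the exact script I
used) would be something like:

#!/usr/bin/env python
# Sketch: set rq_affinity for every block device before a benchmark run.
# Assumes the attribute lives at /sys/block/<dev>/queue/rq_affinity;
# devices that don't expose it are simply skipped.
import os
import sys

def set_rq_affinity(value):
    for dev in os.listdir('/sys/block'):
        path = os.path.join('/sys/block', dev, 'queue', 'rq_affinity')
        if not os.path.exists(path):
            continue
        with open(path, 'w') as f:
            f.write('%d\n' % value)

if __name__ == '__main__':
    # e.g. "set_rq_affinity.py 1" before an rq_affinity=1 run
    set_rq_affinity(int(sys.argv[1]))
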
There are a total of 68 data points for each alternative, and the
overall results show a decided improvement for this type of load with
rq_affinity set to 1:
rq=0: min=27.440000 avg=27.980500 max=28.500000 sdev=0.296827
rq=1: min=26.900000 avg=27.071500 max=27.480000 sdev=0.125169
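
These summary values are plain per-sample statistics over the 68
elapsed times for each setting; a throwaway snippet along the following
lines reproduces them and the percentage delta quoted below, assuming
rq0 and rq1 are lists of the per-run times (the sdev here is the
population form, the original calculation may differ):

import math

def summarize(samples):
    # min/avg/max plus standard deviation of the run-to-run elapsed times
    n = float(len(samples))
    avg = sum(samples) / n
    sdev = math.sqrt(sum((x - avg) ** 2 for x in samples) / n)
    return min(samples), avg, max(samples), sdev

def pct_improvement(avg0, avg1):
    # e.g. 100 * (27.9805 - 27.0715) / 27.9805 ~= 3.25
    return 100.0 * (avg0 - avg1) / avg0
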
Not only do we see about a 3.25% reduction in the average elapsed time,
the run-to-run deviations are much smaller as well. For a pictorial
representation, check out the graph at
http://free.linux.hp.com/~adb/jens/08-09-05/dss.png
The red and green areas illustrate the delta from the average for all
the data points with that rq_affinity setting. (Red being rq_affinity=0,
green being rq_affinity=1.)
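
Not the script used for that graph, but a rough sketch of the same kind
of delta-from-average plot (using matplotlib, with rq0/rq1 as above and
assumed to be the same length) would be:

import matplotlib.pyplot as plt

def plot_deltas(rq0, rq1, out='dss-deltas.png'):
    # Shade each run's delta from its setting's average elapsed time,
    # red for rq_affinity=0 and green for rq_affinity=1.
    avg0 = sum(rq0) / float(len(rq0))
    avg1 = sum(rq1) / float(len(rq1))
    runs = list(range(1, len(rq0) + 1))
    plt.fill_between(runs, [x - avg0 for x in rq0], 0,
                     color='red', alpha=0.5, label='rq_affinity=0')
    plt.fill_between(runs, [x - avg1 for x in rq1], 0,
                     color='green', alpha=0.5, label='rq_affinity=1')
    plt.xlabel('run')
    plt.ylabel('delta from average elapsed time')
    plt.legend()
    plt.savefig(out)
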
I collected some vmstat & iostat data and will be evaluating that as
well; time permitting, I'll also look into lockstat & profiling data.
The system has been set up as part of a collaboration between HP & Red
Hat's Linux performance teams, and we've been using it to analyze
performance characteristics of Oracle loads on large-ish systems, as
well as for evaluating potential code changes.
Alan D. Brunelle
HP Linux Kernel Technology Team