Message-ID: <20091207150906.GC14743@csn.ul.ie>
Date: Mon, 7 Dec 2009 15:09:06 +0000
From: Mel Gorman <mel@....ul.ie>
To: Christian Ehrhardt <ehrhardt@...ux.vnet.ibm.com>
Cc: Narayanan Gopalakrishnan <narayanan.g@...sung.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
epasch@...ibm.com, SCHILLIG@...ibm.com,
Martin Schwidefsky <schwidefsky@...ibm.com>,
Heiko Carstens <heiko.carstens@...ibm.com>,
christof.schmitt@...ibm.com
Subject: Re: Performance regression in scsi sequential throughput (iozone)
due to "e084b - page-allocator: preserve PFN ordering when
__GFP_COLD is set"

On Mon, Dec 07, 2009 at 03:39:49PM +0100, Christian Ehrhardt wrote:
> Hi,
> I tracked a huge performance regression for a while and got it bisected
> down to commit "e084b2d95e48b31aa45f9c49ffc6cdae8bdb21d4 -
> page-allocator: preserve PFN ordering when __GFP_COLD is set".
>
Darn. That is related to IO controllers being able to automatically merge
requests. The problem the patch was fixing was that pages were arriving in
reverse PFN order, so the controller was unable to merge requests and
performance was impaired. Any controller that can merge should be faster as a
result of the patch.
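
For reference, the change in rmqueue_bulk() boils down to the following
list placement (paraphrased from the commit, not a verbatim quote):

	/*
	 * Pages come back from __rmqueue() in ascending PFN order.
	 * list_add() reverses that order on the per-cpu list, while
	 * list_add_tail() preserves it for cold (__GFP_COLD) pages so
	 * that IO controllers see physically-ordered pages and can
	 * merge requests.
	 */
	if (likely(cold == 0))
		list_add(&page->lru, list);
	else
		list_add_tail(&page->lru, list);
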
> The scenario I'm running is a low-memory system (256M total) that does
> sequential I/O with parallel iozone processes.
> One process per disk, each process reading a 2GB file. The disks I use
> are FCP SCSI disks attached to an s390 host. The file system is ext2.
>
I don't know what controller is in use there, but does it
opportunistically merge requests if they are on physically contiguous
pages? If so, can it be disabled?
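
By merging I mean something like the block layer's physical-merge test,
which (approximately, from include/linux/bio.h of this era) is:

	/* Two biovecs can be merged into a single segment if the second
	 * one starts exactly where the first ends in physical memory. */
	#define BIOVEC_PHYS_MERGEABLE(vec1, vec2) \
		((bvec_to_phys((vec1)) + (vec1)->bv_len) == bvec_to_phys((vec2)))

If pages arrive in ascending PFN order, consecutive biovecs pass this
test and collapse into fewer, larger segments.
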
> The regression appears as up to 76% loss in throughput at 16 processes
> (processes are scaled from 1 to 64, performance is bad everywhere - 16
> is just the peak - avg loss is about 40% throughput).
> I already know that giving the system just a bit (~64m+) more memory
> solves the issue almost completely, probably because there is almost no
> "memory pressure" left in those cases.
> I also know that using direct I/O instead of I/O through the page cache
> doesn't have the problem at all.
This makes sense because it's a sequential read load, so readahead is a
factor and that is why __GFP_COLD is used - the data is not for
immediate use, so it doesn't need to be cache-hot.
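
Specifically, readahead allocates its pages through
page_cache_alloc_cold(), which (approximately, from
include/linux/pagemap.h) is just:

	/* Readahead data is not needed immediately, so ask the page
	 * allocator for cache-cold pages. */
	static inline struct page *page_cache_alloc_cold(struct address_space *x)
	{
		return __page_cache_alloc(mapping_gfp_mask(x) | __GFP_COLD);
	}
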
> Comparing sysstat data taken while running with the kernels just with &
> without the bisected patch shows nothing obvious except that I/O seems
> to take much longer (lower interrupt rate, etc.).
>
Maybe the controller is spending an age trying to merge requests because
it now can, but takes a long time figuring out how?
> The patch alone looks very reasonable, so I'd prefer understanding and
> fixing the real issue instead of getting it eventually reverted due to
> this regression being larger than the one it was intended to fix.
> In the patch it is clear that hot pages (cold==0) freed in rmqueue_bulk
> should behave like before. So maybe the question is "are our pages cold
> while they shouldn't be"?
> Well, I don't really have a clue yet how exactly patch e084b causes that
> big a regression - ideas welcome :-)
>
The only theory I have at the moment is that the controller notices it can
merge requests and either spends a long time figuring out how to do the
merging or performs worse with merged IO requests.
If the problem is in the driver, oprofile might show where the problem lies.
--
Mel Gorman
Part-time PhD Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab