Message-ID: <1316256305.24257.25.camel@shinybook.infradead.org>
Date: Sat, 17 Sep 2011 10:45:12 +0000
From: "Woodhouse, David" <david.woodhouse@...el.com>
To: Chris Boot <bootc@...tc.net>
CC: lkml <linux-kernel@...r.kernel.org>
Subject: Re: iommu_iova leak
On Fri, 2011-09-16 at 13:43 +0100, Chris Boot wrote:
> In the very short term the number is up and down by a few hundred
> objects but the general trend is constantly upwards. After about 5 days'
> uptime I have some very serious IO slowdowns (narrowed down by a friend
> to SCSI command queueing) with a lot of time spent in
> alloc_iova() and rb_prev() according to 'perf top'. Eventually these
> translate into softlockups and the machine becomes almost unusable.
If you're seeing it spend ages in rb_prev(), that implies that the
mappings are still *active* and in the rbtree, rather than just that the
iommu_iova data structures have been leaked.
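
To see why, here's a toy standalone model -- not the kernel code, just the
same shape of scan that alloc_iova() ends up doing: walk backwards through
the currently-live mappings looking for a free gap, so every allocation
gets more expensive the more mappings were never unmapped. In the kernel
the live ranges sit in an rbtree and the backwards step is rb_prev(); here
it's just a sorted array and i--:

/*
 * Toy model, NOT the kernel code: the live mappings are a sorted array
 * and the backwards step is i--, but the cost scales the same way --
 * with the number of mappings that are still live.
 */
#include <stdio.h>

struct range { unsigned long lo, hi; };		/* one live IOVA mapping */

/* Find 'size' free pfns below 'limit', scanning top-down past every live
 * mapping that's in the way. */
static long alloc_below(const struct range *live, unsigned long n,
			unsigned long limit, unsigned long size)
{
	unsigned long top = limit;
	unsigned long i;

	for (i = n; i-- > 0; ) {		/* analogue of the rb_prev() walk */
		if (top >= size && live[i].hi <= top - size)
			return top - size;	/* the gap above live[i] fits */
		if (live[i].lo < top)
			top = live[i].lo;	/* step down past this mapping */
	}
	return top >= size ? (long)(top - size) : -1;
}

int main(void)
{
	/* Three live mappings here; with millions of them every single
	 * allocation walks a long way, which is what shows up in
	 * 'perf top' as rb_prev()/alloc_iova(). */
	struct range live[] = { {10, 20}, {40, 60}, {90, 100} };

	printf("allocated 8 pfns at %ld\n", alloc_below(live, 3, 100, 8));
	return 0;
}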
I suppose it's vaguely possible that we're leaking them in such a way
that they remain on the rbtree, perhaps if the deferred unmap is never
actually happening... but I think it's a whole lot more likely that the
PCI driver is just never bothering to unmap the pages it maps.
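
For illustration, the pairing a driver is expected to maintain looks
roughly like this (the function and names are made up, not from any
particular driver): every dma_map_single() takes an IOVA out of the
IOMMU's tree, and it only goes back when the matching dma_unmap_single()
runs.

/* Sketch only; 'mydev_send', 'buf' etc. are hypothetical. */
#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/types.h>

static int mydev_send(struct device *dev, void *buf, size_t len)
{
	dma_addr_t handle;

	/* Allocates an IOVA and inserts it into the IOMMU's rbtree. */
	handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, handle))
		return -ENOMEM;

	/* ... point the hardware at 'handle', wait for it to finish ... */

	/*
	 * This is the half that gives the IOVA back.  Skip it (say, on a
	 * completion path that forgets it) and the iommu_iova count and
	 * the rbtree just grow forever, which is what you're seeing.
	 */
	dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
	return 0;
}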
If you boot with 'intel_iommu=strict', that will avoid the deferred
unmap, which is the only likely culprit in the IOMMU code...
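
(That just means appending it to the existing kernel command line; where
that lives is distro-specific, e.g. the kernel line in grub.cfg, or
GRUB_CMDLINE_LINUX in /etc/default/grub followed by update-grub.  After
the reboot, /proc/cmdline should show something like:

	root=/dev/... ro quiet intel_iommu=strict
)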
--
Sent with MeeGo's ActiveSync support.
David Woodhouse                            Open Source Technology Centre
David.Woodhouse@...el.com                              Intel Corporation