Message-ID: <1321636512.26410.132.camel@bling.home>
Date: Fri, 18 Nov 2011 10:15:12 -0700
From: Alex Williamson <alex.williamson@...hat.com>
To: David Woodhouse <dwmw2@...radead.org>
Cc: rajesh.sankaran@...el.com, iommu@...ts.linux-foundation.org,
linux-pci@...r.kernel.org, linux-kernel@...r.kernel.org,
chrisw@...s-sol.org, ddutile@...hat.com
Subject: Re: [PATCH] intel-iommu: Manage iommu_coherency globally

On Fri, 2011-11-18 at 17:00 +0000, David Woodhouse wrote:
> On Fri, 2011-11-18 at 09:03 -0700, Alex Williamson wrote:
> > I can't help thinking we're just taking the easy, lazy path for VM
> > domains when it sounds like we really would prefer to keep the coherency
> > set to the least common denominator of the domain rather than the
> > platform. Couldn't we instead force a flush of the domain when we
> > transition from coherent to non-coherent? Not sure I'm qualified to
> > write that, but seems like it would keep the efficiency of VM domains
> > with no effect to native DMA domains since they'd never trigger such a
> > transition. The below appears as if it would work and probably be OK
> > since the unnecessary cache flushes are rare, but they're still
> > unnecessary... and the comments/commit log are now wrong. Thanks,
>
> Yeah, that would make some sense. I was about to knock up some code
> which would walk the page tables and use clflush to flush every one...
> but wouldn't it be saner just to use wbinvd?
A bit heavy-handed, but obviously easier. It feels like we could safely
be more strategic, but maybe we'd end up trashing the cache anyway in a
drawn-out attempt to flush the context and all page tables. However, do
we actually need a wbinvd_on_all_cpus()? Probably better to trash one
cache than all of them. Thanks,
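
(For reference, the clflush walk being discussed would look roughly like
the sketch below. This is untested and purely illustrative: the dma_pte
helpers, the 512 entries-per-table layout, and agaw_to_level() are
assumptions based on the VT-d page table format rather than the actual
intel-iommu internals, and a real version would also need to skip
large-page leaves.)

#include <linux/intel-iommu.h>
#include <asm/cacheflush.h>
#include <asm/io.h>

/*
 * Untested sketch: push a domain's page tables out of the CPU cache so
 * a non-snooping IOMMU sees them after the domain goes non-coherent.
 * Helper names here are assumptions, not the real driver code.
 */
static void flush_domain_pgtable(struct dma_pte *parent, int level)
{
	int i;

	/* Write this page-table page back to memory. */
	clflush_cache_range(parent, VTD_PAGE_SIZE);

	if (level == 1)
		return;

	/* Recurse into any present lower-level tables. */
	for (i = 0; i < 512; i++) {
		struct dma_pte *pte = &parent[i];

		if (!dma_pte_present(pte))
			continue;

		flush_domain_pgtable(phys_to_virt(dma_pte_addr(pte)),
				     level - 1);
	}
}

It would be called once, at the coherent -> non-coherent transition,
something like flush_domain_pgtable(domain->pgd,
agaw_to_level(domain->agaw)) -- which is the "more strategic" option as
opposed to a blanket wbinvd.
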
Alex