Message-ID: <20150323122910.GO4441@8bytes.org>
Date: Mon, 23 Mar 2015 13:29:10 +0100
From: Joerg Roedel <joro@...tes.org>
To: Tomasz Figa <tfiga@...omium.org>
Cc: iommu@...ts.linux-foundation.org,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"open list:ARM/Rockchip SoC..." <linux-rockchip@...ts.infradead.org>,
Heiko Stuebner <heiko@...ech.de>,
Daniel Kurtz <djkurtz@...omium.org>
Subject: Re: [PATCH] CHROMIUM: iommu: rockchip: Make sure that page table
state is coherent

Hi Tomasz,

On Mon, Mar 23, 2015 at 05:38:45PM +0900, Tomasz Figa wrote:
> While unmapping, the driver zaps all iovas belonging to the mapping,
> so page tables not used by any mapping won't stay cached. Now, when
> the driver creates a mapping, it might end up occupying several page
> tables. However, since the mapping area is virtually contiguous, only
> the first and last page table can be shared with other mappings.
> This means that only the first and last iovas can already be cached.
> In fact, we could detect whether the first and last page tables are
> shared and not zap at all in that case, but this wouldn't really gain
> much. Why invalidating one iova is enough to invalidate the whole
> page table is unclear to me as well, but it seems to be the correct
> way on this hardware.
>
> As for the race, it's also mostly explained by the above. Hardware
> that is already running can trigger page table look-ups in the IOMMU,
> and thus cache a page table, between our zapping it and updating its
> contents. With this patch, zapping is performed after the page table
> has been updated, so the race is gone.
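
A minimal sketch of the ordering described above might look like the
following. The helper names pt_install_ptes() and iommu_zap_iova(), the
constant SPAGE_SIZE and struct io_domain are hypothetical placeholders
for the driver's real page-table update and per-iova TLB invalidation
code, not the actual rockchip-iommu API:

	/*
	 * Sketch of the map path with the corrected ordering:
	 * update the page tables first, zap the iovas afterwards.
	 */
	#define SPAGE_SIZE	4096	/* IOMMU page size assumed here */

	struct io_domain;	/* placeholder for the domain structure */

	int pt_install_ptes(struct io_domain *dom, unsigned long iova,
			    unsigned long paddr, size_t size);
	void iommu_zap_iova(struct io_domain *dom, unsigned long iova,
			    size_t size);

	static int map_region(struct io_domain *dom, unsigned long iova,
			      unsigned long paddr, size_t size)
	{
		int ret;

		/* 1) Write the new PTEs for [iova, iova + size). */
		ret = pt_install_ptes(dom, iova, paddr, size);
		if (ret)
			return ret;

		/*
		 * 2) Make sure the PTE writes are visible to the IOMMU
		 *    before it can walk them (write barrier and/or cache
		 *    maintenance on the page-table memory).
		 */
		wmb();

		/*
		 * 3) Only now zap the iovas that might already be cached.
		 *    The mapping is virtually contiguous, so only the
		 *    first and last page table can be shared with other
		 *    mappings; zapping the first and last iova is enough.
		 */
		iommu_zap_iova(dom, iova, SPAGE_SIZE);
		if (size > SPAGE_SIZE)
			iommu_zap_iova(dom, iova + size - SPAGE_SIZE,
				       SPAGE_SIZE);

		return 0;
	}

The essential point is that the invalidation in step 3 happens only
after the page-table update from step 1 is visible, so a hardware
table walk racing with the map can no longer re-fetch and cache stale
entries.
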
Okay, this makes sense. Can you add this information to the patch
changelog and resend please?

Thanks,

Joerg