Message-ID: <CAAFQd5Abk6X7AVTFaNuUSiShn31pzwwTE3VjfLnE4kyziAjy2A@mail.gmail.com>
Date:	Mon, 23 Mar 2015 17:38:45 +0900
From:	Tomasz Figa <tfiga@...omium.org>
To:	Joerg Roedel <joro@...tes.org>
Cc:	iommu@...ts.linux-foundation.org,
	"linux-arm-kernel@...ts.infradead.org" 
	<linux-arm-kernel@...ts.infradead.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"open list:ARM/Rockchip SoC..." <linux-rockchip@...ts.infradead.org>,
	Heiko Stuebner <heiko@...ech.de>,
	Daniel Kurtz <djkurtz@...omium.org>
Subject: Re: [PATCH] CHROMIUM: iommu: rockchip: Make sure that page table
 state is coherent

Sorry for the delay, I had to dig my way out of my backlog.

On Tue, Mar 3, 2015 at 10:36 PM, Joerg Roedel <joro@...tes.org> wrote:
> On Mon, Feb 09, 2015 at 08:19:21PM +0900, Tomasz Figa wrote:
>> Even though the code uses the dt_lock spin lock to serialize mapping
>> operations from different threads, it does not protect against IOMMU
>> accesses that might already be taking place and thus altering the
>> state of the IOTLB. This means that the current mapping code, which
>> first zaps the page table and only then updates it with the new
>> mapping, is prone to the mentioned race.
>
> Could you elaborate a bit on the race and why it is sufficient to zap
> only the first and the last iova? From the description and the comments
> in the patch this is not clear to me.

Let's start with why it's sufficient to zap only the first and last iova.

While unmapping, the driver zaps all iovas belonging to the mapping,
so page tables not used by any mapping won't be cached. When the
driver then creates a mapping, it might end up occupying several page
tables. However, since the mapping area is virtually contiguous, only
the first and last of those page tables can be shared with other
mappings. This means that only the first and last iovas can already be
cached. In fact, we could detect whether the first and last page
tables are shared and skip zapping entirely when they are not, but
that wouldn't gain much. Why invalidating one iova is enough to
invalidate the whole page table is unclear to me as well, but it seems
to be the correct way on this hardware.
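
To make the ordering concrete, here is a rough C sketch of the mapping
path described above. It is not the actual driver code; the names
(rk_iommu_domain, rk_iommu_update_ptes(), rk_iommu_zap_iova(),
SPAGE_SIZE) are assumptions made for the example:

/*
 * Hypothetical sketch only -- names are illustrative assumptions,
 * not the actual driver API.
 */
static int rk_iommu_map_sketch(struct rk_iommu_domain *rk_domain,
			       dma_addr_t iova, phys_addr_t paddr,
			       size_t size)
{
	int ret;

	/* Fill in the PTEs covering [iova, iova + size) first. */
	ret = rk_iommu_update_ptes(rk_domain, iova, paddr, size);
	if (ret)
		return ret;

	/*
	 * Only the first and last page tables of the range can be
	 * shared with neighbouring mappings, so only they can already
	 * be cached in the IOTLB.  One zapped iova per page table is
	 * enough to invalidate the whole table on this hardware.
	 */
	rk_iommu_zap_iova(rk_domain, iova, SPAGE_SIZE);
	rk_iommu_zap_iova(rk_domain, iova + size - SPAGE_SIZE, SPAGE_SIZE);

	return 0;
}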

As for the race, it's also kind of explained by the above. Hardware
that is already running can trigger page table look-ups in the IOMMU,
and thus caching of a page table, between our zapping it and updating
its contents. With this patch, zapping is performed after the page
table has been updated, so the race is gone.
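
In other words, the fix is purely a reordering. A minimal sketch,
reusing the hypothetical helpers from above:

/* Before the patch (racy): */
rk_iommu_zap_iova(rk_domain, iova, size);
rk_iommu_update_ptes(rk_domain, iova, paddr, size);
/*
 * Between the two calls, the already running IOMMU can walk and
 * re-cache the shared page table, so it keeps using stale entries.
 */

/* After the patch: */
rk_iommu_update_ptes(rk_domain, iova, paddr, size);
rk_iommu_zap_iova(rk_domain, iova, SPAGE_SIZE);
rk_iommu_zap_iova(rk_domain, iova + size - SPAGE_SIZE, SPAGE_SIZE);
/* Anything the IOMMU cached in the meantime is invalidated here. */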

Best regards,
Tomasz