Date:	Thu, 31 Mar 2011 16:39:02 -0700
From:	Chris Wright <chrisw@...s-sol.org>
To:	Chris Wright <chrisw@...s-sol.org>
Cc:	Mike Travis <travis@....com>, linux-pci@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	Jesse Barnes <jbarnes@...tuousgeek.org>,
	iommu@...ts.linux-foundation.org, Mike Habeck <habeck@....com>,
	David Woodhouse <dwmw2@...radead.org>
Subject: Re: [PATCH 3/4] Intel pci: Limit dmar_init_reserved_ranges

* Chris Wright (chrisw@...s-sol.org) wrote:
> > Mike Travis wrote:
> > >	  Region 1: Memory at f8200000000 (64-bit, prefetchable) [size=256M]
> > >	  Region 3: Memory at 90000000 (64-bit, non-prefetchable) [size=32M]
> > >
> > >    So this 44-bit MMIO address 0xf8200000000 ends up in the rbtree.  As DMA
> > >    maps get added to and deleted from the rbtree, we can end up with a cached
> > >    pointer to this 0xf8200000000 entry... this is what results in the code
> > >    handing out the invalid DMA map of 0xf81fffff000:
> > >
> > >	    [ (0xf8200000000 - 1) >> PAGE_SHIFT << PAGE_SHIFT ]
> > >
> > >    The IOVA code needs to better honor the "limit_pfn" when allocating
> > >    these maps.
> 
> This means we could be handed the MMIO address range (it's no longer
> reserved).  It seems to me the DMA transaction would then become a
> peer-to-peer transaction if ACS is not enabled, which could show up as a
> random register write into that GPU's 256M BAR (i.e. broken).
> 
> The iova allocator should not hand out an address larger than the
> dma_mask allows.  What is the device's dma_mask?
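
To make the numbers concrete: the bogus map is just the reserved MMIO base
rounded down to a page boundary, and a limit_pfn derived from any sane
dma_mask would reject it.  A rough sketch (the helpers below are made up
for illustration, they are not allocator code):

#include <linux/types.h>

/* Illustration only -- hypothetical helpers, not allocator code. */
static unsigned long bogus_dma_addr(void)
{
	unsigned long mmio_base = 0xf8200000000UL;  /* 44-bit BAR from the lspci output */

	/* (0xf8200000000 - 1) rounded down to a 4K page gives 0xf81fffff000 */
	return ((mmio_base - 1) >> 12) << 12;
}

static bool addr_fits_mask(unsigned long addr, u64 dma_mask)
{
	/* e.g. a 32-bit mask (0xffffffff) clearly excludes 0xf81fffff000 */
	return addr <= dma_mask;
}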

Ah, looks like this is a bad interaction with the way the cached entry
is handled.  I think the iova lookup should skip down to the limit_pfn
rather than assume that rb_last's pfn_lo/hi is OK just because it's in
the tree.  Since you never hit the limit_pfn == dma_32bit_pfn case,
__get_cached_rbnode just goes straight to rb_last.
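
Roughly, the cached-node lookup reads like this (paraphrased from memory,
so field and helper names may not match the tree exactly):

#include <linux/kernel.h>
#include <linux/rbtree.h>
#include <linux/iova.h>

/* Paraphrased sketch of the cached-node lookup, not the exact tree code. */
static struct rb_node *
__get_cached_rbnode(struct iova_domain *iovad, unsigned long *limit_pfn)
{
	/*
	 * Only an exactly-32-bit limit_pfn ever consults the cached node.
	 * Any other limit (e.g. one derived from a larger dma_mask) falls
	 * through to rb_last(), which can hand back the node for the
	 * 0xf8200000000 MMIO reservation even though it sits above
	 * limit_pfn.
	 */
	if ((*limit_pfn != iovad->dma_32bit_pfn) ||
	    (iovad->cached32_node == NULL))
		return rb_last(&iovad->rbroot);

	/* Cached case: back up one node and shrink the limit accordingly. */
	{
		struct rb_node *prev_node = rb_prev(iovad->cached32_node);
		struct iova *curr_iova =
			container_of(iovad->cached32_node, struct iova, node);

		*limit_pfn = curr_iova->pfn_lo - 1;
		return prev_node;
	}
}

So for anything other than a 32-bit mask the walk starts from whatever
happens to be last in the tree, limit_pfn notwithstanding.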
