Date:	Thu, 31 Mar 2011 16:25:06 -0700
From:	Mike Travis <travis@....com>
To:	Chris Wright <chrisw@...s-sol.org>
Cc:	David Woodhouse <dwmw2@...radead.org>,
	Jesse Barnes <jbarnes@...tuousgeek.org>,
	Mike Habeck <habeck@....com>, iommu@...ts.linux-foundation.org,
	linux-pci@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/4] Intel pci: Limit dmar_init_reserved_ranges

I'll probably need our Hardware PCI Engineer's help to explain this
further; in the meantime, here's a pointer to an earlier email thread:

http://marc.info/?l=linux-kernel&m=129259816925973&w=2

I'll also dig out the specs you're asking for.

Thanks,
Mike

Chris Wright wrote:
> * Mike Travis (travis@....com) wrote:
>> Chris - did you have any comment on this patch?
> 
> It doesn't actually look right to me.  It means that particular range
> is no longer reserved.  But perhaps I've misunderstood something.
> 
>> Mike Travis wrote:
>>>    dmar_init_reserved_ranges() reserves the card's MMIO ranges to
>>>    prevent handing out a DMA map that would overlap with the MMIO range.
>>>    The problem is that while the Nvidia GPU has 64-bit BARs, it's
>>>    capable of receiving > 40-bit PIOs but can't generate > 40-bit DMAs.
> 
> I don't understand what you mean here.
> 
>>>    So when the iommu code reserves these MMIO ranges, a > 40-bit
>>>    entry ends up getting into the rbtree.  On a UV test system with
>>>    the Nvidia cards, the BARs are:
>>>
>>>      0001:36:00.0 VGA compatible controller: nVidia Corporation GT200GL
>>> 	  Region 0: Memory at 92000000 (32-bit, non-prefetchable) [size=16M]
>>> 	  Region 1: Memory at f8200000000 (64-bit, prefetchable) [size=256M]
>>> 	  Region 3: Memory at 90000000 (64-bit, non-prefetchable) [size=32M]
>>>
>>>    So this 44bit MMIO address 0xf8200000000 ends up in the rbtree.  As DMA
>>>    maps get added and deleted from the rbtree we can end up getting a cached
>>>    entry to this 0xf8200000000 entry... this is what results in the code
>>>    handing out the invalid DMA map of 0xf81fffff000:
>>>
>>> 	    [ 0xf8200000000-1 >> PAGE_SIZE << PAGE_SIZE ]
>>>
>>>    The IOVA code needs to better honor the "limit_pfn" when allocating
>>>    these maps.
> 
> This means we could get the MMIO address range (it's no longer reserved).
> It seems to me the DMA transaction would then become a peer-to-peer
> transaction if ACS is not enabled, which could show up as a random register
> write in that GPU's 256M BAR (i.e. broken).
> 
> The iova allocation should not hand out an address bigger than the
> dma_mask.  What is the device's dma_mask?
> 
> thanks,
> -chris
