Message-ID: <a0ca6dd974be42878a8f51b0a7bbe00f@huawei.com>
Date:   Thu, 18 Mar 2021 04:46:40 +0000
From:   "Longpeng (Mike, Cloud Infrastructure Service Product Dept.)" 
        <longpeng2@...wei.com>
To:     Lu Baolu <baolu.lu@...ux.intel.com>,
        Alex Williamson <alex.williamson@...hat.com>,
        Nadav Amit <nadav.amit@...il.com>
CC:     "dwmw2@...radead.org" <dwmw2@...radead.org>,
        "joro@...tes.org" <joro@...tes.org>,
        "will@...nel.org" <will@...nel.org>,
        "iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
        LKML <linux-kernel@...r.kernel.org>,
        "Gonglei (Arei)" <arei.gonglei@...wei.com>,
        chenjiashang <chenjiashang@...wei.com>,
        "Subo (Subo, Cloud Infrastructure Service Product Dept.)" 
        <subo7@...wei.com>
Subject: RE: A problem of Intel IOMMU hardware ?

Hi guys,

Let me provide some more information; please see below.

> -----Original Message-----
> From: Lu Baolu [mailto:baolu.lu@...ux.intel.com]
> Sent: Thursday, March 18, 2021 10:59 AM
> To: Alex Williamson <alex.williamson@...hat.com>
> Cc: baolu.lu@...ux.intel.com; Longpeng (Mike, Cloud Infrastructure Service Product
> Dept.) <longpeng2@...wei.com>; dwmw2@...radead.org; joro@...tes.org;
> will@...nel.org; iommu@...ts.linux-foundation.org; LKML
> <linux-kernel@...r.kernel.org>; Gonglei (Arei) <arei.gonglei@...wei.com>;
> chenjiashang <chenjiashang@...wei.com>
> Subject: Re: A problem of Intel IOMMU hardware ?
> 
> Hi Alex,
> 
> On 3/17/21 11:18 PM, Alex Williamson wrote:
> >>>           {MAP,   0x0, 0xc0000000}, --------------------------------- (b)
> >>>                   use GDB to pause at here, and then DMA read
> >>> IOVA=0,
> >> IOVA 0 seems to be a special one. Have you verified with other
> >> addresses than IOVA 0?
> > It is???  That would be a problem.
> >
> 
> No problem from hardware point of view as far as I can see. Just thought about
> software might handle it specially.
> 

We simplified the reproducer; the following map/unmap sequence can also
reproduce the problem.

1. mmap 4G of memory backed by 2M hugetlbfs pages

2. run the while loop:
while (1) {
    DMA MAP   (0, 0xa0000)    - - - - - - - - - - - - - - (a)
    DMA UNMAP (0, 0xa0000)    - - - - - - - - - - - - - - (b)
        Operation-1: dump the DMAR table
    DMA MAP   (0, 0xc0000000) - - - - - - - - - - - - - - (c)
        Operation-2:
            use GDB to pause here, then trigger a DMA read at IOVA=0;
            sometimes the DMA succeeds (as expected),
            but sometimes it faults (not-present reported).
        Operation-3: dump the DMAR table
        Operation-4 (when the DMA faults): see below
    DMA UNMAP (0, 0xc0000000) - - - - - - - - - - - - - - (d)
}
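
For clarity, here is a compilable sketch of this loop using the standard
VFIO type1 ioctls (a sketch only: it assumes the VFIO container/group
setup is done elsewhere, that 2M huge pages are reserved, and error
handling is omitted):

#define _GNU_SOURCE
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

#define VA_SIZE (4UL << 30)     /* 4G, backed by 2M huge pages */

static void dma_map(int container, void *vaddr, uint64_t iova, uint64_t size)
{
	struct vfio_iommu_type1_dma_map map = {
		.argsz = sizeof(map),
		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
		.vaddr = (uintptr_t)vaddr,
		.iova  = iova,
		.size  = size,
	};
	ioctl(container, VFIO_IOMMU_MAP_DMA, &map);
}

static void dma_unmap(int container, uint64_t iova, uint64_t size)
{
	struct vfio_iommu_type1_dma_unmap unmap = {
		.argsz = sizeof(unmap),
		.iova  = iova,
		.size  = size,
	};
	ioctl(container, VFIO_IOMMU_UNMAP_DMA, &unmap);
}

int main(void)
{
	int container = -1;     /* VFIO container fd, set up as usual */
	void *buf = mmap(NULL, VA_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	while (1) {
		dma_map(container, buf, 0, 0xa0000);       /* (a) */
		dma_unmap(container, 0, 0xa0000);          /* (b) */
		dma_map(container, buf, 0, 0xc0000000);    /* (c) */
		/* pause here (GDB) and issue the DMA read at IOVA 0 */
		dma_unmap(container, 0, 0xc0000000);       /* (d) */
	}
	return 0;
}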

The DMAR page table dumped at Operation-1 is (only the entries covering IOVA 0 are shown):

PML4: 0x      1a34fbb003
  PDPE: 0x      1a34fbb003
   PDE: 0x      1a34fbf003
    PTE: 0x               0

And the table dumped at Operation-3 is:

PML4: 0x      1a34fbb003
  PDPE: 0x      1a34fbb003
   PDE: 0x       15ec00883  <-- 2M superpage
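
To read these dumps: in the VT-d second-level page-table format, bit 0
is Read, bit 1 is Write, bit 7 on a PDE is Page Size (a 2M superpage
when set), and bit 11 is Snoop. So the 0x...003 entries are present
R/W non-leaf entries, a value of 0 is not-present, and 0x15ec00883 is
an R/W 2M superpage. A small decode helper (a sketch; the helper name
is ours):

#include <stdint.h>
#include <stdio.h>

/* Decode the low bits of a VT-d second-level paging entry. */
static void decode_slpte(uint64_t e)
{
	printf("0x%llx:%s%s%s%s pfn 0x%llx\n",
	       (unsigned long long)e,
	       (e & 1ULL << 0) ? " R" : " -",
	       (e & 1ULL << 1) ? "W" : "-",
	       (e & 1ULL << 7) ? " PS(superpage)" : "",
	       (e & 1ULL << 11) ? " SNP" : "",
	       (unsigned long long)(e >> 12));
}

int main(void)
{
	decode_slpte(0x1a34fbf003ULL); /* Operation-1 PDE: R/W, points to a PT page */
	decode_slpte(0x0ULL);          /* Operation-1 PTE: not present (fault reason 06) */
	decode_slpte(0x15ec00883ULL);  /* Operation-3 PDE: R/W 2M superpage */
	return 0;
}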

So we can see that IOVA 0 is mapped, yet the DMA read faults:

dmar_fault: 131757 callbacks suppressed
DRHD: handling fault status reg 402
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set
DRHD: handling fault status reg 600
DRHD: handling fault status reg 602
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set

NOTE, now the magical thing happens (*Operation-4*): we write the PTE
seen at Operation-1 from 0 to 0x3 (i.e. Read/Write allowed), then
trigger the DMA read again; it succeeds and returns the data at HPA 0 !!
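
For concreteness, one way to do this poke from user space is via
/dev/mem at the physical address of the old page-table page
(0x1a34fbf000, i.e. the Operation-1 PDE value with the low bits
masked off). This is only a sketch of the idea, it assumes the kernel
allows /dev/mem access to that range (no CONFIG_STRICT_DEVMEM), and
the address is specific to this run:

#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* Old page-table page, from the Operation-1 PDE (0x1a34fbf003 & ~0xfff) */
	off_t pt_page = 0x1a34fbf000LL;
	int fd = open("/dev/mem", O_RDWR | O_SYNC);
	volatile uint64_t *pt = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
				     MAP_SHARED, fd, pt_page);

	pt[0] = 0x3;    /* PTE for IOVA 0: set Read|Write */
	munmap((void *)pt, 4096);
	close(fd);
	return 0;
}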

Why would modifying the old page table make any difference? As we
discussed previously, the cache flushing in the driver looks correct:
it calls flush_iotlb after (b), and no flush should be needed after (c).
But the result of the experiment shows that the old page table, or
stale caches derived from it, is actually still in effect.

Any ideas?

> Best regards,
> baolu
