Date:   Mon, 22 Mar 2021 00:27:52 +0000
From:   "Longpeng (Mike, Cloud Infrastructure Service Product Dept.)" 
        <longpeng2@...wei.com>
To:     Nadav Amit <nadav.amit@...il.com>
CC:     "Tian, Kevin" <kevin.tian@...el.com>,
        chenjiashang <chenjiashang@...wei.com>,
        David Woodhouse <dwmw2@...radead.org>,
        "iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
        LKML <linux-kernel@...r.kernel.org>,
        "alex.williamson@...hat.com" <alex.williamson@...hat.com>,
        "Gonglei (Arei)" <arei.gonglei@...wei.com>,
        "will@...nel.org" <will@...nel.org>,
        Lu Baolu <baolu.lu@...ux.intel.com>,
        Joerg Roedel <joro@...tes.org>
Subject: RE: A problem of Intel IOMMU hardware ?



> -----Original Message-----
> From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> Sent: Monday, March 22, 2021 7:51 AM
> To: 'Nadav Amit' <nadav.amit@...il.com>
> Cc: Tian, Kevin <kevin.tian@...el.com>; chenjiashang
> <chenjiashang@...wei.com>; David Woodhouse <dwmw2@...radead.org>;
> iommu@...ts.linux-foundation.org; LKML <linux-kernel@...r.kernel.org>;
> alex.williamson@...hat.com; Gonglei (Arei) <arei.gonglei@...wei.com>;
> will@...nel.org; 'Lu Baolu' <baolu.lu@...ux.intel.com>; 'Joerg Roedel'
> <joro@...tes.org>
> Subject: RE: A problem of Intel IOMMU hardware ?
> 
> Hi Nadav,
> 
> > -----Original Message-----
> > From: Nadav Amit [mailto:nadav.amit@...il.com]
> > Sent: Friday, March 19, 2021 12:46 AM
> > To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > <longpeng2@...wei.com>
> > Cc: Tian, Kevin <kevin.tian@...el.com>; chenjiashang
> > <chenjiashang@...wei.com>; David Woodhouse <dwmw2@...radead.org>;
> > iommu@...ts.linux-foundation.org; LKML <linux-kernel@...r.kernel.org>;
> > alex.williamson@...hat.com; Gonglei (Arei) <arei.gonglei@...wei.com>;
> > will@...nel.org
> > Subject: Re: A problem of Intel IOMMU hardware ?
> >
> >
> >
> > > On Mar 18, 2021, at 2:25 AM, Longpeng (Mike, Cloud Infrastructure
> > > Service
> > Product Dept.) <longpeng2@...wei.com> wrote:
> > >
> > >
> > >
> > >> -----Original Message-----
> > >> From: Tian, Kevin [mailto:kevin.tian@...el.com]
> > >> Sent: Thursday, March 18, 2021 4:56 PM
> > >> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > >> <longpeng2@...wei.com>; Nadav Amit <nadav.amit@...il.com>
> > >> Cc: chenjiashang <chenjiashang@...wei.com>; David Woodhouse
> > >> <dwmw2@...radead.org>; iommu@...ts.linux-foundation.org; LKML
> > >> <linux-kernel@...r.kernel.org>; alex.williamson@...hat.com; Gonglei
> > >> (Arei) <arei.gonglei@...wei.com>; will@...nel.org
> > >> Subject: RE: A problem of Intel IOMMU hardware ?
> > >>
> > >>> From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > >>> <longpeng2@...wei.com>
> > >>>
> > >>>> -----Original Message-----
> > >>>> From: Tian, Kevin [mailto:kevin.tian@...el.com]
> > >>>> Sent: Thursday, March 18, 2021 4:27 PM
> > >>>> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > >>>> <longpeng2@...wei.com>; Nadav Amit <nadav.amit@...il.com>
> > >>>> Cc: chenjiashang <chenjiashang@...wei.com>; David Woodhouse
> > >>>> <dwmw2@...radead.org>; iommu@...ts.linux-foundation.org; LKML
> > >>>> <linux-kernel@...r.kernel.org>; alex.williamson@...hat.com;
> > >>>> Gonglei
> > >>> (Arei)
> > >>>> <arei.gonglei@...wei.com>; will@...nel.org
> > >>>> Subject: RE: A problem of Intel IOMMU hardware ?
> > >>>>
> > >>>>> From: iommu <iommu-bounces@...ts.linux-foundation.org> On Behalf
> > >>>>> Of Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > >>>>>
> > >>>>>> 2. Consider ensuring that the problem is not somehow related to
> > >>>>>> queued invalidations. Try to use __iommu_flush_iotlb() instead
> > >>>>>> of qi_flush_iotlb().
> > >>>>>>
> > >>>>>
> > >>>>> I tried to force the use of __iommu_flush_iotlb(), but something
> > >>>>> went wrong and the system crashed, so I prefer to lower the
> > >>>>> priority of this operation.
> > >>>>>
> > >>>>
> > >>>> The VT-d spec clearly says that register-based invalidation can be
> > >>>> used only when queued invalidations are not enabled. The Intel-IOMMU
> > >>>> driver doesn't provide an option to disable queued invalidation
> > >>>> when the hardware is capable, though. If you really want to try,
> > >>>> tweak the code in intel_iommu_init_qi.
> > >>>>
> > >>>
> > >>> Hi Kevin,
> > >>>
> > >>> Thanks for pointing this out. Do you have any ideas about this
> > >>> problem? I tried to describe the problem more clearly in my reply
> > >>> to Alex; please have a look if you're interested.
> > >>
> > >> btw I saw you used a 4.18 kernel in this test. What about the latest kernel?
> > >>
> > >
> > > Not tested yet. It's hard to upgrade the kernel in our environment.
> > >
> > >> Also, one way to separate a software bug from a hardware bug is to
> > >> trace the low-level interface (e.g., qi_flush_iotlb) which actually
> > >> sends invalidation descriptors to the IOMMU hardware. Check the
> > >> window between b) and c) and see whether the software does the
> > >> right thing as expected there.
> > >>
> > >
> > > We added some logging to the IOMMU driver these days; the software
> > > seems fine. But we haven't looked inside qi_submit_sync yet, I'll
> > > try it tonight.
> >
> > So here is my guess:
> >
> > Intel probably used, as a basis for the IOTLB, an implementation of
> > some other (regular) TLB design.
> >
> > The Intel SDM says regarding TLBs (4.10.4.2 "Recommended Invalidation"):
> >
> > "Software wishing to prevent this uncertainty should not write to a
> > paging-structure entry in a way that would change, for any linear
> > address, both the page size and either the page frame, access rights,
> > or other attributes."
> >
> >
> > Now the aforementioned uncertainty is a bit different (multiple
> > *valid* translations of a single address). Yet, perhaps this is yet
> > another thing that might happen.
> >
> > From a brief look at the handling of MMU (not IOMMU) hugepages in
> > Linux, indeed the PMD is first cleared and flushed before a new valid
> > PMD is set. This is possible for MMUs since they allow the software
> > to handle spurious page faults gracefully. This is not the case for
> > the IOMMU though (without PRI).
> >
> 
> But in my case, flush_iotlb is called after the range (0x0, 0xa0000) is
> unmapped. I have no idea why this invalidation isn't effective (I haven't
> looked inside the QI yet), but there are no complaints from the driver.
> 
> Could you please point me to the MMU code you mentioned above? In the MMU
> code, is it possible that all of the PTEs are not-present while the PMD
> entry is still present?
> 

Oh, I see the following MMU code path:
unmap_pmd_range
  __unmap_pmd_range
    unmap_pte_range
      try_to_free_pmd_page (if all of the PTEs are pmd_none)

So the MMU code won't keep the PMD entry present if all of its PTEs are
not-present.
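For reference, that path (unmap_pte_range() calling try_to_free_pmd_page()
in the x86 mm code) can be modeled in user space. This is only a simplified
sketch of the logic, with the page-table structures replaced by plain heap
allocations; the function names mirror the kernel's, but nothing else here
is the actual kernel code:

```c
#include <stdint.h>
#include <stdlib.h>

#define PTRS_PER_PTE 512

/* Model of one page of PTEs; a PTE value of 0 means "not present". */
struct pte_page {
    uint64_t pte[PTRS_PER_PTE];
};

/*
 * Model of try_to_free_pmd_page(): if every PTE under this PMD entry is
 * none, free the PTE page and clear the PMD entry (make it not-present).
 * Otherwise leave the PMD entry alone.
 */
static int try_to_free_pmd_page(uintptr_t *pmd_entry)
{
    struct pte_page *p = (struct pte_page *)*pmd_entry;
    int i;

    for (i = 0; i < PTRS_PER_PTE; i++)
        if (p->pte[i] != 0)
            return 0;       /* some PTE still present: keep the PMD */

    free(p);
    *pmd_entry = 0;         /* all PTEs none: PMD goes not-present too */
    return 1;
}

/*
 * Model of unmap_pte_range(): clear a range of PTEs, then try to free
 * the whole PTE page so the PMD entry doesn't stay present pointing at
 * an all-empty PTE page.
 */
static void unmap_pte_range(uintptr_t *pmd_entry, int start, int end)
{
    struct pte_page *p = (struct pte_page *)*pmd_entry;
    int i;

    for (i = start; i < end; i++)
        p->pte[i] = 0;
    try_to_free_pmd_page(pmd_entry);
}
```

So in this model, as in the kernel path above, unmapping the last present
PTE under a PMD also clears the PMD entry itself.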


> *Page table after (0x0, 0xa0000) is unmapped:
> PML4: 0x      1a34fbb003
>   PDPE: 0x      1a34fbb003
>    PDE: 0x      1a34fbf003
>     PTE: 0x               0
> 
> *Page table after (0x0, 0xc0000000) is mapped:
> PML4: 0x      1a34fbb003
>   PDPE: 0x      1a34fbb003
>    PDE: 0x       15ec00883
> 
> > Not sure this explains everything though. If that is the problem, then
> > during a mapping that changes page sizes, a TLB flush is needed,
> > similar to the one Longpeng did manually.
> >
> 
