Message-ID: <5AEA74A3.8000909@linux.intel.com>
Date:   Thu, 3 May 2018 10:32:03 +0800
From:   Lu Baolu <baolu.lu@...ux.intel.com>
To:     Dmitry Safonov <dima@...sta.com>, linux-kernel@...r.kernel.org,
        joro@...tes.org, "Raj, Ashok" <ashok.raj@...el.com>
Cc:     0x7f454c46@...il.com, Alex Williamson <alex.williamson@...hat.com>,
        David Woodhouse <dwmw2@...radead.org>,
        Ingo Molnar <mingo@...nel.org>,
        iommu@...ts.linux-foundation.org
Subject: Re: [PATCHv4 2/2] iommu/vt-d: Limit number of faults to clear in irq
 handler

Hi,

On 05/03/2018 10:16 AM, Lu Baolu wrote:
> Hi,
>
> On 05/03/2018 09:59 AM, Dmitry Safonov wrote:
>> On Thu, 2018-05-03 at 09:32 +0800, Lu Baolu wrote:
>>> Hi,
>>>
>>> On 05/03/2018 08:52 AM, Dmitry Safonov wrote:
>>>> AFAICS, we're doing fault-clearing in a loop inside the irq handler.
>>>> That means that if a fault is raised while we're clearing, it'll make
>>>> the irq level-triggered (or edge-triggered) on the lapic. So, whenever
>>>> we return from the irq handler, the irq will be raised again.
>>>>
Uhm, I double-checked with the spec. Interrupts should be generated,
>>> since we always clear the fault overflow bit.
>>>
>>> Anyway, we can't clear faults in a limited loop, as the spec says in
>>> 7.3.1:
>> Mind elaborating?
>> IOW, I do not see a contradiction. We're still clearing faults in FIFO
>> fashion. There is nothing preventing us from doing some spare work in
>> between clearings (return from the interrupt, then fault again and
>> continue).
> Hardware maintains an internal index to reference the fault recording
> register in which the next fault can be recorded. When a fault comes,
> hardware checks the Fault bit (bit 31 of the 4th 32-bit word of the
> fault recording register) referenced by the internal index. If this bit
> is set, hardware will not record the fault.
>
> Since we stop clearing F bits as soon as we reach a register entry whose
> F bit is already clear, we might exit the fault handling with some
> register entries that still have the F bit set.
>
>   F
> | 0 |  xxxxxxxxxxxxx|
> | 0 |  xxxxxxxxxxxxx|
> | 0 |  xxxxxxxxxxxxx|  <--- Fault record index in fault status register

Forgot to mention: this fault record index, which software reads from
the fault status register, is also maintained by hardware. It is the
index of the first fault recording register in which hardware recorded
faults last time.

Software doesn't maintain its own index, right? So there might be some
registers left there with the F bit set.
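
To make that concrete, here is a minimal userspace sketch of the
clearing walk (a plain array stands in for the MMIO fault recording
registers; all names are invented for illustration, this is not the
real dmar driver code):

#include <stdio.h>

#define NUM_FAULT_REGS 12

/* F bit of each fault recording register (1 = fault latched) */
static int frcd_f[NUM_FAULT_REGS] = {
        1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0,
};

/*
 * Walk from the fault record index read from the fault status
 * register, clearing F bits until the first entry whose F bit is
 * already clear.
 */
static void clear_faults(unsigned int fault_index)
{
        while (frcd_f[fault_index]) {
                frcd_f[fault_index] = 0; /* write 1 to the F bit to clear it */
                fault_index = (fault_index + 1) % NUM_FAULT_REGS;
        }
}

int main(void)
{
        clear_faults(0); /* stops at index 2, the first F=0 entry */
        for (unsigned int i = 0; i < NUM_FAULT_REGS; i++)
                if (frcd_f[i])
                        printf("reg %u still has F set\n", i); /* 4, 5, 6 */
        return 0;
}

The walk exits at the first F=0 hole, so the later run of F=1 entries
(registers 4-6 here) is never cleared.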

Best regards,
Lu Baolu

> | 0 |  xxxxxxxxxxxxx|
> | 1 |  xxxxxxxxxxxxx|  <--- hardware maintained index
> | 1 |  xxxxxxxxxxxxx|
> | 1 |  xxxxxxxxxxxxx|
> | 0 |  xxxxxxxxxxxxx|
> | 0 |  xxxxxxxxxxxxx|
> | 0 |  xxxxxxxxxxxxx|
> | 0 |  xxxxxxxxxxxxx|
>
> Taking the example above, hardware could record only 2 more faults,
> with all the others dropped.
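
A toy model of this (plain userspace C, invented names; the real
registers are MMIO-mapped and handled in drivers/iommu/dmar.c) shows
that once the hardware maintained index sits on a leftover F=1 entry,
new faults are dropped even though other slots are free:

#include <stdio.h>

#define NUM_FAULT_REGS 12

/* leftover state from the diagram: indices 4-6 still have F set */
static int frcd_f[NUM_FAULT_REGS] = {
        0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0,
};
static unsigned int hw_index = 4; /* hardware maintained index */

/* Hardware side: record a fault, or drop it if the slot is busy. */
static void hw_record_fault(int id)
{
        if (frcd_f[hw_index]) {
                printf("fault %d dropped, F set at reg %u\n", id, hw_index);
                return; /* nothing recorded, index does not move */
        }
        frcd_f[hw_index] = 1;
        printf("fault %d recorded at reg %u\n", id, hw_index);
        hw_index = (hw_index + 1) % NUM_FAULT_REGS;
}

int main(void)
{
        for (int id = 0; id < 3; id++)
                hw_record_fault(id); /* all three are dropped */
        return 0;
}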
>
> Best regards,
> Lu Baolu
>
