Message-ID: <7b8d47f9180e43a7bdb01f9d8754c9f6@AUSX13MPC107.AMER.DELL.COM>
Date:   Wed, 11 Mar 2020 17:27:35 +0000
From:   <Austin.Bolen@...l.com>
To:     <helgaas@...nel.org>, <Austin.Bolen@...l.com>
CC:     <sathyanarayanan.kuppuswamy@...ux.intel.com>,
        <linux-pci@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
        <ashok.raj@...el.com>
Subject: Re: [PATCH v17 09/12] PCI/AER: Allow clearing Error Status Register
 in FF mode

On 3/11/2020 12:12 PM, Bjorn Helgaas wrote:
> 
> [EXTERNAL EMAIL]
> 
<SNIP>
> 
> I'm probably missing your intent, but that sounds like "the OS can
> read/write AER bits whenever it wants, regardless of ownership."
> 
> That doesn't sound practical to me, and I don't think it's really
> similar to DPC, where it's pretty clear that the OS can touch DPC bits
> it doesn't own but only *during the EDR processing window*.

Yes, by treating AER bits like DPC bits I meant I'd define the specific 
time windows when the OS can touch the AER status bits, similar to how 
it's done for DPC in the current ECN.

> 
>>>> For the normative text describing when OS clears the AER bits
>>>> following the informative flow chart, it could say that OS clears
>>>> AER as soon as possible after OST returns and before OS processes
>>>> _HPX and loading drivers.  Open to other suggestions as well.
>>>
>>> I'm not sure what to do with "as soon as possible" either.  That
>>> doesn't seem like something firmware and the OS can agree on.
>>
>> I can just state that it's done after _OST returns but before _HPX
>> is evaluated or drivers are loaded. Any time in that range is fine.
>> I can't get super specific here because different OSes do different
>> things.  Even for a given OS they change over time. And I need
>> something generic enough to support a wide variety of OS
>> implementations.
> 
> Yeah.  I don't know how to solve this.
> 
> Linux doesn't actually unload and reload drivers for the child devices
> (Sathy, correct me if I'm wrong here) even though DPC containment
> takes the link down and effectively unplugs and replugs the device.  I
> would *like* to handle it like hotplug, but some higher-level software
> doesn't deal well with things like storage devices disappearing and
> reappearing.
> 
> Since Linux doesn't actually re-enumerate the child devices, it
> wouldn't evaluate _HPX again.  It would probably be cleaner if it did,
> but it's all tied up with the whole unplug/replug problem.

DPC resets everything below it, so getting the hierarchy back up and 
running means all buses and resources must be reassigned, _HPX 
re-evaluated, and drivers reloaded. If those steps don't happen, the 
whole hierarchy below the port that triggered DPC will remain 
inaccessible.

Higher-level software that can't handle a storage device disappearing 
due to hot-plug will have the same problem with DPC, since DPC holds 
the port in the disabled state (and hence the devices below it are 
inaccessible). And once DPC is released, the devices will be 
unconfigured and thus still inaccessible to upper-level software. A 
lot of the upper-level storage software I've seen can already handle 
this gracefully.

> 
>>> For child devices of that port, obviously it's impossible to
>>> access AER registers until DPC Trigger Status is cleared, and the
>>> flowchart says the OS shouldn't access them until after _OST.
>>>
>>> I'm actually not sure we currently do *anything* with child device
>>> AER info in the EDR path.  pcie_do_recovery() does walk the
>>> sub-hierarchy of child devices, but it only calls error handling
>>> callbacks in the child drivers; it doesn't do anything with the
>>> child AER registers itself.  And of course, this happens before
>>> _OST, so it would be too early in any case.  But maybe I'm missing
>>> something here.
>>
>> My understanding is that the OS read/clears AER in the case where OS
>> has native control of AER.  Feedback from OSVs is they wanted to
>> continue to do that to keep the native OS controlled AER and FF
>> mechanism similar.  The other way we could have done it would be to
>> have the firmware read/clear AER and report them to OS via APEI.
> 
> When Linux has native control of AER, it reads/clears AER status.
> The flowchart is for the case where firmware has AER control, so I
> guess Linux would not field AER interrupts and wouldn't expect to
> read/clear AER status.  So I *guess* Linux would assume APEI?  But
> that doesn't seem to be what the flowchart assumes.

Correct on the flowchart.  The OSVs we talked with did not want to use 
APEI.  They wanted to read and clear AER themselves, and hence the 
flowchart is written that way.

> 
>>> BTW, if/when this is updated, I have another question: the _OSC
>>> DPC control bit currently allows the OS to write DPC Control
>>> during that window.  I understand the OS writing the RW1C *Status*
>>> bits to clear them, but it seems like writing the DPC Control
>>> register is likely to cause issues.  The same question would apply
>>> to the AER access we're talking about.
>>
>> We could specify which particular bits can and can't be touched.
>> But it's hard to maintain as new bits are added.  Probably better to
>> add some guidance that OS should read/clear error status, DPC
>> Trigger Status, etc. but shouldn't change masks/severity/control
>> bits/etc.
> 
> Yeah.  I didn't mean at the level of individual bits; I was thinking
> more of status/log/etc vs control registers.  But maybe even that is
> hard, I dunno.
> 
I'll see if I can break it out by register.
