Message-ID: <CAF6AEGvdsqhxGrLEDJ-fT72Rd1ANDYwdU08bYt26Gyr_BO=kuQ@mail.gmail.com>
Date: Wed, 27 Sep 2017 12:14:08 -0400
From: Rob Clark <robdclark@...il.com>
To: Jean-Philippe Brucker <jean-philippe.brucker@....com>
Cc: Joerg Roedel <joro@...tes.org>,
linux-arm-msm <linux-arm-msm@...r.kernel.org>,
Will Deacon <Will.Deacon@....com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>
Subject: Re: [RFC] iommu: arm-smmu: stall support

On Wed, Sep 27, 2017 at 9:49 AM, Jean-Philippe Brucker
<jean-philippe.brucker@....com> wrote:
> Hi Joerg,
>
> On 27/09/17 13:15, Joerg Roedel wrote:
>> Hi Rob, Jean,
>>
>> On Fri, Sep 22, 2017 at 02:42:44PM -0400, Rob Clark wrote:
>>> I'm in favour of splitting the reporting *somehow*... the two
>>> approaches that seemed sane are:
>>>
>>> 1) call fault handler from irq and having separate domain->resume()
>>> called by the driver, potentially from a wq
>>> 2) or having two fault callbacks, first called before wq and then
>>> based on returned value, optionally 2nd callback called from wq
>>>
>>> The first seemed less intrusive to me, but I'm flexible.
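
A minimal driver-side sketch of option (1), assuming a separate resume entry
point; iommu_domain_resume() is hypothetical here, standing in for the
domain->resume() hook mentioned above. The handler runs in IRQ context and
only records the fault and kicks the driver's own workqueue:

#include <linux/iommu.h>
#include <linux/workqueue.h>

struct gpu_fault_ctx {
	struct iommu_domain *domain;
	struct work_struct work;
	unsigned long iova;
	int flags;
};

static void gpu_fault_worker(struct work_struct *work)
{
	struct gpu_fault_ctx *ctx = container_of(work, struct gpu_fault_ctx, work);

	/* Sleepable context: dump GPU state, decide how to recover, etc. */

	/* Hypothetical API standing in for the separate domain->resume():
	 * retry or terminate the stalled transaction once we are done. */
	iommu_domain_resume(ctx->domain, true /* terminate */);
}

static int gpu_iommu_fault(struct iommu_domain *domain, struct device *dev,
			   unsigned long iova, int flags, void *token)
{
	struct gpu_fault_ctx *ctx = token;

	/* IRQ context: just record the fault and defer the real work. */
	ctx->iova = iova;
	ctx->flags = flags;
	schedule_work(&ctx->work);

	return 0;	/* reported; resume comes later from the worker */
}

Registration would be unchanged from today, e.g. INIT_WORK(&ctx->work,
gpu_fault_worker) followed by iommu_set_fault_handler(domain,
gpu_iommu_fault, ctx).
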
>>
>> How about adding a flag to the fault-handler call-back that tells us
>> whether it wants to sleep or not. If it wants, we call it from a wq, if
>> not we call it directly like we do today in the
>> report_iommu_fault() function.
>>
>> In any case we call iommu_ops->resume(), when set, on completion of the
>> fault-handler, either from the workqueue or from report_iommu_fault()
>> itself.
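
A rough sketch of how that could look on the IOMMU core side. struct
iommu_fault_event, domain->handler_flags, IOMMU_FAULT_HANDLER_SLEEPS and
ops->resume() are all made up for the example; the existing
report_iommu_fault() only has the direct-call path, and handler/handler_token
are the domain fields it already uses:

#include <linux/iommu.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct iommu_fault_event {			/* hypothetical */
	struct work_struct work;
	struct iommu_domain *domain;
	struct device *dev;
	unsigned long iova;
	int flags;
};

static void iommu_fault_work_fn(struct work_struct *work)
{
	struct iommu_fault_event *evt =
		container_of(work, struct iommu_fault_event, work);
	struct iommu_domain *domain = evt->domain;
	int ret;

	/* Blocking context: the handler is allowed to sleep here. */
	ret = domain->handler(domain, evt->dev, evt->iova, evt->flags,
			      domain->handler_token);

	if (domain->ops->resume)		/* hypothetical hook */
		domain->ops->resume(domain, evt->dev, ret);
	kfree(evt);
}

static int __report_iommu_fault(struct iommu_domain *domain,
				struct device *dev, unsigned long iova,
				int flags)
{
	int ret;

	if (domain->handler_flags & IOMMU_FAULT_HANDLER_SLEEPS) {
		/* Handler asked for blocking context: defer to a
		 * workqueue, resume happens from the worker. */
		struct iommu_fault_event *evt = kzalloc(sizeof(*evt),
							GFP_ATOMIC);
		if (!evt)
			return -ENOMEM;
		evt->domain = domain;
		evt->dev = dev;
		evt->iova = iova;
		evt->flags = flags;
		INIT_WORK(&evt->work, iommu_fault_work_fn);
		schedule_work(&evt->work);
		return 0;
	}

	/* Atomic path: call the handler directly, then resume, much like
	 * report_iommu_fault() does today (minus the resume). */
	ret = domain->handler(domain, dev, iova, flags, domain->handler_token);
	if (domain->ops->resume)
		domain->ops->resume(domain, dev, ret);
	return ret;
}
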
>
> I like this approach. When the device driver registers a fault handler,
> it also specifies when it would like to be called (in atomic context,
> blocking context, or both).

What I have in mind is still a case-by-case decision, i.e. I'd decide
whether it is the first fault from a particular submit (job), in which
case I'd want to schedule the wq, or whether it is one of the 999
following faults from the same submit (in which case, skip the wq).
So a static decision made when registering the fault handler doesn't work.
BR,
-R
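
As an illustration of that per-fault decision, the handler could look
roughly like this. The names are made up: msm_submit_from_iova(), the
faulted flag, and IOMMU_FAULT_STATUS_DEFER (which is not in the enum quoted
below, but stands for "call me again from the workqueue"):

static int msm_gpu_fault(struct iommu_domain *domain, struct device *dev,
			 unsigned long iova, int flags, void *token)
{
	struct msm_gpu *gpu = token;
	struct msm_gem_submit *submit = msm_submit_from_iova(gpu, iova);

	if (submit && !submit->faulted) {
		submit->faulted = true;
		/* First fault from this submit: worth a trip through the
		 * workqueue so the driver can snapshot GPU state before
		 * the SMMU resumes or terminates the transaction. */
		return IOMMU_FAULT_STATUS_DEFER;	/* hypothetical */
	}

	/* Faults 2..999 from the same submit: skip the workqueue and
	 * terminate the access immediately. */
	return IOMMU_FAULT_STATUS_INVALID;
}
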
> Then the handler itself receives a flag that says which context it's
> being called from. It returns a value telling the IOMMU how to proceed.
> Depending on this value we either resume/abort immediately, or add the
> fault to the workqueue if necessary.
>
> How about using the following return values:
>
> /**
> * enum iommu_fault_status - Return status of fault handlers, telling the IOMMU
> * driver how to proceed with the fault.
> *
> * @IOMMU_FAULT_STATUS_NONE: Fault was not handled. Call the next handler, or
> * terminate.
> * @IOMMU_FAULT_STATUS_FAILURE: General error. Drop all subsequent faults from
> * this device if possible. This is "Response Failure" in PCI PRI.
> * @IOMMU_FAULT_STATUS_INVALID: Could not handle this fault, don't retry the
> * access. This is "Invalid Request" in PCI PRI.
> * @IOMMU_FAULT_STATUS_HANDLED: Fault has been handled and the page tables
> * populated, retry the access.
> * @IOMMU_FAULT_STATUS_IGNORE: Stop processing the fault, and do not send a
> * reply to the device.
> *
> * For unrecoverable faults, the only valid status is IOMMU_FAULT_STATUS_NONE.
> * For a recoverable fault, if no one handled the fault, treat it as
> * IOMMU_FAULT_STATUS_INVALID.
> */
> enum iommu_fault_status {
> IOMMU_FAULT_STATUS_NONE = 0,
> IOMMU_FAULT_STATUS_FAILURE,
> IOMMU_FAULT_STATUS_INVALID,
> IOMMU_FAULT_STATUS_HANDLED,
> IOMMU_FAULT_STATUS_IGNORE,
> };
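
For illustration, a blocking-context handler for recoverable faults might
map onto these values roughly as follows (my_dev, my_iova_is_valid() and
my_fixup_mapping() are invented for the example):

static int my_blocking_fault_handler(struct iommu_domain *domain,
				     struct device *dev, unsigned long iova,
				     int flags, void *token)
{
	struct my_dev *mydev = token;

	if (!my_iova_is_valid(mydev, iova))
		/* Bad address: don't retry the access ("Invalid Request"
		 * in PCI PRI terms). */
		return IOMMU_FAULT_STATUS_INVALID;

	if (my_fixup_mapping(mydev, iova, flags))
		/* Something went wrong on our side; drop further faults
		 * from this device if possible. */
		return IOMMU_FAULT_STATUS_FAILURE;

	/* Page tables are populated, let the device retry the access. */
	return IOMMU_FAULT_STATUS_HANDLED;
}
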
>
> This would probably cover the two use-cases of reporting faults to
> device drivers, and injecting them into the guest with VFIO, as well as
> handling PPRs internally. I'm also working on providing more details
> (pasid for instance) in the fault callback.
>
> We could also use the fault handler for invalid PRI Page Requests
> (currently specialized by amd_iommu_set_invalid_ppr_cb). It's just a
> matter of adding a registration flag to iommu_set_fault_handler.
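
One way such a registration flag could look, purely as a sketch; the flag
names and the _flags variant are made up, today's API being just
iommu_set_fault_handler(domain, handler, token):

#include <linux/iommu.h>

#define IOMMU_FAULT_HANDLER_ATOMIC	(1 << 0)  /* may be called in IRQ context */
#define IOMMU_FAULT_HANDLER_BLOCKING	(1 << 1)  /* may be called from a workqueue */
#define IOMMU_FAULT_HANDLER_INVALID_PPR	(1 << 2)  /* also report invalid PRI Page Requests */

void iommu_set_fault_handler_flags(struct iommu_domain *domain,
				   iommu_fault_handler_t handler,
				   void *token, unsigned int flags);

/* A driver wanting what amd_iommu_set_invalid_ppr_cb() provides today
 * could then register with something like: */
static void my_register_handler(struct iommu_domain *domain,
				struct my_dev *mydev)
{
	iommu_set_fault_handler_flags(domain, my_blocking_fault_handler, mydev,
				      IOMMU_FAULT_HANDLER_BLOCKING |
				      IOMMU_FAULT_HANDLER_INVALID_PPR);
}
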
>
> Thanks,
> Jean