Message-ID: <BN9PR11MB52762F2AF16AA5833D61AFF68CEC2@BN9PR11MB5276.namprd11.prod.outlook.com>
Date: Wed, 15 May 2024 07:43:28 +0000
From: "Tian, Kevin" <kevin.tian@...el.com>
To: Lu Baolu <baolu.lu@...ux.intel.com>, Jason Gunthorpe <jgg@...pe.ca>,
"Joerg Roedel" <joro@...tes.org>, Will Deacon <will@...nel.org>, Robin Murphy
<robin.murphy@....com>, Jean-Philippe Brucker <jean-philippe@...aro.org>,
Nicolin Chen <nicolinc@...dia.com>, "Liu, Yi L" <yi.l.liu@...el.com>, "Jacob
Pan" <jacob.jun.pan@...ux.intel.com>, Joel Granados <j.granados@...sung.com>
CC: "iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
"virtualization@...ts.linux-foundation.org"
<virtualization@...ts.linux-foundation.org>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>
Subject: RE: [PATCH v5 4/9] iommufd: Add fault and response message
definitions
> From: Lu Baolu <baolu.lu@...ux.intel.com>
> Sent: Tuesday, April 30, 2024 10:57 PM
>
> iommu_hwpt_pgfaults represent fault messages that the userspace can
> retrieve. Multiple iommu_hwpt_pgfaults might be put in an iopf group,
> with the IOMMU_PGFAULT_FLAGS_LAST_PAGE flag set only for the last
> iommu_hwpt_pgfault.
Do you envision extending the same structure to report unrecoverable
faults in the future?
If yes, this could be named more neutrally, e.g. iommu_hwpt_faults, with
a flag to indicate that it's a recoverable PRI request.
If it's only for PRI, probably iommu_hwpt_pgreqs is clearer.
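For illustration, the more neutral layout could look like the sketch
below (all names here are hypothetical, not the actual uAPI; the flag
is what would distinguish a recoverable PRI request from an
unrecoverable fault):

```c
#include <stdint.h>

/* Hypothetical flags -- illustration only, not the patch's uAPI */
enum iommu_hwpt_fault_flags {
	IOMMU_FAULT_FLAGS_RECOVERABLE = 1 << 0,	/* a PRI page request */
	IOMMU_FAULT_FLAGS_LAST_PAGE   = 1 << 1,	/* last fault in the group */
};

/* Hypothetical neutral structure name covering both fault kinds */
struct iommu_hwpt_fault {
	uint32_t size;		/* sizeof(struct iommu_hwpt_fault) */
	uint32_t flags;		/* combination of enum iommu_hwpt_fault_flags */
	uint32_t dev_id;	/* id of the originating device */
	uint32_t pasid;		/* Process Address Space ID */
	uint32_t grpid;		/* Page Request Group Index */
	uint32_t perm;		/* requested permissions */
	uint64_t addr;		/* fault address */
};
```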
> +
> +/**
> + * struct iommu_hwpt_pgfault - iommu page fault data
> + * @size: sizeof(struct iommu_hwpt_pgfault)
> + * @flags: Combination of enum iommu_hwpt_pgfault_flags
> + * @dev_id: id of the originated device
> + * @pasid: Process Address Space ID
> + * @grpid: Page Request Group Index
> + * @perm: Combination of enum iommu_hwpt_pgfault_perm
> + * @addr: Page address
'Fault address'
> + * @length: a hint of how much data the requestor is expecting to fetch. For
> + * example, if the PRI initiator knows it is going to do a 10MB
> + * transfer, it could fill in 10MB and the OS could pre-fault in
> + * 10MB of IOVA. It's default to 0 if there's no such hint.
This is not clear to me, and I don't remember the PCIe spec defining
such a mechanism.
> +/**
> + * enum iommufd_page_response_code - Return status of fault handlers
> + * @IOMMUFD_PAGE_RESP_SUCCESS: Fault has been handled and the page
> tables
> + * populated, retry the access. This is the
> + * "Success" defined in PCI 10.4.2.1.
> + * @IOMMUFD_PAGE_RESP_INVALID: General error. Drop all subsequent
> faults
> + * from this device if possible. This is the
> + * "Response Failure" in PCI 10.4.2.1.
> + * @IOMMUFD_PAGE_RESP_FAILURE: Could not handle this fault, don't
> retry the
> + * access. This is the "Invalid Request" in PCI
> + * 10.4.2.1.
The comments for 'INVALID' and 'FAILURE' are swapped. Also I'd rather
use the spec wording to be accurate.
> + */
> +enum iommufd_page_response_code {
> + IOMMUFD_PAGE_RESP_SUCCESS = 0,
> + IOMMUFD_PAGE_RESP_INVALID,
> + IOMMUFD_PAGE_RESP_FAILURE,
> +};
> +
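With the descriptions re-matched to the values per the section the patch
cites (PCI 10.4.2.1), something like the sketch below (the comment
wording is my approximation, not copied from the spec):

```c
enum iommufd_page_response_code {
	/* "Success": fault handled, page tables populated, retry the access */
	IOMMUFD_PAGE_RESP_SUCCESS = 0,
	/* "Invalid Request": could not handle this fault, don't retry the
	 * access
	 */
	IOMMUFD_PAGE_RESP_INVALID,
	/* "Response Failure": general error, drop all subsequent faults from
	 * this device if possible
	 */
	IOMMUFD_PAGE_RESP_FAILURE,
};
```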