Message-ID: <447e7e1b-97ef-4570-80dc-72df618a0b3a@amd.com>
Date: Fri, 5 Jan 2024 20:39:14 +0700
From: "Suthikulpanit, Suravee" <suravee.suthikulpanit@....com>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: linux-kernel@...r.kernel.org, iommu@...ts.linux.dev, joro@...tes.org,
 yi.l.liu@...el.com, kevin.tian@...el.com, nicolinc@...dia.com,
 eric.auger@...hat.com, vasant.hegde@....com, jon.grimm@....com,
 santosh.shukla@....com, Dhaval.Giani@....com, pandoh@...gle.com,
 loganodell@...gle.com
Subject: Re: [RFC PATCH 4/6] iommufd: Introduce data struct for AMD nested
 domain allocation

Hi Jason,

On 12/13/2023 9:03 PM, Jason Gunthorpe wrote:
> On Tue, Dec 12, 2023 at 10:01:37AM -0600, Suravee Suthikulpanit wrote:
>> Introduce IOMMU_HWPT_DATA_AMD_V2 data type for AMD IOMMU v2 page table,
>> which is used for stage-1 in nested translation. The data structure
>> contains information necessary for setting up the AMD HW-vIOMMU support.
>>
>> Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@....com>
>> ---
>>   include/uapi/linux/iommufd.h | 23 +++++++++++++++++++++++
>>   1 file changed, 23 insertions(+)
>>
>> diff --git a/include/uapi/linux/iommufd.h b/include/uapi/linux/iommufd.h
>> index bf4a1f8ab748..e2240d430dd1 100644
>> --- a/include/uapi/linux/iommufd.h
>> +++ b/include/uapi/linux/iommufd.h
>> @@ -389,14 +389,37 @@ struct iommu_hwpt_vtd_s1 {
>>   	__u32 __reserved;
>>   };
>>   
>> +/**
>> + * struct iommu_hwpt_amd_v2 - AMD IOMMU specific user-managed
>> + *                            v2 I/O page table data
>> + * @gcr3: GCR3 guest physical address
>> + * @gid: Guest ID
>> + * @iommu_id: IOMMU host device ID
>> + * @gdev_id: Guest device ID
>> + * @gdom_id: Guest domain ID
>> + * @glx: GCR3 table levels
>> + * @guest_paging_mode: Guest v2 page table paging mode
>> + */
>> +struct iommu_hwpt_amd_v2 {
>> +	__aligned_u64 gcr3;
>> +	__u32 gid;
>> +	__u32 iommu_id;
>> +	__u16 gdev_id;
>> +	__u16 gdom_id;
>> +	__u16 glx;
>> +	__u16 guest_paging_mode;
> 
> Add explicit padding please

OK

> Also, I'm pretty sure that most of these IDs cannot be here.
> 
> These are Ok:
> 
> 	__aligned_u64 gcr3; // table top pointer
> 	__u16 gdom_id;      // virtual cache tag
> 	__u16 glx;          // configuration of radix
> 	__u16 guest_paging_mode; // configuration of radix
> 
> These are confusing, probably incorrect.
> 
>   +	__u32 gid;
>   +	__u32 iommu_id;
>   +	__u16 gdev_id;

> iommu_id is the RID of the IOMMU, so definitely not. The iommu
> instance to work on is specified by the generic dev_id, which becomes a
> struct device * in the driver callback. Access the target iommu
> instance via dev_iommu_priv_get()/etc
>
> The translation of gdev_id to pdev_dev needs to be connected to some
> future viommu object, so this shouldn't be part of this series, and
> will eventually be provided through some new viommu object API - see
> my long outline email

Got it.
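
For reference, a trimmed layout along those lines might look as follows. This is only a sketch based on the fields Jason lists as OK; the explicit __reserved member is an assumption added to keep the structure free of implicit tail padding (16 bytes total), not part of the posted patch:

	struct iommu_hwpt_amd_v2 {
		__aligned_u64 gcr3;		/* GCR3 table top pointer (GPA) */
		__u16 gdom_id;			/* virtual cache tag */
		__u16 glx;			/* GCR3 table levels */
		__u16 guest_paging_mode;	/* guest v2 page table paging mode */
		__u16 __reserved;		/* explicit padding, must be zero */
	};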

> The viommu object should hold the GID. I'm not sure you need a GID
> right now (can you just issue invalidation on the physical side?), but
> if you do need GID to bridge until the viommu is ready it should
> probably be allocated by and stored in the nesting parent.

Currently, the GID is needed when setting up the domain ID mapping table 
(and the device ID mapping table in a future series). That setup will be 
moved to the point where the domain is attached to a device, and by then 
the GID should already be stored in another struct (e.g. struct 
iommu_dev_data). So it should be fine to remove the GID from 
struct iommu_hwpt_amd_v2.
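
As a rough illustration of that attach-time flow (purely a sketch: the gid field in struct iommu_dev_data and the helper below are assumptions, not existing code):

	/* Sketch only: assumes struct iommu_dev_data carries a gid field by then. */
	static int amd_iommu_attach_nested_dev(struct iommu_domain *dom,
					       struct device *dev)
	{
		struct iommu_dev_data *dev_data = dev_iommu_priv_get(dev);

		/* The GID comes from per-device state, not from iommu_hwpt_amd_v2. */
		return setup_gdom_id_mapping(dom, dev_data->gid); /* hypothetical helper */
	}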

Thanks,
Suravee
