Date:   Thu, 10 Jan 2019 13:47:27 +0100
From:   Pierre Morel <pmorel@...ux.ibm.com>
To:     Alex Williamson <alex.williamson@...hat.com>
Cc:     kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        walling@...ux.ibm.com, cohuck@...hat.com, david@...hat.com,
        pasic@...ux.ibm.com, thuth@...hat.com,
        Shameerali Kolothum Thodi 
        <shameerali.kolothum.thodi@...wei.com>
Subject: Re: [PATCH v1 1/2] vfio:iommu: Use capabilities to report IOMMU
 information

On 09/01/2019 20:43, Alex Williamson wrote:
> On Wed, 9 Jan 2019 18:07:19 +0100
> Pierre Morel <pmorel@...ux.ibm.com> wrote:
> 
>> On 09/01/2019 16:37, Alex Williamson wrote:
>>> On Wed,  9 Jan 2019 13:41:53 +0100
>>> Pierre Morel <pmorel@...ux.ibm.com> wrote:
>>>    
>>>> We add a new flag, VFIO_IOMMU_INFO_CAPABILITIES, inside
>>>> struct vfio_iommu_type1_info to indicate support for capabilities.
>>>>
>>>> We add a new capability, with id VFIO_IOMMU_INFO_CAP_DMA
>>>> in the capability list of the VFIO_IOMMU_GET_INFO ioctl.
>>>>
>>>> Signed-off-by: Pierre Morel <pmorel@...ux.ibm.com>
>>>> ---
>>>>    include/uapi/linux/vfio.h | 9 +++++++++
>>>>    1 file changed, 9 insertions(+)
>>>>
>>>> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
>>>> index 8131028..54c4fcb 100644
>>>> --- a/include/uapi/linux/vfio.h
>>>> +++ b/include/uapi/linux/vfio.h
>>>> @@ -669,6 +669,15 @@ struct vfio_iommu_type1_info {
>>>>    	__u32	flags;
>>>>    #define VFIO_IOMMU_INFO_PGSIZES (1 << 0)	/* supported page sizes info */
>>>>    	__u64	iova_pgsizes;		/* Bitmap of supported page sizes */
>>>> +#define VFIO_IOMMU_INFO_CAPABILITIES (1 << 1)  /* support capabilities info */
>>>> +	__u64   cap_offset;     /* Offset within info struct of first cap */
>>>> +};
>>>> +
>>>> +#define VFIO_IOMMU_INFO_CAP_DMA 1
>>>> +struct vfio_iommu_cap_dma {
>>>> +	struct vfio_info_cap_header header;
>>>> +	__u64   dma_start;
>>>> +	__u64   dma_end;
>>>>    };
>>>>    
>>>>    #define VFIO_IOMMU_GET_INFO _IO(VFIO_TYPE, VFIO_BASE + 12)
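For reference, a rough userspace sketch of how I would expect the proposed
capability chain to be consumed.  The two-pass argsz handling mirrors the
existing VFIO capability chains and is an assumption on my part; the patch
itself only defines the uapi structures:

#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Walk the capability chain behind VFIO_IOMMU_GET_INFO and print the
 * DMA range advertised by VFIO_IOMMU_INFO_CAP_DMA (sketch only). */
static void dump_dma_range(int container)
{
	struct vfio_iommu_type1_info hdr = { .argsz = sizeof(hdr) };
	struct vfio_iommu_type1_info *info;
	struct vfio_info_cap_header *cap;
	__u32 off;

	/* First pass: ask the kernel how much space it needs. */
	if (ioctl(container, VFIO_IOMMU_GET_INFO, &hdr))
		return;
	if (!(hdr.flags & VFIO_IOMMU_INFO_CAPABILITIES))
		return;

	info = calloc(1, hdr.argsz);
	if (!info)
		return;
	info->argsz = hdr.argsz;

	/* Second pass: fetch the info struct plus its capability chain. */
	if (!ioctl(container, VFIO_IOMMU_GET_INFO, info)) {
		for (off = info->cap_offset; off; off = cap->next) {
			cap = (void *)((char *)info + off);
			if (cap->id == VFIO_IOMMU_INFO_CAP_DMA) {
				struct vfio_iommu_cap_dma *dma = (void *)cap;

				printf("dma: 0x%llx - 0x%llx\n",
				       (unsigned long long)dma->dma_start,
				       (unsigned long long)dma->dma_end);
			}
		}
	}
	free(info);
}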
>>>
>>> Unfortunately for most systems, a simple start and end is not really
>>> sufficient to describe the available IOVA space; there are often
>>> reserved regions intermixed, so this is not really a complete
>>> solution.  Shameer tried to solve this last year[1], but we ran into a
>>> road block: Intel IGD devices impose a reserved range of IOVA space,
>>> reported to the user, that conflicts with existing assignments of
>>> this device, and we haven't yet figured out how to be more selective
>>> about enforcing those reserved ranges.  Thanks,
>>>
>>> Alex
>>>
>>> [1] https://lkml.org/lkml/2018/4/18/293
>>>    
>>
>> I understand that some architectures may be more complex and have
>> special needs.
>> However, the IOMMU geometry is a constant for all IOMMU devices and
>> is reported through the geometry in the iommu operations.
>>
>> This makes the IOMMU geometry a special case.
> 
> I'm not so sure that the geometry is a constant for all IOMMU devices,

Sorry, I did not express myself correctly. What I mean is that the IOMMU 
geometry is independent of the system memory map but is a constant of 
the IOMMU device.
It may differ from one IOMMU device to another, as we can have several 
different IOMMU devices, typically one per PCI device, on the Z architecture.

> nor am I sure how, even if that were true and it's part of an in-kernel
> interface, that automatically qualifies it as the right way to expose
> it to userspace.  The fact that we have a reserved region interface to
> augment a basic contiguous range suggests it's known to be insufficient
> even for in-kernel use.
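
To make that concrete, the in-kernel reserved region interface referred to
above can be exercised roughly like this (an illustration only, not part of
this patch; dump_resv_regions is a made-up helper name):

#include <linux/device.h>
#include <linux/iommu.h>
#include <linux/list.h>

/* Print the IOVA ranges a device must avoid, as reported by the IOMMU
 * reserved region interface (sketch). */
static void dump_resv_regions(struct device *dev)
{
	struct iommu_resv_region *region;
	LIST_HEAD(resv_regions);

	iommu_get_resv_regions(dev, &resv_regions);

	list_for_each_entry(region, &resv_regions, list)
		dev_info(dev, "reserved: %pa + 0x%zx (type %d)\n",
			 &region->start, region->length, region->type);

	iommu_put_resv_regions(dev, &resv_regions);
}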
> 
>> It is also a special case because it is an inclusive description of
>> available memory, as opposed to the exclusive description given by the
>> windows.
> 
> Geometry doesn't really have anything to do with available memory, it's
> the minimum and maximum IOVA aperture.  Shameer's proposal gave us an
> IOVA list, which is based on the IOMMU geometry, from which it excludes
> various reserved ranges.  So if you have a less complex architecture,
> you might only have one entry in the list, which gives you the start
> and end of the base geometry.  More complex architectures might have
> more entries, but the geometry can still be deduced from the absolute
> highest and lowest addresses within the list.  Therefore a basic
> geometry capability is automatically redundant to the interface that's
> already been proposed.
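
If it helps to see that redundancy spelled out, deducing the base geometry
from such a list is a single pass over its entries.  A rough sketch, where
struct iova_range is a hypothetical stand-in for whatever uapi entry the
IOVA list proposal ends up defining:

#include <stddef.h>
#include <stdint.h>

struct iova_range {		/* hypothetical IOVA list entry */
	uint64_t start;
	uint64_t end;
};

/* The aperture is simply the lowest start and highest end found in the
 * (already reserved-range-excluded) IOVA list. */
static void deduce_geometry(const struct iova_range *list, size_t nr,
			    uint64_t *aperture_start, uint64_t *aperture_end)
{
	size_t i;

	*aperture_start = UINT64_MAX;
	*aperture_end = 0;

	for (i = 0; i < nr; i++) {
		if (list[i].start < *aperture_start)
			*aperture_start = list[i].start;
		if (list[i].end > *aperture_end)
			*aperture_end = list[i].end;
	}
}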
> 
>> Isn't it possible to separate the IOMMU geometry, which is really
>> related to the IOMMU chip, from other windows exclusion related to the
>> system memory mapping?
> 
> Why would we ever have both given the description above?

My idea here was based on the fact that the restrictions come from two 
different address spaces:
- One address space is the device's view before the IOMMU (the aperture)
- The other is the physical system view (the reserved windows)

which, I thought, would make them difficult to merge.
However, reading you and Shameer and the pointers you both sent, I 
understand that there is much more for me to learn, especially about 
other architectures (Intel / RMRR).

> 
>> Retrieving the IOMMU geometry is very important for us because the
>> driver inside the guest must get it and program the IOMMU based on these
>> values.
> 
> So you have motivation to help move the IOVA list proposal forward,
> or some equally inclusive proposal that isn't just a stop-gap ;)
> Thanks,
> 
> Alex
> 

I will be happy to help.
I will need some time to catch up however.
Thanks for your comments.

Regards,
Pierre

-- 
Pierre Morel
Linux/KVM/QEMU in Böblingen - Germany
