Message-ID: <b296128d-be96-8683-b0c0-1eac0a7f18ca@ozlabs.ru>
Date:   Fri, 21 Dec 2018 12:50:00 +1100
From:   Alexey Kardashevskiy <aik@...abs.ru>
To:     Alex Williamson <alex.williamson@...hat.com>
Cc:     linuxppc-dev@...ts.ozlabs.org,
        David Gibson <david@...son.dropbear.id.au>,
        kvm-ppc@...r.kernel.org, kvm@...r.kernel.org,
        Alistair Popple <alistair@...ple.id.au>,
        Reza Arbab <arbab@...ux.ibm.com>,
        Sam Bobroff <sbobroff@...ux.ibm.com>,
        Piotr Jaroszynski <pjaroszynski@...dia.com>,
        Leonardo Augusto Guimarães Garcia
        <lagarcia@...ibm.com>, Jose Ricardo Ziviani <joserz@...ux.ibm.com>,
        Daniel Henrique Barboza <danielhb413@...il.com>,
        Paul Mackerras <paulus@...abs.org>,
        linux-kernel@...r.kernel.org, Christoph Hellwig <hch@...radead.org>
Subject: Re: [PATCH kernel v7 20/20] vfio_pci: Add NVIDIA GV100GL [Tesla V100
 SXM2] subdriver



On 21/12/2018 12:37, Alex Williamson wrote:
> On Fri, 21 Dec 2018 12:23:16 +1100
> Alexey Kardashevskiy <aik@...abs.ru> wrote:
> 
>> On 21/12/2018 03:46, Alex Williamson wrote:
>>> On Thu, 20 Dec 2018 19:23:50 +1100
>>> Alexey Kardashevskiy <aik@...abs.ru> wrote:
>>>   
>>>> POWER9 Witherspoon machines come with 4 or 6 V100 GPUs which are not
>>>> pluggable PCIe devices but still have PCIe links which are used
>>>> for config space and MMIO. In addition, the GPUs have 6 NVLinks
>>>> which are connected to other GPUs and the POWER9 CPU. POWER9 chips
>>>> have a special unit on the die called an NPU which is an NVLink2 host bus
>>>> adapter with p2p connections to 2 or 3 GPUs, with 3 or 2 NVLinks to each.
>>>> These systems also support ATS (address translation services) which is
>>>> a part of the NVLink2 protocol. Such GPUs also share their on-board RAM
>>>> (16GB or 32GB) with the system via the same NVLink2 so the CPU has
>>>> cache-coherent access to GPU RAM.
>>>>
>>>> This exports GPU RAM to userspace as a new VFIO device region. This
>>>> preregisters the new memory as device memory as it might be used for DMA.
>>>> This inserts pfns from the fault handler because the GPU memory is not
>>>> onlined until the vendor driver is loaded and has trained the NVLinks;
>>>> doing this earlier causes low level errors which we fence in the firmware
>>>> so they do not hurt the host system, but are still better avoided. For the
>>>> same reason this does not map GPU RAM into the host kernel (as would
>>>> usually be done for emulated access).
>>>>
>>>> This exports an ATSD (Address Translation Shootdown) register of the NPU
>>>> which allows the operating system to do TLB invalidations inside a GPU.
>>>> The register conveniently occupies a single 64k page. It is also presented
>>>> to userspace as a new VFIO device region. One NPU has 8 ATSD registers,
>>>> each of which can be used for TLB invalidation in a GPU linked to this NPU.
>>>> This allocates one ATSD register per NVLink bridge, allowing up to
>>>> 6 registers to be passed. Due to a host firmware bug (only recently fixed),
>>>> only 1 ATSD register per NPU was actually advertised to the host system,
>>>> so this passes that lone register via the first NVLink bridge device in
>>>> the group, which is still enough as QEMU collects them all and presents
>>>> them to the guest via a vPHB to mimic the emulated NPU PHB on the host.
>>>>
>>>> In order to provide userspace with information about GPU-to-NVLink
>>>> connections, this exports an additional capability called "tgt"
>>>> (which is an abbreviated host system bus address). The "tgt" property
>>>> tells the GPU its own system address and allows the guest driver to
>>>> assemble the routing information so each GPU knows how to get directly
>>>> to the other GPUs.
>>>>
>>>> For ATS to work, the nest MMU (an NVIDIA block in a P9 CPU) needs to
>>>> know the LPID (a logical partition ID, or in other words a KVM guest
>>>> hardware ID) and the PID (a memory context ID of a userspace process, not
>>>> to be confused with a Linux pid). This assigns a GPU to an LPID in the NPU,
>>>> which is why this adds a listener for KVM on an IOMMU group. A PID comes
>>>> via NVLink from a GPU, and the NPU uses a PID wildcard to pass it through.
>>>>
>>>> This requires coherent memory and ATSD to be available on the host as
>>>> the GPU vendor only supports configurations with both features enabled;
>>>> other configurations are known not to work. Because of this, and because
>>>> of the way the features are advertised to the host system (via a device
>>>> tree with very platform-specific properties), this requires the POWERNV
>>>> platform to be enabled.
>>>>
>>>> The V100 GPUs do not advertise any of these capabilities via the config
>>>> space, and there is more than one device ID, so this relies on
>>>> the platform to tell whether these GPUs have special abilities such as
>>>> NVLinks.
>>>>
>>>> Signed-off-by: Alexey Kardashevskiy <aik@...abs.ru>
>>>> ---
>>>> Changes:
>>>> v6.1:
>>>> * fixed outdated comment about VFIO_REGION_INFO_CAP_NVLINK2_LNKSPD
>>>>
>>>> v6:
>>>> * reworked capabilities - tgt for nvlink and gpu and link-speed
>>>> for nvlink only
>>>>
>>>> v5:
>>>> * do not memremap GPU RAM for emulation, map it only when it is needed
>>>> * allocate 1 ATSD register per NVLink bridge; if none are left, expose
>>>> the region with a zero size
>>>> * separate caps per device type
>>>> * addressed AW review comments
>>>>
>>>> v4:
>>>> * added nvlink-speed to the NPU bridge capability as this turned out
>>>> not to be a constant value
>>>> * instead of looking at the exact device ID (which also changes from system
>>>> to system), this now (indirectly) looks at the device tree to know
>>>> if the GPU and NPU support NVLink
>>>>
>>>> v3:
>>>> * reworded the commit log about tgt
>>>> * added tracepoints (do we want them enabled for the entire vfio-pci?)
>>>> * added code comments
>>>> * added write|mmap flags to the new regions
>>>> * auto enabled VFIO_PCI_NVLINK2 config option
>>>> * added 'tgt' capability to a GPU so QEMU can recreate ibm,npu and ibm,gpu
>>>> references; these are required by the NVIDIA driver
>>>> * keep notifier registered only for short time
>>>> ---
>>>>  drivers/vfio/pci/Makefile           |   1 +
>>>>  drivers/vfio/pci/trace.h            | 102 ++++++
>>>>  drivers/vfio/pci/vfio_pci_private.h |  14 +
>>>>  include/uapi/linux/vfio.h           |  37 +++
>>>>  drivers/vfio/pci/vfio_pci.c         |  27 +-
>>>>  drivers/vfio/pci/vfio_pci_nvlink2.c | 482 ++++++++++++++++++++++++++++
>>>>  drivers/vfio/pci/Kconfig            |   6 +
>>>>  7 files changed, 667 insertions(+), 2 deletions(-)
>>>>  create mode 100644 drivers/vfio/pci/trace.h
>>>>  create mode 100644 drivers/vfio/pci/vfio_pci_nvlink2.c
>>>>  
>>> ...  
>>>> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
>>>> index 8131028..5562587 100644
>>>> --- a/include/uapi/linux/vfio.h
>>>> +++ b/include/uapi/linux/vfio.h
>>>> @@ -353,6 +353,21 @@ struct vfio_region_gfx_edid {
>>>>  #define VFIO_DEVICE_GFX_LINK_STATE_DOWN  2
>>>>  };
>>>>  
>>>> +/*
>>>> + * 10de vendor sub-type
>>>> + *
>>>> + * NVIDIA GPU NVlink2 RAM is coherent RAM mapped onto the host address space.
>>>> + */
>>>> +#define VFIO_REGION_SUBTYPE_NVIDIA_NVLINK2_RAM	(1)
>>>> +
>>>> +/*
>>>> + * 1014 vendor sub-type
>>>> + *
>>>> + * IBM NPU NVlink2 ATSD (Address Translation Shootdown) register of NPU
>>>> + * to do TLB invalidation on a GPU.
>>>> + */
>>>> +#define VFIO_REGION_SUBTYPE_IBM_NVLINK2_ATSD	(1)
>>>> +
>>>>  /*
>>>>   * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
>>>>   * which allows direct access to non-MSIX registers which happened to be within
>>>> @@ -363,6 +378,28 @@ struct vfio_region_gfx_edid {
>>>>   */
>>>>  #define VFIO_REGION_INFO_CAP_MSIX_MAPPABLE	3
>>>>  
>>>> +/*
>>>> + * Capability with compressed real address (aka SSA - small system address)
>>>> + * where GPU RAM is mapped on a system bus. Used by a GPU for DMA routing.
>>>> + */
>>>> +#define VFIO_REGION_INFO_CAP_NVLINK2_SSATGT	4
>>>> +
>>>> +struct vfio_region_info_cap_nvlink2_ssatgt {
>>>> +	struct vfio_info_cap_header header;
>>>> +	__u64 tgt;
>>>> +};
>>>> +
>>>> +/*
>>>> + * Capability with an NVLink link speed.
>>>> + */  
>>>
>>> I was really hoping for something more like SSATGT above indicating the
>>> intended users and purpose, and an update to SSATGT since it's now used
>>> by both the GPU and NPU2.  This comment is correct, but it's basically
>>> useless; it doesn't provide any information that isn't readily apparent
>>> from the structure definition.  AIUI, SSATGT is used not only for the
>>> GPU to determine where its RAM is mapped on the system bus, but also by
>>> the NPU2 to associate itself to a GPU, right?  
>>
>> Correct. It could be improved by
>>
>> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
>> index 5562587..ff238ef9c 100644
>> --- a/include/uapi/linux/vfio.h
>> +++ b/include/uapi/linux/vfio.h
>> @@ -380,7 +380,8 @@ struct vfio_region_gfx_edid {
>>
>>  /*
>>   * Capability with compressed real address (aka SSA - small system address)
>> - * where GPU RAM is mapped on a system bus. Used by a GPU for DMA routing.
>> + * where GPU RAM is mapped on a system bus. Used by a GPU for DMA routing
>> + * and by userspace to associate an NVLink bridge with a GPU.
>>   */
>>  #define VFIO_REGION_INFO_CAP_NVLINK2_SSATGT    4
>>
>>
>>
>>> And the link speed here
>>> is consumed by the NPU2 in order to fill in DT information for the
>>> guest for compatibility and possibly routing optimizations?  
>>
>>
>> It is just some speed number, 8 or 9; one works and the other does not,
>> depending on the actual system. The NVIDIA driver handles it in the
>> binary blob. The existing comment is not much use but I am really not
>> sure what other comment could be useful here.
> 
> So why do we need to expose it?  "Exposed on NPU2 devices for userspace
> to export to guest VM via DT(?) or else <something bad happens/doesn't
> work> in the guest".  Work with me, there must be some justification
> for why it gets exposed, not just what it is.  Thanks,


How about this?

/*
 * Capability with an NVLink link speed. The value is read by
 * the NVlink2 bridge driver from the bridge's "ibm,nvlink-speed"
 * property in the device tree. The value is fixed in the hardware
 * and failing to provide the correct value results in the link
 * not working, with no indication from the driver as to why.
 */
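
For context, both this capability and the SSATGT one above are consumed
through the usual VFIO region-info capability chain. Below is a minimal,
illustrative userspace sketch (not part of this patch) of how a consumer
such as QEMU might locate the SSATGT capability and read its "tgt" value;
the link-speed capability would be found the same way under its own ID.
It assumes the standard two-call VFIO_DEVICE_GET_REGION_INFO pattern and
the definitions from the hunks quoted earlier; the get_nvlink2_tgt() helper
name and the device fd are made up for illustration only.

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int get_nvlink2_tgt(int device_fd, uint32_t index, uint64_t *tgt)
{
	struct vfio_region_info probe = { .argsz = sizeof(probe), .index = index };
	struct vfio_region_info *info;
	struct vfio_info_cap_header *hdr;
	uint32_t off;
	int ret = -ENOENT;

	/* First call only learns how big the info + capability chain is. */
	if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &probe))
		return -errno;
	if (!(probe.flags & VFIO_REGION_INFO_FLAG_CAPS))
		return -ENOENT;

	info = calloc(1, probe.argsz);
	if (!info)
		return -ENOMEM;
	info->argsz = probe.argsz;
	info->index = index;

	/* Second call fills in the capability chain after the header. */
	if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, info)) {
		ret = -errno;
		goto out;
	}

	/* Each cap header's "next" field is an offset from the start of info. */
	for (off = info->cap_offset; off; off = hdr->next) {
		hdr = (struct vfio_info_cap_header *)((char *)info + off);
		if (hdr->id != VFIO_REGION_INFO_CAP_NVLINK2_SSATGT)
			continue;
		*tgt = ((struct vfio_region_info_cap_nvlink2_ssatgt *)hdr)->tgt;
		ret = 0;
		break;
	}
out:
	free(info);
	return ret;
}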

Thanks,


-- 
Alexey
