Date:   Fri, 5 Feb 2021 11:42:11 +1100
From:   Alexey Kardashevskiy <aik@...abs.ru>
To:     Jason Gunthorpe <jgg@...dia.com>
Cc:     Max Gurtovoy <mgurtovoy@...dia.com>,
        Cornelia Huck <cohuck@...hat.com>,
        Alex Williamson <alex.williamson@...hat.com>,
        Matthew Rosato <mjrosato@...ux.ibm.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, liranl@...dia.com, oren@...dia.com,
        tzahio@...dia.com, leonro@...dia.com, yarong@...dia.com,
        aviadye@...dia.com, shahafs@...dia.com, artemp@...dia.com,
        kwankhede@...dia.com, ACurrid@...dia.com, gmataev@...dia.com,
        cjia@...dia.com, yishaih@...dia.com
Subject: Re: [PATCH 8/9] vfio/pci: use x86 naming instead of igd



On 04/02/2021 23:51, Jason Gunthorpe wrote:
> On Thu, Feb 04, 2021 at 12:05:22PM +1100, Alexey Kardashevskiy wrote:
> 
>> It is the system firmware (==bios) which puts stuff in the device tree.
>> The stuff is:
>> 1. emulated pci devices (custom pci bridges), one per nvlink, emulated
>> by the firmware; the driver is "ibmnpu" and it is a part of the nvidia
>> driver; these are basically config space proxies to the cpu's side of
>> nvlink.
>> 2. interconnect information - which of the 6 gpus' nvlinks is connected
>> to which nvlink on the cpu side, and the memory ranges.
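
For illustration, reading that firmware-provided info from the device
tree could look roughly like the sketch below. The property name
"ibm,npu-link-index" and the address layout here are placeholders I am
making up, not the actual firmware bindings:

#include <linux/ioport.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/printk.h>

/* Parse one firmware-emulated npu bridge node (hypothetical bindings). */
static int npu_read_link_info(struct device_node *np)
{
	struct resource res;
	u32 link_index;
	int ret;

	/* Which gpu nvlink this emulated bridge proxies (assumed name). */
	ret = of_property_read_u32(np, "ibm,npu-link-index", &link_index);
	if (ret)
		return ret;

	/* Memory range the firmware describes for this link (assumed layout). */
	ret = of_address_to_resource(np, 0, &res);
	if (ret)
		return ret;

	pr_info("npu link %u: %pR\n", link_index, &res);
	return 0;
}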
> 
> So what is this vfio_nvlink driver supposed to be bound to?
> 
> The "emulated pci devices"?

Yes.

> A real GPU function?

Yes.

> A real nvswitch function?

What do you mean by this exactly? The cpu side of nvlink is "emulated 
pci devices"; the gpu side is not in pci space at all, the nvidia driver 
manages it via the gpu's mmio and/or cfg space.

> Something else?

Nope :)
In this new scheme which you are proposing, it should be 2 drivers, I 
guess - something like the sketch below.
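
Very roughly; the id tables (PCI_ANY_ID is far too broad, a real driver
would match specific device ids), the driver names and the empty probe
bodies are placeholders, not code from this series:

#include <linux/module.h>
#include <linux/pci.h>

/* Driver 1: binds to the real gpu function, vfio-pci style. */
static const struct pci_device_id nvgpu_vfio_ids[] = {
	{ PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID) },
	{ }
};

static int nvgpu_vfio_probe(struct pci_dev *pdev,
			    const struct pci_device_id *id)
{
	/* Passthrough setup plus the gpu side of nvlink would go here. */
	return 0;
}

static struct pci_driver nvgpu_vfio_driver = {
	.name     = "nvgpu-vfio",
	.id_table = nvgpu_vfio_ids,
	.probe    = nvgpu_vfio_probe,
};

/* Driver 2: binds to the firmware-emulated npu bridge (cpu side). */
static const struct pci_device_id npu_vfio_ids[] = {
	{ PCI_DEVICE(PCI_VENDOR_ID_IBM, PCI_ANY_ID) },
	{ }
};

static int npu_vfio_probe(struct pci_dev *pdev,
			  const struct pci_device_id *id)
{
	/* Expose the config space proxy for the cpu side of the nvlink. */
	return 0;
}

static struct pci_driver npu_vfio_driver = {
	.name     = "npu-vfio",
	.id_table = npu_vfio_ids,
	.probe    = npu_vfio_probe,
};

static int __init vfio_nv_init(void)
{
	int ret = pci_register_driver(&nvgpu_vfio_driver);

	if (ret)
		return ret;
	ret = pci_register_driver(&npu_vfio_driver);
	if (ret)
		pci_unregister_driver(&nvgpu_vfio_driver);
	return ret;
}
module_init(vfio_nv_init);

static void __exit vfio_nv_exit(void)
{
	pci_unregister_driver(&npu_vfio_driver);
	pci_unregister_driver(&nvgpu_vfio_driver);
}
module_exit(vfio_nv_exit);

MODULE_LICENSE("GPL");

Keeping the two id tables separate is what makes each side independently
bindable from userspace.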

> 
> Jason
> 

-- 
Alexey
