Message-ID: <ZIn/EHnCg444LJ3i@nvidia.com>
Date:   Wed, 14 Jun 2023 14:55:28 -0300
From:   Jason Gunthorpe <jgg@...dia.com>
To:     Alex Williamson <alex.williamson@...hat.com>
Cc:     ankita@...dia.com, aniketa@...dia.com, cjia@...dia.com,
        kwankhede@...dia.com, targupta@...dia.com, vsethi@...dia.com,
        acurrid@...dia.com, apopple@...dia.com, jhubbard@...dia.com,
        danw@...dia.com, kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 1/1] vfio/nvgpu: Add vfio pci variant module for grace
 hopper

On Tue, Jun 13, 2023 at 01:24:02PM -0600, Alex Williamson wrote:

> I'd even forgotten about the sparse mmap solution here, that's even
> better than trying to do something clever with the mmap.

Okay, Ankit, please try this; it sounds good.

> > You may be right, I think this patch is trying to make things
> > automatic for user, but a dedicated machine type might make more
> > sense.
> 
> Juan and I discussed this with Ankit last week, there are a lot of down
> sides with another machine type, but the automatic manipulation of the
> machine is still problematic.  Another option we have is to use QEMU
> command line options for each feature.  For example we already support
> NUMA VM configurations and loading command line ACPI tables, hopefully
> also associating devices to nodes.  Do we end up with just a
> configuration spec for the VM to satisfy the in-guest drivers?
> Potentially guest driver requirements may change over time, so a hard
> coded recipe built-in to QEMU might not be the best solution anyway.

Let's have those discussions settle then, I know there are a few
different ideas here people are looking at.

> I think NVIDIA might have an interest in enabling Atomic Ops support in
> VMs as well, so please comment in the series thread if there are
> concerns here or if anyone can definitively say that another guest OS
> we might care about does cache root port capability bits.  Thanks,

I expect we do - I haven't heard of atomic ops specifically yet
though.

We just did a big exercise on relaxed ordering, which is similarly
troubled.

Here we decided to just not use the VM's config space at all. The
device itself knows if it can do relaxed ordering and it just reports
this directly to the driver.

In many ways I would prefer to do the same for atomics. I haven't
checked fully, but I think we do this anyhow: as you can see, mlx5
simply tries to enable PCI atomics but doesn't appear to do anything
with the result. I expect the actual success/fail is looped back
through the device interface itself.

So, for mlx5, it probably already works in most real cases. Passing a
PF might not work, I guess.
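To make the "try and ignore" pattern concrete, here is a small
standalone C sketch of what I mean: the driver attempts the
config-space enablement (in the real kernel that would be
pcie_enable_atomic_ops_to_root()), discards the result, and trusts
the capability the device itself reports. All the names below are
hypothetical stand-ins, not actual mlx5 or kernel code:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical device model: what the (possibly emulated) root port
 * negotiation yields vs. what the device's own interface reports. */
struct fake_device {
	bool root_port_atomics;      /* config-space negotiation result */
	bool device_reports_atomics; /* device's own capability report  */
};

/* Stand-in for pcie_enable_atomic_ops_to_root(): in a VM the emulated
 * root port may not advertise AtomicOp routing, so this can fail even
 * when the real hardware path supports it. */
static int try_enable_atomics(struct fake_device *dev)
{
	return dev->root_port_atomics ? 0 : -1;
}

static bool driver_probe(struct fake_device *dev)
{
	/* mlx5-style: attempt enablement, discard the return value... */
	(void)try_enable_atomics(dev);

	/* ...and rely on the device interface's report instead of the
	 * config space result. */
	return dev->device_reports_atomics;
}
```

Under this sketch a VM whose emulated root port hides the capability
still gets atomics if the device says so, which is exactly why it
mostly works today and why it is unsatisfying from a VMM design view.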

It is not a satisfying answer from a VMM design perspective, though.

Some QEMU command line option to say which root ports to create with
which atomic caps seems like a reasonable thing to do.
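Something along these lines, to give a rough idea. To be clear, this
syntax is entirely invented; no such pcie-root-port property exists in
QEMU today, this is just a sketch of the shape such an option could
take:

```shell
# Hypothetical syntax only -- "atomic-ops" is NOT a real QEMU property.
# The idea: declare per root port which AtomicOp completer widths the
# emulated port should advertise in its PCIe capability.
qemu-system-aarch64 \
  -device pcie-root-port,id=rp0,atomic-ops=32+64+128 \
  -device vfio-pci,host=0009:01:00.0,bus=rp0
```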

Jason
