Message-ID:
 <SA1PR12MB719977EDD0733752960615F4B0132@SA1PR12MB7199.namprd12.prod.outlook.com>
Date: Thu, 9 Jan 2025 22:57:06 +0000
From: Ankit Agrawal <ankita@...dia.com>
To: Alex Williamson <alex.williamson@...hat.com>
CC: Jason Gunthorpe <jgg@...dia.com>, Yishai Hadas <yishaih@...dia.com>,
	"shameerali.kolothum.thodi@...wei.com"
	<shameerali.kolothum.thodi@...wei.com>, "kevin.tian@...el.com"
	<kevin.tian@...el.com>, Zhi Wang <zhiw@...dia.com>, Aniket Agashe
	<aniketa@...dia.com>, Neo Jia <cjia@...dia.com>, Kirti Wankhede
	<kwankhede@...dia.com>, "Tarun Gupta (SW-GPU)" <targupta@...dia.com>, Vikram
 Sethi <vsethi@...dia.com>, Andy Currid <acurrid@...dia.com>, Alistair Popple
	<apopple@...dia.com>, John Hubbard <jhubbard@...dia.com>, Dan Williams
	<danw@...dia.com>, "Anuj Aggarwal (SW-GPU)" <anuaggarwal@...dia.com>, Matt
 Ochs <mochs@...dia.com>, "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 2/3] vfio/nvgrace-gpu: Expose the blackwell device PF
 BAR1 to the VM

> Doesn't this work out much more naturally if we just do something like:
> 
> diff --git a/drivers/vfio/pci/nvgrace-gpu/main.c b/drivers/vfio/pci/nvgrace-gpu/main.c
> index 85eacafaffdf..43a9457442ff 100644
> --- a/drivers/vfio/pci/nvgrace-gpu/main.c
> +++ b/drivers/vfio/pci/nvgrace-gpu/main.c
> @@ -17,9 +17,6 @@
> #define RESMEM_REGION_INDEX VFIO_PCI_BAR2_REGION_INDEX
> #define USEMEM_REGION_INDEX VFIO_PCI_BAR4_REGION_INDEX
> 
> -/* Memory size expected as non cached and reserved by the VM driver */
> -#define RESMEM_SIZE SZ_1G
> -
> /* A hardwired and constant ABI value between the GPU FW and VFIO driver. */
> #define MEMBLK_SIZE SZ_512M
> 
> @@ -72,7 +69,7 @@ nvgrace_gpu_memregion(int index,
>         if (index == USEMEM_REGION_INDEX)
>                 return &nvdev->usemem;
> 
> -       if (index == RESMEM_REGION_INDEX)
> +       if (nvdev->resmem.memlength && index == RESMEM_REGION_INDEX)
>                 return &nvdev->resmem;
> 
>         return NULL;
> @@ -757,6 +754,13 @@ nvgrace_gpu_init_nvdev_struct(struct pci_dev *pdev,
>                               u64 memphys, u64 memlength)
> {
>          int ret = 0;
> +       u64 resmem_size = 0;
> +
> +       /*
> +        * Comment about the GH bug that requires this and fix in GB
> +        */
> +       if (!nvdev->has_mig_hw_bug_fix)
> +               resmem_size = SZ_1G;
> 
>         /*
>          * The VM GPU device driver needs a non-cacheable region to support
> @@ -780,7 +784,7 @@ nvgrace_gpu_init_nvdev_struct(struct pci_dev *pdev,
>          * memory (usemem) is added to the kernel for usage by the VM
>          * workloads. Make the usable memory size memblock aligned.
>          */
> -       if (check_sub_overflow(memlength, RESMEM_SIZE,
> +       if (check_sub_overflow(memlength, resmem_size,
>                                &nvdev->usemem.memlength)) {
>                 ret = -EOVERFLOW;
>                 goto done;
> @@ -813,7 +817,9 @@ nvgrace_gpu_init_nvdev_struct(struct pci_dev *pdev,
>          * the BAR size for them.
>          */
>         nvdev->usemem.bar_size = roundup_pow_of_two(nvdev->usemem.memlength);
> -       nvdev->resmem.bar_size = roundup_pow_of_two(nvdev->resmem.memlength);
> +       if (nvdev->resmem.memlength)
> +               nvdev->resmem.bar_size =
> +                       roundup_pow_of_two(nvdev->resmem.memlength);
>  done:
>         return ret;
>  }
> 

Thanks Alex, your suggestion does look simpler and better.

I'll test that out and send out an updated version of the patch.
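
For the new has_mig_hw_bug_fix flag, my current thinking is to derive it once
at probe time. A rough sketch of the shape it could take, with the caveat that
the device-ID cutoff below is purely a placeholder and the real check may end
up keying off something else entirely:

/*
 * Sketch only: report whether the device firmware carries the MIG
 * hardware bug fix, so init can skip carving out the 1G uncached
 * resmem region. The 0x2900 cutoff is a placeholder, not a real
 * device ID.
 */
static bool nvgrace_gpu_has_mig_hw_bug_fix(struct pci_dev *pdev)
{
	return pdev->device >= 0x2900;
}

with probe then doing:

	nvdev->has_mig_hw_bug_fix = nvgrace_gpu_has_mig_hw_bug_fix(pdev);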
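
From the VMM side, the effect of a zero resmem.memlength should be observable
through the standard region info ioctl (nothing driver-specific assumed in
the snippet below); I'll confirm in testing exactly what the core reports for
the unclaimed BAR2 index:

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Query the size reported for the resmem-backed region (BAR2 index).
 * With the suggested change, a device with the MIG fix should no
 * longer expose the 1G carve-out here. Returns 0 on ioctl failure. */
static unsigned long long resmem_region_size(int device_fd)
{
	struct vfio_region_info info = {
		.argsz = sizeof(info),
		.index = VFIO_PCI_BAR2_REGION_INDEX,
	};

	if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &info) < 0) {
		perror("VFIO_DEVICE_GET_REGION_INFO");
		return 0;
	}

	return info.size;
}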
