Message-ID: <20220725143303.GC3747@nvidia.com>
Date: Mon, 25 Jul 2022 11:33:03 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: "Tian, Kevin" <kevin.tian@...el.com>
Cc: Yishai Hadas <yishaih@...dia.com>,
"alex.williamson@...hat.com" <alex.williamson@...hat.com>,
"saeedm@...dia.com" <saeedm@...dia.com>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"kuba@...nel.org" <kuba@...nel.org>,
"Martins, Joao" <joao.m.martins@...cle.com>,
"leonro@...dia.com" <leonro@...dia.com>,
"maorg@...dia.com" <maorg@...dia.com>,
"cohuck@...hat.com" <cohuck@...hat.com>
Subject: Re: [PATCH V2 vfio 03/11] vfio: Introduce DMA logging uAPIs
On Mon, Jul 25, 2022 at 07:20:16AM +0000, Tian, Kevin wrote:
> I got that point. But my question is slightly different.
>
> A practical flow would like below:
>
> 1) Qemu first requests to start dirty tracking in 4KB page size.
> Underlying trackers may start tracking in 4KB, 256KB, 2MB,
> etc. based on their own constraints.
>
> 2) Qemu then reads back dirty reports in a shared bitmap in
> 4KB page size. All trackers must update dirty bitmap in 4KB
> granular regardless of the actual size each tracker selects.
>
> Is there a real usage where Qemu would want to attempt
> different page sizes between above two steps?
If you multi-thread the tracker reads, it is efficient to populate
a single bitmap and then copy that single bitmap to the dirty
transfer. In this case you want the page size conversion.
If qemu is just going to read sequentially then perhaps it doesn't.
But forcing a fixed page size just denies userspace this choice, and
it doesn't make the kernel any simpler, because the kernel must always
have this code to adapt different page sizes to support the real iommu
with huge pages/etc.
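
As a rough illustration of the page-size adaptation being discussed, a
tracker that reports dirty at a coarse granularity (say 2MB) has to set
the corresponding run of 4KB bits in the user-visible bitmap. This is
only a sketch; the function and macro names below are invented for the
example and are not actual vfio or kernel symbols:

```c
#include <stdint.h>
#include <string.h>

#define USER_PAGE_SHIFT 12 /* 4KB user-visible reporting granularity */

/* Expand one dirty report, made at the tracker's own page size, into
 * the 4KB-granular bitmap. One coarse dirty page covers several bits. */
static void mark_dirty_range(unsigned long *bitmap, uint64_t iova,
                             uint64_t tracker_page_size, uint64_t base_iova)
{
    uint64_t first = (iova - base_iova) >> USER_PAGE_SHIFT;
    uint64_t nbits = tracker_page_size >> USER_PAGE_SHIFT;
    uint64_t i;

    for (i = first; i < first + nbits; i++)
        bitmap[i / (8 * sizeof(unsigned long))] |=
            1UL << (i % (8 * sizeof(unsigned long)));
}
```

A 2MB tracker page dirtied at iova 0x200000 would set bits 512..1023 of
the 4KB bitmap, while a tracker already working at 4KB sets a single
bit; the kernel needs this adaptation regardless of what page size
userspace asks for.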
Jason