Message-ID: <BN9PR11MB5276DD07EC06631237354D678C7A9@BN9PR11MB5276.namprd11.prod.outlook.com>
Date: Fri, 2 Sep 2022 09:09:24 +0000
From: "Tian, Kevin" <kevin.tian@...el.com>
To: David Hildenbrand <david@...hat.com>,
Alex Williamson <alex.williamson@...hat.com>
CC: "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"lpivarc@...hat.com" <lpivarc@...hat.com>,
"Liu, Jingqi" <jingqi.liu@...el.com>,
"Lu, Baolu" <baolu.lu@...el.com>
Subject: RE: [PATCH] vfio/type1: Unpin zero pages

> From: David Hildenbrand <david@...hat.com>
> Sent: Friday, September 2, 2022 4:32 PM
>
> On 02.09.22 10:24, Tian, Kevin wrote:
> > Hi, Alex,
> >
> >> From: Alex Williamson <alex.williamson@...hat.com>
> >> Sent: Tuesday, August 30, 2022 11:06 AM
> >>
> >> There's currently a reference count leak on the zero page. We increment
> >> the reference via pin_user_pages_remote(), but the page is later handled
> >> as an invalid/reserved page, therefore it's not accounted against the
> >> user and not unpinned by our put_pfn().
> >>
> >> Introducing special zero page handling in put_pfn() would resolve the
> >> leak, but without accounting of the zero page, a single user could
> >> still create enough mappings to generate a reference count overflow.
> >>
> >> The zero page is always resident, so for our purposes there's no reason
> >> to keep it pinned. Therefore, add a loop to walk pages returned from
> >> pin_user_pages_remote() and unpin any zero pages.
> >>
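
A sketch of that loop, assuming 'pages' and 'ret' are the array and
count returned by pin_user_pages_remote() in vfio_iommu_type1.c (the
actual patch may differ in details):

	/*
	 * The zero page is always resident and is treated as
	 * invalid/reserved by put_pfn(), so it would never be
	 * unpinned otherwise.  Drop the pin taken above right away.
	 */
	for (i = 0; i < ret; i++) {
		if (is_zero_pfn(page_to_pfn(pages[i])))
			unpin_user_page(pages[i]);
	}
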
> >
> > We found an interesting issue with the zero page and wonder whether
> > we should instead find a way to not use the zero page in the vfio
> > pinning path.
> >
> > The observation - the 'pc.bios' region (0xfffc0000) is always mapped
> > RO to the zero page in the IOMMU page table, even after the mapping
> > in the CPU page table has changed once Qemu loads the guest bios
> > image into that region (which is mmap'ed as RW).
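
To illustrate, such a RO mapping is established via VFIO_IOMMU_MAP_DMA
roughly as below; 'container_fd', 'bios_ptr' and 'bios_size' are
made-up names for this example:

	struct vfio_iommu_type1_dma_map map = {
		.argsz = sizeof(map),
		.flags = VFIO_DMA_MAP_FLAG_READ,	/* RO: no WRITE flag */
		.vaddr = (__u64)(uintptr_t)bios_ptr,	/* mmap'ed RW by Qemu */
		.iova  = 0xfffc0000,			/* 'pc.bios' region */
		.size  = bios_size,
	};
	ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);

Since no DMA write permission is requested, the pages are pinned
without FOLL_WRITE and the not-yet-written bios pages resolve to the
shared zero page, which the later CoW in the CPU page table does not
update.
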
> >
> > In reality this may not cause a real problem, as I don't expect any
> > sane usage to DMA-read from the bios region. That is probably why
> > nobody has ever noticed it.
> >
> > But conceptually it is incorrect.
> >
> > Fixing Qemu to update/setup the VFIO mapping after loading the bios
> > image could mitigate this problem (see the sketch below). But we
> > never documented such an ABI restriction on RO mappings, and
> > conceptually the pinning semantics should apply to all paths (RO in
> > DMA and RW in CPU) which the application uses to access the pinned
> > memory, hence the sequence shouldn't matter from the user's p.o.v.
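
The mitigation mentioned above would be an unmap/remap of the region
once the image is loaded, e.g. (same hypothetical names as the earlier
sketch):

	struct vfio_iommu_type1_dma_unmap unmap = {
		.argsz = sizeof(unmap),
		.iova  = 0xfffc0000,
		.size  = bios_size,
	};
	ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, &unmap);
	/* re-map: pinning now hits real anon pages, not the zero page */
	ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
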
> >
> > And old Qemu/VMM binaries would still have this issue.
> >
> > Having a notifier to implicitly fix the IOMMU mapping within the
> > kernel violates the semantics of pinning, and makes vfio page
> > accounting even more tricky.
> >
> > So I wonder, instead of continuing to fix trickiness around the
> > zero page, whether it is a better idea to pursue allocating a
> > normal page from the beginning for pinned RO mappings?
>
> That's precisely what I am working on. For example, that's required to
> get rid of FOLL_FORCE|FOLL_WRITE for taking a R/O pin, as done by RDMA:
>
> See
> https://lore.kernel.org/all/5593cbb7-eb29-82f0-490e-dd72ceafff9b@...hat.com/
> for some more details.
Yes, this is exactly what I'm looking for!
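
For context, the difference being discussed, sketched with
pin_user_pages_fast()-style usage:

	/* today's workaround, e.g. in RDMA: force a write pin even for
	 * a R/O mapping, so anon memory is COW-broken up front and the
	 * shared zero page is never longterm-pinned */
	ret = pin_user_pages_fast(addr, npages,
				  FOLL_LONGTERM | FOLL_WRITE | FOLL_FORCE,
				  pages);

	/* the goal: a plain R/O longterm pin that populates a normal
	 * anon page instead of the shared zero page */
	ret = pin_user_pages_fast(addr, npages, FOLL_LONGTERM, pages);
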
>
>
> The concern I discussed with Peter and Alex offline for VFIO is that
> with that change we might end up charging more anon pages to the
> MEMLOCK limit of the user and might degrade existing setups.
>
> I do wonder if that's a real issue, though. One approach would be to
> warn the VFIO users and allow for slightly exceeding the MEMLOCK limit
> for a while. Of course, that only works if we assume that such pinned
> zeropages are only extremely rarely longterm-pinned for a single VM
> instance by VFIO.
>
For now I believe native VFIO applications like dpdk don't use RO
mappings. For Qemu the only relevant usage is the virtual bios, which
is hundreds of kilobytes to a few megabytes in size. And future Qemu
may implement an option to skip adding RO mappings if the MEMLOCK
limit becomes a concern. So far I haven't seen a real VFIO usage
requiring such RO mappings in the DMA path, probably not until
cross-VM p2p, where the exporter VM wants to restrict the access
permission on a certain MMIO range accessed by the importer VM. But in
that case it's about MMIO rather than memory, hence no impact on the
MEMLOCK limit. 😊
Thanks
Kevin