Message-ID: <1743144.c4ng0vEeQp@nvdebian>
Date: Wed, 26 May 2021 23:30:06 +1000
From: Alistair Popple <apopple@...dia.com>
To: John Hubbard <jhubbard@...dia.com>
CC: Balbir Singh <bsingharora@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
<linux-mm@...ck.org>, <nouveau@...ts.freedesktop.org>,
<bskeggs@...hat.com>, <rcampbell@...dia.com>,
<linux-doc@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<dri-devel@...ts.freedesktop.org>, <hch@...radead.org>,
<jglisse@...hat.com>, <willy@...radead.org>, <jgg@...dia.com>,
<peterx@...hat.com>, <hughd@...gle.com>,
Christoph Hellwig <hch@....de>
Subject: Re: [PATCH v9 07/10] mm: Device exclusive memory access
On Wednesday, 26 May 2021 5:17:18 PM AEST John Hubbard wrote:
> On 5/25/21 4:51 AM, Balbir Singh wrote:
> ...
>
> >> How beneficial is this code to nouveau users? I see that it permits a
> >> part of OpenCL to be implemented, but how useful/important is this in
> >> the real world?
> >
> > That is a very good question! I've not reviewed the code, but a sample
> > program with the described use case would make things easy to parse.
> > I suspect that is not easy to build at the moment?
>
> The cover letter says this:
>
> This has been tested with upstream Mesa 21.1.0 and a simple OpenCL program
> which checks that GPU atomic accesses to system memory are atomic. Without
> this series the test fails as there is no way of write-protecting the page
> mapping which results in the device clobbering CPU writes. For reference
> the test is available at https://ozlabs.org/~apopple/opencl_svm_atomics/
>
> Further testing has been performed by adding support for testing exclusive
> access to the hmm-tests kselftests.
>
> ...so that seems to cover the "sample program" request, at least.
It is also sufficiently easy to build, assuming of course you have the
appropriate Mesa/LLVM/OpenCL libraries installed :-)

If you are interested, I have some scripts which may help with building Mesa,
etc. Not that that is especially hard either; it's just that there are a couple
of different dependencies required.
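
To give a rough idea of the shape of the test, a minimal sketch follows. This
is not the actual program at the URL above, just an illustration from memory;
the structure is a guess, error handling is trimmed, and it assumes a device
that reports fine-grained SVM with atomics (CL_DEVICE_SVM_ATOMICS):

/*
 * Sketch only: race CPU and GPU atomic increments on the same counter in
 * ordinary system memory and check that no updates are lost.
 */
#define CL_TARGET_OPENCL_VERSION 200
#include <CL/cl.h>
#include <stdatomic.h>
#include <stdio.h>

static const char *src =
	"__kernel void inc(__global atomic_int *c, int iters) {\n"
	"    for (int i = 0; i < iters; i++)\n"
	"        atomic_fetch_add_explicit(c, 1, memory_order_relaxed,\n"
	"                                  memory_scope_all_svm_devices);\n"
	"}\n";

int main(void)
{
	enum { GPU_ITERS = 100000, CPU_ITERS = 100000 };
	cl_platform_id plat;
	cl_device_id dev;
	cl_int err;

	clGetPlatformIDs(1, &plat, NULL);
	clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
	cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
	cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, NULL, &err);

	/* The counter lives in system memory shared with the GPU. */
	atomic_int *counter = clSVMAlloc(ctx, CL_MEM_READ_WRITE |
					 CL_MEM_SVM_FINE_GRAIN_BUFFER |
					 CL_MEM_SVM_ATOMICS, sizeof(*counter), 0);
	*counter = 0;

	cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
	clBuildProgram(prog, 1, &dev, "-cl-std=CL2.0", NULL, NULL);
	cl_kernel k = clCreateKernel(prog, "inc", &err);
	clSetKernelArgSVMPointer(k, 0, counter);
	cl_int iters = GPU_ITERS;
	clSetKernelArg(k, 1, sizeof(iters), &iters);

	/* One work-item is enough; the race with the CPU is the point. */
	size_t one = 1;
	clEnqueueNDRangeKernel(q, k, 1, NULL, &one, NULL, 0, NULL, NULL);
	clFlush(q);

	/* Hammer the same counter from the CPU while the kernel runs. */
	for (int i = 0; i < CPU_ITERS; i++)
		atomic_fetch_add_explicit(counter, 1, memory_order_relaxed);

	clFinish(q);
	printf("counter=%d expected=%d -> %s\n", *counter,
	       GPU_ITERS + CPU_ITERS,
	       *counter == GPU_ITERS + CPU_ITERS ? "PASS" : "FAIL");

	clSVMFree(ctx, counter);
	return 0;
}

The failure mode without this series is as the cover letter describes: there
is no way to write-protect the CPU page mapping while the GPU performs its
atomics, so concurrent CPU updates get clobbered and the final count comes up
short.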
> > I wonder how we co-ordinate all the work the mm is doing, page migration,
> > reclaim with device exclusive access? Do we have any numbers for the worst
> > case page fault latency when something is marked away for exclusive
> > access?
>
> CPU page fault latency is approximately "terrible", if a page is resident on
> the GPU. We have to spin up a DMA engine on the GPU and have it copy the
> page over the PCIe bus, after all.
Although for clarity that describes the latency of CPU faults on device private
pages, which are always resident on the GPU. A CPU fault on a page being
exclusively accessed will be slightly less terrible: because the page is mapped
by the GPU rather than resident there, the fault only requires the GPU MMU/TLB
mappings to be taken down, in much the same way as for any other MMU notifier
callback.
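
For reference, the teardown is just the usual interval notifier invalidate
path. Very roughly, and purely illustrative (the struct and helper names below
are made up for the example, this is not the actual nouveau code):

#include <linux/mmu_notifier.h>

/* Hypothetical per-process GPU SVM state embedding the notifier. */
struct my_gpu_svm {
	struct mmu_interval_notifier notifier;
	/* ... device page table handles, locks, etc. ... */
};

/* Hypothetical helper that zaps device PTEs and flushes GPU TLBs. */
void my_gpu_unmap_range(struct my_gpu_svm *svm, unsigned long start,
			unsigned long end);

static bool my_gpu_invalidate(struct mmu_interval_notifier *mni,
			      const struct mmu_notifier_range *range,
			      unsigned long cur_seq)
{
	struct my_gpu_svm *svm = container_of(mni, struct my_gpu_svm, notifier);

	if (!mmu_notifier_range_blockable(range))
		return false;

	mmu_interval_set_seq(mni, cur_seq);

	/*
	 * No page contents are copied anywhere; the device mappings are
	 * simply torn down so the next GPU access faults and replays
	 * through the driver.
	 */
	my_gpu_unmap_range(svm, range->start, range->end);

	return true;
}

static const struct mmu_interval_notifier_ops my_gpu_notifier_ops = {
	.invalidate = my_gpu_invalidate,
};

So the CPU fault mostly waits for that invalidate to complete rather than for
a DMA of the page contents, which is why the worst case is much better than
for the device private case.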
> > I presume for now this is anonymous memory only? SWP_DEVICE_EXCLUSIVE
> > would
>
> Yes, for now.
>
> > only impact the address space of programs using the GPU. Should the
> > exclusively marked range live in the unreclaimable list and recycled back
> > to active/in-active to account for the fact that
> >
> > 1. It is not reclaimable and reclaim will only hurt via page faults?
> > 2. It ages the page correctly or at-least allows for that possibility when
> >    the page is used by the GPU.
>
> I'm not sure that that is *necessarily* something we can conclude. It
> depends upon access patterns of each program. For example, a "reduction"
> parallel program sends over lots of data to the GPU, and only a tiny bit of
> (reduced!) data comes back to the CPU. In that case, freeing the physical
> page on the CPU is actually the best decision for the OS to make (if the OS
> is sufficiently prescient).
>
> thanks,