Message-ID: <553917F4.4080300@redhat.com>
Date: Thu, 23 Apr 2015 12:04:04 -0400
From: Rik van Riel <riel@...hat.com>
To: Christoph Lameter <cl@...ux.com>,
Jerome Glisse <j.glisse@...il.com>
CC: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
jglisse@...hat.com, mgorman@...e.de, aarcange@...hat.com,
airlied@...hat.com, benh@...nel.crashing.org,
aneesh.kumar@...ux.vnet.ibm.com,
Cameron Buschardt <cabuschardt@...dia.com>,
Mark Hairgrove <mhairgrove@...dia.com>,
Geoffrey Gerfin <ggerfin@...dia.com>,
John McKenna <jmckenna@...dia.com>, akpm@...ux-foundation.org
Subject: Re: Interacting with coherent memory on external devices
On 04/21/2015 08:50 PM, Christoph Lameter wrote:
> On Tue, 21 Apr 2015, Jerome Glisse wrote:
>> So, big use case here: let's say you have an application that relies
>> on a scientific library that does matrix computation. Your application
>> simply uses malloc and hands pointers to this scientific library. Now
>> let's say the good folks working on this scientific library want to
>> leverage the GPU. They could do it by allocating GPU memory through a
>> GPU-specific API and copying data in and out. For matrices that can be
>> easy enough, but it is still inefficient. What you really want is the
>> GPU directly accessing this malloced chunk of memory, eventually
>> migrating it to device memory while performing the computation and
>> migrating it back to system memory once done. Which means that you do
>> not want some kind of filesystem or anything like that.
>
> With a filesystem the migration can be controlled by the application.
Which is absolutely the wrong thing to do when using the "GPU"
(or whatever co-processor it is) transparently from libraries,
without the applications having to know about it.
Your use case is legitimate, but so is this other case.
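
To make the library-transparency case concrete, here is a minimal
sketch in CUDA C of the call pattern Jerome describes. It assumes
(beyond anything stated in this thread) a GPU stack where device
kernels can dereference plain malloc()ed host pointers, i.e. exactly
the HMM-style capability under discussion; on hardware without that,
cudaMallocManaged() is the closest existing analog. The saxpy kernel
and lib_saxpy entry point are hypothetical stand-ins for the
scientific library.

/*
 * Sketch only: assumes device access to plain malloc()ed memory
 * (HMM/ATS-style), which is the capability being argued for here.
 */
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* The "scientific library" kernel: y = a*x + y over n elements. */
__global__ void saxpy(int n, float a, const float *x, float *y)
{
	int i = blockIdx.x * blockDim.x + threadIdx.x;

	if (i < n)
		y[i] = a * x[i] + y[i];
}

/*
 * Library entry point. It takes pointers the *application* allocated
 * with plain malloc(): no GPU-specific allocation, no explicit
 * copies. Any migration to and from device memory is left to the
 * kernel's memory manager.
 */
void lib_saxpy(int n, float a, const float *x, float *y)
{
	saxpy<<<(n + 255) / 256, 256>>>(n, a, x, y);
	cudaDeviceSynchronize();
}

int main(void)
{
	int n = 1 << 20;
	float *x = (float *)malloc(n * sizeof(*x));
	float *y = (float *)malloc(n * sizeof(*y));

	for (int i = 0; i < n; i++) {
		x[i] = 1.0f;
		y[i] = 2.0f;
	}

	/* The application never learns a GPU is involved. */
	lib_saxpy(n, 2.0f, x, y);

	printf("y[0] = %f (expect 4.0)\n", y[0]);
	free(x);
	free(y);
	return 0;
}

The point is in the signature of lib_saxpy: the application allocates
with malloc and stays oblivious to the co-processor, which is the
case that application-controlled, filesystem-based migration cannot
cover.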