Message-ID: <804dabb00903122102s3d91dff9jd36958a5c7bf0843@mail.gmail.com>
Date: Fri, 13 Mar 2009 12:02:04 +0800
From: Peter Teoh <htmldeveloper@...il.com>
To: LKML <linux-kernel@...r.kernel.org>
Subject: GPU scheduling
Firstly, I am a newbie in this area of GPUs. Does it make sense to
talk about scheduling tasks on the GPU? Not graphics-processing
tasks, nor parallel matrix calculations, but the kind normally
executed by the CPU, so as to share the workload with the CPU?
I suppose the machine code will be different, right? But if the
compiler can generate opcodes for the GPU, and the GPU and CPU can
detect concurrent use of the same memory bus address (for implementing
a spinlock, i.e. CPU/GPU synchronization), which amounts to the GPU and
CPU accessing the same memory for data and instructions, then I don't
see any problem with the "GPU scheduler" concept, right? Or perhaps
the hardware does not meet all these requirements? Or are there any
other problems you can foresee? (e.g., GPUs being proprietary, the
opcodes may not be revealed/available, and therefore the internal
schedulers are not known to us either?)

Just an idea. Searching for "GPU scheduling" on Google does
reveal a number of articles, but they are targeted at the Windows
platform. One in particular looks similar to the present idea:
http://portal.acm.org/citation.cfm?id=1444484, but I have no access
to read it.
--
Regards,
Peter Teoh