Message-ID: <4CC9647A.50108@collabora.co.uk>
Date:	Thu, 28 Oct 2010 12:54:34 +0100
From:	Ian Molton <ian.molton@...labora.co.uk>
To:	Avi Kivity <avi@...hat.com>
CC:	linux-kernel@...r.kernel.org,
	QEMU Developers <qemu-devel@...gnu.org>,
	virtualization@...ts.osdl.org
Subject: Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport

On 28/10/10 10:27, Avi Kivity wrote:
> On 10/27/2010 03:00 PM, Ian Molton wrote:
>> On 19/10/10 11:39, Avi Kivity wrote:
>>> On 10/19/2010 12:31 PM, Ian Molton wrote:
>>
>>>>> 2. should start with a patch to the virtio-pci spec to document what
>>>>> you're doing
>>>>
>>>> Where can I find that spec?
>>>
>>> http://ozlabs.org/~rusty/virtio-spec/
>>
>> Ok, but I'm not patching that until there's been some review.
>
> Well, I like to review an implementation against a spec.

True, but then all that would prove is that I can write a spec to match 
the code.

The code is a proof of concept. The kernel bit is pretty simple, but I'd 
like to get some idea of whether the rest of the code will be accepted, 
given that there's not much point in having any one (or two) of these 
components exist without the others.

> Better, but still unsatisfying. If the server is busy, the caller would
> block. I guess it's expected since it's called from ->fsync(). I'm not
> sure whether that's the best interface, perhaps aio_writev is better.

The caller is intended to block, as the host must perform GL rendering 
before allowing the guest's process to continue.
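To make that concrete, the guest-side submit path I have in mind looks 
roughly like the sketch below. This is not the posted patch: the 
gl_request / gl_submit_and_wait names are made up for illustration, and it 
uses the stock virtqueue helpers rather than whatever the driver ends up 
with.

/*
 * Illustrative sketch only -- not the posted code.  Hands one command
 * buffer to the host and blocks the caller until the host signals
 * completion from the virtqueue callback.
 */
#include <linux/virtio.h>
#include <linux/scatterlist.h>
#include <linux/completion.h>

struct gl_request {
	struct completion done;	/* completed from the vq callback */
	void *cmds;		/* serialised GL command stream */
	size_t len;
};

static int gl_submit_and_wait(struct virtqueue *vq, struct gl_request *req)
{
	struct scatterlist sg;
	int err;

	sg_init_one(&sg, req->cmds, req->len);
	init_completion(&req->done);

	/* hand the buffer to the host; req comes back in the callback */
	err = virtqueue_add_outbuf(vq, &sg, 1, req, GFP_KERNEL);
	if (err)
		return err;

	virtqueue_kick(vq);

	/* block until the host has rendered the buffer */
	wait_for_completion(&req->done);
	return 0;
}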

The only real bottleneck is that processes will block trying to submit 
data while another process is rendering, but that will only be solved when 
the renderer is made multithreaded. The same would happen on a real GPU 
that had only one queue.

If you look at the host code, you can see that the data is already 
buffered per-process in a pretty sensible way. If the renderer itself 
were made a separate thread, then this problem magically disappears (the 
queuing code on the host is pretty fast).
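For illustration only (this is not the code in the patch, and 
render_gl_commands() is a made-up stand-in for the real renderer entry 
point), the threaded host side would look roughly like this: submitters 
just append to the queue and return, and one thread owning the GL context 
drains it.

#include <pthread.h>
#include <stdlib.h>

/* Hypothetical stand-in for the real renderer entry point. */
void render_gl_commands(int pid, void *cmds, size_t len);

struct gl_buffer {
	struct gl_buffer *next;
	int pid;		/* guest process the commands came from */
	void *cmds;		/* serialised GL command stream */
	size_t len;
};

static struct gl_buffer *queue_head, *queue_tail;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t queue_ready = PTHREAD_COND_INITIALIZER;

/* Called from the virtio handler: cheap, never waits for rendering. */
void gl_queue_buffer(struct gl_buffer *buf)
{
	buf->next = NULL;
	pthread_mutex_lock(&queue_lock);
	if (queue_tail)
		queue_tail->next = buf;
	else
		queue_head = buf;
	queue_tail = buf;
	pthread_cond_signal(&queue_ready);
	pthread_mutex_unlock(&queue_lock);
}

/* The one thread that owns the GL context and does the actual rendering. */
void *gl_render_thread(void *unused)
{
	for (;;) {
		struct gl_buffer *buf;

		pthread_mutex_lock(&queue_lock);
		while (!queue_head)
			pthread_cond_wait(&queue_ready, &queue_lock);
		buf = queue_head;
		queue_head = buf->next;
		if (!queue_head)
			queue_tail = NULL;
		pthread_mutex_unlock(&queue_lock);

		render_gl_commands(buf->pid, buf->cmds, buf->len);
		free(buf->cmds);
		free(buf);
	}
	return NULL;
}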

In testing, the overhead of this was pretty small anyway. Running a few 
dozen copies of glxgears and a copy of ioquake3 simultaneously on an Intel 
video card managed the same framerate with the same CPU utilisation, both 
with the old code and the version I just posted. Contention during 
rendering just isn't much of an issue.

-Ian
