Date:	Thu, 19 Oct 2006 14:17:18 -0500
From:	Anthony Liguori <aliguori@...ibm.com>
To:	Avi Kivity <avi@...ranet.com>
CC:	Alan Cox <alan@...rguk.ukuu.org.uk>,
	John Stoffel <john@...ffel.org>,
	linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/7] KVM: userspace interface

Avi Kivity wrote:
> Anthony Liguori wrote:
>>
>> ioctls are probably wrong here though.  Ideally, you would want to be 
>> able to support an SMP guest.  This means you need to have two 
>> virtual processors executing in kernel space.  If you use ioctls, it 
>> forces you to have two separate threads in userspace.  This would be 
>> hard for something like QEMU which is currently single threaded (and 
>> not at all thread safe).
>>
>
> Since we're using the Linux scheduler, we need a task per virtual cpu 
> anyway, so a thread per vcpu is not a problem.
>

You miss my point, I think.  Using ioctls *requires* a thread per vcpu 
in userspace.  That is unnecessary, since you could simply provide a 
char-device based read/write interface and then multiplex events from a 
single thread with poll().
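
To make that concrete, here is a rough single-threaded sketch.  The 
per-vcpu device fds, the exit-record layout and the read/write protocol 
below are all made up for illustration; this is not what the posted 
patches implement:

#include <poll.h>
#include <stdio.h>
#include <unistd.h>

#define MAX_VCPUS 16

struct vcpu_exit {                /* made-up exit record, not a real ABI */
	int reason;               /* e.g. PIO or MMIO access */
	unsigned long addr;
	unsigned long data;
};

static void handle_exit(struct vcpu_exit *e)
{
	/* emulate the access in the single-threaded device model */
	(void)e;
}

static int run_vcpus(const int *vcpu_fd, int nr_vcpus)
{
	struct pollfd pfd[MAX_VCPUS];
	struct vcpu_exit e;
	int i;

	for (i = 0; i < nr_vcpus; i++) {
		pfd[i].fd = vcpu_fd[i];
		pfd[i].events = POLLIN;
	}

	for (;;) {
		if (poll(pfd, nr_vcpus, -1) < 0) {
			perror("poll");
			return -1;
		}
		for (i = 0; i < nr_vcpus; i++) {
			if (!(pfd[i].revents & POLLIN))
				continue;
			/* vcpu i exited to userspace; fetch the request */
			if (read(vcpu_fd[i], &e, sizeof(e)) != sizeof(e))
				continue;
			handle_exit(&e);
			/* writing the completed request back resumes the vcpu */
			write(vcpu_fd[i], &e, sizeof(e));
		}
	}
}

The point is just that one thread can service any number of vcpus this 
way, which is hard to arrange when each vcpu is driven by its own 
blocking ioctl.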

If nothing else, you have to be able to run timers in userspace and 
interrupt a vcpu while it is executing in the kernel (to signal DMA 
completion, for instance).  Even in the UP case, this gets ugly quickly.
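
With the read/write scheme the same loop can bound its poll() timeout 
by the next pending timer and then kick the guest.  Again, the 
interrupt record, the helper functions and the idea of writing an IRQ 
event to the vcpu fd are only assumptions for the sake of the sketch:

#include <poll.h>
#include <unistd.h>

struct irq_event {                /* made-up interrupt record */
	int irq;
};

/* ms until the earliest pending emulated-device timer (placeholder) */
static int next_timer_ms(void)
{
	return 10;
}

/* run expired timers (DMA completion callbacks etc.); return irq or -1 */
static int run_expired_timers(void)
{
	return -1;
}

static void io_loop(struct pollfd *pfd, const int *vcpu_fd, int nr_vcpus)
{
	for (;;) {
		int n = poll(pfd, nr_vcpus, next_timer_ms());

		if (n == 0) {             /* timeout: a userspace timer fired */
			int irq = run_expired_timers();
			if (irq >= 0) {
				struct irq_event ev = { .irq = irq };
				/* kick the (possibly in-kernel) vcpu */
				write(vcpu_fd[0], &ev, sizeof(ev));
			}
			continue;
		}
		/* otherwise service vcpu exits as in the previous sketch */
	}
}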

read/write is really just a much cleaner interface for anything that has 
blocking semantics.

Regards,

Anthony Liguori

>> If you used a read/write interface, you could poll for any number of 
>> processors and handle IO emulation in a single userspace thread 
>> (which seems closer to how hardware really works anyway).
>>
>
> We can still do that by having the thread write an I/O request to a 
> hardware service thread, and read back the response.  However, that 
> will not be too good for scheduling.  For now the SMP plan is to slap 
> a single lock on the qemu device model, and later fine-grain the 
> locking on individual devices as necessary.
>
> Qemu's transition to aio will probably help in reducing the amount of 
> work done under lock.
>

