Message-ID: <20120402105417.GG15713@amit.redhat.com>
Date:	Mon, 2 Apr 2012 16:24:17 +0530
From:	Amit Shah <amit.shah@...hat.com>
To:	Wen Congyang <wency@...fujitsu.com>
Cc:	kvm list <kvm@...r.kernel.org>, qemu-devel <qemu-devel@...gnu.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Avi Kivity <avi@...hat.com>,
	"Daniel P. Berrange" <berrange@...hat.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Jan Kiszka <jan.kiszka@...mens.com>,
	Gleb Natapov <gleb@...hat.com>
Subject: Re: [PATCH 0/2 v3] kvm: notify host when guest panicked

On (Mon) 02 Apr 2012 [18:05:45], Wen Congyang wrote:
> At 03/19/2012 03:33 PM, Wen Congyang Wrote:
> > At 03/08/2012 03:57 PM, Wen Congyang Wrote:
> >> We can tell that the guest has panicked when it runs on Xen,
> >> but we have no such feature on KVM.
> >>
> >> Another purpose of this feature is that a management app (for
> >> example, libvirt) can automatically take a dump when the guest
> >> crashes. If the management app does not do an automatic dump, the
> >> guest's user can take one by hand once he sees that the guest has
> >> panicked.
> >>
> >> I touch the hypervisor instead of using virtio-serial because:
> >> 1. it is simple
> >> 2. virtio-serial is an optional device, and the guest may
> >>    not have such a device.
> >>
> >> Changes from v2 to v3:
> >> 1. correct spelling
> >>
> >> Changes from v1 to v2:
> >> 1. split up host and guest-side changes
> >> 2. introduce new request flag to avoid changing return values.
> > 
> > Hi all:
> > 
> > We need this feature, but we have not decided how to implement it.
> > We have two solutions:
> > 1. use vmcall
> > 2. use virtio-serial.
> 
> Hi, all
> 
> There are three solutions now:
> 1. use vmcall
> 2. use I/O port
> 3. use virtio-serial.
> 
> I think 1 and 2 are simpler than 3.
> 
> I have been reading virtio-serial's driver in recent days. It seems
> that if the virtio-serial port is not opened on the host side, the
> data written to this port is discarded, and we would have no chance
> to learn that the guest has panicked.

The qemu-side implementation should exist within qemu itself; i.e. the
consumer of the data from the kernel side will always have a listener.
In that case, you don't have to worry about the port not being opened
on the host side.

> To Amit:
> 
> Can we write message to a virtio serial port like this directly in
> the kernel space?

As I mentioned earlier, an in-kernel API is currently missing, but it
would be very simple to add one.

> send_buf(panicked_port, message, message's length, true);
> 
> if port->outvq_full is true, is it OK to do this?

port->outvq_full means the guest has sent data to the host, but the
host has not yet consumed it and has not released the buffers back to
the guest.

If you do reach such a situation (you essentially never should; there
are enough buffers to absorb scheduling delays), then newer data will
be discarded, or you will have to wait to write the newer data until
the host side frees up buffers in the virtqueues.

This isn't really different from other approaches: with a shared
buffer between guest and host, if the guest has new data before the
host has had a chance to read off the older buffer contents, you
either overwrite the old data or wait for the host to read it.


		Amit