Message-ID: <20120727094320.GK18651@amit.redhat.com>
Date:	Fri, 27 Jul 2012 15:13:20 +0530
From:	Amit Shah <amit.shah@...hat.com>
To:	Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@...achi.com>
Cc:	linux-kernel@...r.kernel.org,
	Herbert Xu <herbert@...dor.hengli.com.au>,
	Arnd Bergmann <arnd@...db.de>,
	Frederic Weisbecker <fweisbec@...il.com>,
	yrl.pp-manager.tt@...achi.com, qemu-devel@...gnu.org,
	Borislav Petkov <bp@...64.org>,
	virtualization@...ts.linux-foundation.org,
	"Franch Ch. Eigler" <fche@...hat.com>,
	Ingo Molnar <mingo@...hat.com>,
	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Anthony Liguori <anthony@...emonkey.ws>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	Masami Hiramatsu <masami.hiramatsu.pt@...achi.com>
Subject: Re: [RFC PATCH 0/6] virtio-trace: Support virtio-trace

On (Fri) 27 Jul 2012 [17:55:11], Yoshihiro YUNOMAE wrote:
> Hi Amit,
> 
> Thank you for commenting on our work.
> 
> (2012/07/26 20:35), Amit Shah wrote:
> >On (Tue) 24 Jul 2012 [11:36:57], Yoshihiro YUNOMAE wrote:
> 
> [...]
> 
> >>
> >>Therefore, we propose a new system, "virtio-trace", which uses an enhanced
> >>virtio-serial and the existing ftrace ring-buffer to collect guest kernel
> >>tracing data. In this system, there are 5 main components:
> >>  (1) Ring-buffer of ftrace in a guest
> >>      - When the trace agent reads the ring-buffer, a page is removed
> >>        from it.
> >>  (2) Trace agent in the guest
> >>      - Splices a page of the ring-buffer to read_pipe using splice()
> >>        without copying memory, then splices the page from write_pipe
> >>        to virtio, again without copying.
> >
> >I really like the splicing idea.
> 
> Thanks. We will improve this patch set.
> 
> >>  (3) Virtio-console driver in the guest
> >>      - Passes the page to the virtio-ring
> >>  (4) Virtio-serial bus in QEMU
> >>      - Copies the page to a kernel pipe
> >>  (5) Reader in the host
> >>      - Reads guest tracing data via a FIFO (named pipe)
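
Just to check that I'm reading the zero-copy path in (1)-(2) correctly: the
agent basically sits in a per-CPU splice() loop, moving pages from the ftrace
ring-buffer file into a pipe and from the pipe into the virtio-serial port.
A rough, untested sketch of what I have in mind is below; the debugfs and
port paths are just examples, not taken from your patches:

/*
 * Untested sketch of one per-CPU trace-agent loop as I understand (1)-(2):
 * splice pages from the ftrace ring-buffer into a pipe, then from the pipe
 * into a virtio-serial port, so the data never passes through a user-space
 * buffer.  The debugfs and port paths below are examples only.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define TRACE_PATH "/sys/kernel/debug/tracing/per_cpu/cpu0/trace_pipe_raw"
#define PORT_PATH  "/dev/virtio-ports/example.trace.cpu0"

int main(void)
{
    int pipefd[2];
    long pgsz = sysconf(_SC_PAGESIZE);
    int trace_fd = open(TRACE_PATH, O_RDONLY);
    int port_fd  = open(PORT_PATH, O_WRONLY);
    ssize_t in, out;

    if (trace_fd < 0 || port_fd < 0 || pipe(pipefd) < 0) {
        perror("setup");
        return 1;
    }

    for (;;) {
        /* Pull one ring-buffer page into the pipe. */
        in = splice(trace_fd, NULL, pipefd[1], NULL, pgsz, SPLICE_F_MOVE);
        if (in <= 0)
            break;
        /* Push the same data from the pipe into the virtio port. */
        while (in > 0) {
            out = splice(pipefd[0], NULL, port_fd, NULL, in, SPLICE_F_MOVE);
            if (out <= 0)
                goto done;
            in -= out;
        }
    }
done:
    close(trace_fd);
    close(port_fd);
    return 0;
}

On the host side, (5) then only needs something that keeps reading the named
pipe (more on that further down).
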
> >
> >So will this be useful only if guest and host run the same kernel?
> >
> >I'd like to see the host kernel not being used at all -- collect all
> >relevant info from the guest and send it out to qemu, where it can be
> >consumed directly by apps driving the tracing.
> 
> No, this patch set is used only for guest kernels, so guest and host
> don't need to run the same kernel.

OK - that's good to know.

> >>***Evaluation***
> >>When a host collects a guest's tracing data, the performance of
> >>virtio-trace is compared with that of native tracing (just running
> >>ftrace), IVRing, and plain virtio-serial (the normal read/write method).
> >
> >Why is tracing performance-sensitive?  i.e. why try to optimise this
> >at all?
> 
> To minimize the impact on applications in the guests while a host
> collects their tracing data.
> For example, assume guests A and B are running on a host and sharing an
> I/O device. An I/O delay problem occurs in guest A, but guest B still
> meets its requirements. In this case, we need to collect tracing data
> from both guests A and B, but a conventional method that goes over the
> network puts a high load on guest B's applications even though guest B
> is running normally. Therefore, we try to reduce the load on the guests.
> We also use this feature for performance analysis on production
> virtualization systems.

OK, got it.

> 
> [...]
> 
> >>
> >>***Just enhancement ideas***
> >>  - Support for trace-cmd
> >>  - Support for 9pfs protocol
> >>  - Support for non-blocking mode in QEMU
> >
> >There were patches long back (by me) to make chardevs non-blocking, but
> >they didn't make it upstream.  Fedora carries them, if you want to try
> >them out.  We do want to converge on a reasonable solution that's
> >acceptable upstream as well; it's just that no one is working on it
> >currently.  Any help here will be appreciated.
> 
> Thanks! In this case, since a guest will stop running while the host
> reads the guest's trace data, the char device needs a non-blocking
> mode. I'll read your patch series. Is the latest version 8?
> http://lists.gnu.org/archive/html/qemu-devel/2010-12/msg00035.html

I suppose the latest version on-list is what you quote above.  The
objections to the patch series are mentioned in Anthony's mails.

Hans maintains a rebased version of the patches in his tree at

http://cgit.freedesktop.org/~jwrdegoede/qemu/

those patches are included in Fedora's qemu-kvm, so you can try that
out and see whether it improves performance for you.
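
Until proper non-blocking support is upstream, the practical thing is to
make sure the host end of the chardev's named pipe is always being drained,
so that QEMU's (currently blocking) writes back up as rarely as possible.
An untested sketch of the kind of reader I mean for (5); the FIFO and output
paths are examples only:

/*
 * Untested sketch of a host-side reader for (5): drain the named pipe that
 * the virtio-serial chardev writes into and append everything to a file.
 * Both paths are examples only.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define FIFO_PATH "/tmp/virtio-trace-cpu0.fifo"
#define OUT_PATH  "/tmp/guest-trace-cpu0.dat"

int main(void)
{
    char buf[1 << 16];
    ssize_t n;
    int in  = open(FIFO_PATH, O_RDONLY);  /* blocks until a writer appears */
    int out = open(OUT_PATH, O_WRONLY | O_CREAT | O_APPEND, 0644);

    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }
    while ((n = read(in, buf, sizeof(buf))) > 0) {
        if (write(out, buf, n) != n) {
            perror("write");
            break;
        }
    }
    close(in);
    close(out);
    return 0;
}

That obviously doesn't replace non-blocking chardev support in QEMU; it just
narrows the window in which the guest can get stalled.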

> >>  - Make "vhost-serial"
> >
> >To comment on these, I need to understand a) why it's perf-critical,
> >and b) why the host should be involved at all.
> 
> a) To reduce the collection overhead for applications on a guest.
>    (see above)
> b) The host kernel's trace data is not involved even if we introduce
>    this patch set.

I see, so you suggested vhost-serial only because you saw the guest
stopping problem due to the absence of non-blocking code?  If so, it
now makes sense.  I don't think we need vhost-serial in any way yet.

BTW where do you parse the trace data obtained from guests?  On a
remote host?

Thanks,
		Amit
