Date:	Fri, 30 Mar 2007 13:53:14 +0200
From:	Blaisorblade <blaisorblade@...oo.it>
To:	user-mode-linux-devel@...ts.sourceforge.net
Cc:	Jeff Dike <jdike@...toit.com>, Andrew Morton <akpm@...l.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [uml-devel] [PATCH] UML - fix I/O hang when multiple devices are in use

On Thursday, 29 March 2007, Jeff Dike wrote:
> On Thu, Mar 29, 2007 at 02:36:43AM +0200, Blaisorblade wrote:
> > > Sometimes you need to. I'd probably just remove the do_ubd check and
> > > always recall the request function when handling completions, it's
> > > easier and safe.
>
> If I'm understanding this correctly, this is what happens now.  There
> is still the flag check and return if the queue is being run, but I
> don't see the advantage of removing that.
>
> > Anyway, the main speedups to be done on the UBD driver are:
> > * implement write barriers (so, many fewer fsync calls) - this is
> > performance killer n.1
>
> You mean preventing the upper layers from calling fsync?

No. Since we don't know when the upper layers (including the journaling
layer) want to flush, we call fsync() after every request. But they do pass
this information down, in the form of write barriers. Chris Lightfoot
implemented write barriers just before the API was changed, together with
much of the other stuff I'm talking about.

It's impressive to go back and check his original mail - in his scenario,
creating a 32M file and then deleting it, the delete takes a minute on
vanilla and 1 second on his patched code. I've downloaded the patch for
future reference, even if I don't know when I'll have time to look at it.
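
The mechanics are simple enough to sketch; here's a minimal, untested
picture of the I/O-thread side (the struct and the names are mine, not
Chris's):

#include <unistd.h>
#include <errno.h>
#include <sys/types.h>

/* Hypothetical request layout - the interesting part is the barrier
 * flag, propagated down from the block layer. */
struct io_request {
	int fd;
	off_t offset;
	void *buf;
	size_t len;
	int barrier;
};

static int handle_request(struct io_request *req)
{
	/* Short writes ignored for brevity. */
	if (lseek(req->fd, req->offset, SEEK_SET) < 0)
		return -errno;
	if (write(req->fd, req->buf, req->len) < 0)
		return -errno;
	/* fsync() only on barrier requests, instead of flushing to the
	 * host disk after every single write. */
	if (req->barrier && fsync(req->fd) < 0)
		return -errno;
	return 0;
}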

> > * possibly use the new 2.6 request layout, with scatter/gather I/O and
> > vectorized I/O on the host
>
> Yeah, this is something I've thought about on occasion but never
> done.
>
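
On the host side that boils down to one writev() per request instead of
one write() per segment; a sketch, assuming the request has already been
translated into an iovec array:

#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>
#include <errno.h>

static ssize_t write_sg(int fd, off_t offset, struct iovec *iov, int niov)
{
	/* One seek plus one vectorized write for the whole
	 * scatter/gather request. */
	if (lseek(fd, offset, SEEK_SET) < 0)
		return -errno;
	return writev(fd, iov, niov);
}
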
> > * while we're at vectorizing I/O, use async I/O as well
>
> I have that, but haven't merged it since I see no performance benefit
> for some reason.
>
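
For the archives, the submission side with the host's libaio would look
roughly like this (a sketch; error handling and the io_getevents() reaping
loop omitted, and buf must stay valid until the completion arrives):

#include <libaio.h>

static int submit_read(io_context_t ctx, int fd, void *buf,
		       size_t len, long long offset)
{
	struct iocb cb;
	struct iocb *cbs[1] = { &cb };

	/* Queue the read without blocking the I/O thread; the iocb is
	 * copied by the kernel at submission time. */
	io_prep_pread(&cb, fd, buf, len, offset);
	return io_submit(ctx, 1, cbs);
}
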
> > * avoid passing requests over pipes (performance killer n.2) - on fast
> > disks, I/O becomes CPU-bound.
>
> Right - I cooked up a scheme a while ago that had the requests on a
> list, being removed from one end and added to the other, with some
> minimal number of bytes going across the pipe to ensure a wakeup if
> the other side was possibly asleep.  But I never implemented it.
>
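
Something like this, I imagine (untested sketch; locking and memory
barriers omitted):

#include <unistd.h>

#define RING_SIZE 64

struct io_request;		/* whatever the request ends up being */

struct req_ring {
	struct io_request *slots[RING_SIZE];
	unsigned int head;	/* producer adds here */
	unsigned int tail;	/* consumer removes here */
	int consumer_sleeping;	/* set by the consumer before blocking */
	int wake_fd;		/* write end of the pipe, wakeups only */
};

static void enqueue(struct req_ring *r, struct io_request *req)
{
	char c = 0;

	r->slots[r->head % RING_SIZE] = req;
	r->head++;
	/* Touch the pipe only when the other side might be asleep; in
	 * the common busy case no syscall happens at all. */
	if (r->consumer_sleeping)
		write(r->wake_fd, &c, 1);
}
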
> > * use futexes instead of pipes for synchronization (required for the
> > previous item).
>
> Yup - for this, we either need to test the host for futexes and use
> pipes as a fallback, or give up on 2.4 as the host.
>
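
The host test should be cheap, since it's just the raw syscall; a sketch:

#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

/* If these fail with ENOSYS, the host is a futex-less 2.4 and we fall
 * back to the pipes. */
static int futex_wait(int *addr, int val)
{
	return syscall(SYS_futex, addr, FUTEX_WAIT, val, NULL, NULL, 0);
}

static int futex_wake(int *addr, int nwake)
{
	return syscall(SYS_futex, addr, FUTEX_WAKE, nwake, NULL, NULL, 0);
}
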
> > I forgot one thing: remember ubd=mmap? Something like that could have
> > been done using MAP_PRIVATE, so that write() would still have to be
> > called explicitly, but unchanged data would be shared with the host.
> >
> > Once a page gets dirty but is then cleaned, sharing it back is
> > difficult - but even without that, good savings could be achieved.
> > That's something to explore further in the future, though.
>
> Interesting idea.  That does avoid the formerly fatal mmap problem.
> If you unmap it, the private copy goes away because it lost its last
> reference, and if you map it again, you get the shared version.
>
> That's a lot of mapping and unmapping though.  I wonder if just
> calling mmap would cause the COWed page to be dropped...
>
> 				Jeff
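
I believe it would: mapping the file back over the same range with
MAP_FIXED atomically replaces the private copy, so the COWed page loses
its last reference without an explicit munmap(). A sketch:

#include <sys/types.h>
#include <sys/mman.h>

static void *drop_cow(void *addr, size_t len, int fd, off_t offset)
{
	/* MAP_FIXED discards whatever was mapped there before; we are
	 * back to sharing the page with the host page cache. */
	return mmap(addr, len, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_FIXED, fd, offset);
}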



-- 
Inform me of my mistakes, so I can add them to my list!
Paolo Giarrusso, aka Blaisorblade
http://www.user-mode-linux.org/~blaisorblade