Message-Id: <1243382763.13930.112.camel@nigel-laptop>
Date: Wed, 27 May 2009 10:06:03 +1000
From: Nigel Cunningham <nigel@...onice.net>
To: "Rafael J. Wysocki" <rjw@...k.pl>
Cc: linux-pm@...ts.linux-foundation.org,
tuxonice-devel@...ts.tuxonice.net, linux-kernel@...r.kernel.org,
Pavel Machek <pavel@....cz>
Subject: Re: [TuxOnIce-devel] [RFC] TuxOnIce
Hi.
On Wed, 2009-05-27 at 00:37 +0200, Rafael J. Wysocki wrote:
> Well, first, our multithreaded I/O is probably not the same thing as you think
> of, because we have an option to use multiple user space threads that process
> image data (compress, encrypt and feed them to the kernel).
Well, I'm using multiple kernel threads, so it's probably not that
different, but okay.
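
(For anyone following along who hasn't read either implementation, here
is a minimal user-space sketch of the idea being discussed: a few worker
threads claim page-sized chunks of the image and do the CPU-heavy
transform - a stand-in for compress+encrypt - in parallel, before the
data would be handed on to the writer. Everything in it - transform(),
NWORKERS and so on - is made up for illustration; it is not the uswsusp
or TuxOnIce code.

#include <pthread.h>
#include <stdio.h>

#define CHUNK    4096	/* one page of image data */
#define NCHUNKS  256	/* size of the toy "image" */
#define NWORKERS 4

static unsigned char image[NCHUNKS][CHUNK];	/* pretend snapshot data */
static int next_chunk;				/* next chunk to claim */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for compress()+encrypt(); a real tool would call e.g. LZO or
 * zlib plus a cipher here. */
static void transform(unsigned char *buf, size_t len)
{
	size_t i;

	for (i = 0; i < len; i++)
		buf[i] ^= 0xAA;
}

static void *worker(void *arg)
{
	(void)arg;
	for (;;) {
		int idx;

		pthread_mutex_lock(&lock);
		idx = next_chunk < NCHUNKS ? next_chunk++ : -1;
		pthread_mutex_unlock(&lock);
		if (idx < 0)
			return NULL;

		/* The CPU-bound work happens outside the lock, so several
		 * chunks are transformed in parallel. */
		transform(image[idx], CHUNK);
	}
}

int main(void)
{
	pthread_t tid[NWORKERS];
	int i;

	for (i = 0; i < NWORKERS; i++)
		pthread_create(&tid[i], NULL, worker, NULL);
	for (i = 0; i < NWORKERS; i++)
		pthread_join(tid[i], NULL);

	/* In the real tools the transformed chunks would now be handed to
	 * the kernel / written to the swap device, in order. */
	printf("transformed %d chunks with %d threads\n", NCHUNKS, NWORKERS);
	return 0;
}

TuxOnIce does the equivalent with kernel threads and uswsusp with
user-space threads, but overlapping the CPU work with the device writes
is the same idea either way.)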
> Now, we found that it only improves performance substantially if both
> compression and encryption are used.
>
> As far as the numbers are concerned, I don't have the raw hdparm numbers
> handy, but image writing speed with compression alone usually is in the
> 60 MB/s - 100 MB/s range, sometimes more than 100 MB/s, but that really depends
> on the system. If encryption is added, it drops substantially and multiple
> threads allow us to restore the previous performance (not 100%, but close
> enough to be worth the additional complexity IMO).
Okay. That's still a significant improvement over 20 or 30 MB/s. Of
course, it also depends on the compression ratio that's achieved.
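
(As a back-of-the-envelope illustration of that point - the figures are
made up, not measurements from this thread: if the device sustains
60 MB/s of compressed writes and the image compresses 2:1, the snapshot
code effectively sees about 120 MB/s of image data.

#include <stdio.h>

int main(void)
{
	double disk_mb_s = 60.0;	/* sustained write speed of compressed data */
	double ratio     = 2.0;		/* uncompressed bytes per compressed byte */

	/* The snapshot code sees the disk rate multiplied by the ratio. */
	printf("effective image write speed: %.0f MB/s\n", disk_mb_s * ratio);
	return 0;
}
)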
> Still, one should always take the total length of the hibernate-resume cycle into
> account, and the image writing/reading time need not be the greatest part of it.
Yes, the total time is what matters. I'm focussing on the I/O speed
because - especially with larger images - it tends to be the biggest
portion.
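
(To make "biggest portion" concrete, here is a rough model with
hypothetical numbers: total time is roughly a fixed overhead for
freezing tasks, snapshotting and powering down, plus image size divided
by the effective write speed.

#include <stdio.h>

int main(void)
{
	double overhead_s = 5.0;	/* freeze + snapshot + power-off, say */
	double image_mb   = 2048.0;	/* a 2 GB image */
	double write_mb_s = 80.0;	/* effective write speed */
	double io_s       = image_mb / write_mb_s;

	printf("I/O: %.1f s of %.1f s total (%.0f%%)\n",
	       io_s, overhead_s + io_s, 100.0 * io_s / (overhead_s + io_s));
	return 0;
}

With those example figures the I/O is about 25 of 30 seconds, which is
why larger images make the write speed dominate.)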
Regards,
Nigel