Message-ID: <20100216210728.GO29569@tux1.beaverton.ibm.com>
Date:	Tue, 16 Feb 2010 13:07:28 -0800
From:	"Darrick J. Wong" <djwong@...ibm.com>
To:	"Theodore Ts'o" <tytso@....edu>
Cc:	Ext4 Developers List <linux-ext4@...r.kernel.org>
Subject: Re: [PATCH v4 0/3] dioread_nolock patch

On Fri, Jan 15, 2010 at 02:30:09PM -0500, Theodore Ts'o wrote:

> The plan is to merge this for 2.6.34.  I've looked this over pretty
> carefully, but another pair of eyes would be appreciated, especially if

I don't have a high-speed disk, but it was suggested that I give this patchset a
whirl anyway, so down the rabbit hole I went.  I created a 16GB ext4 image in
an equally big tmpfs, then ran the read/readall directio tests in ffsb to see
if I could observe any difference.  The kernel is 2.6.33-rc8, and the machine
in question has 2 Xeon E5335 processors and 24GB of RAM.  I reran the test
several times, with varying thread counts, to produce the table below.  The
units are MB/s.

For the dio_lock case, mount options were: rw,relatime,barrier=1,data=ordered.
For the dio_nolock case, they were: rw,relatime,barrier=1,data=ordered,dioread_nolock.
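In case it helps anyone eyeball the access pattern: the directio read side of
the test boils down to something like the sketch below.  This is not the ffsb
source, just a minimal aligned O_DIRECT read loop; the file path, the 1MB
request size, and the 4K alignment are my own guesses, not ffsb's settings.

/* Minimal sketch of an aligned O_DIRECT read loop.  Not the ffsb code;
 * path, request size, and alignment below are illustrative assumptions.
 * Build with: gcc -o dioread dioread.c
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define REQ_SIZE  (1024 * 1024)  /* bytes per read request */
#define ALIGN     4096           /* O_DIRECT wants sector/page alignment */

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/mnt/test/bigfile";
	void *buf;
	ssize_t n;
	long long total = 0;

	int fd = open(path, O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* O_DIRECT requires an aligned buffer (and aligned offsets/lengths). */
	if (posix_memalign(&buf, ALIGN, REQ_SIZE) != 0) {
		fprintf(stderr, "posix_memalign failed\n");
		return 1;
	}

	while ((n = read(fd, buf, REQ_SIZE)) > 0)
		total += n;
	if (n < 0)
		perror("read");

	printf("read %lld bytes\n", total);
	free(buf);
	close(fd);
	return 0;
}
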

threads     dio_nolock           dio_lock
            read     readall     read     readall
1           37.6     149         39       159
2           59.2     245         62.4     246
4           114      453         112      445
8           111      444         115      459
16          109      442         113      448
32          114      443         121      484
64          106      422         108      434
128         104      417         101      393
256         101      412         90.5     366
512         93.3     377         84.8     349
1000        87.1     353         88.7     348

The old code path looks a bit faster at low thread counts, while the new
patches pull ahead once the thread count gets very high.  That said, I'm not
all that familiar with what exactly tmpfs does, or how well it mimics an SSD
(though I wouldn't be surprised to hear
"poorly").  This of course makes me wonder--do other people see results like
this, or is this particular to my harebrained setup?

For that matter, do I need anything more than 2.6.33-rc8 plus the four
patches posted in this thread?

I also observed that I could make the kernel spit up hung-task warnings
("blocked for more than 120 seconds") if I happened to be running ffsb on a
real disk during a heavy directio write load.  I'll poke around on that a
little more and write back when I have more details.

For power-off testing, could one simulate a power failure by running I/O
workloads in a VM and then SIGKILLing the VM?  I don't remember seeing any sort
of powerfail test suite from the Googlers, but my mail client has been drinking
out of firehoses lately. ;)

--D
