Message-ID: <49006758.9020901@redhat.com>
Date:	Thu, 23 Oct 2008 08:00:24 -0400
From:	Ric Wheeler <rwheeler@...hat.com>
To:	Solofo.Ramangalahy@...l.net,
	Arjan De Ven <arjan.van.de.ven@...el.com>
CC:	Eric Sandeen <sandeen@...hat.com>,
	Ric Wheeler <rwheeler@...hat.com>,
	"Theodore Ts'o" <tytso@....edu>, linux-ext4@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: jbd/jbd2 performance improvements

Solofo.Ramangalahy@...l.net wrote:
>>>>>> On Thu, 16 Oct 2008 07:39:04 -0500, Eric Sandeen <sandeen@...hat.com> said:
>     >> A very thorough test, but the results don't seem to point to a
>     >> consistent winner.
>     >> 
>     >> I agree that running without KVM in the picture might be very
>     >> interesting. Eric has some similar tests underway, I think that
>     >> his results were also inconclusive so far...
>
>     Eric> Yep, I've yet to find an fs_mark invocation, at least, which
>     Eric> shows a clear winner.  I also ran w/ akpm's suggested
>     Eric> io_schedule watcher patch and never see us waiting on this
>     Eric> lock (I did set it to 1s though, which is probably too long
>     Eric> for my storage).
>
> I've redone the tests without KVM. Still no clear winner.
>
> To sum up:
> . kernel ext4-stable
> . mkfs (1.41.3) default options
> . mount options: default, akpm, akpm_lock_hack
> . scheduler default (cfq)
> . 8 CPUs, single 15K RPM disk.
> . without the high latency detection patch
> . a broad range of fs_mark runs (all the sync strategies, from 1 to 32
>   threads, up to 10000 files/thread, several directories).
> . a "tangled synchrony" workload, as mentioned in the "Analysis and
>   Evolution of Journaling File Systems" paper discussed on Monday.
>
> First things first: maybe I should have spent more time
> reproducing the behavior Arjan saw before testing.
>
> This was not a complete waste of time though, as the following errors
> were spotted during the runs:
> 1. EXT4-fs error (device sdb): ext4_free_inode: bit already cleared for inode 32769
> 2. EXT4-fs error (device sdb): ext4_init_inode_bitmap: Checksum bad for group 8
> 3. BUG: spinlock wrong CPU on CPU#3, fs_mark/1975
>  lock: ffff88015a44f480, .magic: dead4ead, .owner: fs_mark/1975, .owner_cpu: 1
> Pid: 1975, comm: fs_mark Not tainted 2.6.27.1-ext4-stable-gcov #1
>
> Call Trace:
>  [<ffffffff811a47a2>] spin_bug+0xa2/0xaa
>  [<ffffffff811a481f>] _raw_spin_unlock+0x75/0x8a
>  [<ffffffff814552c1>] _spin_unlock+0x26/0x2a
>  [<ffffffffa00d4fd3>] ext4_read_inode_bitmap+0xfa/0x14e [ext4]
>  [<ffffffffa00d564b>] ext4_new_inode+0x5d4/0xec4 [ext4]
>  [<ffffffff810562db>] ? __lock_acquire+0x481/0x7d8
>  [<ffffffffa00c2430>] ? jbd2_journal_start+0xef/0x11a [jbd2]
>  [<ffffffffa00c2430>] ? jbd2_journal_start+0xef/0x11a [jbd2]
>  [<ffffffffa00deb99>] ext4_create+0xc7/0x144 [ext4]
>  [<ffffffff810b6734>] vfs_create+0xdf/0x155
>  [<ffffffff810b8905>] do_filp_open+0x220/0x7fc
>  [<ffffffff814552c1>] ? _spin_unlock+0x26/0x2a
>  [<ffffffff810abe5a>] do_sys_open+0x53/0xd3
>  [<ffffffff810abf03>] sys_open+0x1b/0x1d
>  [<ffffffff8100bf8b>] system_call_fastpath+0x16/0x1b
>  
> Anybody seen this in their logs?
>
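For context on error 3: with CONFIG_DEBUG_SPINLOCK the kernel stores the
locking CPU in the lock's ->owner_cpu field, and _raw_spin_unlock() calls
spin_bug() when the unlocking CPU does not match, which is exactly what the
report shows (.owner_cpu: 1 vs. CPU#3; the .magic value dead4ead is just
SPINLOCK_MAGIC). Kernel spinlocks disable preemption, so the holder cannot
migrate between lock and unlock; a mismatch therefore points at a double
unlock, an unlock of somebody else's lock, or corruption of the lock word
itself. Here is a minimal userspace sketch of the check, with invented
dbg_* names (caveat: userspace threads can migrate, so this analog may also
fire on benign migration):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Userspace stand-in for a kernel debug spinlock. */
struct dbg_spinlock {
    pthread_spinlock_t lock;
    int owner_cpu;                  /* CPU that took the lock, -1 when free */
};

static void dbg_lock(struct dbg_spinlock *l)
{
    pthread_spin_lock(&l->lock);
    l->owner_cpu = sched_getcpu();  /* record the locking CPU */
}

static void dbg_unlock(struct dbg_spinlock *l)
{
    int cpu = sched_getcpu();

    /* The check that fired above: unlock must run on the CPU that locked. */
    if (l->owner_cpu != cpu)
        fprintf(stderr, "BUG: spinlock wrong CPU on CPU#%d, owner_cpu: %d\n",
                cpu, l->owner_cpu);
    l->owner_cpu = -1;
    pthread_spin_unlock(&l->lock);
}

int main(void)
{
    struct dbg_spinlock sl = { .owner_cpu = -1 };

    pthread_spin_init(&sl.lock, PTHREAD_PROCESS_PRIVATE);
    dbg_lock(&sl);
    dbg_unlock(&sl);    /* quiet unless the thread migrated in between */
    pthread_spin_destroy(&sl.lock);
    return 0;
}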
> The "bit already cleared for inode" is triggered by:
> fs_mark -v -d /mnt/test-ext4 -n10000 -D10 -N1000 -t8 -s4096 -S0
>
>   
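Errors 1 and 2 look related: ext4_free_inode() does an atomic
test-and-clear of the inode's bit in the group's inode bitmap and logs
"bit already cleared" when the bit was already 0, meaning either the same
inode was freed twice or the bitmap that CPU operated on was stale or
corrupt (which the ext4_init_inode_bitmap checksum failure in error 2
would also suggest). A minimal sketch of that class of check, assuming an
invented userspace bitmap and free_inode_bit() helper rather than the real
ext4 code:

#include <stdatomic.h>
#include <stdio.h>

#define BITS_PER_WORD 64

static _Atomic unsigned long bitmap[1024];  /* stand-in inode bitmap */

/* Returns 0 on success, -1 if the bit was already clear (double free). */
static int free_inode_bit(unsigned long ino)
{
    unsigned long mask = 1UL << (ino % BITS_PER_WORD);
    unsigned long old = atomic_fetch_and(&bitmap[ino / BITS_PER_WORD], ~mask);

    if (!(old & mask)) {
        fprintf(stderr, "error: bit already cleared for inode %lu\n", ino);
        return -1;
    }
    return 0;
}

int main(void)
{
    bitmap[32769 / BITS_PER_WORD] = 1UL << (32769 % BITS_PER_WORD);
    free_inode_bit(32769);      /* first free clears the bit */
    free_inode_bit(32769);      /* second free trips the check */
    return 0;
}

With eight fs_mark threads hammering the same block group, two CPUs racing
on a stale copy of that bitmap could hit this check only intermittently,
which would fit the sporadic reports.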
Arjan,

Do you have any details on the test case you ran that showed a
clear improvement? What kind of storage and I/O pattern did you use?

Regards,

Ric

