Message-ID: <515FD0C6.5050001@suse.cz>
Date: Sat, 06 Apr 2013 09:37:42 +0200
From: Jiri Slaby <jslaby@...e.cz>
To: Theodore Ts'o <tytso@....edu>, Mel Gorman <mgorman@...e.de>,
    linux-ext4@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
    Linux-MM <linux-mm@...ck.org>
Subject: Re: Excessive stall times on ext4 in 3.9-rc2

On 04/06/2013 09:29 AM, Jiri Slaby wrote:
> On 04/06/2013 01:16 AM, Theodore Ts'o wrote:
>> On Sat, Apr 06, 2013 at 12:18:11AM +0200, Jiri Slaby wrote:
>>> Ok, so now I'm running 3.9.0-rc5-next-20130404. It's not that bad, but
>>> it still sucks: updating a kernel in a VM still results in "Your system
>>> is too SLOW to play this!" from mplayer and in dropped frames.
>>
>> What was the first kernel where you didn't have the problem? Were you
>> using the 3.8 kernel earlier, and did you see the interactivity
>> problems there?
>
> I'm not sure, as I have been using -next more or less forever. But there
> certainly was a kernel which didn't have this problem.
>
>> What else was running on your desktop at the same time?
>
> Nothing, just the VM (a kernel update from the console) and mplayer2 on
> the host. This is more-or-less reproducible with these two.

Ok, running dd if=/dev/zero of=xxx in the guest is enough instead of the
"kernel update". A data=writeback mount doesn't help.

>> How was the file system mounted,
>
> Both are actually on a single device, /dev/sda5:
> /dev/sda5 on /win type ext4 (rw,noatime,data=ordered)
>
> Should I try writeback?
>
>> and can you send me the output of dumpe2fs -h /dev/XXX?
>
> dumpe2fs 1.42.7 (21-Jan-2013)
> Filesystem volume name:   <none>
> Last mounted on:          /win
> Filesystem UUID:          cd4bf4d2-bc32-4777-a437-ee24c4ee5f1b
> Filesystem magic number:  0xEF53
> Filesystem revision #:    1 (dynamic)
> Filesystem features:      has_journal ext_attr resize_inode dir_index
>                           filetype needs_recovery extent flex_bg
>                           sparse_super large_file huge_file uninit_bg
>                           dir_nlink extra_isize
> Filesystem flags:         signed_directory_hash
> Default mount options:    user_xattr acl
> Filesystem state:         clean
> Errors behavior:          Continue
> Filesystem OS type:       Linux
> Inode count:              30507008
> Block count:              122012416
> Reserved block count:     0
> Free blocks:              72021328
> Free inodes:              30474619
> First block:              0
> Block size:               4096
> Fragment size:            4096
> Reserved GDT blocks:      994
> Blocks per group:         32768
> Fragments per group:      32768
> Inodes per group:         8192
> Inode blocks per group:   512
> RAID stride:              32747
> Flex block group size:    16
> Filesystem created:       Fri Sep  7 20:44:21 2012
> Last mount time:          Thu Apr  4 12:22:01 2013
> Last write time:          Thu Apr  4 12:22:01 2013
> Mount count:              256
> Maximum mount count:      -1
> Last checked:             Sat Sep  8 21:13:28 2012
> Check interval:           0 (<none>)
> Lifetime writes:          1011 GB
> Reserved blocks uid:      0 (user root)
> Reserved blocks gid:      0 (group root)
> First inode:              11
> Inode size:               256
> Required extra isize:     28
> Desired extra isize:      28
> Journal inode:            8
> Default directory hash:   half_md4
> Directory Hash Seed:      b6ad3f8b-72ce-49d6-92cb-abccd7dbe98e
> Journal backup:           inode blocks
> Journal features:         journal_incompat_revoke
> Journal size:             128M
> Journal length:           32768
> Journal sequence:         0x00054dc7
> Journal start:            8193
>
>> Oh, and what options were you using when you kicked off the VM?
>
> qemu-kvm -k en-us -smp 2 -m 1200 -soundhw hda -usb -usbdevice tablet \
>     -net user -net nic,model=e1000 -serial pty -balloon virtio -hda x.img
>
>> The other thing that would be useful would be to enable the
>> jbd2_run_stats tracepoint and to send the output of the trace log when
>> you notice the interactivity problems.
>
> Ok, I will try.
>
> thanks,
> --
> js
> suse labs
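
For reference, a minimal sketch of one way the jbd2_run_stats tracepoint
could be captured via ftrace (assuming debugfs is mounted at
/sys/kernel/debug and a root shell; the log file name is arbitrary, and
trace-cmd record -e jbd2:jbd2_run_stats would do the same job):

  # enable only the jbd2_run_stats event
  echo 1 > /sys/kernel/debug/tracing/events/jbd2/jbd2_run_stats/enable
  # stream the trace to a file while the stall is being reproduced
  cat /sys/kernel/debug/tracing/trace_pipe > jbd2_run_stats.log &
  # ... run dd in the guest with mplayer on the host, wait for the stall ...
  echo 0 > /sys/kernel/debug/tracing/events/jbd2/jbd2_run_stats/enable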
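
The data=writeback test mentioned above would look roughly like this
(a sketch only, reusing the /dev/sda5 and /win from the mount output;
ext4 does not allow changing the data= mode on a plain remount, so the
filesystem has to be unmounted first):

  # unmount, then remount /dev/sda5 with writeback journaling
  umount /win
  mount -o noatime,data=writeback /dev/sda5 /win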