Date:	Fri, 01 Jun 2012 13:26:26 +0100
From:	Steven Whitehouse <swhiteho@...hat.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	linux-kernel@...r.kernel.org, cluster-devel@...hat.com,
	Al Viro <viro@...iv.linux.org.uk>, nstraz@...hat.com
Subject: Re: seq_file: Use larger buffer to reduce time traversing lists

Hi,

On Fri, 2012-06-01 at 14:14 +0200, Eric Dumazet wrote:
> On Fri, 2012-06-01 at 14:10 +0200, Eric Dumazet wrote:
> > On Fri, 2012-06-01 at 11:39 +0100, Steven Whitehouse wrote:
> > > I've just been taking a look at the seq_read() code, since we've noticed
> > > that dumping files with large numbers of records can take considerable
> > > time. This is due to seq_read() using a buffer which is at most a
> > > single page in size, and to the fact that it has to find its place
> > > again on every call to seq_read(). That makes it rather inefficient.
> > > 
> > > As an example, I created a GFS2 filesystem with 100k inodes in it, and
> > > then ran ls -l to get a decent number of cached inodes. This results in
> > > there being approx 400k lines in the debugfs file containing GFS2's
> > > glocks. I then timed how long it takes to read this file:
> > > 
> > > [root@...woon mnt]# time dd if=/sys/kernel/debug/gfs2/unity\:myfs/glocks
> > > of=/dev/null bs=1M
> > > 0+5769 records in
> > > 0+5769 records out
> > > 23273958 bytes (23 MB) copied, 63.3681 s, 367 kB/s
> > 
> > What time do you get if you do
> > 
> > time dd if=/sys/kernel/debug/gfs2/unity\:myfs/glocks of=/dev/null bs=4k
> > 
> > This patch seems the wrong way to me.
> > 
> > seq_read(size = 1MB) should perform many copy_to_user() calls instead of a single one.
> > 
> > Instead of doing kmalloc(m->size <<= 1, GFP_KERNEL) each time we overflow the buffer,
> > we should flush its content to user space.
> > 
> > 
> 
> by the way, is the following command even working ?
> 
> time dd if=/sys/kernel/debug/gfs2/unity\:myfs/glocks of=/dev/null bs=16M
> 
> I guess not, it probably returns -ENOMEM
> 
> 
> 

Why would it return -ENOMEM? It works for me: at worst it will fall back
to a single-page buffer unless we are really short of memory, and in
that case all bets are off.
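For readers following the thread, here is a minimal userspace sketch of the
grow-on-overflow strategy being debated. The struct and helper names are
hypothetical; the real logic lives in fs/seq_file.c, where an overflowing
record triggers the kmalloc(m->size <<= 1, GFP_KERNEL) retry Eric refers to,
and his alternative would instead flush the partial buffer to user space
with copy_to_user() while keeping the buffer at one page.

```c
/* Minimal userspace sketch (hypothetical names) of the grow-on-overflow
 * strategy in fs/seq_file.c that this thread discusses.  When a record
 * does not fit, the buffer is discarded and the write retried with one
 * twice the size -- the kmalloc(m->size <<= 1, GFP_KERNEL) path. */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct seq_buf {
	char   *buf;
	size_t  size;   /* current allocation */
	size_t  count;  /* bytes used */
};

/* Append one record; -1 means it overflowed the buffer. */
static int emit_record(struct seq_buf *m, const char *rec)
{
	size_t len = strlen(rec);

	if (m->count + len > m->size)
		return -1;
	memcpy(m->buf + m->count, rec, len);
	m->count += len;
	return 0;
}

/* Grow-on-overflow, modelling the current seq_read() behaviour.
 * In the kernel, a failed large allocation makes seq_read() fall
 * back to a single-page buffer, as Steve notes above. */
static int emit_with_doubling(struct seq_buf *m, const char *rec)
{
	while (emit_record(m, rec) < 0) {
		free(m->buf);
		m->size <<= 1;
		m->buf = malloc(m->size);
		if (!m->buf)
			return -1;
		m->count = 0;   /* output is regenerated from scratch */
	}
	return 0;
}
```

Eric's point is that with a large read() size this doubling (plus the
restart of the traversal) is wasted work that flushing the already-formatted
bytes to user space would avoid.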

Steve.


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
