Message-ID: <5130DA05.4070601@redhat.com>
Date:	Fri, 01 Mar 2013 10:40:37 -0600
From:	Eric Sandeen <sandeen@...hat.com>
To:	"Theodore Ts'o" <tytso@....edu>
CC:	Dave Jones <davej@...hat.com>,
	"gnehzuil.liu" <gnehzuil.liu@...il.com>,
	Zheng Liu <wenqing.lz@...bao.com>,
	"linux-ext4@...r.kernel.org" <linux-ext4@...r.kernel.org>
Subject: Re: [PATCH] ext4: optimize ext4_es_shrink()

On 2/28/13 11:00 PM, Theodore Ts'o wrote:
> When the system is under memory pressure, ext4_es_shrink() will get
> called very often.  So optimize returning the number of items in the
> file system's extent status cache by keeping a per-filesystem count,
> instead of calculating it each time by scanning all of the inodes in
> the extent status cache.
> 
> Also rename the slab used for the extent status cache to be
> "ext4_extent_status" so it's obviousl the slab in question is created
> by ext4.
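
For anyone skimming the thread, the shape of the change is roughly the
following (a sketch against the 3.x single-callback shrinker API; the
names s_es_count and s_es_shrinker are illustrative stand-ins, not
necessarily what the patch uses):

#include <linux/shrinker.h>
#include <linux/atomic.h>

static int ext4_es_shrink(struct shrinker *shrink, struct shrink_control *sc)
{
	struct ext4_sb_info *sbi = container_of(shrink, struct ext4_sb_info,
						s_es_shrinker);

	/* nr_to_scan == 0 is a size query: answer it from the cached
	 * per-filesystem count, O(1) instead of an all-inode walk. */
	if (sc->nr_to_scan == 0)
		return atomic_read(&sbi->s_es_count);

	/* ... otherwise reclaim up to sc->nr_to_scan entries ... */
	return atomic_read(&sbi->s_es_count);
}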

Certainly better than walking an arbitrarily long list.  :)
So:

Reviewed-by: Eric Sandeen <sandeen@...hat.com>

I was wondering about a couple of things, though:

1) should this one be scaled by the vfs_cache_pressure sysctl?
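
For comparison, the dentry cache scales the count it reports by that
sysctl. A sketch of what (1) could look like here, reusing the
hypothetical s_es_count from the snippet above:

	if (sc->nr_to_scan == 0)
		return (atomic_read(&sbi->s_es_count) / 100) *
			sysctl_vfs_cache_pressure;

With that scaling, 100 is neutral, lower values make the cache look
smaller to the VM, and higher values make it a more attractive reclaim
target.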

2) Also, given that this is only for shrinker accounting, do we need the
precision of an atomic counter? I see that quota uses a per-cpu counter;
would a percpu counter be any more efficient here?  I'll follow up
with a patch.
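
Sketch of the percpu_counter variant (the field name s_es_count_pc is
hypothetical): updates touch only a CPU-local slot, and the shrinker
settles for an approximate, non-negative total, which is all this
accounting needs.

#include <linux/percpu_counter.h>

	percpu_counter_init(&sbi->s_es_count_pc, 0);	/* at mount time */

	percpu_counter_inc(&sbi->s_es_count_pc);	/* entry inserted */
	percpu_counter_dec(&sbi->s_es_count_pc);	/* entry removed */

	/* in the shrinker: cheap approximate read, never negative */
	return percpu_counter_read_positive(&sbi->s_es_count_pc);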

-Eric
