Date:	Sat, 28 Jul 2007 21:33:59 -0400
From:	Rik van Riel <riel@...hat.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
CC:	Mike Galbraith <efault@....de>, Ingo Molnar <mingo@...e.hu>,
	Frank Kingswood <frank@...gswood-consulting.co.uk>,
	Andi Kleen <andi@...stfloor.org>,
	Nick Piggin <nickpiggin@...oo.com.au>,
	Ray Lee <ray-lk@...rabbit.org>,
	Jesper Juhl <jesper.juhl@...il.com>,
	ck list <ck@....kolivas.org>, Paul Jackson <pj@....com>,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: RFT: updatedb "morning after" problem [was: Re: -mm merge plans
 for 2.6.23]

Andrew Morton wrote:

> What I think is killing us here is the blockdev pagecache: the pagecache
> which backs those directory entries and inodes.  These pages get read
> multiple times because they hold multiple directory entries and multiple
> inodes.  These multiple touches will put those pages onto the active list
> so they stick around for a long time and everything else gets evicted.
> 
> I've never been very sure about this policy for the metadata pagecache.  We
> read the filesystem objects into the dcache and icache and then we won't
> read from that page again for a long time (I expect).  But the page will
> still hang around for a long time.
> 
> It could be that we should leave those pages inactive.

Good idea for updatedb.

However, it may be a bad idea for files that are often
written to.  Turning an inode write into a read plus a
write does not sound like such a hot idea; we really
want to keep those metadata pages in the cache.
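
For reference, the promotion being described here is essentially the
two-touch logic of mark_page_accessed() in mm/swap.c; roughly (a
simplified sketch, locking and fastcall annotations left out):

void mark_page_accessed(struct page *page)
{
	if (!PageActive(page) && PageReferenced(page) && PageLRU(page)) {
		/* Second touch: promote to the active list. */
		activate_page(page);
		ClearPageReferenced(page);
	} else if (!PageReferenced(page)) {
		/* First touch: just remember that we saw it. */
		SetPageReferenced(page);
	}
}

Every extra directory entry or inode read off the same blockdev page
is another trip through this path, which is how those pages end up
on the active list.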

I think what you need is to ignore multiple references
to the same page when they all happen within one time
interval, and only count references that happen in
multiple intervals.
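
Something like the following (all names hypothetical, just to
illustrate the bookkeeping, nothing like this exists today):

/*
 * Collapse all touches within one interval into a single reference;
 * only promote a page that has been referenced in two or more
 * distinct intervals.
 */
struct page_interval {
	unsigned long last_interval;	/* interval of last counted touch */
	unsigned int  nr_intervals;	/* distinct intervals with a touch */
};

static void touch_page_interval(struct page *page,
				struct page_interval *pi,
				unsigned long now_interval)
{
	if (pi->last_interval == now_interval)
		return;				/* already counted this interval */

	pi->last_interval = now_interval;
	if (++pi->nr_intervals >= 2)
		activate_page(page);		/* seen in multiple intervals */
}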

The use-once cleanup (which takes a page flag for PG_new,
I know...) would solve that problem.

However, it would introduce the problem of having to scan
all the pages on the list before a page becomes freeable.
We would have to add some background scanning (or a separate
list for PG_new pages) to make the initial pageout run use
an acceptable amount of CPU time.
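
A rough sketch of the separate-list variant (PG_new, the list and
the helpers below are all made up for illustration):

static LIST_HEAD(new_pages);			/* pages still marked PG_new */

static void scan_new_pages(void)
{
	struct page *page, *tmp;

	list_for_each_entry_safe(page, tmp, &new_pages, lru) {
		if (!page_was_referenced(page))	/* hypothetical test */
			continue;		/* still use-once: cheap to free */

		/* Touched again after the initial fault: not use-once. */
		clear_page_new(page);		/* hypothetical PG_new clear */
		list_del_init(&page->lru);
		lru_cache_add(page);		/* back onto the normal LRU */
	}
}

The background scanner only ever walks the PG_new pages, so the
first reclaim pass does not have to sweep the whole inactive list
before a use-once page becomes freeable.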

Not sure that complexity will be worth it...

-- 
Politics is the struggle between those who want to make their country
the best in the world, and those who believe it already is.  Each group
calls the other unpatriotic.
