Date:	Wed, 4 Jan 2012 14:05:47 -0800
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	kosaki.motohiro@...il.com
Cc:	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	David Rientjes <rientjes@...gle.com>,
	Minchan Kim <minchan.kim@...il.com>,
	Mel Gorman <mel@....ul.ie>,
	Johannes Weiner <jweiner@...hat.com>
Subject: Re: [PATCH 1/2] mm,mlock: drain pagevecs asynchronously

On Sun,  1 Jan 2012 02:30:24 -0500
kosaki.motohiro@...il.com wrote:

> Because lru_add_drain_all() spends much time.

Those LRU pagevecs are horrid things.  They add considerable code and
conceptual complexity, they add pointless overhead on uniprocessor
systems, and the way in which they leave LRU pages floating around off
the LRU lists is rather maddening.

So the best way to fix all of this, as well as the problem we're
observing, is, I hope, to remove them completely.

They've been in there for ~10 years and at the time they were quite
beneficial in reducing lru_lock contention, hold times, acquisition
frequency, etc.

The approach to take here is to prepare the patches which eliminate
lru_*_pvecs then identify the problems which occur as a result, via
code inspection and runtime testing.  Then fix those up.

Many sites which take lru_lock are already batching the operation.
It's a matter of hunting down the sites which take the lock
once-per-page and, where they are high-frequency, batching them up.

Converting readahead to batch the locking will be pretty simple
(read_pages(), mpage_readpages(), others).  That will fix pagefaults
too.  

rotate_reclaimable_page() can be batched by batching
end_page_writeback(): a bio contains many pages already.

deactivate_page() can be batched too - invalidate_mapping_pages() is
already working on large chunks of pages.

Those three cases are fairly simple - we just didn't try, because the
lru_*_pvecs were there to do the work for us.
