Date:	Mon, 09 Jan 2012 15:42:46 -0500
From:	KOSAKI Motohiro <kosaki.motohiro@...il.com>
To:	Hugh Dickins <hughd@...gle.com>
CC:	kosaki.motohiro@...il.com,
	Andrew Morton <akpm@...ux-foundation.org>,
	Minchan Kim <minchan.kim@...il.com>,
	Rik van Riel <riel@...hat.com>,
	Shaohua Li <shaohua.li@...el.com>,
	Eric Dumazet <eric.dumazet@...il.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Michel Lespinasse <walken@...gle.com>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 2/2] SHM_UNLOCK: fix Unevictable pages stranded after
 swap



2012/1/6 Hugh Dickins <hughd@...gle.com>:
> Commit cc39c6a9bbde "mm: account skipped entries to avoid looping in
> find_get_pages" correctly fixed an infinite loop; but left a problem
> that find_get_pages() on shmem would return 0 (appearing to callers
> to mean end of tree) when it meets a run of nr_pages swap entries.
>
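(For anyone following along, the caller pattern at issue looks roughly
like this; a simplified sketch of a hypothetical pagevec_lookup() user,
not the actual kernel code, so names and details may differ:)

	static void scan_mapping_sketch(struct address_space *mapping)
	{
		struct pagevec pvec;
		pgoff_t next = 0;
		int i;

		pagevec_init(&pvec, 0);
		while (pagevec_lookup(&pvec, mapping, next, PAGEVEC_SIZE)) {
			for (i = 0; i < pagevec_count(&pvec); i++) {
				struct page *page = pvec.pages[i];

				next = page->index + 1;
				/* ... examine page, move between lrus ... */
			}
			pagevec_release(&pvec);
			cond_resched();
		}
		/*
		 * If one batch consisted entirely of swap entries,
		 * find_get_pages() returned 0 and we stopped here,
		 * leaving the rest of the mapping unscanned.
		 */
	}
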
> The only uses of find_get_pages() on shmem are via pagevec_lookup(),
> called from invalidate_mapping_pages(), and from shmctl SHM_UNLOCK's
> scan_mapping_unevictable_pages().  The first is already commented,
> and not worth worrying about; but the second can leave pages on the
> Unevictable list after an unusual sequence of swapping and locking.
>
> Fix that by using shmem_find_get_pages_and_swap() (then ignoring
> the swap) instead of pagevec_lookup().
>
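(As I read it, the lookup loop then becomes something like the
following; this is my own simplified reconstruction, so the helper
names such as shmem_deswap_pagevec() and the exact bookkeeping may
differ from the patch:)

	void shmem_unlock_mapping(struct address_space *mapping)
	{
		struct pagevec pvec;
		pgoff_t indices[PAGEVEC_SIZE];
		pgoff_t index = 0;

		pagevec_init(&pvec, 0);
		while (!mapping_unevictable(mapping)) {
			/*
			 * Avoid pagevec_lookup(): it returns 0 as if the
			 * tree were finished when it hits a row of swap
			 * entries.
			 */
			pvec.nr = shmem_find_get_pages_and_swap(mapping, index,
						PAGEVEC_SIZE, pvec.pages, indices);
			if (!pvec.nr)
				break;
			index = indices[pvec.nr - 1] + 1;
			/* drop the swap entries, keep only the pages */
			shmem_deswap_pagevec(&pvec);
			check_move_unevictable_pages(pvec.pages, pvec.nr);
			pagevec_release(&pvec);
			cond_resched();
		}
	}
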
> But I don't want to contaminate vmscan.c with shmem internals, nor
> shmem.c with LRU locking.  So move scan_mapping_unevictable_pages()
> into shmem.c, renaming it shmem_unlock_mapping(); and rename
> check_move_unevictable_page() to check_move_unevictable_pages(),
> looping down an array of pages, oftentimes under the same lock.
>
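(My mental model of the batched helper, schematically; the actual LRU
bookkeeping done in the patch is elided here:)

	void check_move_unevictable_pages(struct page **pages, int nr_pages)
	{
		struct zone *zone = NULL;
		int i;

		for (i = 0; i < nr_pages; i++) {
			struct page *page = pages[i];
			struct zone *pagezone = page_zone(page);

			/* take the lru_lock once per run of same-zone pages */
			if (pagezone != zone) {
				if (zone)
					spin_unlock_irq(&zone->lru_lock);
				zone = pagezone;
				spin_lock_irq(&zone->lru_lock);
			}

			if (!PageLRU(page) || !PageUnevictable(page))
				continue;

			if (page_evictable(page, NULL)) {
				/*
				 * ClearPageUnevictable, move the page to the
				 * appropriate evictable lru, fix up counters.
				 */
			}
		}
		if (zone)
			spin_unlock_irq(&zone->lru_lock);
	}
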
> Leave out the "rotate unevictable list" block: that's a leftover
> from when this was used for /proc/sys/vm/scan_unevictable_pages,
> whose flawed handling involved looking at pages at tail of LRU.
>
> Was there significance to the sequence first ClearPageUnevictable,
> then test page_evictable, then SetPageUnevictable here?  I think
> not, we're under LRU lock, and have no barriers between those.

If I understand correctly, this is not exactly right: the PG_mlocked
operations are not protected by the lru lock. So I think we have three
choices (a rough picture of the race follows the list):

1) Add retry logic to check_move_unevictable_pages() so that it puts
    pages back onto the correct lru itself.
2) Have check_move_unevictable_pages() unconditionally move the pages
    onto the evictable lru, and let vmscan put them back onto the
    correct lru later.
3) Protect the PG_mlocked operations with the lru lock.
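
Schematically, the kind of interleaving I am worried about (simplified;
the mlock side is shown only as the PG_mlocked flag update):

	CPU0 (check_move_unevictable_pages)     CPU1 (mlock path)
	spin_lock_irq(&zone->lru_lock)
	page_evictable(page, NULL) -> true
	                                        set PG_mlocked (no lru_lock)
	move page onto an evictable lru
	spin_unlock_irq(&zone->lru_lock)

So the result of the page_evictable() test can already be stale by the
time the page is moved.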


The other parts look fine to me.

--
