Message-Id: <20120118143718.663b8cf5.akpm@linux-foundation.org>
Date:	Wed, 18 Jan 2012 14:37:18 -0800
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Hugh Dickins <hughd@...gle.com>
Cc:	KOSAKI Motohiro <kosaki.motohiro@...il.com>,
	Minchan Kim <minchan.kim@...il.com>,
	Rik van Riel <riel@...hat.com>,
	Shaohua Li <shaohua.li@...el.com>,
	Eric Dumazet <eric.dumazet@...il.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Michel Lespinasse <walken@...gle.com>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	stable@...r.kernel.org
Subject: Re: [PATCH 1/2] SHM_UNLOCK: fix long unpreemptible section

On Sat, 14 Jan 2012 16:18:43 -0800 (PST)
Hugh Dickins <hughd@...gle.com> wrote:

> scan_mapping_unevictable_pages() is used to make SysV SHM_LOCKed pages
> evictable again once the shared memory is unlocked.  It does this with
> pagevec_lookup()s across the whole object (which might occupy most of
> memory), and takes 300ms to unlock 7GB here.  A cond_resched() every
> PAGEVEC_SIZE pages would be good.
> 
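(To illustrate the pattern being described: the scan walks the mapping one
pagevec at a time, so a cond_resched() after each batch bounds the
unpreemptible stretch to PAGEVEC_SIZE pages.  A simplified sketch of the
loop shape, not the literal mm/vmscan.c code:)

	#include <linux/mm.h>
	#include <linux/pagevec.h>
	#include <linux/sched.h>

	void scan_mapping_unevictable_pages(struct address_space *mapping)
	{
		struct pagevec pvec;
		pgoff_t next = 0;

		pagevec_init(&pvec, 0);
		while (pagevec_lookup(&pvec, mapping, next, PAGEVEC_SIZE)) {
			/* ... check each page, move it off the unevictable lru ... */
			next = pvec.pages[pagevec_count(&pvec) - 1]->index + 1;
			pagevec_release(&pvec);
			cond_resched();	/* bound the unpreemptible section */
		}
	}
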
> However, KOSAKI-san points out that this is called under shmem.c's
> info->lock, and it's also under shm.c's shm_lock(), both spinlocks.
> There is no strong reason for that: we need to take these pages off
> the unevictable list soonish, but those locks are not required for it.
> 
> So move the call to scan_mapping_unevictable_pages() from shmem.c's
> unlock handling up to shm.c's unlock handling.  Remove the recently
> added barrier, not needed now we have spin_unlock() before the scan.
> 
> Use get_file(), with subsequent fput(), to make sure we have a
> reference to mapping throughout scan_mapping_unevictable_pages():
> that's something that was previously guaranteed by the shm_lock().
> 
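(The get_file()/fput() pairing described above amounts to roughly this
shape in shm.c's SHM_UNLOCK handling -- a sketch, not the exact patch:)

	struct file *shm_file = shp->shm_file;

	shp->shm_perm.mode &= ~SHM_LOCKED;
	shp->mlock_user = NULL;
	get_file(shm_file);	/* pin the file, and hence the mapping */
	shm_unlock(shp);	/* drop the spinlock before the long scan */
	scan_mapping_unevictable_pages(shm_file->f_mapping);
	fput(shm_file);		/* drop our reference */
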
> Remove shmctl's lru_add_drain_all(): we don't fault in pages at
> SHM_LOCK time, and we lazily discover them to be Unevictable later,
> so it serves no purpose for SHM_LOCK; and serves no purpose for
> SHM_UNLOCK, since pages still on pagevec are not marked Unevictable.
> 
> The original code avoided redundant rescans by checking VM_LOCKED
> flag at its level: now avoid them by checking shp's SHM_LOCKED.
> 
> The original code called scan_mapping_unevictable_pages() on a
> locked area at shm_destroy() time: perhaps we once had accounting
> cross-checks which required that, but not now, so skip the overhead
> and just let inode eviction deal with them.
> 
> Put check_move_unevictable_page() and scan_mapping_unevictable_pages()
> under CONFIG_SHMEM (with stub for the TINY case when ramfs is used),
> more as comment than to save space; comment them used for SHM_UNLOCK.
> 
> Signed-off-by: Hugh Dickins <hughd@...gle.com>
> Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
> Cc: stable@...r.kernel.org [back to 2.6.32 but will need respins]

Is -stable backporting really warranted?  AFAICT the only thing we're
fixing here is a long latency glitch during a rare operation on large
machines.  Usually it will be on only one CPU, too.

"[PATCH 2/2] SHM_UNLOCK: fix Unevictable pages stranded after swap"
does look like -stable material, so omitting 1/2 will probably screw
things up :(


> Resend in the hope that it can get into 3.3.

That we can do ;)

>
> ...
>
> --- mmotm.orig/mm/vmscan.c	2012-01-06 10:04:54.000000000 -0800
> +++ mmotm/mm/vmscan.c	2012-01-06 10:06:13.941943604 -0800
> @@ -3499,6 +3499,7 @@ int page_evictable(struct page *page, st
>  	return 1;
>  }
>  
> +#ifdef CONFIG_SHMEM
>  /**
>   * check_move_unevictable_page - check page for evictability and move to appropriate zone lru list
>   * @page: page to check evictability and move to appropriate lru list
> @@ -3509,6 +3510,8 @@ int page_evictable(struct page *page, st
>   *
>   * Restrictions: zone->lru_lock must be held, page must be on LRU and must
>   * have PageUnevictable set.
> + *
> + * This function is only used for SysV IPC SHM_UNLOCK.
>   */
>  static void check_move_unevictable_page(struct page *page, struct zone *zone)
>  {
> @@ -3545,6 +3548,8 @@ retry:
>   *
>   * Scan all pages in mapping.  Check unevictable pages for
>   * evictability and move them to the appropriate zone lru list.
> + *
> + * This function is only used for SysV IPC SHM_UNLOCK.
>   */
>  void scan_mapping_unevictable_pages(struct address_space *mapping)
>  {
> @@ -3590,9 +3595,14 @@ void scan_mapping_unevictable_pages(stru
>  		pagevec_release(&pvec);
>  
>  		count_vm_events(UNEVICTABLE_PGSCANNED, pg_scanned);
> +		cond_resched();
>  	}
> -
>  }
> +#else
> +void scan_mapping_unevictable_pages(struct address_space *mapping)
> +{
> +}
> +#endif /* CONFIG_SHMEM */

Inlining the CONFIG_SHMEM=n stub would have been more efficient.
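
Something along these lines in the header that declares it -- a sketch,
assuming the declaration lives in include/linux/swap.h:

	#ifdef CONFIG_SHMEM
	extern void scan_mapping_unevictable_pages(struct address_space *mapping);
	#else
	static inline void scan_mapping_unevictable_pages(struct address_space *mapping)
	{
	}
	#endif

That would let CONFIG_SHMEM=n builds compile the call away entirely,
instead of emitting and calling an empty out-of-line function.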
--