Date:	Thu, 23 Jan 2014 14:07:28 +0900
From:	Minchan Kim <minchan@...nel.org>
To:	Johannes Weiner <hannes@...xchg.org>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Andi Kleen <andi@...stfloor.org>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Bob Liu <bob.liu@...cle.com>,
	Christoph Hellwig <hch@...radead.org>,
	Dave Chinner <david@...morbit.com>,
	Greg Thelen <gthelen@...gle.com>,
	Hugh Dickins <hughd@...gle.com>, Jan Kara <jack@...e.cz>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Luigi Semenzato <semenzato@...gle.com>,
	Mel Gorman <mgorman@...e.de>,
	Metin Doslu <metin@...usdata.com>,
	Michel Lespinasse <walken@...gle.com>,
	Ozgun Erdogan <ozgun@...usdata.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Rik van Riel <riel@...hat.com>,
	Roman Gushchin <klamm@...dex-team.ru>,
	Ryan Mallon <rmallon@...il.com>, Tejun Heo <tj@...nel.org>,
	Vlastimil Babka <vbabka@...e.cz>, linux-mm@...ck.org,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [patch 5/9] mm + fs: prepare for non-page entries in page cache
 radix trees

Hi Hannes,

On Wed, Jan 22, 2014 at 12:47:44PM -0500, Johannes Weiner wrote:
> On Mon, Jan 13, 2014 at 11:01:32AM +0900, Minchan Kim wrote:
> > On Fri, Jan 10, 2014 at 01:10:39PM -0500, Johannes Weiner wrote:
> > > shmem mappings already contain exceptional entries where swap slot
> > > information is remembered.
> > > 
> > > To be able to store eviction information for regular page cache,
> > > prepare every site dealing with the radix trees directly to handle
> > > entries other than pages.
> > > 
> > > The common lookup functions will filter out non-page entries and
> > > return NULL for page cache holes, just as before.  But provide a raw
> > > version of the API which returns non-page entries as well, and switch
> > > shmem over to use it.
> > > 
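[Editorial aside: the filtering/raw split described above can be sketched in user space. The tagging scheme, helper names, and filter below are illustrative assumptions, not the kernel's actual definitions — the real radix tree distinguishes exceptional entries by the low bits of the entry pointer, tested via radix_tree_exceptional_entry().]

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative analog of exceptional-entry tagging: entries with
 * bit 1 set are "exceptional" (e.g. shmem swap slots or shadow
 * entries); everything else is treated as a plain page pointer. */
#define EXCEPTIONAL_ENTRY 2UL

static inline void *make_exceptional(unsigned long value)
{
	return (void *)((value << 2) | EXCEPTIONAL_ENTRY);
}

static inline int is_exceptional(void *entry)
{
	return ((unsigned long)entry & EXCEPTIONAL_ENTRY) != 0;
}

/* The common-lookup behavior described in the changelog: non-page
 * entries are filtered out and NULL is returned, as for a hole.
 * A raw lookup would return the entry as-is instead. */
static inline void *filter_page_entry(void *entry)
{
	return is_exceptional(entry) ? NULL : entry;
}
```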
> > > Signed-off-by: Johannes Weiner <hannes@...xchg.org>
> > Reviewed-by: Minchan Kim <minchan@...nel.org>
> 
> Thanks, Minchan!
> 
> > > @@ -890,6 +973,73 @@ repeat:
> > >  EXPORT_SYMBOL(find_or_create_page);
> > >  
> > >  /**
> > > + * __find_get_pages - gang pagecache lookup
> > > + * @mapping:	The address_space to search
> > > + * @start:	The starting page index
> > > + * @nr_pages:	The maximum number of pages
> > > + * @pages:	Where the resulting pages are placed
> > 
> > where is @indices?
> 
> Fixed :)
> 
> > > @@ -894,6 +894,53 @@ EXPORT_SYMBOL(__pagevec_lru_add);
> > >  
> > >  /**
> > >   * pagevec_lookup - gang pagecache lookup
> > 
> >       __pagevec_lookup?
> > 
> > > + * @pvec:	Where the resulting entries are placed
> > > + * @mapping:	The address_space to search
> > > + * @start:	The starting entry index
> > > + * @nr_pages:	The maximum number of entries
> > 
> >       missing @indices?
> > 
> > > + *
> > > + * pagevec_lookup() will search for and return a group of up to
> > > + * @nr_pages pages and shadow entries in the mapping.  All entries are
> > > + * placed in @pvec.  pagevec_lookup() takes a reference against actual
> > > + * pages in @pvec.
> > > + *
> > > + * The search returns a group of mapping-contiguous entries with
> > > + * ascending indexes.  There may be holes in the indices due to
> > > + * not-present entries.
> > > + *
> > > + * pagevec_lookup() returns the number of entries which were found.
> > 
> >       __pagevec_lookup
> 
> Yikes, all three fixed.
> 
> > > @@ -22,6 +22,22 @@
> > >  #include <linux/cleancache.h>
> > >  #include "internal.h"
> > >  
> > > +static void clear_exceptional_entry(struct address_space *mapping,
> > > +				    pgoff_t index, void *entry)
> > > +{
> > > +	/* Handled by shmem itself */
> > > +	if (shmem_mapping(mapping))
> > > +		return;
> > > +
> > > +	spin_lock_irq(&mapping->tree_lock);
> > > +	/*
> > > +	 * Regular page slots are stabilized by the page lock even
> > > +	 * without the tree itself locked.  These unlocked entries
> > > +	 * need verification under the tree lock.
> > > +	 */
> > 
> > Could you explain why repeated spin_lock with irq disabled isn't problem
> > in truncation path?
> 
> To modify the cache tree, we have to take the IRQ-safe tree_lock, this
> is no different than removing a page (see truncate_complete_page).

I meant we could batch the irq lock/unlock with a periodic irq release,
since clear_exceptional_entry is always called after a gang pagecache
lookup.

Just a comment about an optimization, so it shouldn't be critical for
merging; we could do it in the future if it's really a problem for
scalability.
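
[Editorial aside: a user-space sketch of that batching idea. In the
kernel the lock would be the IRQ-safe tree_lock; here a plain mutex
stands in, and the RELAX_PERIOD constant and helper names are
illustrative assumptions, not proposed kernel interfaces.]

```c
#include <pthread.h>
#include <stddef.h>

/* Instead of taking and dropping the lock once per entry, hold it
 * across a batch and briefly release it every RELAX_PERIOD entries
 * so other lockers (and, in the kernel, interrupts) can get in. */
#define RELAX_PERIOD 16

static pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER;

static void clear_entry_locked(void **entries, size_t i)
{
	entries[i] = NULL;	/* stand-in for clearing a tree slot */
}

void clear_batch(void **entries, size_t nr)
{
	size_t i;

	pthread_mutex_lock(&tree_lock);
	for (i = 0; i < nr; i++) {
		clear_entry_locked(entries, i);
		if ((i + 1) % RELAX_PERIOD == 0) {
			/* periodic release bounds the lock hold time */
			pthread_mutex_unlock(&tree_lock);
			pthread_mutex_lock(&tree_lock);
		}
	}
	pthread_mutex_unlock(&tree_lock);
}
```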

-- 
Kind regards,
Minchan Kim