Message-ID: <51227FDA.7040000@linux.vnet.ibm.com>
Date: Mon, 18 Feb 2013 13:24:10 -0600
From: Seth Jennings <sjenning@...ux.vnet.ibm.com>
To: Ric Mason <ric.masonn@...il.com>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Nitin Gupta <ngupta@...are.org>,
Minchan Kim <minchan@...nel.org>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
Dan Magenheimer <dan.magenheimer@...cle.com>,
Robert Jennings <rcj@...ux.vnet.ibm.com>,
Jenifer Hopper <jhopper@...ibm.com>,
Mel Gorman <mgorman@...e.de>,
Johannes Weiner <jweiner@...hat.com>,
Rik van Riel <riel@...hat.com>,
Larry Woodman <lwoodman@...hat.com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Dave Hansen <dave@...ux.vnet.ibm.com>,
Joe Perches <joe@...ches.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, devel@...verdev.osuosl.org
Subject: Re: [PATCHv5 4/8] zswap: add to mm/
On 02/15/2013 10:04 PM, Ric Mason wrote:
> On 02/14/2013 02:38 AM, Seth Jennings wrote:
<snip>
>> + * The statistics below are not protected from concurrent access for
>> + * performance reasons so they may not be a 100% accurate. However,
>> + * the do provide useful information on roughly how many times a
>
> s/the/they
Ah yes, thanks :)
>
>> + * certain event is occurring.
>> +*/
>> +static u64 zswap_pool_limit_hit;
>> +static u64 zswap_reject_compress_poor;
>> +static u64 zswap_reject_zsmalloc_fail;
>> +static u64 zswap_reject_kmemcache_fail;
>> +static u64 zswap_duplicate_entry;
>> +
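(As an aside, counters like these usually end up exported read-only
through debugfs so they can be watched at runtime. A minimal sketch of
what that could look like -- the "zswap" directory name and the init
hook are assumptions, not part of this patch:

	#include <linux/debugfs.h>

	static struct dentry *zswap_debugfs_root;

	static int __init zswap_debugfs_init(void)
	{
		zswap_debugfs_root = debugfs_create_dir("zswap", NULL);
		if (!zswap_debugfs_root)
			return -ENOMEM;

		/* 0444: world-readable, read-only stats */
		debugfs_create_u64("pool_limit_hit", 0444,
				   zswap_debugfs_root, &zswap_pool_limit_hit);
		debugfs_create_u64("duplicate_entry", 0444,
				   zswap_debugfs_root, &zswap_duplicate_entry);
		return 0;
	}

and similarly for the other counters.)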
>> +/*********************************
>> +* tunables
>> +**********************************/
>> +/* Enable/disable zswap (disabled by default, fixed at boot for now) */
>> +static bool zswap_enabled;
>> +module_param_named(enabled, zswap_enabled, bool, 0);
>
> please document in Documentation/kernel-parameters.txt.
Will do.
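For the enabled knob, probably something along these lines, using the
standard <module>.<param> form for built-in params (just a sketch,
exact wording to be settled when I write the patch):

	zswap.enabled=	[KNL] Enable/disable the compressed swap cache
			(fixed at boot for now).
			Format: <bool>
			Default: 0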
>
>> +
>> +/* Compressor to be used by zswap (fixed at boot for now) */
>> +#define ZSWAP_COMPRESSOR_DEFAULT "lzo"
>> +static char *zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT;
>> +module_param_named(compressor, zswap_compressor, charp, 0);
>
> ditto
ditto
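i.e. roughly this for the compressor (again, just a sketch):

	zswap.compressor= [KNL] Compressor to be used by zswap.
			Format: <string>
			Default: "lzo"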
>
>> +
<snip>
>> +/* invalidates all pages for the given swap type */
>> +static void zswap_frontswap_invalidate_area(unsigned type)
>> +{
>> + struct zswap_tree *tree = zswap_trees[type];
>> + struct rb_node *node, *next;
>> + struct zswap_entry *entry;
>> +
>> + if (!tree)
>> + return;
>> +
>> + /* walk the tree and free everything */
>> + spin_lock(&tree->lock);
>> + node = rb_first(&tree->rbroot);
>> + while (node) {
>> + entry = rb_entry(node, struct zswap_entry, rbnode);
>> + zs_free(tree->pool, entry->handle);
>> + next = rb_next(node);
>> + zswap_entry_cache_free(entry);
>> + node = next;
>> + }
>> + tree->rbroot = RB_ROOT;
>
> Why don't we need rb_erase() for each node?
We are freeing the entire tree here. try_to_unuse() in the swapoff
syscall should have already emptied the tree, but this is here for
completeness.

rb_erase() does extra work, like rebalancing the tree, which is just
wasted time when we are about to free every node anyway. We are
holding the tree lock here, so we can be sure no one else accesses
the tree while it is in this transient broken state.
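
For comparison, an rb_erase()-based teardown would look roughly like
this (untested sketch, reusing the locals from
zswap_frontswap_invalidate_area() above); every iteration pays for a
rebalance that the single rbroot reset makes unnecessary:

	spin_lock(&tree->lock);
	while ((node = rb_first(&tree->rbroot))) {
		entry = rb_entry(node, struct zswap_entry, rbnode);
		rb_erase(node, &tree->rbroot);	/* rebalances the tree */
		zs_free(tree->pool, entry->handle);
		zswap_entry_cache_free(entry);
	}
	spin_unlock(&tree->lock);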
Thanks,
Seth