Message-ID: <CAFLCcBoUGPDuzZMo8OGKWK2z8=405VUypjU0k82g_eWYsMSnyg@mail.gmail.com>
Date:	Tue, 14 Jan 2014 15:26:25 +0800
From:	Cai Liu <liucai.lfn@...il.com>
To:	Minchan Kim <minchan@...nel.org>, Bob Liu <bob.liu@...cle.com>
Cc:	Cai Liu <cai.liu@...sung.com>, sjenning@...ux.vnet.ibm.com,
	akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org
Subject: Re: [PATCH] mm/zswap: Check all pool pages instead of one pool's pages

Hello, Kim

2014/1/14 Minchan Kim <minchan@...nel.org>:
> Hello Bob,
>
> On Tue, Jan 14, 2014 at 09:19:23AM +0800, Bob Liu wrote:
>>
>> On 01/14/2014 07:35 AM, Minchan Kim wrote:
>> > Hello,
>> >
>> > On Sat, Jan 11, 2014 at 03:43:07PM +0800, Cai Liu wrote:
>> >> zswap can support multiple swapfiles. So we need to check
>> >> all zbud pool pages in zswap.
>> >
>> > True, but this patch is rather costly in that we have to iterate
>> > over zswap_trees[MAX_SWAPFILES] to check it. SIGH.
>> >
>> > How about defining zswap_trees as a linked list instead of a static
>> > array? Then we could avoid a lot of unnecessary iteration.
>> >
>>
>> But if we use a linked list, it might not be easy to access the tree like this:
>> struct zswap_tree *tree = zswap_trees[type];
>
> struct zswap_tree {
>     ..
>     ..
>     struct list_head list;
> }
>
> zswap_frontswap_init()
> {
>     ..
>     ..
>     zswap_trees[type] = tree;
>     list_add(&tree->list, &zswap_list);
> }
>
> get_zswap_pool_pages(void)
> {
>     struct zswap_tree *cur;
>     u64 pool_pages = 0;
>     list_for_each_entry(cur, &zswap_list, list) {
>         pool_pages += zbud_get_pool_size(cur->pool);
>     }
>     return pool_pages;
> }
>
>
>>
>> BTW: I still prefer a dynamic pool size instead of using
>> zswap_is_full(). AFAIR, Seth has a plan to replace the rbtree with a
>> radix tree, which will be more flexible in supporting this feature,
>> as well as page migration.
>>
>> > Other question:
>> > Why do we need to update zswap_pool_pages so frequently?
>> > As I read the code, I think it's okay to update it only when the
>> > user wants to see it via debugfs and when zswap_is_full() is called.
>> > So could we optimize it out?
>> >
>> >>
>> >> Signed-off-by: Cai Liu <cai.liu@...sung.com>
>>
>> Reviewed-by: Bob Liu <bob.liu@...cle.com>
>
> Hmm, I'm really surprised you are okay with this piece of code, where
> we pay an unnecessary cost in most cases (ie, most systems have only
> one swap device) in the *mm* part.
>
> Anyway, I don't want to merge this patchset.
> If Andrew merges it and nobody does the right work, I will send a patch.
> Cai, could you redo the patch?

Yes, unnecessary iteration is not a good design.
I will redo this patch.

Thanks!

> I don't want to steal your credit.
>
> We could even optimize it to reduce the number of calls, as I said in
> my previous reply.
>
> Thanks.
>
>>
>> >> ---
>> >>  mm/zswap.c |   18 +++++++++++++++---
>> >>  1 file changed, 15 insertions(+), 3 deletions(-)
>> >>
>> >> diff --git a/mm/zswap.c b/mm/zswap.c
>> >> index d93afa6..2438344 100644
>> >> --- a/mm/zswap.c
>> >> +++ b/mm/zswap.c
>> >> @@ -291,7 +291,6 @@ static void zswap_free_entry(struct zswap_tree *tree,
>> >>    zbud_free(tree->pool, entry->handle);
>> >>    zswap_entry_cache_free(entry);
>> >>    atomic_dec(&zswap_stored_pages);
>> >> -  zswap_pool_pages = zbud_get_pool_size(tree->pool);
>> >>  }
>> >>
>> >>  /* caller must hold the tree lock */
>> >> @@ -405,10 +404,24 @@ cleanup:
>> >>  /*********************************
>> >>  * helpers
>> >>  **********************************/
>> >> +static u64 get_zswap_pool_pages(void)
>> >> +{
>> >> +  int i;
>> >> +  u64 pool_pages = 0;
>> >> +
>> >> +  for (i = 0; i < MAX_SWAPFILES; i++) {
>> >> +          if (zswap_trees[i])
>> >> +                  pool_pages += zbud_get_pool_size(zswap_trees[i]->pool);
>> >> +  }
>> >> +  zswap_pool_pages = pool_pages;
>> >> +
>> >> +  return pool_pages;
>> >> +}
>> >> +
>> >>  static bool zswap_is_full(void)
>> >>  {
>> >>    return (totalram_pages * zswap_max_pool_percent / 100 <
>> >> -          zswap_pool_pages);
>> >> +          get_zswap_pool_pages());
>> >>  }
>> >>
>> >>  /*********************************
>> >> @@ -716,7 +729,6 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
>> >>
>> >>    /* update stats */
>> >>    atomic_inc(&zswap_stored_pages);
>> >> -  zswap_pool_pages = zbud_get_pool_size(tree->pool);
>> >>
>> >>    return 0;
>> >>
>> >> --
>> >> 1.7.10.4
>> --
>> Regards,
>> -Bob
>>
>
> --
> Kind regards,
> Minchan Kim
