Message-ID: <cc634240-bfa0-43c5-b34a-257411d0e6a1@gmail.com>
Date: Thu, 2 May 2024 20:05:36 +0100
From: Usama Arif <usamaarif642@...il.com>
To: Nhat Pham <nphamcs@...il.com>
Cc: akpm@...ux-foundation.org, hannes@...xchg.org, yosryahmed@...gle.com,
 chengming.zhou@...ux.dev, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
 kernel-team@...a.com
Subject: Re: [PATCH] selftests: cgroup: add tests to verify the zswap
 writeback path


On 01/05/2024 16:44, Nhat Pham wrote:
> On Wed, May 1, 2024 at 3:04 AM Usama Arif <usamaarif642@...il.com> wrote:
>> Writeback is triggered by allocating random memory: more than
>> memory.high to push memory into zswap, more than zswap.max to
>> trigger writeback if enabled, but less than memory.max so that
>> OOM is not triggered. Both values of memory.zswap.writeback
>> are tested.
> Thanks for adding the test, Usama :) A couple of suggestions below.
>
>> Signed-off-by: Usama Arif <usamaarif642@...il.com>
>> ---
>>   tools/testing/selftests/cgroup/test_zswap.c | 83 +++++++++++++++++++++
>>   1 file changed, 83 insertions(+)
>>
>> diff --git a/tools/testing/selftests/cgroup/test_zswap.c b/tools/testing/selftests/cgroup/test_zswap.c
>> index f0e488ed90d8..fe0e7221525c 100644
>> --- a/tools/testing/selftests/cgroup/test_zswap.c
>> +++ b/tools/testing/selftests/cgroup/test_zswap.c
>> @@ -94,6 +94,19 @@ static int allocate_bytes(const char *cgroup, void *arg)
>>          return 0;
>>   }
>>
>> +static int allocate_random_bytes(const char *cgroup, void *arg)
>> +{
>> +       size_t size = (size_t)arg;
>> +       char *mem = (char *)malloc(size);
>> +
>> +       if (!mem)
>> +               return -1;
>> +       for (int i = 0; i < size; i++)
>> +               mem[i] = rand() % 128;
>> +       free(mem);
>> +       return 0;
>> +}
>> +
>>   static char *setup_test_group_1M(const char *root, const char *name)
>>   {
>>          char *group_name = cg_name(root, name);
>> @@ -248,6 +261,74 @@ static int test_zswapin(const char *root)
>>          return ret;
>>   }
>>
>> +/* Test to verify the zswap writeback path */
>> +static int test_zswap_writeback(const char *root, bool wb)
>> +{
>> +       int ret = KSFT_FAIL;
>> +       char *test_group;
>> +       long zswpwb_before, zswpwb_after;
>> +
>> +       test_group = cg_name(root,
>> +               wb ? "zswap_writeback_enabled_test" : "zswap_writeback_disabled_test");
>> +       if (!test_group)
>> +               goto out;
>> +       if (cg_create(test_group))
>> +               goto out;
>> +       if (cg_write(test_group, "memory.max", "8M"))
>> +               goto out;
>> +       if (cg_write(test_group, "memory.high", "2M"))
>> +               goto out;
>> +       if (cg_write(test_group, "memory.zswap.max", "2M"))
>> +               goto out;
>> +       if (cg_write(test_group, "memory.zswap.writeback", wb ? "1" : "0"))
>> +               goto out;
>> +
>> +       zswpwb_before = cg_read_key_long(test_group, "memory.stat", "zswpwb ");
>> +       if (zswpwb_before < 0) {
>> +               ksft_print_msg("failed to get zswpwb_before\n");
>> +               goto out;
>> +       }
>> +
>> +       /*
>> +        * Allocate more than memory.high to push memory into zswap,
>> +        * more than zswap.max to trigger writeback if enabled,
>> +        * but less than memory.max so that OOM is not triggered
>> +        */
>> +       if (cg_run(test_group, allocate_random_bytes, (void *)MB(3)))
>> +               goto out;
> I think we should document better why we allocate random bytes (rather
> than just using the existing allocation helper).
>
> This random allocation pattern (rand() % 128) is probably still
> compressible by zswap, albeit poorly. I assume this is so that zswap
> would not be able to just absorb all the swapped out pages?

Thanks for the review! I have added documentation in v2 explaining why
random memory is used: it compresses poorly, so zswap cannot simply
absorb all the swapped-out pages and writeback is actually exercised.


>> +
>> +       /* Verify that zswap writeback occurred only if writeback was enabled */
>> +       zswpwb_after = cg_read_key_long(test_group, "memory.stat", "zswpwb ");
>> +       if (wb) {
>> +               if (zswpwb_after <= zswpwb_before) {
>> +                       ksft_print_msg("writeback enabled and zswpwb_after <= zswpwb_before\n");
>> +                       goto out;
>> +               }
>> +       } else {
>> +               if (zswpwb_after != zswpwb_before) {
>> +                       ksft_print_msg("writeback disabled and zswpwb_after != zswpwb_before\n");
>> +                       goto out;
>> +               }
> It'd be nice if we can check that in this case, the number of pages
> that are "swapped out" matches the cgroup's zswpout stats :)

I think with the method in v2 this might not be easily tracked, as some
metrics are all-time counters (zswpout) while others reflect current
usage.

>
>> +       }
>> +
>> +       ret = KSFT_PASS;
>> +
>> +out:
>> +       cg_destroy(test_group);
>> +       free(test_group);
>> +       return ret;
>> +}
>> +
>> +static int test_zswap_writeback_enabled(const char *root)
>> +{
>> +       return test_zswap_writeback(root, true);
>> +}
>> +
>> +static int test_zswap_writeback_disabled(const char *root)
>> +{
>> +       return test_zswap_writeback(root, false);
>> +}
>> +
>>   /*
>>    * When trying to store a memcg page in zswap, if the memcg hits its memory
>>    * limit in zswap, writeback should affect only the zswapped pages of that
>> @@ -425,6 +506,8 @@ struct zswap_test {
>>          T(test_zswap_usage),
>>          T(test_swapin_nozswap),
>>          T(test_zswapin),
>> +       T(test_zswap_writeback_enabled),
>> +       T(test_zswap_writeback_disabled),
>>          T(test_no_kmem_bypass),
>>          T(test_no_invasive_cgroup_shrink),
>>   };
>> --
>> 2.43.0
>>
