Message-ID: <CAGWkznHq15QN5Dn6_QfbAm7jS9OPCV4TVqn2_9RxUBx0V9v78w@mail.gmail.com>
Date:   Wed, 1 Dec 2021 19:12:30 +0800
From:   Zhaoyang Huang <huangzhaoyang@...il.com>
To:     Nitin Gupta <ngupta@...are.org>,
        Sergey Senozhatsky <senozhatsky@...omium.org>,
        Jens Axboe <axboe@...nel.dk>,
        Johannes Weiner <hannes@...xchg.org>,
        Minchan Kim <minchan@...nel.org>,
        Zhaoyang Huang <zhaoyang.huang@...soc.com>,
        "open list:MEMORY MANAGEMENT" <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH] mm: count zram read/write into PSI_IO_WAIT

So far, zram reads and writes have no chance of being counted in
PSI_IO_WAIT, because zram handles each request directly in the
current context without invoking submit_bio() or io_schedule().
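
For context, a simplified sketch of the ordinary PSI iowait path,
paraphrased from io_schedule_timeout() in kernel/sched/core.c; a
synchronous driver like zram never reaches it:

	/*
	 * io_schedule_prepare() sets current->in_iowait; the scheduler's
	 * PSI hooks then account the sleep as TSK_IOWAIT.  zram completes
	 * requests in the caller's context and never sleeps here, so PSI
	 * sees no I/O stall.
	 */
	long io_schedule_timeout(long timeout)
	{
		int token;
		long ret;

		token = io_schedule_prepare();
		ret = schedule_timeout(timeout);
		io_schedule_finish(token);

		return ret;
	}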

On Wed, Dec 1, 2021 at 6:59 PM Huangzhaoyang <huangzhaoyang@...il.com> wrote:
>
> From: Zhaoyang Huang <zhaoyang.huang@...soc.com>
>
> Have zram reading/writing be counted in PSI_IO_WAIT.
>
> Signed-off-by: Zhaoyang Huang <zhaoyang.huang@...soc.com>
> ---
>  drivers/block/zram/zram_drv.c | 18 ++++++++++++++++--
>  1 file changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index fcaf275..b0e4766 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -34,6 +34,7 @@
>  #include <linux/debugfs.h>
>  #include <linux/cpuhotplug.h>
>  #include <linux/part_stat.h>
> +#include <linux/psi.h>
>
>  #include "zram_drv.h"
>
> @@ -1246,7 +1247,9 @@ static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index,
>                                 zram_get_element(zram, index),
>                                 bio, partial_io);
>         }
> -
> +#ifdef CONFIG_PSI
> +       psi_task_change(current, 0, TSK_IOWAIT);
> +#endif
>         handle = zram_get_handle(zram, index);
>         if (!handle || zram_test_flag(zram, index, ZRAM_SAME)) {
>                 unsigned long value;
> @@ -1257,6 +1260,9 @@ static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index,
>                 zram_fill_page(mem, PAGE_SIZE, value);
>                 kunmap_atomic(mem);
>                 zram_slot_unlock(zram, index);
> +#ifdef CONFIG_PSI
> +               psi_task_change(current, TSK_IOWAIT, 0);
> +#endif
>                 return 0;
>         }
>
> @@ -1284,6 +1290,9 @@ static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index,
>         if (WARN_ON(ret))
>                 pr_err("Decompression failed! err=%d, page=%u\n", ret, index);
>
> +#ifdef CONFIG_PSI
> +       psi_task_change(current, TSK_IOWAIT, 0);
> +#endif
>         return ret;
>  }
>
> @@ -1471,7 +1480,13 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
>                 vec.bv_offset = 0;
>         }
>
> +#ifdef CONFIG_PSI
> +       psi_task_change(current, 0, TSK_IOWAIT);
> +#endif
>         ret = __zram_bvec_write(zram, &vec, index, bio);
> +#ifdef CONFIG_PSI
> +       psi_task_change(current, TSK_IOWAIT, 0);
> +#endif
>  out:
>         if (is_partial_io(bvec))
>                 __free_page(page);
> @@ -1607,7 +1622,6 @@ static blk_qc_t zram_submit_bio(struct bio *bio)
>                 atomic64_inc(&zram->stats.invalid_io);
>                 goto error;
>         }
> -
>         __zram_make_request(zram, bio);
>         return BLK_QC_T_NONE;
>
> --
> 1.9.1
>
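
For reference, the helper the patch calls has this shape in
kernel/sched/psi.c (simplified; the real function also updates the
per-group stall times under the runqueue lock):

	/* Clear the state bits in @clear, then set the bits in @set, for @task. */
	void psi_task_change(struct task_struct *task, int clear, int set);

So psi_task_change(current, 0, TSK_IOWAIT) marks the current task as
stalled on I/O, and psi_task_change(current, TSK_IOWAIT, 0) clears
that state again; the patch brackets the synchronous zram read and
write paths with this pair.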
