Message-ID: <CAGis_TWtWMK93nVBa_D_Y2D3Su8x_dDNwNw9h=v=8zoaHuAXBA@mail.gmail.com>
Date: Wed, 23 Apr 2025 11:51:49 +0100
From: Matt Fleming <mfleming@...udflare.com>
To: Yu Kuai <yukuai1@...weicloud.com>
Cc: Jens Axboe <axboe@...nel.dk>, linux-block@...r.kernel.org, 
	linux-kernel@...r.kernel.org, kernel-team <kernel-team@...udflare.com>, 
	"yukuai (C)" <yukuai3@...wei.com>
Subject: Re: 10x I/O await times in 6.12

On Mon, 21 Apr 2025 at 13:21, Yu Kuai <yukuai1@...weicloud.com> wrote:
>
> Can you drop this expensive bpftrace script, which might affect I/O
> performance, and trace __blk_flush_plug directly instead of __submit_bio? If
> nsecs - plug->cur_ktime is still milliseconds, can you check if the
> following patch fixes your problem?

Yep, the patch below fixes the regression and restores I/O wait times
comparable to 6.6. Thanks!

> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index ae8494d88897..37197502147e 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1095,7 +1095,9 @@ static inline void blk_account_io_start(struct request *req)
>                  return;
>
>          req->rq_flags |= RQF_IO_STAT;
> -       req->start_time_ns = blk_time_get_ns();
> +
> +       if (!current->plug)
> +               req->start_time_ns = blk_time_get_ns();
>
>          /*
>           * All non-passthrough requests are created from a bio with one
> @@ -2874,6 +2876,7 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
>  {
>          struct request *rq;
>          unsigned int depth;
> +       u64 now;
>
>          /*
>           * We may have been called recursively midway through handling
> @@ -2887,6 +2890,10 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
>          depth = plug->rq_count;
>          plug->rq_count = 0;
>
> +       now = ktime_get_ns();
> +       rq_list_for_each(&plug->mq_list, rq)
> +               rq->start_time_ns = now;
> +
>          if (!plug->multiple_queues && !plug->has_elevator && !from_schedule) {
>                  struct request_queue *q;
>
