lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date:   Fri, 12 May 2023 17:50:24 -0700
From:   Song Liu <song@...nel.org>
To:     Yu Kuai <yukuai1@...weicloud.com>
Cc:     akpm@...l.org, neilb@...e.de, linux-raid@...r.kernel.org,
        linux-kernel@...r.kernel.org, yi.zhang@...wei.com,
        yangerkun@...wei.com, "yukuai (C)" <yukuai3@...wei.com>
Subject: Re: [PATCH -next v2 0/7] limit the number of plugged bio

On Fri, May 12, 2023 at 2:43 AM Yu Kuai <yukuai1@...weicloud.com> wrote:
>
> Hi,
>
> 在 2023/04/26 16:20, Yu Kuai 写道:
> > From: Yu Kuai <yukuai3@...wei.com>
> >
> > Changes in v2:
> >   - remove the patch to rename raid1-10.c
> >
> > This patchset tries to limit the number of plugged bio for raid1 and
> > raid10, which is done in the last patch, other patches are some refactor
> > and optimizations.
> >
> > This patchset is tested with a new test [1], this test triggers dirty
> > pages writeback for 10s, and in the meantime checks disk inflight.
> >
> > Before this patchset, test will fail because inflight exceed threshold
> > (threshold is set to 4096 in the test, in theory this can be mutch
> >   greater as long as there are enough dirty pages and memory).
> >
> > After this patchset, inflight is within 96 (MAX_PLUG_BIO * copies).
> >
> > [1] https://lore.kernel.org/linux-raid/20230426073447.1294916-1-yukuai1@huaweicloud.com/
>
> Friendly ping...

I am sorry for the delay.

The set looks good overall, but I will need some more time to take a closer
look. A few comments/questions:

1. For functions in raid1-10.c, let's prefix them with raid1_ instead of md_*.
2. Do we need unplug_wq to be per-bitmap? Would a shared queue work?

Thanks,
Song

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ