Message-ID: <Y7xYJfRLSMYk9tj9@slm.duckdns.org>
Date:   Mon, 9 Jan 2023 08:08:37 -1000
From:   Tejun Heo <tj@...nel.org>
To:     hanjinke <hanjinke.666@...edance.com>
Cc:     Jan Kara <jack@...e.cz>,
        Michal Koutný <mkoutny@...e.com>,
        josef@...icpanda.com, axboe@...nel.dk, cgroups@...r.kernel.org,
        linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
        yinxin.x@...edance.com
Subject: Re: [External] Re: [PATCH v3] blk-throtl: Introduce sync and async
 queues for blk-throtl

Hello,

On Sat, Jan 07, 2023 at 12:44:35PM +0800, hanjinke wrote:
> For the cost.model setting, we first use the tools provided by iocost to
> measure the benchmark model parameters of the different disk types online,
> and then save these parameters to a parametric model table. During
> deployment, we pull and set the corresponding model parameters according
> to the type of disk.
> 
> The setting of cost.qos needs a bit more consideration, as we have to make
> some compromises between overall disk throughput and IO latency.
> The average utilization of the entire disk for a specific business and the
> RLA (if it is IO sensitive) of key businesses are taken as important
> inputs. The cost.qos is then dynamically fine-tuned according to
> health-status monitoring of the key businesses.

Ah, I see. Do you use the latency targets and min/max ranges or just fixate
the vrate by setting min == max?
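
For reference, fixating the vrate boils down to writing min == max into
io.cost.qos (knob names and format per Documentation/admin-guide/cgroup-v2.rst).
A minimal Python sketch, assuming a cgroup2 mount at /sys/fs/cgroup; the
device number 8:16 and every value below are purely illustrative:

    # Illustrative sketch only -- 8:16 and all numbers are made up.
    CGROOT = "/sys/fs/cgroup"   # assumed cgroup2 mount point
    DEV = "8:16"                # MAJ:MIN of the target device (assumed)

    # Cost model parameters measured offline for this disk type.
    model = (f"{DEV} model=linear "
             "rbps=2706339840 rseqiops=77789 rrandiops=97806 "
             "wbps=1063126299 wseqiops=46142 wrandiops=36876")

    # QoS with the vrate fixated by min == max, so the latency-target-driven
    # adjustment has no room to move it.
    qos = (f"{DEV} enable=1 ctrl=user rpct=95.00 rlat=5000 "
           "wpct=95.00 wlat=5000 min=60.00 max=60.00")

    with open(f"{CGROOT}/io.cost.model", "w") as f:
        f.write(model)
    with open(f"{CGROOT}/io.cost.qos", "w") as f:
        f.write(qos)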

> For the cost.weight setting, high-priority services gain a greater
> advantage through their weights when dealing with a large number of IO
> requests in a short period of time. It works fine, as the
> work-conservation of iocost works well according to our observation.

Glad to hear.
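
For reference, the weights in question are the plain cgroup2 io.weight knob
that iocost distributes by. A minimal Python sketch (the slice names and
weight values are made up, not taken from this thread):

    # Illustrative sketch only -- slice names and weights are hypothetical.
    CGROOT = "/sys/fs/cgroup"

    weights = {
        "hipri.slice": 1000,   # IO-sensitive key business, favored heavily
        "batch.slice": 50,     # best-effort bulk work
    }

    for cg, w in weights.items():
        # io.weight takes "default WEIGHT" plus optional per-device overrides.
        with open(f"{CGROOT}/{cg}/io.weight", "w") as f:
            f.write(f"default {w}")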

> These practices could be done better, and I look forward to your
> suggestions.

It's still in progress but resctl-bench's iocost-tune benchmark is what
we're starting to use:

 https://github.com/facebookexperimental/resctl-demo/blob/main/resctl-bench/doc/iocost-tune.md

The benchmark takes around 6 hours. It probes the whole vrate range looking
for behavior inflection points in a scenario where a latency-sensitive
workload is being protected against a memory leak. On completion, it
provides several solutions based on the observed behavior.

The benchmark is destructive (to the content of the target SSD) and can be
tricky to set up. There's an installable image to help with setting up and
running the benchmark:

 https://github.com/iocost-benchmark/resctl-demo-image-recipe/actions

The eventual goal is to collect these benchmark results in the following
git repo:

 https://github.com/iocost-benchmark/iocost-benchmarks

which generates hwdb files describing all the found solutions and makes
systemd apply the appropriate configuration automatically on boot.

It's all still a work in progress, but hopefully it will let us configure
iocost reasonably on boot for most SSDs.

Thanks.

-- 
tejun
