Message-ID: <eef1f655-4fff-618d-4b8e-447230ec8ed9@huaweicloud.com>
Date: Tue, 13 Aug 2024 14:39:32 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: Lance Yang <ioworker0@...il.com>, Yu Kuai <yukuai1@...weicloud.com>
Cc: Michal Koutný <mkoutny@...e.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-block@...r.kernel.org,
cgroups@...r.kernel.org, josef@...icpanda.com, tj@...nel.org,
fujita.tomonori@....ntt.co.jp, boqun.feng@...il.com, a.hindborg@...sung.com,
paolo.valente@...more.it, axboe@...nel.dk, vbabka@...nel.org,
david@...hat.com, 21cnbao@...il.com, baolin.wang@...ux.alibaba.com,
libang.li@...group.com, "yukuai (C)" <yukuai3@...wei.com>
Subject: Re: [BUG] cgroupv2/blk: inconsistent I/O behavior in Cgroup v2 with
set device wbps and wiops
Hi,
On 2024/08/13 13:00, Lance Yang wrote:
> Hi Kuai,
>
> Thanks a lot for jumping in!
>
> On Tue, Aug 13, 2024 at 9:37 AM Yu Kuai <yukuai1@...weicloud.com> wrote:
>>
>> Hi,
>>
>> On 2024/08/12 23:43, Michal Koutný wrote:
>>> +Cc Kuai
>>>
>>> On Mon, Aug 12, 2024 at 11:00:30PM GMT, Lance Yang <ioworker0@...il.com> wrote:
>>>> Hi all,
>>>>
>>>> I've run into a problem with Cgroup v2 where it doesn't seem to correctly limit
>>>> I/O operations when I set both wbps and wiops for a device. However, if I only
>>>> set wbps, then everything works as expected.
>>>>
>>>> To reproduce the problem, we can follow these command-based steps:
>>>>
>>>> 1. **System Information:**
>>>> - Kernel Version and OS Release:
>>>> ```
>>>> $ uname -r
>>>> 6.10.0-rc5+
>>>>
>>>> $ cat /etc/os-release
>>>> PRETTY_NAME="Ubuntu 24.04 LTS"
>>>> NAME="Ubuntu"
>>>> VERSION_ID="24.04"
>>>> VERSION="24.04 LTS (Noble Numbat)"
>>>> VERSION_CODENAME=noble
>>>> ID=ubuntu
>>>> ID_LIKE=debian
>>>> HOME_URL="https://www.ubuntu.com/"
>>>> SUPPORT_URL="https://help.ubuntu.com/"
>>>> BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
>>>> PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
>>>> UBUNTU_CODENAME=noble
>>>> LOGO=ubuntu-logo
>>>> ```
>>>>
>>>> 2. **Device Information and Settings:**
>>>> - List Block Devices and Scheduler:
>>>> ```
>>>> $ lsblk
>>>> NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
>>>> sda 8:0 0 4.4T 0 disk
>>>> └─sda1 8:1 0 4.4T 0 part /data
>>>> ...
>>>>
>>>> $ cat /sys/block/sda/queue/scheduler
>>>> none [mq-deadline] kyber bfq
>>>>
>>>> $ cat /sys/block/sda/queue/rotational
>>>> 1
>>>> ```
>>>>
>>>> 3. **Reproducing the problem:**
>>>> - Navigate to the cgroup v2 filesystem and configure I/O settings:
>>>> ```
>>>> $ cd /sys/fs/cgroup/
>>>> $ stat -fc %T /sys/fs/cgroup
>>>> cgroup2fs
>>>> $ mkdir test
>>>> $ cd test
>>>> $ echo "8:0 wbps=10485760 wiops=100000" > io.max
>>>> ```
>>>> In this setup:
>>>> - wbps=10485760 caps write bandwidth at 10 MiB/s.
>>>> - wiops=100000 caps write IOPS at 100,000.
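>>>> The configuration can be double-checked by reading the file back.
>>>> Per the cgroup v2 docs it should look roughly like this (unset
>>>> limits show as "max"; this output is illustrative, not captured
>>>> from this machine):
>>>> ```
>>>> $ cat io.max
>>>> 8:0 rbps=max wbps=10485760 riops=max wiops=100000
>>>> ```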
>>>>
>>>> - Add process to the cgroup and verify:
>>>> ```
>>>> $ echo $$ > cgroup.procs
>>>> $ cat cgroup.procs
>>>> 3826771
>>>> 3828513
>>>> $ ps -ef|grep 3826771
>>>> root 3826771 3826768 0 22:04 pts/1 00:00:00 -bash
>>>> root 3828761 3826771 0 22:06 pts/1 00:00:00 ps -ef
>>>> root 3828762 3826771 0 22:06 pts/1 00:00:00 grep --color=auto 3826771
>>>> ```
>>>>
>>>> - Observe I/O performance using `dd` commands and `iostat`:
>>>> ```
>>>> $ dd if=/dev/zero of=/data/file1 bs=512M count=1 &
>>>> $ dd if=/dev/zero of=/data/file1 bs=512M count=1 &
>>
>> You're testing buffered IO here, and I don't see that writeback
>> cgroup is enabled. Is this test intentional? Why not test direct IO?
>
> Yes, I was testing buffered I/O and can confirm that CONFIG_CGROUP_WRITEBACK
> was enabled.
>
> $ cat /boot/config-6.10.0-rc5+ |grep CONFIG_CGROUP_WRITEBACK
> CONFIG_CGROUP_WRITEBACK=y
>
> We intend to configure both wbps (write bytes per second) and wiops
> (write I/O operations per second) for the containers. IIUC, this setup
> will effectively restrict both their block device I/Os and buffered I/Os.
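(As a side note: whether IO is actually being charged to the cgroup can
be checked from its io.stat file. The line below shows the documented
cgroup v2 field layout, with placeholder values rather than real output:

$ cat /sys/fs/cgroup/test/io.stat
8:0 rbytes=... wbytes=... rios=... wios=... dbytes=... dios=...)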
>
>> Why not test direct IO?
>
> I was testing direct IO as well. However, it did not work as expected
> with `echo "8:0 wbps=10485760 wiops=100000" > io.max`.
>
> $ time dd if=/dev/zero of=/data/file7 bs=512M count=1 oflag=direct
So, you're issuing a single huge IO of 512M.
> 1+0 records in
> 1+0 records out
> 536870912 bytes (537 MB, 512 MiB) copied, 51.5962 s, 10.4 MB/s
And this result looks correct. Please note that blk-throttle works
before the IO is submitted, while iostat reports IOs that have already
completed. A huge IO can be throttled for a long time before it ever
reaches the disk.
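(The arithmetic checks out: a single 512 MiB write against a 10 MiB/s
limit should be held back for about

    536870912 / 10485760 = 51.2 seconds

which matches the ~51.6s "real" time reported below.)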
>
> real 0m51.637s
> user 0m0.000s
> sys 0m0.313s
>
> $ iostat -d 1 -h -y -p sda
>       tps    kB_read/s    kB_wrtn/s    kB_dscd/s    kB_read    kB_wrtn    kB_dscd  Device
>      9.00         0.0k         1.3M         0.0k       0.0k       1.3M       0.0k  sda
>      9.00         0.0k         1.3M         0.0k       0.0k       1.3M       0.0k  sda1
What I don't understand yet is why there are a few IOs completing
during the wait. Can you test against the raw disk, bypassing the
filesystem?
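For example (sdX below is a placeholder for a spare test disk; writing
to the raw device destroys whatever is on it, so don't run this against
sda):

$ dd if=/dev/zero of=/dev/sdX bs=512M count=1 oflag=direct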
Thanks,
Kuai