Message-ID: <CAGWkznGZP3KUBN2M6syrjTmVOdSM0zx23hcJ6+hqE8Drgz2f-A@mail.gmail.com>
Date: Fri, 10 May 2024 10:43:20 +0800
From: Zhaoyang Huang <huangzhaoyang@...il.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: "zhaoyang.huang" <zhaoyang.huang@...soc.com>, Andrew Morton <akpm@...ux-foundation.org>,
Jens Axboe <axboe@...nel.dk>, Tejun Heo <tj@...nel.org>, Josef Bacik <josef@...icpanda.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>, linux-mm@...ck.org,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org, steve.kang@...soc.com
Subject: Re: [RFC PATCH 2/2] mm: introduce budgt control in readahead
On Thu, May 9, 2024 at 11:15 AM Matthew Wilcox <willy@...radead.org> wrote:
>
> On Thu, May 09, 2024 at 10:39:37AM +0800, zhaoyang.huang wrote:
> > -static unsigned long get_next_ra_size(struct file_ra_state *ra,
> > +static unsigned long get_next_ra_size(struct readahead_control *ractl,
> > unsigned long max)
> > {
> > - unsigned long cur = ra->size;
> > + unsigned long cur = ractl->ra->size;
> > + struct inode *inode = ractl->mapping->host;
> > + unsigned long budgt = inode->i_sb->s_bdev ?
> > + blk_throttle_budgt(inode->i_sb->s_bdev) : 0;
>
> You can't do this. There's no guarantee that the IO is going to
> mapping->host->i_sb->s_bdev. You'd have to figure out how to ask the
> filesystem to get the bdev for the particular range (eg the fs might
> implement RAID internally).
>
Thanks for the pointer. I did some basic research on software RAID and
wonder whether applying the bps limit to /dev/md0, as below, would make
this work:
mdadm -C -v /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1
mount /dev/md0 /mnt/raid0/
echo "9:0 100000" > blkio.throttle.read_bps_device   (9:0 being /dev/md0's major:minor)
I couldn't find information about filesystems that implement RAID
internally. Could we set the limit on the root device (the one used for
mounting) to throttle the whole filesystem, without caring about which
device a bio is finally issued to? Or should we leave it to the user,
who would have to make sure the device they apply the limit to does not
do RAID?