Message-ID: <CAFj5m9K4yv4wkX2bhXSOf141dY9O96WdNfjMMYXCOoyM_Fdndg@mail.gmail.com>
Date: Wed, 24 Sep 2025 18:04:52 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Zhaoyang Huang <huangzhaoyang@...il.com>
Cc: Bart Van Assche <bvanassche@....org>, Suren Baghdasaryan <surenb@...gle.com>, Todd Kjos <tkjos@...roid.com>,
Christoph Hellwig <hch@...radead.org>, "zhaoyang.huang" <zhaoyang.huang@...soc.com>,
Jens Axboe <axboe@...nel.dk>, linux-mm@...ck.org, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, steve.kang@...soc.com,
Minchan Kim <minchan@...nel.org>, Ming Lei <ming.lei@...hat.com>
Subject: Re: [RFC PATCH] driver: loop: introduce synchronized read for loop driver
On Wed, Sep 24, 2025 at 5:13 PM Zhaoyang Huang <huangzhaoyang@...il.com> wrote:
>
> Looping in the Google kernel team. When active_depth of cgroup v2 is set to
> 3, the loop device's I/O requests are affected by scheduling latency
> introduced by the huge number of kworker threads (one per blkcg). What's
> your opinion on this RFC patch?
There are some issues with this RFC patch:
- current->plug can't be touched by the driver, because it may hold requests
from other devices
- you can't sleep in loop_queue_rq()
The following patchset should address your issue, and I can rebase & resend
if no one objects.
https://lore.kernel.org/linux-block/20250322012617.354222-1-ming.lei@redhat.com/
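To illustrate the second point above, here is a minimal sketch of the usual
blk-mq pattern; this is not the actual drivers/block/loop.c code, and the
my_loop_* names plus the use of system_unbound_wq are illustrative
assumptions only. Unless the tag set is created with BLK_MQ_F_BLOCKING,
->queue_rq() may be invoked from a context where sleeping is not allowed,
so anything that can block has to be handed off to process context (for
example a workqueue) and completed there:

#include <linux/blk-mq.h>
#include <linux/workqueue.h>

/* Illustrative per-request context; not the real struct loop_cmd. */
struct my_loop_cmd {
	struct work_struct work;
};

static void my_loop_handle_cmd(struct work_struct *work)
{
	struct my_loop_cmd *cmd = container_of(work, struct my_loop_cmd, work);
	struct request *rq = blk_mq_rq_from_pdu(cmd);

	/*
	 * Process context: sleeping is fine here, e.g. reading/writing the
	 * backing file or taking mutexes, before completing the request.
	 */
	blk_mq_end_request(rq, BLK_STS_OK);
}

static blk_status_t my_loop_queue_rq(struct blk_mq_hw_ctx *hctx,
				     const struct blk_mq_queue_data *bd)
{
	struct my_loop_cmd *cmd = blk_mq_rq_to_pdu(bd->rq);

	blk_mq_start_request(bd->rq);

	/*
	 * Without BLK_MQ_F_BLOCKING this callback can run in a context where
	 * sleeping is not allowed, so don't block here: defer the real work
	 * to a workqueue and complete the request asynchronously.
	 */
	INIT_WORK(&cmd->work, my_loop_handle_cmd);
	queue_work(system_unbound_wq, &cmd->work);

	return BLK_STS_OK;
}

That same constraint is one reason the loop driver hands per-blkcg work off
to kworkers instead of doing the backing-file I/O inline in ->queue_rq().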
Thanks,
>
> On Wed, Sep 24, 2025 at 12:30 AM Bart Van Assche <bvanassche@....org> wrote:
> >
> > On 9/22/25 8:50 PM, Zhaoyang Huang wrote:
> > > Yes, we have tried to solve this case from the above perspective. As to
> > > the scheduler, packing small tasks onto one core (a big core on ARM)
> > > instead of spreading them is desired for power-saving reasons. As to the
> > > number of kworker threads, that follows from the current design, which
> > > creates a new worker for each blkcg. Under Android's current approach,
> > > each PID gets its own cgroup and, correspondingly, its own kworker
> > > thread, which is what induces this scenario.
> >
> > More cgroups mean more overhead from cgroup-internal tasks, e.g.
> > accumulating statistics. How about asking the Android core team to
> > review the approach of associating one cgroup with each PID? I'm
> > wondering whether the approach of one cgroup per aggregate profile
> > (SCHED_SP_BACKGROUND, SCHED_SP_FOREGROUND, ...) would work.
> >
> > Thanks,
> >
> > Bart.
>