Message-ID: <CAGWkznFe4W0M4NE_ZjiSC6+28tHqJoah6dmP+X1aP6oCCTTe2Q@mail.gmail.com>
Date: Thu, 25 Sep 2025 09:14:12 +0800
From: Zhaoyang Huang <huangzhaoyang@...il.com>
To: Ming Lei <ming.lei@...hat.com>
Cc: Bart Van Assche <bvanassche@....org>, Suren Baghdasaryan <surenb@...gle.com>, Todd Kjos <tkjos@...roid.com>,
Christoph Hellwig <hch@...radead.org>, "zhaoyang.huang" <zhaoyang.huang@...soc.com>,
Jens Axboe <axboe@...nel.dk>, linux-mm@...ck.org, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, steve.kang@...soc.com,
Minchan Kim <minchan@...nel.org>
Subject: Re: [RFC PATCH] driver: loop: introduce synchronized read for loop driver
On Wed, Sep 24, 2025 at 6:05 PM Ming Lei <ming.lei@...hat.com> wrote:
>
> On Wed, Sep 24, 2025 at 5:13 PM Zhaoyang Huang <huangzhaoyang@...il.com> wrote:
> >
> > Looping in the Google kernel team. When active_depth of cgroup v2 is set
> > to 3, the loop device's I/O requests are affected by scheduling latency
> > introduced by the huge number of kworker threads, one per blkcg. What's
> > your opinion on this RFC patch?
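
To make the problem concrete for anyone not staring at drivers/block/loop.c,
here is a rough, simplified sketch of the path I mean. The helpers
loop_find_worker()/loop_alloc_worker() are made-up names (the real driver
open-codes the rb-tree walk), so please treat this as an illustration rather
than the exact upstream code:

struct loop_worker {
	struct rb_node rb_node;                /* keyed by blkcg_css */
	struct work_struct work;               /* executed by a kworker */
	struct list_head cmd_list;             /* pending loop commands */
	struct cgroup_subsys_state *blkcg_css; /* owning blk cgroup */
};

static void loop_queue_work(struct loop_device *lo, struct loop_cmd *cmd)
{
	struct loop_worker *worker;

	/* one worker per blkcg that ever issues I/O to this loop device */
	worker = loop_find_worker(lo, cmd->blkcg_css);
	if (!worker)
		worker = loop_alloc_worker(lo, cmd->blkcg_css);

	list_add_tail(&cmd->list_entry, &worker->cmd_list);
	queue_work(lo->workqueue, &worker->work);
}

With one cgroup per PID, thousands of cgroups reading through the loop device
turn into thousands of work items competing for kworkers, and the resulting
scheduling latency is what this RFC patch tried to avoid.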
>
> There are some issues on this RFC patch:
>
> - current->plug can't be touched by the driver, because there can be
> requests from other devices on the same plug list
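
Understood. On the first point, here is a minimal illustration of why the
plug belongs to the submitting task rather than to any one driver
(bio_to_loop0 and bio_to_nvme0n1 are made-up names):

struct blk_plug plug;

blk_start_plug(&plug);       /* current->plug now points at this plug */
submit_bio(bio_to_loop0);    /* batched on the plug list */
submit_bio(bio_to_nvme0n1);  /* same list, completely different device */
blk_finish_plug(&plug);      /* flushes requests for all devices at once */

So the loop driver really cannot assume current->plug only holds its own
requests.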
>
> - you can't sleep in loop_queue_rq()
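
And on the second point, my understanding is that the dispatch callback may
run in a context where blocking is not allowed (unless the tag set uses
BLK_MQ_F_BLOCKING), which is why the actual I/O has to be deferred to process
context. A simplified sketch of that pattern, not the exact upstream code:

static blk_status_t loop_queue_rq(struct blk_mq_hw_ctx *hctx,
				  const struct blk_mq_queue_data *bd)
{
	struct request *rq = bd->rq;
	struct loop_device *lo = rq->q->queuedata;
	struct loop_cmd *cmd = blk_mq_rq_to_pdu(rq);

	blk_mq_start_request(rq);
	/* hand the command off to a workqueue and return; no sleeping here */
	loop_queue_work(lo, cmd);
	return BLK_STS_OK;
}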
>
> The following patchset should address your issue, and I can rebase & resend
> if no one objects.
>
> https://lore.kernel.org/linux-block/20250322012617.354222-1-ming.lei@redhat.com/
Thanks for the patch; that is exactly what I want.
>
> Thanks,
>
>
> >
> > On Wed, Sep 24, 2025 at 12:30 AM Bart Van Assche <bvanassche@....org> wrote:
> > >
> > > On 9/22/25 8:50 PM, Zhaoyang Huang wrote:
> > > > Yes, we have tried to solve this case from the above perspective. As
> > > > to the scheduler, packing small tasks to one core(Big core in ARM)
> > > > instead of spreading them is desired for power-saving reasons. To the
> > > > number of kworker threads, it is upon current design which will create
> > > > new work for each blkcg. According to ANDROID's current approach, each
> > > > PID takes one cgroup and correspondingly a kworker thread which
> > > > actually induces this scenario.
> > >
> > > More cgroups mean more overhead from cgroup-internal tasks, e.g.
> > > accumulating statistics. How about asking the Android core team to
> > > review the approach of associating one cgroup with each PID? I'm
> > > wondering whether the approach of one cgroup per aggregate profile
> > > (SCHED_SP_BACKGROUND, SCHED_SP_FOREGROUND, ...) would work.
> > >
> > > Thanks,
> > >
> > > Bart.
> >
>