Message-ID: <CAJZ5v0jWSH_+wC7P=bBV8uKNp1PBUjkE06Ec6HR1Zd5as8GQ2g@mail.gmail.com>
Date: Fri, 29 Dec 2023 17:36:01 +0100
From: "Rafael J. Wysocki" <rafael@...nel.org>
To: Stanislaw Gruszka <stanislaw.gruszka@...ux.intel.com>
Cc: "Rafael J. Wysocki" <rafael@...nel.org>, "Rafael J. Wysocki" <rjw@...ysocki.net>,
Greg KH <gregkh@...uxfoundation.org>, linux-pm@...r.kernel.org,
Youngmin Nam <youngmin.nam@...sung.com>, linux-kernel@...r.kernel.org,
d7271.choe@...sung.com, janghyuck.kim@...sung.com, hyesoo.yu@...sung.com,
Alan Stern <stern@...land.harvard.edu>, Ulf Hansson <ulf.hansson@...aro.org>
Subject: Re: [PATCH v1 2/3] async: Introduce async_schedule_dev_nocall()
On Fri, Dec 29, 2023 at 3:54 PM Stanislaw Gruszka
<stanislaw.gruszka@...ux.intel.com> wrote:
>
> On Fri, Dec 29, 2023 at 02:37:36PM +0100, Rafael J. Wysocki wrote:
> > > > +bool async_schedule_dev_nocall(async_func_t func, struct device *dev)
> > > > +{
> > > > + struct async_entry *entry;
> > > > +
> > > > + entry = kzalloc(sizeof(struct async_entry), GFP_KERNEL);
> > >
> > > Is GFP_KERNEL intended here ?
> >
> > Yes, it is.
> >
> > PM will be the only user of this, at least for now, and it all runs in
> > process context.
> >
> > > I think it's not safe since will
> > > be called from device_resume_noirq() .
> >
> > device_resume_noirq() runs in process context too.
> >
> > The name is somewhat confusing (sorry about that) and it means that
> > hardirq handlers (for the majority of IRQs) don't run in that resume
> > phase, but interrupts are enabled locally on all CPUs (this is
> > required for wakeup handling, among other things).
>
> Then my concern would be: what if the devices whose IRQs are disabled
> include disk devices? It seems there are disk devices among them, and
> since GFP_KERNEL can start reclaiming memory by doing disk IO (writing
> out dirty pages, for example), with the disk driver's interrupts
> disabled the reclaim can never finish.
>
> I do not see how such a potentially infinite wait for disk IO
> is prevented here, did I miss something?
Well, it is not a concern, because the suspend code already prevents
the mm subsystem from trying too hard to find free memory. See the
pm_restrict_gfp_mask() call in enter_state().
Otherwise, it would have been a problem for any GFP_KERNEL allocations
made during system-wide suspend-resume, not just in the _noirq phases.