Date: Sat, 3 Nov 2012 16:35:13 +0800
From: Ming Lei <ming.lei@...onical.com>
To: linux-kernel@...r.kernel.org
Cc: Alan Stern <stern@...land.harvard.edu>,
	Oliver Neukum <oneukum@...e.de>,
	Minchan Kim <minchan@...nel.org>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	"Rafael J. Wysocki" <rjw@...k.pl>,
	Jens Axboe <axboe@...nel.dk>,
	"David S. Miller" <davem@...emloft.net>,
	Andrew Morton <akpm@...ux-foundation.org>,
	netdev@...r.kernel.org, linux-usb@...r.kernel.org,
	linux-pm@...r.kernel.org, linux-mm@...ck.org,
	Ming Lei <ming.lei@...onical.com>
Subject: [PATCH v4 5/6] PM / Runtime: force memory allocation with no I/O
 during runtime PM callback

This patch applies the newly introduced memalloc_noio_save() and
memalloc_noio_restore() to force memory allocation with no I/O during
the runtime_resume/runtime_suspend callbacks of devices that have the
'memalloc_noio' flag set.

Cc: Alan Stern <stern@...land.harvard.edu>
Cc: Oliver Neukum <oneukum@...e.de>
Cc: Rafael J. Wysocki <rjw@...k.pl>
Signed-off-by: Ming Lei <ming.lei@...onical.com>
---
v4:
	- runtime_suspend needs this too, because rpm_resume may wait for
	  completion of a concurrent runtime_suspend, so a deadlock may
	  still be triggered in the runtime_suspend path.
---
 drivers/base/power/runtime.c |   32 ++++++++++++++++++++++++++++++--
 1 file changed, 30 insertions(+), 2 deletions(-)

diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
index d477924..7ed17a9 100644
--- a/drivers/base/power/runtime.c
+++ b/drivers/base/power/runtime.c
@@ -368,6 +368,7 @@ static int rpm_suspend(struct device *dev, int rpmflags)
 	int (*callback)(struct device *);
 	struct device *parent = NULL;
 	int retval;
+	unsigned int noio_flag;
 
 	trace_rpm_suspend(dev, rpmflags);
 
@@ -477,7 +478,20 @@ static int rpm_suspend(struct device *dev, int rpmflags)
 	if (!callback && dev->driver && dev->driver->pm)
 		callback = dev->driver->pm->runtime_suspend;
 
-	retval = rpm_callback(callback, dev);
+	/*
+	 * A deadlock might be caused if a memory allocation with GFP_KERNEL
+	 * happens inside the runtime_suspend callback of one block device's
+	 * ancestor, or of the block device itself. A network device may be
+	 * thought of as part of an iSCSI block device, so the network device
+	 * and its ancestors should be marked as memalloc_noio as well.
+	 */
+	if (dev->power.memalloc_noio) {
+		memalloc_noio_save(noio_flag);
+		retval = rpm_callback(callback, dev);
+		memalloc_noio_restore(noio_flag);
+	} else {
+		retval = rpm_callback(callback, dev);
+	}
 	if (retval)
 		goto fail;
 
@@ -560,6 +574,7 @@ static int rpm_resume(struct device *dev, int rpmflags)
 	int (*callback)(struct device *);
 	struct device *parent = NULL;
 	int retval = 0;
+	unsigned int noio_flag;
 
 	trace_rpm_resume(dev, rpmflags);
 
@@ -709,7 +724,20 @@ static int rpm_resume(struct device *dev, int rpmflags)
 	if (!callback && dev->driver && dev->driver->pm)
 		callback = dev->driver->pm->runtime_resume;
 
-	retval = rpm_callback(callback, dev);
+	/*
+	 * A deadlock might be caused if a memory allocation with GFP_KERNEL
+	 * happens inside the runtime_resume callback of one block device's
+	 * ancestor, or of the block device itself. A network device may be
+	 * thought of as part of an iSCSI block device, so the network device
+	 * and its ancestors should be marked as memalloc_noio as well.
+	 */
+	if (dev->power.memalloc_noio) {
+		memalloc_noio_save(noio_flag);
+		retval = rpm_callback(callback, dev);
+		memalloc_noio_restore(noio_flag);
+	} else {
+		retval = rpm_callback(callback, dev);
+	}
 	if (retval) {
 		__update_runtime_status(dev, RPM_SUSPENDED);
 		pm_runtime_cancel_pending(dev);
-- 
1.7.9.5
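
For reference, the memalloc_noio_save()/memalloc_noio_restore() pair used
above is introduced earlier in this series. Below is a minimal sketch of how
such a pair can be implemented, assuming it works by toggling a
PF_MEMALLOC_NOIO bit in current->flags that the page allocator consults to
strip __GFP_IO/__GFP_FS from allocation masks; the statement-style calls in
the hunks above suggest macros, but the exact v4 definitions may differ:

	/* sketch only, not the literal series code */
	#include <linux/sched.h>

	#define memalloc_noio_save(noio_flag) do {			\
		/* remember whether the bit was already set ... */	\
		(noio_flag) = current->flags & PF_MEMALLOC_NOIO;	\
		/* ... then force no-I/O allocation for this task */	\
		current->flags |= PF_MEMALLOC_NOIO;			\
	} while (0)

	#define memalloc_noio_restore(noio_flag) do {			\
		/* put the bit back exactly as it was found */		\
		current->flags = (current->flags & ~PF_MEMALLOC_NOIO)	\
				 | (noio_flag);				\
	} while (0)

Saving the previous bit value makes the pair safe to nest, which matters
here because resuming a device runtime-resumes its ancestors in the same
task context, so a callback can run with the flag already set.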
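The dev->power.memalloc_noio bit tested above is expected to be set by the
driver for a block device (or an iSCSI-backing network device) and for its
ancestors. Assuming the opt-in helper introduced by a companion patch in
this series, driver usage would look roughly like:

	/* probe(): runtime PM callbacks along this device's path must not
	 * trigger I/O from memory allocation */
	pm_runtime_set_memalloc_noio(dev, true);

	/* remove(): drop the restriction again */
	pm_runtime_set_memalloc_noio(dev, false);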