Date: Sun, 21 Jun 2009 08:18:47 +0200
From: Pavel Machek <pavel@....cz>
To: Benjamin Herrenschmidt <benh@...nel.crashing.org>
Cc: Pekka J Enberg <penberg@...helsinki.fi>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, mingo@...e.hu, npiggin@...e.de,
	akpm@...ux-foundation.org, cl@...ux-foundation.org,
	torvalds@...ux-foundation.org, "Rafael J. Wysocki" <rjw@...k.pl>
Subject: Re: [PATCH v2] slab,slub: ignore __GFP_WAIT if we're booting or suspending

Hi!

> > Academic for boot, probably real for suspend/resume. There the atomic
> > reserves could matter because the memory can be pretty full when you
> > start suspend.
>
> Right, that might be something to look into, though we haven't yet
> applied the technique for suspend & resume. My main issue with it at the
> moment is how do I synchronize with allocations that are already
> sleeping when changing the gfp flag mask without bloating the normal

Well, but the problem already exists, no? If someone is already sleeping
due to __GFP_WAIT, he'll probably sleep till the resume.

...well, if he's sleeping in the disk driver, I suspect the driver will
finish outstanding requests as part of its .suspend() callback.

> I also suspect that we might want to try to make -some- amount of free
> space before starting suspend, though of course not nearly as
> aggressively as with std.

We free 4MB in 2.6.30, but Rafael is removing that for 2.6.31 :-(.

	Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/