Message-ID: <20071026174409.GA1573@elf.ucw.cz>
Date: Fri, 26 Oct 2007 19:44:09 +0200
From: Pavel Machek <pavel@....cz>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Christoph Lameter <clameter@....com>, Daniel Phillips <phillips@...nq.net>,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	akpm@...ux-foundation.org, dkegel@...gle.com,
	David Miller <davem@...emloft.net>, Nick Piggin <npiggin@...e.de>
Subject: Re: [RFC 0/3] Recursive reclaim (on __PF_MEMALLOC)

Hi!

> > > or
> > >
> > > - have a global reserve and selectively serve sockets
> > >   (what I've been doing)
> >
> > That is a scalability problem on large systems! Global means global
> > serialization, cacheline bouncing and possibly livelocks. If we get
> > into this global shortage then all cpus may end up taking the same
> > locks, cycling through the same allocation paths.
>
> Dude, breathe, these boxen of yours will never swap over network simply
> because you never configure swap.
>
> And, _no_, it does not necessarily mean global serialisation. By simply
> saying there must be N pages available I say nothing about on which node
> they should be available, and the way the watermarks work they will be
> evenly distributed over the appropriate zones.

Agreed. Scalability of the emergency swapping reserves is simply
unimportant. Please, let's get swapping to _work_ first; then we can
make it faster.

No, I do not think we'll ever see a livelock on this.

								Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
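Peter's point about the watermarks can be made concrete with a small
sketch. The userspace program below is only an illustration of the idea,
assuming a simplified zone model: `struct fake_zone`, `distribute_reserve()`
and the page counts are all hypothetical names and numbers, not the real
mm/ interfaces. It shows how a global reserve of N pages can be split
across zones in proportion to their size, so the reserve lives in per-zone
watermarks rather than in one globally serialised pool.

	/*
	 * Minimal userspace sketch (not actual kernel code): a global
	 * reserve of N pages says nothing about *which* node holds them.
	 * Carving it into per-zone reserves proportional to zone size
	 * avoids a single global pool and hence global serialisation.
	 */
	#include <stdio.h>

	struct fake_zone {
		const char *name;
		unsigned long present_pages;	/* pages the zone manages */
		unsigned long reserve_pages;	/* emergency reserve here */
	};

	/*
	 * Spread a global reserve over zones proportionally to their
	 * size, the way per-zone watermarks are derived from one global
	 * figure.
	 */
	static void distribute_reserve(struct fake_zone *zones, int nr_zones,
				       unsigned long global_reserve)
	{
		unsigned long total = 0;
		int i;

		for (i = 0; i < nr_zones; i++)
			total += zones[i].present_pages;

		for (i = 0; i < nr_zones; i++)
			zones[i].reserve_pages = global_reserve *
				zones[i].present_pages / total;
	}

	int main(void)
	{
		struct fake_zone zones[] = {
			{ "node0/DMA",    4096,   0 },
			{ "node0/Normal", 262144, 0 },
			{ "node1/Normal", 262144, 0 },
		};
		int i;

		/* hypothetical 1024-page global emergency reserve */
		distribute_reserve(zones, 3, 1024);

		for (i = 0; i < 3; i++)
			printf("%-14s reserve = %lu pages\n",
			       zones[i].name, zones[i].reserve_pages);
		return 0;
	}

Under this model no CPU ever contends on one shared reserve: an
allocation dipping into the reserve only touches the watermark of the
zone it allocates from, which is why the per-zone scheme need not imply
global cacheline bouncing.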