Date: Fri, 14 Jun 2013 14:32:11 +0000
From: Christoph Lameter <cl@...two.org>
To: Roman Gushchin <klamm@...dex-team.ru>
cc: penberg@...nel.org, mpm@...enic.com, akpm@...ux-foundation.org,
	mgorman@...e.de, rientjes@...gle.com, glommer@...allels.com,
	hannes@...xchg.org, minchan@...nel.org, jiang.liu@...wei.com,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] slub: Avoid direct compaction if possible

On Fri, 14 Jun 2013, Roman Gushchin wrote:

> Slub tries to allocate contiguous pages even if memory is fragmented and
> there are no free contiguous pages. In this case it invokes direct
> compaction to allocate a contiguous page. Compaction requires taking some
> heavily contended locks (e.g. zone locks), so running compaction
> simultaneously on several processors (directly and via kswapd) can cause
> serious performance issues.

The main thing this patch does is add a nocompact flag to the page
allocator. That needs to be a separate patch.

Also fix the description: slub does not invoke compaction. The page
allocator initiates compaction under certain conditions.

> It's possible to avoid such problems (or at least make them less likely)
> by avoiding direct compaction. If a contiguous page cannot be allocated
> without compaction, slub will fall back to order-0 page(s). In this case
> kswapd will be woken to perform asynchronous compaction, so slub can
> return to default-order allocations as soon as memory has been
> defragmented.

Sounds like a good idea. Do you have some numbers to show the effect of
this patch?
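For readers following the thread, the fallback pattern under discussion looks
roughly like the sketch below, simplified from allocate_slab() in mm/slub.c.
The __GFP_NO_COMPACT flag is hypothetical, standing in for the nocompact hint
the patch proposes to add to the page allocator; the high-order-then-minimum-
order retry structure mirrors the existing slub code.

/*
 * Simplified sketch of allocate_slab() from mm/slub.c; not the patch itself.
 * __GFP_NO_COMPACT is a hypothetical flag standing in for the proposed
 * "nocompact" hint to the page allocator.
 */
static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
{
	struct kmem_cache_order_objects oo = s->oo;
	gfp_t alloc_gfp;
	struct page *page;

	/*
	 * First attempt: the preferred (possibly high) order. Ask the page
	 * allocator not to start direct compaction and to fail fast rather
	 * than retry hard.
	 */
	alloc_gfp = (flags | __GFP_NOWARN | __GFP_NORETRY | __GFP_NO_COMPACT) &
			~__GFP_NOFAIL;
	page = alloc_slab_page(alloc_gfp, node, oo);
	if (unlikely(!page)) {
		/*
		 * Fall back to the minimum order that still fits the
		 * objects (order 0 for most caches). kswapd compacts
		 * asynchronously in the background, so later allocations
		 * may succeed again at the preferred order.
		 */
		oo = s->min;
		page = alloc_slab_page(flags, node, oo);
	}
	return page;
}

The net effect is the behavior Roman describes: under fragmentation, slub
degrades immediately to order-0 allocations instead of stalling several CPUs
on contended zone locks, and returns to high-order slabs once asynchronous
compaction has caught up.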