Date: Fri, 8 Oct 2010 14:12:01 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Daisuke Nishimura <nishimura@....nes.nec.co.jp>,
	Minchan Kim <minchan.kim@...il.com>,
	Greg Thelen <gthelen@...gle.com>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	containers@...ts.osdl.org,
	Andrea Righi <arighi@...eler.com>,
	Balbir Singh <balbir@...ux.vnet.ibm.com>
Subject: Re: [PATCH v2] memcg: reduce lock time at move charge (Was Re: [PATCH 04/10] memcg: disable local interrupts in lock_page_cgroup())

On Thu, 7 Oct 2010 21:55:56 -0700
Andrew Morton <akpm@...ux-foundation.org> wrote:

> On Fri, 8 Oct 2010 13:37:12 +0900
> KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> wrote:
>
> > On Thu, 7 Oct 2010 16:14:54 -0700
> > Andrew Morton <akpm@...ux-foundation.org> wrote:
> >
> > > On Thu, 7 Oct 2010 17:04:05 +0900
> > > KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> wrote:
> > >
> > > > Now, at task migration among cgroups, the memory cgroup scans the
> > > > page table and moves the accounting if flags are properly set.
> > > >
> > > > The core code, mem_cgroup_move_charge_pte_range(), does
> > > >
> > > > 	pte_offset_map_lock();
> > > > 	for all ptes in a page table:
> > > > 		1. look into page table, find_and_get a page
> > > > 		2. remove it from LRU.
> > > > 		3. move charge.
> > > > 		4. putback to LRU. put_page()
> > > > 	pte_offset_map_unlock();
> > > >
> > > > for pte entries on a 3rd level? page table.
> > > >
> > > > This pte_offset_map_lock seems a bit long. This patch modifies the
> > > > routine as
> > > >
> > > > 	for 32 pages:
> > > > 		pte_offset_map_lock()
> > > > 		find_and_get a page
> > > > 		record it
> > > > 		pte_offset_map_unlock()
> > > > 	for all recorded pages:
> > > > 		isolate it from LRU
> > > > 		move charge
> > > > 		putback to LRU
> > > > 	for all recorded pages:
> > > > 		put_page()
> > >
> > > The patch makes the code larger, more complex and slower!
> > >
> >
> > Slower ?
>
> Sure.
> It walks the same data three times, potentially causing
> thrashing in the L1 cache.

Hmm, I'll make this 2 times, at least.

> It takes and releases locks at a higher frequency. It increases the
> text size.

But I don't think page_table_lock is a lock which someone can hold so long
as to do

	1. find_get_page
	2. spin_lock(zone->lock)
	3. remove it from LRU
	4. lock_page_cgroup()
	5. move charge  (this means page_table_lock -> lock_page_cgroup() nesting)
	6. putback to LRU

for 4096/8=512 pte entries.

I will try to make the routine smarter. But I want to get rid of the
page_table_lock -> lock_page_cgroup() nesting.

Thanks,
-Kame
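[Archive note: the batching pattern argued over above can be sketched in plain userspace C. This is only an illustration, not the kernel code: `struct page`, `move_charge()`, `BATCH`, and `move_charge_range()` here are stand-ins I made up, a pthread mutex stands in for the page-table spinlock, and the three passes of the patch are folded into two, as Kamezawa says he intends. The point is the shape of the change: record a small batch of entries under a short critical section, then do the heavy per-page work with the lock dropped.]

```c
/*
 * Userspace sketch of the "record under lock, work unlocked" pattern.
 * All names are illustrative; pthread_mutex stands in for the
 * page-table spinlock.
 */
#include <pthread.h>
#include <stddef.h>

#define BATCH 32          /* pages recorded per critical section */
#define NPTES 512         /* 4096-byte page table / 8-byte pte    */

static pthread_mutex_t page_table_lock = PTHREAD_MUTEX_INITIALIZER;

struct page { int charged; };

/* stand-in for the expensive step (isolate, move charge, putback) */
static void move_charge(struct page *p)
{
	p->charged = 1;
}

/* Walk n pages, batching BATCH at a time; returns pages processed. */
static size_t move_charge_range(struct page *pages, size_t n)
{
	struct page *record[BATCH];
	size_t done = 0;

	while (done < n) {
		size_t batch = 0;

		/* phase 1: short critical section, just find and record */
		pthread_mutex_lock(&page_table_lock);
		while (batch < BATCH && done + batch < n) {
			record[batch] = &pages[done + batch];
			batch++;
		}
		pthread_mutex_unlock(&page_table_lock);

		/* phase 2: heavy per-page work with the lock dropped */
		for (size_t i = 0; i < batch; i++)
			move_charge(record[i]);

		done += batch;
	}
	return done;
}
```

The trade-off Andrew points at is visible here: the data is walked once per phase rather than once total, but no single critical section spans more than BATCH entries.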