Message-ID: <xr93aassvoco.fsf@ninji.mtv.corp.google.com>
Date: Sat, 24 Apr 2010 08:53:27 -0700
From: Greg Thelen <gthelen@...gle.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Daisuke Nishimura <nishimura@....nes.nec.co.jp>,
Vivek Goyal <vgoyal@...hat.com>, balbir@...ux.vnet.ibm.com,
Andrea Righi <arighi@...eler.com>,
Trond Myklebust <trond.myklebust@....uio.no>,
Suleiman Souhlal <suleiman@...gle.com>,
"Kirill A. Shutemov" <kirill@...temov.name>,
Andrew Morton <akpm@...ux-foundation.org>,
containers@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH -mmotm 1/5] memcg: disable irq at page cgroup lock
Peter Zijlstra <peterz@...radead.org> writes:
> On Fri, 2010-04-23 at 13:17 -0700, Greg Thelen wrote:
>> - lock_page_cgroup(pc);
>> + /*
>> + * Unless the page's cgroup reassignment is possible, avoid grabbing
>> + * the lock used to protect the cgroup assignment.
>> + */
>> + rcu_read_lock();
>
> Where is the matching barrier?
Good catch. A call to smp_wmb() belongs in
mem_cgroup_begin_page_cgroup_reassignment() like so:
static void mem_cgroup_begin_page_cgroup_reassignment(void)
{
VM_BUG_ON(mem_cgroup_account_move_ongoing);
mem_cgroup_account_move_ongoing = true;
smp_wmb();
synchronize_rcu();
}
I'll add this to the patch.
>> + smp_rmb();
>> + if (unlikely(mem_cgroup_account_move_ongoing)) {
>> + local_irq_save(flags);
>
> So the added irq-disable is a bug-fix?
The irq-disable is not needed by the current code, only by the upcoming
per-memcg dirty page accounting, which will refactor
mem_cgroup_update_file_mapped() into a generic memcg stat update
routine. I assume these locking changes should be bundled with the
dependent memcg dirty page accounting changes, which need the ability to
update counters from irq routines. I'm sorry I didn't make that
clearer.
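For reference, the fast/slow-path shape of the patch can be sketched in userspace like this (my analogue only: a mutex stands in for lock_page_cgroup() plus local_irq_save(), and the atomic flag stands in for mem_cgroup_account_move_ongoing):

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stdbool.h>

static atomic_bool account_move_ongoing;
static pthread_mutex_t page_cgroup_lock = PTHREAD_MUTEX_INITIALIZER;
static long file_mapped_stat;

void update_file_mapped(int val)
{
    bool locked = false;

    /* Fast path: no reassignment in flight, so skip the lock
     * entirely (kernel: just rcu_read_lock() + smp_rmb()). */
    if (atomic_load_explicit(&account_move_ongoing,
                             memory_order_acquire)) {
        /* Slow path: reassignment possible, take the lock
         * (kernel: local_irq_save() + lock_page_cgroup()). */
        pthread_mutex_lock(&page_cgroup_lock);
        locked = true;
    }

    file_mapped_stat += val;   /* stands in for __this_cpu_inc/dec */

    if (locked)
        pthread_mutex_unlock(&page_cgroup_lock);
}
```

The point of the design is that the common case pays only a flag test, while the rare reassignment window forces everyone onto the locked path.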
>> + lock_page_cgroup(pc);
>> + locked = true;
>> + }
>> +
>> mem = pc->mem_cgroup;
>> if (!mem || !PageCgroupUsed(pc))
>> goto done;
>> @@ -1449,6 +1468,7 @@ void mem_cgroup_update_file_mapped(struct page *page, int val)
>> /*
>> * Preemption is already disabled. We can use __this_cpu_xxx
>> */
>> + VM_BUG_ON(preemptible());
>
> Insta-bug here, there is nothing guaranteeing we're not preemptible
> here.
My addition of VM_BUG_ON() was to programmatically assert what the
comment was already asserting. All callers of
mem_cgroup_update_file_mapped() hold the pte spinlock, which disables
preemption, so I don't think this VM_BUG_ON() will trigger. A function
level comment for mem_cgroup_update_file_mapped() declaring that
"callers must have preemption disabled" will be added to make this
clearer.
>> if (val > 0) {
>> __this_cpu_inc(mem->stat->count[MEM_CGROUP_STAT_FILE_MAPPED]);
>> SetPageCgroupFileMapped(pc);
>> @@ -1458,7 +1478,11 @@ void mem_cgroup_update_file_mapped(struct page *page, int val)
>> }
>>
>> done:
>> - unlock_page_cgroup(pc);
>> + if (unlikely(locked)) {
>> + unlock_page_cgroup(pc);
>> + local_irq_restore(flags);
>> + }
>> + rcu_read_unlock();
>> }
--
Greg