Date:	Wed, 17 Aug 2011 23:07:29 -0700
From:	Greg Thelen <gthelen@...gle.com>
To:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	containers@...ts.osdl.org, linux-fsdevel@...r.kernel.org,
	Balbir Singh <bsingharora@...il.com>,
	Daisuke Nishimura <nishimura@....nes.nec.co.jp>,
	Minchan Kim <minchan.kim@...il.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Wu Fengguang <fengguang.wu@...el.com>,
	Dave Chinner <david@...morbit.com>,
	Vivek Goyal <vgoyal@...hat.com>,
	Andrea Righi <andrea@...terlinux.com>,
	Ciju Rajan K <ciju@...ux.vnet.ibm.com>,
	David Rientjes <rientjes@...gle.com>
Subject: Re: [PATCH v9 03/13] memcg: add dirty page accounting infrastructure

KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> writes:

> On Wed, 17 Aug 2011 09:14:55 -0700
> Greg Thelen <gthelen@...gle.com> wrote:
>
>> Add memcg routines to count dirty, writeback, and unstable_NFS pages.
>> These routines are not yet used by the kernel to count such pages.  A
>> later change adds kernel calls to these new routines.
>> 
>> As inode pages are marked dirty, if the dirtied page's cgroup differs
>> from the inode's cgroup, then mark the inode as shared across several
>> cgroups.
>> 
>> Signed-off-by: Greg Thelen <gthelen@...gle.com>
>> Signed-off-by: Andrea Righi <andrea@...terlinux.com>
>
> Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
>
> A nitpick..
>
>
>
>> +static inline
>> +void mem_cgroup_move_account_page_stat(struct mem_cgroup *from,
>> +				       struct mem_cgroup *to,
>> +				       enum mem_cgroup_stat_index idx)
>> +{
>> +	preempt_disable();
>> +	__this_cpu_dec(from->stat->count[idx]);
>> +	__this_cpu_inc(to->stat->count[idx]);
>> +	preempt_enable();
>> +}
>> +
>
> this_cpu_dec()
> this_cpu_inc()
>
> without preempt_disable/enable will work. A CPU change between the dec and
> the inc will not be a problem.
>
> Thanks,
> -Kame

I agree, but this fix is general cleanup that seems independent of
memcg dirty accounting.  This preemption disable/enable pattern already
exists before this patch series in both mem_cgroup_charge_statistics()
and mem_cgroup_move_account(), and for consistency we should change
both.  To keep the dirty page accounting series simple, I would like to
make these changes outside of this series.  On x86, this_cpu_dec/inc()
looks equivalent to __this_cpu_dec/inc(), so I assume the only trade-off
is that preemptible non-x86 architectures using the generic this_cpu_*()
implementation will internally disable/enable preemption in each
this_cpu_*() operation.

I'll submit a cleanup patch outside of the dirty limit patches for this.
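
For reference, the cleanup I have in mind is roughly the following
(a sketch only, untested):

static inline
void mem_cgroup_move_account_page_stat(struct mem_cgroup *from,
				       struct mem_cgroup *to,
				       enum mem_cgroup_stat_index idx)
{
	/*
	 * this_cpu_dec/inc() provide their own preemption protection,
	 * so the explicit preempt_disable()/preempt_enable() pair is
	 * unnecessary.  A CPU change between the dec and the inc only
	 * means the two updates land in different per-cpu counters,
	 * which is fine because the counters are summed over all CPUs
	 * when read.
	 */
	this_cpu_dec(from->stat->count[idx]);
	this_cpu_inc(to->stat->count[idx]);
}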
