Message-ID: <alpine.DEB.2.21.2001170125350.20618@chino.kir.corp.google.com>
Date:   Fri, 17 Jan 2020 01:31:50 -0800 (PST)
From:   David Rientjes <rientjes@...gle.com>
To:     Michal Hocko <mhocko@...nel.org>
cc:     Kirill Tkhai <ktkhai@...tuozzo.com>,
        Wei Yang <richardw.yang@...ux.intel.com>, hannes@...xchg.org,
        vdavydov.dev@...il.com, akpm@...ux-foundation.org,
        kirill.shutemov@...ux.intel.com, yang.shi@...ux.alibaba.com,
        cgroups@...r.kernel.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, alexander.duyck@...il.com,
        stable@...r.kernel.org
Subject: Re: [Patch v3] mm: thp: grab the lock before manipulation defer list

On Fri, 17 Jan 2020, Michal Hocko wrote:

> On Thu 16-01-20 14:01:59, David Rientjes wrote:
> > On Thu, 16 Jan 2020, Kirill Tkhai wrote:
> > 
> > > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > > index c5b5f74cfd4d..6450bbe394e2 100644
> > > > --- a/mm/memcontrol.c
> > > > +++ b/mm/memcontrol.c
> > > > @@ -5360,10 +5360,12 @@ static int mem_cgroup_move_account(struct page *page,
> > > >  	}
> > > >  
> > > >  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > > > -	if (compound && !list_empty(page_deferred_list(page))) {
> > > > +	if (compound) {
> > > >  		spin_lock(&from->deferred_split_queue.split_queue_lock);
> > > > -		list_del_init(page_deferred_list(page));
> > > > -		from->deferred_split_queue.split_queue_len--;
> > > > +		if (!list_empty(page_deferred_list(page))) {
> > > > +			list_del_init(page_deferred_list(page));
> > > > +			from->deferred_split_queue.split_queue_len--;
> > > > +		}
> > > >  		spin_unlock(&from->deferred_split_queue.split_queue_lock);
> > > >  	}
> > > >  #endif
> > > > @@ -5377,11 +5379,13 @@ static int mem_cgroup_move_account(struct page *page,
> > > >  	page->mem_cgroup = to;
> > > >  
> > > >  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > > > -	if (compound && list_empty(page_deferred_list(page))) {
> > > > +	if (compound) {
> > > >  		spin_lock(&to->deferred_split_queue.split_queue_lock);
> > > > -		list_add_tail(page_deferred_list(page),
> > > > -			      &to->deferred_split_queue.split_queue);
> > > > -		to->deferred_split_queue.split_queue_len++;
> > > > +		if (list_empty(page_deferred_list(page))) {
> > > > +			list_add_tail(page_deferred_list(page),
> > > > +				      &to->deferred_split_queue.split_queue);
> > > > +			to->deferred_split_queue.split_queue_len++;
> > > > +		}
> > > >  		spin_unlock(&to->deferred_split_queue.split_queue_lock);
> > > >  	}
> > > >  #endif
> > > 
> > > The patch looks OK to me. But there is another question. I forget why we
> > > unconditionally add a page with an empty deferred list to the destination's
> > > deferred_split_queue. Shouldn't we also check that it was initially on the
> > > list? Something like:
> > > 
> > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > index d4394ae4e5be..0be0136adaa6 100644
> > > --- a/mm/memcontrol.c
> > > +++ b/mm/memcontrol.c
> > > @@ -5289,6 +5289,7 @@ static int mem_cgroup_move_account(struct page *page,
> > >  	struct pglist_data *pgdat;
> > >  	unsigned long flags;
> > >  	unsigned int nr_pages = compound ? hpage_nr_pages(page) : 1;
> > > +	bool split = false;
> > >  	int ret;
> > >  	bool anon;
> > >  
> > > @@ -5346,6 +5347,7 @@ static int mem_cgroup_move_account(struct page *page,
> > >  		if (!list_empty(page_deferred_list(page))) {
> > >  			list_del_init(page_deferred_list(page));
> > >  			from->deferred_split_queue.split_queue_len--;
> > > +			split = true;
> > >  		}
> > >  		spin_unlock(&from->deferred_split_queue.split_queue_lock);
> > >  	}
> > > @@ -5360,7 +5362,7 @@ static int mem_cgroup_move_account(struct page *page,
> > >  	page->mem_cgroup = to;
> > >  
> > >  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > > -	if (compound) {
> > > +	if (compound && split) {
> > >  		spin_lock(&to->deferred_split_queue.split_queue_lock);
> > >  		if (list_empty(page_deferred_list(page))) {
> > >  			list_add_tail(page_deferred_list(page),
> > > 
> > 
> > I think that's a good point, especially considering that the current code 
> > appears to unconditionally place any compound page on the deferred split 
> > queue of the destination memcg.  The correct list that it should appear 
> > on, I believe, depends on whether the pmd has been split for the process 
> > being moved: note the MC_TARGET_PAGE caveat in 
> > mem_cgroup_move_charge_pte_range() that does not move the charge for 
> > compound pages with split pmds.  So when mem_cgroup_move_account() is 
> > called with compound == true, we're moving the charge of the entire 
> > compound page: why would it appear on that memcg's deferred split queue?
> 
> I believe Kirill asked how we know, just from the list_empty check, that
> the page should actually be added to the deferred list. In other words,
> what if the page hasn't been split at all?
> 

Right, and I don't think that it necessarily has been split; the second 
conditional in Wei's patch will always succeed unless we have raced.  That 
patch addresses a locking concern, but I think Kirill's question has 
uncovered something more interesting.
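
For clarity, here is roughly what the two relevant hunks of
mem_cgroup_move_account() look like with Wei's locking fix and Kirill T's
suggested check applied together; this is reconstructed from the diffs
quoted above rather than taken from a tree, so the context lines are
approximate:

	bool split = false;
	...
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	if (compound) {
		spin_lock(&from->deferred_split_queue.split_queue_lock);
		/* check emptiness only while holding the queue lock */
		if (!list_empty(page_deferred_list(page))) {
			list_del_init(page_deferred_list(page));
			from->deferred_split_queue.split_queue_len--;
			split = true;
		}
		spin_unlock(&from->deferred_split_queue.split_queue_lock);
	}
#endif
	...
	page->mem_cgroup = to;

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	/* only requeue the page if it was on "from"'s queue to begin with */
	if (compound && split) {
		spin_lock(&to->deferred_split_queue.split_queue_lock);
		if (list_empty(page_deferred_list(page))) {
			list_add_tail(page_deferred_list(page),
				      &to->deferred_split_queue.split_queue);
			to->deferred_split_queue.split_queue_len++;
		}
		spin_unlock(&to->deferred_split_queue.split_queue_lock);
	}
#endif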

Kirill S would definitely be best placed to answer Kirill T's question, 
but from my understanding, when mem_cgroup_move_account() is called with 
compound == true we always have an intact pmd (the current charge 
migration implementation never migrates partial page charges for pages 
on the deferred split queue), and thus the underlying page is not 
eligible to be split and shouldn't be on the deferred split queue.
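
(For reference, the caveat in question is the pte-walk path of
mem_cgroup_move_charge_pte_range() skipping pte-mapped compound pages;
the following is paraphrased from memory of the v5.5-era code, a sketch
rather than a verbatim excerpt:

		case MC_TARGET_PAGE:
			page = target.page;
			/*
			 * We can have a part of the split pmd here.
			 * Moving it could be done, but it would be too
			 * convoluted, so simply ignore such a partial
			 * THP and keep it in the original memcg.
			 */
			if (PageTransCompound(page))
				goto put;

The pmd-level path, by contrast, only moves the charge when the whole
compound page is the target, so compound == true in
mem_cgroup_move_account() implies an intact pmd.)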

In other words, a page being on the deferred split queue for a memcg 
should only happen when it is charged to that memcg.  (This wasn't the 
case when we only had per-node split queues.)  I think that invariant is 
currently broken in mem_cgroup_move_account(), even before Wei's patch.
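
A hypothetical debug helper (not existing kernel code) expressing that
invariant might look like:

	/*
	 * Hypothetical assertion of the invariant described above: a
	 * page sitting on a per-memcg deferred split queue should be
	 * charged to that same memcg.
	 */
	static inline void check_deferred_split_invariant(struct page *page,
							  struct mem_cgroup *memcg)
	{
		if (!list_empty(page_deferred_list(page)))
			VM_BUG_ON_PAGE(page->mem_cgroup != memcg, page);
	}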
