Date:	Tue, 15 Jul 2014 17:18:18 +0200
From:	Michal Hocko <mhocko@...e.cz>
To:	Johannes Weiner <hannes@...xchg.org>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Hugh Dickins <hughd@...gle.com>, Tejun Heo <tj@...nel.org>,
	Vladimir Davydov <vdavydov@...allels.com>, linux-mm@...ck.org,
	cgroups@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [patch 13/13] mm: memcontrol: rewrite uncharge API

On Tue 15-07-14 11:09:37, Johannes Weiner wrote:
> On Tue, Jul 15, 2014 at 04:23:50PM +0200, Michal Hocko wrote:
> > On Tue 15-07-14 10:25:45, Michal Hocko wrote:
[...]
> > > @@ -2760,15 +2752,15 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg,
> > >  		spin_unlock_irq(&zone->lru_lock);
> > >  	}
> > >  
> > > -	mem_cgroup_charge_statistics(memcg, page, anon, nr_pages);
> > > -	unlock_page_cgroup(pc);
> > > -
> > > +	local_irq_disable();
> > > +	mem_cgroup_charge_statistics(memcg, page, nr_pages);
> > >  	/*
> > >  	 * "charge_statistics" updated event counter. Then, check it.
> > >  	 * Insert ancestor (and ancestor's ancestors), to softlimit RB-tree.
> > >  	 * if they exceeds softlimit.
> > >  	 */
> > >  	memcg_check_events(memcg, page);
> > > +	local_irq_enable();
> > 
> > preempt_{enable,disable} should be sufficient for
> > mem_cgroup_charge_statistics and memcg_check_events, no?
> > The first one is about per-cpu accounting (and that should be atomic
> > wrt. IRQ on the same CPU) and the later one uses IRQ safe locks down in
> > mem_cgroup_update_tree.
> 
> How could it be atomic wrt. IRQ on the local CPU when IRQs that modify
> the counters can fire on the local CPU?

I meant that __this_cpu_add and __this_cpu_inc should be atomic wrt. IRQs.
We do not care that an IRQ might jump in between two per-cpu operations;
this is racy from other CPUs anyway.

> 
> > > @@ -780,11 +780,14 @@ static int move_to_new_page(struct page *newpage, struct page *page,
> > >  		rc = fallback_migrate_page(mapping, newpage, page, mode);
> > >  
> > >  	if (rc != MIGRATEPAGE_SUCCESS) {
> > > -		newpage->mapping = NULL;
> > > +		if (!PageAnon(newpage))
> > > +			newpage->mapping = NULL;
> > 
> > OK, I am probably washed out from looking into this for too long, but I
> > cannot figure out why you have done this...
> 
> mem_cgroup_uncharge() relies on PageAnon() working.  Usually, anon
> pages retain their page->mapping until they hit the page allocator,
> the exception was old migration pages.

OK, got it now. I was surprised by the change in the !memcg path. Maybe
this is worth a comment?
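Something along these lines, perhaps (wording is just a suggestion):

	if (rc != MIGRATEPAGE_SUCCESS) {
		/*
		 * Keep page->mapping on anon pages: mem_cgroup_uncharge()
		 * relies on PageAnon() to tell anon and file pages apart
		 * until the page is freed back to the allocator.
		 */
		if (!PageAnon(newpage))
			newpage->mapping = NULL;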

-- 
Michal Hocko
SUSE Labs
