Message-ID: <20140715155600.GA29269@dhcp22.suse.cz>
Date: Tue, 15 Jul 2014 17:56:00 +0200
From: Michal Hocko <mhocko@...e.cz>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Hugh Dickins <hughd@...gle.com>, Tejun Heo <tj@...nel.org>,
Vladimir Davydov <vdavydov@...allels.com>, linux-mm@...ck.org,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [patch 13/13] mm: memcontrol: rewrite uncharge API
On Tue 15-07-14 11:46:43, Johannes Weiner wrote:
> On Tue, Jul 15, 2014 at 05:18:18PM +0200, Michal Hocko wrote:
> > On Tue 15-07-14 11:09:37, Johannes Weiner wrote:
> > > On Tue, Jul 15, 2014 at 04:23:50PM +0200, Michal Hocko wrote:
> > > > On Tue 15-07-14 10:25:45, Michal Hocko wrote:
> > [...]
> > > > > @@ -2760,15 +2752,15 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg,
> > > > > spin_unlock_irq(&zone->lru_lock);
> > > > > }
> > > > >
> > > > > - mem_cgroup_charge_statistics(memcg, page, anon, nr_pages);
> > > > > - unlock_page_cgroup(pc);
> > > > > -
> > > > > + local_irq_disable();
> > > > > + mem_cgroup_charge_statistics(memcg, page, nr_pages);
> > > > > /*
> > > > > * "charge_statistics" updated event counter. Then, check it.
> > > > > * Insert ancestor (and ancestor's ancestors), to softlimit RB-tree.
> > > > > * if they exceeds softlimit.
> > > > > */
> > > > > memcg_check_events(memcg, page);
> > > > > + local_irq_enable();
> > > >
> > > > preempt_{enable,disable} should be sufficient for
> > > > mem_cgroup_charge_statistics and memcg_check_events, no?
> > > > The first one is about per-cpu accounting (and that should be atomic
> > > > wrt. IRQ on the same CPU) and the latter one uses IRQ-safe locks down
> > > > in mem_cgroup_update_tree.
> > >
> > > How could it be atomic wrt. IRQ on the local CPU when IRQs that modify
> > > the counters can fire on the local CPU?
> >
> > I meant that __this_cpu_add and __this_cpu_inc should be atomic wrt. IRQ.
> > We do not care that an IRQ might jump in between two separate per-cpu
> > operations; those are racy from other CPUs anyway.
>
> It's really about a single RMW (+=) being interrupted by an IRQ.
> this_cpu_ guarantees IRQ-atomicity, but __this_cpu_ does not.
Yes, you are right. I was too x86-centric, where both add and inc are
really a single instruction. The generic implementation already shows
I was wrong.
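
To spell out where I went wrong, here is a simplified sketch (the
*_sketch macros below are hypothetical, not the verbatim
include/linux/percpu.h code):

/*
 * On a generic architecture __this_cpu_add() boils down to a plain C
 * read-modify-write, which an IRQ on the same CPU can interrupt
 * between the load and the store:
 */
#define __this_cpu_add_sketch(pcp, val)				\
do {								\
	typeof(pcp) __tmp = (pcp);	/* load */		\
	/* an IRQ firing here can += the same counter;	*/	\
	/* its update is lost when the store below runs	*/	\
	(pcp) = __tmp + (val);		/* store */		\
} while (0)

/*
 * this_cpu_add() brackets the same RMW with IRQ disabling, so the
 * whole += is atomic wrt. interrupts on the local CPU:
 */
#define this_cpu_add_sketch(pcp, val)				\
do {								\
	unsigned long __flags;					\
	local_irq_save(__flags);				\
	(pcp) += (val);						\
	local_irq_restore(__flags);				\
} while (0)

On x86 both variants compile down to a single addl on a %gs-prefixed
per-cpu address, which cannot be interrupted half-way, hence my
confusion above.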
Sorry for the noise!
--
Michal Hocko
SUSE Labs