Message-ID: <20111107175303.GI2332@linux.vnet.ibm.com>
Date:	Mon, 7 Nov 2011 09:53:03 -0800
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Stephane Eranian <eranian@...gle.com>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Li Zefan <lizf@...fujitsu.com>, Ingo Molnar <mingo@...e.hu>,
	eric.dumazet@...il.com, shaohua.li@...el.com, ak@...ux.intel.com,
	mhocko@...e.cz, alex.shi@...el.com, efault@....de,
	linux-kernel@...r.kernel.org, Paul Turner <pjt@...gle.com>
Subject: Re: [GIT PULL rcu/next] RCU commits for 3.1

On Mon, Nov 07, 2011 at 05:12:50PM +0000, Stephane Eranian wrote:
> Paul,
> 
> On Mon, Nov 7, 2011 at 4:56 PM, Paul E. McKenney
> <paulmck@...ux.vnet.ibm.com> wrote:
> > On Mon, Nov 07, 2011 at 05:35:56PM +0100, Peter Zijlstra wrote:
> >> On Mon, 2011-11-07 at 16:16 +0000, Stephane Eranian wrote:
> >> > On Mon, Nov 7, 2011 at 3:15 PM, Peter Zijlstra <peterz@...radead.org> wrote:
> >> > > So far nobody seems to have stated if this is an actual problem or just
> >> > > shutting up lockdep-prove-rcu? I very much suspect the latter, in which
> >> > > case I really utterly hate the patch because it adds instructions to
> >> > > fast-paths just to kill a debug warning.
> >> > >
> >> > I think the core issue at stake here is not so much the cgroup
> >> > disappearing.  It cannot go away because it is ref-counted (perf_events
> >> > does the necessary css_get()/css_put()).  Rather, it is the task
> >> > disappearing while we are operating on its state.
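
For reference, the pinning referred to above is just the usual get/put
pairing on the css.  A minimal sketch, with a made-up holder struct and
function names; only css_get()/css_put() are the real primitives:

#include <linux/cgroup.h>

/* Illustrative holder of a pinned css; not a perf_events type. */
struct css_holder {
	struct cgroup_subsys_state *css;
};

static void holder_pin_cgroup(struct css_holder *h,
			      struct cgroup_subsys_state *css)
{
	css_get(css);		/* cgroup cannot be freed ... */
	h->css = css;
}

static void holder_unpin_cgroup(struct css_holder *h)
{
	css_put(h->css);	/* ... until this matching put */
	h->css = NULL;
}

perf_events does the equivalent when an event attaches to and detaches
from a cgroup, which is why the cgroup itself is not the object whose
lifetime is in question here.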
> >> >
> >> > I don't think the task (prev or next) can disappear while we execute
> >> > perf_cgroup_sched_out()/perf_cgroup_sched_in(), because we are in the
> >> > context switch code.
> >>
> >> Right.
> >>
> >> > What remains is:
> >> >   * update_cgrp_time_from_event()
> >> >     always operates on the current task
> >> >
> >> >   * perf_cgroup_set_timestamp()
> >> >     - perf_event_task_tick() -> cpu_ctx_sched_in(), but in this case
> >> >       it is on the current task
> >> >     - perf_event_task_sched_in(), in context switch code, so I assume
> >> >       it is safe
> >> >     - __perf_event_enable(), but it is called on current
> >> >
> >> >   * perf_cgroup_switch()
> >> >     - perf_cgroup_sched_in()/perf_cgroup_sched_out() -> context switch code
> >> >     - perf_cgroup_attach()
> >> >       Called from cgroup code.  Does not appear to hold task_lock().
> >> >       The routine already grabs rcu_read_lock(), but is that enough to
> >> >       guarantee the task cannot vanish?  I would hope so; otherwise I
> >> >       think the cgroup attach code has a problem.
> >>
> >> yeah, task_struct is rcu-freed
> >
> > But we are not in an RCU read-side critical section, otherwise the splat
> > would not have happened.  Or did I miss a turn in the analysis roadmap
> > above?
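
For context, the splat comes from the lockdep-RCU checking built into
rcu_dereference_check(): it warns when the dereference happens neither
inside an RCU read-side critical section nor under the extra condition
passed to it.  A rough sketch, with an illustrative helper name and
condition rather than the exact ones the cgroup code uses:

#include <linux/rcupdate.h>
#include <linux/sched.h>

/*
 * Sketch only.  task->cgroups and task->alloc_lock are real fields;
 * the helper name and the choice of condition are illustrative.
 */
static struct css_set *example_task_css_set(struct task_struct *task)
{
	/*
	 * Splats unless we are inside rcu_read_lock()/rcu_read_unlock()
	 * or lockdep says task->alloc_lock is held.
	 */
	return rcu_dereference_check(task->cgroups,
				     lockdep_is_held(&task->alloc_lock));
}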
> >
> >> > In summary, unless I am mistaken, it looks to me that we may not need
> >> > those new rcu_read_lock() calls after all.
> >> >
> >> > Does anyone have a different analysis?
> >>
> >> The only other problem I could see is that perf_cgroup_sched_{in,out}
> >> can race against perf_cgroup_attach_task() and make the wrong decision.
> >> But then perf_cgroup_attach will call perf_cgroup_switch() to fix that
> >> up again.
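
Schematically, the fix-up described above is that the attach path, after
the task has already been moved to the new cgroup, forces a cgroup
switch itself.  A paraphrase rather than a copy of the actual
kernel/events/core.c code; perf_cgroup_switch() and the
PERF_CGROUP_SWOUT/PERF_CGROUP_SWIN flags exist there, the wrapper below
is illustrative:

/* Illustrative wrapper; not the real attach callback. */
static void example_cgroup_attach(struct task_struct *task)
{
	/* task->cgroups already points at the new cgroup here. */
	perf_cgroup_switch(task, PERF_CGROUP_SWOUT | PERF_CGROUP_SWIN);
}

So even if a concurrent sched-in made its decision against the stale
cgroup, the forced switch corrects it.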
> >
> > If this really is a false positive, what should be used to get rid of
> > the splats?
> >
> I think on that path:
> 
> >>> [<8108aa02>] perf_event_enable_on_exec+0x1d2/0x1e0
> >>> [<81063764>] ? __lock_release+0x54/0xb0
> >>> [<8108cca8>] perf_event_comm+0x18/0x60
> >>> [<810d1abd>] ? set_task_comm+0x5d/0x80
> >>> [<81af622d>] ? _raw_spin_unlock+0x1d/0x40
> >>> [<810d1ac4>] set_task_comm+0x64/0x80
> 
> We are neither holding rcu_read_lock() nor task_lock(), but we are
> operating on the current task.  The task cannot just vanish.  So the
> rcu_dereference() and lock_is_held() checks may report a false positive
> in that case.  Yet, I doubt this would be the only place....

In that case, could something like task==current be added to the
macro's check?  Perhaps this is what Peter was suggesting...
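
Concretely, taking the illustrative helper from the earlier sketch and
extending its condition (still not the real perf/cgroup code):

/*
 * Sketch of the suggestion: let the lockdep condition also accept
 * "we are looking at our own task", so dereferencing current's cgroup
 * pointer no longer splats.  Helper name and the first half of the
 * condition are illustrative.
 */
static struct css_set *example_task_css_set(struct task_struct *task)
{
	return rcu_dereference_check(task->cgroups,
				     lockdep_is_held(&task->alloc_lock) ||
				     task == current);
}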

							Thanx, Paul

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
