Message-ID: <20080826225519.GC27724@x200.localdomain>
Date:	Wed, 27 Aug 2008 02:55:19 +0400
From:	Alexey Dobriyan <adobriyan@...il.com>
To:	Roland McGrath <roland@...hat.com>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] utrace core

On Tue, Aug 26, 2008 at 03:01:57PM -0700, Roland McGrath wrote:
> This adds the utrace facility, a new modular interface in the kernel for
> implementing user thread tracing and debugging.  This fits on top of the
> tracehook_* layer, so the new code is well-isolated.

I'll say this again: tracehook_* is a pointless abstraction, because there
will be no second generic tracing facility. Whoever writes a second one
will, with very high odds, be asked what is wrong with utrace.

> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1196,6 +1196,11 @@ struct task_struct {
>  #endif
>  	seccomp_t seccomp;
>  
> +#ifdef CONFIG_UTRACE
> +	struct utrace *utrace;
> +	unsigned long utrace_flags;
> +#endif

Again, embed struct utrace directly into task_struct. task_struct
lifetime rules are way more tested than struct utrace ones.

Add a simple spinlock guarding all accesses (OK, I haven't looked closely
enough to tell whether that's possible).
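
Something like this (a rough sketch of what I mean, not the posted patch;
field names are illustrative):

struct utrace {
	spinlock_t lock;		/* one lock guarding everything below */
	struct list_head attached;
	struct list_head attaching;
	unsigned long flags;
};

struct task_struct {
	...
#ifdef CONFIG_UTRACE
	struct utrace utrace;		/* embedded: dies with the task */
#endif
	...
};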

Nobody needs a hundred-line utrace_attach with CPU barriers.

Nobody needs RCU.

Nobody needs restart logic.

Reminder: that struct utrace double-free was P_I_T_A to debug.

I'll check whether the last utrace oops we talked about is still there and
whether the bogus patch was applied (sorry, haven't slept at all tonight),
and I'll run the attach/detach/exec program to confirm it still crashes.
There is PREEMPT_RCU now, so it will be even less funny.
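
Roughly, the test program is a dumb loop along these lines (sketch from
memory; the exact program may differ):

/* Stress attach/detach against a task that exec()s in a loop. */
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
	pid_t pid;

	if (argc > 1) {
		/* Re-exec'ed child: keep exec'ing forever. */
		for (;;)
			execl("/proc/self/exe", argv[0], "x", (char *)NULL);
	}

	pid = fork();
	if (pid == 0)
		for (;;)
			execl("/proc/self/exe", argv[0], "x", (char *)NULL);

	for (;;) {
		/* Attach, wait for the stop, detach, repeat. */
		if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == 0) {
			waitpid(pid, NULL, 0);
			ptrace(PTRACE_DETACH, pid, NULL, NULL);
		}
	}
}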

> --- /dev/null
> +++ b/kernel/utrace.c

> +/*
> + * Make sure target->utrace is allocated, and return with it locked on
> + * success.  This function mediates startup races.  The creating parent
> + * task has priority, and other callers will delay here to let its call
> + * succeed and take the new utrace lock first.
> + */
> +static struct utrace *utrace_first_engine(struct task_struct *target,
> +					  struct utrace_attached_engine *engine)
> +	__acquires(utrace->lock)
> +{
> +	struct utrace *utrace;
> +
> +	/*
> +	 * If this is a newborn thread and we are not the creator,
> +	 * we have to wait for it.  The creator gets the first chance
> +	 * to attach.  The PF_STARTING flag is cleared after its
> +	 * report_clone hook has had a chance to run.
> +	 */
> +	if (target->flags & PF_STARTING) {
> +		utrace = current->utrace;
> +		if (utrace == NULL || utrace->u.live.cloning != target) {
> +			yield();
> +			if (signal_pending(current))
> +				return ERR_PTR(-ERESTARTNOINTR);
> +			return NULL;
> +		}
> +	}
> +
> +	utrace = kmem_cache_zalloc(utrace_cachep, GFP_KERNEL);
> +	if (unlikely(utrace == NULL))
> +		return ERR_PTR(-ENOMEM);
> +
> +	INIT_LIST_HEAD(&utrace->attached);
> +	INIT_LIST_HEAD(&utrace->attaching);
> +	list_add(&engine->entry, &utrace->attached);
> +	spin_lock_init(&utrace->lock);
> +	CHECK_INIT(utrace);
> +
> +	spin_lock(&utrace->lock);
> +	task_lock(target);
> +	if (likely(target->utrace == NULL)) {
> +		rcu_assign_pointer(target->utrace, utrace);
> +
> +		/*
> +		 * The task_lock protects us against another thread doing
> +		 * the same thing.  We might still be racing against
> +		 * tracehook_release_task.  It's called with ->exit_state
> +		 * set to EXIT_DEAD and then checks ->utrace with an
> +		 * smp_mb() in between.  If EXIT_DEAD is set, then
> +		 * release_task might have checked ->utrace already and saw
> +		 * it NULL; we can't attach.  If we see EXIT_DEAD not yet
> +		 * set after our barrier, then we know release_task will
> +		 * see our target->utrace pointer.
> +		 */
> +		smp_mb();
> +		if (likely(target->exit_state != EXIT_DEAD)) {
> +			task_unlock(target);
> +			return utrace;
> +		}
> +
> +		/*
> +		 * The target has already been through release_task.
> +		 * Our caller will restart and notice it's too late now.
> +		 */
> +		target->utrace = NULL;
> +	}
> +
> +	/*
> +	 * Another engine attached first, so there is a struct already.
> +	 * A null return says to restart looking for the existing one.
> +	 */
> +	task_unlock(target);
> +	spin_unlock(&utrace->lock);
> +	kmem_cache_free(utrace_cachep, utrace);
> +
> +	return NULL;
> +}

All this junk will disappear. I even posted a proof-of-concept patch.
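
With struct utrace embedded there is nothing to allocate and no creation
race to mediate, so first-engine attach collapses to roughly this (a
sketch, not the posted proof-of-concept; names are illustrative):

static int utrace_attach_engine(struct task_struct *target,
				struct utrace_attached_engine *engine)
{
	struct utrace *utrace = &target->utrace;
	int err = 0;

	spin_lock(&utrace->lock);
	if (target->exit_state == EXIT_DEAD)
		/* Too late: release_task() already ran. */
		err = -ESRCH;
	else
		list_add(&engine->entry, &utrace->attached);
	spin_unlock(&utrace->lock);
	return err;
}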

> +/*
> + * Called with utrace locked.  Clean it up and free it via RCU.
> + */
> +static void rcu_utrace_free(struct utrace *utrace)
> +	__releases(utrace->lock)
> +{
> +	CHECK_DEAD(utrace);
> +	spin_unlock(&utrace->lock);
> +	INIT_RCU_HEAD(&utrace->u.dead);
> +	call_rcu(&utrace->u.dead, utrace_free);

INIT_RCU_HEAD() is not needed; call_rcu() will overwrite the RCU head
unconditionally.
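
i.e. this is enough:

	/* call_rcu() sets ->next and ->func itself, nothing to init */
	spin_unlock(&utrace->lock);
	call_rcu(&utrace->u.dead, utrace_free);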
