Message-ID: <20071009132437.GA18312@vino.hallyn.com>
Date:	Tue, 9 Oct 2007 08:24:37 -0500
From:	"Serge E. Hallyn" <serge@...lyn.com>
To:	Pierre Peiffer <Pierre.Peiffer@...l.net>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] IPC: cleanup some code and wrong comments about
	semundo list management

Quoting Pierre Peiffer (Pierre.Peiffer@...l.net):
> From: Pierre Peiffer <pierre.peiffer@...l.net>
> 
> Some comments about sem_undo_list seem wrong.
> About the comment above unlock_semundo:
> "... If task2 now exits before task1 releases the lock (by calling
> unlock_semundo()), then task1 will never call spin_unlock(). ..."
> 
> This is just wrong: I see no reason why task1 would not call
> spin_unlock(). The rest of the comment is wrong as well... unless
> I'm missing something, of course.

I see; that comment has been stale since commit
00a5dfdb93f74e4d95fb0d83c890728e331f8810 in 2005 :)
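
The scenario that comment worried about can't actually arise with the
refcounted undo_list: a task exiting only drops its own reference, it
never unlocks or frees a lock that another task is holding.  Roughly,
the exit path looks like this (a simplified sketch of exit_sem(), not
the exact code; error paths and the semadj walk are elided):

	struct sem_undo_list {
		atomic_t	refcnt;
		spinlock_t	lock;
		struct sem_undo	*proc_list;
	};

	void exit_sem(struct task_struct *tsk)
	{
		struct sem_undo_list *undo_list = tsk->sysvsem.undo_list;

		if (!undo_list)
			return;

		/* Only the last sharer tears the list down; everyone
		 * else just drops a reference.  A lock held by a
		 * surviving task is therefore always released by that
		 * task's own spin_unlock(). */
		if (!atomic_dec_and_test(&undo_list->refcnt))
			return;

		/* ... apply the semadj values, then kfree(undo_list) ... */
	}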

> Finally, the (un)lock_semundo() functions are useless, so remove them
> for simplification.  (This also avoids a useless "if" statement.)

Looks correct.
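
And the "if (undo_list)" test really was dead weight: the only call
sites are in find_undo(), after get_undo_list() has either failed or
handed back a non-NULL, initialized list.  From memory, the helper is
something like this (a paraphrase, not the exact tree):

	static inline int get_undo_list(struct sem_undo_list **undo_listp)
	{
		struct sem_undo_list *undo_list;

		undo_list = current->sysvsem.undo_list;
		if (undo_list == NULL) {
			undo_list = kzalloc(sizeof(*undo_list), GFP_KERNEL);
			if (undo_list == NULL)
				return -ENOMEM;
			spin_lock_init(&undo_list->lock);
			atomic_set(&undo_list->refcnt, 1);
			current->sysvsem.undo_list = undo_list;
		}
		*undo_listp = undo_list;
		return 0;
	}

So once find_undo() gets past "if (error) return ERR_PTR(error);",
taking ulp->lock directly is safe with no NULL check at all.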

> Signed-off-by: Pierre Peiffer <pierre.peiffer@...l.net>

Acked-by: Serge Hallyn <serue@...ibm.com>

> ---
> 
>  ipc/sem.c |   46 ++++++----------------------------------------
>  1 files changed, 6 insertions(+), 40 deletions(-)
> 
> diff --git a/ipc/sem.c b/ipc/sem.c
> index b676fef..5585817 100644
> --- a/ipc/sem.c
> +++ b/ipc/sem.c
> @@ -957,36 +957,6 @@ asmlinkage long sys_semctl (int semid, int semnum, int cmd, union semun arg)
>  	}
>  }
>  
> -static inline void lock_semundo(void)
> -{
> -	struct sem_undo_list *undo_list;
> -
> -	undo_list = current->sysvsem.undo_list;
> -	if (undo_list)
> -		spin_lock(&undo_list->lock);
> -}
> -
> -/* This code has an interaction with copy_semundo().
> - * Consider; two tasks are sharing the undo_list. task1
> - * acquires the undo_list lock in lock_semundo().  If task2 now
> - * exits before task1 releases the lock (by calling
> - * unlock_semundo()), then task1 will never call spin_unlock().
> - * This leave the sem_undo_list in a locked state.  If task1 now creats task3
> - * and once again shares the sem_undo_list, the sem_undo_list will still be
> - * locked, and future SEM_UNDO operations will deadlock.  This case is
> - * dealt with in copy_semundo() by having it reinitialize the spin lock when 
> - * the refcnt goes from 1 to 2.
> - */
> -static inline void unlock_semundo(void)
> -{
> -	struct sem_undo_list *undo_list;
> -
> -	undo_list = current->sysvsem.undo_list;
> -	if (undo_list)
> -		spin_unlock(&undo_list->lock);
> -}
> -
> -
>  /* If the task doesn't already have a undo_list, then allocate one
>   * here.  We guarantee there is only one thread using this undo list,
>   * and current is THE ONE
> @@ -1047,9 +1017,9 @@ static struct sem_undo *find_undo(struct ipc_namespace *ns, int semid)
>  	if (error)
>  		return ERR_PTR(error);
>  
> -	lock_semundo();
> +	spin_lock(&ulp->lock);
>  	un = lookup_undo(ulp, semid);
> -	unlock_semundo();
> +	spin_unlock(&ulp->lock);
>  	if (likely(un!=NULL))
>  		goto out;
>  
> @@ -1077,10 +1047,10 @@ static struct sem_undo *find_undo(struct ipc_namespace *ns, int semid)
>  	new->semadj = (short *) &new[1];
>  	new->semid = semid;
>  
> -	lock_semundo();
> +	spin_lock(&ulp->lock);
>  	un = lookup_undo(ulp, semid);
>  	if (un) {
> -		unlock_semundo();
> +		spin_unlock(&ulp->lock);
>  		kfree(new);
>  		ipc_lock_by_ptr(&sma->sem_perm);
>  		ipc_rcu_putref(sma);
> @@ -1091,7 +1061,7 @@ static struct sem_undo *find_undo(struct ipc_namespace *ns, int semid)
>  	ipc_rcu_putref(sma);
>  	if (sma->sem_perm.deleted) {
>  		sem_unlock(sma);
> -		unlock_semundo();
> +		spin_unlock(&ulp->lock);
>  		kfree(new);
>  		un = ERR_PTR(-EIDRM);
>  		goto out;
> @@ -1102,7 +1072,7 @@ static struct sem_undo *find_undo(struct ipc_namespace *ns, int semid)
>  	sma->undo = new;
>  	sem_unlock(sma);
>  	un = new;
> -	unlock_semundo();
> +	spin_unlock(&ulp->lock);
>  out:
>  	return un;
>  }
> @@ -1279,10 +1249,6 @@ asmlinkage long sys_semop (int semid, struct sembuf __user *tsops, unsigned nsop
>  
>  /* If CLONE_SYSVSEM is set, establish sharing of SEM_UNDO state between
>   * parent and child tasks.
> - *
> - * See the notes above unlock_semundo() regarding the spin_lock_init()
> - * in this code.  Initialize the undo_list->lock here instead of get_undo_list()
> - * because of the reasoning in the comment above unlock_semundo.
>   */
>  
>  int copy_semundo(unsigned long clone_flags, struct task_struct *tsk)
> 
> 
> Pierre Peiffer
