Date:	Mon, 26 Mar 2007 11:30:47 -0700
From:	Paul Jackson <pj@....com>
To:	vatsa@...ibm.com
Cc:	balbir@...ibm.com, akpm@...ux-foundation.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] Fix race between attach_task and cpuset_exit

vatsa wrote:
> Well, someone may have attached to this cpuset while we were waiting on the
> mutex_lock(). So we need to do an atomic_read again to ensure it is still
> unused

pj replied:
> If we hold the task lock that now
> (thanks to your good work) guards this pointer, and if we decrement to
> zero the reference count on the cpuset to which it points 

I incorrectly described the locking, I think.

A cpuset's reference count increases if another task is attached to it,
or if a task already attached to it forks.

If we decrement to zero the count, we -know- that no more tasks are
attached to it.

If we hold the cpuset manage_mutex, then we -know- that attach_task can't
attach tasks to it.
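
In code terms, the invariant I'm leaning on is roughly the following.
(This is a toy user-space model, not the kernel code; all the names are
made up.  Fork is left out, since a fork can only bump a count that is
already non-zero.)

#include <stdatomic.h>
#include <pthread.h>

/* Toy stand-in for a cpuset: just the use count and the manage mutex. */
struct toy_cpuset {
	atomic_int count;		/* tasks using this cpuset */
	pthread_mutex_t manage;		/* stand-in for manage_mutex */
};

/* attach_task(): always runs under the manage mutex. */
static void toy_attach(struct toy_cpuset *cs)
{
	pthread_mutex_lock(&cs->manage);
	atomic_fetch_add(&cs->count, 1);
	pthread_mutex_unlock(&cs->manage);
}

/* cpuset_exit(): drop one reference; returns 1 if we just hit zero. */
static int toy_exit(struct toy_cpuset *cs)
{
	/* fetch_sub returns the old value, so old == 1 means now zero. */
	return atomic_fetch_sub(&cs->count, 1) == 1;
}

/*
 * With the manage mutex held, toy_attach() cannot run, and a fork can
 * only raise a count that is already non-zero, so count == 0 here
 * really means "unused, and staying unused until we drop the lock".
 */
static int toy_unused(struct toy_cpuset *cs)
{
	int unused;

	pthread_mutex_lock(&cs->manage);
	unused = (atomic_load(&cs->count) == 0);
	pthread_mutex_unlock(&cs->manage);
	return unused;
}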

But now that you mention it, that additional atomic_read of the count in
check_for_release() seems suspicious to me.  I'm afraid that the following
could happen:

    1) given cpusets A and A/B, with a single task attached to A (none to B)
    2) some other task issues a "rmdir A/B"
    3) near the end of the cpuset_rmdir() code, after we have removed A/B, we
	invoke check_for_release()
    4) just at that instant, the single task in A exits, decrementing the
	count on A to zero
    5) both the exiting task and the task doing the rmdir execute the
	cpuset_release_agent() and check_for_release() code.

Aha - yes, maybe that could happen, but it is OK !!

Multiple tasks all pounding on the same cpuset with this release logic
are not a problem.  It just ends up as multiple tasks doing a 'rmdir'
on that cpuset from user space.  At most one of them succeeds in
removing the directory; once it is gone, the rest get an error saying
there is no such directory.
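
If it helps to see why that is harmless, it is the same as several
processes racing to rmdir one directory: at most one wins, the rest get
"no such file or directory".  A throwaway user-space demo (nothing
cpuset-specific in it):

#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/stat.h>
#include <unistd.h>

#define DIR_PATH "/tmp/toy_cpuset_B"	/* stand-in for /dev/cpuset/A/B */

/* Each thread plays the part of one release-agent invocation. */
static void *release_agent(void *arg)
{
	if (rmdir(DIR_PATH) == 0)
		printf("thread %ld: removed %s\n", (long)arg, DIR_PATH);
	else
		printf("thread %ld: rmdir failed: %s\n",
		       (long)arg, strerror(errno));
	return NULL;
}

int main(void)
{
	pthread_t t[4];
	long i;

	mkdir(DIR_PATH, 0755);	/* the "cpuset" to be released */

	/* Several spurious check_for_release()s -> several agents racing. */
	for (i = 0; i < 4; i++)
		pthread_create(&t[i], NULL, release_agent, (void *)i);
	for (i = 0; i < 4; i++)
		pthread_join(t[i], NULL);

	return 0;
}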

The race I worried about in last night's post is NOT a problem:
> Is there perhaps another race here?  Could it happen that:
>  1) the atomic_dec() lowers the count to say one (any value > zero)
>  2) after we drop the task lock, some other task or tasks decrement
>     the count to zero
>  3) we catch that zero when we atomic_read the count, and issue a spurious
>     check_for_release().

This is one of the advantages of not actually unlinking cpusets at this point,
when it seems they are no longer used.  We just fire off a user mode helper
thread to attempt a subsequent removal.  That separate thread will get the locking
correct, from the top down, and if the cpuset is still really and truly unused,
then and only then actually remove it.
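
For the curious, the helper itself is nothing special; it is just the
user-space half of the handshake.  Something like the sketch below would
do (a simplified illustration, assuming the cpuset filesystem is mounted
at /dev/cpuset, not the exact agent shipped anywhere).  The real re-check
happens back in the kernel, where the rmdir lands in cpuset_rmdir(),
takes the locks from the top down, and refuses if the cpuset is still
in use.

/*
 * Sketch of a minimal cpuset release agent.  The kernel invokes it with
 * one argument: the path of the possibly-unused cpuset, relative to the
 * cpuset mount point.  All it does is attempt the rmdir; the kernel side
 * retakes the proper locks and refuses if the cpuset is still in use.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

#define CPUSET_MOUNT "/dev/cpuset"	/* assumed mount point */

int main(int argc, char **argv)
{
	char path[4096];

	if (argc != 2) {
		fprintf(stderr, "usage: %s /relative/cpuset/path\n", argv[0]);
		return 1;
	}

	snprintf(path, sizeof(path), "%s%s", CPUSET_MOUNT, argv[1]);

	/* ENOENT: someone else already removed it.  EBUSY: still in use. */
	if (rmdir(path) != 0 && errno != ENOENT && errno != EBUSY) {
		fprintf(stderr, "rmdir %s: %s\n", path, strerror(errno));
		return 1;
	}
	return 0;
}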

Simultaneous spurious check_for_release() calls are not a problem!

-- 
                  I won't rest till it's the best ...
                  Programmer, Linux Scalability
                  Paul Jackson <pj@....com> 1.925.600.0401