Message-ID: <20100326163236.1ffb2c2b@notabene.brown>
Date: Fri, 26 Mar 2010 16:32:36 +1100
From: Neil Brown <neilb@...e.de>
To: ebiederm@...ssion.com (Eric W. Biederman)
Cc: Greg Kroah-Hartman <gregkh@...e.de>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] sysfs: simplify handling for s_active refcount
On Thu, 25 Mar 2010 21:24:43 -0700
ebiederm@...ssion.com (Eric W. Biederman) wrote:
> NeilBrown <neilb@...e.de> writes:
>
> > s_active counts the number of active references to a 'sysfs_dirent'.
> > When we wish to deactivate a sysfs_dirent, we subtract a large
> > number from the refcount so it will always appear negative. While
> > it is negative, new references will not be taken.
> > After that subtraction, we wait for all the active references to
> > drain away.
> >
> > The subtraction of the large number contains exactly the same
> > information as the setting of the flag SYSFS_FLAG_REMOVED.
> > (We know this as we already assert that SYSFS_FLAG_REMOVED is set
> > before adding the large-negative-bias).
> > So doing both is pointless.
> >
> > By starting s_active with a value of 1, not 0 (as is typical of
> > reference counts) and using atomic_inc_not_zero, we can significantly
> > simplify the code while keeping exactly the same functionality.
>
> Overall your logic appears correct but in detail this patch scares me.
>
> sd->s_flags is protected by the sysfs_mutex, and you aren't
> taking it when you read it. So in general I don't see the new check
> if (sd->s_flags & SYSFS_FLAG_REMOVED) == 0 providing any guarantee of
> progress whatsoever with user space applications repeatedly reading from
> a sysfs file when that sysfs file is being removed. They could easily
> have the sd->s_flags value cached and never see the new value, given a
> crazy enough cache architecture.
As you say, this is only a liveness issue. The atomic_inc_not_zero
guarantees that we don't take a new reference after the last one is gone.
The test on SYSFS_FLAG_REMOVED is only there to ensure that the count does
eventually get to zero.
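To make that concrete, the fast path I have in mind looks roughly like
this (a sketch only, not the patch text verbatim - in particular the
waitqueue is made up here to stand in for whatever the remover really
sleeps on):

static DECLARE_WAIT_QUEUE_HEAD(sysfs_deactivate_waitq); /* hypothetical */

static struct sysfs_dirent *sysfs_get_active(struct sysfs_dirent *sd)
{
	if (unlikely(!sd))
		return NULL;

	/*
	 * Unlocked peek at the removal flag.  This is only a liveness
	 * aid; correctness comes from atomic_inc_not_zero() below.
	 */
	if (sd->s_flags & SYSFS_FLAG_REMOVED)
		return NULL;

	/* s_active starts at 1; once it has dropped to 0 it stays 0 */
	if (!atomic_inc_not_zero(&sd->s_active))
		return NULL;

	return sd;
}

static void sysfs_put_active(struct sysfs_dirent *sd)
{
	if (sd && atomic_dec_and_test(&sd->s_active))
		/* last active reference is gone: let the remover continue */
		wake_up(&sysfs_deactivate_waitq);
}
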
There could only be a problem here if the change to s_flags were not
propagated to all CPUs within some reasonably small time.
I'm no expert on these things, but my understanding is that interesting
cache architectures can arbitrarily re-order accesses, but do not delay
them indefinitely.
Inserting barriers could possibly make this more predictable, but that would
just delay certain loads/stores until a known state was reached - it would
not make the data visible to another CPU any faster (or would it?).
So unless there is no cache-coherency protocol at all, I think that
SYSFS_FLAG_REMOVED will be seen promptly and that s_active will drop to zero
as quickly as it does today.
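For completeness, the removal side under that scheme would be roughly
as follows (again only a sketch, reusing the made-up waitqueue from
above; the flag is set under sysfs_mutex before we get here):

static void sysfs_deactivate(struct sysfs_dirent *sd)
{
	/* sysfs_mutex was held when SYSFS_FLAG_REMOVED was set */
	BUG_ON(!(sd->s_flags & SYSFS_FLAG_REMOVED));

	/* Drop the initial reference that s_active was created with... */
	if (atomic_dec_and_test(&sd->s_active))
		return;

	/* ...and wait for the remaining active references to drain away. */
	wait_event(sysfs_deactivate_waitq,
		   atomic_read(&sd->s_active) == 0);
}
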
>
> So as attractive as this patch is I don't think it is correct.
>
I'm pleased you find it attractive - I certainly think the
"atomic_inc_not_zero" is much more readable than the code it replaces.
Hopefully, if there really are problems (maybe I've fundamentally
misunderstood caches), they can be easily resolved (a couple of memory
barriers at worst?).
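For what it's worth, the "couple of memory barriers" I am thinking of
would be no more than something like this (placement illustrative only;
and as argued above, they would order the accesses, not make the store
reach another CPU any sooner):

	/* removal side, just after marking the dirent (sysfs_mutex held) */
	sd->s_flags |= SYSFS_FLAG_REMOVED;
	smp_mb();	/* flag store ordered before the remover starts waiting */

	/* reader side, at the top of sysfs_get_active() */
	smp_mb();	/* ordered against the caller's earlier accesses */
	if (sd->s_flags & SYSFS_FLAG_REMOVED)
		return NULL;
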
Thanks for the review,
NeilBrown