Message-Id: <20080110190700.961C885D@kernel>
Date: Thu, 10 Jan 2008 11:07:00 -0800
From: Dave Hansen <haveblue@...ibm.com>
To: linux-kernel@...r.kernel.org
Cc: miklos@...redi.hu, hch@...radead.org, serue@...ibm.com,
Dave Hansen <haveblue@...ibm.com>
Subject: [RFC][PATCH 2/4] change mnt_writers underflow protection logic

The comment tells most of the story.  I want to turn the
spinlock used here into a mutex, but the current underflow
protection mechanism relies on the preemption disabling done
by put/get_cpu_var(), and I can't use that with a mutex.
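
For reference, the pattern being replaced looks roughly like
this (a sketch only, not the exact fs/namespace.c code; the
real context is in the diff below):

	cpu_writer = &get_cpu_var(mnt_writers);	/* disables preemption */
	spin_lock(&cpu_writer->lock);
	/* ... adjust the per-cpu and per-mount write counts ... */
	spin_unlock(&cpu_writer->lock);
	if (must_check_underflow)
		handle_write_count_underflow(mnt);
	put_cpu_var(mnt_writers);		/* reenables preemption */

Preemption stays off from get_cpu_var() until put_cpu_var(),
so a task cannot stall between its decrement and the underflow
handling; at most one task per cpu can be in that window.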

Without the preempt disabling, there is no limit to the number
of tasks that might get to:

	use_cpu_writer_for_mount(cpu_writer, mnt);
	if (cpu_writer->count > 0) {
		cpu_writer->count--;
	} else {
		atomic_dec(&mnt->__mnt_writers);
	}
	spin_unlock(&cpu_writer->lock);
---->HERE
	if (must_check_underflow)
		handle_write_count_underflow(mnt);
because they can be preempted once the spinlock is dropped.
So, there is no limit on how many times __mnt_writers may be
decremented before the underflow handler gets to run.
(Strictly, the limit is the number of tasks on the system, but
that's a heck of a lot higher than the number of cpus.)  Doing
the simple check in this patch before the decrement, and under
the lock, removes that possibility.
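
To make the race concrete, here is one possible interleaving
with the old code (the task names are made up):

	task 1:	atomic_dec(&mnt->__mnt_writers);
	task 1:	spin_unlock(&cpu_writer->lock);
	task 1:	<preempted at ---->HERE>
	task 2:	same three steps
	 ...
	task N:	same three steps, for any N

Every task has decremented __mnt_writers, but none has run
handle_write_count_underflow(), so nothing bounds how negative
the count gets.  With this patch, the read of __mnt_writers
and the decrement both happen under cpu_writer->lock, so at
most NR_CPUS tasks can sit between the check and the handler.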
Since there are only NR_CPUS mnt_writer[]s, we can only have
NR_CPUS lock holders in the critical section at a time, so
__mnt_writers can underflow by at most
MNT_WRITER_UNDERFLOW_LIMIT+NR_CPUS.
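
A worked example, assuming the -(1<<16) value of
MNT_WRITER_UNDERFLOW_LIMIT from earlier in this series and
taking NR_CPUS=64 purely for illustration: all 64 possible
lock holders can read __mnt_writers as exactly -65536, pass
the check, and decrement, leaving the count no lower than
-65536 - 64 = -65600.  Any further caller spins in the retry
loop until handle_write_count_underflow() pulls the count
back above the limit.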
Signed-off-by: Dave Hansen <haveblue@...ibm.com>
---
linux-2.6.git-dave/fs/namespace.c | 28 +++++++++++++++++-----------
1 file changed, 17 insertions(+), 11 deletions(-)
diff -puN fs/namespace.c~change-underflow-protection-logic fs/namespace.c
--- linux-2.6.git/fs/namespace.c~change-underflow-protection-logic 2008-01-10 10:36:37.000000000 -0800
+++ linux-2.6.git-dave/fs/namespace.c 2008-01-10 10:36:37.000000000 -0800
@@ -267,15 +267,30 @@ void mnt_drop_write(struct vfsmount *mnt
 	int must_check_underflow = 0;
 	struct mnt_writer *cpu_writer;
 
-	cpu_writer = &get_cpu_var(mnt_writers);
+retry:
+	cpu_writer = &__get_cpu_var(mnt_writers);
 	spin_lock(&cpu_writer->lock);
 
 	use_cpu_writer_for_mount(cpu_writer, mnt);
 	if (cpu_writer->count > 0) {
 		cpu_writer->count--;
 	} else {
-		must_check_underflow = 1;
+		/* Without this check, it is theoretically
+		 * possible to underflow __mnt_writers.
+		 * An unlimited number of processes could
+		 * all do this decrement, unlock, and then
+		 * stall before the underflow handling.
+		 * Doing this check limits the underflow
+		 * to the number of cpu_writer->lock
+		 * holders (NR_CPUS).
+		 */
+		if (atomic_read(&mnt->__mnt_writers) <
+		    MNT_WRITER_UNDERFLOW_LIMIT) {
+			spin_unlock(&cpu_writer->lock);
+			goto retry;
+		}
 		atomic_dec(&mnt->__mnt_writers);
+		must_check_underflow = 1;
 	}
 
 	spin_unlock(&cpu_writer->lock);
@@ -286,15 +301,6 @@ void mnt_drop_write(struct vfsmount *mnt
 	 */
 	if (must_check_underflow)
 		handle_write_count_underflow(mnt);
-	/*
-	 * This could be done right after the spinlock
-	 * is taken because the spinlock keeps us on
-	 * the cpu, and disables preemption. However,
-	 * putting it here bounds the amount that
-	 * __mnt_writers can underflow. Without it,
-	 * we could theoretically wrap __mnt_writers.
-	 */
-	put_cpu_var(mnt_writers);
 }
 EXPORT_SYMBOL_GPL(mnt_drop_write);
 
_