Message-Id: <200904291533.07377.knikanth@novell.com>
Date:	Wed, 29 Apr 2009 15:33:06 +0530
From:	Nikanth Karthikesan <knikanth@...ell.com>
To:	Andrew Morton <akpm@...ux-foundation.org>,
	Jens Axboe <jens.axboe@...cle.com>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: [PATCH][RFC] Handle improbable possibility of io_context->refcount overflow

On Wednesday 29 April 2009 13:29:30 Andrew Morton wrote:
> On Wed, 29 Apr 2009 12:21:39 +0530 Nikanth Karthikesan <knikanth@...ell.com> wrote:
> > Hi Jens
> >
> > Currently io_context uses an atomic_t (int) as its refcount. With cfq,
> > a reference to the io_context is taken for each device a task does I/O
> > against. In addition, every process sharing an io_context (CLONE_IO)
> > holds a reference to the same io_context. Theoretically, the number of
> > processes sharing the same io_context plus the number of disks/cfq_data
> > referring to it could overflow the 32-bit counter on a very high-end
> > machine. Even though it is an improbable case, let us make it harder to
> > hit by changing the refcount to atomic64_t (long).
>
> Sorry, atomic64_t isn't implemented on 32 bit architectures.
>
> Perhaps it should be, but I expect it'd be pretty slow.
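
(To make the sharing scenario quoted above concrete: a rough, untested
userspace sketch of the CLONE_IO case. Each child created this way takes
one more reference on the parent's io_context in the kernel. If your libc
headers lack CLONE_IO, it is defined in <linux/sched.h>.)

#define _GNU_SOURCE
#include <sched.h>	/* clone(); CLONE_IO needs _GNU_SOURCE */
#include <signal.h>	/* SIGCHLD */
#include <stdio.h>
#include <sys/wait.h>

static int child_fn(void *arg)
{
	/* I/O issued here is accounted to the parent's io_context. */
	return 0;
}

int main(void)
{
	static char stack[64 * 1024];
	/* CLONE_IO shares the io_context instead of copying it, which
	 * takes an extra reference on it in the kernel. */
	int pid = clone(child_fn, stack + sizeof(stack),
			CLONE_IO | SIGCHLD, NULL);

	if (pid < 0) {
		perror("clone");
		return 1;
	}
	waitpid(pid, NULL, 0);
	return 0;
}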

Oh! Sorry, I didn't notice the #ifdef earlier. I guess that's why there is
only a single in-tree user of atomic64_t!

In this case, could we make it atomic64_t only on 64-bit architectures and
keep it as atomic_t on 32-bit machines? Something like the attached patch.

I wonder whether we should also add BUG_ON()s wherever the refcount is about
to wrap, or try to handle that case gracefully. Another approach would be to
impose an artificial limit on the number of tasks that can share an
io_context, or to fall back to lock protection. But the problem is not very
serious or common.
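
E.g., a rough sketch of the BUG_ON idea, using a hypothetical helper (the
check is racy, since the read and the increment are separate atomic
operations, so it only narrows the window rather than closing it):

/* Hypothetical helper, sketch only: trap before the 32-bit refcount
 * can wrap.  INT_MAX is from <linux/kernel.h>.  Racy: concurrent
 * increments may slip past the check before it fires. */
static inline void ioc_get_checked(struct io_context *ioc)
{
	BUG_ON(atomic_read(&ioc->refcount) == INT_MAX);
	atomic_inc(&ioc->refcount);
}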

Thanks
Nikanth

diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index 012f065..5be4585 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -35,9 +35,9 @@ int put_io_context(struct io_context *ioc)
 	if (ioc == NULL)
 		return 1;
 
-	BUG_ON(atomic_read(&ioc->refcount) == 0);
+	BUG_ON(atomic_read_ioc_refcount(ioc) == 0);
 
-	if (atomic_dec_and_test(&ioc->refcount)) {
+	if (atomic_dec_and_test_ioc_refcount(ioc)) {
 		rcu_read_lock();
 		if (ioc->aic && ioc->aic->dtor)
 			ioc->aic->dtor(ioc->aic);
@@ -151,7 +151,7 @@ struct io_context *get_io_context(gfp_t gfp_flags, int node)
 		ret = current_io_context(gfp_flags, node);
 		if (unlikely(!ret))
 			break;
-	} while (!atomic_inc_not_zero(&ret->refcount));
+	} while (!atomic_inc_not_zero_ioc_refcount(ret));
 
 	return ret;
 }
@@ -163,8 +163,8 @@ void copy_io_context(struct io_context **pdst, struct io_context **psrc)
 	struct io_context *dst = *pdst;
 
 	if (src) {
-		BUG_ON(atomic_read(&src->refcount) == 0);
-		atomic_inc(&src->refcount);
+		BUG_ON(atomic_read_ioc_refcount(src) == 0);
+		atomic_inc_ioc_refcount(src);
 		put_io_context(dst);
 		*pdst = src;
 	}
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index a55a9bd..42d5018 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -1282,7 +1282,7 @@ static void cfq_dispatch_request(struct cfq_data *cfqd, struct cfq_queue *cfqq)
 	if (!cfqd->active_cic) {
 		struct cfq_io_context *cic = RQ_CIC(rq);
 
-		atomic_inc(&cic->ioc->refcount);
+		atomic_inc_ioc_refcount(cic->ioc);
 		cfqd->active_cic = cic;
 	}
 }
diff --git a/include/linux/iocontext.h b/include/linux/iocontext.h
index 08b987b..bdc7156 100644
--- a/include/linux/iocontext.h
+++ b/include/linux/iocontext.h
@@ -64,7 +64,11 @@ struct cfq_io_context {
  * and kmalloc'ed. These could be shared between processes.
  */
 struct io_context {
+#ifdef CONFIG_64BIT
+	atomic64_t refcount;
+#else
 	atomic_t refcount;
+#endif
 	atomic_t nr_tasks;
 
 	/* all the fields below are protected by this lock */
@@ -85,14 +89,30 @@ struct io_context {
 	void *ioc_data;
 };
 
+#ifdef CONFIG_64BIT
+
+#define atomic_read_ioc_refcount(ioc)	atomic64_read(&(ioc)->refcount)
+#define atomic_inc_ioc_refcount(ioc)	atomic64_inc(&(ioc)->refcount)
+#define atomic_dec_and_test_ioc_refcount(ioc)	atomic64_dec_and_test(&(ioc)->refcount)
+#define atomic_inc_not_zero_ioc_refcount(ioc)	atomic64_inc_not_zero(&(ioc)->refcount)
+
+#else
+
+#define atomic_read_ioc_refcount(ioc)	atomic_read(&(ioc)->refcount)
+#define atomic_inc_ioc_refcount(ioc)	atomic_inc(&(ioc)->refcount)
+#define atomic_dec_and_test_ioc_refcount(ioc)	atomic_dec_and_test(&(ioc)->refcount)
+#define atomic_inc_not_zero_ioc_refcount(ioc)	atomic_inc_not_zero(&(ioc)->refcount)
+
+#endif
+
 static inline struct io_context *ioc_task_link(struct io_context *ioc)
 {
 	/*
 	 * if ref count is zero, don't allow sharing (ioc is going away, it's
 	 * a race).
 	 */
-	if (ioc && atomic_inc_not_zero(&ioc->refcount)) {
+	if (ioc && atomic_inc_not_zero_ioc_refcount(ioc)) {
 		atomic_inc(&ioc->nr_tasks);
 		return ioc;
 	}
 

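If the #defines in the iocontext.h hunk above look too loose, static inline
helpers would do the same job while keeping argument type checking; a sketch
(these helper names are mine, not part of the patch):

/* Sketch of an alternative to the macros: static inlines preserve
 * type checking and evaluate their argument exactly once. */
static inline void ioc_refcount_inc(struct io_context *ioc)
{
#ifdef CONFIG_64BIT
	atomic64_inc(&ioc->refcount);
#else
	atomic_inc(&ioc->refcount);
#endif
}

static inline long ioc_refcount_read(struct io_context *ioc)
{
#ifdef CONFIG_64BIT
	return atomic64_read(&ioc->refcount);
#else
	return atomic_read(&ioc->refcount);
#endif
}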