Message-ID: <CACT4Y+bWgcS=c=KrthWyyjjBpc72DEK-=czLYK8=SkmOsZ_-jg@mail.gmail.com>
Date: Wed, 16 Jan 2019 13:51:33 +0100
From: Dmitry Vyukov <dvyukov@...gle.com>
To: Elena Reshetova <elena.reshetova@...el.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
Anders Roxell <anders.roxell@...aro.org>,
Mark Rutland <mark.rutland@....com>,
LKML <linux-kernel@...r.kernel.org>,
Kees Cook <keescook@...omium.org>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH] kcov: convert kcov.refcount to refcount_t
On Wed, Jan 16, 2019 at 11:27 AM Elena Reshetova
<elena.reshetova@...el.com> wrote:
>
> atomic_t variables are currently used to implement reference
> counters with the following properties:
> - counter is initialized to 1 using atomic_set()
> - a resource is freed upon counter reaching zero
> - once counter reaches zero, its further
> increments aren't allowed
> - counter schema uses basic atomic operations
> (set, inc, inc_not_zero, dec_and_test, etc.)
>
> Such atomic variables should be converted to a newly provided
> refcount_t type and API that prevents accidental counter overflows
> and underflows. This is important since overflows and underflows
> can lead to use-after-free situations and be exploitable.
>
> The variable kcov.refcount is used as a pure reference counter.
> Convert it to refcount_t and fix up the operations.
>
> **Important note for maintainers:
>
> Some functions from the refcount_t API defined in lib/refcount.c
> have different memory ordering guarantees than their atomic
> counterparts.
> The full comparison can be seen in
> https://lkml.org/lkml/2017/11/15/57 and hopefully it will soon be
> in a state to be merged into the documentation tree.
> Normally the differences should not matter since refcount_t provides
> enough guarantees to satisfy the refcounting use cases, but in
> some rare cases it might matter.
> Please double check that you don't rely on any undocumented
> memory ordering guarantees for this variable's usage.
>
> For the kcov.refcount it might make a difference
> in the following places:
> - kcov_put(): the decrement in refcount_dec_and_test() only
> provides RELEASE ordering and a control dependency on success,
> vs. the fully ordered atomic counterpart

Reviewed-by: Dmitry Vyukov <dvyukov@...gle.com>

Thanks for improving this.

KCOV uses refcounts in a very simple canonical way, so no hidden
ordering is implied.
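
For reference, the canonical pattern in question looks roughly like the
sketch below (illustrative names only, not the actual kcov code, which
is in the diff further down; assumes <linux/refcount.h> and
<linux/slab.h>):

struct foo {
	refcount_t refcount;
	/* ... payload ... */
};

static struct foo *foo_create(void)
{
	struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

	if (f)
		refcount_set(&f->refcount, 1);	/* creator holds the initial reference */
	return f;
}

static void foo_get(struct foo *f)
{
	refcount_inc(&f->refcount);	/* saturates instead of wrapping on overflow */
}

static void foo_put(struct foo *f)
{
	if (refcount_dec_and_test(&f->refcount))	/* free once the last reference is dropped */
		kfree(f);
}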

Am I missing something, or does refcount_dec_and_test() in fact not
provide ACQUIRE ordering?

+case 5) - decrement-based RMW ops that return a value
+-----------------------------------------------------
+
+Function changes:
+ atomic_dec_and_test() --> refcount_dec_and_test()
+ atomic_sub_and_test() --> refcount_sub_and_test()
+ no atomic counterpart --> refcount_dec_if_one()
+ atomic_add_unless(&var, -1, 1) --> refcount_dec_not_one(&var)
+
+Memory ordering guarantees changes:
+ fully ordered --> RELEASE ordering + control dependency

I think that's against the expected refcount guarantees. When I
privatize an object with atomic_dec_and_test() I would expect that not
only stores but also loads act on a quiescent object. But loads can be
hoisted above the control dependency.

Consider the following example: is it the case that the BUG_ON can still fire?

struct X {
	refcount_t rc;		// == 2
	int done1, done2;	// == 0
};

// thread 1:
x->done1 = 1;
if (refcount_dec_and_test(&x->rc))
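	/* can this load be hoisted above the decrement, given only
	 * RELEASE ordering plus a control dependency on success? */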
	BUG_ON(!x->done2);

// thread 2:
x->done2 = 1;
if (refcount_dec_and_test(&x->rc))
	BUG_ON(!x->done1);
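
If the answer is yes, and a caller did need its loads ordered after the
final decrement, one option (just a sketch, not something this patch has
to do) would be to upgrade the control dependency on the success path
with the existing smp_acquire__after_ctrl_dep() helper:

// thread 1, sketch only:
x->done1 = 1;
if (refcount_dec_and_test(&x->rc)) {
	smp_acquire__after_ctrl_dep();	/* promote the control dependency to ACQUIRE */
	BUG_ON(!x->done2);	/* this load is now ordered after the decrement */
}
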
> Suggested-by: Kees Cook <keescook@...omium.org>
> Reviewed-by: David Windsor <dwindsor@...il.com>
> Reviewed-by: Hans Liljestrand <ishkamiel@...il.com>
> Signed-off-by: Elena Reshetova <elena.reshetova@...el.com>
> ---
> kernel/kcov.c | 9 +++++----
> 1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/kcov.c b/kernel/kcov.c
> index c2277db..051e86e 100644
> --- a/kernel/kcov.c
> +++ b/kernel/kcov.c
> @@ -20,6 +20,7 @@
> #include <linux/debugfs.h>
> #include <linux/uaccess.h>
> #include <linux/kcov.h>
> +#include <linux/refcount.h>
> #include <asm/setup.h>
>
> /* Number of 64-bit words written per one comparison: */
> @@ -44,7 +45,7 @@ struct kcov {
> * - opened file descriptor
> * - task with enabled coverage (we can't unwire it from another task)
> */
> - atomic_t refcount;
> + refcount_t refcount;
> /* The lock protects mode, size, area and t. */
> spinlock_t lock;
> enum kcov_mode mode;
> @@ -228,12 +229,12 @@ EXPORT_SYMBOL(__sanitizer_cov_trace_switch);
>
> static void kcov_get(struct kcov *kcov)
> {
> - atomic_inc(&kcov->refcount);
> + refcount_inc(&kcov->refcount);
> }
>
> static void kcov_put(struct kcov *kcov)
> {
> - if (atomic_dec_and_test(&kcov->refcount)) {
> + if (refcount_dec_and_test(&kcov->refcount)) {
> vfree(kcov->area);
> kfree(kcov);
> }
> @@ -312,7 +313,7 @@ static int kcov_open(struct inode *inode, struct file *filep)
> if (!kcov)
> return -ENOMEM;
> kcov->mode = KCOV_MODE_DISABLED;
> - atomic_set(&kcov->refcount, 1);
> + refcount_set(&kcov->refcount, 1);
> spin_lock_init(&kcov->lock);
> filep->private_data = kcov;
> return nonseekable_open(inode, filep);
> --
> 2.7.4
>