Date:   Tue, 28 Feb 2017 17:21:56 -0800
From:   Andrew Morton <akpm@...ux-foundation.org>
To:     Elena Reshetova <elena.reshetova@...el.com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        peterz@...radead.org, gregkh@...uxfoundation.org,
        viro@...iv.linux.org.uk, catalin.marinas@....com, mingo@...hat.com,
        arnd@...db.de, luto@...nel.org
Subject: Re: [PATCH 0/5] mm subsystem refcounter conversions

On Tue, 21 Feb 2017 11:58:39 +0200 Elena Reshetova <elena.reshetova@...el.com> wrote:

> Now that the new refcount_t type and API are finally merged
> (see include/linux/refcount.h), the following
> patches convert various refcounters in the mm subsystem from atomic_t
> to refcount_t. By doing this we prevent intentional or accidental
> underflows or overflows that can lead to use-after-free vulnerabilities.
> 
> The patches below are fully independent and can be cherry-picked separately.
> Since we are converting all kernel subsystems in the same fashion, resulting
> in about 300 patches, we have to group them for sending in some fashion
> to keep things manageable. Please excuse the long cc list.

I don't think so.  Unless I'm missing something rather large...


We're going to convert every

	atomic_inc(&foo);

into an uninlined function which calls an uninlined

bool refcount_inc_not_zero(refcount_t *r)
{
	unsigned int old, new, val = atomic_read(&r->refs);

	for (;;) {
		new = val + 1;

		/* refcount already hit zero: object is going away, refuse the new reference */
		if (!val)
			return false;

		/* counter is saturated at UINT_MAX: leave it pinned, report success */
		if (unlikely(!new))
			return true;

		old = atomic_cmpxchg_relaxed(&r->refs, val, new);
		if (old == val)
			break;

		val = old;
	}

	WARN(new == UINT_MAX, "refcount_t: saturated; leaking memory.\n");

	return true;
}
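
(The outer uninlined function is then, roughly, just a WARN wrapped around
the above:)

void refcount_inc(refcount_t *r)
{
	WARN(!refcount_inc_not_zero(r), "refcount_t: increment on 0; use-after-free.\n");
}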

The performance implications of this proposal are terrifying.

I suggest adding a set of non-debug inlined refcount functions which
just fall back to the simple atomic.h operations.
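
Something like this, perhaps - a sketch only, the atomic.h operations are
the existing ones and refcount_t's .refs member is as in the code above:

static inline void refcount_inc(refcount_t *r)
{
	atomic_inc(&r->refs);
}

static inline __must_check bool refcount_inc_not_zero(refcount_t *r)
{
	return atomic_inc_not_zero(&r->refs);
}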

And add a new CONFIG_DEBUG_REFCOUNT, so that the performance (and code
size!) with CONFIG_DEBUG_REFCOUNT=n is unaltered from the present code.
And make CONFIG_DEBUG_REFCOUNT suitably difficult to set.
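
i.e. refcount.h selects between the two at compile time - again just a
sketch, using the inline fallbacks from above:

#ifdef CONFIG_DEBUG_REFCOUNT
/* fully-checked, uninlined versions living in lib/refcount.c */
extern void refcount_inc(refcount_t *r);
extern __must_check bool refcount_inc_not_zero(refcount_t *r);
#else
/* CONFIG_DEBUG_REFCOUNT=n: plain inlined atomic ops, no extra overhead */
static inline void refcount_inc(refcount_t *r)
{
	atomic_inc(&r->refs);
}
#endif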

