Message-ID: <alpine.DEB.2.20.1710201430420.4531@nanos>
Date:   Fri, 20 Oct 2017 14:43:56 +0200 (CEST)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Elena Reshetova <elena.reshetova@...el.com>
cc:     mingo@...hat.com, linux-kernel@...r.kernel.org,
        linux-fsdevel@...r.kernel.org, peterz@...radead.org,
        gregkh@...uxfoundation.org, viro@...iv.linux.org.uk, tj@...nel.org,
        hannes@...xchg.org, lizefan@...wei.com, acme@...nel.org,
        alexander.shishkin@...ux.intel.com, eparis@...hat.com,
        akpm@...ux-foundation.org, arnd@...db.de, luto@...nel.org,
        keescook@...omium.org, dvhart@...radead.org, ebiederm@...ssion.com,
        linux-mm@...ck.org, axboe@...nel.dk
Subject: Re: [PATCH 01/15] sched: convert sighand_struct.count to
 refcount_t

On Fri, 20 Oct 2017, Elena Reshetova wrote:

> atomic_t variables are currently used to implement reference
> counters with the following properties:
>  - counter is initialized to 1 using atomic_set()
>  - a resource is freed upon counter reaching zero
>  - once counter reaches zero, its further
>    increments aren't allowed
>  - counter schema uses basic atomic operations
>    (set, inc, inc_not_zero, dec_and_test, etc.)
> 
> Such atomic variables should be converted to the newly provided
> refcount_t type and API, which prevents accidental counter overflows
> and underflows. This is important since overflows and underflows
> can lead to use-after-free situations and be exploitable.
> 
> The variable sighand_struct.count is used as a pure reference counter.
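
For reference, the scheme described in the changelog amounts to roughly the
following before/after pattern (a minimal sketch; "struct foo" and its
helpers are made up for illustration and are not taken from the patch):

#include <linux/refcount.h>
#include <linux/slab.h>

struct foo {
        refcount_t count;                       /* was: atomic_t count */
};

static struct foo *foo_alloc(void)
{
        struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

        if (f)
                refcount_set(&f->count, 1);     /* was: atomic_set() */
        return f;
}

static void foo_get(struct foo *f)
{
        refcount_inc(&f->count);                /* was: atomic_inc() */
}

static void foo_put(struct foo *f)
{
        if (refcount_dec_and_test(&f->count))   /* was: atomic_dec_and_test() */
                kfree(f);
}

refcount_inc() warns and saturates instead of wrapping past the maximum, and
the API refuses to increment from zero or to drop below zero, which is what
closes the overflow/underflow windows the changelog refers to.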

This still does not mention that atomic_t != refcount_t ordering-wise, nor
why you think that this does not matter in this use case.
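
Roughly, the relevant guarantees compare as follows (a summary from memory,
not quoting any particular kernel version, so double-check it against the
current documentation):

    atomic_inc()             no ordering
    refcount_inc()           no ordering

    atomic_dec_and_test()    fully ordered (implies a full memory barrier)
    refcount_dec_and_test()  RELEASE ordering on the decrement; no full
                             barrier

So it is specifically the dec_and_test() sites where an implicit reliance on
the full barrier can get lost in a mechanical conversion.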

And looking deeper:

> @@ -1381,7 +1381,7 @@ static int copy_sighand(unsigned long clone_flags, struct task_struct *tsk)
>  	struct sighand_struct *sig;
>  
>  	if (clone_flags & CLONE_SIGHAND) {
> -		atomic_inc(&current->sighand->count);
> +		refcount_inc(&current->sighand->count);
>  		return 0;

>  void __cleanup_sighand(struct sighand_struct *sighand)
>  {
> -	if (atomic_dec_and_test(&sighand->count)) {
> +	if (refcount_dec_and_test(&sighand->count)) {

How did you make sure that these atomic operations have no other
serialization effect and can be replaced with refcount?
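
What has to be ruled out is an implicit reliance on the full barrier implied
by atomic_dec_and_test(), something along these lines (a made-up example,
not taken from the sighand code; struct obj and obj_put() are illustrative
only):

#include <linux/refcount.h>
#include <linux/slab.h>

struct obj {
        int final_stat;
        refcount_t ref;                 /* was: atomic_t ref */
};

static void obj_put(struct obj *o, int stat)
{
        o->final_stat = stat;           /* plain store */

        /*
         * atomic_dec_and_test() implies a full barrier, ordering the
         * store above against everything after the decrement on both
         * the true and the false path.  refcount_dec_and_test() only
         * guarantees RELEASE ordering, so anything that (perhaps
         * unknowingly) depended on the stronger ordering needs an
         * explicit argument or an explicit barrier.
         */
        if (refcount_dec_and_test(&o->ref))
                kfree(o);
}

That per-site argument is the correctness analysis the changelog needs to
spell out.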

I complained about that before and Peter explained it to you at great
length, but you just resent the same thing again. Where is the correctness
analysis? Seriously, for this kind of stuff it's not sufficient to run a
coccinelle script, copy boilerplate changelogs and be done with it.

Thanks,

	tglx
