Message-ID: <CACT4Y+a_cBWk9E04EbsgGp6XPr-bQ=Wnt2qMgwiJJq0LSOQ02w@mail.gmail.com>
Date:	Fri, 18 Sep 2015 11:06:46 +0200
From:	Dmitry Vyukov <dvyukov@...gle.com>
To:	Peter Zijlstra <peterz@...radead.org>, will.deacon@....com
Cc:	Oleg Nesterov <oleg@...hat.com>, ebiederm@...ssion.com,
	Al Viro <viro@...iv.linux.org.uk>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Ingo Molnar <mingo@...nel.org>,
	Paul McKenney <paulmck@...ux.vnet.ibm.com>, mhocko@...e.cz,
	LKML <linux-kernel@...r.kernel.org>, ktsan@...glegroups.com,
	Kostya Serebryany <kcc@...gle.com>,
	Andrey Konovalov <andreyknvl@...gle.com>,
	Alexander Potapenko <glider@...gle.com>,
	Hans Boehm <hboehm@...gle.com>
Subject: Re: [PATCH] kernel: fix data race in put_pid

On Fri, Sep 18, 2015 at 10:51 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> On Thu, Sep 17, 2015 at 08:09:19PM +0200, Oleg Nesterov wrote:
>> On 09/17, Dmitry Vyukov wrote:
>> >
>> > I can update the patch description, but let me explain it here first.
>>
>> Yes thanks.
>>
>> > Here is the essence of what happens:
>>
>> Aha, so you really meant that two put_pid()s can race with each other:
>>
>> > // thread 1
>> > 1: pid->foo = 1; // foo is the first word of pid object
>> > // then it does put_pid
>> > 2: atomic_dec_and_test(&pid->count) // decrements count to 1 and
>> > returns false so the function returns
>> >
>> > // thread 2
>> > // executes put_pid
>> > 3: atomic_load(&pid->count); // returns 1, so proceed to kmem_cache_free
>> > // then kmem_cache_free does:
>> > 4: *(void**)pid = head->freelist;
>> > 5: head->freelist = (void*)pid;
>> >
>> > This can be executed as:
>> >
>> > 4: *(void**)pid = head->freelist;
>> > 1: pid->foo = 1; // foo is the first word of pid object
>> > 2: atomic_dec_and_test(&pid->count) // decrements count to 1 and
>> > returns false so the function returns
>> > 3: atomic_load(&pid->count); // returns 1, so proceed to kmem_cache_free
>> > 5: head->freelist = (void*)pid;
>>
>> Unless I am totally confused, everything is simpler. We can forget
>> about the hoisting, freelist, etc.
>>
>> Thread 2 can see the result of atomic_dec_and_test(), but not the
>> result of "pid->foo = 1". In this case it can free the object, which
>> can be re-allocated _before_ STORE(pid->foo) completes. Of course,
>> this would be really bad.
>>
>> I need to recheck, but afaics this is not possible. This optimization
>> is fine, but probably needs a comment.
>
> For sure, this code doesn't make any sense to me.
>
>> We rely on delayed_put_pid()
>> called by RCU. And note that nobody can write to this pid after it
>> is removed from the rcu-protected list.
>>
>> So I think this is false alarm, but I'll try to recheck tomorrow, it
>> is too late for me today.
>
> As an alternative patch, could we not do:
>
>   void put_pid(struct pid *pid)
>   {
>         struct pid_namespace *ns;
>
>         if (!pid)
>                 return;
>
>         ns = pid->numbers[pid->level].ns;
>         if ((atomic_read(&pid->count) == 1) ||
>              atomic_dec_and_test(&pid->count)) {
>
> +               smp_read_barrier_depends(); /* ctrl-dep */
>
>                 kmem_cache_free(ns->pid_cachep, pid);
>                 put_pid_ns(ns);
>         }
>   }
>
> That would upgrade the atomic_read() path to a full READ_ONCE_CTRL(),
> and thereby prevent any of the kmem_cache_free() stores from leaking
> out. And it's free, except on Alpha, whereas atomic_read_acquire()
> would generate a full memory barrier on a whole bunch of archs.


What you propose makes sense.

+Will, Paul

Can we have something along the lines of:

#define atomic_read_ctrl(v) READ_ONCE_CTRL(&(v)->counter)

then?

I've found a bunch of similar cases, e.g.:
https://groups.google.com/forum/#!topic/ktsan/YoU0yX2wQJU

They all would benefit from atomic_read_ctrl.




-- 
Dmitry Vyukov, Software Engineer, dvyukov@...gle.com
Google Germany GmbH, Dienerstraße 12, 80331, München
Geschäftsführer: Graham Law, Christine Elizabeth Flores
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg
This e-mail is confidential. If you are not the right addressee please
do not forward it, please inform the sender, and please erase this
e-mail including any attachments. Thanks.