Message-ID: <CAGXu5jL4ojFEWAkW0NDSERUFTQhPgwb6ToOcAsg_4QySex1GfA@mail.gmail.com>
Date: Sat, 7 Sep 2013 23:01:18 -0700
From: Kees Cook <keescook@...omium.org>
To: Herbert Xu <herbert@...dor.apana.org.au>
Cc: "David S. Miller" <davem@...emloft.net>,
LKML <linux-kernel@...r.kernel.org>,
linux-crypto <linux-crypto@...r.kernel.org>,
Tyler Hicks <tyhicks@...onical.com>
Subject: Re: race condition in crypto larval handling
On Sat, Sep 7, 2013 at 9:54 PM, Herbert Xu <herbert@...dor.apana.org.au> wrote:
> On Sun, Sep 08, 2013 at 02:37:03PM +1000, Herbert Xu wrote:
>> On Sat, Sep 07, 2013 at 08:34:15PM -0700, Kees Cook wrote:
>> >
>> > However, I noticed on the "good" path (even without the above patch),
>> > I sometimes see a double-kfree triggered by the modprobe process. I
>> > can't, however, see how that's happening, since larval_destroy should
>> > only be called when refcnt == 0.
>>
>> Do you still see this double free with this patch? Without the
>> patch it is completely expected, as killing the same larval twice
>> will cause memory corruption leading to all sorts of weirdness,
>> even if you stop it from deleting the list entry twice.
I noticed it while testing the larval_kill fix, and then tried again
after reverting the fix -- both cases showed the same behavior.
> Actually I know what it is. sha512 registers two algorithms.
> Therefore, it will create two larvals in sequence and then destroy
> them in turn. So it's not a double free at all. If you put a
> printk in crypto_larval_alloc that should confirm this.
Ah! That would make sense; the second larval just happens to be
re-allocated at the exact same address, so the second free looks like a
double free of the first. Whew, that's certainly what's happening. I can
retest in the morning to confirm, with something like the sketch below.
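The surrounding crypto_larval_alloc() body here is paraphrased from
memory of crypto/api.c rather than quoted from the tree, so treat
everything except the added pr_info() line as illustrative:

struct crypto_larval *crypto_larval_alloc(const char *name, u32 type,
					  u32 mask)
{
	struct crypto_larval *larval;

	larval = kzalloc(sizeof(*larval), GFP_KERNEL);
	if (!larval)
		return ERR_PTR(-ENOMEM);

	/* Diagnostic only: log the address of each larval.  sha512
	 * registers two algorithms, so two larvals get allocated and
	 * freed in turn; if the second comes back at the same address,
	 * a pointer-based trace mistakes the second kfree for a double
	 * free of the first. */
	pr_info("crypto_larval_alloc: %s at %p\n", name, larval);

	larval->mask = mask;
	larval->alg.cra_flags = CRYPTO_ALG_LARVAL | type;
	/* ... remaining initialization as in crypto/api.c ... */

	return larval;
}

The address-reuse effect itself is easy to reproduce in userspace,
without assuming anything about the kernel's slab internals -- glibc
malloc likewise tends to hand a just-freed chunk straight back to the
next same-sized allocation:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	void *a = malloc(128);
	printf("first  alloc: %p\n", a);
	free(a);

	void *b = malloc(128);	/* usually the same address as 'a' */
	printf("second alloc: %p\n", b);
	free(b);
	return 0;
}

Two such alloc/free pairs are indistinguishable from a double free if
all you log is the pointer value.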
Thanks again for the larval_kill fix! I'll get it rolled out for wider
testing to confirm that it makes our crash numbers go down.
-Kees
--
Kees Cook
Chrome OS Security