Message-ID: <CAPSr9jH3sowszuNtBaTM1Wdi9vW+iakYX1G3arj+2_r5r7bYwQ@mail.gmail.com>
Date: Sat, 25 May 2019 20:15:01 +0800
From: Muchun Song <smuchun@...il.com>
To: Greg KH <gregkh@...uxfoundation.org>
Cc: "Rafael J. Wysocki" <rafael@...nel.org>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Prateek Sood <prsood@...eaurora.org>,
Mukesh Ojha <mojha@...eaurora.org>, gkohli@...eaurora.org,
linux-kernel <linux-kernel@...r.kernel.org>,
linux-arm-msm <linux-arm-msm@...r.kernel.org>,
zhaowuyun@...gtech.com
Subject: Re: [PATCH v4] driver core: Fix use-after-free and double free on
glue directory
Hi greg k-h,
On Sat, May 25, 2019 at 3:04 AM, Greg KH <gregkh@...uxfoundation.org> wrote:
>
> On Thu, May 16, 2019 at 10:23:42PM +0800, Muchun Song wrote:
> > There is a race condition between removing glue directory and adding a new
> > device under the glue directory. It can be reproduced in following test:
>
> <snip>
>
> Is this related to:
> Subject: [PATCH v3] drivers: core: Remove glue dirs early only when refcount is 1
>
> ?
>
> If so, why is the solution so different?
In the v1 patch, the solution was to remove the glue dir early only
when its refcount is 1, so the v1 patch looked like this:
@@ -1825,7 +1825,7 @@ static void cleanup_glue_dir(struct device *dev, struct kobject *glue_dir)
 		return;
 
 	mutex_lock(&gdp_mutex);
-	if (!kobject_has_children(glue_dir))
+	if (!kobject_has_children(glue_dir) && kref_read(&glue_dir->kref) == 1)
 		kobject_del(glue_dir);
 	kobject_put(glue_dir);
 	mutex_unlock(&gdp_mutex);
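For context, the surrounding function in drivers/base/core.c looks
roughly like this (a sketch of the unpatched code, so the hunk above
only tightens the condition guarding kobject_del()):

static void cleanup_glue_dir(struct device *dev, struct kobject *glue_dir)
{
	/* see if we live in a "glue" directory */
	if (!live_in_glue_dir(glue_dir, dev))
		return;

	mutex_lock(&gdp_mutex);
	if (!kobject_has_children(glue_dir))
		kobject_del(glue_dir);
	kobject_put(glue_dir);
	mutex_unlock(&gdp_mutex);
}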
-----------------------------------------------------------------------
But Ben made the following suggestion:
I find relying on the object count for such decisions rather fragile as
it could be taken temporarily for other reasons, couldn't it? In which
case we would just fail...
Ideally, the looking up of the glue dir and creation of its child
should be protected by the same lock instance (the gdp_mutex in that
case).
-----------------------------------------------------------------------
So another solution, based on Ben's suggestion, is used starting from
the v2 patch (see the sketch below). But I forgot to update the commit
message to match until the v4 patch. Thanks.
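To illustrate Ben's idea, here is a rough sketch (not the actual v2
diff; find_or_create_glue_dir() and add_device_under_glue_dir() are
hypothetical helpers standing in for the real get_device_parent() and
kobject_add() paths). Both the lookup of the glue dir and the creation
of its child happen under gdp_mutex, so cleanup_glue_dir() cannot call
kobject_del() on the glue dir in between:

	/* Both steps under the same lock that cleanup_glue_dir() takes. */
	mutex_lock(&gdp_mutex);
	/* Look up (or create) the glue dir and take a reference on it. */
	glue_dir = find_or_create_glue_dir(dev, parent_kobj);
	/* Add the child while the glue dir cannot be deleted under us. */
	error = add_device_under_glue_dir(dev, glue_dir);
	mutex_unlock(&gdp_mutex);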
Yours,
Muchun