Message-ID: <20181012114305.tj72zxridkyayyq3@pathway.suse.cz>
Date:   Fri, 12 Oct 2018 13:43:05 +0200
From:   Petr Mladek <pmladek@...e.com>
To:     Miroslav Benes <mbenes@...e.cz>
Cc:     Jiri Kosina <jikos@...nel.org>,
        Josh Poimboeuf <jpoimboe@...hat.com>,
        Jason Baron <jbaron@...mai.com>,
        Joe Lawrence <joe.lawrence@...hat.com>,
        Jessica Yu <jeyu@...nel.org>,
        Evgenii Shatokhin <eshatokhin@...tuozzo.com>,
        live-patching@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v12 04/12] livepatch: Consolidate klp_free functions

On Fri 2018-08-31 12:39:23, Miroslav Benes wrote:
> On Tue, 28 Aug 2018, Petr Mladek wrote:
> 
> > The code for freeing livepatch structures is a bit scattered and tricky:
> > 
> >   + direct calls to klp_free_*_limited() and kobject_put() are
> >     used to release partially initialized objects
> > 
> >   + klp_free_patch() removes the patch from the public list
> >     and releases all objects except for patch->kobj
> > 
> >   + kobject_put(&patch->kobj) and the related wait_for_completion()
> >     are called directly outside klp_mutex; this code is duplicated.
> > 
> > Now, we are going to remove the registration stage to simplify the API
> > and the code. This would require handling more situations in
> > klp_enable_patch() error paths.
> > 
> > More importantly, we are going to add a feature called atomic replace.
> > It will need to dynamically create func and object structures. We will
> > want to reuse the existing init() and free() functions. This would
> > create even more error path scenarios.
> > 
> > This patch implements more clever free functions:
> > 
> >   + checks kobj.state_initialized instead of @limit
> > 
> >   + initializes patch->list early so that the check for empty list
> >     always works
> > 
> >   + the actions that have to be done outside klp_mutex are done
> >     in a separate klp_free_patch_end() function. It waits only
> >     when patch->kobj was really released via the _begin() part.
> > 
> > Note that it is safe to put patch->kobj under klp_mutex. It calls
> > the release callback only when the reference count reaches zero.
> > Therefore it does not block any related sysfs operation that took
> > a reference and might eventually wait for klp_mutex.
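
For reference, the _begin()/_end() split described above looks roughly
like this (a simplified sketch with hypothetical helper names, not the
exact code from the patch); this is the arrangement questioned below:

static void klp_free_patch_begin(struct klp_patch *patch)
{
	/* Runs under klp_mutex. */
	if (!list_empty(&patch->list))
		list_del(&patch->list);

	klp_free_objects(patch);

	/* Put the kobject only when it was really initialized. */
	if (patch->kobj.state_initialized)
		kobject_put(&patch->kobj);
}

static void klp_free_patch_end(struct klp_patch *patch)
{
	/* Runs after klp_mutex has been released. */
	if (patch->kobj.state_initialized)
		wait_for_completion(&patch->finish);
}
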
> 
> This seems to be the reason for the issue which lockdep reported. The
> patch moved kobject_put(&patch->kobj) under klp_mutex. Perhaps I cannot
> read the kernfs code properly today, but I fail to understand why it is
> supposed to be safe.

My expectation was that any read/write operation on the related
sysfs interface took a reference on the kobject. Then kobject_put()
would just decrement the reference counter and postpone the real
removal until all the other operations had finished.

But it seems that the read/write operations take a reference on
another object (the kernfs_node) and do not prevent kobject_put()
from releasing the kobject.
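
For illustration, the sysfs callbacks in livepatch look roughly like
this (a simplified sketch; the real enabled_store() in core.c does
more). The callback runs with an active reference on the kernfs_node,
not with a reference on patch->kobj, so a final kobject_put(&patch->kobj)
done under klp_mutex may end up removing the sysfs files and waiting
for this callback to finish while the callback itself waits for
klp_mutex:

static ssize_t enabled_store(struct kobject *kobj,
			     struct kobj_attribute *attr,
			     const char *buf, size_t count)
{
	struct klp_patch *patch;
	bool enabled;
	int ret;

	ret = kstrtobool(buf, &enabled);
	if (ret)
		return ret;

	patch = container_of(kobj, struct klp_patch, kobj);

	mutex_lock(&klp_mutex);
	/* ... enable or disable the patch here ... */
	mutex_unlock(&klp_mutex);

	return count;
}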

> > diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> > index b3956cce239e..3ca404545150 100644
> > --- a/kernel/livepatch/core.c
> > +++ b/kernel/livepatch/core.c
> > @@ -465,17 +465,15 @@ static struct kobj_type klp_ktype_func = {
> >  	.sysfs_ops = &kobj_sysfs_ops,
> >  };
> >  
> > -/*
> > - * Free all functions' kobjects in the array up to some limit. When limit is
> > - * NULL, all kobjects are freed.
> > - */
> > -static void klp_free_funcs_limited(struct klp_object *obj,
> > -				   struct klp_func *limit)
> > +static void klp_free_funcs(struct klp_object *obj)
> >  {
> >  	struct klp_func *func;
> >  
> > -	for (func = obj->funcs; func->old_name && func != limit; func++)
> > -		kobject_put(&func->kobj);
> > +	klp_for_each_func(obj, func) {
> > +		/* Might be called from klp_init_patch() error path. */
> > +		if (func->kobj.state_initialized)
> > +			kobject_put(&func->kobj);
> > +	}
> >  }
> 
> Just for the record, it is slightly suboptimal because now we iterate
> through the whole list. We could add a break in an else branch, I think,
> but it's not necessary.

Interesting optimization. It would preserve the effect of @limit and
would work at this stage.
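
I guess you mean something like this (just a sketch):

klp_for_each_func(obj, func) {
	if (func->kobj.state_initialized)
		kobject_put(&func->kobj);
	else
		break;	/* the rest was never initialized */
}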

But it would stop working once we add the dynamically allocated
structures. They are allocated and initialized in two separate loops.
We need to free all the allocated structures when any initialization
fails.
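
For illustration (a simplified sketch with hypothetical naming, not the
code from the series), the kobject initialization will look roughly
like this:

static int klp_init_funcs_sketch(struct klp_object *obj)
{
	struct klp_func *func;
	int ret;

	/*
	 * The dynamic structures were already allocated and linked in
	 * an earlier loop. This loop only initializes the kobjects.
	 */
	klp_for_each_func(obj, func) {
		ret = kobject_init_and_add(&func->kobj, &klp_ktype_func,
					   &obj->kobj, "%s",
					   func->old_name);
		if (ret)
			return ret;
	}

	return 0;
}

On failure, the caller runs klp_free_funcs(). The entries after the
failed one were allocated but their kobjects were never initialized,
so the free loop has to visit all of them and cannot stop at the first
uninitialized kobject.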

Best Regards,
Petr
