Date:   Tue, 17 Oct 2017 16:18:48 +0200
From:   Petr Mladek <pmladek@...e.com>
To:     Jason Baron <jbaron@...mai.com>
Cc:     linux-kernel@...r.kernel.org, live-patching@...r.kernel.org,
        jpoimboe@...hat.com, jeyu@...nel.org, jikos@...nel.org,
        mbenes@...e.cz
Subject: Re: [PATCH v4 3/3] livepatch: add atomic replace

On Thu 2017-10-12 17:12:29, Jason Baron wrote:
> Sometimes we would like to revert a particular fix. This is currently
> not easy because we want to keep all other fixes active and we can
> revert only the last applied patch.
> 
> One solution would be to apply a new patch that implements all
> the reverted functions the same way as in the original code. It would
> work as expected, but there would be unnecessary redirections. In
> addition, it would also require knowing which functions need to be
> reverted at build time.
> 
> A better solution would be to say that a new patch replaces
> an older one. This might be complicated if we want to replace
> a particular patch. But it is rather easy when a new cummulative
> patch replaces all others.
> 
> diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> index f53eed5..d1c7a06 100644
> --- a/kernel/livepatch/core.c
> +++ b/kernel/livepatch/core.c
> @@ -283,8 +301,21 @@ static int klp_write_object_relocations(struct module *pmod,
>  	return ret;
>  }
>  
> +atomic_t klp_nop_release_count;
> +static DECLARE_WAIT_QUEUE_HEAD(klp_nop_release_wait);
> +
>  static void klp_kobj_release_object(struct kobject *kobj)
>  {
> +	struct klp_object *obj;
> +
> +	obj = container_of(kobj, struct klp_object, kobj);
> +	/* Free dynamically allocated object */
> +	if (!obj->funcs) {
> +		kfree(obj->name);
> +		kfree(obj);
> +		atomic_dec(&klp_nop_release_count);
> +		wake_up(&klp_nop_release_wait);

I would slightly optimize this with:

		if (atomic_dec_and_test(&klp_nop_release_count))
			wake_up(&klp_nop_release_wait);
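
Just to explain why: I assume that the only waiter on this queue does
something like the following; I do not see that hunk in the context
quoted here, so this is a guess:

	/* wherever we wait for all the dynamic nop structures to go away */
	wait_event(klp_nop_release_wait,
		   atomic_read(&klp_nop_release_count) == 0);

so the wake_up() only matters when the counter reaches zero, and
atomic_dec_and_test() avoids the useless wakeups in between.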

> +	}
>  }
>  
>  static struct kobj_type klp_ktype_object = {
> @@ -294,6 +325,16 @@ static struct kobj_type klp_ktype_object = {
>  
>  static void klp_kobj_release_func(struct kobject *kobj)
>  {
> +	struct klp_func *func;
> +
> +	func = container_of(kobj, struct klp_func, kobj);
> +	/* Free dynamically allocated functions */
> +	if (!func->new_func) {
> +		kfree(func->old_name);
> +		kfree(func);
> +		atomic_dec(&klp_nop_release_count);
> +		wake_up(&klp_nop_release_wait);

Same here:

		if (atomic_dec_and_test(&klp_nop_release_count))
			wake_up(&klp_nop_release_wait);


> +	}
>  }
>  
>  static struct kobj_type klp_ktype_func = {
> @@ -436,8 +480,14 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
>  	if (ret)
>  		return ret;
>  
> -	klp_for_each_func(obj, func) {
> -		ret = klp_init_func(obj, func);
> +	list_add(&obj->obj_entry, &patch->obj_list);
> +	INIT_LIST_HEAD(&obj->func_list);
> +
> +	if (nop)
> +		return 0;

Ah, this is something that I wanted to avoid. It makes the code
very hard to read and maintain. It forces us to duplicate
some code in klp_alloc_nop_func(). I think that I complained
about this in v2 already.

I understand that you actually kept it because of me. It is
related to the possibility of re-enabling replaced patches :-(

The klp_init_*() stuff is called from __klp_enable_patch()
for the "nop" functions now. And it has already been called
for the statically defined structures in klp_register_patch().
Therefore we need to avoid calling it twice for the static
structures.
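
Just to spell it out, the call chains are roughly (as I read the
patch, simplified):

	klp_register_patch()
	  klp_init_patch()
	    klp_init_object() / klp_init_func()	<- static structures

	__klp_enable_patch()
	  klp_add_nops()
	    klp_init_object() / klp_init_func()	<- again, now for the nops

and the second path must not redo the work that the first one already
did for the static structures.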

One solution would be to do these operations for the statically
defined structures in __klp_enable_patch() as well. But this
would mean a big redesign of the code.

Another solution would be to give up on the idea that the replaced
patches might be re-enabled without re-loading. I am afraid that this
is the only reasonable approach. It would also help to avoid the extra
klp_replaced_patches list. All this would make the code much simpler.

I am really sorry that I asked you to do this exercise and
support the patch re-enablement. It looked like a good idea.
I did not expect that it would be that complicated.

I am stopping my review of this patch here because it will look quite
different again. I will just keep some random comments that I added
before finding this main design flaw.

Thanks a lot for the hard work. v4 looks much better than
v2 in many ways. I think that we are heading in the right direction.

> +
> +	klp_for_each_func_static(obj, func) {
> +		ret = klp_init_func(obj, func, false);
>  		if (ret)
>  			goto free;
>  	}
> @@ -456,6 +506,226 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
>  	return ret;
>  }

[...]

> +/* Add 'nop' functions which simply return to the caller to run
> + * the original function. The 'nop' functions are added to a
> + * patch to facilitate a 'replace' mode
> + */
> +static int klp_add_nops(struct klp_patch *patch)
> +{
> +	struct klp_patch *old_patch;
> +	struct klp_object *old_obj;
> +	int err = 0;
> +
> +	if (!patch->replace)
> +		return 0;

It would be saner if this returned -EINVAL and we called
the following in __klp_enable_patch():

	if (patch->replace) {
		ret = klp_add_nops(patch);
		...
	}

Otherwise, people might wonder why klp_add_nops() is always called.
They would need to look at how it is implemented just to realize that
it is a NOP when patch->replace is not set.
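
On the klp_add_nops() side I mean something like this (just a sketch,
the WARN_ON() is optional):

	static int klp_add_nops(struct klp_patch *patch)
	{
		struct klp_patch *old_patch;
		struct klp_object *old_obj;
		int err = 0;

		if (WARN_ON(!patch->replace))
			return -EINVAL;
		...

with the rest of the function unchanged.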

> +
> +	list_for_each_entry(old_patch, &klp_patches, list) {
> +		if (old_patch == patch)
> +			break;
> +
> +		klp_for_each_object(old_patch, old_obj) {
> +			err = klp_add_nop_object(patch, old_obj);
> +			if (err) {
> +				klp_remove_nops(patch);
> +				return err;
> +			}
> +		}
> +	}
> +
> +	return 0;
> +}
> +
>  static int __klp_disable_patch(struct klp_patch *patch)
>  {
>  	if (klp_transition_patch)
> @@ -527,10 +798,22 @@ static int __klp_enable_patch(struct klp_patch *patch)
>  	if (WARN_ON(patch->enabled))
>  		return -EINVAL;
>  
> +	if (klp_is_patch_replaced(patch)) {
> +		list_move_tail(&patch->list, &klp_patches);
> +		replaced = true;
> +	}
> +
>  	/* enforce stacking: only the first disabled patch can be enabled */
>  	if (patch->list.prev != &klp_patches &&
> -	    !list_prev_entry(patch, list)->enabled)
> -		return -EBUSY;
> +	    !list_prev_entry(patch, list)->enabled) {
> +		ret = -EBUSY;
> +		goto cleanup_list;
> +	}
> +
> +	/* setup nops */

Please, remove the comment. It does not add much information.

> +	ret = klp_add_nops(patch);
> +	if (ret)
> +		goto cleanup_list;

It would be more obvious if we did:

	if (patch->replace) {
		ret = klp_add_nops(patch);
		if (ret)
			goto cleanup_list;
	}

>  
>  	/*
>  	 * A reference is taken on the patch module to prevent it from being

Best Regards,
Petr
