Message-ID: <fffb1e51-cf84-8ccb-afc6-15df9c130d3d@akamai.com>
Date:   Mon, 11 Sep 2017 23:46:28 -0400
From:   Jason Baron <jbaron@...mai.com>
To:     Petr Mladek <pmladek@...e.com>
Cc:     linux-kernel@...r.kernel.org, live-patching@...r.kernel.org,
        jpoimboe@...hat.com, jeyu@...nel.org, jikos@...nel.org,
        mbenes@...e.cz
Subject: Re: [PATCH v2 2/3] livepatch: add atomic replace



On 09/11/2017 09:53 AM, Petr Mladek wrote:
> On Wed 2017-08-30 17:38:44, Jason Baron wrote:
>> When doing cumulative patches, if patch A introduces a change to function 1,
>> and patch B reverts the change to function 1 and introduces changes to say
>> function 2 and 3 as well, the change that patch A introduced to function 1
>> is still present.
> 
> This is a bit confusing. One might think that function 1 still
> uses the code added by patch A. I would say something like:
> 
> 
> Sometimes we would like to revert a particular fix. This is not easy
> because we want to keep all the other fixes active and we can only
> revert the last applied patch.
> 
> One solution would be to apply a new patch that implements all
> the reverted functions as they were in the original code. It would work
> as expected, but there would be unnecessary redirections.
> 
> A better solution would be to say that a new patch replaces
> an older one. This might be complicated if we want to replace
> a particular patch. But it is rather easy when a new cumulative
> patch replaces all others.
> 
> For this, a new "replace" flag is added to the struct klp_patch.
> When it is enabled, 'no_op' klp_func will be dynamically
> created for all functions that are already being patched but
> that will no longer be modified by the new patch. They are
> temporarily used as a new target during the patch transition...
> 
>

ok. I will re-work the text here.

>> Introduce atomic replace, by first creating a 'no_op' klp_func for all
>> the functions that are reverted by patch B. The reason that 'no_op'
>> klp_funcs are created, instead of just unregistering directly from the ftrace
>> function hook, is to ensure that the consistency model is properly preserved.
>> By introducing the 'no_op' functions, 'atomic replace' leverages the existing
>> consistency model code. Then, after transition to the new code, 'atomic
>> replace' unregisters the ftrace handlers that are associated with the 'no_op'
>> klp_funcs, such that we run the original un-patched function with no
>> additional no_op function overhead.
>>
>> Since 'atomic replace' has completely replaced all previous livepatch modules,
>> it explicitly disables all previous livepatch modules, in the example, patch A,
>> such that the livepatch module associated with patch A can be completely removed
>> (rmmod). Patch A is now in a permanently disabled state. But if it is removed
>> from the kernel with rmmod, it can be re-inserted (insmod), and act as an atomic
>> replace on top of patch A.
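
(For context, from a patch author's point of view the intended usage is just to
set the new flag in the static patch definition; a minimal sketch, assuming the
flag name from this series and the existing klp_register_patch()/klp_enable_patch()
API, with 'objs' being the usual statically defined klp_object array:)

	static struct klp_patch patch = {
		.mod = THIS_MODULE,
		.objs = objs,
		/* proposed flag: revert anything the new patch no longer touches */
		.replace = true,
	};

	static int livepatch_init(void)
	{
		int ret;

		ret = klp_register_patch(&patch);
		if (ret)
			return ret;

		ret = klp_enable_patch(&patch);
		if (ret) {
			WARN_ON(klp_unregister_patch(&patch));
			return ret;
		}
		return 0;
	}
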
>>
>> diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
>> index 8d3df55..ee6d18b 100644
>> --- a/include/linux/livepatch.h
>> +++ b/include/linux/livepatch.h
>> @@ -119,10 +121,12 @@ struct klp_object {
>>   * @mod:	reference to the live patch module
>>   * @objs:	object entries for kernel objects to be patched
>>   * @immediate:  patch all funcs immediately, bypassing safety mechanisms
>> + * @replace:	replace all funcs, reverting functions that are no longer patched
>>   * @list:	list node for global list of registered patches
>>   * @kobj:	kobject for sysfs resources
>>   * @obj_list:	head of list for dynamically allocated struct klp_object
>>   * @enabled:	the patch is enabled (but operation may be incomplete)
>> + * @replaced:	the patch has been replaced and cannot be re-enabled
> 
> I finally understand why you do this. I forgot that even disabled
> patches are still in the klp_patches list.
> 
> This makes some sense for patches that fix different bugs and
> touch the same function. They should be applied and enabled
> in the right order because a later patch for the same function
> must be based on the code from the previous one.
> 
> It makes less sense for cumulative patches that we use in kGraft.
> There, basically every patch has the "replace" flag set. If
> we enable one patch it simply replaces any other one. Then
> ordering is not important. Each patch is standalone and
> consistent on its own.
> 
> 
> I see two problems with your approach. One is cosmetic.
> The names of the two flags replace/replaced are too similar.
> It is quite prone to mistakes ;-)
> 
> Second, we cannot remove a module where any function is patched
> using the "immediate" flag. But people might want to revert
> to this patch if the last one does not work as expected.
> After all, the main purpose of livepatching is to fix
> problems without a system reboot. Therefore we should
> allow enabling the replaced patches even without
> removing the module.
> 

If the livepatch has the 'replace' bit set and not the 'immediate' bit,
then I believe that we could remove *all* types of previous livepatches
even if they have the 'immediate' flag set. That is, because of the
consistency model, we are sure that we are running only functions from
the cumulative replace patch.

So I think it may be worth making a distinction between patches that
have the 'replace' bit set and the immediate bit *unset*, and those that
have the 'replace' bit set and the immediate bit *set*. It seems to me
that this patchset could just reject 'replace' patches that have the
'immediate' flag set, and treat supporting 'immediate' there as a
separate todo item if there is interest?

Then, a 'replace' patch with 'immediate' not set conceptually does a
'disable' and an 'unregister' of all previous live patches, such that
they can be removed (rmmod) and re-inserted.
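
To make that concrete, the check I have in mind would be something like the
following (just a sketch; the helper name is made up, and it could be called
from klp_register_patch() or __klp_enable_patch()):

	static int klp_check_replace_flags(struct klp_patch *patch)
	{
		struct klp_object *obj;
		struct klp_func *func;

		if (!patch->replace)
			return 0;

		/* a cumulative replace patch relies on the consistency model */
		if (patch->immediate)
			return -EINVAL;

		/* reject per-function 'immediate' as well */
		klp_for_each_object(patch, obj) {
			klp_for_each_func(obj, func) {
				if (func->immediate)
					return -EINVAL;
			}
		}

		return 0;
	}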

> 
> One solution would be to remove the replaced patches from
> klp_patch list. And enforce stacking the following way
> in __klp_enable_patch():
> 
> 	/*
> 	 * Enforce stacking: only the first disabled patch can be
> 	 * enabled. The only exception is a patch that atomically
> 	 * replaces all other patches and was disabled by
> 	 * another patch of this type.
> 	 */
> 	if (list_empty(&patch->list)) {
> 		if (!patch->replace)
> 			return -EBUSY;
> 		list_add_tail(&patch->list, &klp_patches);
> 	} else if (patch->list.prev != &klp_patches &&
> 		   !list_prev_entry(patch, list)->enabled) {
> 		return -EBUSY;
> 	}
> 
> Well, I would add this support later in a separate patch.
> We would need to (re)generate the no_op stuff in __klp_enable_patch()
> for this. Also we would need to make sure that this patch
> could no longer get enabled once klp_unregister_patch()
> is called.
> 
> For now, I would just remove the replaced patches from
> the klp_patches list and refuse to enable patches that are
> not in the list. I mean to use a check of
> list_empty(&patch->list) instead of the extra
> "replaced" variable.

Ok, that makes sense to me as a way to remove 'replaced'.
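
So roughly, in klp_complete_transition(), instead of marking the old patch as
'replaced', something like this (just a sketch, using the klp_unpatch_objects()
signature from this series):

	if (old_patch->enabled) {
		klp_unpatch_objects(old_patch, false);
		old_patch->enabled = false;
		/* drop it from klp_patches rather than setting 'replaced' */
		list_del_init(&old_patch->list);
		module_put(old_patch->mod);
	}

(And the walk over klp_patches would need the _safe list iterator if we delete
entries while iterating.)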

> 
> 
>>   * @finish:	for waiting till it is safe to remove the patch module
>>   */
>>  struct klp_patch {
>> @@ -130,12 +134,14 @@ struct klp_patch {
>>  	struct module *mod;
>>  	struct klp_object *objs;
>>  	bool immediate;
>> +	bool replace;
>>  
>>  	/* internal */
>>  	struct list_head list;
>>  	struct kobject kobj;
>>  	struct list_head obj_list;
>>  	bool enabled;
>> +	bool replaced;
>>  	struct completion finish;
>>  };
>>  
>> diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
>> index 6004be3..21cecc5 100644
>> --- a/kernel/livepatch/core.c
>> +++ b/kernel/livepatch/core.c
>> @@ -600,13 +603,38 @@ static void klp_free_patch(struct klp_patch *patch)
>>  		list_del(&patch->list);
>>  }
>>  
>> -static int klp_init_func(struct klp_object *obj, struct klp_func *func)
>> +void klp_patch_free_no_ops(struct klp_patch *patch)
>> +{
>> +	struct klp_object *obj, *tmp_obj;
>> +	struct klp_func *func, *tmp_func;
>> +
>> +	klp_for_each_object(patch, obj) {
>> +		list_for_each_entry_safe(func, tmp_func, &obj->func_list,
>> +					 func_entry) {
>> +			list_del_init(&func->func_entry);
>> +			kobject_put(&func->kobj);
>> +			kfree(func->old_name);
>> +			kfree(func);
> 
> kobject_put() is asynchronous. The structure should be freed using
> the kobject release method.
> 
> The question is how secure this should be. We might want to block
> other operations with the patch until all the release methods
> are called. But this might be tricky as there is not a single
> root kobject that would get destroyed at the end.
> 
> A crazy solution would be to define a global atomic counter.
> It might get increased with each kobject_put() call and
> decreased in each release method. Then we could wait
> until it gets zero.
> 
> It should be safe to wait with klp_mutex taken. Note that this
> is not possible with patch->kobj, where the "enable"
> attribute takes the mutex as well, see
> enabled_store() in kernel/livepatch/core.c.

Thanks for pointing this out - it looks like the livepatch code uses
wait_for_completion() with special kobj callbacks. Perhaps there could
be a nop callback, but I'd have to look at this more closely...
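
Something along these lines, perhaps (just a sketch; the ktype name is made up,
and it assumes the dynamically allocated no_op funcs get their own kobj_type
whose release method does the actual freeing, so klp_patch_free_no_ops() would
only need to do the kobject_put()):

	static void klp_kobj_release_func_no_op(struct kobject *kobj)
	{
		struct klp_func *func = container_of(kobj, struct klp_func, kobj);

		/* safe to free here: the kobject core calls release last */
		kfree(func->old_name);
		kfree(func);
	}

	static struct kobj_type klp_ktype_func_no_op = {
		.release	= klp_kobj_release_func_no_op,
		.sysfs_ops	= &kobj_sysfs_ops,
	};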

> 
> 
>> +		}
>> +		INIT_LIST_HEAD(&obj->func_list);
>> +	}
>> +	list_for_each_entry_safe(obj, tmp_obj, &patch->obj_list, obj_entry) {
>> +		list_del_init(&obj->obj_entry);
>> +		kobject_put(&obj->kobj);
>> +		kfree(obj->name);
>> +		kfree(obj);
> 
> Same here.
> 
>> +	}
>> +	INIT_LIST_HEAD(&patch->obj_list);
>> +}
>> +
>> +static int klp_init_func(struct klp_object *obj, struct klp_func *func,
>> +			 bool no_op)
>>  {
>> -	if (!func->old_name || !func->new_func)
>> +	if (!func->old_name || (!no_op && !func->new_func))
>>  		return -EINVAL;
>>  
>> -	INIT_LIST_HEAD(&func->stack_node);
>>  	INIT_LIST_HEAD(&func->func_entry);
>> +	INIT_LIST_HEAD(&func->stack_node);
> 
> This just shuffles a line that was added in the previous patch ;-)
> 
>>  	func->patched = false;
>>  	func->transition = false;
>>  
>> @@ -670,19 +698,23 @@ static int klp_init_object_loaded(struct klp_patch *patch,
>>  	return 0;
>>  }
>>  
>> -static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
>> +static int klp_init_object(struct klp_patch *patch, struct klp_object *obj,
>> +			   bool no_op)
> 
> It we initialized the no_op stuff earlier, we might pass this
> information via obj->no_op. See below for more details.
> 
>>  {
>>  	struct klp_func *func;
>>  	int ret;
>>  	const char *name;
>>  
>> -	if (!obj->funcs)
>> +	if (!obj->funcs && !no_op)
>>  		return -EINVAL;
>>  
>>  	obj->patched = false;
>>  	obj->mod = NULL;
>> +	INIT_LIST_HEAD(&obj->obj_entry);
>> +	INIT_LIST_HEAD(&obj->func_list);
> 
> This should be done already in the previous patch.
> Well, we might want to change this to
> 
> 	if (!obj->no_op)
> 		INIT_LIST_HEAD(&obj->func_list);
> 
> 
>>  
>> -	klp_find_object_module(obj);
>> +	if (!no_op)
>> +		klp_find_object_module(obj);
> 
> This looks just like an optimization. You copy the information
> from the related to-be-removed patch. I would avoid it to reduce
> the extra changes and twists in the code. It is a rare slow path.

ok.

> 
> Instead, I suggest to initialize the no_op related stuff
> earlier and then call the classic init() methods that
> would fill the rest. See below.
> 
> 
>>  	name = klp_is_module(obj) ? obj->name : "vmlinux";
>>  	ret = kobject_init_and_add(&obj->kobj, &klp_ktype_object,
>> @@ -690,8 +722,12 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
>>  	if (ret)
>>  		return ret;
>>  
>> +	if (no_op)
>> +		return 0;
> 
> This might be hard to maintain. It seems to be needed because
> you call klp_init_no_ops() after you have already called klp_init_object()
> for the statically defined ones.
> 
> Alternative solution would be to add the dynamically allocated
> structures earlier and call basically the unchanged klp_init_object()
> for all of them. See below for more details.

Ok, I can re-work this.

> 
>> +
>>  	klp_for_each_func(obj, func) {
>> -		ret = klp_init_func(obj, func);
>> +		func->no_op = false;
> 
> This is weird and not needed. We should not touch it here.
> Note that statically defined structures are zeroed except
> for the explicitly defined members. Therefore it is false
> out of the box. It should be set earlier for the dynamic ones.
> 
>> +		ret = klp_init_func(obj, func, false);
>>  		if (ret)
>>  			goto free;
>>  	}
>> @@ -710,6 +746,115 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
>>  	return ret;
>>  }
>>  
>> +static int klp_init_patch_no_ops(struct klp_patch *prev_patch,
>> +				 struct klp_patch *patch)
>> +{
>> +	struct klp_object *obj, *prev_obj;
>> +	struct klp_func *prev_func, *func;
>> +	int ret;
>> +	bool found, mod;
> 
> This is a lot of spaghetti code: more than one screen and 6 indentation
> levels. Also all the use of prev_patch is rather confusing.
> 
> If we call this function before we initialize the statically defined
> objects, then we could keep the original klp_init_* functions
> almost without changes.
> 
> First, I suggest to create a function that would initialize just
> the stuff specific for the dynamically allocated stuff.
> 
> static void klp_init_patch_dyn(struct klp_patch *patch)
> {
> 	struct klp_object *obj;
> 	struct klp_func *func;
> 
> 	INIT_LIST_HEAD(&patch->obj_list);
> 	klp_for_each_object(patch, obj) {
> 		INIT_LIST_HEAD(&obj->obj_entry);
> 		INIT_LIST_HEAD(&obj->func_list);
> 		klp_for_each_func(obj, func) {
> 			INIT_LIST_HEAD(&func->func_entry);
> 		}
> 	}
> }
> 
> I still suggest to make clear that these list items are
> not generic. I would prefer to make it clear that they
> are used by the no_op stuff =>
> 
>   patch->obj_list  -> patch->nop_obj_list
>   obj->obj_entry   -> obj->nop_obj_entry
>   obj->func_list   -> obj->nop_func_list
>   func->func_entry -> func->nop_func_entry
> 
> Note that I used "nop" instead of "no_op". I think that
> it is a well-known shortcut.
> 
> 
> 
> let's go back to the init stuff. As a second step, we could
> allocate and add the structures for the no_op operations.
> We should fill the no_op-specific stuff and the items that
> are normally pre-set in statically defined structures.
> Then we could call the classic klp_init_object().
> 
> I would split this into several levels. I do not say that the
> following is the best split but...
> 
> static int klp_check_and_add_no_op_func(struct klp_object *obj,
> 				struct klp_func *old_func)
> {
> 	struct klp_func *func;
> 
> 	func = klp_find_func(obj, old_func->old_name, old_func->old_sympos);
> 	if (func)
> 		return 0;
> 
> 	func = klp_create_no_op_func(old_func->old_name, old_func->old_sympos,
> 				     old_func->immediate);
> 	if (IS_ERR(func))
> 		return PTR_ERR(func);
> 
> 	list_add(&func->func_entry, &obj->func_list);
> 
> 	return 0;
> }
> 
> static int klp_check_and_add_no_op_object(struct klp_patch *patch,
> 				struct klp_object *old_obj)
> {
> 	struct klp_object *obj;
> 	struct klp_func *old_func;
> 	int err = 0;
> 
> 	obj = klp_find_obj(patch, old_obj->name);
> 	if (!obj) {
> 		obj = klp_create_no_op_object(old_obj->name);
> 		if (IS_ERR(obj))
> 			   return PTR_ERR(obj);
> 
> 		list_add(&obj->obj_entry, &patch->obj_list);
> 	}
> 
> 	klp_for_each_func(old_obj, old_func) {
> 		err = klp_check_and_add_no_op_func(obj, old_func);
> 		if (err)
> 			// destroy previously created no_op stuff?
> 	}
> 
> 	return 0;
> }
> 
> static int klp_check_and_add_no_ops(struct klp_patch *patch)
> {
> 	struct klp_patch *old_patch;
> 	struct klp_object *old_obj;
> 	int err = 0;
> 
> 	list_for_each_entry(old_patch, &klp_patches, list) {
> 		klp_for_each_object(old_patch, old_obj) {
> 			err = klp_check_and_add_no_op_object(patch,
> 					old_obj);
> 			if (err)
> 				return err;
> 		}
> 	}
> 
> 	return 0;
> }
> 
> It is more code, but it should be more readable and manageable.
> Especially, the new functions handle only the no_op specific stuff.
> They do not duplicate functionality that could be done
> by the original init() functions for the statically
> defined structures.

Ok, this restructuring makes sense and I will try it out.

> 
>> +
>> +	klp_for_each_object(prev_patch, prev_obj) {
>> +		klp_for_each_func(prev_obj, prev_func) {
>> +next_func:
> 
>> +			klp_for_each_object(patch, obj) {
>> +				klp_for_each_func(obj, func) {
>> +					if ((strcmp(prev_func->old_name,
>> +						    func->old_name) == 0) &&
>> +						(prev_func->old_sympos ==
>> +							func->old_sympos)) {
>> +						goto next_func;
>> +					}
>> +				}
>> +			}
> 
> This could be used to implement that klp_find_func().
> 
> 
>> +			mod = klp_is_module(prev_obj);
>> +			found = false;
>> +			klp_for_each_object(patch, obj) {
>> +				if (mod) {
>> +					if (klp_is_module(obj) &&
>> +					    strcmp(prev_obj->name,
>> +						   obj->name) == 0) {
>> +						found = true;
>> +						break;
>> +					}
>> +				} else if (!klp_is_module(obj)) {
>> +					found = true;
>> +					break;
>> +				}
>> +			}
> 
> This could be used to implement klp_find_object().
> 
> 
>> +			if (!found) {
>> +				obj = kzalloc(sizeof(*obj), GFP_KERNEL);
>> +				if (!obj)
>> +					return -ENOMEM;
>> +				if (prev_obj->name) {
>> +					obj->name = kstrdup(prev_obj->name,
>> +							    GFP_KERNEL);
>> +					if (!obj->name) {
>> +						kfree(obj);
>> +						return -ENOMEM;
>> +					}
>> +				} else {
>> +					obj->name = NULL;
>> +				}
>> +				obj->funcs = NULL;
>> +				ret = klp_init_object(patch, obj, true);
>> +				if (ret) {
>> +					kfree(obj->name);
>> +					kfree(obj);
>> +					return ret;
>> +				}
>> +				obj->mod = prev_obj->mod;
>> +				list_add(&obj->obj_entry, &patch->obj_list);
>> +			}
> 
> This is a base for the klp_create_no_op_object(). It should set everything
> that will be needed for the classic klp_init_object() plus the no_op
> specific stuff. It might look like:
> 
> static struct klp_object *
> klp_create_no_op_object(const char *name)
> {
> 	struct klp_object *obj;
> 
> 	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
> 	if (!obj)
> 		return ERR_PTR(-ENOMEM);
> 
> 	obj->no_op = true;
> 	INIT_LIST_HEAD(&obj->func_list);
> 
> 	if (name) {
> 		obj->name = kstrdup(name, GFP_KERNEL);
> 		if (!obj->name) {
> 			kfree(obj);
> 			return ERR_PTR(-ENOMEM);
> 		}
> 	}
> 
> 	return obj;
> }
> 
>> +			func = kzalloc(sizeof(*func), GFP_KERNEL);
>> +			if (!func)
>> +				return -ENOMEM;
>> +			if (prev_func->old_name) {
>> +				func->old_name = kstrdup(prev_func->old_name,
>> +							 GFP_KERNEL);
>> +				if (!func->old_name) {
>> +					kfree(func);
>> +					return -ENOMEM;
>> +				}
>> +			} else {
>> +				func->old_name = NULL;
>> +			}
>> +			func->new_func = NULL;
>> +			func->old_sympos = prev_func->old_sympos;
>> +			ret = klp_init_func(obj, func, true);
>> +			func->immediate = prev_func->immediate;
>> +			func->old_addr = prev_func->old_addr;
>> +			func->old_size = prev_func->old_size;
>> +			func->new_size = 0;
>> +			func->no_op = true;
>> +			list_add(&func->func_entry, &obj->func_list);
> 
> And this is a base for klp_create_no_op_func(). Something like:
> 
> static struct klp_func *
> klp_create_no_op_func(const char *old_name,
> 		      unsigned long old_sympos,
> 		      bool immediate)
> {
> 	struct klp_func *func;
> 
> 	if (!old_name)
> 		return ERR_PTR(-EINVAL);
> 
> 	func = kzalloc(sizeof(*func), GFP_KERNEL);
> 	if (!func)
> 		return ERR_PTR(-ENOMEM);
> 
> 	func->old_name = kstrdup(old_name, GFP_KERNEL);
> 	if (!func->old_name) {
> 		kfree(func);
> 		return ERR_PTR(-ENOMEM);
> 	}
> 
> 	func->old_sympos = old_sympos;
> 	func->immediate = immediate;
> 	func->no_op = true;
> 
> 	return func;
> }
> 
> 
>> +		}
>> +	}
>> +	return 0;
>> +}
>> +
>> +static int klp_init_no_ops(struct klp_patch *patch)
>> +{
>> +	struct klp_patch *prev_patch;
>> +	int ret = 0;
>> +
>> +	if (!patch->replace)
>> +		return 0;
>> +
>> +	prev_patch = patch;
>> +	while (prev_patch->list.prev != &klp_patches) {
>> +		prev_patch = list_prev_entry(prev_patch, list);
>> +
>> +		ret = klp_init_patch_no_ops(prev_patch, patch);
>> +		if (ret)
>> +			return ret;
>> +
>> +		if (prev_patch->replace)
>> +			break;
>> +	}
>> +	return 0;
>> +}
>> +
>>  static int klp_init_patch(struct klp_patch *patch)
>>  {
>>  	struct klp_object *obj;
>> @@ -721,6 +866,8 @@ static int klp_init_patch(struct klp_patch *patch)
>>  	mutex_lock(&klp_mutex);
>>  
>>  	patch->enabled = false;
>> +	patch->replaced = false;
>> +
>>  	init_completion(&patch->finish);
>>  
>>  	ret = kobject_init_and_add(&patch->kobj, &klp_ktype_patch,
>> @@ -732,20 +879,25 @@ static int klp_init_patch(struct klp_patch *patch)
>>  
>>  	INIT_LIST_HEAD(&patch->obj_list);
>>  	klp_for_each_object(patch, obj) {
>> -		INIT_LIST_HEAD(&obj->obj_entry);
>> -		INIT_LIST_HEAD(&obj->func_list);
>> -		ret = klp_init_object(patch, obj);
>> +		ret = klp_init_object(patch, obj, false);
> 
> The extra parameter still might be needed because only no_ops
> need to get re-added and reinitialized when the disabled patch
> is enabled again. Well, this feature might be added later
> in a separate patch.
> 
>>  		if (ret)
>>  			goto free;
>>  	}
>>  
>>  	list_add_tail(&patch->list, &klp_patches);
>>  
>> +	ret = klp_init_no_ops(patch);
>> +	if (ret) {
>> +		list_del(&patch->list);
>> +		goto free;
>> +	}
> 
> This should be done earlier using that klp_check_and_add_no_ops().
> 
>> +
>>  	mutex_unlock(&klp_mutex);
>>  
>>  	return 0;
>>  
>>  free:
>> +	klp_patch_free_no_ops(patch);
>>  	klp_free_objects_limited(patch, obj);
>>  
>>  	mutex_unlock(&klp_mutex);
>> diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
>> index b004a1f..d5e620e 100644
>> --- a/kernel/livepatch/transition.c
>> +++ b/kernel/livepatch/transition.c
>> @@ -70,6 +72,7 @@ static void klp_synchronize_transition(void)
>>  	schedule_on_each_cpu(klp_sync);
>>  }
>>  
>> +
> 
> extra empty line?
> 
>>  /*
>>   * The transition to the target patch state is complete.  Clean up the data
>>   * structures.
>> @@ -81,14 +84,39 @@ static void klp_complete_transition(void)
>>  	struct task_struct *g, *task;
>>  	unsigned int cpu;
>>  	bool immediate_func = false;
>> +	bool no_op = false;
>> +	struct klp_patch *prev_patch;
>> +
>> +	/*
>> +	 * for replace patches, we disable all previous patches, and replace
>> +	 * the dynamic no-op functions by removing the ftrace hook. After
>> +	 * klp_synchronize_transition() is called it is safe to free the
>> +	 * dynamic no-op functions, done by klp_patch_free_no_ops()
>> +	 */
>> +	if (klp_target_state == KLP_PATCHED && klp_transition_patch->replace) {
>> +		prev_patch = klp_transition_patch;
>> +		while (prev_patch->list.prev != &klp_patches) {
>> +			prev_patch = list_prev_entry(prev_patch, list);
> 
> I wonder if the following code might be more obvious:
> 
> 	list_for_each_entry(old_patch, &klp_patches, list) {
> 		if (old_patch == klp_transition_patch)
> 			continue;
> 
>> +			if (prev_patch->enabled) {
>> +				klp_unpatch_objects(prev_patch, false);
>> +				prev_patch->enabled = false;
> 
> I suggested to do this in __klp_disable_patch() in the last review.
> But we need something else.
> 
> The fact is that patch->enabled is normally manipulated at the
> beginning of the transition, see __klp_enable_patch()
> and __klp_disable_patch(). Therefore we are kind of
> inconsistent when we manipulate it here for the replaced
> patches.

ok, the suggestion is that for the non-immediate replace we do a
'special' disable here, once we know that the transition patch has been
successfully enabled.

> But we must keep the patches enabled during the entire
> transition. Otherwise, klp_module_coming() and klp_module_going()
> would not patch/unpatch the objects correctly.

I think the patch does - it's only once it has been successfully enabled
that it goes and disables the previous patches.

> 
> In fact, I think that we should set
> klp_transition_patch->enabled = false in klp_complete_transition()
> instead of in __klp_disable_patch(). I mean that flag should reflect
> whether the ftrace handlers see the patch or not. Therefore it
> should always be true during the transition. Then we would not
> need the extra check for klp_transition_patch in coming/going
> handlers. Well, this should be done in separate patch.
> I could prepare it if you want.

I don't quite see why this is needed for this patch series - I think the
added nops should be handled the way all the other functions are...


> 
> 
>> +				prev_patch->replaced = true;
>> +				module_put(prev_patch->mod);
> 
> In each case, we could do this only when the related no_op
> functions were not applied using the "immediate" flag.

yes.

> 
> I am not 100% sure that it is really necessary. But I would feel
> more comfortable if we did this after klp_synchronize_transition(),
> so that no ftrace handler could see the related
> functions on the stack.

Which bit are you referring to here?

> 
>> +			}
>> +		}
>> +		klp_unpatch_objects(klp_transition_patch, true);
>> +		no_op = true;
>> +	}
>>  
>>  	if (klp_target_state == KLP_UNPATCHED) {
>>  		/*
>>  		 * All tasks have transitioned to KLP_UNPATCHED so we can now
>>  		 * remove the new functions from the func_stack.
>>  		 */
>> -		klp_unpatch_objects(klp_transition_patch);
>> +		klp_unpatch_objects(klp_transition_patch, false);
> 
> This code path made me think about a situation where
> the "enable" operation was reverted during the transition.
> Then the target state is KLP_UNPATCHED and no_op stuff
> is there.
> 
> Well, in this case, we need to keep the no_op stuff
> until either the patch is enabled again and the
> transition finishes, or we remove/free it in
> klp_unregister_patch().

In the case that klp_cancel_transition() calls klp_complete_transition()
and we remove the nop functions, we also remove all the regular
'functions', which should be re-added on the function stacks if the
patch is re-enabled. So I think this is fine.

Thanks,

-Jason



> 
> 
>> +	}
>>  
>> +	if (klp_target_state == KLP_UNPATCHED || no_op) {
>>  		/*
>>  		 * Make sure klp_ftrace_handler() can no longer see functions
>>  		 * from this patch on the ops->func_stack.  Otherwise, after
> 
> Best Regards,
> Petr
> 
