Date:   Wed, 27 Sep 2017 23:41:30 -0400
From:   Jason Baron <jbaron@...mai.com>
To:     linux-kernel@...r.kernel.org, live-patching@...r.kernel.org
Cc:     jpoimboe@...hat.com, jeyu@...nel.org, jikos@...nel.org,
        mbenes@...e.cz, pmladek@...e.com
Subject: [PATCH v3 2/2] livepatch: add atomic replace

Sometimes we would like to revert a particular fix. This is currently
not easy, because we want to keep all other fixes active and only the
last applied patch can be reverted.

One solution would be to apply a new patch that implements all the
reverted functions as they were in the original code. It would work as
expected, but there would be unnecessary redirections. In addition, it
would require knowing at build time which functions need to be
reverted.

A better solution is to say that a new patch replaces an older one.
This might be complicated if we wanted to replace a particular patch,
but it is rather easy when a new cumulative patch replaces all others.

Add a new "replace" flag to struct klp_patch. When it is enabled, a set of
nop' klp_func will be dynamically created for all functions that are
already being patched but that will not longer be modified by the new
patch. They are temporarily used as a new target during the patch
transition.
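
For illustration, a cumulative patch module then only needs to set the
new flag in its klp_patch definition. A minimal sketch, modeled on
samples/livepatch/livepatch-sample.c (the patched function here is just
the sample's cmdline_proc_show; a real cumulative patch would list all
the functions it still changes):

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/livepatch.h>
#include <linux/seq_file.h>

static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
{
	seq_printf(m, "%s\n", "this has been live patched");
	return 0;
}

static struct klp_func funcs[] = {
	{
		.old_name = "cmdline_proc_show",
		.new_func = livepatch_cmdline_proc_show,
	}, { }
};

static struct klp_object objs[] = {
	{
		/* name being NULL means vmlinux */
		.funcs = funcs,
	}, { }
};

static struct klp_patch patch = {
	.mod = THIS_MODULE,
	.objs = objs,
	/* revert any previously patched function not listed above */
	.replace = true,
};

static int livepatch_init(void)
{
	int ret;

	ret = klp_register_patch(&patch);
	if (ret)
		return ret;
	ret = klp_enable_patch(&patch);
	if (ret) {
		WARN_ON(klp_unregister_patch(&patch));
		return ret;
	}
	return 0;
}

static void livepatch_exit(void)
{
	WARN_ON(klp_unregister_patch(&patch));
}

module_init(livepatch_init);
module_exit(livepatch_exit);
MODULE_LICENSE("GPL");
MODULE_INFO(livepatch, "Y");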

Since an 'atomic replace' patch completely replaces all previous
livepatch modules, it explicitly disables them. However, livepatch
modules that have been replaced can be re-enabled if they have the
'replace' flag set. In that case the set of 'nop' functions is
re-calculated, such that the re-enabled patch again replaces all
others.

For example, if kpatch-a.ko and kpatch-b.ko have the 'replace' flag set
then:

# insmod kpatch-a.ko
# insmod kpatch-b.ko

At this point we have:

# cat /sys/kernel/livepatch/kpatch-a/enabled
0

# cat /sys/kernel/livepatch/kpatch-b/enabled
1

To revert to the kpatch-a state we can do:

# echo 1 > /sys/kernel/livepatch/kpatch-a/enabled

And now we have:

# cat /sys/kernel/livepatch/kpatch-a/enabled
1

# cat /sys/kernel/livepatch/kpatch-b/enabled
0
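
Note that both patch modules remain loaded throughout; replacing a
patch only flips its 'enabled' state. So one would still expect
something like (output abridged):

# lsmod | grep kpatch
kpatch_b ...
kpatch_a ...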

Note that it may be possible to unload (rmmod) replaced patches in some
cases based on the consistency model, when we know that all the
functions contained in the patch are no longer in use; however, this is
left as future work, should the functionality be desired.

Also, __klp_enable_patch() calls klp_add_nops(), which necessitated
moving a number of existing functions above __klp_enable_patch(). Thus,
there is a bit of churn from moving functions that are not otherwise
modified.

Signed-off-by: Jason Baron <jbaron@...mai.com>
Cc: Josh Poimboeuf <jpoimboe@...hat.com>
Cc: Jessica Yu <jeyu@...nel.org>
Cc: Jiri Kosina <jikos@...nel.org>
Cc: Miroslav Benes <mbenes@...e.cz>
Cc: Petr Mladek <pmladek@...e.com>
---
 include/linux/livepatch.h     |   4 +
 kernel/livepatch/core.c       | 642 ++++++++++++++++++++++++++++++------------
 kernel/livepatch/core.h       |   6 +
 kernel/livepatch/patch.c      |  22 +-
 kernel/livepatch/patch.h      |   4 +-
 kernel/livepatch/transition.c |  50 +++-
 6 files changed, 532 insertions(+), 196 deletions(-)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index e03ce11..93e8d44 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -51,6 +51,7 @@
  * @new_size:	    size of the new function
  * @patched:	    the func has been added to the klp_ops list
  * @transition:	    the func is currently being applied or reverted
+ * @nop:	    used to revert a previously patched function
  *
  * The patched and transition variables define the func's patching state.  When
  * patching, a func is always in one of the following states:
@@ -89,6 +90,7 @@ struct klp_func {
 	unsigned long old_size, new_size;
 	bool patched;
 	bool transition;
+	bool nop;
 };
 
 /**
@@ -121,6 +123,7 @@ struct klp_object {
  * @mod:	  reference to the live patch module
  * @objs:	  object entries for kernel objects to be patched
  * @immediate:	  patch all funcs immediately, bypassing safety mechanisms
+ * @replace:	  revert previously patched functions not included in the patch
  * @list:	  list node for global list of registered patches
  * @kobj:	  kobject for sysfs resources
  * @nop_obj_list: head of list for dynamically allocated struct klp_object
@@ -132,6 +135,7 @@ struct klp_patch {
 	struct module *mod;
 	struct klp_object *objs;
 	bool immediate;
+	bool replace;
 
 	/* internal */
 	struct list_head list;
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 0d92fe6..f29da1b 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -45,7 +45,14 @@
  */
 DEFINE_MUTEX(klp_mutex);
 
-static LIST_HEAD(klp_patches);
+LIST_HEAD(klp_patches);
+
+/*
+ * List of 'replaced' patches that have been replaced by a patch that has the
+ * 'replace' bit set. When they are added to this list, they are disabled but
+ * they are eligible for being re-enabled.
+ */
+LIST_HEAD(klp_replaced_patches);
 
 static struct kobject *klp_root_kobj;
 
@@ -181,17 +188,28 @@ static void klp_find_object_module(struct klp_object *obj)
 	mutex_unlock(&module_mutex);
 }
 
-static bool klp_is_patch_registered(struct klp_patch *patch)
+static bool klp_is_patch_in_list(struct klp_patch *patch,
+				 struct list_head *head)
 {
 	struct klp_patch *mypatch;
 
-	list_for_each_entry(mypatch, &klp_patches, list)
+	list_for_each_entry(mypatch, head, list)
 		if (mypatch == patch)
 			return true;
 
 	return false;
 }
 
+static bool klp_is_patch_registered(struct klp_patch *patch)
+{
+	return klp_is_patch_in_list(patch, &klp_patches);
+}
+
+static bool klp_is_patch_replaced(struct klp_patch *patch)
+{
+	return klp_is_patch_in_list(patch, &klp_replaced_patches);
+}
+
 static bool klp_initialized(void)
 {
 	return !!klp_root_kobj;
@@ -377,6 +395,413 @@ static int klp_write_object_relocations(struct module *pmod,
 	return ret;
 }
 
+atomic_t klp_nop_release_count;
+static DECLARE_WAIT_QUEUE_HEAD(klp_nop_release_wait);
+
+static void klp_kobj_release_object(struct kobject *kobj)
+{
+	struct klp_object *obj;
+
+	obj = container_of(kobj, struct klp_object, kobj);
+	/* Free dynamically allocated object */
+	if (!obj->funcs) {
+		kfree(obj->name);
+		kfree(obj);
+		atomic_dec(&klp_nop_release_count);
+		wake_up(&klp_nop_release_wait);
+	}
+}
+
+static struct kobj_type klp_ktype_object = {
+	.release = klp_kobj_release_object,
+	.sysfs_ops = &kobj_sysfs_ops,
+};
+
+static void klp_kobj_release_func(struct kobject *kobj)
+{
+	struct klp_func *func;
+
+	func = container_of(kobj, struct klp_func, kobj);
+	/* Free dynamically allocated functions */
+	if (!func->new_func) {
+		kfree(func->old_name);
+		kfree(func);
+		atomic_dec(&klp_nop_release_count);
+		wake_up(&klp_nop_release_wait);
+	}
+}
+
+static struct kobj_type klp_ktype_func = {
+	.release = klp_kobj_release_func,
+	.sysfs_ops = &kobj_sysfs_ops,
+};
+
+/*
+ * Free all functions' kobjects in the array up to some limit. When limit is
+ * NULL, all kobjects are freed.
+ */
+static void klp_free_funcs_limited(struct klp_object *obj,
+				   struct klp_func *limit)
+{
+	struct klp_func *func;
+
+	for (func = obj->funcs; func->old_name && func != limit; func++)
+		kobject_put(&func->kobj);
+}
+
+/* Clean up when a patched object is unloaded */
+static void klp_free_object_loaded(struct klp_object *obj)
+{
+	struct klp_func *func;
+
+	obj->mod = NULL;
+
+	klp_for_each_func(obj, func)
+		func->old_addr = 0;
+}
+
+/*
+ * Free all objects' kobjects in the array up to some limit. When limit is
+ * NULL, all kobjects are freed.
+ */
+static void klp_free_objects_limited(struct klp_patch *patch,
+				     struct klp_object *limit)
+{
+	struct klp_object *obj;
+
+	for (obj = patch->objs; obj->funcs && obj != limit; obj++) {
+		klp_free_funcs_limited(obj, NULL);
+		kobject_put(&obj->kobj);
+	}
+}
+
+static void klp_free_patch(struct klp_patch *patch)
+{
+	klp_free_objects_limited(patch, NULL);
+	if (!list_empty(&patch->list))
+		list_del(&patch->list);
+}
+
+void klp_remove_nops(struct klp_patch *patch)
+{
+	struct klp_object *obj, *tmp_obj;
+	struct klp_func *func, *tmp_func;
+
+	klp_for_each_object(patch, obj) {
+		list_for_each_entry_safe(func, tmp_func, &obj->nop_func_list,
+					 nop_func_entry) {
+			list_del_init(&func->nop_func_entry);
+			atomic_inc(&klp_nop_release_count);
+			kobject_put(&func->kobj);
+		}
+		INIT_LIST_HEAD(&obj->nop_func_list);
+	}
+	list_for_each_entry_safe(obj, tmp_obj, &patch->nop_obj_list,
+				 nop_obj_entry) {
+		list_del_init(&obj->nop_obj_entry);
+		atomic_inc(&klp_nop_release_count);
+		kobject_put(&obj->kobj);
+	}
+	INIT_LIST_HEAD(&patch->nop_obj_list);
+
+	wait_event(klp_nop_release_wait,
+		   atomic_read(&klp_nop_release_count) == 0);
+}
+
+static int klp_init_func(struct klp_object *obj, struct klp_func *func,
+			 bool nop)
+{
+	if (!func->old_name || (!nop && !func->new_func))
+		return -EINVAL;
+
+	INIT_LIST_HEAD(&func->stack_node);
+	func->patched = false;
+	func->transition = false;
+
+	/* The format for the sysfs directory is <function,sympos> where sympos
+	 * is the nth occurrence of this symbol in kallsyms for the patched
+	 * object. If the user selects 0 for old_sympos, then 1 will be used
+	 * since a unique symbol will be the first occurrence.
+	 */
+	return kobject_init_and_add(&func->kobj, &klp_ktype_func,
+				    &obj->kobj, "%s,%lu", func->old_name,
+				    func->old_sympos ? func->old_sympos : 1);
+}
+
+/* Arches may override this to finish any remaining arch-specific tasks */
+void __weak arch_klp_init_object_loaded(struct klp_patch *patch,
+					struct klp_object *obj)
+{
+}
+
+/* parts of the initialization that is done only when the object is loaded */
+static int klp_init_object_loaded(struct klp_patch *patch,
+				  struct klp_object *obj)
+{
+	struct klp_func *func;
+	int ret;
+
+	module_disable_ro(patch->mod);
+	ret = klp_write_object_relocations(patch->mod, obj);
+	if (ret) {
+		module_enable_ro(patch->mod, true);
+		return ret;
+	}
+
+	arch_klp_init_object_loaded(patch, obj);
+	module_enable_ro(patch->mod, true);
+
+	klp_for_each_func(obj, func) {
+		ret = klp_find_object_symbol(obj->name, func->old_name,
+					     func->old_sympos,
+					     &func->old_addr);
+		if (ret)
+			return ret;
+
+		ret = kallsyms_lookup_size_offset(func->old_addr,
+						  &func->old_size, NULL);
+		if (!ret) {
+			pr_err("kallsyms size lookup failed for '%s'\n",
+			       func->old_name);
+			return -ENOENT;
+		}
+
+		ret = kallsyms_lookup_size_offset((unsigned long)func->new_func,
+						  &func->new_size, NULL);
+		if (!ret) {
+			pr_err("kallsyms size lookup failed for '%s' replacement\n",
+			       func->old_name);
+			return -ENOENT;
+		}
+	}
+
+	return 0;
+}
+
+static int klp_init_object(struct klp_patch *patch, struct klp_object *obj,
+			   bool nop)
+{
+	struct klp_func *func;
+	int ret;
+	const char *name;
+
+	if (!obj->funcs && !nop)
+		return -EINVAL;
+
+	obj->patched = false;
+	obj->mod = NULL;
+
+	klp_find_object_module(obj);
+
+	name = klp_is_module(obj) ? obj->name : "vmlinux";
+	ret = kobject_init_and_add(&obj->kobj, &klp_ktype_object,
+				   &patch->kobj, "%s", name);
+	if (ret)
+		return ret;
+
+	if (nop)
+		return 0;
+
+	klp_for_each_func(obj, func) {
+		ret = klp_init_func(obj, func, false);
+		if (ret)
+			goto free;
+	}
+
+	if (klp_is_object_loaded(obj)) {
+		ret = klp_init_object_loaded(patch, obj);
+		if (ret)
+			goto free;
+	}
+
+	return 0;
+
+free:
+	klp_free_funcs_limited(obj, func);
+	kobject_put(&obj->kobj);
+	return ret;
+}
+
+static struct klp_func *klp_find_func(struct klp_object *obj,
+				      struct klp_func *old_func)
+{
+	struct klp_func *func;
+
+	klp_for_each_func(obj, func) {
+		if ((strcmp(old_func->old_name, func->old_name) == 0) &&
+		    (old_func->old_sympos == func->old_sympos)) {
+			return func;
+		}
+	}
+
+	return NULL;
+}
+
+static struct klp_object *klp_find_object(struct klp_patch *patch,
+					  struct klp_object *old_obj)
+{
+	struct klp_object *obj;
+	bool mod = klp_is_module(old_obj);
+
+	klp_for_each_object(patch, obj) {
+		if (mod) {
+			if (klp_is_module(obj) &&
+			    strcmp(old_obj->name, obj->name) == 0) {
+				return obj;
+			}
+		} else if (!klp_is_module(obj)) {
+			return obj;
+		}
+	}
+
+	return NULL;
+}
+
+static struct klp_func *klp_alloc_nop_func(struct klp_func *old_func,
+					   struct klp_object *obj)
+{
+	struct klp_func *func;
+	int err;
+
+	func = kzalloc(sizeof(*func), GFP_KERNEL);
+	if (!func)
+		return ERR_PTR(-ENOMEM);
+
+	INIT_LIST_HEAD(&func->nop_func_entry);
+	if (old_func->old_name) {
+		func->old_name = kstrdup(old_func->old_name, GFP_KERNEL);
+		if (!func->old_name) {
+			kfree(func);
+			return ERR_PTR(-ENOMEM);
+		}
+	}
+	func->old_sympos = old_func->old_sympos;
+	err = klp_init_func(obj, func, true);
+	if (err) {
+		kfree(func->old_name);
+		kfree(func);
+		return ERR_PTR(err);
+	}
+	func->immediate = old_func->immediate;
+	func->old_addr = old_func->old_addr;
+	func->old_size = old_func->old_size;
+	func->nop = true;
+
+	return func;
+}
+
+static struct klp_object *klp_alloc_nop_object(const char *name,
+					       struct klp_patch *patch)
+{
+	struct klp_object *obj;
+	int err;
+
+	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+	if (!obj)
+		return ERR_PTR(-ENOMEM);
+
+	INIT_LIST_HEAD(&obj->nop_func_list);
+	INIT_LIST_HEAD(&obj->nop_obj_entry);
+	if (name) {
+		obj->name = kstrdup(name, GFP_KERNEL);
+		if (!obj->name) {
+			kfree(obj);
+			return ERR_PTR(-ENOMEM);
+		}
+	}
+	err = klp_init_object(patch, obj, true);
+	if (err) {
+		kfree(obj->name);
+		kfree(obj);
+		return ERR_PTR(err);
+	}
+
+	return obj;
+}
+
+static int klp_add_nop_func(struct klp_object *obj,
+			    struct klp_func *old_func)
+{
+	struct klp_func *func;
+
+	func = klp_find_func(obj, old_func);
+	if (func) {
+		/*
+		 * If any of the previous functions that we are
+		 * reverting don't have the immediate set, then
+		 * we want to set it here. The idea is to use
+		 * the most restrictive case, although it's unlikely
+		 * for previous patches to set the immediate flag
+		 * differently for the same function. Note
+		 * if patch->immediate is set this field wouldn't
+		 * be consulted.
+		 */
+		if (func->immediate && !old_func->immediate)
+			func->immediate = false;
+
+		return 0;
+	}
+	func = klp_alloc_nop_func(old_func, obj);
+	if (IS_ERR(func))
+		return PTR_ERR(func);
+	list_add(&func->nop_func_entry, &obj->nop_func_list);
+
+	return 0;
+}
+
+static int klp_add_nop_object(struct klp_patch *patch,
+			      struct klp_object *old_obj)
+{
+	struct klp_object *obj;
+	struct klp_func *old_func;
+	int err = 0;
+
+	obj = klp_find_object(patch, old_obj);
+	if (!obj) {
+		obj = klp_alloc_nop_object(old_obj->name, patch);
+
+		if (IS_ERR(obj))
+			return PTR_ERR(obj);
+		list_add(&obj->nop_obj_entry, &patch->nop_obj_list);
+	}
+	klp_for_each_func(old_obj, old_func) {
+		err = klp_add_nop_func(obj, old_func);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+/* Add 'nop' functions which simply return to the caller to run
+ * the original function. The 'nop' functions are added to a
+ * patch to facilitate a 'replace' mode
+ */
+static int klp_add_nops(struct klp_patch *patch)
+{
+	struct klp_patch *old_patch;
+	struct klp_object *old_obj;
+	int err = 0;
+
+	if (!patch->replace)
+		return 0;
+
+	list_for_each_entry(old_patch, &klp_patches, list) {
+		if (old_patch == patch)
+			break;
+
+		klp_for_each_object(old_patch, old_obj) {
+			err = klp_add_nop_object(patch, old_obj);
+			if (err) {
+				klp_remove_nops(patch);
+				return err;
+			}
+		}
+	}
+
+	return 0;
+}
+
 static int __klp_disable_patch(struct klp_patch *patch)
 {
 	if (klp_transition_patch)
@@ -441,6 +866,7 @@ static int __klp_enable_patch(struct klp_patch *patch)
 {
 	struct klp_object *obj;
 	int ret;
+	bool replaced = false;
 
 	if (klp_transition_patch)
 		return -EBUSY;
@@ -448,10 +874,22 @@ static int __klp_enable_patch(struct klp_patch *patch)
 	if (WARN_ON(patch->enabled))
 		return -EINVAL;
 
+	if (klp_is_patch_replaced(patch)) {
+		list_move_tail(&patch->list, &klp_patches);
+		replaced = true;
+	}
+
 	/* enforce stacking: only the first disabled patch can be enabled */
 	if (patch->list.prev != &klp_patches &&
-	    !list_prev_entry(patch, list)->enabled)
-		return -EBUSY;
+	    !list_prev_entry(patch, list)->enabled) {
+		ret = -EBUSY;
+		goto cleanup_list;
+	}
+
+	/* setup nops */
+	ret = klp_add_nops(patch);
+	if (ret)
+		goto cleanup_list;
 
 	/*
 	 * A reference is taken on the patch module to prevent it from being
@@ -462,8 +900,10 @@ static int __klp_enable_patch(struct klp_patch *patch)
 	 * determine if a thread is still running in the patched code contained
 	 * in the patch module once the ftrace registration is successful.
 	 */
-	if (!try_module_get(patch->mod))
-		return -ENODEV;
+	if (!try_module_get(patch->mod)) {
+		ret = -ENODEV;
+		goto cleanup_list;
+	}
 
 	pr_notice("enabling patch '%s'\n", patch->mod->name);
 
@@ -497,6 +937,12 @@ static int __klp_enable_patch(struct klp_patch *patch)
 	patch->enabled = true;
 
 	return 0;
+
+cleanup_list:
+	if (replaced)
+		list_move(&patch->list, &klp_replaced_patches);
+
+	return ret;
 }
 
 /**
@@ -514,7 +960,7 @@ int klp_enable_patch(struct klp_patch *patch)
 
 	mutex_lock(&klp_mutex);
 
-	if (!klp_is_patch_registered(patch)) {
+	if (!klp_is_patch_registered(patch) && !klp_is_patch_replaced(patch)) {
 		ret = -EINVAL;
 		goto err;
 	}
@@ -553,7 +999,7 @@ static ssize_t enabled_store(struct kobject *kobj, struct kobj_attribute *attr,
 
 	mutex_lock(&klp_mutex);
 
-	if (!klp_is_patch_registered(patch)) {
+	if (!klp_is_patch_registered(patch) && !klp_is_patch_replaced(patch)) {
 		/*
 		 * Module with the patch could either disappear meanwhile or is
 		 * not properly initialized yet.
@@ -630,179 +1076,6 @@ static struct kobj_type klp_ktype_patch = {
 	.default_attrs = klp_patch_attrs,
 };
 
-static void klp_kobj_release_object(struct kobject *kobj)
-{
-}
-
-static struct kobj_type klp_ktype_object = {
-	.release = klp_kobj_release_object,
-	.sysfs_ops = &kobj_sysfs_ops,
-};
-
-static void klp_kobj_release_func(struct kobject *kobj)
-{
-}
-
-static struct kobj_type klp_ktype_func = {
-	.release = klp_kobj_release_func,
-	.sysfs_ops = &kobj_sysfs_ops,
-};
-
-/*
- * Free all functions' kobjects in the array up to some limit. When limit is
- * NULL, all kobjects are freed.
- */
-static void klp_free_funcs_limited(struct klp_object *obj,
-				   struct klp_func *limit)
-{
-	struct klp_func *func;
-
-	for (func = obj->funcs; func->old_name && func != limit; func++)
-		kobject_put(&func->kobj);
-}
-
-/* Clean up when a patched object is unloaded */
-static void klp_free_object_loaded(struct klp_object *obj)
-{
-	struct klp_func *func;
-
-	obj->mod = NULL;
-
-	klp_for_each_func(obj, func)
-		func->old_addr = 0;
-}
-
-/*
- * Free all objects' kobjects in the array up to some limit. When limit is
- * NULL, all kobjects are freed.
- */
-static void klp_free_objects_limited(struct klp_patch *patch,
-				     struct klp_object *limit)
-{
-	struct klp_object *obj;
-
-	for (obj = patch->objs; obj->funcs && obj != limit; obj++) {
-		klp_free_funcs_limited(obj, NULL);
-		kobject_put(&obj->kobj);
-	}
-}
-
-static void klp_free_patch(struct klp_patch *patch)
-{
-	klp_free_objects_limited(patch, NULL);
-	if (!list_empty(&patch->list))
-		list_del(&patch->list);
-}
-
-static int klp_init_func(struct klp_object *obj, struct klp_func *func)
-{
-	if (!func->old_name || !func->new_func)
-		return -EINVAL;
-
-	INIT_LIST_HEAD(&func->stack_node);
-	func->patched = false;
-	func->transition = false;
-
-	/* The format for the sysfs directory is <function,sympos> where sympos
-	 * is the nth occurrence of this symbol in kallsyms for the patched
-	 * object. If the user selects 0 for old_sympos, then 1 will be used
-	 * since a unique symbol will be the first occurrence.
-	 */
-	return kobject_init_and_add(&func->kobj, &klp_ktype_func,
-				    &obj->kobj, "%s,%lu", func->old_name,
-				    func->old_sympos ? func->old_sympos : 1);
-}
-
-/* Arches may override this to finish any remaining arch-specific tasks */
-void __weak arch_klp_init_object_loaded(struct klp_patch *patch,
-					struct klp_object *obj)
-{
-}
-
-/* parts of the initialization that is done only when the object is loaded */
-static int klp_init_object_loaded(struct klp_patch *patch,
-				  struct klp_object *obj)
-{
-	struct klp_func *func;
-	int ret;
-
-	module_disable_ro(patch->mod);
-	ret = klp_write_object_relocations(patch->mod, obj);
-	if (ret) {
-		module_enable_ro(patch->mod, true);
-		return ret;
-	}
-
-	arch_klp_init_object_loaded(patch, obj);
-	module_enable_ro(patch->mod, true);
-
-	klp_for_each_func(obj, func) {
-		ret = klp_find_object_symbol(obj->name, func->old_name,
-					     func->old_sympos,
-					     &func->old_addr);
-		if (ret)
-			return ret;
-
-		ret = kallsyms_lookup_size_offset(func->old_addr,
-						  &func->old_size, NULL);
-		if (!ret) {
-			pr_err("kallsyms size lookup failed for '%s'\n",
-			       func->old_name);
-			return -ENOENT;
-		}
-
-		ret = kallsyms_lookup_size_offset((unsigned long)func->new_func,
-						  &func->new_size, NULL);
-		if (!ret) {
-			pr_err("kallsyms size lookup failed for '%s' replacement\n",
-			       func->old_name);
-			return -ENOENT;
-		}
-	}
-
-	return 0;
-}
-
-static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
-{
-	struct klp_func *func;
-	int ret;
-	const char *name;
-
-	if (!obj->funcs)
-		return -EINVAL;
-
-	obj->patched = false;
-	obj->mod = NULL;
-
-	klp_find_object_module(obj);
-
-	name = klp_is_module(obj) ? obj->name : "vmlinux";
-	ret = kobject_init_and_add(&obj->kobj, &klp_ktype_object,
-				   &patch->kobj, "%s", name);
-	if (ret)
-		return ret;
-
-	klp_for_each_func(obj, func) {
-		ret = klp_init_func(obj, func);
-		if (ret)
-			goto free;
-	}
-
-	if (klp_is_object_loaded(obj)) {
-		ret = klp_init_object_loaded(patch, obj);
-		if (ret)
-			goto free;
-	}
-
-	return 0;
-
-free:
-	klp_free_funcs_limited(obj, func);
-	kobject_put(&obj->kobj);
-	return ret;
-}
-
 static void klp_init_patch_dyn(struct klp_patch *patch)
 {
 	struct klp_object *obj;
@@ -840,7 +1113,7 @@ static int klp_init_patch(struct klp_patch *patch)
 	klp_init_patch_dyn(patch);
 
 	klp_for_each_object(patch, obj) {
-		ret = klp_init_object(patch, obj);
+		ret = klp_init_object(patch, obj, false);
 		if (ret)
 			goto free;
 	}
@@ -886,6 +1159,7 @@ int klp_unregister_patch(struct klp_patch *patch)
 		goto err;
 	}
 
+	klp_remove_nops(patch);
 	klp_free_patch(patch);
 
 	mutex_unlock(&klp_mutex);
@@ -1039,7 +1313,7 @@ void klp_module_going(struct module *mod)
 			if (patch->enabled || patch == klp_transition_patch) {
 				pr_notice("reverting patch '%s' on unloading module '%s'\n",
 					  patch->mod->name, obj->mod->name);
-				klp_unpatch_object(obj);
+				klp_unpatch_object(obj, false);
 			}
 
 			klp_free_object_loaded(obj);
diff --git a/kernel/livepatch/core.h b/kernel/livepatch/core.h
index c74f24c..fe45ee2 100644
--- a/kernel/livepatch/core.h
+++ b/kernel/livepatch/core.h
@@ -1,6 +1,12 @@
 #ifndef _LIVEPATCH_CORE_H
 #define _LIVEPATCH_CORE_H
 
+#include <linux/livepatch.h>
+
 extern struct mutex klp_mutex;
+extern struct list_head klp_patches;
+extern struct list_head klp_replaced_patches;
+
+void klp_remove_nops(struct klp_patch *patch);
 
 #endif /* _LIVEPATCH_CORE_H */
diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
index 52c4e90..aae6e4d 100644
--- a/kernel/livepatch/patch.c
+++ b/kernel/livepatch/patch.c
@@ -117,6 +117,9 @@ static void notrace klp_ftrace_handler(unsigned long ip,
 		}
 	}
 
+	/* if it's a 'nop', then run the original function */
+	if (func->nop)
+		goto unlock;
 	klp_arch_set_pc(regs, (unsigned long)func->new_func);
 unlock:
 	preempt_enable_notrace();
@@ -235,15 +238,21 @@ static int klp_patch_func(struct klp_func *func)
 	return ret;
 }
 
-void klp_unpatch_object(struct klp_object *obj)
+/* if nop is set, only unpatch the nop functions */
+void klp_unpatch_object(struct klp_object *obj, bool nop)
 {
 	struct klp_func *func;
 
-	klp_for_each_func(obj, func)
+	klp_for_each_func(obj, func) {
+		if (nop && !func->nop)
+			continue;
+
 		if (func->patched)
 			klp_unpatch_func(func);
+	}
 
-	obj->patched = false;
+	if (!nop)
+		obj->patched = false;
 }
 
 int klp_patch_object(struct klp_object *obj)
@@ -257,7 +266,7 @@ int klp_patch_object(struct klp_object *obj)
 	klp_for_each_func(obj, func) {
 		ret = klp_patch_func(func);
 		if (ret) {
-			klp_unpatch_object(obj);
+			klp_unpatch_object(obj, false);
 			return ret;
 		}
 	}
@@ -266,11 +275,12 @@ int klp_patch_object(struct klp_object *obj)
 	return 0;
 }
 
-void klp_unpatch_objects(struct klp_patch *patch)
+/* if nop is set, only unpatch the nop functions */
+void klp_unpatch_objects(struct klp_patch *patch, bool nop)
 {
 	struct klp_object *obj;
 
 	klp_for_each_object(patch, obj)
 		if (obj->patched)
-			klp_unpatch_object(obj);
+			klp_unpatch_object(obj, nop);
 }
diff --git a/kernel/livepatch/patch.h b/kernel/livepatch/patch.h
index 0db2271..52cc086 100644
--- a/kernel/livepatch/patch.h
+++ b/kernel/livepatch/patch.h
@@ -27,7 +27,7 @@ struct klp_ops {
 struct klp_ops *klp_find_ops(unsigned long old_addr);
 
 int klp_patch_object(struct klp_object *obj);
-void klp_unpatch_object(struct klp_object *obj);
-void klp_unpatch_objects(struct klp_patch *patch);
+void klp_unpatch_object(struct klp_object *obj, bool nop);
+void klp_unpatch_objects(struct klp_patch *patch, bool nop);
 
 #endif /* _LIVEPATCH_PATCH_H */
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index b004a1f..af6c951 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -81,13 +81,35 @@ static void klp_complete_transition(void)
 	struct task_struct *g, *task;
 	unsigned int cpu;
 	bool immediate_func = false;
+	struct klp_patch *old_patch, *tmp_patch;
+
+	/*
+	 * for replace patches, we disable all previous patches, and replace
+	 * the dynamic no-op functions by removing the ftrace hook.
+	 */
+	if (klp_target_state == KLP_PATCHED && klp_transition_patch->replace) {
+		list_for_each_entry_safe(old_patch, tmp_patch, &klp_patches,
+					 list) {
+			if (old_patch == klp_transition_patch)
+				break;
+
+			klp_unpatch_objects(old_patch, false);
+			old_patch->enabled = false;
+			if (old_patch->replace)
+				list_move(&old_patch->list,
+					  &klp_replaced_patches);
+			else
+				list_del_init(&old_patch->list);
+		}
+		klp_unpatch_objects(klp_transition_patch, true);
+	}
 
 	if (klp_target_state == KLP_UNPATCHED) {
 		/*
 		 * All tasks have transitioned to KLP_UNPATCHED so we can now
 		 * remove the new functions from the func_stack.
 		 */
-		klp_unpatch_objects(klp_transition_patch);
+		klp_unpatch_objects(klp_transition_patch, false);
 
 		/*
 		 * Make sure klp_ftrace_handler() can no longer see functions
@@ -130,6 +152,19 @@ static void klp_complete_transition(void)
 	}
 
 done:
+	/*
+	 * Free the added nops to release the memory and ensure they are
+	 * re-calculated by a subsequent enable. There are 3 cases:
+	 * 1) enable succeeded -> we've called ftrace_shutdown(), which
+	 *			  means ftrace hooks are no longer visible.
+	 * 2) disable after enable -> nothing to free, since freed by previous
+	 *			      enable
+	 * 3) disable after failed enable -> klp_synchronize_transition() has
+	 *				     been called above, so we should be
+	 *				     ok to free as per usual rcu.
+	 */
+	klp_remove_nops(klp_transition_patch);
+
 	klp_target_state = KLP_UNDEFINED;
 	klp_transition_patch = NULL;
 }
@@ -202,10 +237,17 @@ static int klp_check_stack_func(struct klp_func *func,
 		if (klp_target_state == KLP_UNPATCHED) {
 			 /*
 			  * Check for the to-be-unpatched function
-			  * (the func itself).
+			  * (the func itself). If we're unpatching
+			  * a nop, then we're running the original
+			  * function.
 			  */
-			func_addr = (unsigned long)func->new_func;
-			func_size = func->new_size;
+			if (func->nop) {
+				func_addr = (unsigned long)func->old_addr;
+				func_size = func->old_size;
+			} else {
+				func_addr = (unsigned long)func->new_func;
+				func_size = func->new_size;
+			}
 		} else {
 			/*
 			 * Check for the to-be-patched function
-- 
2.6.1
