Message-ID: <20170718144745.GG32632@pathway.suse.cz>
Date:   Tue, 18 Jul 2017 16:47:45 +0200
From:   Petr Mladek <pmladek@...e.com>
To:     Joe Lawrence <joe.lawrence@...hat.com>
Cc:     live-patching@...r.kernel.org, linux-kernel@...r.kernel.org,
        Josh Poimboeuf <jpoimboe@...hat.com>,
        Jessica Yu <jeyu@...hat.com>, Jiri Kosina <jikos@...nel.org>,
        Miroslav Benes <mbenes@...e.cz>
Subject: Re: [PATCH v2 2/2] livepatch: add shadow variable sample programs

On Wed 2017-06-28 11:37:27, Joe Lawrence wrote:
> diff --git a/samples/livepatch/livepatch-shadow-mod.c b/samples/livepatch/livepatch-shadow-mod.c
> new file mode 100644
> index 000000000000..423f4b7b0adb
> --- /dev/null
> +++ b/samples/livepatch/livepatch-shadow-mod.c
> + * Usage
> + * -----
> + *
> + * Fix the memory leak
> + * -------------------
> + *
> + * Extend functionality
> + * --------------------

When I took a first quick look, I had trouble understanding that
these two sections were still part of the Usage section. Also,
the commands showing how to enable the modules were hidden deep in
the analysis of the expected output.

I wonder if it would make sense to move most of the information
into the respective module sources. I actually missed having some
more information in those modules. The few lines that are there
were hard to spot after the long copyright/license blob.

> +
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/sched.h>
> +#include <linux/slab.h>
> +#include <linux/stat.h>
> +#include <linux/workqueue.h>
> +
> +MODULE_LICENSE("GPL");
> +MODULE_AUTHOR("Joe Lawrence <joe.lawrence@...hat.com>");
> +MODULE_DESCRIPTION("Buggy module for shadow variable demo");
> +
> +#define T1_PERIOD 1			/* allocator thread */
> +#define T2_PERIOD (3 * T1_PERIOD)	/* cleanup thread */
> +
> +LIST_HEAD(dummy_list);
> +DEFINE_MUTEX(dummy_list_mutex);
> +
> +struct dummy {
> +	struct list_head list;
> +	unsigned long jiffies_expire;
> +};
> +
> +noinline struct dummy *dummy_alloc(void)
> +{
> +	struct dummy *d;
> +	void *leak;
> +
> +	d = kzalloc(sizeof(*d), GFP_KERNEL);
> +	if (!d)
> +		return NULL;
> +
> +	/* Dummies live long enough to see a few t2 instances */
> +	d->jiffies_expire = jiffies + 1000 * 4 * T2_PERIOD;
> +
> +	/* Oops, forgot to save leak! */
> +	leak = kzalloc(sizeof(int), GFP_KERNEL);
> +
> +	pr_info("%s: dummy @ %p, expires @ %lx\n",
> +		__func__, d, d->jiffies_expire);
> +
> +	return d;
> +}
> +
> +noinline void dummy_free(struct dummy *d)
> +{
> +	pr_info("%s: dummy @ %p, expired = %lx\n",
> +		__func__, d, d->jiffies_expire);
> +
> +	kfree(d);
> +}
> +
> +noinline bool dummy_check(struct dummy *d, unsigned long jiffies)
> +{
> +	return time_after(jiffies, d->jiffies_expire);
> +}
> +
> +/*
> + * T1: alloc_thread allocates new dummy structures, allocates additional
> + *     memory, aptly named "leak", but doesn't keep permanent record of it.
> + */
> +struct workqueue_struct *alloc_wq;

A custom workqueue is needed only for special purposes.
IMHO, the standard system workqueue is perfectly fine
in our case. AFAIK, it is able to spawn/run about
256 worker threads and process that many different
work items in parallel.
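
Just to illustrate the idea, a rough (untested) sketch using the
names from your patch:

	static void alloc_thread(struct work_struct *work);
	static void cleanup_thread(struct work_struct *work);

	static DECLARE_DELAYED_WORK(alloc_dwork, alloc_thread);
	static DECLARE_DELAYED_WORK(cleanup_dwork, cleanup_thread);

Then no alloc_wq/cleanup_wq and no create_singlethread_workqueue()
or destroy_workqueue() calls would be needed at all.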

> +struct delayed_work alloc_dwork;
> +static void alloc_thread(struct work_struct *work)

The suffix "_thread" is misleading. I think that alloc_work_func()
would be a more descriptive name.


> +{
> +	struct dummy *d;
> +
> +	d = dummy_alloc();
> +	if (!d)
> +		return;
> +
> +	mutex_lock(&dummy_list_mutex);
> +	list_add(&d->list, &dummy_list);
> +	mutex_unlock(&dummy_list_mutex);
> +
> +	queue_delayed_work(alloc_wq, &alloc_dwork,
> +		msecs_to_jiffies(1000 * T1_PERIOD));
> +}
> +
> +static int livepatch_shadow_mod_init(void)
> +{
> +	alloc_wq = create_singlethread_workqueue("klp_demo_alloc_wq");
> +	if (!alloc_wq)
> +		return -1;
> +
> +	cleanup_wq = create_singlethread_workqueue("klp_demo_cleanup_wq");
> +	if (!cleanup_wq)
> +		goto exit_free_alloc;
> +
> +	INIT_DELAYED_WORK(&alloc_dwork, alloc_thread);
> +	queue_delayed_work(alloc_wq, &alloc_dwork, 1000 * T1_PERIOD);
> +
> +	INIT_DELAYED_WORK(&cleanup_dwork, cleanup_thread);
> +	queue_delayed_work(cleanup_wq, &cleanup_dwork,
> +		msecs_to_jiffies(1000 * T2_PERIOD));

If you use the system workqueue and DECLARE_DELAYED_WORK, you might
reduce this to:

       schedule_delayed_work(&alloc_dwork, 1000 * T1_PERIOD);
       schedule_delayed_work(&cleanup_dwork, msecs_to_jiffies(1000 * T2_PERIOD));
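
The re-queuing inside the work functions would then change the same
way, e.g. (again, an untested sketch):

	schedule_delayed_work(&alloc_dwork,
		msecs_to_jiffies(1000 * T1_PERIOD));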


> +	return 0;
> +
> +exit_free_alloc:
> +	destroy_workqueue(alloc_wq);
> +
> +	return -1;
> +}
> +
> +static void livepatch_shadow_mod_exit(void)
> +{
> +	struct dummy *d, *tmp;
> +
> +	/* Cleanup T1 */
> +	if (!cancel_delayed_work(&alloc_dwork))
> +		flush_workqueue(alloc_wq);

You will get the same result with

	cancel_delayed_work_sync(&alloc_dwork);

I am sorry, I have spent too much time working on the kthread worker
API and could not help myself. Also, Tejun would shame me if I did
not suggest this ;-)
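
With the system workqueue, the whole teardown of the delayed works
might then be just something like (untested sketch, with cleanup_dwork
handled the same way):

	/* Stop the self-re-arming works and wait for running instances */
	cancel_delayed_work_sync(&alloc_dwork);
	cancel_delayed_work_sync(&cleanup_dwork);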

Otherwise I like the examples.

Best Regards,
Petr
