Message-ID: <20170718191500.vjdjze6fpfz4us3s@redhat.com>
Date:   Tue, 18 Jul 2017 15:15:00 -0400
From:   Joe Lawrence <joe.lawrence@...hat.com>
To:     Petr Mladek <pmladek@...e.com>
Cc:     live-patching@...r.kernel.org, linux-kernel@...r.kernel.org,
        Josh Poimboeuf <jpoimboe@...hat.com>,
        Jessica Yu <jeyu@...hat.com>, Jiri Kosina <jikos@...nel.org>,
        Miroslav Benes <mbenes@...e.cz>
Subject: Re: [PATCH v2 2/2] livepatch: add shadow variable sample programs

On Tue, Jul 18, 2017 at 04:47:45PM +0200, Petr Mladek wrote:
> 
> On Wed 2017-06-28 11:37:27, Joe Lawrence wrote:
> > diff --git a/samples/livepatch/livepatch-shadow-mod.c b/samples/livepatch/livepatch-shadow-mod.c
> > new file mode 100644
> > index 000000000000..423f4b7b0adb
> > --- /dev/null
> > +++ b/samples/livepatch/livepatch-shadow-mod.c
> > + * Usage
> > + * -----
> > + *
> > + * Fix the memory leak
> > + * -------------------
> > + *
> > + * Extend functionality
> > + * --------------------
> 
> On my first quick look, I had trouble understanding that
> these two sections were still part of the Usage section.

I could double underline "Usage" and keep the single underlines for the
"Fix the memory leak" and "Extend functionality" sections if that helps.
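Something like:

```
/*
 * Usage
 * =====
 *
 * Fix the memory leak
 * -------------------
 *
 * Extend functionality
 * --------------------
 */
```

(Using '=' for the top-level underline rather than literally doubling the
'-' lines, but either style would get the hierarchy across.)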

>                                                          Also
> the commands for enabling the modules were hidden deep in the
> expected-output analysis.

Yeah, v2's examples were hard to summarize and describe through
comments.  The code is pretty simple, but the devil is in the timing
details, and the log output seemed the best way to illustrate the fixes.

Do you think the log entries are worth copying into these comments?  I
could simply describe the effect and let readers generate their own
logs if they so desire.

> I wonder if it would make sense to move most of the information
> into the respective module sources. 

Are you suggesting that it would be clearer to distribute the
log + comments to each individual livepatch-shadow-*.c module file?

>                                     I actually missed some
> more info in those modules. Those few lines were hard to spot
> after the long copyright/license blob.

Would a heading like this help separate the legalese from the patch
description?

/* [ ... GPL, copyrights, etc ... ]
 *
 * Module Description
 * ==================
 * [...]
 */
 
> > +
> > +#include <linux/kernel.h>
> > +#include <linux/module.h>
> > +#include <linux/sched.h>
> > +#include <linux/slab.h>
> > +#include <linux/stat.h>
> > +#include <linux/workqueue.h>
> > +
> > +MODULE_LICENSE("GPL");
> > +MODULE_AUTHOR("Joe Lawrence <joe.lawrence@...hat.com>");
> > +MODULE_DESCRIPTION("Buggy module for shadow variable demo");
> > +
> > +#define T1_PERIOD 1			/* allocator thread */
> > +#define T2_PERIOD (3 * T1_PERIOD)	/* cleanup thread */
> > +
> > +LIST_HEAD(dummy_list);
> > +DEFINE_MUTEX(dummy_list_mutex);
> > +
> > +struct dummy {
> > +	struct list_head list;
> > +	unsigned long jiffies_expire;
> > +};
> > +
> > +noinline struct dummy *dummy_alloc(void)
> > +{
> > +	struct dummy *d;
> > +	void *leak;
> > +
> > +	d = kzalloc(sizeof(*d), GFP_KERNEL);
> > +	if (!d)
> > +		return NULL;
> > +
> > +	/* Dummies live long enough to see a few t2 instances */
> > +	d->jiffies_expire = jiffies + 1000 * 4 * T2_PERIOD;
> > +
> > +	/* Oops, forgot to save leak! */
> > +	leak = kzalloc(sizeof(int), GFP_KERNEL);
> > +
> > +	pr_info("%s: dummy @ %p, expires @ %lx\n",
> > +		__func__, d, d->jiffies_expire);
> > +
> > +	return d;
> > +}
> > +
> > +noinline void dummy_free(struct dummy *d)
> > +{
> > +	pr_info("%s: dummy @ %p, expired = %lx\n",
> > +		__func__, d, d->jiffies_expire);
> > +
> > +	kfree(d);
> > +}
> > +
> > +noinline bool dummy_check(struct dummy *d, unsigned long jiffies)
> > +{
> > +	return time_after(jiffies, d->jiffies_expire);
> > +}
> > +
> > +/*
> > + * T1: alloc_thread allocates new dummy structures, allocates additional
> > + *     memory, aptly named "leak", but doesn't keep permanent record of it.
> > + */
> > +struct workqueue_struct *alloc_wq;
> 
> A custom workqueue is needed only for special purposes.
> IMHO, the standard system workqueue is perfectly fine
> in our case. AFAIK, it is able to spawn/run about
> 256 worker threads and process that many different
> work items in parallel.

No problem.  This example executes infrequently and obviously isn't high
priority or anything.

> > +struct delayed_work alloc_dwork;
> > +static void alloc_thread(struct work_struct *work)
> 
> The suffix "_thread" is misleading. I think that alloc_work_func()
> is the most descriptive name.
> 

Noted for v3.

> > +{
> > +	struct dummy *d;
> > +
> > +	d = dummy_alloc();
> > +	if (!d)
> > +		return;
> > +
> > +	mutex_lock(&dummy_list_mutex);
> > +	list_add(&d->list, &dummy_list);
> > +	mutex_unlock(&dummy_list_mutex);
> > +
> > +	queue_delayed_work(alloc_wq, &alloc_dwork,
> > +		msecs_to_jiffies(1000 * T1_PERIOD));
> > +}
> > +
> > +static int livepatch_shadow_mod_init(void)
> > +{
> > +	alloc_wq = create_singlethread_workqueue("klp_demo_alloc_wq");
> > +	if (!alloc_wq)
> > +		return -1;
> > +
> > +	cleanup_wq = create_singlethread_workqueue("klp_demo_cleanup_wq");
> > +	if (!cleanup_wq)
> > +		goto exit_free_alloc;
> > +
> > +	INIT_DELAYED_WORK(&alloc_dwork, alloc_thread);
> > +	queue_delayed_work(alloc_wq, &alloc_dwork, 1000 * T1_PERIOD);
> > +
> > +	INIT_DELAYED_WORK(&cleanup_dwork, cleanup_thread);
> > +	queue_delayed_work(cleanup_wq, &cleanup_dwork,
> > +		msecs_to_jiffies(1000 * T2_PERIOD));
> 
> If you use the system workqueue and DECLARE_DELAYED_WORK, you might
> reduce this to:
> 
>        schedule_delayed_work(&alloc_dwork, 1000 * T1_PERIOD);
>        schedule_delayed_work(&cleanup_dwork, msecs_to_jiffies(1000 * T2_PERIOD));

Good cleanup for v3.
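
Putting those together, v3's setup could boil down to something like
this (just a sketch; it assumes the alloc_work_func rename you
suggested, plus a hypothetical cleanup_work_func counterpart, and also
folds in the msecs_to_jiffies conversion that v2's alloc_dwork queueing
missed):

```
/* Delayed works on the system workqueue; no custom wq needed */
static DECLARE_DELAYED_WORK(alloc_dwork, alloc_work_func);
static DECLARE_DELAYED_WORK(cleanup_dwork, cleanup_work_func);

static int livepatch_shadow_mod_init(void)
{
	schedule_delayed_work(&alloc_dwork,
		msecs_to_jiffies(1000 * T1_PERIOD));
	schedule_delayed_work(&cleanup_dwork,
		msecs_to_jiffies(1000 * T2_PERIOD));
	return 0;
}
```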

> > +	return 0;
> > +
> > +exit_free_alloc:
> > +	destroy_workqueue(alloc_wq);
> > +
> > +	return -1;
> > +}
> > +
> > +static void livepatch_shadow_mod_exit(void)
> > +{
> > +	struct dummy *d, *tmp;
> > +
> > +	/* Cleanup T1 */
> > +	if (!cancel_delayed_work(&alloc_dwork))
> > +		flush_workqueue(alloc_wq);
> 
> You will get the same with
> 
> 	cancel_delayed_work_sync(&alloc_dwork);
> 
> I am sorry, I spent too much time working on the kthread worker API
> and could not help myself. Also Tejun would put a shame
> on me if I did not suggest this ;-)

No worries, thanks for the workqueue tips, especially this last
one.  I'll incorporate them all in the next version.
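
For the exit path, then, I'm picturing something like this (again just
a sketch, assuming the system workqueue so the custom-workqueue
teardown disappears):

```
static void livepatch_shadow_mod_exit(void)
{
	struct dummy *d, *tmp;

	/* Stop both works and wait for any in-flight instances */
	cancel_delayed_work_sync(&alloc_dwork);
	cancel_delayed_work_sync(&cleanup_dwork);

	/* Free any remaining dummies */
	mutex_lock(&dummy_list_mutex);
	list_for_each_entry_safe(d, tmp, &dummy_list, list) {
		list_del(&d->list);
		dummy_free(d);
	}
	mutex_unlock(&dummy_list_mutex);
}
```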
 
> Otherwise I like the examples.

Funny that I had anticipated head-scratching over the convoluted example
instead of my workqueue usage :)

-- Joe
