Date:	Wed, 03 Dec 2008 10:25:16 -0800
From:	Dave Hansen <dave@...ux.vnet.ibm.com>
To:	Mimi Zohar <zohar@...ux.vnet.ibm.com>
Cc:	linux-kernel@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	James Morris <jmorris@...ei.org>,
	Christoph Hellwig <hch@...radead.org>,
	Al Viro <viro@...IV.linux.org.uk>,
	David Safford <safford@...son.ibm.com>,
	Serge Hallyn <serue@...ux.vnet.ibm.com>,
	Mimi Zohar <zohar@...ibm.com>
Subject: Re: [PATCH 2/6] integrity: Linux Integrity Module(LIM)

On Wed, 2008-12-03 at 13:15 -0500, Mimi Zohar wrote:
> > > +int register_integrity(const struct integrity_operations *ops)
> > > +{
> > > +	if (integrity_ops != NULL)
> > > +		return -EAGAIN;
> > > +	integrity_ops = ops;
> > > +	return 0;
> > > +}
> > 
> > Is there some locking to keep this from racing and two integrity modules
> > both thinking they succeeded?  Does it matter?
> 
> Wouldn't this be a Kconfig issue?

If it is a Kconfig issue, then we don't really need all the function
pointers and ops things.  Just pick which integrity infrastructure you
want at compile time.  You also wouldn't need that register_integrity()
/integrity_ops machinery at all.
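
Just for illustration (a sketch, not code from your patch): if the
runtime registration hook does stay, closing that race only takes a
lock around the ops pointer:

	static DEFINE_MUTEX(integrity_ops_lock);
	static const struct integrity_operations *integrity_ops;

	int register_integrity(const struct integrity_operations *ops)
	{
		int ret = 0;

		mutex_lock(&integrity_ops_lock);
		if (integrity_ops)
			ret = -EAGAIN;	/* somebody already registered */
		else
			integrity_ops = ops;
		mutex_unlock(&integrity_ops_lock);
		return ret;
	}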

> > > +int integrity_register_template(const char *template_name,
> > > +				const struct template_operations *template_ops)
> > > +{
...
> > > +	mutex_lock(&integrity_templates_mutex);
> > > +	list_add_rcu(&entry->template, &integrity_templates);
> > > +	mutex_unlock(&integrity_templates_mutex);
> > > +	synchronize_rcu();
> > 
> > What's the synchronize_rcu() for here?
> 
> good question.

So, you'll go investigate this, fix it up in a future patch and if you
decide it needs to be kept, it will be commented, right?
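
For the add path itself, here's roughly what I'd expect (sketch only,
helper name made up, entry setup elided): list_add_rcu() under the
mutex is already enough to publish the entry, so there's nothing for a
synchronize_rcu() to wait for:

	static void integrity_publish_template(struct template_list_entry *entry)
	{
		mutex_lock(&integrity_templates_mutex);
		list_add_rcu(&entry->template, &integrity_templates);
		mutex_unlock(&integrity_templates_mutex);
		/*
		 * No synchronize_rcu(): readers see either the old
		 * list or the new entry, and both are fine.
		 */
	}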

> > > +int integrity_unregister_template(const char *template_name)
> > > +{
> > > +	struct template_list_entry *entry;
> > > +
> > > +	mutex_lock(&integrity_templates_mutex);
> > > +	list_for_each_entry(entry, &integrity_templates, template) {
> > > +		if (strncmp(entry->template_name, template_name,
> > > +			    strlen(entry->template_name)) == 0) {
> > > +			list_del_rcu(&entry->template);
> > > +			mutex_unlock(&integrity_templates_mutex);
> > > +			synchronize_rcu();
> > > +			kref_put(&entry->refcount, template_release);
> > > +			return 0;
> > > +		}
> > > +	}
> > > +	mutex_unlock(&integrity_templates_mutex);
> > > +	return -EINVAL;
> > > +}
> > > +EXPORT_SYMBOL_GPL(integrity_unregister_template);
> > 
> > Is this frequently called?  If so, it might be better to use
> > call_rcu(). 
> 
> I don't expect that Templates would be added/removed frequently,
> but I could be wrong.  Only time will tell.

synchronize_rcu() is an "expensive" operation in wall clock time.  It
won't cost you a lot of CPU or anything, but it can block this code
for a full RCU grace period.

What you've said there is:

1. delete the template from the list
2. sleep until no one can possibly see the template any more
3. drop the reference and free the template

call_rcu() would, instead, queue the template (and its kref_put()) on
a callback list.  Those callbacks get run in a batched manner once a
grace period has elapsed.  The batching speeds things up, and it also
doesn't block in the caller.
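
Concretely, assuming struct template_list_entry grows a struct
rcu_head (the "rcu" field below is made up), the unregister path could
look something like:

	static void template_free_rcu(struct rcu_head *head)
	{
		struct template_list_entry *entry =
			container_of(head, struct template_list_entry, rcu);

		kref_put(&entry->refcount, template_release);
	}

	/* ...and in integrity_unregister_template(): */
		list_del_rcu(&entry->template);
		mutex_unlock(&integrity_templates_mutex);
		/* deferred free; the caller doesn't block */
		call_rcu(&entry->rcu, template_free_rcu);
		return 0;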

I do see some other synchronize_rcu() users like this in the kernel,
but I have a suspicion that there's a better way to do it.  Could you
have Paul McKenney take a quick look?  Dipankar or Josh Triplett also
seem to love looking at RCU. :)

-- Dave
