Date:	Tue, 1 Apr 2008 14:57:14 -0700
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Rodolfo Giometti <giometti@...eenne.com>
Cc:	linux-kernel@...r.kernel.org, dwmw2@...radead.org,
	davej@...hat.com, sam@...nborg.org, greg@...ah.com,
	randy.dunlap@...cle.com
Subject: Re: [PATCH 1/7] LinuxPPS core support.

On Tue, 1 Apr 2008 23:45:22 +0200
Rodolfo Giometti <giometti@...eenne.com> wrote:

> On Tue, Apr 01, 2008 at 01:55:55AM -0700, Andrew Morton wrote:
> > 
> > This can all be handled with suitable locking and refcounting.  The device
> > which is delivering PPS interrupts has a reference on the PPS data
> > structures.  If userspace has PPS open then it also has a reference.
> > 
> > The thread of control which releases the last reference to the PPS data
> > structures also frees them all up.  This may require a schedule_work() if
> > we need to support release-from-interrupt (as it appears that we do), but
> > that's OK - we just need to be able to make the PPS data structures
> > ineligible for new lookups while the schedule_work() is pending.
> > 
> > There should be no need for any thread of control to wait for any other thread
> > of control to do anything.  Get the refcounting right and everything
> > can be done synchronously.
> 
> Here is my solution using get/put functions:

That's looking promising.
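
As an aside, here is a minimal sketch of the release-from-interrupt
pattern described in the quoted text above, assuming a hypothetical
free_work member in struct pps_device and a pps_free_work() handler
(illustrative only, not part of the posted patch):

	/* Hypothetical work handler: runs in process context, so any
	 * teardown that must not run in interrupt context can go here;
	 * kfree() stands in for the full cleanup.  Requires
	 * INIT_WORK(&pps->free_work, pps_free_work) at registration.
	 */
	static void pps_free_work(struct work_struct *work)
	{
		struct pps_device *pps =
			container_of(work, struct pps_device, free_work);

		kfree(pps);
	}

	void pps_put_source(struct pps_device *pps)
	{
		unsigned long flags;

		spin_lock_irqsave(&pps_idr_lock, flags);
		if (!atomic_dec_and_test(&pps->usage)) {
			spin_unlock_irqrestore(&pps_idr_lock, flags);
			return;
		}
		/* Last reference: remove the source from the idr first,
		 * so it is ineligible for new lookups while the work is
		 * pending, then defer the rest to process context.
		 */
		idr_remove(&pps_idr, pps->id);
		spin_unlock_irqrestore(&pps_idr_lock, flags);

		schedule_work(&pps->free_work);
	}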

> diff --git a/drivers/pps/kapi.c b/drivers/pps/kapi.c
> index d75c8c8..61c1569 100644
> --- a/drivers/pps/kapi.c
> +++ b/drivers/pps/kapi.c
> @@ -59,6 +59,62 @@ static void pps_add_offset(struct pps_ktime *ts, struct pps_ktime *offset)
>   * Exported functions
>   */
>  
> +/* pps_get_source - find a PPS source
> + *
> + * 	source: the PPS source ID.
> + *
> + * This function is used to find an already registered PPS source in the
> + * system.
> + *
> + * The function returns NULL if nothing is found; otherwise it returns a
> + * pointer to the PPS source data struct (with its refcount incremented by 1).
> + */
> +
> +struct pps_device *pps_get_source(int source)
> +{
> +	struct pps_device *pps;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&pps_idr_lock, flags);
> +
> +	pps = idr_find(&pps_idr, source);
> +	if (pps != NULL)
> +		atomic_inc(&pps->usage);
> +
> +	spin_unlock_irqrestore(&pps_idr_lock, flags);
> +
> +	return pps;
> +}
> +
> +/* pps_put_source - free the PPS source data
> + *
> + *	pps: a pointer to the PPS source.
> + *
> + * This function drops a reference to the PPS source and frees the PPS
> + * data struct once the refcount reaches 0.
> + */
> +
> +void pps_put_source(struct pps_device *pps)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&pps_idr_lock, flags);
> +
> +        BUG_ON(atomic_read(&pps->usage) == 0);

Please don't forget to use checkpatch.
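
For reference, checkpatch ships in the kernel tree; an invocation like
the following (patch filename hypothetical) would have flagged the
space-indented line above:

	./scripts/checkpatch.pl 0001-LinuxPPS-core-support.patch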

> +	if (!atomic_dec_and_test(&pps->usage))
> +		goto exit;
> +
> +	/* No more references to the PPS source remain, so we can safely
> +	 * remove the PPS data struct.
> +	 */
> +	idr_remove(&pps_idr, pps->id);
> +
> +	kfree(pps);
> +
> +exit:
> +	spin_unlock_irqrestore(&pps_idr_lock, flags);
> +}

It'd be slightly more efficient to move the kfree outside the spinlock:

	if (!atomic_dec_and_test(&pps->usage)) {
		pps = NULL;
		goto exit;
	}

	/* No more references to the PPS source remain, so we can safely
	 * remove the PPS data struct.
	 */
	idr_remove(&pps_idr, pps->id);
exit:
	spin_unlock_irqrestore(&pps_idr_lock, flags);
	kfree(pps);
}
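
(Note that kfree(NULL) is a no-op, which is what makes the pps = NULL
assignment above safe.)

For illustration, a sketch of how a caller would pair the two functions
(hypothetical caller, not from the patch):

	struct pps_device *pps = pps_get_source(source);

	if (pps == NULL)
		return -ENODEV;		/* no such source registered */

	/* ... use pps, e.g. deliver a PPS event to it ... */

	pps_put_source(pps);	/* may free pps if this drops the last ref */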

