Message-ID: <8D983423E7EDF846BB3056827B8CC5D1499FC7D9@corpmail2.na.ads.idt.com>
Date:	Thu, 4 Oct 2012 19:08:27 +0000
From:	"Bounine, Alexandre" <Alexandre.Bounine@....com>
To:	Andrew Morton <akpm@...ux-foundation.org>
CC:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linuxppc-dev@...ts.ozlabs.org" <linuxppc-dev@...ts.ozlabs.org>,
	Matt Porter <mporter@...nel.crashing.org>,
	Li Yang <leoli@...escale.com>
Subject: RE: [PATCH 3/5] rapidio: run discovery as an asynchronous process

On Wed, October 03, 2012 6:30 PM
Andrew Morton <akpm@...ux-foundation.org> wrote:
> 
> On Wed,  3 Oct 2012 15:18:41 -0400
> Alexandre Bounine <alexandre.bounine@....com> wrote:
> 
> > ...
> >
> > +static void __devinit disc_work_handler(struct work_struct *_work)
> > +{
> > +	struct rio_disc_work *work = container_of(_work,
> > +						  struct rio_disc_work, work);
> 
> There's a nice simple way to avoid such ugliness:
> 
> --- a/drivers/rapidio/rio.c~rapidio-run-discovery-as-an-asynchronous-
> process-fix
> +++ a/drivers/rapidio/rio.c
> @@ -1269,9 +1269,9 @@ struct rio_disc_work {
> 
>  static void __devinit disc_work_handler(struct work_struct *_work)
>  {
> -	struct rio_disc_work *work = container_of(_work,
> -						  struct rio_disc_work, work);
> +	struct rio_disc_work *work;
> 
> +	work = container_of(_work, struct rio_disc_work, work);
>  	pr_debug("RIO: discovery work for mport %d %s\n",
>  		 work->mport->id, work->mport->name);
>  	rio_disc_mport(work->mport);
> _
> 

Thank you for the fix. Will avoid that ugliness in the future.

> > +	pr_debug("RIO: discovery work for mport %d %s\n",
> > +		 work->mport->id, work->mport->name);
> > +	rio_disc_mport(work->mport);
> > +
> > +	kfree(work);
> > +}
> > +
> >  int __devinit rio_init_mports(void)
> >  {
> >  	struct rio_mport *port;
> > +	struct rio_disc_work *work;
> > +	int no_disc = 0;
> >
> >  	list_for_each_entry(port, &rio_mports, node) {
> >  		if (port->host_deviceid >= 0)
> >  			rio_enum_mport(port);
> > -		else
> > -			rio_disc_mport(port);
> > +		else if (!no_disc) {
> > +			if (!rio_wq) {
> > +				rio_wq = alloc_workqueue("riodisc", 0, 0);
> > +				if (!rio_wq) {
> > +					pr_err("RIO: unable to allocate rio_wq\n");
> > +					no_disc = 1;
> > +					continue;
> > +				}
> > +			}
> > +
> > +			work = kzalloc(sizeof *work, GFP_KERNEL);
> > +			if (!work) {
> > +				pr_err("RIO: no memory for work struct\n");
> > +				no_disc = 1;
> > +				continue;
> > +			}
> > +
> > +			work->mport = port;
> > +			INIT_WORK(&work->work, disc_work_handler);
> > +			queue_work(rio_wq, &work->work);
> > +		}
> > +	}
> 
> I'm having a lot of trouble with `no_disc'.  Afaict, what it does is
> cease running async discovery for any remaining devices if the
> workqueue allocation failed (vaguely reasonable) or if the allocation
> of a single work item failed (incomprehensible).
> 
> But if we don't run discovery, the subsystem is permanently busted for
> at least some devices, isn't it?

That is correct. We are considering ways to restart the discovery
process later, but such a mechanism is not available yet.

> 
> And this code is basically untestable unless the programmer does
> deliberate fault injection, which makes it pretty much unmaintainable.
> 
> So...  if I haven't totally misunderstood, I suggest a rethink is in
> order?
>

I will review and simplify. Most likely I will allocate all required
resources ahead of the port list scan. Simple and safe.
 
> > +	if (rio_wq) {
> > +		pr_debug("RIO: flush discovery workqueue\n");
> > +		flush_workqueue(rio_wq);
> > +		pr_debug("RIO: flush discovery workqueue finished\n");
> > +		destroy_workqueue(rio_wq);
> >  	}
