Date:   Thu, 12 Dec 2019 15:32:25 +1030
From:   "Andrew Jeffery" <andrew@...id.au>
To:     "Eddie James" <eajames@...ux.ibm.com>, linux-kernel@...r.kernel.org
Cc:     devicetree@...r.kernel.org, "Jason Cooper" <jason@...edaemon.net>,
        linux-aspeed@...ts.ozlabs.org, "Marc Zyngier" <maz@...nel.org>,
        "Rob Herring" <robh+dt@...nel.org>, tglx@...utronix.de,
        mark.rutland@....com, "Joel Stanley" <joel@....id.au>
Subject: Re: [PATCH v2 07/12] drivers/soc: xdma: Add user interface



On Thu, 12 Dec 2019, at 07:13, Eddie James wrote:
> 
> On 12/10/19 9:48 PM, Andrew Jeffery wrote:
> >
> > On Fri, 6 Dec 2019, at 03:45, Eddie James wrote:
> >> +		}
> >> +	} else {
> >> +		mutex_lock(&ctx->file_lock);
> >> +
> >> +		rc = wait_event_interruptible(ctx->wait, !ctx->current_client);
> >> +		if (rc) {
> >> +			mutex_unlock(&ctx->file_lock);
> >> +			return -EINTR;
> >> +		}
> >> +	}
> >> +
> >> +	aspeed_xdma_start(ctx, &op, ctx->vga_phys + offs, client);
> >> +
> >> +	mutex_unlock(&ctx->file_lock);
> > You've used file_lock here to protect aspeed_xdma_start() but start_lock
> > above to protect aspeed_xdma_reset(), so it seems one client can disrupt
> > another by resetting the engine while a DMA is in progress?
> 
> 
> That's correct, that is the intention. If a transfer hangs, another
> client needs to be able to reset the engine and clear the blocked transfer.

Ah. Can we log a noisy warning when resetting the engine while a DMA is
in progress, then? I'd hate to debug this otherwise. The more information
we can log about both clients, the better.
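
Something like the below in the reset path would do it (rough sketch only;
I'm assuming the context carries a struct device pointer as ctx->dev, and
whatever identifying fields the client struct actually has would be more
useful than the raw pointer):

	/* Sketch: shout if we're about to reset underneath a live transfer. */
	if (ctx->current_client)
		dev_warn(ctx->dev,
			 "resetting XDMA engine with client %p transfer in flight\n",
			 ctx->current_client);

	aspeed_xdma_reset(ctx);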

We still need to make sure we're using consistent locking, even if we wind
up with nested locking.
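
For instance (again just a sketch, using the lock names from the patch),
taking start_lock around the start call as well, nested inside file_lock,
would at least serialise starting a transfer against a reset:

	mutex_lock(&ctx->file_lock);

	/* ... wait for ctx->current_client to clear, as above ... */

	mutex_lock(&ctx->start_lock);
	aspeed_xdma_start(ctx, &op, ctx->vga_phys + offs, client);
	mutex_unlock(&ctx->start_lock);

	mutex_unlock(&ctx->file_lock);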

> >> +
> >> +static int aspeed_xdma_release(struct inode *inode, struct file *file)
> >> +{
> >> +	struct aspeed_xdma_client *client = file->private_data;
> >> +
> >> +	if (client->ctx->current_client == client)
> >> +		client->ctx->current_client = NULL;
> > Shouldn't we also cancel the DMA op? This seems like a DoS risk: set up
> > a non-blocking, large downstream transfer and then close the client. It also
> > risks scribbling on memory we no longer own, given we don't cancel or wait
> > for completion in the vm close callback?
> 
> 
> Right, better to wait for completion. There's no way to cancel a transfer.

Right, that's handy context.
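
In that case something along these lines in release might be enough (rough
sketch; 'in_progress' is a stand-in for whatever state the IRQ handler
clears once the engine finishes):

	static int aspeed_xdma_release(struct inode *inode, struct file *file)
	{
		struct aspeed_xdma_client *client = file->private_data;

		if (client->ctx->current_client == client) {
			/*
			 * There's no cancel path, so block until the
			 * in-flight transfer completes before dropping
			 * our claim on the engine.
			 */
			wait_event(client->ctx->wait, !client->ctx->in_progress);
			client->ctx->current_client = NULL;
		}

		return 0;
	}

The vm close callback would want the same treatment, otherwise we're back
to scribbling on memory we no longer own.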

Cheers,

Andrew
