Message-ID: <58087B71.2040509@linux.intel.com>
Date:   Thu, 20 Oct 2016 16:08:17 +0800
From:   Lu Baolu <baolu.lu@...ux.intel.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Mathias Nyman <mathias.nyman@...ux.intel.com>,
        Ingo Molnar <mingo@...hat.com>, linux-usb@...r.kernel.org,
        x86@...nel.org, linux-kernel@...r.kernel.org
Subject: Re: [RESEND PATCH v2 1/4] usb: dbc: early driver for xhci debug
 capability

Hi Peter,

Thanks for your comments.

On 10/19/2016 09:09 PM, Peter Zijlstra wrote:
> On Wed, Oct 19, 2016 at 08:18:22AM +0800, Lu Baolu wrote:
>> +++ b/drivers/usb/early/xhci-dbc.c
>> +static int xdbc_bulk_write(const char *bytes, int size)
>> +{
>> +	unsigned long flags;
>> +	int ret, timeout = 0;
>> +
>> +	spin_lock_irqsave(&xdbc.lock, flags);
> Yikes!!
>
> So how is this supposed to work from NMI context and the like?
>
> (also, at the very least, that should be a raw_spinlock_t)

Totally agree. We should make it a raw_spinlock_t.
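
Something along these lines (an untested sketch against the current
patch; raw_spin_lock_irqsave() and friends are the standard kernel
APIs):

-	spinlock_t		lock;
+	raw_spinlock_t		lock;
 ...
-	spin_lock_irqsave(&xdbc.lock, flags);
+	raw_spin_lock_irqsave(&xdbc.lock, flags);
 ...
-	spin_unlock_irqrestore(&xdbc.lock, flags);
+	raw_spin_unlock_irqrestore(&xdbc.lock, flags);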

>
> What do you need the spinlock for? Afaict this is a 'simple' polling
> event handling loop on MMIO, right?

Not only for polling registers in MMIO, but also for handling the
events in the event ring. The event ring is a memory block allocated
during hardware initialization; its address is programmed into an
MMIO register.

There is a single event ring for all events (read completion,
write completion, port status change, transfer errors, etc.).
The debugging hardware doesn't support interrupts, so software
has to poll the event ring whenever it needs to.

Event ring polling happens at least in the write interface (to make
sure the previous transfer has completed) and in a worker (to check
for read events and other conditions). That's why I need a spinlock
here; a sketch of what both paths do is below.
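
What both contexts do under the lock is conceptually like this (a
simplified sketch; the ring fields and helper names here are
illustrative, not the exact patch code):

	/*
	 * Drain the event ring from the current dequeue position
	 * until the producer cycle bit no longer matches.
	 */
	static void xdbc_handle_events(void)
	{
		struct xdbc_trb *evt = xdbc.evt_ring.dequeue;

		while ((le32_to_cpu(evt->field[3]) & TRB_CYCLE) ==
		       xdbc.evt_ring.cycle_state) {
			xdbc_handle_trb(evt);	/* completion, port change, ... */
			evt = xdbc_advance_dequeue(&xdbc.evt_ring);
		}
	}

Both xdbc_bulk_write() and the worker call this, so they must be
serialized by xdbc.lock.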

>
> All we really need to guarantee is that there's only a single CPU trying
> to do that at any one time.
>
> Wouldn't something like:
>
>   https://marc.info/?l=linux-kernel&m=147681099108509&w=2
>
> already take care of that? Then you can drop the lock and things will
> work 'nested'.
>
>> +
>> +	xdbc_handle_events();
>> +
>> +	/* Check completion of the previous request. */
>> +	while (xdbc.flags & XDBC_FLAGS_OUT_PROCESS) {
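>> +		/* Give up after ~1s; drop the lock around each 100us delay. */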
>> +		if (timeout > 1000000)
>> +			break;
>> +
>> +		spin_unlock_irqrestore(&xdbc.lock, flags);
>> +		xdbc_delay(100);
>> +		spin_lock_irqsave(&xdbc.lock, flags);
>> +		timeout += 100;
>> +
>> +		xdbc_handle_events();
>> +	}
>> +
>> +	if (xdbc.flags & XDBC_FLAGS_OUT_PROCESS) {
>> +		spin_unlock_irqrestore(&xdbc.lock, flags);
>> +
>> +		/*
>> +		 * Oops, hardware wasn't able to complete the
>> +		 * previous transfer.
>> +		 */
>> +		xdbc_trace("oops: previous transfer not completed yet\n");
>> +
>> +		return -ETIMEDOUT;
>> +	}
>> +
>> +	ret = xdbc_bulk_transfer((void *)bytes, size, false);
>> +
>> +	spin_unlock_irqrestore(&xdbc.lock, flags);
>> +
>> +	return ret;
>> +}
>> +
>> +static void early_xdbc_write(struct console *con, const char *str, u32 n)
>> +{
>> +	int chunk, ret;
>> +	static char buf[XDBC_MAX_PACKET];
>> +	int use_cr = 0;
>> +
>> +	if (!xdbc.xdbc_reg)
>> +		return;
>> +	memset(buf, 0, XDBC_MAX_PACKET);
>> +	while (n > 0) {
>> +		for (chunk = 0; chunk < XDBC_MAX_PACKET && n > 0;
>> +				str++, chunk++, n--) {
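>> +			/* Emit '\r' before '\n', then back up so the '\n' is copied next. */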
>> +			if (!use_cr && *str == '\n') {
>> +				use_cr = 1;
>> +				buf[chunk] = '\r';
>> +				str--;
>> +				n++;
>> +				continue;
>> +			}
>> +			if (use_cr)
>> +				use_cr = 0;
>> +			buf[chunk] = *str;
>> +		}
>> +		if (chunk > 0) {
>> +			ret = xdbc_bulk_write(buf, chunk);
>> +			if (ret < 0)
>> +				break;
>> +		}
>> +	}
>> +}
>> +
>> +static struct console early_xdbc_console = {
>> +	.name =		"earlyxdbc",
>> +	.write =	early_xdbc_write,
>> +	.flags =	CON_PRINTBUFFER,
>> +	.index =	-1,
>> +};
>> +
>> +void __init early_xdbc_register_console(void)
>> +{
>> +	if (early_console)
>> +		return;
>> +
>> +	early_console = &early_xdbc_console;
>> +	if (early_console_keep)
>> +		early_console->flags &= ~CON_BOOT;
>> +	else
>> +		early_console->flags |= CON_BOOT;
>> +	register_console(early_console);
>> +}
>> +
>> +static void xdbc_scrub_function(struct work_struct *work)
>> +{
>> +	unsigned long flags;
>> +
>> +	spin_lock_irqsave(&xdbc.lock, flags);
>> +
>> +	/*
>> +	 * DbC is running, check the event ring and
>> +	 * handle the events.
>> +	 */
>> +	if (readl(&xdbc.xdbc_reg->control) & CTRL_DRC)
>> +		xdbc_handle_events();
>> +
>> +	/*
>> +	 * External reset happened. Need to restart the
>> +	 * debugging hardware.
>> +	 */
>> +	if (unlikely(!(readl(&xdbc.xdbc_reg->control) & CTRL_DCE)))
>> +		xdbc_handle_external_reset();
>> +
>> +	spin_unlock_irqrestore(&xdbc.lock, flags);
>> +
>> +	queue_delayed_work(xdbc_wq, &xdbc.scrub, usecs_to_jiffies(100));
>> +}
> Excuse my total lack of USB knowledge, but WTH does this do and what do
> we need it for?
>

As I said above, I need a worker to check for read completion
events and other hardware conditions.

One condition that needs to be checked regularly is whether the
debug hardware has been reset by the host controller itself. The
xHCI spec allows the debug hardware to share some logic with the
host controller (to reduce cost?). As a result, when the host
controller driver resets the host (which always happens during
driver probe or resume), the debug hardware is reset as well.
Software needs to re-initialize it and bring it back; roughly,
that recovery looks like the sketch below.
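
(Helper names here are illustrative; xdbc_handle_external_reset()
in the patch does the real work.)

	/*
	 * CTRL_DCE cleared means DbC was torn down by a host reset:
	 * rebuild the rings/contexts and enable DbC again.
	 */
	if (!(readl(&xdbc.xdbc_reg->control) & CTRL_DCE)) {
		xdbc_mem_init();	/* illustrative re-init helper */
		writel(readl(&xdbc.xdbc_reg->control) | CTRL_DCE,
		       &xdbc.xdbc_reg->control);
	}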

Early printk doesn't need to read anything from the debug host.
But if we use it for kernel debugging with kgdb (which is in my
work queue), we need a read interface, which means checking the
event ring regularly for read completion events. A sketch of such
a read path is below.
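
(Untested sketch only; XDBC_FLAGS_IN_PROCESS and the read-direction
argument to xdbc_bulk_transfer() are my assumptions, and timeout
handling is omitted for brevity.)

	static int xdbc_bulk_read(char *bytes, int size)
	{
		unsigned long flags;
		int ret;

		raw_spin_lock_irqsave(&xdbc.lock, flags);

		/* Queue an IN transfer... */
		ret = xdbc_bulk_transfer(bytes, size, true);
		if (ret < 0)
			goto out;

		/*
		 * ...then poll the event ring until its completion
		 * event clears the in-process flag.
		 */
		while (xdbc.flags & XDBC_FLAGS_IN_PROCESS) {
			raw_spin_unlock_irqrestore(&xdbc.lock, flags);
			xdbc_delay(100);
			raw_spin_lock_irqsave(&xdbc.lock, flags);
			xdbc_handle_events();
		}
	out:
		raw_spin_unlock_irqrestore(&xdbc.lock, flags);

		return ret;
	}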

Best regards,
Lu Baolu
