Message-ID: <5424466A.1010103@freescale.com>
Date:	Thu, 25 Sep 2014 11:44:26 -0500
From:	German Rivera <German.Rivera@...escale.com>
To:	Scott Wood <scottwood@...escale.com>
CC:	Kim Phillips <kim.phillips@...escale.com>,
	<gregkh@...uxfoundation.org>, <arnd@...db.de>,
	<linux-kernel@...r.kernel.org>, <stuart.yoder@...escale.com>,
	<agraf@...e.de>, <linuxppc-release@...ux.freescale.net>,
	Erez Nir-RM30794 <nir.erez@...escale.com>
Subject: Re: [PATCH 1/3 v2] drivers/bus: Added Freescale Management Complex
 APIs


On 09/25/2014 11:16 AM, Scott Wood wrote:
> On Thu, 2014-09-25 at 10:44 -0500, German Rivera wrote:
>>
>> On 09/24/2014 10:40 PM, Kim Phillips wrote:
>>> On Wed, 24 Sep 2014 21:23:59 -0500
>>> German Rivera <German.Rivera@...escale.com> wrote:
>>>
>>>> On 09/23/2014 07:49 PM, Kim Phillips wrote:
>>>>> On Fri, 19 Sep 2014 17:49:39 -0500
>>>>> "J. German Rivera" <German.Rivera@...escale.com> wrote:
>>>>>> +	mc_io->portal_virt_addr = NULL;
>>>>>> +	devm_kfree(mc_io->dev, mc_io);
>>>>>
>>>>> like I said before, there's really no point in clearing something
>>>>> out right before it's freed.
>>>>>
>>>> I disagree. This can help detect cases of double-freeing.
>>>
>>> ?  freeing NULL does nothing - it just returns - which doesn't help
>>> detect anything.  What's more, the kernel has a memory debugging
>>> infrastructure that detects double freeing of the same object.
>> I know that, but silently doing nothing when freeing NULL is a bad
>> practice in general, because it hides a bug.
>
> It doesn't hide a bug "in general".  I'm not sure what the relevance is
> here, though -- this seems to be about reusing portal_virt_addr as a
> debug flag rather than anything to do with actually calling free() on
> portal_virt_addr.
>
>> Is the memory debugging infrastructure enabled by default? If so, then
>> I would agree with you. If not, we would need to be able to reproduce
>> the bug while having memory debugging enabled. This assumes that the
>> bug is easy to reproduce. What if it is not easy to reproduce? Having
>> "first-failure data capture" checks is useful for helping diagnose bugs
>> that are not easy to reproduce.
>>
>> In this case, if due to some bug, fsl_destroy_mc_io() is
>> called twice for the same mc_io object, the combination of doing
>> "mc_io->portal_virt_addr = NULL" and the original "if (WARN_ON",
>> that you want removed, would have helped catch the bug on
>> "first failure".
>> Even removing the "if (WARN_ON)" but keeping the
>> "mc_io->portal_virt_addr = NULL" would still help catch the bug
>> on "first failure", assuming that the system crashes when calling
>> "devm_iounmap(mc_io->dev, NULL);" or at least prints a stack trace.
>
> iounmap of NULL will not crash or print a stack trace.  It is a no-op.
>
Ok, so if iounmap silently does nothing for NULL, and free silently does
nothing for NULL, that means we may not be able to easily catch
"destroy double-call" bugs on first failure, in this case.
If that is not an issue, then I'll remove the "mc_io->portal_virt_addr =
NULL" as well.

>>>>>> +int mc_send_command(struct fsl_mc_io *mc_io, struct mc_command *cmd)
>>>>>> +{
>>>>>> +	enum mc_cmd_status status;
>>>>>> +	int error;
>>>>>> +	unsigned long irqsave_flags = 0;
>>>>>> +	unsigned long jiffies_until_timeout =
>>>>>> +	    jiffies + MC_CMD_COMPLETION_TIMEOUT_JIFFIES;
>>>>>> +
>>>>>> +	/*
>>>>>> +	 * Acquire lock depending on mc_io flags:
>>>>>> +	 */
>>>>>> +	if (mc_io->flags & FSL_MC_PORTAL_SHARED_BY_INT_HANDLERS) {
>>>>>> +		if (mc_io->flags & FSL_MC_PORTAL_SHARED_BY_THREADS)
>>>>>> +			spin_lock_irqsave(&mc_io->spinlock, irqsave_flags);
>>>>>> +		else
>>>>>> +			spin_lock(&mc_io->spinlock);
>>>>>> +	} else if (mc_io->flags & FSL_MC_PORTAL_SHARED_BY_THREADS) {
>>>>>> +		mutex_lock(&mc_io->mutex);
>>>>>> +	}
>>>>>
>>>>> again, I think we need to drop support for coming from h/w IRQ
>>>>> context here (SHARED_BY_INT_HANDLERS); there are no IRQ handlers
>>>>> in this patch series, and calling this function from an IRQ
>>>>> handler would be prohibitively wasteful, judging by the udelay
>>>>> and timeout values below.
>>>>>
>>>>> Can we just mutex_lock for now, and unconditionally (no
>>>>> SHARED_BY_THREADS check), to protect from nesting?
>>>>>
>>>> I would still prefer to keep the SHARED_BY_THREADS flag, to give the
>>>> option of not doing any locking, in cases where the portal used in
>>>> mc_send_command() is not shared among concurrent callers.
>>>
>>> how can you guarantee there won't be concurrent callers?  The linux
>>> kernel is multithreaded.
>>>
>> The owner of the portal should know whether his/her code can invoke
>> mc_send_command() on the same portal from multiple threads or not.
>
> Or more generally, whether the caller is responsible for
> synchronization.
>
> Would it make sense to simplify by saying the caller is always
> responsible for synchronization?
>
Certainly this will simplify mc_send_command(), but it will put an
additional burden on all the callers of MC commands. If there is no
objection, we can remove locking entirely from mc_send_command() and
let the callers of MC commands deal with it.
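In that case, each caller would wrap its MC commands in its own lock,
e.g. something like this sketch (dprc_dev and cmd_mutex are
hypothetical names, just for illustration):

        /* Hypothetical caller-side serialization of MC commands: */
        struct dprc_dev {
                struct fsl_mc_io *mc_io;
                struct mutex cmd_mutex; /* mutex_init() at probe time */
        };

        static int dprc_do_command(struct dprc_dev *dprc,
                                   struct mc_command *cmd)
        {
                int error;

                mutex_lock(&dprc->cmd_mutex);
                error = mc_send_command(dprc->mc_io, cmd); /* no locking inside */
                mutex_unlock(&dprc->cmd_mutex);
                return error;
        }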

> Then again, the management complex is not supposed to be on the
> performance-critical path, so why not simplify by just always doing
> the locking here?
>
But what about the few MC commands that need to run in interrupt
context (such as to inspect or clear MC interrupts)?

>>>>>> +	/*
>>>>>> +	 * Wait for response from the MC hardware:
>>>>>> +	 */
>>>>>> +	for (;;) {
>>>>>> +		status = mc_read_response(mc_io->portal_virt_addr, cmd);
>>>>>> +		if (status != MC_CMD_STATUS_READY)
>>>>>> +			break;
>>>>>> +
>>>>>> +		/*
>>>>>> +		 * TODO: When MC command completion interrupts are supported
>>>>>> +		 * call wait function here instead of udelay()
>>>>>> +		 */
>>>>>> +		udelay(MC_CMD_COMPLETION_POLLING_INTERVAL_USECS);
>>>>>
>>>>> this pauses any caller for 0.5ms on every successful command
>>>>> write.  Can the next submission of the patch series wait until
>>>>> completion IRQs are indeed supported, since both that and the above
>>>>> locking need to be resolved?
>>>>>
>>>> No. Interrupt handlers will come in a later patch series as they are
>>>> not needed for using the basic MC functionality.
>>>
>>> meanwhile unnecessarily udelaying kernel threads for .5ms upsets
>>> basic kernel functionality :)  Would using the kernel's
>>> wait_for_completion API be a good compromise here?  See
>>> include/linux/completion.h.
>>>
>> The intent of doing the udelay() in the middle of the polling loop was
>> to throttle down the frequency of I/Os done while polling for the
>> completion of the command. Can you elaborate on why ".5ms udelay upsets
>> basic kernel functionality"?
>
> It introduces latency, especially since it's possible for it to happen
> with interrupts disabled.  And you're actually potentially blocking for
> more than that, since 500us is just one iteration of the loop.
>
> The jiffies test for exiting the loop is 500ms, which is *way* too long
> to spend in a critical section.
>
But that would be the worst case, since that is only the timeout bound;
the loop exits as soon as the MC responds.
What timeout value do you think would be more appropriate in this case?
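(The timeout exit is not visible in the quoted excerpt; inside the loop
it amounts to a check along these lines, sketched from the declarations
above:)

        if (time_after(jiffies, jiffies_until_timeout)) {
                error = -ETIMEDOUT;
                break;
        }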

>> Would it be better to just use "plain delay loop", instead of the udelay
>> call, such as the following?
>>
>> for (i = 0; i < 1000; i++)
>>      ;
>
> No, never do that.  You have no idea how long it will actually take.
> GCC might even optimize it out entirely.
>
Ok, so udelay is not good here, and a plain vanilla delay loop is not
good either. Are there other alternatives, or should we just not worry
about throttling down the frequency of I/Os done in the polling loop?
(MC commands are not expected to be executed frequently enough for this
polling loop to cause a lot of I/O traffic.)
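One alternative that comes to mind: sleep instead of busy-waiting
whenever we are not in atomic context. A rough sketch, assuming the
SHARED_BY_INT_HANDLERS spinlock paths are the only atomic callers
(error handling after the loop omitted):

        for (;;) {
                status = mc_read_response(mc_io->portal_virt_addr, cmd);
                if (status != MC_CMD_STATUS_READY)
                        break;

                if (time_after(jiffies, jiffies_until_timeout)) {
                        error = -ETIMEDOUT;
                        break;
                }

                if (mc_io->flags & FSL_MC_PORTAL_SHARED_BY_INT_HANDLERS) {
                        /* may hold a spinlock here, so we must not sleep */
                        udelay(MC_CMD_COMPLETION_POLLING_INTERVAL_USECS);
                } else {
                        /* mutex path: sleeping is allowed */
                        usleep_range(MC_CMD_COMPLETION_POLLING_INTERVAL_USECS,
                                     2 * MC_CMD_COMPLETION_POLLING_INTERVAL_USECS);
                }
        }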

>> I can see that in the cases where we use "completion interrupts", the
>> ISR can signal the completion, and the polling loop can be replaced by
>> waiting on the completion. However, I don't see how using a completion
>> can help make a polling loop more efficient, if you don't have a
>> "completion interrupt" to signal the completion.
>
> It's not about making it more efficient.  It's about not causing
> problems for other things going on in the system.
>
Still, I don't see how a completion can help here, unless you can
signal the completion from an ISR.
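That is, once MC completion interrupts exist, I can see it working like
this sketch (mc_command_done_irq and the cmd_done field are hypothetical
additions):

        /* Hypothetical ISR, only possible once MC completion IRQs exist: */
        static irqreturn_t mc_command_done_irq(int irq, void *arg)
        {
                struct fsl_mc_io *mc_io = arg;

                complete(&mc_io->cmd_done); /* cmd_done: new struct completion */
                return IRQ_HANDLED;
        }

        /* ...and in mc_send_command(), replacing the polling loop: */
        if (!wait_for_completion_timeout(&mc_io->cmd_done,
                                         MC_CMD_COMPLETION_TIMEOUT_JIFFIES))
                return -ETIMEDOUT;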

-German
