Date:	Sun, 28 Sep 2014 15:26:33 +0100
From:	Russell King - ARM Linux <linux@....linux.org.uk>
To:	Chenhui Zhao <chenhui.zhao@...escale.com>
Cc:	linux-kernel@...r.kernel.org, kernel@...gutronix.de,
	linux-arm-kernel@...ts.infradead.org, leoli@...escale.com,
	Jason.Jin@...escale.com, Zhuoyu.Zhang@...escale.com
Subject: Re: [PATCH 3/3] arm: pm: add deep sleep support for LS1

On Sun, Sep 28, 2014 at 07:06:40PM +0800, Chenhui Zhao wrote:
> On Fri, Sep 26, 2014 at 01:14:27PM +0100, Russell King - ARM Linux wrote:
> > On Fri, Sep 26, 2014 at 07:25:03PM +0800, Chenhui Zhao wrote:
> > > +static int ls1_start_deepsleep(unsigned long addr)
> > > +{
> > > +	ls1_do_deepsleep(addr);
> > > +
> > > +	return 0;
> > > +}
> > ...
> > > +	cpu_suspend(SRAM_CODE_BASE_PHY, ls1_start_deepsleep);
> > 
> > What's the point of this function?  Why can't ls1_do_deepsleep() just
> > return zero?
> 
> It just leaves space for adding C code in the future.
> 
> > 
> > > +/*
> > > + * r0: the physical entry address of SRAM code
> > > + *
> > > + */
> > > +ENTRY(ls1_do_deepsleep)
> > > +	mov	r13, r0
> > > +
> > > +	/* flush cache */
> > > +	bl	v7_flush_dcache_all
> > 
> > Please explain the purpose of this call via a comment in the code.
> > 
> > The generic code will have saved the CPU state, and will have called
> > flush_cache_louis() to flush the caches to the point of unification.
> > 
> > The only data which will have been loaded into the cache between that
> > point is the stack for the return from __cpu_suspend_save, and
> > speculative prefetches.
> > 
> > So, the only reason I can gather is that you need to flush data from
> > lower levels of the cache below the point of unification.
> > 
> 
> In the deep sleep process, all caches will lose their contents, so we flush
> all caches to prevent data loss.

You haven't answered my question.

> > > +
> > > +	/* disable cache, C bit in SCTLR */
> > > +	mrc	p15, 0, r0, c1, c0, 0
> > > +	bic	r0, r0, #(1 << 2)
> > > +	mcr	p15, 0, r0, c1, c0, 0
> > > +	isb
> > > +
> > > +	/* disable coherency, SMP bit in ACTLR */
> > > +	mrc	p15, 0, r0, c1, c0, 1
> > > +	bic	r0, r0, #(1 << 6)
> > > +	mcr	p15, 0, r0, c1, c0, 1
> > > +	isb
> > > +	dsb
> > > +
> > > +	/* disable MMU, M bit in SCTLR */
> > > +	mrc	p15, 0, r0, c1, c0, 0
> > > +	bic	r0, r0, #1
> > > +	mcr	p15, 0, r0, c1, c0, 0
> > > +	isb
> > > +
> > > +	/* jump to sram code using physical address */
> > > +	bx	r13
> > 
> > This looks extremely fragile.  You are running in virtual space, and you
> > turn the MMU off.  You are reliant on the MMU still being on for the
> > following instructions to already be in the pipeline.  That's not a
> > very good assumption to make (we've made it in the past and it breaks
> > every so often when things change, e.g. when the code is no longer laid
> > out how we expect.)
> > 
> > You need to disable the MMU safely, which means using the identity map
> > page tables and executing code in the identity map region.
> 
> Yes, the code will switch off the MMU and switch to the physical address
> space. On LS1021, the DDR memory is located in the physical address space
> starting at 0x80000000, and the kernel space also starts at 0x80000000
> (CONFIG_PAGE_OFFSET = 0x80000000), so the virtual address of the kernel
> code is equal to the physical address.

You can't rely on that.

Sorry, NAK.

-- 
FTTC broadband for 0.8mile line: currently at 9.5Mbps down 400kbps up
according to speedtest.net.
