Date:	Mon, 29 Sep 2014 17:42:55 +0800
From:	Chenhui Zhao <chenhui.zhao@...escale.com>
To:	Russell King - ARM Linux <linux@....linux.org.uk>
CC:	<linux-kernel@...r.kernel.org>, <kernel@...gutronix.de>,
	<linux-arm-kernel@...ts.infradead.org>, <leoli@...escale.com>,
	<Jason.Jin@...escale.com>, <Zhuoyu.Zhang@...escale.com>
Subject: Re: [PATCH 3/3] arm: pm: add deep sleep support for LS1

On Sun, Sep 28, 2014 at 03:26:33PM +0100, Russell King - ARM Linux wrote:
> On Sun, Sep 28, 2014 at 07:06:40PM +0800, Chenhui Zhao wrote:
> > On Fri, Sep 26, 2014 at 01:14:27PM +0100, Russell King - ARM Linux wrote:
> > > On Fri, Sep 26, 2014 at 07:25:03PM +0800, Chenhui Zhao wrote:
> > > > +/*
> > > > + * r0: the physical entry address of SRAM code
> > > > + *
> > > > + */
> > > > +ENTRY(ls1_do_deepsleep)
> > > > +	mov	r13, r0
> > > > +
> > > > +	/* flush cache */
> > > > +	bl	v7_flush_dcache_all
> > > 
> > > Please explain the purpose of this call via a comment in the code.
> > > 
> > > The generic code will have saved the CPU state, and will have called
> > > flush_cache_louis() to flush the caches to the point of unification.
> > > 
> > > The only data that will have been loaded into the cache since that
> > > point is the stack for the return from __cpu_suspend_save, plus
> > > speculative prefetches.
> > > 
> > > So, the only reason I can gather is that you need to flush data from
> > > lower levels of the cache below the point of unification.
> > > 
> > 
> > In the deep sleep process, all cache contents will be lost, so flush
> > all caches beforehand to prevent data loss.
> 
> You haven't answered my question.
> 

I want to flush both the L1 and L2 caches of the Cortex-A7 in LS1021.
However, flush_cache_louis() only flushes the L1 cache, so I call
v7_flush_dcache_all() to flush both L1 and L2.
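
Something like this at the call site, perhaps (just a suggested wording
for the comment you asked for):

	/*
	 * On LS1021 (Cortex-A7), deep sleep powers down the L1 caches and
	 * the integrated L2 cache.  The generic suspend path has only
	 * cleaned to the point of unification via flush_cache_louis(), so
	 * clean and invalidate the whole hierarchy before it is lost.
	 */
	bl	v7_flush_dcache_all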

> > > > +
> > > > +	/* disable cache, C bit in SCTLR */
> > > > +	mrc	p15, 0, r0, c1, c0, 0
> > > > +	bic	r0, r0, #(1 << 2)
> > > > +	mcr	p15, 0, r0, c1, c0, 0
> > > > +	isb
> > > > +
> > > > +	/* disable coherency, SMP bit in ACTLR */
> > > > +	mrc	p15, 0, r0, c1, c0, 1
> > > > +	bic	r0, r0, #(1 << 6)
> > > > +	mcr	p15, 0, r0, c1, c0, 1
> > > > +	isb
> > > > +	dsb
> > > > +
> > > > +	/* disable MMU, M bit in SCTLR */
> > > > +	mrc	p15, 0, r0, c1, c0, 0
> > > > +	bic	r0, r0, #1
> > > > +	mcr	p15, 0, r0, c1, c0, 0
> > > > +	isb
> > > > +
> > > > +	/* jump to sram code using physical address */
> > > > +	bx	r13
> > > 
> > > This looks extremely fragile.  You are running in virtual space, and you
> > > turn the MMU off.  You are reliant on the MMU still being on for the
> > > following instructions to already be in the pipeline.  That's not a
> > > very good assumption to make (we've made it in the past and it breaks
> > > every so often when things change, eg when the code is no longer laid
> > > out how we expect.)
> > > 
> > > You need to disable the MMU safely, which means using the identity map
> > > page tables and executing code in the identity map region.
> > 
> > Yes, the code switches off the MMU and switches to the physical
> > address space.  On LS1021, DDR memory sits at physical addresses
> > starting at 0x80000000, and the kernel's virtual address space also
> > starts at 0x80000000 (CONFIG_PAGE_OFFSET = 0x80000000), so the virtual
> > addresses of kernel code are equal to their physical addresses.
> 
> You can't rely on that.
> 
> Sorry, NAK.

I'll try to use a safer method to implement it.
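
Roughly, I am thinking of something along these lines (an untested
sketch; ls1_deepsleep_mmu_off is a placeholder name): put the final
MMU-off step in .idmap.text, run setup_mm_for_reboot() to switch to the
identity tables, and enter the code via its physical address so it keeps
executing after SCTLR.M is cleared:

	.pushsection	.idmap.text, "ax"
	/*
	 * Entered via its physical address with the identity tables live
	 * (after setup_mm_for_reboot()), so virtual == physical here and
	 * disabling the MMU does not pull the mapping out from under us.
	 * r0: physical entry address of the SRAM code.
	 */
ENTRY(ls1_deepsleep_mmu_off)
	mrc	p15, 0, r1, c1, c0, 0
	bic	r1, r1, #1			@ clear SCTLR.M
	mcr	p15, 0, r1, c1, c0, 0
	isb
	bx	r0				@ jump to the SRAM code
ENDPROC(ls1_deepsleep_mmu_off)
	.popsection

The cache disable and coherency exit would happen before this, and the
call would go through virt_to_phys(), similar to how soft_restart()
enters cpu_reset().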

Chenhui
