Message-ID: <alpine.LFD.2.02.1208052207040.5231@xanadu.home>
Date:	Sun, 5 Aug 2012 22:19:04 -0400 (EDT)
From:	Nicolas Pitre <nicolas.pitre@...aro.org>
To:	Cyril Chemparathy <cyril@...com>
cc:	linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
	arnd@...db.de, catalin.marinas@....com, linux@....linux.org.uk,
	will.deacon@....com
Subject: Re: [PATCH 04/22] ARM: LPAE: support 64-bit virt/phys patching

On Sun, 5 Aug 2012, Cyril Chemparathy wrote:

> Hi Nicolas,
> 
> On 8/4/2012 2:49 AM, Nicolas Pitre wrote:
> > On Tue, 31 Jul 2012, Cyril Chemparathy wrote:
> > 
> > > This patch adds support for 64-bit physical addresses in virt_to_phys
> > > patching.  This does not do real 64-bit add/sub, but instead patches in the
> > > upper 32-bits of the phys_offset directly into the output of virt_to_phys.
> > 
> > You should explain _why_ you do not do a real add/sub.  I did deduce it
> > but that might not be obvious to everyone.  Also this subtlety should be
> > commented in the code as well.
> > 
> 
> We could not do an ADDS + ADC here because the carry is not guaranteed to be
> retained and passed into the ADC.  This is because the compiler is free to
> insert all kinds of stuff between the two non-volatile asm blocks.
> 
> Is there another subtlety here that I have missed out on entirely?
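
To make that constraint concrete, here is a minimal sketch (written in the
style of the existing __pv_stub inline asm, with purely illustrative names
and immediates) of why a split ADDS/ADC pair cannot be relied upon: the
carry lives in the condition flags, and nothing stops the compiler from
scheduling flag-clobbering code between two separate asm statements.

	/*
	 * Rejected approach (sketch): two independent asm statements.
	 * Nothing guarantees the carry set by the ADDS survives until
	 * the ADC executes.
	 */
	static inline u64 bad_virt_to_phys64(unsigned long vaddr)
	{
		unsigned long lo, hi = 0;

		/* Patched low-word add: sets the carry flag. */
		asm("adds	%0, %1, %2"
		    : "=r" (lo)
		    : "r" (vaddr), "I" (0x81000000)
		    : "cc");

		/* The compiler may emit arbitrary code here that clobbers CC. */

		/* Would need the carry from the ADDS above -- not guaranteed. */
		asm("adc	%0, %1, %2"
		    : "=r" (hi)
		    : "r" (hi), "I" (0)
		    : "cc");

		return ((u64)hi << 32) | lo;
	}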

The high bits of the valid physical memory address range for which
virt_to_phys and phys_to_virt can be used are always the same.
Therefore no addition at all is needed, fake or real.  All that is
needed is to provide those bits in the top word of the value returned
by virt_to_phys.
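
In other words, something like this minimal sketch (plain C with
illustrative names, assuming LPAE where the physical address is 64-bit;
this is not the actual kernel macro):

	/*
	 * Sketch: the low word is whatever the existing patched add/sub
	 * produces; the constant high bits of the physical range are
	 * simply placed in the top word.  No 64-bit arithmetic, hence
	 * no carry to propagate.
	 */
	static unsigned long pv_offset_low;	/* patched low 32 bits of PHYS_OFFSET - PAGE_OFFSET */
	static unsigned long pv_high_bits;	/* constant high bits of the physical range */

	static inline u64 virt_to_phys_sketch(unsigned long vaddr)
	{
		unsigned long lo = vaddr + pv_offset_low;

		return ((u64)pv_high_bits << 32) | lo;
	}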

> > > In addition to adding 64-bit support, this patch also adds a
> > > set_phys_offset() helper that is needed on architectures that need to
> > > modify PHYS_OFFSET during initialization.
> > > 
> > > Signed-off-by: Cyril Chemparathy <cyril@...com>
> > > ---
> > >   arch/arm/include/asm/memory.h |   22 +++++++++++++++-------
> > >   arch/arm/kernel/head.S        |    6 ++++++
> > >   arch/arm/kernel/setup.c       |   14 ++++++++++++++
> > >   3 files changed, 35 insertions(+), 7 deletions(-)
> > > 
> > > diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
> > > index 4a0108f..110495c 100644
> > > --- a/arch/arm/include/asm/memory.h
> > > +++ b/arch/arm/include/asm/memory.h
> > > @@ -153,23 +153,31 @@
> > >   #ifdef CONFIG_ARM_PATCH_PHYS_VIRT
> > > 
> > >   extern unsigned long __pv_phys_offset;
> > > -#define PHYS_OFFSET __pv_phys_offset
> > > -
> > > +extern unsigned long __pv_phys_offset_high;
> > 
> > As mentioned previously, this is just too ugly.  Please make
> > __pv_phys_offset into a phys_addr_t instead and mask the low/high parts
> > as needed in __virt_to_phys().
> > 
> 
> Maybe u64 instead of phys_addr_t to keep the sizing non-variable?

No.  When not using LPAE, we don't have to pay the price of a u64 value.
That's why the phys_addr_t type is conditionally defined.  You already
do extra processing in virt_to_phys when sizeof(phys_addr_t) > 4, which
is perfect for dealing with this issue.
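
Roughly along these lines (a sketch of the suggestion only, not the final
patch; the masking details and the surrounding names are illustrative):

	/*
	 * phys_addr_t is 32-bit without LPAE and 64-bit with it, so the
	 * sizeof() test below is a compile-time constant and the whole
	 * branch vanishes on non-LPAE builds.
	 */
	extern phys_addr_t __pv_phys_offset;

	static inline phys_addr_t __virt_to_phys(unsigned long x)
	{
		phys_addr_t t;

		/* The existing patched add produces the low 32 bits. */
		t = x + (unsigned long)__pv_phys_offset;

		if (sizeof(phys_addr_t) > 4) {
			/* LPAE only: fold in the constant high bits. */
			t |= __pv_phys_offset & ~(phys_addr_t)0xffffffff;
		}
		return t;
	}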


Nicolas