Date:	Mon, 8 Jul 2013 10:28:26 -0400
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Michael Holzheu <holzheu@...ux.vnet.ibm.com>
Cc:	HATAYAMA Daisuke <d.hatayama@...fujitsu.com>,
	Martin Schwidefsky <schwidefsky@...ibm.com>,
	kexec@...ts.infradead.org,
	Heiko Carstens <heiko.carstens@...ibm.com>,
	Jan Willeke <willeke@...ibm.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v6 3/5] vmcore: Introduce remap_oldmem_pfn_range()

On Mon, Jul 08, 2013 at 11:28:39AM +0200, Michael Holzheu wrote:
> On Mon, 08 Jul 2013 14:32:09 +0900
> HATAYAMA Daisuke <d.hatayama@...fujitsu.com> wrote:
> 
> > (2013/07/02 4:32), Michael Holzheu wrote:
> > > For zfcpdump we can't map the HSA storage because it is only available
> > > via a read interface. Therefore, for the new vmcore mmap feature we have
> > > to introduce a new mechanism to create mappings on demand.
> > >
> > > This patch introduces a new architecture function remap_oldmem_pfn_range()
> > > that should be used to create mappings with remap_pfn_range() for oldmem
> > > areas that can be directly mapped. For zfcpdump this is everything besides
> > > the HSA memory. For the areas that are not mapped by remap_oldmem_pfn_range(),
> > > a new generic vmcore fault handler, mmap_vmcore_fault(), is called.
> > >
> > 
> > This fault handler is only for an s390-specific issue. Other architectures don't need
> > this for the time being.
> > 
> > Also, for the same reason, I'm doing this review based on the source code only.
> > I cannot run the fault handler on a meaningful system, which is currently s390 only.
> 
> You can test the code on other architectures if you do not map anything in advance.
> For example, you could just "return 0" in remap_oldmem_pfn_range():
> 
> /*
>  * Architectures may override this function to map oldmem
>  */
> int __weak remap_oldmem_pfn_range(struct vm_area_struct *vma,
>                                   unsigned long from, unsigned long pfn,
>                                   unsigned long size, pgprot_t prot)
> {
>         return 0;
> }
> 
> In that case for all pages the new mechanism would be used.
> 
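[Aside, not from the patch: with such a stub in place, the wiring on the
mmap side reduces to roughly the sketch below. vmcore_mmap_ops and the
stripped-down mmap_vmcore() body are placeholders for illustration only;
the real mmap_vmcore() still walks the vmcore list and pre-maps whatever
remap_oldmem_pfn_range() can handle.]

static const struct vm_operations_struct vmcore_mmap_ops = {
	.fault = mmap_vmcore_fault,
};

static int mmap_vmcore(struct file *file, struct vm_area_struct *vma)
{
	/*
	 * Normally remap_oldmem_pfn_range() would pre-map the directly
	 * accessible oldmem areas here. With the "return 0" stub nothing
	 * is pre-mapped, so every access to the mapping is resolved by
	 * mmap_vmcore_fault() on first touch.
	 */
	vma->vm_ops = &vmcore_mmap_ops;
	return 0;
}
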
> > 
> > I'm also concerned that the fault handler covers the full range of vmcore, which
> > could hide some kind of mmap() bug that results in a page fault.
> > 
> > So, the fault handler should be enclosed by ifdef CONFIG_S390 for the time being.
> 
> I personally do not like that, but if Vivek and you prefer this, of course we
> can do that.
> 
> What about something like:
> 
> #ifdef CONFIG_S390
> static int mmap_vmcore_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
> {
> ...
> }
> #else
> static int mmap_vmcore_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
> {
> 	BUG();
> 	return VM_FAULT_SIGBUS;	/* not reached */
> }
> #endif

I personally prefer not to special-case it for s390 only and to let the
handler be generic.

If there is a bug in remap_oldmem_pfn_range(), the only side effect is that
we will fault in the page when it is accessed, and that will be slow. BUG()
sounds excessive. At most it could be a WARN_ONCE().

In regular cases on x86, this path should not even be hit. So special-casing
it to detect issues with remap_oldmem_pfn_range() does not sound very good
to me. I would rather leave it as it is; if there are bugs and mmap()
slows down, then somebody will need to debug it.
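
To illustrate (a sketch only, not a patch; fill_oldmem_page() below is a
made-up stand-in for however the handler actually populates the page on
demand):

static int mmap_vmcore_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	/*
	 * On architectures that pre-map everything (e.g. x86) this path
	 * is not expected to be hit at all, so a one-time warning is
	 * plenty to flag a broken remap_oldmem_pfn_range().
	 */
	WARN_ONCE(1, "vmcore: unexpected mmap fault, check remap_oldmem_pfn_range()\n");

	/* Fall back to reading in the old memory page on demand. */
	return fill_oldmem_page(vma, vmf);
}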

Thanks
Vivek
