Message-ID: <20130311131417.GF23018@phenom.dumpdata.com>
Date:	Mon, 11 Mar 2013 09:14:17 -0400
From:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To:	Yinghai Lu <yinghai@...nel.org>
Cc:	WANG Chao <chaowang@...hat.com>, Vivek Goyal <vgoyal@...hat.com>,
	"Eric W. Biederman" <ebiederm@...ssion.com>,
	"H. Peter Anvin" <hpa@...or.com>, Shuah Khan <shuahkhan@...il.com>,
	CAI Qian <caiqian@...hat.com>,
	LKML <linux-kernel@...r.kernel.org>,
	kexec <kexec@...ts.infradead.org>,
	Dave Young <dyoung@...hat.com>,
	Takao Indoh <indou.takao@...fujitsu.com>
Subject: Re: 3.9-rc1: crash kernel panic - not syncing: Can not allocate
 SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer

On Fri, Mar 08, 2013 at 11:39:51AM -0800, Yinghai Lu wrote:
> [ Add more to To list ]
> 
> On Fri, Mar 8, 2013 at 10:24 AM, Yinghai Lu <yinghai@...nel.org> wrote:
> > On Fri, Mar 8, 2013 at 4:12 AM, WANG Chao <chaowang@...hat.com> wrote:
> >
> >>> what is 00:02.0 in your system?
> >> This IOMMU issue is related to https://lkml.org/lkml/2012/11/26/814. We can
> >> discuss this IOMMU issue in that thread.
> >> Anyway 00:02.0 is a video card, the box is Ivy Bridge.
> >> # lspci -s 00:02.0 -v
> >> 00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor
> >> Graphics Controller (rev 09) (prog-if 00 [VGA controller])
> >>         Subsystem: Intel Corporation Device 2211
> >>         Flags: bus master, fast devsel, latency 0, IRQ 44
> >>         Memory at afc00000 (64-bit, non-prefetchable) [size=4M]
> >>         Memory at c0000000 (64-bit, prefetchable) [size=256M]
> >>         I/O ports at 6000 [size=64]
> >>         Expansion ROM at <unassigned> [disabled]
> >>         Capabilities: [90] MSI: Enable+ Count=1/1 Maskable- 64bit-
> >>         Capabilities: [d0] Power Management version 2
> >>         Capabilities: [a4] PCI Advanced Features
> >>         Kernel driver in use: i915
> >
> > Will disabling drm for i915 make your iommu work with kdump?
> >
> >>
> >>
> >> Is it expected that intel_iommu=on or crashkernel_low is needed to make
> >> the 2nd kernel boot in 3.9? Back in 3.8, it worked just fine with only
> >> the crashkernel param.
> >
> > Yes, I really do not want to set a crashkernel low range like 72M
> > automatically for everyone.
> > That would make systems with proper iommu support lose 72M under 4G
> > in the first kernel.
> > And we can not play allocate-and-return tricks, as the first kernel
> > has no idea whether the iommu will work in the second kernel, even if
> > the iommu is working in the first kernel.
> >
> > Better to fix iommu support at first.

It would seem that if we really want to go that route we should export
the number of megabytes that SWIOTLB is using. And it actually is
exported - via swiotlb_nr_tbl() - though it is not in megabytes but in
slabs, so you do have to do some bit-shifting.

If you want to use that, perhaps alter the function to be swiotlb_size()
(and have xen-swiotlb do the proper bit-shifting)?

> >
> > For an old system that does not have DMAR, a kernel that does not
> > have IOMMU support enabled, or a user who does not pass
> > intel_iommu=on, we could set the crashkernel low range to 72M
> > automatically.
> 
> It seems it is not worthwhile to check for the case where the second
> kernel does not support the IOMMU.
> 
> Please check the attached patch, which just sets crashkernel_low
> automatically; on a system that DOES support the iommu with kdump, the
> user can specify crashkernel_low=0 to save the low 72M.
> 
> Thanks
> 
> Yinghai
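For context, the boot parameters under discussion would look roughly like
this on the kernel command line (the reservation sizes are illustrative
examples, not values taken from the report):

```shell
# Illustrative kernel command line fragments (example values only):
#
# Reserve crash kernel memory, letting the kernel also reserve a low
# range under 4G automatically (the behavior the patch proposes):
#   crashkernel=256M,high
#
# On a system whose IOMMU is known to work in the kdump kernel, opt out
# of the automatic low reservation to save the ~72M under 4G:
#   crashkernel=256M,high crashkernel_low=0
```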


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
