Message-ID: <CAE9FiQUmQ35a7PVaB1=9q1yhCa391gc1LbmCfgjrOZCH6qvANw@mail.gmail.com>
Date: Fri, 4 Jan 2013 14:58:15 -0800
From: Yinghai Lu <yinghai@...nel.org>
To: "Eric W. Biederman" <ebiederm@...ssion.com>
Cc: Shuah Khan <shuahkhan@...il.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...e.hu>, "H. Peter Anvin" <hpa@...or.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Borislav Petkov <bp@...en8.de>, Jan Kiszka <jan.kiszka@....de>,
Jason Wessel <jason.wessel@...driver.com>,
linux-kernel@...r.kernel.org,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
Joerg Roedel <joro@...tes.org>
Subject: Re: [PATCH v7u1 26/31] x86: Don't enable swiotlb if there is not
enough ram for it
On Fri, Jan 4, 2013 at 2:47 PM, Eric W. Biederman <ebiederm@...ssion.com> wrote:
> Yinghai Lu it looks like your autodetection of the problem case in this
> patch is problematic and needs a rethink. My quick skim says you are
> trying to detect failure too early in the code. Furthermore having
> kexec on panic sized magic comments without explanation is wrong.
The current AMD IOMMU implementation has this sequence:
1. allocate the buffer for swiotlb.
2. detect and initialize the Intel or AMD IOMMU.
3. release the swiotlb buffer if swiotlb == 0, which is set by ops_init.
So we need to detect that case before allocating the buffer for swiotlb.
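
A rough sketch of that ordering, loosely modeled on
arch/x86/kernel/pci-swiotlb.c (function names and details are from
memory and simplified, not a quote of the actual code):

	/* step 1: early init allocates the bounce buffer (64MB by default) */
	void __init pci_swiotlb_init(void)
	{
		if (swiotlb)
			swiotlb_init(0);	/* memblock allocation below 4G */
	}

	/*
	 * step 2: intel/amd iommu detect + init runs afterwards; when the
	 * hardware iommu comes up, its init path clears the swiotlb flag.
	 */

	/* step 3: late init frees the buffer again if a hw iommu took over */
	void __init pci_swiotlb_late_init(void)
	{
		if (!swiotlb)
			swiotlb_free();
	}

So whether the early allocation is kept is only decided at step 3, which
is why the not-enough-ram check has to happen before step 1.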
>
> Shuah Khan this is motivated by kdump. However a correct implementation
> should be about dealing with the case when there is simply not enough
> memory available below 4G for bounce buffers.
>
> If a device needs an iommu, and swiotlb is the only iommu option, and
> there is not enough memory below 4G panic'ing is entirely reasonable.
>
> Do I read this discussion right that we are wasting 64M on systems
> that have the swiotlb code but don't use the swiotlb?
No, nothing is wasted: when a hardware IOMMU takes over, the bounce
buffer is freed again in step 3 above.
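
For the case Eric describes above (not enough memory below 4G for the
bounce buffer), the check the patch is after would be something along
these lines; this is only an illustration, and the function name and
the memblock call here are my assumptions, not the actual patch:

	/* hypothetical sketch: decide whether swiotlb can be enabled at all */
	int __init pci_swiotlb_detect(void)
	{
		phys_addr_t addr;

		/* the default 64MB bounce buffer has to fit below 4G */
		addr = memblock_find_in_range(0, 1ULL << 32,
					      64UL << 20, PAGE_SIZE);
		if (!addr)
			swiotlb = 0;	/* not enough low ram, don't enable it */

		return swiotlb;
	}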