Message-ID: <1417516475.5525.57.camel@infradead.org>
Date:	Tue, 02 Dec 2014 10:34:35 +0000
From:	David Woodhouse <dwmw2@...radead.org>
To:	Joerg Roedel <joro@...tes.org>
Cc:	Jiang Liu <jiang.liu@...ux.intel.com>,
	iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] iommu/vt-d: Fix an off-by-one bug in __domain_mapping()

On Mon, 2014-12-01 at 17:27 +0100, Joerg Roedel wrote:
> On Wed, Nov 26, 2014 at 09:42:10AM +0800, Jiang Liu wrote:
> > There's an off-by-one bug in function __domain_mapping(), which may
> > trigger the BUG_ON(nr_pages < lvl_pages) when
> > 	(nr_pages + 1) & superpage_mask == 0
> 
> What is the superpage_mask?
> 
> > The issue was introduced by commit 9051aa0268dc "intel-iommu: Combine
> > domain_pfn_mapping() and domain_sg_mapping()", which sets sg_res to
> > "nr_pages + 1" to avoid some of the 'sg_res==0' code paths.
> > 
> > It's safe to remove extra "+1" because sg_res is only used to calculate
> > page size now.
> 
> From your description and the (hard to read) code in __domain_mapping I
> don't really understand the issue yet. Can you please elaborate on how this
> issue can be triggered?
>
> Is the BUG_ON the only issue and, if yes, can that be fixed by just
> changing the BUG_ON condition?

__domain_mapping() is an amalgamation of the old domain_pfn_mapping()
and domain_sg_mapping() functions. When I did that, in commit 9051aa026,
the 'sg_res' variable was used *only* for tracking how many pages were
left in the current scatterlist element, before we had to get the next
one from the sglist.

For reasons which are lost now, in the case of a simple pfn range I was
setting 'sg_res = nr_pages + 1' to ensure that we *never* got down to
sg_res=0 and tried to look for more from the (non-existent, in this
case) sglist.
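
To sketch the old shape of it (just an illustrative userspace mock-up, not
the real intel-iommu.c code; map_pfn_range() and fake_next_sg_len() are
made-up names), sg_res only gated whether we pulled the next element off the
sglist, so over-stating it by one in the pfn-range case couldn't hurt:

#include <stdio.h>

/* Illustrative sketch only, not the kernel code. */
static unsigned long fake_next_sg_len(void)
{
	return 16;	/* pretend: length of the next sg element, in pages */
}

static void map_pfn_range(unsigned long nr_pages)
{
	/* no sglist here, so pre-load sg_res so it never reaches zero */
	unsigned long sg_res = nr_pages + 1;

	while (nr_pages--) {
		if (!sg_res)			/* never true in this case */
			sg_res = fake_next_sg_len();
		/* ... set up one PTE ... */
		sg_res--;
	}
}

int main(void)
{
	map_pfn_range(511);
	puts("sg_res was only a 'don't fetch the next sg element' flag here");
	return 0;
}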

Later in commit 6dd9a7c73 we added large page support, using sg_res in a
way which actually required it to be accurate. And now we have an
off-by-one because we'll actually *try* to use a 2MiB large page for a
mapping of size 0x1ff000, because of that '+1'.
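
To put numbers on it (again just a stand-alone sketch; pick_lvl_pages() is a
made-up stand-in for the driver's superpage-level selection, which also checks
alignment): 0x1ff000 is 511 4KiB pages, the '+1' makes the selection see 512,
which is exactly one 2MiB superpage, and the sanity check then trips:

#include <assert.h>
#include <stdio.h>

#define VTD_PAGE_SHIFT	12

/* Made-up stand-in for superpage selection: take a 2MiB page
 * (512 x 4KiB) if the remaining count covers it, else 4KiB. */
static unsigned long pick_lvl_pages(unsigned long sg_res)
{
	return sg_res >= 512 ? 512 : 1;
}

int main(void)
{
	unsigned long nr_pages  = 0x1ff000 >> VTD_PAGE_SHIFT;	/* 511 */
	unsigned long sg_res    = nr_pages + 1;			/* 512 */
	unsigned long lvl_pages = pick_lvl_pages(sg_res);

	printf("nr_pages=%lu lvl_pages=%lu\n", nr_pages, lvl_pages);
	assert(!(nr_pages < lvl_pages));	/* the BUG_ON(); 511 < 512, so it fires */
	return 0;
}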

The BUG_ON is entirely correct here, and correctly highlighted the
problem.

However, the +1 is no longer necessary, because the check that needed it
was also modified to read 'if (sg_res && nr_pages)', which is perfectly
sufficient and arguably how it should have been done in the first place.
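
In other words, the fix really is just dropping the '+1' where sg_res is
pre-loaded for the plain pfn-range case; roughly (paraphrasing the change
from the description above, not quoting the patch hunk verbatim):

-	sg_res = nr_pages + 1;
+	sg_res = nr_pages;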

I had an almost identical patch last week for internal testing, because
I stupidly hadn't noticed that Jiang had beaten me to it.

Acked-by: David Woodhouse <David.Woodhouse@...el.com>

> > 	This issue was introduced in v2.6.31, but intel-iommu.c was
> > moved into drivers/iommu in v3.1. So what's the preferred way
> > to deal with stable kernels between v2.6.31 and v3.1?
> 
> Just remove the kernel version marker from the stable tag. The stable
> kernel maintainers for kernels <3.1 will ask you to backport the patch
> or just backport it by themselves.

I think this has only been an issue since commit 6dd9a7c737 added superpage
support in 3.0, hasn't it? Before that, the +1 was *needed*.

-- 
dwmw2
