Message-ID: <CA+=Fv5REZNSH584Sy2cA2-iKqfRzV64=d4_nwOCT5vtH+1jX4Q@mail.gmail.com>
Date: Thu, 6 Nov 2025 23:20:53 +0100
From: Magnus Lindholm <linmag7@...il.com>
To: Yuhao Jiang <danisjiang@...il.com>
Cc: Richard Henderson <richard.henderson@...aro.org>, Matt Turner <mattst88@...il.com>, 
	David Airlie <airlied@...hat.com>, linux-alpha@...r.kernel.org, 
	linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org, 
	stable@...r.kernel.org
Subject: Re: [PATCH] agp/alpha: fix out-of-bounds write with negative pg_start

On Fri, Oct 24, 2025 at 5:48 AM Yuhao Jiang <danisjiang@...il.com> wrote:
>
> The code contains an out-of-bounds write vulnerability due to insufficient
> bounds validation. Negative pg_start values and integer overflow in
> pg_start + page_count can bypass the existing bounds check.
>
> For example, pg_start=-1 with page_count=1 produces a sum of 0, passing
> the check `(pg_start + page_count) > num_entries`, but later writes to
> ptes[-1]. Similarly, pg_start=LONG_MAX-5 with page_count=10 overflows,
> bypassing the check.
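
To make the failure mode concrete, here is a minimal userspace sketch of a
check with the shape described above (the names pg_start, page_count and
num_entries follow the report; everything else is hypothetical and not the
actual driver code):

  #include <limits.h>
  #include <stdio.h>

  /* Same shape as the flawed check described in the report. */
  static int flawed_check(long pg_start, long page_count, long num_entries)
  {
          /* Plain signed addition: a negative pg_start or a wrapping sum
           * can still satisfy the comparison. */
          if ((pg_start + page_count) > num_entries)
                  return -1;      /* rejected */
          return 0;               /* accepted -> caller writes ptes[pg_start] */
  }

  int main(void)
  {
          long num_entries = 1024;

          /* pg_start = -1, page_count = 1: the sum is 0, the check passes,
           * and a later write would land on ptes[-1]. */
          printf("pg_start=-1: %s\n",
                 flawed_check(-1, 1, num_entries) ? "rejected" : "accepted");

          /* pg_start = LONG_MAX - 5, page_count = 10: the addition
           * overflows (undefined behavior in standard C; build with
           * -fwrapv to get the kernel-like wrapping shown here), so the
           * sum goes negative and the check passes again. */
          printf("pg_start=LONG_MAX-5: %s\n",
                 flawed_check(LONG_MAX - 5, 10, num_entries) ? "rejected"
                                                             : "accepted");
          return 0;
  }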

I guess the bounds checking in the AGP code for Alpha has some limitations
in how it's implemented. I spent some time looking at how the bounds check
in alpha_core_agp_insert_memory() is done on other architectures, and I see
some of the same issues in, for example, parisc_agp_insert_memory() as well
as amd64_insert_memory(), which even carries a /* FIXME: could wrap */
comment next to its bounds check. Even agp_generic_insert_memory() seems to
have similar limitations. I'm wondering whether that's because, at some
point, it was decided that this would never become a real problem and there
was no need to mess with old code that isn't really that broken, or simply
because no one ever got around to fixing it properly.

If it needs fixing, should we try to fix it for all architectures that
have similar limitations?
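
For what it's worth, here is a minimal sketch of what an overflow-safe
variant of such a check could look like, using check_add_overflow() from
<linux/overflow.h>; the function shape is hypothetical and not a proposed
patch for any particular driver:

  #include <linux/errno.h>
  #include <linux/overflow.h>

  /* Hypothetical hardened bounds check mirroring the pattern above. */
  static int hardened_check(long pg_start, long page_count, long num_entries)
  {
          long end;

          /* Reject negative inputs outright. */
          if (pg_start < 0 || page_count < 0)
                  return -EINVAL;

          /* check_add_overflow() evaluates to true if the sum wraps. */
          if (check_add_overflow(pg_start, page_count, &end))
                  return -EINVAL;

          if (end > num_entries)
                  return -EINVAL;

          return 0;
  }

The same pattern would presumably carry over to the parisc, amd64 and
generic variants mentioned above.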

Magnus
