Message-ID: <ce8af9c13bcea9230c7689f3c1e0e2cd@matoro.tk>
Date: Mon, 28 Feb 2022 17:14:26 -0500
From: matoro <matoro_mailinglist_kernel@...oro.tk>
To: Kees Cook <keescook@...omium.org>
Cc: Alexander Viro <viro@...iv.linux.org.uk>,
Eric Biederman <ebiederm@...ssion.com>,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
John Paul Adrian Glaubitz <glaubitz@...sik.fu-berlin.de>,
stable@...r.kernel.org,
Magnus Groß <magnus.gross@...h-aachen.de>,
Thorsten Leemhuis <regressions@...mhuis.info>,
Anthony Yznaga <anthony.yznaga@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
regressions@...ts.linux.dev, linux-ia64@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-hardening@...r.kernel.org
Subject: Re: [PATCH 5.16 v2] binfmt_elf: Avoid total_mapping_size for ET_EXEC
On 2022-02-28 15:55, Kees Cook wrote:
> Partially revert commit 5f501d555653 ("binfmt_elf: reintroduce using
> MAP_FIXED_NOREPLACE").
>
> At least ia64 has ET_EXEC PT_LOAD segments that are not virtual-address
> contiguous (but _are_ file-offset contiguous). This would result in
> giant mapping attempts to cover the entire span, including the virtual
> address range hole. Disable total_mapping_size for ET_EXEC, which
> reduces the MAP_FIXED_NOREPLACE coverage to only the first PT_LOAD:
>
> $ readelf -lW /usr/bin/gcc
> ...
> Program Headers:
>   Type Offset   VirtAddr           PhysAddr           FileSiz  MemSiz   ...
>   ...
>   LOAD 0x000000 0x4000000000000000 0x4000000000000000 0x00b5a0 0x00b5a0 ...
>   LOAD 0x00b5a0 0x600000000000b5a0 0x600000000000b5a0 0x0005ac 0x000710 ...
>   ...
>        ^^^^^^^^ ^^^^^^^^^^^^^^^^^^                    ^^^^^^^^ ^^^^^^^^
>
> File offset range     : 0x000000-0x00bb4c (0x00bb4c bytes)
> Virtual address range : 0x4000000000000000-0x600000000000bcb0 (0x200000000000bcb0 bytes)
>
> Ironically, this is the reverse of the problem that originally caused
> problems with ET_EXEC and MAP_FIXED_NOREPLACE: overlaps. This problem is
> with holes. Future work could restore full coverage if load_elf_binary()
> were to perform mappings in a separate phase from the loading (where it
> could resolve both overlaps and holes).
>
> Cc: Alexander Viro <viro@...iv.linux.org.uk>
> Cc: Eric Biederman <ebiederm@...ssion.com>
> Cc: linux-fsdevel@...r.kernel.org
> Cc: linux-mm@...ck.org
> Reported-by: matoro <matoro_mailinglist_kernel@...oro.tk>
> Reported-by: John Paul Adrian Glaubitz <glaubitz@...sik.fu-berlin.de>
> Fixes: 5f501d555653 ("binfmt_elf: reintroduce using MAP_FIXED_NOREPLACE")
> Link: https://lore.kernel.org/r/a3edd529-c42d-3b09-135c-7e98a15b150f@leemhuis.info
> Cc: stable@...r.kernel.org
> Signed-off-by: Kees Cook <keescook@...omium.org>
> ---
> Here's the v5.16 backport.
> ---
> fs/binfmt_elf.c | 25 ++++++++++++++++++-------
> 1 file changed, 18 insertions(+), 7 deletions(-)
>
> diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
> index f8c7f26f1fbb..911a9e7044f4 100644
> --- a/fs/binfmt_elf.c
> +++ b/fs/binfmt_elf.c
> @@ -1135,14 +1135,25 @@ static int load_elf_binary(struct linux_binprm *bprm)
> * is then page aligned.
> */
> load_bias = ELF_PAGESTART(load_bias - vaddr);
> - }
>
> - /*
> - * Calculate the entire size of the ELF mapping (total_size).
> - * (Note that load_addr_set is set to true later once the
> - * initial mapping is performed.)
> - */
> - if (!load_addr_set) {
> + /*
> + * Calculate the entire size of the ELF mapping
> + * (total_size), used for the initial mapping,
> + * due to first_pt_load which is set to false later
> + * once the initial mapping is performed.
> + *
> + * Note that this is only sensible when the LOAD
> + * segments are contiguous (or overlapping). If
> + * used for LOADs that are far apart, this would
> + * cause the holes between LOADs to be mapped,
> + * running the risk of having the mapping fail,
> + * as it would be larger than the ELF file itself.
> + *
> + * As a result, only ET_DYN does this, since
> + * some ET_EXEC (e.g. ia64) may have virtual
> + * memory holes between LOADs.
> + *
> + */
> total_size = total_mapping_size(elf_phdata, elf_ex->e_phnum);
> if (!total_size) {
This does the trick! Thank you so much!!