Date:   Wed, 7 Oct 2020 10:39:02 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Anshuman Khandual <anshuman.khandual@....com>,
        linux-arm-kernel@...ts.infradead.org
Cc:     will@...nel.org, catalin.marinas@....com,
        Mark Rutland <mark.rutland@....com>,
        Ard Biesheuvel <ardb@...nel.org>,
        Steven Price <steven.price@....com>,
        Robin Murphy <robin.murphy@....com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        linux-kernel@...r.kernel.org, Michal Hocko <mhocko@...nel.org>
Subject: Re: [PATCH] arm64/mm: Validate hotplug range before creating linear
 mapping

>> We do have __add_pages()->check_hotplug_memory_addressable() where we
>> already check against MAX_PHYSMEM_BITS.
> 
> Initially, I thought about check_hotplug_memory_addressable(), but the
> existing check that asserts the end of the hotplug range against
> MAX_PHYSMEM_BITS is generic in nature. AFAIK the linear mapping problem
> is arm64 specific, hence I was not sure whether to add an arch specific
> callback which would give the platform an opportunity to weigh in on
> these ranges.

Also on s390x, the range where you can create an identity mapping depends on
- early kernel setup
- kasan

(I assume it's the same for all archs)

See arch/s390/mm/vmem.c:vmem_add_mapping(), which contains similar
checks (VMEM_MAX_PHYS).
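
Roughly (from memory, so the details might differ), the relevant part
is the range check at the start of that function:

int vmem_add_mapping(unsigned long start, unsigned long size)
{
	/* refuse anything we cannot cover with the identity mapping */
	if (start + size > VMEM_MAX_PHYS ||
	    start + size < size)
		return -ERANGE;
	...
}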

> 
> But hold on, check_hotplug_memory_addressable() only gets called from
> __add_pages(), after linear mapping creation in arch_add_memory(). How
> would it help? We need something for add_memory(), its variants, and
> also possibly for memremap_pages() when it calls arch_add_memory().
> 

Good point. We chose that place for simplicity when adding it (I was
favoring calling it at two places back then). Now, we might have good
reason to move the checks further up the call chain.

Most probably, something like

struct range memhp_get_addressable_range(bool need_mapping)
{
	...
}

would make sense, to deal with memremap_pages() users that don't need
an identity mapping.
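
Something like this rough sketch (already using min_t as you note
below, and assuming a default arch_get_mappable_range() that simply
returns the full physical address range when an arch does not provide
one):

struct range memhp_get_addressable_range(bool need_mapping)
{
	const u64 max_phys = (1ull << MAX_PHYSMEM_BITS) - 1;
	struct range range;

	if (need_mapping) {
		/* clamp what the arch can map to what we can address */
		range = arch_get_mappable_range();
		if (range.start > max_phys) {
			range.start = 0;
			range.end = 0;
		}
		range.end = min_t(u64, range.end, max_phys);
	} else {
		/* e.g., memremap_pages() without a linear mapping */
		range.start = 0;
		range.end = max_phys;
	}
	return range;
}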

We have two options:

1. Generalize the checks and perform them early in the applicable
functions. Have a single way to get the applicable ranges, both for
callers and inside the functions.

2. Keep the checks where they are. Add memhp_get_addressable_range() so
callers can figure the limits out. The downside is that the relation
between the different checks is less clear, and if things change in one
place, we are likely to miss the other.

>> struct range memhp_get_addressable_range(void)
>> {
>> 	const u64 max_phys = (1ull << (MAX_PHYSMEM_BITS + 1)) - 1;
>> 	struct range range = arch_get_mappable_range();
> 
> What would you suggest as the default fallback range if a platform
> does not define this callback?

Just the largest possible range until we implement them. IIRC, an s390x
version should be easy to add.
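
Off the top of my head (untested, so take it with a grain of salt):

struct range arch_get_mappable_range(void)
{
	/* VMEM_MAX_PHYS is the first address beyond the mappable range */
	struct range range = {
		.start = 0,
		.end = VMEM_MAX_PHYS - 1,
	};

	return range;
}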

> 
>>
>> 	if (range.start > max_phys) {
>> 		range.start = 0;
>> 		range.end = 0;
>> 	}
>> 	range.end = max_t(u64, range.end, max_phys);
> 
> min_t instead?

Yeah :)

> 
>>
>> 	return range;
>> }
>>
>>
>> That, we can use in check_hotplug_memory_addressable(), and also allow
>> add_memory*() users to make use of it.
> 
> So this check would happen twice during a hotplug ?

Right now it's like calling a function with the wrong arguments - you
just don't have a clue what the valid arguments are, because non-obvious
errors (besides -ENOMEM, which is a temporary error) pop up deep down
the call chain.

For example, virtio-mem would use it to detect the usable device range
during device initialization, and warn the user accordingly. It
currently checks against MAX_PHYSMEM_BITS manually, but that's just
ugly. Failing at random add_memory() calls (permanently!) is not so
nice.

In the case of DIMMs, we could use it to detect early whether adding
parts of a DIMM won't work (and warn the user accordingly); we could
still try to add as much as possible.
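
Purely as an illustration (hypothetical; assume the device memory
region is [start, start + size) and we have a struct device *dev at
hand), such a check during device init could look like:

struct range range = memhp_get_addressable_range(true);

/* warn early instead of failing random add_memory() calls later */
if (start < range.start || start + size - 1 > range.end)
	dev_warn(dev,
		 "device memory exceeds the addressable/mappable range\n");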

-- 
Thanks,

David / dhildenb
