Message-ID: <c7166cae-bf89-8bdd-5849-72b5949fc6cc@oracle.com>
Date: Fri, 19 Feb 2021 11:45:50 -0500
From: George Kennedy <george.kennedy@...cle.com>
To: Andrey Konovalov <andreyknvl@...gle.com>
Cc: David Hildenbrand <david@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Catalin Marinas <catalin.marinas@....com>,
Vincenzo Frascino <vincenzo.frascino@....com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Konrad Rzeszutek Wilk <konrad@...nok.org>,
Will Deacon <will.deacon@....com>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
Alexander Potapenko <glider@...gle.com>,
Marco Elver <elver@...gle.com>,
Peter Collingbourne <pcc@...gle.com>,
Evgenii Stepanov <eugenis@...gle.com>,
Branislav Rankov <Branislav.Rankov@....com>,
Kevin Brodsky <kevin.brodsky@....com>,
Christoph Hellwig <hch@...radead.org>,
kasan-dev <kasan-dev@...glegroups.com>,
Linux ARM <linux-arm-kernel@...ts.infradead.org>,
Linux Memory Management List <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Dhaval Giani <dhaval.giani@...cle.com>
Subject: Re: [PATCH] mm, kasan: don't poison boot memory

On 2/18/2021 7:09 PM, Andrey Konovalov wrote:
> On Fri, Feb 19, 2021 at 1:06 AM George Kennedy
> <george.kennedy@...cle.com> wrote:
>>
>>
>> On 2/18/2021 3:55 AM, David Hildenbrand wrote:
>>> On 17.02.21 21:56, Andrey Konovalov wrote:
>>>> During boot, all non-reserved memblock memory is exposed to the buddy
>>>> allocator. Poisoning all that memory with KASAN lengthens boot time,
>>>> especially on systems with a large amount of RAM. This patch makes
>>>> page_alloc not call kasan_free_pages() on all new memory.
>>>>
>>>> __free_pages_core() is used when exposing fresh memory during system
>>>> boot and when onlining memory during hotplug. This patch adds a new
>>>> FPI_SKIP_KASAN_POISON flag and passes it to __free_pages_ok() through
>>>> free_pages_prepare() from __free_pages_core().
>>>>
>>>> This has little impact on KASAN memory tracking.
>>>>
>>>> Assuming that there are no references to newly exposed pages before they
>>>> are ever allocated, there won't be any intended (but buggy) accesses to
>>>> that memory that KASAN would normally detect.
>>>>
>>>> However, with this patch, KASAN stops detecting wild and large
>>>> out-of-bounds accesses that happen to land on a fresh memory page that
>>>> was never allocated. This is taken as an acceptable trade-off.
>>>>
>>>> All memory allocated normally when the boot is over keeps getting
>>>> poisoned as usual.
>>>>
>>>> Signed-off-by: Andrey Konovalov <andreyknvl@...gle.com>
>>>> Change-Id: Iae6b1e4bb8216955ffc14af255a7eaaa6f35324d
>>> Not sure this is the right thing to do, see
>>>
>>> https://lkml.kernel.org/r/bcf8925d-0949-3fe1-baa8-cc536c529860@oracle.com
>>>
>>> Reversing the order in which memory gets allocated + used during boot
>>> (in a patch by me) might have revealed an invalid memory access during
>>> boot.
>>>
>>> I suspect that issue would no longer be caught with your patch, as the
>>> invalid memory access would simply no longer be detected.
>>> Now, I cannot prove that :)
>> Since David's patch we've been having trouble with the iBFT ACPI table,
>> which is mapped in via kmap() - see acpi_map() in "drivers/acpi/osl.c".
>> KASAN reports a use-after-free when ibft_init() accesses the iBFT table,
>> but so far we can't find where it gets freed (we've instrumented the
>> calls to kunmap()).
> Maybe it doesn't get freed, but what you see is a wild or a large
> out-of-bounds access. Since KASAN marks all memory as freed during the
> memblock->page_alloc transition, such bugs can manifest as
> use-after-frees.
It gets freed and re-used. By the time the iBFT table is accessed by
ibft_init(), the page has been overwritten.

Setting the page flags as follows before the call to kmap() prevents the
iBFT table page from being freed:
diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
index 0418feb..41c1bbd 100644
--- a/drivers/acpi/osl.c
+++ b/drivers/acpi/osl.c
@@ -287,9 +287,14 @@ static void __iomem *acpi_map(acpi_physical_address pg_off, unsigned long pg_sz)
 	pfn = pg_off >> PAGE_SHIFT;
 	if (should_use_kmap(pfn)) {
+		struct page *page = pfn_to_page(pfn);
+
 		if (pg_sz > PAGE_SIZE)
 			return NULL;
-		return (void __iomem __force *)kmap(pfn_to_page(pfn));
+
+		page->flags |= ((1UL << PG_unevictable) | (1UL << PG_reserved) | (1UL << PG_locked));
+
+		return (void __iomem __force *)kmap(page);
 	} else
 		return acpi_os_ioremap(pg_off, pg_sz);
 }
Just not sure of the correct way to set the page flags.
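For what it's worth, an equivalent (untested) way to set the same flags
would be to go through the SetPage* helpers from include/linux/page-flags.h
instead of OR-ing bits into page->flags directly. The ibft_kmap_page()
wrapper below is a made-up name, only to show the helper calls; PG_locked
is left out since lock_page() is the normal way to take that bit:

/*
 * Untested sketch, not the actual change: mark the page so the rest of
 * MM leaves it alone before mapping it.
 */
#include <linux/mm.h>
#include <linux/highmem.h>

static void __iomem *ibft_kmap_page(unsigned long pfn)
{
	struct page *page = pfn_to_page(pfn);

	SetPageReserved(page);		/* never treated as free memory */
	SetPageUnevictable(page);	/* keep it off the LRU/reclaim paths */

	return (void __iomem __force *)kmap(page);
}

(Again, just a sketch; whether acpi_map() is the right place to pin the
page at all is a separate question.)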
George