Message-ID: <CAAhSdy2uEgyvs=5NdoZ3H208qeDGJ9OrTOeaz6HQ+1va+R3dUA@mail.gmail.com>
Date: Thu, 14 Mar 2019 23:28:32 +0530
From: Anup Patel <anup@...infault.org>
To: Mike Rapoport <rppt@...ux.ibm.com>
Cc: Anup Patel <Anup.Patel@....com>,
Palmer Dabbelt <palmer@...ive.com>,
Albert Ou <aou@...s.berkeley.edu>,
Atish Patra <Atish.Patra@....com>,
Paul Walmsley <paul.walmsley@...ive.com>,
Christoph Hellwig <hch@...radead.org>,
"linux-riscv@...ts.infradead.org" <linux-riscv@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 3/3] RISC-V: Allow booting kernel from any 4KB aligned address
On Thu, Mar 14, 2019 at 12:23 PM Mike Rapoport <rppt@...ux.ibm.com> wrote:
>
> On Thu, Mar 14, 2019 at 02:36:01AM +0530, Anup Patel wrote:
> > On Thu, Mar 14, 2019 at 12:01 AM Mike Rapoport <rppt@...ux.ibm.com> wrote:
> > >
> > > On Tue, Mar 12, 2019 at 10:08:22PM +0000, Anup Patel wrote:
> > > > Currently, we have to boot RISCV64 kernel from a 2MB aligned physical
> > > > address and RISCV32 kernel from a 4MB aligned physical address. This
> > > > constraint is because initial pagetable setup (i.e. setup_vm()) maps
> > > > entire RAM using hugepages (i.e. 2MB for 3-level pagetable and 4MB for
> > > > 2-level pagetable).
> > > >
> > > > Further, the above booting constraint also results in memory wastage
> > > > because if we boot the kernel from some <xyz> address (which is not the
> > > > same as the RAM start address) then the RISCV kernel will map the
> > > > PAGE_OFFSET virtual address linearly to the <xyz> physical address, and
> > > > the memory between RAM start and <xyz> will be reserved/unusable.
> > > >
> > > > For example, RISCV64 kernel booted from 0x80200000 will waste 2MB of RAM
> > > > and RISCV32 kernel booted from 0x80400000 will waste 4MB of RAM.
> > > >
> > > > This patch re-writes the initial pagetable setup code to allow booting
> > > > RISCV32 and RISCV64 kernels from any 4KB (i.e. PAGE_SIZE) aligned address.
> > > >
> > > > To achieve this:
> > > > 1. We map the kernel, dtb and only some amount of RAM (a few MBs) using
> > > > 4KB mappings in setup_vm() (called from head.S)
> > > > 2. Once we reach paging_init() (called from setup_arch()) after
> > > > memblock setup, we map all available memory banks using 4KB
> > > > mappings and memblock APIs.
> > >
> > > I'm not really familiar with RISC-V, but my guess would be that you'd get
> > > worse TLB performance with 4KB mappings. Not to mention the amount of
> > > memory required for the page table itself.
> >
> > I agree we will see a hit in TLB performance due to 4KB mappings.
> >
> > To address this we can create 2MB (or 4MB on a 32bit system) mappings
> > whenever load_pa is aligned to it; otherwise we prefer 4KB mappings. In other
> > words, we create bigger mappings whenever possible and fall back to 4KB
> > mappings when not possible.
> >
> > This way if kernel is booted from 2MB (or 4MB) aligned address then we will
> > see good TLB performance for kernel addresses. Also, users are still free to
> > boot Linux RISC-V kernel from any 4KB aligned address.
> >
> > Of course, we will have to document this as part of Linux RISC-V booting
> > requirements under Documentation/ (which does not exist currently).
> >
> > >
> > > If the only goal is to utilize the physical memory below the kernel, it
> > > simply should not be reserved at the first place, something like:
> >
> > Well, our goal was two-fold:
> >
> > 1. We wanted to unify boot-time alignment requirements for 32bit and
> > 64bit RISC-V systems
>
> Can't they both start from 4MB aligned address provided the memory below
> the kernel can be freed?
Yes, they can both start from a 4MB aligned address.
>
> > 2. Save memory by allowing users to place kernel just after the runtime
> > firmware at starting of RAM.
>
> If the firmware should be alive after kernel boot, its memory is the only
> part that should be reserved below the kernel. Otherwise, the entire region
> <physical memory start> - <kernel start> can be free.
>
> Using 4K pages for the swapper_pg_dir is quite a change and I'm not
> convinced it's really justified.
I understand your concern about TLB performance and the extra page
tables.

Beyond 2MB/4MB mappings, we should also be able to create 1GB
mappings for good TLB performance.

I suggest we use the best possible mapping size (4KB, 2MB, or 1GB)
based on the alignment of the kernel load address. This way users can
boot from any 4KB aligned address and setup_vm() will try to use the
biggest possible mapping size.

For example, if the kernel load address is aligned to 2MB then we
create 2MB mappings and use fewer page tables. The same is
possible for 1GB mappings as well.
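
To make this concrete, below is a rough sketch (not the actual patch;
best_map_size() and the size constants are only illustrative) of how
setup_vm() could pick the mapping size from the alignment of the load
address and the size of the region being mapped:

#include <stdint.h>

#define PAGE_SIZE	(4UL * 1024)				/* 4KB */
#define PMD_SIZE	(2UL * 1024 * 1024)			/* 2MB */
#define PGDIR_SIZE	(1UL * 1024 * 1024 * 1024)		/* 1GB */

/*
 * Pick the largest mapping size that both the physical load address
 * and the remaining region size allow, falling back to 4KB pages.
 */
static uintptr_t best_map_size(uintptr_t load_pa, uintptr_t size)
{
	if (!(load_pa & (PGDIR_SIZE - 1)) && size >= PGDIR_SIZE)
		return PGDIR_SIZE;	/* 1GB mapping possible */

	if (!(load_pa & (PMD_SIZE - 1)) && size >= PMD_SIZE)
		return PMD_SIZE;	/* 2MB mapping possible */

	return PAGE_SIZE;		/* plain 4KB mapping */
}

With such a helper, a kernel loaded at a 2MB (or 1GB) aligned address
automatically gets the bigger mappings, while any other 4KB aligned
load address still boots with 4KB mappings.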
Regards,
Anup