[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <87im4xe3pk.fsf@mpe.ellerman.id.au>
Date: Thu, 08 Apr 2021 15:37:59 +1000
From: Michael Ellerman <mpe@...erman.id.au>
To: Leonardo Bras <leobras.c@...il.com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Alexey Kardashevskiy <aik@...abs.ru>,
Leonardo Bras <leobras.c@...il.com>, brking@...ux.vnet.ibm.com
Cc: linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/1] powerpc/iommu: Enable remaining IOMMU Pagesizes
present in LoPAR
Leonardo Bras <leobras.c@...il.com> writes:
> According to LoPAR, ibm,query-pe-dma-window output named "IO Page Sizes"
> will let the OS know all possible pagesizes that can be used for creating a
> new DDW.
>
> Currently Linux will only try using 3 of the 8 available options:
> 4K, 64K and 16M. According to LoPAR, Hypervisor may also offer 32M, 64M,
> 128M, 256M and 16G.
Do we know of any hardware & hypervisor combination that will actually
give us bigger pages?
> Enabling bigger pages would be interesting for direct mapping systems
> with a lot of RAM, while using less TCE entries.
>
> Signed-off-by: Leonardo Bras <leobras.c@...il.com>
> ---
> arch/powerpc/platforms/pseries/iommu.c | 49 ++++++++++++++++++++++----
> 1 file changed, 42 insertions(+), 7 deletions(-)
>
> diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
> index 9fc5217f0c8e..6cda1c92597d 100644
> --- a/arch/powerpc/platforms/pseries/iommu.c
> +++ b/arch/powerpc/platforms/pseries/iommu.c
> @@ -53,6 +53,20 @@ enum {
> DDW_EXT_QUERY_OUT_SIZE = 2
> };
A comment saying where the values come from would be good.
> +#define QUERY_DDW_PGSIZE_4K 0x01
> +#define QUERY_DDW_PGSIZE_64K 0x02
> +#define QUERY_DDW_PGSIZE_16M 0x04
> +#define QUERY_DDW_PGSIZE_32M 0x08
> +#define QUERY_DDW_PGSIZE_64M 0x10
> +#define QUERY_DDW_PGSIZE_128M 0x20
> +#define QUERY_DDW_PGSIZE_256M 0x40
> +#define QUERY_DDW_PGSIZE_16G 0x80
I'm not sure the #defines really gain us much vs just putting the
literal values in the array below?
> +struct iommu_ddw_pagesize {
> + u32 mask;
> + int shift;
> +};
> +
> static struct iommu_table_group *iommu_pseries_alloc_group(int node)
> {
> struct iommu_table_group *table_group;
> @@ -1099,6 +1113,31 @@ static void reset_dma_window(struct pci_dev *dev, struct device_node *par_dn)
> ret);
> }
>
> +/* Returns page shift based on "IO Page Sizes" output at ibm,query-pe-dma-window. See LoPAR */
> +static int iommu_get_page_shift(u32 query_page_size)
> +{
> + const struct iommu_ddw_pagesize ddw_pagesize[] = {
> + { QUERY_DDW_PGSIZE_16G, __builtin_ctz(SZ_16G) },
> + { QUERY_DDW_PGSIZE_256M, __builtin_ctz(SZ_256M) },
> + { QUERY_DDW_PGSIZE_128M, __builtin_ctz(SZ_128M) },
> + { QUERY_DDW_PGSIZE_64M, __builtin_ctz(SZ_64M) },
> + { QUERY_DDW_PGSIZE_32M, __builtin_ctz(SZ_32M) },
> + { QUERY_DDW_PGSIZE_16M, __builtin_ctz(SZ_16M) },
> + { QUERY_DDW_PGSIZE_64K, __builtin_ctz(SZ_64K) },
> + { QUERY_DDW_PGSIZE_4K, __builtin_ctz(SZ_4K) }
> + };
cheers
Powered by blists - more mailing lists