Message-ID: <aXOMVYowDxHBL8kg@ryzen>
Date: Fri, 23 Jan 2026 15:57:25 +0100
From: Niklas Cassel <cassel@...nel.org>
To: Koichiro Den <den@...inux.co.jp>
Cc: Manivannan Sadhasivam <mani@...nel.org>, bhelgaas@...gle.com,
	kwilczynski@...nel.org, frank.li@....com, linux-pci@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v8 0/5] PCI: endpoint: BAR subrange mapping support

On Fri, Jan 23, 2026 at 11:08:39PM +0900, Koichiro Den wrote:
> > 
> > So if we wanted to, a good number would be to have at least a few BARs of size
> > 128k or larger (so there could be two submaps), since I assume that some other
> > DWC controllers might also have 64k min alignment.
> 
> I'm not entirely sure whether it's really acceptable to bump the hard-coded
> sizes for BAR0/1/2/3 (512, 512, 1024, 16384) to 128k, in other words,
> whether there were any reasons behind choosing such small default numbers.
> Let's see. I agree that 128k or larger should suffice for DWC-based
> platforms (as you mentioned, testing with two submaps).

When the PCI endpoint subsystem was created, most SoCs were arm32, and many
of them had a very small PCIe aperture, like a few megabytes in total.

So if you were running two boards based on the same SoC, one as host and
one as endpoint, and perhaps you even had a PCIe switch on the host board,
you really did not want to have larger BARs than needed, because they would
not fit inside the small address space dedicated to PCIe.

Additionally, the PCI endpoint subsystem allocates backing memory for
these BARs, and some of these systems might have a very small amount of RAM.

However, I think that:
static size_t bar_size[] = { 512, 512, 1024, 16384, 131072, 1048576 };

512 + 512 + 1024 + 16384 + 131072 + 1048576 = 1 MiB + 146 KiB

is still quite low...

As more and more features are added to the PCI endpoint subsystem,
these small BAR sizes will not be enough to evaluate them.
E.g. for Resizable BARs, as per the PCIe specification, the minimum
possible size for a Resizable BAR is 1 MB.

I solved this by making sure that pci_epf_alloc_space() overrides
the requested size, setting it to 1 MB, if the BAR type is BAR_RESIZABLE:
https://git.kernel.org/pub/scm/linux/kernel/git/pci/pci.git/commit/?h=52132f3a63b33fd38ceef07392ed176db84d579f

If a few MB is too much for your host system, use a different host
system to test. (E.g. if you connect these arm32 boards to a PC,
and run pci_endpoint_test, larger BAR sizes would not be a problem,
assuming that the endpoint itself can allocate enough backing memory.)

So my suggestion is that we just bump the defaults...


I guess in the worst case, if someone actually complains, a nice
solution would be to do what you are doing for vntb:
https://lore.kernel.org/linux-pci/20260118135440.1958279-34-den@valinux.co.jp/

i.e. pci-epf-test could have {barX_size} attributes in configfs, one per
BAR, and then the users themselves could configure the BAR sizes that
they want to run pci-epf-test with, if the pci-epf-test default sizes are
not desirable, before starting the link. (Some tests, e.g. the subrange
mapping test, should of course fail if there is not a single BAR with a
BAR size larger than needed to test the feature.)
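(If such attributes were added, usage could look roughly like this; the
barX_size attribute names are the proposal above, not an existing
interface, and the configfs path follows the usual pci_ep layout:)

```shell
# Hypothetical: set per-BAR sizes before binding/starting the link.
cd /sys/kernel/config/pci_ep/functions/pci_epf_test/func1
echo 1048576 > bar2_size   # 1 MB for BAR2 (attribute is hypothetical)
echo 131072  > bar4_size   # 128 KB for BAR4
# ...then bind the function to the controller and start the link as usual.
```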

But if I were you, I would just bump the defaults, since the defaults
are currently overridden for BAR types FIXED_BAR and RESIZABLE_BAR anyway,
and just add the barX_size attributes in configfs if someone complains.


Kind regards,
Niklas
