Date:   Fri, 21 Jan 2022 02:55:51 +0000
From:   Li Chen <lchen@...arella.com>
To:     Tom Joseph <tjoseph@...ence.com>
CC:     Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
        Rob Herring <robh@...nel.org>,
        Krzysztof Wilczyński <kw@...ux.com>,
        Bjorn Helgaas <bhelgaas@...gle.com>,
        "linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: Why does cdns_pcie_ep_set_bar use sz > SZ_2G for is_64bits in
 pcie-cadence-ep.c?

Hi, Tom

> -----Original Message-----
> From: Tom Joseph [mailto:tjoseph@...ence.com]
> Sent: Thursday, January 20, 2022 9:11 PM
> To: Li Chen
> Cc: Lorenzo Pieralisi; Rob Herring; Krzysztof Wilczyński; Bjorn Helgaas; linux-
> pci@...r.kernel.org; linux-kernel@...r.kernel.org
> Subject: [EXT] RE: Why does cdns_pcie_ep_set_bar use sz > SZ_2G for is_64bits
> in pcie-cadence-ep.c?
> 
> Hi Li,
> 
>  For 64_bits ,  all the odd bars (BAR1, 3 ,5) will be disabled ( so as to use as upper
> bits).

Yes, I get it.

> I see that the code is assuming 32_bits if size < 2G , so all bars could be enabled.
> 

Ok, but I still wonder: why 2G instead of other sizes like 1G or 512M? IMO, if there is no obvious hardware limitation, the choice between 64-bit and 32-bit should be left to the user (like bar_fixed_64bit and bar_fixed_size in pci_epc_features) instead of hardcoding 2G here.

> As I understand, you have a use case where you want to set the bar as 64 bit,
> actually use small size.
> Is it possible to describe bit more about this use case (just curious)?

It's because our SoC uses the 0-64G AMBA address space for DRAM (system memory). So if I want to use a 32-bit BAR at, say, a 16M bus address, I must reserve that 16M area with the kernel's reserved-memory mechanism; otherwise, endpoints like NVMe will report Unsupported Request when they do DMA and the DMA address also falls within that 16M area. More details are in this thread: https://lore.kernel.org/lkml/CH2PR19MB40245BF88CF2F7210DCB1AA4A0669@CH2PR19MB4024.namprd19.prod.outlook.com/T/#m0dd09b7e6f868b9692185ec57c1986b3c786e8d3


So, if I don't want to reserve much memory for the BAR, I have to use a 64-bit BAR, and in my case it must be 64-bit prefetchable, not non-prefetchable. My virtual P2P bridge has three windows: I/O, 32-bit mem, and 64-bit prefetchable mem (64-bit because CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS is set). If the controller running in EP mode uses a 64-bit non-prefetchable BAR, that region falls back to the bridge's 32-bit mem window, and I don't (and cannot) reserve that much 32-bit memory for it.


Regards,
Li
