Message-ID: <BN9PR11MB52767BFBFAE3CB01C9CB29AE8CC89@BN9PR11MB5276.namprd11.prod.outlook.com>
Date:   Wed, 11 May 2022 03:36:09 +0000
From:   "Tian, Kevin" <kevin.tian@...el.com>
To:     Steve Wahl <steve.wahl@....com>
CC:     David Woodhouse <dwmw2@...radead.org>,
        Baolu Lu <baolu.lu@...ux.intel.com>,
        "Rodel, Jorg" <jroedel@...e.de>,
        Kyung Min Park <kyung.min.park@...el.com>,
        Will Deacon <will@...nel.org>,
        "iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
        "Mike Travis" <mike.travis@....com>,
        Dimitri Sivanich <sivanich@....com>,
        "Anderson, Russ" <russ.anderson@....com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH] iommu/vt-d: Increase DMAR_UNITS_SUPPORTED

> From: Steve Wahl <steve.wahl@....com>
> Sent: Wednesday, May 11, 2022 3:07 AM
> 
> On Tue, May 10, 2022 at 01:16:26AM +0000, Tian, Kevin wrote:
> > > From: Steve Wahl <steve.wahl@....com>
> > > Sent: Friday, May 6, 2022 11:26 PM
> > >
> > > On Fri, May 06, 2022 at 08:12:11AM +0000, Tian, Kevin wrote:
> > > > > From: David Woodhouse <dwmw2@...radead.org>
> > > > > Sent: Friday, May 6, 2022 3:17 PM
> > > > >
> > > > > On Fri, 2022-05-06 at 06:49 +0000, Tian, Kevin wrote:
> > > > > > > From: Baolu Lu <baolu.lu@...ux.intel.com>
> > > > > > >
> > > > > > > > --- a/include/linux/dmar.h
> > > > > > > > +++ b/include/linux/dmar.h
> > > > > > > > @@ -19,7 +19,7 @@
> > > > > > > >   struct acpi_dmar_header;
> > > > > > > >
> > > > > > > >   #ifdef	CONFIG_X86
> > > > > > > > -# define	DMAR_UNITS_SUPPORTED	MAX_IO_APICS
> > > > > > > > +# define	DMAR_UNITS_SUPPORTED	640
> > > > > > > >   #else
> > > > > > > >   # define	DMAR_UNITS_SUPPORTED	64
> > > > > > > >   #endif
> > > > > >
> > > > > > ... is it necessary to permanently do a 10x increase, which
> > > > > > wastes memory on most platforms that won't have such a need?
> > > > >
> > > > > I was just looking at that. I think the only actual static array
> > > > > is the dmar_seq_ids bitmap, which grows to 640 *bits* and is
> > > > > fairly negligible. The main growth is the roughly 3½ KiB added to
> > > > > each struct dmar_domain for the iommu_refcnt[] and iommu_did[]
> > > > > arrays.
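
(Rough arithmetic, assuming iommu_refcnt[] is an array of int and
iommu_did[] an array of u16, i.e. 6 bytes per IOMMU slot: going from
MAX_IO_APICS, typically 128 on x86, to 640 slots adds
(640 - 128) * 6 = 3072 bytes per struct dmar_domain, on the order of
the ~3½ KiB figure above; the exact delta depends on the MAX_IO_APICS
value and structure padding.)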
> > > >
> > > > Thanks for the quick experiment! Though the added memory is
> > > > negligible, it's cleaner to me to have a way to configure it, as
> > > > discussed below.
> > > >
> > > > >
> > > > > > Does it make more sense to have a configurable approach similar to
> > > > > > CONFIG_NR_CPUS? or even better can we just replace those static
> > > > > > arrays with dynamic allocation so removing this restriction
> > > > > > completely?
> > > > >
> > > > > Hotplug makes that fun, but I suppose you only need to grow the
> > > > > array in a given struct dmar_domain if you actually add a device
> > > > > to it that's behind a newly added IOMMU. I don't know if the
> > > > > complexity of making it fully dynamic is worth it though. We
> > > > > could make it a config option, and/or a command line option
> > > > > (perhaps automatically derived from CONFIG_NR_CPUS).
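
Purely for illustration, the fully dynamic variant could look roughly
like the sketch below. This is hypothetical and untested: it assumes the
per-domain arrays become pointers rather than the fixed-size arrays the
driver has today, and domain->nr_iommus is an invented field tracking
the currently allocated size (locking and error unwinding elided).

	/*
	 * Hypothetical sketch only: grow the per-domain arrays on demand
	 * when a device behind a newly hotplugged IOMMU is attached.
	 */
	static int domain_grow_iommu_arrays(struct dmar_domain *domain,
					    int seq_id)
	{
		int n = seq_id + 1;
		int old = domain->nr_iommus;
		int *refcnt;
		u16 *did;

		if (n <= old)
			return 0;

		refcnt = krealloc_array(domain->iommu_refcnt, n,
					sizeof(*refcnt), GFP_KERNEL);
		if (!refcnt)
			return -ENOMEM;
		domain->iommu_refcnt = refcnt;

		did = krealloc_array(domain->iommu_did, n,
				     sizeof(*did), GFP_KERNEL);
		if (!did)
			return -ENOMEM;
		domain->iommu_did = did;

		/* krealloc_array() does not zero the new tail. */
		memset(refcnt + old, 0, (n - old) * sizeof(*refcnt));
		memset(did + old, 0, (n - old) * sizeof(*did));
		domain->nr_iommus = n;
		return 0;
	}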
> > > >
> > > > Either a config option or a command line option is OK to me.
> > > > Probably the former is simpler, given there is no need to
> > > > dynamically expand the static array. Btw, though deriving from
> > > > CONFIG_NR_CPUS could work in this case, it is unclear why tying
> > > > the two together is necessary in concept, e.g. is there a
> > > > guarantee that the number of IOMMUs must be smaller than the
> > > > number of CPUs in a platform?
> > > >
> > > > >
> > > > > If it wasn't for hotplug, I think we'd know the right number by
> > > > > the time we actually need it anyway, wouldn't we? Can we have a
> > > > > heuristic for how many DMAR units are likely to be hotplugged?
> > > > > Is it as simple as the ratio of present to not-yet-present CPUs
> > > > > in MADT?
> > > >
> > > > Probably. But I don't have enough knowledge of DMAR hotplug to
> > > > judge (e.g. whether it's strictly tied to CPU hotplug and, if so,
> > > > whether multiple IOMMUs could be hotplugged together with a CPU
> > > > socket)...
> > > >
> > > > Thanks
> > > > Kevin
> > >
> > > Would anyone be more comfortable if we only increase the limit where
> > > MAXSMP is set?
> > >
> > > i.e.
> > >
> > > #if defined(CONFIG_X86) && defined(CONFIG_MAXSMP)
> > > # define	DMAR_UNITS_SUPPORTED	640
> > > #elif defined(CONFIG_X86)
> > > # define	DMAR_UNITS_SUPPORTED	MAX_IO_APICS
> > > #else
> > > # define	DMAR_UNITS_SUPPORTED	64
> > > #endif
> > >
> > > Thank you all for your time looking at this.
> > >
> >
> > This works for your own configuration, but it's unclear whether other
> > MAXSMP platforms have exactly the same requirements (different numbers
> > of sockets, different ratios of #iommus to #sockets, etc.). In any
> > case, since we are at it, having a generic way to extend it makes more
> > sense to me.
> 
> So, to be clear, what you would like to see would be Kconfig entries
> to create a config option, say "NR_DMARS", set up so the default is:
> 
>      MAXSMP?  640

Usually we use a power of 2, thus 1024 is more reasonable. If people do
care about the exact memory footprint, they can always change it
manually.
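
(Under the same per-unit assumptions as in the note above, 1024 entries
cost roughly 1024 * 6 = 6144 bytes, about 6 KiB, per struct dmar_domain,
while the dmar_seq_ids bitmap is still only 1024 bits, i.e. 128 bytes.)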

>      X86_64?  128
>      X86_32?  64
>      other    64
> 
> And DMAR_UNITS_SUPPORTED gets removed, and everywhere it was used we
> use CONFIG_NR_DMARS in its place?

Let's keep DMAR_UNITS_SUPPORTED and just redefine it as CONFIG_NR_DMARS,
to minimize the changes.
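
Something along these lines, as a sketch only (untested; the exact
Kconfig placement and the depends line are a guess, and the defaults
follow the numbers discussed above, with the MAXSMP case rounded up to a
power of 2):

	config NR_DMARS
		int "Maximum number of DMA remapping units"
		depends on DMAR_TABLE
		range 1 1024
		default 1024 if MAXSMP
		default 128 if X86_64
		default 64
		help
		  Maximum number of DMA remapping hardware units (DRHDs)
		  the kernel can handle.

and in include/linux/dmar.h:

	#define DMAR_UNITS_SUPPORTED	CONFIG_NR_DMARS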

> 
> I can give that a shot but wanted to confirm this is what you'd want
> first.
> 
> Thanks,
> 
> --> Steve
> 
> --
> Steve Wahl, Hewlett Packard Enterprise
