Message-ID: <ZGz0as//iEbRpHHs@agluck-desk3>
Date:   Tue, 23 May 2023 10:14:18 -0700
From:   Tony Luck <tony.luck@...el.com>
To:     James Morse <james.morse@....com>
Cc:     x86@...nel.org, linux-kernel@...r.kernel.org,
        Fenghua Yu <fenghua.yu@...el.com>,
        Reinette Chatre <reinette.chatre@...el.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        H Peter Anvin <hpa@...or.com>,
        Babu Moger <Babu.Moger@....com>,
        shameerali.kolothum.thodi@...wei.com,
        D Scott Phillips OS <scott@...amperecomputing.com>,
        carl@...amperecomputing.com, lcherian@...vell.com,
        bobo.shaobowang@...wei.com, tan.shaopeng@...itsu.com,
        xingxin.hx@...nanolis.org, baolin.wang@...ux.alibaba.com,
        Jamie Iles <quic_jiles@...cinc.com>,
        Xin Hao <xhao@...ux.alibaba.com>, peternewman@...gle.com
Subject: Re: [PATCH v3 00/19] x86/resctrl: monitored closid+rmid together,
 separate arch/fs locking

Hi all,

Looking at the changes already applied, and those planned to support
new architectures, new features, and quirks in specific implementations,
it is clear to me that the original resctrl file system implementation
did not provide enough flexibility for all the additions that are
needed.

So I've begun musing with 20-20 hindsight on how resctrl could have
provided better abstract building blocks.

The concept of a "resource" structure with a list of domains for
specific instances of that structure on a platform still seems like
a good building block.

But sharing those structures across increasingly different
implementations of the underlying resources requires ever more
gymnastics to make all the new uses co-exist with the old. E.g. the
domain structure has elements for every type of resource, even though
each instance is linked to just one resource type.
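
Roughly the shape I have in mind (a sketch; the field and type names
are made up, not the current struct rdt_resource/rdt_domain layout):

/*
 * Sketch: one resource, with a list of domain instances that are
 * created/destroyed as CPUs come and go. Per-resource state hangs
 * off a private pointer instead of today's union of every resource
 * type in one domain structure.
 */
struct res_domain {
        struct list_head list;          /* on parent's domain list */
        int id;                         /* e.g. cache id or node id */
        struct cpumask cpu_mask;        /* CPUs in this domain */
        void *priv;                     /* module's per-domain state */
};

struct res_resource {
        struct list_head list;          /* on global resource list */
        const char *name;               /* "L3", "MB", ... */
        struct list_head domains;      /* struct res_domain instances */
};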

I had begun this journey with a plan to just allow new features to
hook into the existing resctrl filesystem with a "driver" registration
mechanism:

https://lore.kernel.org/all/20230420220636.53527-1-tony.luck@intel.com/

But feedback from Reinette was that this would be cleaner if drivers
created new resources, rather than adding a patchwork of callback
functions with special case "if (type == DRIVER)" checks sprinkled
around. That made me look into a more radical redesign instead of
joining the trend of making the smallest set of changes to meet my
goals.


Goals:
1) User interfaces for existing resource control features should be
unchanged.

2) Admin interface should have the same capabilities, but interfaces
may change. E.g. kernel command line and mount options may be replaced
by choosing which resource modules to load.

3) Should be easy to create new modules to handle big differences
between implementations, or to handle model specific features that
may not exist in the same form across multiple CPU generations.

Initial notes:

Core resctrl filesystem functionality will just be:

1) Mount/unmount of filesystem. Architecture hook to allocate monitor
and control IDs for the default group.

2) Creation/removal/rename of control and monitor directories (with
calls to architecture specific code to allocate/free the control and
monitor IDs attached to the directory).

3) Maintaining the "tasks" file with architecture code to update the
control and monitor IDs in the task structure.

4) Maintaining the "cpus" file (similar to "tasks").

5) Context switch code to update h/w with control/monitor IDs.

6) CPU hotplug interface to build and maintain domain list for each
registered resource.

7) Framework for "schemata" file. Calls to resource specific functions
to maintain each line in the file.

8) Resource registration interface for modules to add new resources
to the list (and remove them on module unload). Modules may add files
to the info/ hierarchy, and also to each mon_data/ directory and/or
to each control/control_mon directory. A possible shape for this
interface is sketched after this list.

9) Note that the core code starts with an empty list of resources.
System admins must load modules to add support for each resource they
want to use.
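
For (8), the interface might look something like this (the names are
hypothetical, just to show the kind of hooks each resource module
would supply):

/*
 * Sketch of a module-facing registration interface. All names
 * are illustrative.
 */
struct res_ops {
        /* parse/apply one line of the schemata file */
        int (*parse_line)(struct res_resource *r, char *line);
        /* print this resource's current schemata line */
        void (*show_line)(struct res_resource *r, struct seq_file *s);
        /* domains coming/going with CPU hotplug */
        int (*domain_add)(struct res_resource *r, struct res_domain *d);
        void (*domain_remove)(struct res_resource *r, struct res_domain *d);
};

int resctrl_register_resource(struct res_resource *r,
                              const struct res_ops *ops);
void resctrl_unregister_resource(struct res_resource *r);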


We'd need a bunch of modules to cover existing x86 functionality.
E.g. an "L3" one for standard L3 cache allocation, and an "L3CDP" one,
used instead of the plain "L3" one for code/data prioritization (CDP)
mode, creating a separate resource for each of code & data.
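
E.g. the CDP module's init might be little more than this (a sketch,
reusing the hypothetical registration interface above):

/* Sketch: L3CDP replaces the plain "L3" resource with separate
 * code/data resources that share the same cache domains. */
static const struct res_ops l3cdp_ops;  /* hooks as sketched above */

static struct res_resource l3_code = { .name = "L3CODE" };
static struct res_resource l3_data = { .name = "L3DATA" };

static int __init l3cdp_init(void)
{
        int ret;

        ret = resctrl_register_resource(&l3_code, &l3cdp_ops);
        if (ret)
                return ret;

        ret = resctrl_register_resource(&l3_data, &l3cdp_ops);
        if (ret)
                resctrl_unregister_resource(&l3_code);
        return ret;
}
module_init(l3cdp_init);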

There would be logically separate mbm_local, mbm_total, and
llc_cache_occupancy modules (though the mbm ones could be combined,
since both need a periodic counter read to avoid wraparound - see the
sketch below). An "MB" module for memory bandwidth allocation.
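
The wraparound handling is the usual width-limited counter
arithmetic, e.g. (a sketch; on real h/w the counter width comes from
CPUID):

/*
 * Sketch: the raw h/w counters are only `width` bits wide, so the
 * periodic read must happen often enough that a counter cannot wrap
 * more than once between reads. Shifting both samples up to bit 63
 * makes the subtraction wrap correctly in 64-bit arithmetic.
 */
static u64 mbm_update(u64 *chunks, u64 *prev_raw, u64 now_raw,
                      unsigned int width)
{
        unsigned int shift = 64 - width;
        u64 delta = (now_raw << shift) - (*prev_raw << shift);

        *prev_raw = now_raw;
        *chunks += delta >> shift;

        return *chunks;
}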

The "mba_MBps" mount option would be replaced with a module that does
both memory bandwidth allocation and monitoring, with a s/w feedback loop.
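
In outline that feedback loop is simple (a sketch with made-up names,
not the existing feedback logic):

/*
 * Sketch: periodically compare measured bandwidth (derived from the
 * mbm counters) against the user's MBps target and nudge the h/w
 * bandwidth cap up or down.
 */
static void mba_mbps_tick(struct mba_group *g)
{
        u64 cur_mbps = mbm_read_mbps(g);        /* hypothetical helper */

        if (cur_mbps > g->target_mbps && g->cap_pct > g->min_pct)
                g->cap_pct -= g->gran_pct;      /* over target: clamp harder */
        else if (cur_mbps < g->target_mbps && g->cap_pct < 100)
                g->cap_pct += g->gran_pct;      /* under target: ease off */

        apply_mba_cap(g, g->cap_pct);           /* hypothetical helper */
}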

Peter's workaround for the quirks of AMD monitoring could become a
separate AMD specific module. But minor differences (e.g. Intel's
requirement for contiguous cache bitmasks) could be handled within a
module if desired.

Pseudo-locking would be another case: load a pseudo-locking module
instead of the basic cache allocation one to set up pseudo-locked
regions and enforce the cache bitmask rules between resctrl groups.

Core resctrl code could handle overlaps between modules that want to
control the same resource with a "first to load reserves that feature"
policy.
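
E.g. the registration call from the earlier sketch could simply
refuse a second claim (illustrative fields again):

/*
 * Sketch: first module to claim a piece of h/w reserves it.
 * resource_list and the hw_feature tag are illustrative.
 */
int resctrl_register_resource(struct res_resource *r,
                              const struct res_ops *ops)
{
        struct res_resource *other;

        list_for_each_entry(other, &resource_list, list)
                if (other->hw_feature == r->hw_feature)
                        return -EBUSY;  /* first to load already won */

        r->ops = ops;
        list_add_tail(&r->list, &resource_list);
        return 0;
}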

Are there additional ARM specific architectural requirements that this
approach isn't addressing? Could the core functionality be extended to
make life easier for ARM?

-Tony
