Message-ID: <783594b187e1d4dbeaafe9f186f9a1de8bbf15e4.camel@kernel.org>
Date:   Fri, 10 Sep 2021 16:17:44 +0300
From:   Jarkko Sakkinen <jarkko@...nel.org>
To:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc:     Dave Hansen <dave.hansen@...ux.intel.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>,
        "Rafael J. Wysocki" <rafael@...nel.org>, linux-sgx@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 1/3] x86/sgx: Report SGX memory in
 /sys/devices/system/node/node*/meminfo

On Fri, 2021-09-10 at 08:51 +0200, Greg Kroah-Hartman wrote:
> On Fri, Sep 10, 2021 at 03:17:24AM +0300, Jarkko Sakkinen wrote:
> > The amount of SGX memory on the system is determined by the BIOS and it
> > varies wildly between systems.  It can be from dozens of MBs on desktops
> > or VMs, up to many GBs on servers.  Just like for regular memory, it is
> > sometimes useful to know the amount of usable SGX memory in the system.
> > 
> > Add SGX_MemTotal field to /sys/devices/system/node/node*/meminfo,
> > showing the total SGX memory in each NUMA node. The total memory for
> > each NUMA node is calculated by adding the sizes of contained EPC
> > sections together.
> > 
> > Introduce arch_node_read_meminfo(), which can optionally be rewritten by
> > the arch code, and rewrite it for x86 so it prints SGX_MemTotal.
> > 
> > Signed-off-by: Jarkko Sakkinen <jarkko@...nel.org>
> > ---
> > v4:
> > * A new patch.
> >  arch/x86/kernel/cpu/sgx/main.c | 14 ++++++++++++++
> >  arch/x86/kernel/cpu/sgx/sgx.h  |  6 ++++++
> >  drivers/base/node.c            | 10 +++++++++-
> >  3 files changed, 29 insertions(+), 1 deletion(-)
> 
> Where is the Documentation/ABI/ update for this new sysfs file?

It has existed for a long time, e.g.

 cat /sys/devices/system/node/node0/meminfo
Node 0 MemTotal:       32706792 kB
Node 0 MemFree:         5382988 kB
Node 0 MemUsed:        27323804 kB
Node 0 SwapCached:            8 kB
Node 0 Active:          3640612 kB
Node 0 Inactive:       21757684 kB
Node 0 Active(anon):    2833772 kB
Node 0 Inactive(anon):    14392 kB
Node 0 Active(file):     806840 kB
Node 0 Inactive(file): 21743292 kB
Node 0 Unevictable:       80640 kB
Node 0 Mlocked:           80640 kB
Node 0 Dirty:                36 kB
Node 0 Writeback:             0 kB
Node 0 FilePages:      22833124 kB
Node 0 Mapped:          1112832 kB
Node 0 AnonPages:       2645812 kB
Node 0 Shmem:            282984 kB
Node 0 KernelStack:       18544 kB
Node 0 PageTables:        46704 kB
Node 0 NFS_Unstable:          0 kB
Node 0 Bounce:                0 kB
Node 0 WritebackTmp:          0 kB
Node 0 KReclaimable:    1311812 kB
Node 0 Slab:            1542220 kB
Node 0 SReclaimable:    1311812 kB
Node 0 SUnreclaim:       230408 kB
Node 0 AnonHugePages:         0 kB
Node 0 ShmemHugePages:        0 kB
Node 0 ShmemPmdMapped:        0 kB
Node 0 FileHugePages:        0 kB
Node 0 FilePmdMapped:        0 kB
Node 0 HugePages_Total:     0
Node 0 HugePages_Free:      0
Node 0 HugePages_Surp:      0

This file is undocumented but the fields seem to reflect what is in
/proc/meminfo, so I added an additional patch for documentation:

https://lore.kernel.org/linux-sgx/20210910001726.811497-3-jarkko@kernel.org/

I have no idea why things are how they are. I'm merely trying to follow
the existing patterns. I'm also fully aware of the de facto sysfs pattern,
i.e. one value per file.

I figured that, the situation being what it is, I would end up doing this
wrong one way or another, so this is the anti-pattern I picked for my
wrongdoing :-) I'm sorry about it.
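
As a side note on the mechanics: the drivers/base/node.c hunk is not quoted
below, but given the commit message ("can optionally be rewritten by the arch
code") the generic side presumably boils down to a weak default that the x86
hunk below overrides. A minimal sketch only, assuming that wiring:

  /* drivers/base/node.c -- sketch, assuming a __weak default hook */
  ssize_t __weak arch_node_read_meminfo(struct device *dev,
                                        struct device_attribute *attr,
                                        char *buf, int len)
  {
          /* No arch-specific counters by default; length is unchanged. */
          return len;
  }

  static ssize_t node_read_meminfo(struct device *dev,
                                   struct device_attribute *attr, char *buf)
  {
          int len = 0;

          /* ... existing "Node %d MemTotal: ..." lines emitted here ... */

          /* Let the arch append its own counters, e.g. SGX_MemTotal. */
          len = arch_node_read_meminfo(dev, attr, buf, len);

          return len;
  }

With a default like that, architectures with nothing extra to report need no
changes at all.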

> > diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
> > index 63d3de02bbcc..4c6da5f4a9d4 100644
> > --- a/arch/x86/kernel/cpu/sgx/main.c
> > +++ b/arch/x86/kernel/cpu/sgx/main.c
> > @@ -717,6 +717,7 @@ static bool __init sgx_page_cache_init(void)
> >  		}
> >  
> >  		sgx_epc_sections[i].node =  &sgx_numa_nodes[nid];
> > +		sgx_numa_nodes[nid].size += size;
> >  
> >  		sgx_nr_epc_sections++;
> >  	}
> > @@ -790,6 +791,19 @@ int sgx_set_attribute(unsigned long *allowed_attributes,
> >  }
> >  EXPORT_SYMBOL_GPL(sgx_set_attribute);
> >  
> > +ssize_t arch_node_read_meminfo(struct device *dev,
> > +			       struct device_attribute *attr,
> > +			       char *buf, int len)
> > +{
> > +	struct sgx_numa_node *node = &sgx_numa_nodes[dev->id];
> > +
> > +	len += sysfs_emit_at(buf, len,
> > +			     "Node %d SGX_MemTotal:   %8lu kB\n",
> > +			     dev->id, node->size);
> 
> Wait, that is not how sysfs files work.  they are "one value per file"
> Please do not have multiple values in a single sysfs file, that is not
> acceptable at all.

Yeah, I'm wondering what would be the right corrective steps, given the
"established science".

> thanks,
> 
> greg k-h

/Jarkko
