Message-ID: <d3a458d0-5f39-4374-957e-a2a3edf4983a@oracle.com>
Date: Wed, 23 Apr 2025 17:36:30 -0700
From: Libo Chen <libo.chen@...cle.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: akpm@...ux-foundation.org, peterz@...radead.org, mgorman@...e.de,
        mingo@...hat.com, juri.lelli@...hat.com, vincent.guittot@...aro.org,
        tj@...nel.org, llong@...hat.com, sraithal@....com,
        venkat88@...ux.ibm.com, kprateek.nayak@....com, raghavendra.kt@....com,
        yu.c.chen@...el.com, tim.c.chen@...el.com, vineethr@...ux.ibm.com,
        chris.hyser@...cle.com, daniel.m.jordan@...cle.com,
        lorenzo.stoakes@...cle.com, mkoutny@...e.com, linux-mm@...ck.org,
        cgroups@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 2/2] sched/numa: Add tracepoint that tracks the
 skipping of numa balancing due to cpuset memory pinning



On 4/23/25 17:18, Steven Rostedt wrote:
> On Wed, 23 Apr 2025 17:01:46 -0700
> Libo Chen <libo.chen@...cle.com> wrote:
> 
>> +++ b/include/trace/events/sched.h
>> @@ -745,6 +745,37 @@ TRACE_EVENT(sched_skip_vma_numa,
>>  		  __entry->vm_end,
>>  		  __print_symbolic(__entry->reason, NUMAB_SKIP_REASON))
>>  );
>> +
>> +TRACE_EVENT(sched_skip_cpuset_numa,
>> +
>> +	TP_PROTO(struct task_struct *tsk, nodemask_t *mem_allowed_ptr),
>> +
>> +	TP_ARGS(tsk, mem_allowed_ptr),
>> +
>> +	TP_STRUCT__entry(
>> +		__array( char,		comm,		TASK_COMM_LEN		)
>> +		__field( pid_t,		pid					)
>> +		__field( pid_t,		tgid					)
>> +		__field( pid_t,		ngid					)
>> +		__array( unsigned long, mem_allowed, BITS_TO_LONGS(MAX_NUMNODES))
>> +	),
>> +
>> +	TP_fast_assign(
>> +		memcpy(__entry->comm, tsk->comm, TASK_COMM_LEN);
>> +		__entry->pid		 = task_pid_nr(tsk);
>> +		__entry->tgid		 = task_tgid_nr(tsk);
>> +		__entry->ngid		 = task_numa_group_id(tsk);
>> +		memcpy(__entry->mem_allowed, mem_allowed_ptr->bits,
>> +		       sizeof(__entry->mem_allowed));
> 
> Is mem_allowed->bits guaranteed to be BITS_TO_LONGS(MAX_NUMNODES) longs
> in size? If not, then memcpy will read beyond that size.
> 

Yes, it is guaranteed by the definitions of nodemask_t and DECLARE_BITMAP:

// include/linux/nodemask_types.h 
typedef struct { DECLARE_BITMAP(bits, MAX_NUMNODES); } nodemask_t;

// include/linux/types.h
#define DECLARE_BITMAP(name,bits) \
	unsigned long name[BITS_TO_LONGS(bits)]



Thanks,
Libo
> -- Steve
> 
> 
>> +	),
>> +
>> +	TP_printk("comm=%s pid=%d tgid=%d ngid=%d mem_nodes_allowed=%*pbl",
>> +		  __entry->comm,
>> +		  __entry->pid,
>> +		  __entry->tgid,
>> +		  __entry->ngid,
>> +		  MAX_NUMNODES, __entry->mem_allowed)
>> +);
> 

