Date:   Fri, 21 May 2021 13:17:11 -0700
From:   Andrei Vagin <avagin@...il.com>
To:     Peter Oskolkov <posk@...gle.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        linux-kernel@...r.kernel.org, linux-api@...r.kernel.org,
        Paul Turner <pjt@...gle.com>, Ben Segall <bsegall@...gle.com>,
        Peter Oskolkov <posk@...k.io>,
        Joel Fernandes <joel@...lfernandes.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Andrei Vagin <avagin@...gle.com>,
        Jim Newsome <jnewsome@...project.org>
Subject: Re: [RFC PATCH v0.1 7/9] sched/umcg: add UMCG server/worker API
 (early RFC)

On Thu, May 20, 2021 at 11:36:12AM -0700, Peter Oskolkov wrote:
> Implement UMCG server/worker API.
> 
> This is an early RFC patch - the code seems to work, but
> more testing is needed. Gaps I plan to address before this
> is ready for a detailed review:
> 
> - preemption/interrupt handling;
> - better documentation/comments;
> - tracing;
> - additional testing;
> - corner cases like abnormal process/task termination;
> - in some cases where I kill the task (umcg_segv), returning
> an error may be more appropriate.
> 
> All in all, please focus more on the high-level approach
> and less on things like variable names, (doc) comments, or indentation.
> 
> Signed-off-by: Peter Oskolkov <posk@...gle.com>
> ---
>  include/linux/mm_types.h |   5 +
>  include/linux/syscalls.h |   5 +
>  kernel/fork.c            |  11 +
>  kernel/sched/core.c      |  11 +
>  kernel/sched/umcg.c      | 764 ++++++++++++++++++++++++++++++++++++++-
>  kernel/sched/umcg.h      |  54 +++
>  mm/init-mm.c             |   4 +
>  7 files changed, 845 insertions(+), 9 deletions(-)
> 
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 6613b26a8894..5ca7b7d55775 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -562,6 +562,11 @@ struct mm_struct {
>  #ifdef CONFIG_IOMMU_SUPPORT
>  		u32 pasid;
>  #endif
> +
> +#ifdef CONFIG_UMCG
> +	spinlock_t umcg_lock;
> +	struct list_head umcg_groups;
> +#endif
>  	} __randomize_layout;
>  
>  	/*
> diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
> index 15de3e34ccee..2781659daaf1 100644
> --- a/include/linux/syscalls.h
> +++ b/include/linux/syscalls.h
> @@ -1059,6 +1059,11 @@ asmlinkage long umcg_wait(u32 flags, const struct __kernel_timespec __user *time
>  asmlinkage long umcg_wake(u32 flags, u32 next_tid);
>  asmlinkage long umcg_swap(u32 wake_flags, u32 next_tid, u32 wait_flags,
>  				const struct __kernel_timespec __user *timeout);
> +asmlinkage long umcg_create_group(u32 api_version, u64 flags);
> +asmlinkage long umcg_destroy_group(u32 group_id);
> +asmlinkage long umcg_poll_worker(u32 flags, struct umcg_task __user **ut);
> +asmlinkage long umcg_run_worker(u32 flags, u32 worker_tid,
> +		struct umcg_task __user **ut);
>  
>  /*
>   * Architecture-specific system calls
> diff --git a/kernel/fork.c b/kernel/fork.c
> index ace4631b5b54..3a2a7950df8e 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -1026,6 +1026,10 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
>  	seqcount_init(&mm->write_protect_seq);
>  	mmap_init_lock(mm);
>  	INIT_LIST_HEAD(&mm->mmlist);
> +#ifdef CONFIG_UMCG
> +	spin_lock_init(&mm->umcg_lock);
> +	INIT_LIST_HEAD(&mm->umcg_groups);
> +#endif
>  	mm->core_state = NULL;
>  	mm_pgtables_bytes_init(mm);
>  	mm->map_count = 0;
> @@ -1102,6 +1106,13 @@ static inline void __mmput(struct mm_struct *mm)
>  		list_del(&mm->mmlist);
>  		spin_unlock(&mmlist_lock);
>  	}
> +#ifdef CONFIG_UMCG
> +	if (!list_empty(&mm->umcg_groups)) {
> +		spin_lock(&mm->umcg_lock);
> +		list_del(&mm->umcg_groups);

I am not sure that I understand what is going on here. umcg_groups is
the head of the group list; list_del() is usually called on list entries,
not on the list head.

Should we enumerate all the groups here and destroy them?
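
If that is the intent, here is a minimal sketch of what the cleanup could
look like inside the CONFIG_UMCG block above (assuming struct umcg_group is
linked into mm->umcg_groups through its "list" member; umcg_group_release()
is a hypothetical helper that would have to respect the RCU grace period,
e.g. via kfree_rcu()):

	if (!list_empty(&mm->umcg_groups)) {
		struct umcg_group *group, *tmp;

		spin_lock(&mm->umcg_lock);
		/* Tear down every group still attached to this mm. */
		list_for_each_entry_safe(group, tmp, &mm->umcg_groups, list) {
			list_del_rcu(&group->list);
			umcg_group_release(group);
		}
		spin_unlock(&mm->umcg_lock);
	}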

> +		spin_unlock(&mm->umcg_lock);
> +	}
> +#endif
>  	if (mm->binfmt)
>  		module_put(mm->binfmt->module);
>  	mmdrop(mm);

...

> +/**
> + * sys_umcg_create_group - create a UMCG group
> + * @api_version:           Requested API version.
> + * @flags:                 Reserved.
> + *
> + * Return:
> + * >= 0                - the group ID
> + * -EOPNOTSUPP         - @api_version is not supported
> + * -EINVAL             - @flags is not valid
> + * -ENOMEM             - not enough memory
> + */
> +SYSCALL_DEFINE2(umcg_create_group, u32, api_version, u64, flags)
> +{
> +	int ret;
> +	struct umcg_group *group;
> +	struct umcg_group *list_entry;
> +	struct mm_struct *mm = current->mm;
> +
> +	if (flags)
> +		return -EINVAL;
> +
> +	if (__api_version(api_version))
> +		return -EOPNOTSUPP;
> +
> +	group = kzalloc(sizeof(struct umcg_group), GFP_KERNEL);
> +	if (!group)
> +		return -ENOMEM;
> +
> +	spin_lock_init(&group->lock);
> +	INIT_LIST_HEAD(&group->list);
> +	INIT_LIST_HEAD(&group->waiters);
> +	group->flags = flags;
> +	group->api_version = api_version;
> +
> +	spin_lock(&mm->umcg_lock);
> +
> +	list_for_each_entry_rcu(list_entry, &mm->umcg_groups, list) {
> +		if (list_entry->group_id >= group->group_id)
> +			group->group_id = list_entry->group_id + 1;
> +	}

Please take into account that we need to be able to save and restore UMCG
groups from user space. There is the CRIU project, which checkpoints and
restores running processes.
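
With the max-plus-one ID allocation above, user space has no way to
re-create a group with a specific ID. For example, if the process had
groups 0 and 2 at checkpoint time (group 1 having been destroyed),
re-creating two groups on restore yields IDs 0 and 1, so the original
ID 2 cannot be reproduced. A rough restore-side illustration
(__NR_umcg_create_group is hypothetical, the syscall number is not part
of this patch; api_version 1 is assumed):

	#include <stdio.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	int main(void)
	{
		/* Hypothetical syscall number, assumed api_version 1. */
		long a = syscall(__NR_umcg_create_group, 1, 0);	/* gets ID 0 */
		long b = syscall(__NR_umcg_create_group, 1, 0);	/* gets ID 1, not 2 */

		printf("restored group ids: %ld %ld\n", a, b);
		return 0;
	}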

> +
> +	list_add_rcu(&mm->umcg_groups, &group->list);

I think it should be:

	list_add_rcu(&group->list, &mm->umcg_groups);
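
For reference, list_add_rcu() in include/linux/rculist.h takes the new entry
first and the list head second:

	static inline void list_add_rcu(struct list_head *new, struct list_head *head)
	{
		__list_add_rcu(new, head, head->next);
	}

With the arguments in the original order, the mm-wide list head is treated as
the entry being inserted, which overwrites its links and corrupts the list as
soon as more than one group is created.
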
> +
> +	ret = group->group_id;
> +	spin_unlock(&mm->umcg_lock);
> +
> +	return ret;
> +}
> +

Thanks,
Andrei
