Date:   Sun, 16 Sep 2018 17:20:04 -0400
From:   Boris Ostrovsky <boris.ostrovsky@...cle.com>
To:     Dongli Zhang <dongli.zhang@...cle.com>,
        xen-devel@...ts.xenproject.org, linux-kernel@...r.kernel.org
Cc:     jgross@...e.com, paul.durrant@...rix.com, wei.liu2@...rix.com,
        konrad.wilk@...cle.com, roger.pau@...rix.com,
        srinivas.eeda@...cle.com
Subject: Re: [PATCH 2/6] xenbus: implement the xenwatch multithreading framework



On 9/14/18 3:34 AM, Dongli Zhang wrote:
>
> +
> +/* Running in the context of default xenwatch kthread. */
> +void mtwatch_create_domain(domid_t domid)
> +{
> +	struct mtwatch_domain *domain;
> +
> +	if (!domid) {
> +		pr_err("Default xenwatch thread is for dom0\n");
> +		return;
> +	}
> +
> +	spin_lock(&mtwatch_info->domain_lock);
> +
> +	domain = mtwatch_find_domain(domid);
> +	if (domain) {
> +		atomic_inc(&domain->refcnt);
> +		spin_unlock(&mtwatch_info->domain_lock);
> +		return;
> +	}
> +
> +	domain = kzalloc(sizeof(*domain), GFP_ATOMIC);

Is there a reason (besides this being done under a spinlock) for using 
GFP_ATOMIC? If domain_lock is the only reason, I'd probably drop the lock 
and do GFP_KERNEL.
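
Something like this (untested sketch, reusing the identifiers from the 
patch) keeps a single lock acquisition and still allows GFP_KERNEL, at 
the cost of a wasted allocation when the domain already exists:

	struct mtwatch_domain *domain, *new;

	/* Allocate up front, outside the lock, so GFP_KERNEL is safe. */
	new = kzalloc(sizeof(*new), GFP_KERNEL);
	if (!new)
		return;

	spin_lock(&mtwatch_info->domain_lock);

	domain = mtwatch_find_domain(domid);
	if (domain) {
		atomic_inc(&domain->refcnt);
		spin_unlock(&mtwatch_info->domain_lock);
		kfree(new);	/* lost the race; drop the spare */
		return;
	}

	/* ... initialize and publish 'new' as in the patch ... */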


> +	if (!domain) {
> +		spin_unlock(&mtwatch_info->domain_lock);
> +		pr_err("Failed to allocate memory for mtwatch thread %d\n",
> +		       domid);
> +		return;

This needs to return an error code, I think. Or do you want to fall back 
to the shared xenwatch thread?
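
That is, something along these lines (sketch only; assumes the callers 
are updated to check the result and fall back as appropriate):

	int mtwatch_create_domain(domid_t domid)
	{
		/* ... as in the patch ... */

		domain = kzalloc(sizeof(*domain), GFP_KERNEL);
		if (!domain) {
			spin_unlock(&mtwatch_info->domain_lock);
			return -ENOMEM;	/* caller may fall back to the shared thread */
		}

		/* ... as in the patch ... */

		return 0;
	}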


> +	}
> +
> +	domain->domid = domid;
> +	atomic_set(&domain->refcnt, 1);
> +	mutex_init(&domain->domain_mutex);
> +	INIT_LIST_HEAD(&domain->purge_node);
> +
> +	init_waitqueue_head(&domain->events_wq);
> +	spin_lock_init(&domain->events_lock);
> +	INIT_LIST_HEAD(&domain->events);
> +
> +	list_add_tail_rcu(&domain->list_node, &mtwatch_info->domain_list);
> +
> +	hlist_add_head_rcu(&domain->hash_node,
> +			   &mtwatch_info->domain_hash[MTWATCH_HASH(domid)]);
> +
> +	spin_unlock(&mtwatch_info->domain_lock);
> +
> +	domain->task = kthread_run(mtwatch_thread, domain,
> +				   "xen-mtwatch-%d", domid);
> +
> +	if (!domain->task) {
> +		pr_err("mtwatch kthread creation failed\n");
> +		domain->state = MTWATCH_DOMAIN_DOWN;


Why not clean up right here?
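
Note also that kthread_run() returns an ERR_PTR() on failure rather than 
NULL, so the check above never fires as written. A cleanup path here 
might look something like this (untested sketch; assumes struct 
mtwatch_domain gains an rcu_head so kfree_rcu() can be used, since the 
structure was just published on RCU-protected lists):

	struct task_struct *task;

	task = kthread_run(mtwatch_thread, domain, "xen-mtwatch-%d", domid);
	if (IS_ERR(task)) {
		pr_err("mtwatch kthread creation failed\n");
		/* Unpublish: reverse of the list/hash insertion above. */
		spin_lock(&mtwatch_info->domain_lock);
		list_del_rcu(&domain->list_node);
		hlist_del_rcu(&domain->hash_node);
		spin_unlock(&mtwatch_info->domain_lock);
		kfree_rcu(domain, rcu);	/* free after a grace period */
		return;
	}
	domain->task = task;
	domain->state = MTWATCH_DOMAIN_UP;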

> +
> +		return;
> +	}
> +
> +	domain->state = MTWATCH_DOMAIN_UP;
> +}
> +


> +
>   void unregister_xenbus_watch(struct xenbus_watch *watch)
>   {
>   	struct xs_watch_event *event, *tmp;
> @@ -831,6 +1100,9 @@ void unregister_xenbus_watch(struct xenbus_watch *watch)
>   
>   	if (current->pid != xenwatch_pid)
>   		mutex_unlock(&xenwatch_mutex);
> +
> +	if (xen_mtwatch && watch->get_domid)
> +		unregister_mtwatch(watch);


I may not be understanding the logic flow here, but if we successfully 
removed/unregistered/purged the watch from the mtwatch lists, do we still 
need to try removing it from the watch_events list below?
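
In other words, could the existing shared-queue purge loop in 
unregister_xenbus_watch() become conditional, something like this (a 
sketch of the question only; assumes events for a multithreaded watch 
never land on the shared watch_events list in the first place):

	if (!(xen_mtwatch && watch->get_domid)) {
		spin_lock(&watch_events_lock);
		list_for_each_entry_safe(event, tmp, &watch_events, list) {
			if (event->handle == watch) {
				list_del(&event->list);
				kfree(event);
			}
		}
		spin_unlock(&watch_events_lock);
	}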


-boris
