Message-ID: <eadfa82a-d7b2-9f90-8046-2559c4bc93f6@oracle.com>
Date:   Tue, 10 Jan 2017 11:36:12 -0500
From:   Boris Ostrovsky <boris.ostrovsky@...cle.com>
To:     Juergen Gross <jgross@...e.com>, linux-kernel@...r.kernel.org,
        xen-devel@...ts.xenproject.org
Subject: Re: [PATCH 3/3] xen: optimize xenbus driver for multiple concurrent
 xenstore accesses


>>> +static int process_msg(void)
>>> +{
>>> +	static struct xs_thread_state_read state;
>>> +	struct xb_req_data *req;
>>> +	int err;
>>> +	unsigned int len;
>>> +
>>> +	if (!state.in_msg) {
>>> +		state.in_msg = true;
>>> +		state.in_hdr = true;
>>> +		state.used = 0;
>>> +
>>> +		/*
>>> +		 * We must disallow save/restore while reading a message.
>>> +		 * A partial read across s/r leaves us out of sync with
>>> +		 * xenstored.
>>> +		 */
>>> +		mutex_lock(&xs_response_mutex);
>>> +
>>> +		if (!xb_data_to_read()) {
>>> +			/* We raced with save/restore: pending data 'gone'. */
>>> +			mutex_unlock(&xs_response_mutex);
>>> +			state.in_msg = false;

Just noticed: should in_hdr be set to false here as well?

>>> +			return 0;
>>> +		}

Or set it to true here.
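
Purely to illustrate the first option (a sketch, not the exact code I'm
asking for), the bail-out path would then be:

			/* We raced with save/restore: pending data 'gone'. */
			mutex_unlock(&xs_response_mutex);
			state.in_msg = false;
			state.in_hdr = false;	/* reset so the next call starts clean */
			return 0;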

>>> +	}
>>> +
>>> +	if (state.in_hdr) {
>>> +		if (state.used != sizeof(state.msg)) {
>>> +			err = xb_read((void *)&state.msg + state.used,
>>> +				      sizeof(state.msg) - state.used);
>>> +			if (err < 0)
>>> +				goto out;
>>> +			state.used += err;
>>> +			if (state.used != sizeof(state.msg))
>>> +				return 0;
>> Would it be possible to do locking at the caller? I understand that you
>> are trying to hold the lock across multiple invocations of this function
>> but it feels somewhat counter-intuitive and bug-prone.
> I think that would be difficult.
>
>> If it's not possible then at least please add a comment explaining
>> locking algorithm.
> Okay. Something like:
>
> /*
>  * xs_response_mutex is locked as long as we are processing one
>  * message. state.in_msg will be true as long as we are holding the
>  * lock in process_msg().
>  */


Then in_msg is the same as mutex_is_locked(&xs_response_mutex). And if
so, do we really need it?
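
Just to sketch the idea (untested, and assuming process_msg() is only
ever called from the single xenbus reader thread), the entry test could
be derived from the mutex itself:

	if (!mutex_is_locked(&xs_response_mutex)) {
		state.in_hdr = true;
		state.used = 0;

		/* Disallow save/restore while reading a message. */
		mutex_lock(&xs_response_mutex);

		if (!xb_data_to_read()) {
			/* Raced with save/restore: pending data 'gone'. */
			mutex_unlock(&xs_response_mutex);
			return 0;
		}
	}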


-boris
