Message-ID: <8a7695a9-a1cf-fc1a-cfa9-718a379a40af@suse.com>
Date:   Tue, 10 Jan 2017 17:46:33 +0100
From:   Juergen Gross <jgross@...e.com>
To:     Boris Ostrovsky <boris.ostrovsky@...cle.com>,
        linux-kernel@...r.kernel.org, xen-devel@...ts.xenproject.org
Subject: Re: [PATCH 3/3] xen: optimize xenbus driver for multiple concurrent
 xenstore accesses

On 10/01/17 17:36, Boris Ostrovsky wrote:
> 
>>>> +static int process_msg(void)
>>>> +{
>>>> +	static struct xs_thread_state_read state;
>>>> +	struct xb_req_data *req;
>>>> +	int err;
>>>> +	unsigned int len;
>>>> +
>>>> +	if (!state.in_msg) {
>>>> +		state.in_msg = true;
>>>> +		state.in_hdr = true;
>>>> +		state.used = 0;
>>>> +
>>>> +		/*
>>>> +		 * We must disallow save/restore while reading a message.
>>>> +		 * A partial read across s/r leaves us out of sync with
>>>> +		 * xenstored.
>>>> +		 */
>>>> +		mutex_lock(&xs_response_mutex);
>>>> +
>>>> +		if (!xb_data_to_read()) {
>>>> +			/* We raced with save/restore: pending data 'gone'. */
>>>> +			mutex_unlock(&xs_response_mutex);
>>>> +			state.in_msg = false;
> 
> Just noticed: should in_hdr be set to false here as well?

Doesn't matter: in_hdr is valid only while in_msg is true.
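
To illustrate (a simplified sketch of the control flow in the patch
above, not the literal code): in_hdr is only ever read after the
in_msg branch has run, so whatever value it holds while in_msg is
false is never observed:

	if (!state.in_msg) {
		state.in_msg = true;
		state.in_hdr = true;		/* (re)initialized here */
		...
		if (!xb_data_to_read()) {
			...
			state.in_msg = false;	/* in_hdr now a don't-care */
			return 0;
		}
	}

	if (state.in_hdr) {	/* only reached with in_msg == true, so
				 * in_hdr was set on this very path or on
				 * an earlier partial read of this message */
		...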

> 
>>>> +			return 0;
>>>> +		}
> 
> Or set it to true here.
> 
>>>> +	}
>>>> +
>>>> +	if (state.in_hdr) {
>>>> +		if (state.used != sizeof(state.msg)) {
>>>> +			err = xb_read((void *)&state.msg + state.used,
>>>> +				      sizeof(state.msg) - state.used);
>>>> +			if (err < 0)
>>>> +				goto out;
>>>> +			state.used += err;
>>>> +			if (state.used != sizeof(state.msg))
>>>> +				return 0;
>>> Would it be possible to do locking at the caller? I understand that you
>>> are trying to hold the lock across multiple invocations of this function,
>>> but it feels somewhat counter-intuitive and bug-prone.
>> I think that would be difficult.
>>
>>> If it's not possible then at least please add a comment explaining
>>> the locking algorithm.
>> Okay. Something like:
>>
>> /*
>>  * xs_response_mutex is locked as long as we are processing one
>>  * message. state.in_msg will be true as long as we are holding the
>>  * lock in process_msg().
>>  */
> 
> 
> Then in_msg is the same as mutex_is_locked(&xs_response_mutex). And if
> so, do we really need it?

Yes. xs_response_mutex is used in the suspend path, too.
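
To make the difference concrete (a sketch only; the suspend-side code
shown here just stands in for whatever takes the mutex on that path):

	/* Reader side, as in process_msg() above: the mutex is held
	 * from the start of a message until it has been read fully. */
	mutex_lock(&xs_response_mutex);
	...
	mutex_unlock(&xs_response_mutex);

	/* Suspend side (sketch): taking the same mutex waits for any
	 * in-flight message read to finish and blocks new ones. */
	mutex_lock(&xs_response_mutex);
	...

So mutex_is_locked(&xs_response_mutex) is true during suspend as well,
while state.in_msg is true only when process_msg() itself holds the
lock in the middle of a message.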


Juergen
