Message-Id: <078E22BB-54CD-4AA2-A240-1FD822189B44@netronome.com>
Date:   Thu, 11 Apr 2019 17:53:27 +0100
From:   Jiong Wang <jiong.wang@...ronome.com>
To:     Jakub Kicinski <jakub.kicinski@...ronome.com>
Cc:     Alexei Starovoitov <alexei.starovoitov@...il.com>,
        daniel@...earbox.net, bpf@...r.kernel.org, netdev@...r.kernel.org,
        oss-drivers@...ronome.com
Subject: Re: [oss-drivers] Re: [PATCH/RFC v2 bpf-next 05/19] bpf: split read
 liveness into REG_LIVE_READ64 and REG_LIVE_READ32


> On 11 Apr 2019, at 17:44, Jakub Kicinski <jakub.kicinski@...ronome.com> wrote:
> 
> On Thu, 11 Apr 2019 07:13:03 +0100, Jiong Wang wrote:
>>>> @@ -1150,17 +1150,17 @@ static int mark_reg_read(struct bpf_verifier_env *env,
>>>> 				parent->var_off.value, parent->off);
>>>> 			return -EFAULT;
>>>> 		}
>>>> -		if (parent->live & REG_LIVE_READ)
>>>> +		if ((parent->live & REG_LIVE_READ) == flags)
>>>> 			/* The parentage chain never changes and
>>>> -			 * this parent was already marked as LIVE_READ.
>>>> +			 * this parent was already marked with all read bits.  
>>> 
>>> Do we have to propagate all read bits?  Read64 is strictly stronger
>>> than read32, as long as read64 is set on the parent we should be good?  
>> 
>> We should be good, but I doubt there is value in differentiating on this in this
>> kind of HOT function.
> 
> The entire if clause is an optimization.  I'm saying you can keep it
> more aggressive.

What I mean is I suspect the read width could be quite consistent in real programs,
so the extra check on read64 would miss most of the time. If the number of
propagation iterations is large, the cost of doing that extra check on each
iteration may outweigh what is saved by pruning the propagation.
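
For concreteness, a standalone sketch of the two early-exit conditions being
discussed for mark_reg_read() (not the kernel sources; the flag values below
are illustrative, and the second helper is just one way to write the reviewed
suggestion):

#include <stdio.h>

#define REG_LIVE_READ32 0x1                     /* lower 32 bits were read */
#define REG_LIVE_READ64 0x2                     /* full 64 bits were read */
#define REG_LIVE_READ   (REG_LIVE_READ32 | REG_LIVE_READ64)

/* Condition in this patch: stop walking the parentage chain only when the
 * parent already carries exactly the read bits being propagated. */
static int stop_patch(unsigned int parent_live, unsigned int flags)
{
	return (parent_live & REG_LIVE_READ) == flags;
}

/* Condition suggested in review: READ64 subsumes READ32, so also stop
 * whenever the parent is already marked READ64. */
static int stop_read64_covers(unsigned int parent_live, unsigned int flags)
{
	return (parent_live & REG_LIVE_READ64) ||
	       (parent_live & REG_LIVE_READ) == flags;
}

int main(void)
{
	/* Parent already READ64, a 32-bit read arrives from a child state:
	 * the patch variant keeps walking, the suggested variant stops early. */
	unsigned int parent_live = REG_LIVE_READ64;
	unsigned int flags = REG_LIVE_READ32;

	printf("patch stops: %d, read64-covers stops: %d\n",
	       stop_patch(parent_live, flags),
	       stop_read64_covers(parent_live, flags));
	return 0;
}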

I will do some benchmarking on this to see the real gain.

Regards,
Jiong


> 
>>>> @@ -6196,12 +6286,19 @@ static int propagate_liveness_reg(struct bpf_verifier_env *env,
>>>> 				  struct bpf_reg_state *reg,
>>>> 				  struct bpf_reg_state *parent_reg)
>>>> {
>>>> +	u8 parent_bits = parent_reg->live & REG_LIVE_READ;
>>>> +	u8 bits = reg->live & REG_LIVE_READ;
>>>> +	u8 bits_diff = parent_bits ^ bits;
>>>> +	u8 bits_prop = bits_diff & bits;
>>>> 	int err;
>>>> 
>>>> -	if (parent_reg->live & REG_LIVE_READ || !(reg->live & REG_LIVE_READ))
>>>> +	/* "reg" and "parent_reg" has the same read bits, or the bit doesn't
>>>> +	 * belong to "reg".
>>>> +	 */
>>>> +	if (!bits_diff || !bits_prop)  
>>> 
>>> bits_prop is a subset of bits_diff, no?  !bits_prop is always true
>>> if !bits_diff is true, no need to check both.  
>> 
>> Bits_prop is a subset of bits_diff WHEN the diff comes from "reg". We don't want to
>> do the propagation when the diff comes from "parent_reg", so we need to check
>> both.
> 
> Not sure what you're saying, in this patch:
> 
> 	u8 bits_prop = bits_diff & bits;
> 
> Maybe you're talking about some patch down the line..
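
For reference, a quick standalone enumeration of the bit arithmetic in this
hunk (not the kernel sources; READ32 = 0x1 and READ64 = 0x2 are illustrative).
It walks every combination of read bits on "reg" and "parent_reg" and prints
whether the early return fires:

#include <stdio.h>

int main(void)
{
	for (unsigned int parent_bits = 0; parent_bits < 4; parent_bits++) {
		for (unsigned int bits = 0; bits < 4; bits++) {
			unsigned int bits_diff = parent_bits ^ bits;
			unsigned int bits_prop = bits_diff & bits;
			/* the early return in the hunk above */
			int skip = !bits_diff || !bits_prop;

			printf("parent=%x reg=%x diff=%x prop=%x -> %s\n",
			       parent_bits, bits, bits_diff, bits_prop,
			       skip ? "skip" : "propagate");
		}
	}
	return 0;
}

In all sixteen cases the result equals checking !bits_prop alone, which is the
subset point made above for this particular hunk.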
