While benchmarking the conversion of a byte array to uint32, I noticed that the conversion runs faster when starting with the least significant byte:
package blah

import (
	"bytes"
	"encoding/binary"
	"testing"
)

func BenchmarkByteConversion(t *testing.B) {
	var i uint32 = 3419234848
	buf := new(bytes.Buffer)
	_ = binary.Write(buf, binary.BigEndian, i)
	b := buf.Bytes()
	for n := 0; n < t.N; n++ {
		// Start with least significant byte: 0.27 ns
		value := uint32(b[3]) | uint32(b[2])<<8 | uint32(b[1])<<16 | uint32(b[0])<<24
		// Start with most significant byte: 0.68 ns
		// value := uint32(b[0])<<24 | uint32(b[1])<<16 | uint32(b[2])<<8 | uint32(b[3])
		_ = value
	}
}
When I run go test -bench=., I get 0.27 ns per iteration computing value the first way and 0.68 ns per iteration computing it the second way. Why is ORing the bytes together roughly twice as fast when starting with the least significant byte?
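For reference, the standard library performs the same big-endian decode via binary.BigEndian.Uint32. A minimal sketch of an equivalent benchmark case (the BenchmarkStdlibConversion name and the reuse of the same setup are just for illustration, not something I have measured) would look like this:

// Sketch only: benchmarks the standard library decoder, which reads b[0..3] as big-endian.
func BenchmarkStdlibConversion(t *testing.B) {
	var i uint32 = 3419234848
	buf := new(bytes.Buffer)
	_ = binary.Write(buf, binary.BigEndian, i)
	b := buf.Bytes()
	for n := 0; n < t.N; n++ {
		value := binary.BigEndian.Uint32(b) // equivalent to the shift-and-OR expressions above
		_ = value
	}
}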