csproto - CrowdStrike's Protocol Buffers library

csproto is a Go module that provides a library for working with Protocol Buffers messages along with a protoc plug-in for generating optimized marshaling and unmarshaling code for those messages.
Like many other companies, CrowdStrike extensively uses Protocol Buffers as an efficient wire format for communicating between disparate processes and services. Protocol Buffers' compatibility guarantees and smaller, more efficient binary encoding made it a natural fit for the problems we needed to solve.
As our data volume continued to grow, CrowdStrike started to run into performance limitations in Google's Protobuf library and transitioned to using Gogo Protobuf in an attempt to overcome the issues. This adjustment proved successful and "Use Gogo Protobuf" became the de facto guidance within our development teams.
Fast forward to 2020, when the maintainers of that library announced that they were looking for people to take over. Unfortunately, this coincided with Google releasing V2 of their Protobuf API. As new and improved functionality was introduced in Google's library, the lack of active maintenance on Gogo inevitably led to incompatibilities.
This created a problem for CrowdStrike. We needed to update our system to no longer depend on Gogo Protobuf but we had a lot of direct dependencies on that code spread throughout our codebase. The solution we arrived at is this library. It provides the "core" pieces of the Protocol Buffers API as used by consumers without having those consumers directly depend on any particular runtime implementation.
Disclaimer: csproto is an open source project, not a CrowdStrike product. As such, it carries no formal support, express or implied. The project is licensed under the MIT open source license.
Supporting Types Across Runtime Implementations
As part of their V2 API, Google also introduced significant changes to the protoc code generation plug-in, protoc-gen-go. One effect of this change was that code generated using the new plug-in uses the new API internally. An unfortunate side effect is that those types are no longer compatible with Gogo's API.
One technical limitation of Protocol Buffers is that deserializing a message requires knowledge of the actual message type because the encoded field values only contain the integer field tag. Due to this limitation, both Google and Gogo use reflection to read the struct tags on the generated Go types and to dynamically assign field values when unmarshaling Protobuf messages.
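For example, a message type generated by the V1 plug-in carries the Protobuf metadata in struct tags, which the runtime reads via reflection (a representative sketch, not actual generated code):

type SomeMessage struct {
    // field tag 1, encoded as length-delimited bytes
    Name *string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
    // field tag 2, encoded as a varint
    Value *int32 `protobuf:"varint,2,opt,name=value" json:"value,omitempty"`
}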
This dependence on reflection has created a scenario where passing a type generated by the new plug-in to Gogo's Unmarshal() implementation results in failures. Specifically, there are several new fields in the generated code that the reflection-based logic in Gogo's library doesn't know how to treat. Additionally, several fields that are used by the V1 API, and consequently by Gogo's library, are no longer generated.
A Minimal Protocol Buffers API
After a bit of digging, we came up with what we consider the smallest API necessary to support reading and writing Protocol Buffers messages that does not expose any dependence on the runtime implementations.
Size(msg interface{}) int
- Calculate the size, in bytes, required to hold the binary representation of msg

Marshal(msg interface{}) ([]byte, error)
- Convert the contents of msg to the binary representation and return it

Unmarshal(p []byte, msg interface{}) error
- Populate msg with the contents of the binary message in p

HasExtension(msg interface{}, ext interface{}) bool
- Determine if msg contains a proto2 extension field

GetExtension(msg interface{}, ext interface{}) (interface{}, error)
- Return the value of a proto2 extension field from msg

SetExtension(msg interface{}, ext interface{}, val interface{}) error
- Assign the value of a proto2 extension field on msg
There isn't any common interface shared between Google's two runtimes and Gogo's runtime, so our library had to use the empty interface for all message and extension definition parameters.
With this minimal API, services and libraries are able to create and consume Protobuf-encoded messages without being tightly coupled to any specific runtime. Being able to do this was essential for CrowdStrike because it is simply impossible to update everything at once to change which runtime library is in use. Instead, we gradually updated all of our libraries and services to use this new runtime-independent API so that each of our development teams is able to change out their runtime and code generation dependencies independently.
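To illustrate, here is a minimal sketch of a round trip through this API, assuming some generated message type Event (the type name is illustrative; the calls work the same regardless of which plug-in generated the type):

// roundTrip re-encodes evt without knowing which runtime generated Event.
func roundTrip(evt *Event) (*Event, error) {
    data, err := csproto.Marshal(evt)
    if err != nil {
        return nil, err
    }
    var out Event
    if err := csproto.Unmarshal(data, &out); err != nil {
        return nil, err
    }
    return &out, nil
}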
Don't Recreate Everything
Our intent is not to fully recreate the Protocol Buffers runtime. Instead, csproto is built to determine which existing runtime is the "correct" one for a given message and to delegate to that implementation.
We take advantage of the fact that Go types can't change at runtime to minimize the impact of this indirection. The underlying type of the msg parameter is inspected to determine which of the three supported runtimes (Google V1, Gogo, and Google V2) is correct, and we store that value in a lookup dictionary so that any given type only has to be inspected once.
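A simplified sketch of that caching strategy (illustrative only, not csproto's actual internals; the classify parameter stands in for the real type inspection):

// runtimeKind identifies which Protobuf runtime a message type belongs to.
type runtimeKind int

const (
    runtimeGoogleV1 runtimeKind = iota
    runtimeGogo
    runtimeGoogleV2
)

// typeCache maps a concrete Go type to its runtime so that the
// reflection-based inspection only runs once per type.
var typeCache sync.Map // map[reflect.Type]runtimeKind

func runtimeFor(msg interface{}, classify func(reflect.Type) runtimeKind) runtimeKind {
    t := reflect.TypeOf(msg)
    if v, ok := typeCache.Load(t); ok {
        return v.(runtimeKind)
    }
    k := classify(t)
    typeCache.Store(t, k)
    return k
}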
Even with this optimization, calling reflect.TypeOf() on the message and performing the lookup has a cost, over 8% in some scenarios! At CrowdStrike's volume even that difference can add up to a non-trivial impact across the system, so we needed to find a way to at least break even and, ideally, to improve performance.
The benchmarks below for proto2 marshaling were generated on a 2019 MacBook Pro:
$ go test -run='^$' -bench=. -benchmem
goos: darwin
goarch: amd64
pkg: github.com/CrowdStrike/csproto/example/proto2
cpu: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
BenchmarkEncodeGogo-12 1741834 667.4 ns/op 216 B/op 4 allocs/op
BenchmarkCustomEncodeGogo-12 1785268 669.2 ns/op 216 B/op 4 allocs/op
BenchmarkEncodeGoogleV1-12 1326734 921.7 ns/op 176 B/op 1 allocs/op
BenchmarkCustomEncodeGoogleV1-12 1315390 933.5 ns/op 176 B/op 1 allocs/op
BenchmarkEncodeGoogleV2-12 1329092 906.9 ns/op 176 B/op 1 allocs/op
BenchmarkCustomEncodeGoogleV2-12 1306638 923.3 ns/op 176 B/op 1 allocs/op
And for proto3:
$ go test -run='^$' -bench=. -benchmem
goos: darwin
goarch: amd64
pkg: github.com/CrowdStrike/csproto/example/proto3
cpu: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
BenchmarkEncodeGogo-12 3008721 394.1 ns/op 88 B/op 2 allocs/op
BenchmarkCustomEncodeGogo-12 2900726 400.1 ns/op 88 B/op 2 allocs/op
BenchmarkEncodeGoogleV1-12 3109386 388.2 ns/op 80 B/op 1 allocs/op
BenchmarkCustomEncodeGoogleV1-12 2990907 392.8 ns/op 80 B/op 1 allocs/op
BenchmarkEncodeGoogleV2-12 3290887 367.7 ns/op 80 B/op 1 allocs/op
BenchmarkCustomEncodeGoogleV2-12 3003828 398.3 ns/op 80 B/op 1 allocs/op
The table below shows the approximate cost of the indirection across the various combinations of Protobuf runtimes:
Runtime | proto2 | proto3 |
---|---|---|
Gogo | +0.27% | +1.52% |
Google V1 | +1.28% | +1.18% |
Google V2 | +1.81% | +8.32% |
Optimized Protobuf Marshaling and Unmarshaling
The marshaling and unmarshaling implemented by both Google and Gogo necessarily rely on runtime reflection. Both implementations dynamically query the set of fields on the message type and read the associated Protobuf struct tags. This information is then used to match the field tag and wire type in the encoded data to the corresponding field on the message and to assign the field values. This approach is generic and can be applied to any message without changes to the implementation, but it is necessarily slower because it has to inspect each message much more deeply.
Another common source of performance bottlenecks is repeated small allocations. It is, in most cases, far more efficient to allocate one buffer large enough to hold all of the data you need than to incrementally allocate many smaller buffers.
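As a generic illustration (unrelated to csproto's internals), copying a set of chunks into one slice is cheaper when the destination is sized up front:

// join copies all chunks into a single slice with exactly one allocation.
// Appending to a nil slice instead may reallocate and copy several times
// as the slice grows.
func join(chunks [][]byte) []byte {
    total := 0
    for _, c := range chunks {
        total += len(c)
    }
    out := make([]byte, 0, total) // sized up front
    for _, c := range chunks {
        out = append(out, c...)
    }
    return out
}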
Before moving on, credit must be given to the Vitess team for their vtprotobuf project, which they covered in this blog from June of 2021. That project already implements these strategies and more, but with some constraints that didn't work for us. Specifically, vtprotobuf is only compatible with code that is already using Google's V2 API. Given that the inception of this project at CrowdStrike was due to our dependency on Gogo Protobuf, we weren't able to make use of their work. We also make significant use of proto2 extensions, which may or may not be supported by the Vitess tooling.
Protocol Buffers Binary Codec
The first step to improving Protobuf serialization is to implement a binary encoder and decoder that avoids the issues noted in the last section. Additionally, the Protocol Buffer encoding spec has a much smaller surface area than the set of all valid Protobuf messages.
Encoder
The Encoder type wraps a pre-allocated byte slice and sequentially writes encoded field values to it. It is up to the caller to ensure that the provided buffer is large enough to hold the full encoded value. As each encoded field is prefixed by the integer field tag and Protobuf wire type, Encoder's API is provided as a set of EncodeXxx(tag int, val T) methods, one for each supported type of value.
This snippet encodes a boolean true value with a field tag of 1:
// Protobuf binary encoding will require 2 bytes, 1 for the tag/wire type and 1 for the value
buf := make([]byte, 2)
enc := csproto.NewEncoder(buf)
enc.EncodeBool(1, true)
// buf now contains {0x8, 0x1}: the key byte 0x8 is (field tag 1 << 3) | wire type 0 (varint),
// and 0x1 is the varint encoding of true
Encoding a full message is similar, but uses csproto.Size() to calculate the required buffer size.
msg := SomeMessage{
    Name:  csproto.String("example"),
    Value: csproto.Int32(42),
    // assign additional fields
}
siz := csproto.Size(&msg)
buf := make([]byte, siz)
enc := csproto.NewEncoder(buf)
// encode each field sequentially, dereferencing the pointer-typed fields
enc.EncodeString(1, *msg.Name)
enc.EncodeInt32(2, *msg.Value)
// ...
Decoder
Like Encoder, the Decoder type wraps a byte slice and sequentially reads field values from it. The Protobuf encoding does not require fields to be in tag order, or present at all for that matter, so decoding a message requires a for loop combined with a switch statement.
func decodeExample(p []byte) (SomeMessage, error) {
    var (
        msg SomeMessage
        s   string
        i32 int32
    )
    dec := csproto.NewDecoder(p)
    for dec.More() {
        tag, wireType, err := dec.DecodeTag()
        if err != nil {
            return SomeMessage{}, err
        }
        switch tag {
        case 1: // Name
            if wireType != csproto.WireTypeLengthDelimited {
                return SomeMessage{}, fmt.Errorf("invalid wire type %s, expected %s", wireType, csproto.WireTypeLengthDelimited)
            }
            s, err = dec.DecodeString()
            if err != nil {
                return SomeMessage{}, fmt.Errorf("unable to decode string: %w", err)
            }
            msg.Name = csproto.String(s)
        case 2: // Value
            if wireType != csproto.WireTypeVarint {
                return SomeMessage{}, fmt.Errorf("invalid wire type %s, expected %s", wireType, csproto.WireTypeVarint)
            }
            i32, err = dec.DecodeInt32()
            if err != nil {
                return SomeMessage{}, fmt.Errorf("unable to decode int32: %w", err)
            }
            msg.Value = csproto.Int32(i32)
        default: // unknown/unrecognized field, skip it
            _, _ = dec.Skip(tag, wireType)
        }
    }
    return msg, nil
}
Notes:
- The cases in the switch statement use csproto.String() and csproto.Int32() to grab pointers to copies of the decoded values.
- The example above simply throws away unknown fields, which you shouldn't do in practice (see the sketch below).
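If you do need to retain unknown fields, the bytes returned by Skip can be accumulated and re-emitted when marshaling, much like the generated code shown later does. A sketch of a replacement for the default case above (the unknownFields field is illustrative):

default: // preserve unknown fields instead of dropping them
    skipped, err := dec.Skip(tag, wireType)
    if err != nil {
        return SomeMessage{}, err
    }
    msg.unknownFields = append(msg.unknownFields, skipped...)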
Safe vs Fast
By default, Decoder.DecodeString() will make a full copy of the decoded string. This is the safest, most stable practice, but it does come with a small cost in both time and allocations. For scenarios where maximum performance is more desirable, Decoder supports a "fast" mode that uses unsafe to return the bytes of the wrapped buffer directly, saving the type conversion and the allocation needed to create a new string value.
...
dec := csproto.NewDecoder(p)
dec.SetMode(csproto.DecoderModeFast)
...
s, err := dec.DecodeString()
...
Representative benchmarks from a 2019 MacBook Pro:
$ go test -run='^$' -bench=DecodeString -benchmem ./proto
goos: darwin
goarch: amd64
pkg: github.com/CrowdStrike/csproto
cpu: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
BenchmarkSafeDecodeString-12 37183212 27.33 ns/op 16 B/op 1 allocs/op
BenchmarkFastDecodeString-12 127437440 9.211 ns/op 0 B/op 0 allocs/op
The trade-off for the increased performance is that the behavior is undefined if the wrapped buffer is modified after decoding the field values from it.
Opting In
Now that we have a custom, optimized codec available, we need a way to seamlessly integrate it into the developer workflow. We do that by defining several new interfaces which our API functions will look for when marshaling or unmarshaling messages.
We define 4 single-method interfaces as integration points:
csproto.Sizer
- Size() int: calculates the size, in bytes, needed to hold the encoded contents of the message
- csproto.Size() will call this method if the message satisfies the interface

csproto.Marshaler
- Marshal() ([]byte, error): returns the binary encoding of the message
- csproto.Marshal() will call this method if the message satisfies the interface

csproto.MarshalerTo
- MarshalTo([]byte) error: encodes the message into the provided buffer
- csproto.Marshal() will call this method, after allocating a sufficiently sized buffer, if the message satisfies the interface

csproto.Unmarshaler
- Unmarshal([]byte) error: decodes the provided data into the message
- csproto.Unmarshal() will call this method if the message satisfies the interface
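In Go terms, the four interfaces look like this (reconstructed from the descriptions above):

type Sizer interface {
    Size() int
}

type Marshaler interface {
    Marshal() ([]byte, error)
}

type MarshalerTo interface {
    MarshalTo(dest []byte) error
}

type Unmarshaler interface {
    Unmarshal(p []byte) error
}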
With this in place developers have all of the parts needed to create a fully optimized implementation of Protocol Buffer marshaling and unmarshaling. We can make things even better, though, by capitalizing on the fact that the Protobuf IDL that developers have already written has all of the information we need to generate those optimized implementations.
The protoc plug-in

The final piece of the puzzle is protoc-gen-fastmarshal, a protoc compiler plug-in that reads the Protobuf file descriptor and emits implementations of the Size, Marshal, MarshalTo, and Unmarshal methods for each message defined in the .proto file.
Given this example message:
message Example {
    string name = 1;
    int32 result = 2;
}
the generated code would be roughly as follows:
// Size returns the size, in bytes, required to store the contents of m in Protocol Buffers
// binary format.
func (m *Example) Size() int {
    if m == nil {
        return 0
    }
    var sz, l int
    // Name
    if m.Name != nil {
        // key + len + bytes
        l = len(*m.Name)
        sz += csproto.SizeOfVarint(uint64(1)) + csproto.SizeOfVarint(uint64(l)) + l
    }
    // Result
    if m.Result != nil {
        // key + varint
        sz += csproto.SizeOfVarint(uint64(2)) + csproto.SizeOfVarint(uint64(*m.Result))
    }
    // unknown/unrecognized fields
    sz += len(m.unknownFields)
    return sz
}
// Marshal allocates a buffer, writes the contents of m to it using Protocol Buffers binary
// format, then returns the buffer.
func (m *Example) Marshal() ([]byte, error) {
    sz := m.Size()
    buf := make([]byte, sz)
    err := m.MarshalTo(buf)
    return buf, err
}
// MarshalTo writes the contents of m into dest using Protocol Buffers binary format.
func (m *Example) MarshalTo(dest []byte) error {
    enc := csproto.NewEncoder(dest)
    if m.Name != nil {
        enc.EncodeString(1, *m.Name)
    }
    if m.Result != nil {
        enc.EncodeInt32(2, *m.Result)
    }
    if len(m.unknownFields) > 0 {
        enc.EncodeRaw(m.unknownFields)
    }
    return nil
}
// Unmarshal decodes the Protocol Buffers binary format message in p and populates m with the
// result.
func (m *Example) Unmarshal(p []byte) error {
    if len(p) == 0 {
        return fmt.Errorf("cannot unmarshal from empty buffer")
    }
    var (
        tag int
        wt  csproto.WireType
        err error
    )
    dec := csproto.NewDecoder(p)
    for dec.More() {
        tag, wt, err = dec.DecodeTag()
        if err != nil {
            return err
        }
        switch tag {
        case 1: // Name
            if wt != csproto.WireTypeLengthDelimited {
                return fmt.Errorf("invalid message data, expected wire type 2 for tag 1, got %v", wt)
            }
            if v, err := dec.DecodeString(); err != nil {
                return fmt.Errorf("unable to decode string value for tag 1: %w", err)
            } else {
                m.Name = csproto.String(v)
            }
        case 2: // Result
            if wt != csproto.WireTypeVarint {
                return fmt.Errorf("invalid message data, expected wire type 0 for tag 2, got %v", wt)
            }
            if v, err := dec.DecodeInt32(); err != nil {
                return fmt.Errorf("unable to decode int32 value for tag 2: %w", err)
            } else {
                m.Result = csproto.Int32(v)
            }
        default: // unrecognized/unknown field
            if skipped, err := dec.Skip(tag, wt); err != nil {
                return fmt.Errorf("invalid operation skipping tag %v: %w", tag, err)
            } else {
                m.unknownFields = append(m.unknownFields, skipped...)
            }
        }
    }
    return nil
}
Final Benchmarks
After invoking protoc-gen-fastmarshal, the final benchmarks for our examples are:
$ go test -run='^$' -bench=. -benchmem ./proto2 ./proto3
goos: darwin
goarch: amd64
pkg: github.com/CrowdStrike/csproto/example/proto2
cpu: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
BenchmarkEncodeGogo-12 1357027 858.9 ns/op 352 B/op 2 allocs/op
BenchmarkCustomEncodeGogo-12 2116568 570.4 ns/op 176 B/op 1 allocs/op
BenchmarkEncodeGoogleV1-12 1267740 934.6 ns/op 176 B/op 1 allocs/op
BenchmarkCustomEncodeGoogleV1-12 1576399 762.9 ns/op 176 B/op 1 allocs/op
BenchmarkEncodeGoogleV2-12 1308109 913.9 ns/op 176 B/op 1 allocs/op
BenchmarkCustomEncodeGoogleV2-12 1641061 738.2 ns/op 176 B/op 1 allocs/op
PASS
ok github.com/CrowdStrike/csproto/example/proto2 12.247s
goos: darwin
goarch: amd64
pkg: github.com/CrowdStrike/csproto/example/proto3
cpu: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
BenchmarkEncodeGogo-12 3349590 352.8 ns/op 160 B/op 2 allocs/op
BenchmarkCustomEncodeGogo-12 5543668 221.5 ns/op 80 B/op 1 allocs/op
BenchmarkEncodeGoogleV1-12 3090246 389.5 ns/op 80 B/op 1 allocs/op
BenchmarkCustomEncodeGoogleV1-12 5504506 215.4 ns/op 80 B/op 1 allocs/op
BenchmarkEncodeGoogleV2-12 3222398 367.3 ns/op 80 B/op 1 allocs/op
BenchmarkCustomEncodeGoogleV2-12 5384648 218.6 ns/op 80 B/op 1 allocs/op
PASS
ok github.com/CrowdStrike/csproto/example/proto3 9.186s
As you can see in the table below, the optimized code is faster across the board.
Runtime | proto2 | proto3 |
---|---|---|
Gogo | -33.6% | -37.2% |
Google V1 | -18.4% | -45.0% |
Google V2 | -19.2% | -40.5% |
gRPC

To use csproto with gRPC, you will need to register it as the codec. NOTE: if messages do not implement Marshaler or Unmarshaler, an error will be returned. An example is below.
For more information, see the gRPC documentation.
import (
    "github.com/CrowdStrike/csproto"
    "google.golang.org/grpc/encoding"
    _ "google.golang.org/grpc/encoding/proto"
)

func init() {
    encoding.RegisterCodec(csproto.GrpcCodec{})
}
`protoc-gen-fastmarshal` generated code fails to unmarshal empty protov2 message
Title
protoc-gen-fastmarshal generated code fails to unmarshal empty protov2 message

Version
0.14.0

Description
Reflection-free generated code (*.pb.fm.go) doesn't seem to support unmarshaling empty messages. The Unmarshal method first validates that the buffer isn't empty, so an empty message cannot be unmarshaled.

Repro steps
- Generate *.pb.fm.go files with --fastmarshal_opt=apiversion=v2,paths=source_relative
- Register the codec with encoding.RegisterCodec(csproto.GrpcCodec{})
Handle double and float

This should handle double and float primitives.

I couldn't get repeated fields to work properly. The only hint I have is that adding +4 to Size() seems to fix the issue with repeated doubles, but since I'm unsure why, I'm not going to add any magic constants.
float32 and double not supported?

Description
Generating code for a message with a double field fails with panic(fmt.Errorf("unsupported proto kind 'double'")), while code is properly generated for every other type.

Repro steps
Run protoc with fastmarshal_out against a .proto file that uses a double field.
fix: correct invalid generated code for empty messages or messages with float/double fields

Also adds a specialname=... param to allow for mapping a Size field in .proto to Size_ in the generated Go code when using Gogo.

Handle WKTs in JSONMarshaler
Title
Handle well known types in JSON Marshaler

Version
v0.7.1

Description
When JSON marshaling a proto message, I expected a timestamppb field to be encoded as an RFC 3339 string. Instead, I observed timestamppb encoded as an object of {"seconds":"","nanos":""}.

Repro steps
chore(deps): bump golangci/golangci-lint-action from 3.1.0 to 3.2.0
Bumps golangci/golangci-lint-action from 3.1.0 to 3.2.0.
Empty messages generate bad syntax
Title
Empty messages generate bad syntax

Version
Latest

Description
The code generated for an empty message declares the enc variable but never uses it, so the compilation fails.

Repro steps
Using --fastmarshal_out=apiversion=v2
chore(deps): bump github.com/Masterminds/sprig/v3 from 3.2.2 to 3.2.3
Bumps github.com/Masterminds/sprig/v3 from 3.2.2 to 3.2.3.
fix: use deduceMsgType for MarshalJSON
Previously, MarshalJSON would determine the correct marshaling action by asserting the given interface{} against specific interface implementations. Unfortunately, googlev1.Message and gogo.Message have overlapping interface implementations, so the MarshalJSON function would never reach the gogo.Message case.

Fortunately, deduceMsgType already exists in the code base and addresses this issue. The fix uses it to determine the message type and then selects the marshaling action via a switch statement on its output.

chore(deps): bump github.com/gofrs/uuid from 4.3.0+incompatible to 4.3.1+incompatible in /example
Bumps github.com/gofrs/uuid from 4.3.0+incompatible to 4.3.1+incompatible.
chore(deps): bump github.com/stretchr/testify from 1.8.0 to 1.8.1 in /example
Bumps github.com/stretchr/testify from 1.8.0 to 1.8.1.
feat: Enable partially/lazily decoding Protobuf messages
This PR adds a new lazyproto sub-package that provides the ability to lazily decode only a portion of a Protobuf message (identified by the integer field tags). The use case is for consumers that need to read only a few fields from a large message where decoding the entire message would be expensive, such as with deeply nested message fields.

The provided API consists of:
- a Def type that defines which scalar fields should be extracted from the message
- a DecodeResult type that wraps the partially decoded message data and provides methods to retrieve typed values
- a Decode([]byte, *Def) function that extracts the requested fields and returns a lazyproto.DecodeResult or an error

It also includes several bug fixes in decoder.go for bounds-checking edge cases and for decoding 10-byte varint values, both exposed while building out this feature.
Benchmark Encoder/Decoder against VT Protobuf lib

We have benchmarks against gogo, google/v1, and google/v2. For the sake of completeness, we should also benchmark against vtprotobuf to provide a holistic view.