Connect
Connect is a slim library for building browser and gRPC-compatible HTTP APIs. You write a short Protocol Buffer schema and implement your application logic, and Connect generates code to handle marshaling, routing, compression, and content type negotiation. It also generates an idiomatic, type-safe client. Handlers and clients support three protocols: gRPC, gRPC-Web, and Connect's own protocol.
The Connect protocol is a simple, POST-only protocol that works over HTTP/1.1 or HTTP/2. It takes the best portions of gRPC and gRPC-Web, including streaming, and packages them into a protocol that works equally well in browsers, monoliths, and microservices. Calling a Connect API is as easy as using curl. Try it with our live demo:
curl \
--header "Content-Type: application/json" \
--data '{"sentence": "I feel happy."}' \
https://demo.connect.build/buf.connect.demo.eliza.v1.ElizaService/Say
Handlers and clients also support the gRPC and gRPC-Web protocols, including streaming, headers, trailers, and error details. gRPC-compatible server reflection and health checks are available as standalone packages. Instead of cURL, we could call our API with grpcurl:
go install github.com/fullstorydev/grpcurl/cmd/[email protected]
grpcurl \
-d '{"sentence": "I feel happy."}' \
demo.connect.build:443 \
buf.connect.demo.eliza.v1.ElizaService/Say
Under the hood, Connect is just Protocol Buffers and the standard library: no custom HTTP implementation, no new name resolution or load balancing APIs, and no surprises. Everything you already know about net/http still applies, and any package that works with an http.Server, http.Client, or http.Handler also works with Connect.
For more on Connect, see the announcement blog post, the documentation on connect.build (especially the Getting Started guide for Go), the demo service, or the protocol specification.
A small example
Curious what all this looks like in practice? From a Protobuf schema, we generate a small RPC package. Using that package, we can build a server:
package main

import (
	"context"
	"log"
	"net/http"

	"github.com/bufbuild/connect-go"
	pingv1 "github.com/bufbuild/connect-go/internal/gen/connect/ping/v1"
	"github.com/bufbuild/connect-go/internal/gen/connect/ping/v1/pingv1connect"
	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

type PingServer struct {
	pingv1connect.UnimplementedPingServiceHandler // returns errors from all methods
}

func (ps *PingServer) Ping(
	ctx context.Context,
	req *connect.Request[pingv1.PingRequest],
) (*connect.Response[pingv1.PingResponse], error) {
	// connect.Request and connect.Response give you direct access to headers and
	// trailers. No context-based nonsense!
	log.Println(req.Header().Get("Some-Header"))
	res := connect.NewResponse(&pingv1.PingResponse{
		// req.Msg is a strongly-typed *pingv1.PingRequest, so we can access its
		// fields without type assertions.
		Number: req.Msg.Number,
	})
	res.Header().Set("Some-Other-Header", "hello!")
	return res, nil
}

func main() {
	mux := http.NewServeMux()
	// The generated constructors return a path and a plain net/http
	// handler.
	mux.Handle(pingv1connect.NewPingServiceHandler(&PingServer{}))
	err := http.ListenAndServe(
		"localhost:8080",
		// For gRPC clients, it's convenient to support HTTP/2 without TLS. You can
		// avoid x/net/http2 by using http.ListenAndServeTLS.
		h2c.NewHandler(mux, &http2.Server{}),
	)
	log.Fatalf("listen failed: %v", err)
}
With that server running, you can make requests with any gRPC or Connect client. To write a client using connect-go:
package main

import (
	"context"
	"log"
	"net/http"

	"github.com/bufbuild/connect-go"
	pingv1 "github.com/bufbuild/connect-go/internal/gen/connect/ping/v1"
	"github.com/bufbuild/connect-go/internal/gen/connect/ping/v1/pingv1connect"
)

func main() {
	client := pingv1connect.NewPingServiceClient(
		http.DefaultClient,
		"http://localhost:8080/",
	)
	req := connect.NewRequest(&pingv1.PingRequest{
		Number: 42,
	})
	req.Header().Set("Some-Header", "hello from connect")
	res, err := client.Ping(context.Background(), req)
	if err != nil {
		log.Fatalln(err)
	}
	log.Println(res.Msg)
	log.Println(res.Header().Get("Some-Other-Header"))
}
Of course, http.ListenAndServe and http.DefaultClient aren't fit for production use! See Connect's deployment docs for a guide to configuring timeouts, connection pools, observability, and h2c.
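By way of illustration only, a configured server and client might look roughly like the sketch below. The specific timeout and pool values are placeholders, not recommendations, and per-RPC deadlines via context are usually a better fit for streaming calls.

package main

import (
	"net/http"
	"time"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

func main() {
	mux := http.NewServeMux()
	// Register generated handlers on the mux as shown above, e.g.
	// mux.Handle(pingv1connect.NewPingServiceHandler(&PingServer{}))

	// Server: set explicit timeouts instead of relying on the zero-value defaults.
	srv := &http.Server{
		Addr:              "localhost:8080",
		Handler:           h2c.NewHandler(mux, &http2.Server{}),
		ReadHeaderTimeout: 5 * time.Second,
		IdleTimeout:       2 * time.Minute,
	}

	// Client: pass a configured http.Client to the generated constructor
	// instead of http.DefaultClient.
	httpClient := &http.Client{
		Transport: &http.Transport{MaxIdleConnsPerHost: 16},
	}
	_ = httpClient // e.g. pingv1connect.NewPingServiceClient(httpClient, "http://localhost:8080/")

	if err := srv.ListenAndServe(); err != nil {
		panic(err)
	}
}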
Ecosystem
- connect-grpchealth-go: gRPC-compatible health checks
- connect-grpcreflect-go: gRPC-compatible server reflection (a sketch wiring both of these into a mux follows this list)
- connect-demo: demonstration service powering demo.connect.build, including bidi streaming
- connect-web: TypeScript clients for web browsers
- Buf Studio: web UI for ad-hoc RPCs
- connect-crosstest: gRPC and gRPC-Web interoperability tests
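As a sketch of how the standalone health and reflection packages plug into the same mux as the generated handlers; the import paths, constructor names, and service name below are my assumptions from those repositories and may differ between versions:

package main

import (
	"net/http"

	grpchealth "github.com/bufbuild/connect-grpchealth-go"
	grpcreflect "github.com/bufbuild/connect-grpcreflect-go"
)

func main() {
	mux := http.NewServeMux()
	// Like the generated constructors, these return a path and a plain
	// net/http handler.
	mux.Handle(grpchealth.NewHandler(
		grpchealth.NewStaticChecker("connect.ping.v1.PingService"),
	))
	mux.Handle(grpcreflect.NewHandlerV1(
		grpcreflect.NewStaticReflector("connect.ping.v1.PingService"),
	))
	if err := http.ListenAndServe("localhost:8080", mux); err != nil {
		panic(err)
	}
}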
Status
This module is a beta: we rely on it in production, but we may make a few changes as we gather feedback from early adopters. We're planning to tag a stable v1 in October, soon after the Go 1.19 release.
Support and versioning
connect-go supports:
- The two most recent major releases of Go.
- APIv2 of Protocol Buffers in Go (google.golang.org/protobuf).
Within those parameters, Connect follows semantic versioning.
Legal
Offered under the Apache 2 license.
Unify Request and Response into Message
Originally, we expected the generic Request and Response types to diverge quite a bit. In practice, they've ended up nearly identical. The methods we anticipate adding (primarily DisableCompression()) apply equally to both. The code for the two types is so similar that we're often making near-identical changes to their code. (For example, supporting trailers required verbatim copies across the two types.)
This commit unifies the two types into connect.Message. We can then unify AnyRequest/AnyResponse and ReceiveRequest/ReceiveResponse. Since Request.Msg was never @bufdev's favorite and Message.Msg is even worse, I've renamed it to Message.Body - but I'm totally open to suggestions for a better field name. After this PR, we've slimmed down connect's exported API quite a bit. On my monitor, the GoDoc table of contents now fits (barely) on one screen.
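For illustration only, the unified envelope might look roughly like this; the Message and Body names come from the discussion above, while everything else is a hypothetical sketch rather than the final API:

package connect

import "net/http"

// Message is a sketch of a single generic envelope that could replace both
// Request[T] and Response[T]: a typed payload plus lazily initialized headers
// and trailers, usable on either side of an RPC.
type Message[T any] struct {
	Body    *T
	header  http.Header
	trailer http.Header
}

func (m *Message[T]) Header() http.Header {
	if m.header == nil {
		m.header = make(http.Header)
	}
	return m.header
}

func (m *Message[T]) Trailer() http.Header {
	if m.trailer == nil {
		m.trailer = make(http.Header)
	}
	return m.trailer
}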
Implement gRPC's standard interop tests
The first-party gRPC implementations have a standardized battery of interoperability tests. Many of them test particular flags or option schemes that may not apply to us, but let's see if we can get some useful test coverage from them.
https://github.com/grpc/grpc/blob/master/doc/interop-test-descriptions.md
Random Compression Method Selection
Say you register 4 compression methods on both the handler and the client.
It seems like the server picks the first compression method it recognizes, unless a specific method was used when sending the request.
However, looking at the client side, the order in which the method names are sent is random, since they are collected from a map.
As far as I can tell, this makes it impossible to select an order of preference from the client side. Furthermore, it appears to me that gzip cannot be removed from the pool, only replaced.
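For context, Go randomizes map iteration order, so any preference list built by ranging over a map of registered compressors will differ from run to run. A minimal illustration (the map below is a stand-in, not connect's internal data structure):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// A stand-in for a pool of registered compression methods, keyed by name.
	compressors := map[string]bool{"gzip": true, "br": true, "zstd": true, "snappy": true}
	// Ranging over a map yields keys in a randomized order, so the advertised
	// list of acceptable encodings is effectively arbitrary.
	names := make([]string, 0, len(compressors))
	for name := range compressors {
		names = append(names, name)
	}
	fmt.Println(strings.Join(names, ", ")) // order changes between runs
}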
More request information in Interceptor.WrapStreamContext()
Currently there is no way to enrich a stream context with information about the request. The WrapStreamContext() method only accepts a parent context and does not give us any information about the request. In particular, we need access to the endpoint spec and request headers.
Our primary use case for this is authentication, where we would like to be able to apply some generic token parsing and validation logic to all requests, and enrich context with authentication info.
Rename protoc-gen-go-connect to protoc-gen-goconnect/protoc-gen-connectgo?
The go-.* paradigm is really only used by protoc-gen-go-grpc, which is relatively new, and I'd argue that it's not productive. We generally want people to get into the habit of generating to a sub-directory named after their plugin, so we might want to have i.e. internal/gen/proto/goconnect or internal/gen/proto/connectgo, and then name the plugin accordingly.
Note that we did the same go-.* style with our internal plugins for bufbuild/buf (this is on me), so we should probably change that too once we agree on the naming scheme.
Possible to retrieve peer info?
With grpc-go I can get peer info like so:
Is it possible to do this with connect-go? I am trying to get the IP that the request is originating from.
Thanks
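One workaround with the current API is plain net/http middleware around the generated handler: copy http.Request.RemoteAddr into the context before Connect sees the request, then read it inside the RPC method. A sketch, not an official connect-go feature:

package main

import (
	"context"
	"net/http"
)

type remoteAddrKey struct{}

// withRemoteAddr stores the peer's address in the request context before the
// wrapped handler (e.g. a generated Connect handler) runs.
func withRemoteAddr(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ctx := context.WithValue(r.Context(), remoteAddrKey{}, r.RemoteAddr)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

// RemoteAddr retrieves the stored address from the ctx passed to an RPC method.
func RemoteAddr(ctx context.Context) string {
	addr, _ := ctx.Value(remoteAddrKey{}).(string)
	return addr
}

func main() {
	mux := http.NewServeMux()
	// mux.Handle(pingv1connect.NewPingServiceHandler(&PingServer{}))
	if err := http.ListenAndServe("localhost:8080", withRemoteAddr(mux)); err != nil {
		panic(err)
	}
}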
Can't detect MethodOptions
I am currently in the process of migrating an existing gRPC API server to connect-go. It went smoothly so far, great work!
I am actually stuck at one point: in the current implementation, I detect the specified MethodOptions per service and construct ACL rules from them.
Example Proto
Then I use @jhump's "github.com/jhump/protoreflect/grpcreflect" to load the ServiceDescriptors and iterate through them. This requires a grpc.Server instance, which is not available anymore.
Any hints on how to get access to the MethodOptions with a connect-go implementation are welcome.
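One possible approach that doesn't need a grpc.Server (a sketch using google.golang.org/protobuf's global registry rather than anything connect-go-specific; the service name below is just an example):

package main

import (
	"fmt"

	"google.golang.org/protobuf/reflect/protoreflect"
	"google.golang.org/protobuf/reflect/protoregistry"
)

func main() {
	// Generated .pb.go files register their descriptors at init time (importing
	// the generated package, even with a blank import, triggers registration),
	// so the service can be looked up by its fully-qualified name.
	desc, err := protoregistry.GlobalFiles.FindDescriptorByName(
		protoreflect.FullName("connect.ping.v1.PingService"),
	)
	if err != nil {
		panic(err)
	}
	svc := desc.(protoreflect.ServiceDescriptor)
	methods := svc.Methods()
	for i := 0; i < methods.Len(); i++ {
		m := methods.Get(i)
		// m.Options() is a *descriptorpb.MethodOptions; custom options can be
		// read from it with proto.GetExtension.
		fmt.Println(m.FullName(), m.Options())
	}
}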
Support dynamic input type and output type
Is your feature request related to a problem? Please describe.
I'm trying to build a Connect gateway that supports both POST and gRPC.
The proto is dynamic, generated from gRPC reflection, but the Connect handler cannot use the dynamic type because of handler code like this:
I think connect-go could support dynamic Protobuf types: the input and output types would be based on a reflected MessageType, perhaps passed via a HandlerOption, so that NewUnaryHandler[any, any]() can work as expected.
Describe the solution you'd like
Describe alternatives you've considered
Additional context
The Connect protocol makes gRPC easier, but for migration we need a gateway to translate the protocol for other languages that don't speak the Connect protocol.
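For what it's worth, dynamic messages can already be built with google.golang.org/protobuf's dynamicpb package from a descriptor obtained via reflection; the missing piece is a way to hand them to the generic handlers. A sketch of constructing one (the message name is just an example):

package main

import (
	"fmt"

	"google.golang.org/protobuf/encoding/protojson"
	"google.golang.org/protobuf/reflect/protoreflect"
	"google.golang.org/protobuf/reflect/protoregistry"
	"google.golang.org/protobuf/types/dynamicpb"
)

func main() {
	// Resolve a message descriptor at runtime (e.g. fetched via server reflection).
	desc, err := protoregistry.GlobalFiles.FindDescriptorByName(
		protoreflect.FullName("connect.ping.v1.PingRequest"),
	)
	if err != nil {
		panic(err)
	}
	// A dynamic message marshals and unmarshals like any generated type.
	msg := dynamicpb.NewMessage(desc.(protoreflect.MessageDescriptor))
	if err := protojson.Unmarshal([]byte(`{"number": "42"}`), msg); err != nil {
		panic(err)
	}
	fmt.Println(protojson.Format(msg))
}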
How to implement a "full-stream" interceptor?
Is your feature request related to a problem? Please describe.
I'm struggling to understand how to implement a logging/tracing interceptor for full client/server-side streams, similar to how it was done with grpc-go - for example, Elasticsearch's APM tracing: https://github.com/elastic/apm-agent-go/blob/main/module/apmgrpc/server.go#L111. If I embed the tracing information using WrapStreamContext (which is also missing something like Spec, so the interceptor can identify which stream is being called), how would I go about closing the transaction when the stream ends?
Describe the solution you'd like
I'd like a solution that allows tracking the duration and result of streams, similar to what's possible for unary calls.
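Until the interceptor API grows a hook that spans a whole stream, one workaround is ordinary net/http middleware around the generated handler: a streaming RPC is a single HTTP request, so the handler only returns once the stream has finished. A sketch:

package main

import (
	"log"
	"net/http"
	"time"
)

// timed measures the full lifetime of each request, which for a streaming RPC
// covers the entire stream rather than just its first message.
func timed(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r)
		log.Printf("%s finished after %v", r.URL.Path, time.Since(start))
	})
}

func main() {
	mux := http.NewServeMux()
	// mux.Handle(pingv1connect.NewPingServiceHandler(&PingServer{}))
	if err := http.ListenAndServe("localhost:8080", timed(mux)); err != nil {
		log.Fatal(err)
	}
}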
Fixed README examples; set up autogeneration for them using the mdox tool.
Hey!
Huge fan here, this project is amazing! 💪🏽 It has lots of features (3 protocols in one), so I think the examples have to be clear. I found the existing ones obsolete, so I recreated them and committed them with tooling that autogenerates them in the README whenever they change.
make test also checks that the examples build, which will keep them up to date! Hopefully that will help make this project more accessible. Cheers, keep up the good work!
I hope you don't mind my adding an auto-formatter for the README. It ensures consistency and, for example, puts each paragraph on a single line (all IDEs handle that just fine, so there's no point in manually adjusting the text width).
Signed-off-by: Bartlomiej Plotka [email protected]
Evaluate Style Guide
Package Structure
- The codec and compress packages are split out so that we can more easily add a separate package for the Protocol abstraction (e.g. gRPC, gRPC-Web, Connect) without introducing an import cycle.
- The clientstream and handlerstream packages are split out so that we can tidy up names for these types. Otherwise, you'd have names like NewClientClientStream (the client's view of a client streaming endpoint).
- compress.Gzip instead of compress.GzipCompressor.
- compress/gzip.Compressor.
Method Naming
- The ReceivedHeader method is named to distinguish it from the request headers. Should we instead just name this ResponseHeader for clarity?
- RequestHeader and ResponseHeader.
- WrappedPingClient and UnwrappedPingClient interfaces. PingClient is reserved for the type that users interact with.
- PingService (acts upon generic types) and NewPingService (acts upon any). Comments are left in-line to describe how to implement the simple, non-generic method signatures.
Future Proofing
- Client options (like Codec and Compressor) are propagated through the clientCfg and into the protocol abstraction via a catch-all protocolClientParams. If we eventually plan to export the protocol abstraction and include it as a ClientOption, the relationship here is fuzzy. What happens with a protocol that doesn't interact with all of the protocolClientParams types - is it a no-op, an error, or a silent failure?
- We could configure each protocol individually to clear these relationships up, but we end up with some more repetition (i.e. we need to repeat the same options for similar protocols like gRPC and gRPC-Web). For example, each of the gRPC and gRPC-Web protocols would have a Codec option. In the connect-api branch, we were able to get around this because the protocols were separate client implementations, and they could each individually own what options they exposed (e.g. here).
- Error details are proto.Any, and Akshay had some ideas around this - we could rename the methods to include Any so it leaves room for us to add other types later (e.g. AddAnyDetail). The abstraction violation between the pluggable Codec and the proto.Any sucks, but I know we're at the mercy of the gRPC protocol here.
1.0 Features
- The health and reflection packages are valuable, but they aren't really necessary to begin with. We should consider whether or not these packages should exist in a separate repository (similar to gRPC middleware repositories). These packages rely on the RegistrationName in the generated code: if we were to drop this support to begin with, we'd need to reintroduce it as a HandlerOption later, and that's tied to the connect library itself. It's not immediately obvious how this would work. For now, health and reflection are staying where they are. We need these features for easy gRPC user adoption. To be clear, health is non-optional. reflection is a huge quality-of-life improvement and it's (nearly) part of the gRPC specification at this point.
- connect.MaxHeaderBytes is kinda nice, but doesn't feel necessary and is prone to change across different protocols.
- Should connect.ReceiveResponse be in an internal package? It's only used twice and otherwise distracts from the API that users ought to interact with. This might already be your plan based on the conversations we had earlier about the user-facing API and connect internals. ReceiveRequest needs to be exported for the generated code, so I can see an argument to export it for symmetry.
- IsValidHeaderKey and IsValidHeaderValue.
Implementation Details
- In the connect-api branch, I left a note for myself about whether or not the discard helper function can hang forever (e.g. discard(cs.response.Body)). This might have happened when I introduced the gRPC testing suite, but I can't recall. We need to make sure this can't happen.
- We need to read the http.Response.Body to completion to reuse the connection. This is also just an implementation detail, so it's not blocking regardless.
pkg.go.dev example doesn't compile
Describe the bug
The example doesn't work here: https://pkg.go.dev/github.com/bufbuild/connect-go#example-package-Handler
Fails with the error
The solution here would be either
private
From the docs here
client side of server streaming call does not always drain HTTP response body
When invoking a server or bidi stream using the Connect or gRPC-Web protocols, the response stream is only read to the end of the "end of stream" frame. It is never verified that the body contains no more data. The standard gRPC protocol does not exhibit this issue, since it must drain the body fully and read trailers in order to get the RPC status.
This is an issue when trying to use HTTP middleware that wraps the body reader. The middleware will never detect that the response is finished, because the underlying reader is never fully drained (e.g. read until it returns a non-nil error, typically io.EOF). This also means that it is possible for the server to write additional content, and thus send back a corrupt/invalid response body, and the RPC client will not notice. (It's unclear what action should be taken in this case -- for example, whether this should result in an RPC wire error, especially if the call otherwise succeeded.)
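For reference, the usual idiom for client code that wants body-wrapping middleware to observe the end of the response (and wants the transport to reuse the connection) is to drain the body until a non-nil error before closing it; a generic sketch, not connect-go's internals:

package main

import (
	"io"
	"net/http"
)

// drainAndClose reads resp.Body to completion and then closes it. Reading
// until a non-nil error (typically io.EOF) is what lets body-wrapping
// middleware see that the response is finished.
func drainAndClose(resp *http.Response) error {
	if _, err := io.Copy(io.Discard, resp.Body); err != nil {
		resp.Body.Close()
		return err
	}
	return resp.Body.Close()
}

func main() {
	resp, err := http.Get("https://demo.connect.build/")
	if err != nil {
		panic(err)
	}
	if err := drainAndClose(resp); err != nil {
		panic(err)
	}
}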
Expose *http.Request to server Peer
In some cases, we need access to the underlying *http.Request in the handler.
For example, we need access to the underlying request.TLS to identify peer identity.
expose an API for constructing a wire error
When using connect.NewError, it is not possible to create an error such that err.IsWireError() will return true.
This capability is particularly useful for dealing with bidi streaming APIs that represent a conversation, where each response message correlates with a request message. It is common for these kinds of APIs to support partial success by having a response message indicate a failure just for that one request message. Often such APIs use google.rpc.Status in the response message to indicate a possible failure. Sometimes they do not.
For this sort of use, it is commonplace for the calling Go code to translate that status into an error. Since the error was received over the wire during an RPC call, it should technically be a wire error, so that any other handling/propagation logic higher up the stack can react to it correctly.
This could be addressed as easily as introducing a new NewWireError function with the same signature and semantics as NewError, except that the returned error will return true when err.IsWireError() is called.
Distinguish between "the server ended the stream" and "the connection dropped unexpectedly"
Is your feature request related to a problem? Please describe.
We're using the connect protocol to stream messages from a server to a client. I seem to have misconfigured the reverse proxy in between them, so the connection drops after 30 seconds.
What surprised me, though, was that in this case (*ServerStreamForClient).Err() returns nil, even though the client never received the end-of-stream message. It seems like when the connection drops unexpectedly, the client receives an io.EOF, which is suppressed. If the server closed the stream and the client received the end-of-stream message, then the error is errSpecialEnvelope, which is also suppressed since it wraps io.EOF.
It seems like it's not possible to distinguish between "the server ended the stream" and "the connection dropped unexpectedly".
Describe the solution you'd like
I would like (*ServerStreamForClient).Err() to return nil only if the client received the end-of-stream message from the server. If the client received io.EOF before receiving the end-of-stream message, then I would like it to return the error.
document in the FAQ how client and server might access authenticated identity of the remote
The Peer object only provides an address. For clients, it just echoes back the host that was used in the HTTP request, without returning anything about the remote server's identity. It could at least return the resolved IP, to provide more information when multiple A records are available. But it also provides no way to get the authenticated identity when using TLS. This is easily available in the response object returned from the http.Client or the request object provided to the http.Handler.
The gRPC version of this type has a generic AuthInfo field with an interface type, and users can then try to type-assert to a specific implementation. The idea is that different authn mechanisms might be used to authenticate parties (like JWTs or other custom auth cookies for client authn), so the representation needs to be flexible enough that an interceptor could provide the identity (instead of hard-coding the representation used in mutually authenticated TLS). Speaking of which, there is no way for an authenticating interceptor to override the peer, since there is no exported setter (or constructor that allows setting it).
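For reference, the identity itself is already reachable through net/http, independent of connect-go's Peer type; a sketch of reading the client certificate on the server side (the certificate file names are placeholders, and the server must be configured to request client certificates for mutual TLS):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	// In practice this would wrap a generated Connect handler; a plain handler
	// keeps the sketch short.
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// r.TLS carries the handshake state, including any client certificates
		// presented during mutual TLS.
		if r.TLS != nil && len(r.TLS.PeerCertificates) > 0 {
			fmt.Fprintln(w, "client:", r.TLS.PeerCertificates[0].Subject.CommonName)
		}
	})
	// On the client side, the equivalent state is in http.Response.TLS.
	if err := http.ListenAndServeTLS("localhost:8443", "cert.pem", "key.pem", mux); err != nil {
		panic(err)
	}
}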