Simple, reliable, interoperable. A better gRPC.

  • By Buf
  • Last update: Jan 2, 2023
  • Comments: 17



Connect is a slim library for building browser and gRPC-compatible HTTP APIs. You write a short Protocol Buffer schema and implement your application logic, and Connect generates code to handle marshaling, routing, compression, and content type negotiation. It also generates an idiomatic, type-safe client. Handlers and clients support three protocols: gRPC, gRPC-Web, and Connect's own protocol.

The Connect protocol is a simple, POST-only protocol that works over HTTP/1.1 or HTTP/2. It takes the best portions of gRPC and gRPC-Web, including streaming, and packages them into a protocol that works equally well in browsers, monoliths, and microservices. Calling a Connect API is as easy as using curl. Try it with our live demo:

curl \
    --header "Content-Type: application/json" \
    --data '{"sentence": "I feel happy."}' \

Handlers and clients also support the gRPC and gRPC-Web protocols, including streaming, headers, trailers, and error details. gRPC-compatible server reflection and health checks are available as standalone packages. Instead of cURL, we could call our API with grpcurl:

go install github.com/fullstorydev/grpcurl/cmd/grpcurl@latest
grpcurl \
    -d '{"sentence": "I feel happy."}' \

Under the hood, Connect is just Protocol Buffers and the standard library: no custom HTTP implementation, no new name resolution or load balancing APIs, and no surprises. Everything you already know about net/http still applies, and any package that works with an http.Server, http.Client, or http.Handler also works with Connect.

For more on Connect, see the announcement blog post, the documentation (especially the Getting Started guide for Go), the demo service, or the protocol specification.

A small example

Curious what all this looks like in practice? From a Protobuf schema, we generate a small RPC package. Using that package, we can build a server:

package main

import (
  "context"
  "log"
  "net/http"

  "github.com/bufbuild/connect-go"
  pingv1 "github.com/bufbuild/connect-go/internal/gen/connect/ping/v1"
  "github.com/bufbuild/connect-go/internal/gen/connect/ping/v1/pingv1connect"
  "golang.org/x/net/http2"
  "golang.org/x/net/http2/h2c"
)

type PingServer struct {
  pingv1connect.UnimplementedPingServiceHandler // returns errors from all methods
}

func (ps *PingServer) Ping(
  ctx context.Context,
  req *connect.Request[pingv1.PingRequest],
) (*connect.Response[pingv1.PingResponse], error) {
  // connect.Request and connect.Response give you direct access to headers and
  // trailers. No context-based nonsense!
  res := connect.NewResponse(&pingv1.PingResponse{
    // req.Msg is a strongly-typed *pingv1.PingRequest, so we can access its
    // fields without type assertions.
    Number: req.Msg.Number,
  })
  res.Header().Set("Some-Other-Header", "hello!")
  return res, nil
}

func main() {
  mux := http.NewServeMux()
  // The generated constructors return a path and a plain net/http
  // handler.
  mux.Handle(pingv1connect.NewPingServiceHandler(&PingServer{}))
  err := http.ListenAndServe(
    "localhost:8080",
    // For gRPC clients, it's convenient to support HTTP/2 without TLS. You can
    // avoid x/net/http2 by using http.ListenAndServeTLS.
    h2c.NewHandler(mux, &http2.Server{}),
  )
  log.Fatalf("listen failed: %v", err)
}

With that server running, you can make requests with any gRPC or Connect client. To write a client using connect-go:

package main

import (
  "context"
  "log"
  "net/http"
  "github.com/bufbuild/connect-go"
  pingv1 "github.com/bufbuild/connect-go/internal/gen/connect/ping/v1"
  "github.com/bufbuild/connect-go/internal/gen/connect/ping/v1/pingv1connect"
)

func main() {
  client := pingv1connect.NewPingServiceClient(
    http.DefaultClient,
    "http://localhost:8080",
  )
  req := connect.NewRequest(&pingv1.PingRequest{
    Number: 42,
  })
  req.Header().Set("Some-Header", "hello from connect")
  res, err := client.Ping(context.Background(), req)
  if err != nil {
    log.Fatalln(err)
  }
  log.Println(res.Msg)
}

Of course, http.ListenAndServe and http.DefaultClient aren't fit for production use! See Connect's deployment docs for a guide to configuring timeouts, connection pools, observability, and h2c.



This module is a beta: we rely on it in production, but we may make a few changes as we gather feedback from early adopters. We're planning to tag a stable v1 in October, soon after the Go 1.19 release.

Support and versioning

connect-go supports:

Within those parameters, Connect follows semantic versioning.


Offered under the Apache 2 license.



  • 1

    Unify Request and Response into Message

    Originally, we expected the generic Request and Response types to diverge quite a bit. In practice, they've ended up nearly identical. The methods we anticipate adding (primarily DisableCompression()) apply equally to both.

    The code for the two types is so similar that we're often making near-identical changes to their code. (For example, supporting trailers required verbatim copies across the two types.)

    This commit unifies the two types into connect.Message. We can then unify AnyRequest/AnyResponse and ReceiveRequest/ReceiveResponse. Since Request.Msg was never @bufdev's favorite and Message.Msg is even worse, I've renamed it to Message.Body - but I'm totally open to suggestions for a better field name.

    After this PR, we've slimmed down connect's exported API quite a bit. On my monitor, the GoDoc table of contents now fits (barely) on one screen.

  • 2

    Implement gRPC's standard interop tests

    The first-party gRPC implementations have a standardized battery of interoperability tests. Many of them test particular flags or option schemes that may not apply to us, but let's see if we can get some useful test coverage from them.

  • 3

    Random Compression Method Selection

    Say you register 4 compression methods on both the handler and the client.

    It seems like the server picks the first compression method it recognizes, unless a specific method was used when sending the request.

    However, looking at the client side, the order in which the methods are sent is random, since it collects names from a map.

    As far as I can tell, this makes it impossible to select an order of preference from the client side. Furthermore, it appears to me that gzip cannot be removed from the pool, only replaced.

  • 4

    More request information in Interceptor.WrapStreamContext()

    Currently there is no way to enrich a stream context with information about the request. The WrapStreamContext() method only accepts a parent context and does not give us any information about the request.

    In particular, we need access to the endpoint spec and request headers.

    Our primary use case for this is authentication, where we would like to be able to apply some generic token parsing and validation logic to all requests, and enrich context with authentication info.
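For unary RPCs, something close to this use case is already expressible with connect-go's interceptor API; a sketch, where validateToken and userKey are hypothetical application code (streams are exactly what this issue says lacks an equivalent):

```go
// Sketch only: validateToken and userKey are hypothetical.
authenticate := connect.UnaryInterceptorFunc(func(next connect.UnaryFunc) connect.UnaryFunc {
	return func(ctx context.Context, req connect.AnyRequest) (connect.AnyResponse, error) {
		user, err := validateToken(req.Header().Get("Authorization"))
		if err != nil {
			return nil, connect.NewError(connect.CodeUnauthenticated, err)
		}
		// Enrich the context with the authenticated identity.
		return next(context.WithValue(ctx, userKey{}, user), req)
	}
})
// Attach with connect.WithInterceptors(authenticate) when constructing the handler.
```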

  • 5

    Rename protoc-gen-go-connect to protoc-gen-goconnect/protoc-gen-connectgo?

    The go-.* paradigm is really only used by protoc-gen-go-grpc, which is relatively new, and I'd argue that it's not productive. We generally want people to get into the habit of generating to a sub-directory named after their plugin, so we might want e.g. /internal/gen/proto/goconnect or internal/gen/proto/connectgo, and then name the plugin accordingly.

    Note that we did the same go-.* style with our internal plugins for bufbuild/buf (this is on me), so we should probably change that too once we agree on the naming scheme.

  • 6

    Possible to retrieve peer info?

    With grpc-go I can get peer info like so:

    p, ok := peer.FromContext(ctx)
    if !ok {
        return nil, errors.New("could not get peer from context")
    }
    addr := p.Addr.String()

    Is it possible to do this with connect-go?

    I am trying to get the IP that the request is originating from.


  • 7

    Can't detect MethodOptions

    I am currently in the process of migrating an existing gRPC API server to connect-go. It has gone smoothly so far, great work!

    I am stuck at one point: in the current implementation, I detect the specified MethodOptions per service and construct ACL rules from them.

    Example Proto

    service HealthService {
      rpc Get(HealthServiceGetRequest) returns (HealthServiceGetResponse) {
        option (visibility) = VISIBILITY_PUBLIC;
      }
    }

    Then I use @jhump's "" to load the ServiceDescriptors and iterate through them. This requires a grpc.Server instance, which is no longer available.

    Any hints on how to get access to the MethodOptions with a connect-go implementation are welcome.
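Since generated Go code registers its descriptors in the global protoregistry, one workaround (a sketch; yourpb.E_Visibility stands in for whatever extension type your proto plugin generates) is to look the method up there instead of going through a grpc.Server:

```go
// Sketch: resolve MethodOptions from the global registry.
func methodVisibility(name protoreflect.FullName) (interface{}, error) {
	// e.g. name = "your.pkg.HealthService.Get"
	d, err := protoregistry.GlobalFiles.FindDescriptorByName(name)
	if err != nil {
		return nil, err
	}
	method, ok := d.(protoreflect.MethodDescriptor)
	if !ok {
		return nil, fmt.Errorf("%s is not a method", name)
	}
	opts := method.Options().(*descriptorpb.MethodOptions)
	// yourpb.E_Visibility is the generated extension descriptor (hypothetical).
	return proto.GetExtension(opts, yourpb.E_Visibility), nil
}
```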

  • 8

    Support dynamic input type and output type

    Is your feature request related to a problem? Please describe.

    Trying to build a connect gateway, support post and grpc.

    The proto is dynamic, generated from gRPC reflection, but the Connect handler cannot use the dynamic type because of handler code like this:

    request = &Request[Req]{
      Msg:    new(Req),
      spec:   receiver.Spec(),
      header: receiver.Header(),
    }

    I think connect-go could support dynamic protobuf types: the input and output types would be based on a reflected MessageType, perhaps passed via a HandlerOption, so that NewUnaryHandler[any, any]() works as expected.

    Additional context

    The Connect protocol makes gRPC easier, but for migration, we need a gateway to translate the protocol for other languages that don't speak the Connect protocol.

  • 9

    How to implement a "full-stream" interceptor?

    Is your feature request related to a problem? Please describe. I'm struggling to understand how to implement a logging/tracing interceptor for the full Client/Server side streams. Something similar to how it was done with grpc-go, example for Elasticsearch's APM tracing:

    If I embed the tracing-information using WrapStreamContext (which is also missing something like Spec, to be able to identify which stream is called), how would I go about closing the transaction on end?

    Describe the solution you'd like I'd like a solution that allows for tracking the duration and result of streams, similar to how it's possible for unary calls.

  • 10

    Fixed README examples; Set autogeneration of them using mdox tool.


    Huge fan here, this project is amazing! πŸ’ͺ🏽 It has lots of features (3 protocols in one), so I think the examples have to be clear. I found the existing ones obsolete, so I recreated them and committed them along with tooling that autogenerates them in the README whenever they change.

    make test also checks that the examples build, which will keep them up to date!

    Hopefully that will help make this project more accessible, cheers. Keep the good work!

    I hope you don't mind my adding an auto-formatter for the README. It ensures consistency and e.g. puts everything on a single line (all IDEs handle that just fine, so there's no point in manually adjusting the text width).

    Signed-off-by: Bartlomiej Plotka [email protected]

  • 11

    Evaluate Style Guide

    Package Structure

    • [x] The codec and compress packages are split out so that we can more easily add a separate package for the Protocol abstraction (e.g. gRPC, gRPC-Web, Connect) without introducing an import cycle.
    • [x] The clientstream and handlerstream packages are split out so that we can tidy up names for these types. Otherwise, you'd have names like NewClientClientStream (the client's view of a client streaming endpoint).
    • [x] We might want it to be compress.Gzip instead of compress.GzipCompressor.
      • Edit: We'll move this to compress/gzip.Compressor.

    Method Naming

    • [x] The stream has a ReceivedHeader method to distinguish itself from the request headers. Should we instead just name this ResponseHeader for clarity?
    • [x] Similarly, let's make it explicit to be RequestHeader and ResponseHeader.
    • [x] As discussed, we need to decide what we're doing with Simple/Full.
      • Client-side: WrappedPingClient and UnwrappedPingClient interfaces. PingClient is reserved for the type that users interact with.
      • Server-side: PingService (acts upon generic types), and NewPingService (acts upon any). Comments are left in-line to describe how to implement the simple, non-generic method signatures.

    Future Proofing

    • [x] Top-level abstractions (e.g. Codec and Compressor) are propagated through the clientCfg and into the protocol abstraction via a catch-all protocolClientParams. If we eventually plan to export the protocol abstraction and include it as a ClientOption, the relationship here is fuzzy.
      • What happens if we ever have a protocol that doesn't interact with all of the protocolClientParams types - is it a no-op, an error, or a silent failure?
      • We could tie these options to each protocol individually to clear these relationships up, but we end up with some more repetition (i.e. we need to repeat the same options for similar protocols like gRPC and gRPC-Web). For example, each of the gRPC and gRPC-Web protocols would have a Codec option.
      • In the connect-api branch, we were able to get around this because the protocols were separate client implementations, and they could each individually own what options they exposed (e.g. here).
    • [x] We still need to figure out error details. I know the gRPC protocol requires the proto.Any, and Akshay had some ideas around this - we could rename the methods to include Any so it leaves room for us to add other types later (e.g. AddAnyDetail). The abstraction violation between the pluggable Codec and the proto.Any sucks, but I know we're at the mercy of the gRPC protocol here.

    1.0 Features

    • [x] The gRPC health and reflection packages are valuable, but they aren't really necessary to begin with. We should consider whether or not these packages should exist in a separate repository (similar to gRPC middleware repositories).
      • I know we need to be mindful of this w.r.t. including the RegistrationName in the generated code. If we were to drop this support to begin with, we'd need to reintroduce this as a HandlerOption later, and that's tied to the connect library itself. It's not immediately obvious how this would work.
      • Decision: health and reflection are staying where they are. We need these features for easy gRPC user adoption. To be clear, health is non-optional. reflection is a huge quality-of-life improvement, and it's (nearly) part of the gRPC specification at this point.
    • [x] connect.MaxHeaderBytes is kinda nice, but doesn't feel necessary and is prone to change across different protocols.
    • [x] Should connect.ReceiveResponse be in an internal package? It's only used twice and otherwise distracts from the API that users ought to interact with. This might already be your plan based on the conversations we had earlier about the user-facing API and connect internals.
      • It looks like ReceiveRequest needs to be exported for the generated code, so I can see an argument to export it for symmetry.
    • [x] Drop IsValidHeaderKey and IsValidHeaderValue.

    Implementation Details

    • [x] In the connect-api branch, I left a note for myself about whether or not the discard helper function can hang forever (e.g. discard(cs.response.Body)). This might have happened when I introduced the gRPC testing suite, but I can't recall. We need to make sure this can't happen.
      • Nothing to do here - this is a consequence of needing to read the http.Response.Body to completion to reuse the connection. This is also just an implementation detail, so it's not blocking regardless.
  • 12

    example doesn't compile

    Describe the bug

    The example doesn't work here:

    It fails with this error:

    go: downloading v1.4.1
    go: downloading v1.28.1
    package play.ground
    	prog.go:8:2: use of internal package not allowed
    package play.ground
    	prog.go:9:2: use of internal package not allowed

    The solution here would be either

    • Move protos from internal to something that isn't restrictive like private
    • Create a BSR module and remove internal/gen completely
    • Create another example in so that it is not rendered as a runnable example

    From the docs here

    To achieve this we can use a β€œwhole file example.” A whole file example is a file that ends in _test.go and contains exactly one example function, no test or benchmark functions, and at least one other package-level declaration. When displaying such examples godoc will show the entire file.

  • 13

    client side of server streaming call does not always drain HTTP response body

    When invoking a server or bidi stream using the Connect or gRPC-Web protocols, the response stream is only read to the end of the "end of stream" frame. It is never verified that the body contains no more data. The standard gRPC protocol does not exhibit this issue, since it must drain the body fully and read trailers in order to get the RPC status.

    This is an issue when trying to use HTTP middleware that wraps the body reader. The middleware will never detect that the response is finished, because the underlying reader is never fully drained (e.g. read until it returns a non-nil error, typically io.EOF). This also means that it is possible for the server to write additional content, and thus send back a corrupt/invalid response body, and the RPC client will not notice. (Unclear what action should be taken in this case -- like whether this should result in an RPC wire error, especially if the call otherwise succeeded.)

  • 14

    Expose *http.Request to server Peer

    In some cases, we need access to the underlying *http.Request in the handler.

    For example, we need access to the underlying request.TLS to identify peer identity.

  • 15

    expose an API for constructing a wire error

    When using connect.NewError, it is not possible to create an error such that err.IsWireError() will return true.

    This capability is particularly useful for dealing with bidi streaming APIs that represent a conversation, where each response message correlates with a request message. It is common for these kinds of APIs to support partial success by having a response message indicate a failure just for that one request message. Often such APIs use google.rpc.Status in the response message to indicate a possible failure. Sometimes they do not.

    For this sort of use, it is commonplace for the calling Go code to translate that status into an error. Since the error was received over the wire during an RPC call, it should technically be a wire error, so that any other handling/propagation logic higher up the stack can react to it correctly.

    This could be addressed as easily as introducing a new NewWireError function with the same signature and semantics as NewError except that the returned error will return true when err.IsWireError() is called.

  • 16

    Distinguish between "the server ended the stream" and "the connection dropped unexpectedly"

    Is your feature request related to a problem? Please describe.

    We're using the connect protocol to stream messages from a server to a client. I seem to have misconfigured the reverse proxy in between them, so the connection drops after 30 seconds.

    What surprised me, though, was that in this case (*ServerStreamForClient).Err() returns nil, even though the client never received the end-of-stream message.

    It seems like when the connection dropped unexpectedly, the client receives an io.EOF, which is suppressed.

    If the server closed the stream and the client received the end-of-stream message, then the error is errSpecialEnvelope, which is also suppressed since it wraps io.EOF.

    It seems like it's not possible to distinguish between "the server ended the stream" and "the connection dropped unexpectedly".

    Describe the solution you'd like

    I would like (*ServerStreamForClient).Err() to only return nil if the client received the end-of-stream message from the server. If the client received io.EOF before receiving the end-of-stream message, then I would like it to return the error.
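For context, a sketch of the client-side consumption loop in question (service and method names hypothetical, using connect-go's ServerStreamForClient API):

```go
stream, err := client.Introduce(ctx, connect.NewRequest(&pingv1.IntroduceRequest{Name: "connect"}))
if err != nil {
	log.Fatal(err)
}
for stream.Receive() {
	log.Println(stream.Msg())
}
// Today this can be nil even when the connection dropped before the
// end-of-stream message arrived; the request is for it to be non-nil
// in that case.
if err := stream.Err(); err != nil {
	log.Fatal(err)
}
```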

  • 17

    document in the FAQ how client and server might access authenticated identity of the remote

    The Peer object only provides an address. For clients, it just echoes back the host that was used in the HTTP request, without returning anything about the remote server's identity. It could at least return the resolved IP, to provide more information when multiple A records are available.

    But it also provides no way to get the authenticated identity, when using TLS. This is easily available in the response object returned from the http.Client or the request object provided to the http.Handler.

    The gRPC version of this type has a generic AuthInfo field with an interface type, and users can then try to type-assert to a specific implementation. The idea is that different authn mechanisms might be used to authenticate parties (like JWTs or other custom auth cookies for client authn), so the representation needs to be flexible enough that an interceptor could provide the identity (instead of hard-coding the representation used in mutually-authenticated TLS). Speaking of which, there is no way for an authenticating interceptor to override the peer, since there is no exported setter (or constructor that allows setting it).