protobufjs vs google-protobuf vs avsc vs msgpack-lite vs grpc-web vs flatbuffers
Binary Serialization and RPC Communication in JavaScript

avsc, flatbuffers, google-protobuf, msgpack-lite, and protobufjs are libraries for serializing structured data into compact binary formats, commonly used to reduce payload size and improve parsing performance in web and Node.js applications. grpc-web is distinct—it’s a client library for calling gRPC services from browsers, typically relying on Protocol Buffers (via google-protobuf or protobufjs) for message encoding. While all these packages aim to make data exchange more efficient than JSON, they differ significantly in schema requirements, performance characteristics, browser compatibility, and integration with backend systems.

Stat Detail

| Package | Downloads | Stars | Size | Issues | Publish | License |
|---|---|---|---|---|---|---|
| protobufjs | 43,180,031 | 10,511 | 2.96 MB | 713 | 3 months ago | BSD-3-Clause |
| google-protobuf | 3,055,849 | 455 | 927 kB | 70 | 18 days ago | (BSD-3-Clause AND Apache-2.0) |
| avsc | 1,166,928 | 1,374 | 264 kB | 26 | 8 months ago | MIT |
| msgpack-lite | 972,385 | 1,035 | - | 56 | 9 years ago | MIT |
| grpc-web | 131,462 | 9,193 | 36.3 kB | 184 | 6 months ago | Apache-2.0 |
| flatbuffers | 0 | 25,672 | 288 kB | 149 | 6 months ago | Apache-2.0 |

Efficient Data Serialization and RPC in JavaScript: A Practical Guide

When building modern web applications, sending large JSON payloads can hurt performance, especially on mobile networks. That's where binary serialization formats like Avro, FlatBuffers, MessagePack, and Protocol Buffers come in: they encode data more compactly and parse faster than JSON. Meanwhile, grpc-web solves a different but related problem: how to call gRPC services from a browser. Let's compare these tools based on real-world engineering trade-offs.

📦 Schema Requirements: Strict Contracts vs Flexible Encoding

Some libraries require a schema upfront; others work like enhanced JSON.

avsc uses Avro schemas (JSON-based) to define structure and validate data.

// avsc: Define and use an Avro schema
const avro = require('avsc');
const type = avro.Type.forSchema({
  type: 'record',
  name: 'User',
  fields: [{ name: 'name', type: 'string' }, { name: 'age', type: 'int' }]
});
const buf = type.toBuffer({ name: 'Alice', age: 30 });
const obj = type.fromBuffer(buf);

flatbuffers requires a .fbs schema file compiled to JavaScript using the flatc tool.

// flatbuffers: After compiling user.fbs with flatc
// (generated module path below is assumed)
const flatbuffers = require('flatbuffers');
const user = require('./user_generated');

const builder = new flatbuffers.Builder();
const name = builder.createString('Alice');
user.User.startUser(builder);
user.User.addName(builder, name);
user.User.addAge(builder, 30);
const offset = user.User.endUser(builder);
builder.finish(offset);
const buf = builder.asUint8Array();

google-protobuf and protobufjs both use Protocol Buffer .proto files, but differ in how they load them.

// google-protobuf: Requires pre-generated JS classes
const { User } = require('./user_pb.js');
const user = new User();
user.setName('Alice');
user.setAge(30);
const buf = user.serializeBinary();

// protobufjs: Can load .proto dynamically
const protobuf = require('protobufjs');
protobuf.load('user.proto').then(root => {
  const User = root.lookupType('User');
  const errMsg = User.verify({ name: 'Alice', age: 30 });
  if (!errMsg) {
    const buf = User.encode({ name: 'Alice', age: 30 }).finish();
  }
});

msgpack-lite needs no schema—it encodes JavaScript objects directly.

// msgpack-lite: Schema-less encoding
const msgpack = require('msgpack-lite');
const buf = msgpack.encode({ name: 'Alice', age: 30 });
const obj = msgpack.decode(buf);

grpc-web doesn’t handle serialization itself—it delegates to a Protobuf library.

// grpc-web: Uses clients generated by protoc with the grpc-web plugin
// (module paths below assume the default generated-file naming convention)
const { UserServiceClient } = require('./user_grpc_web_pb.js');
const { GetUserRequest } = require('./user_pb.js');

const client = new UserServiceClient('https://api.example.com');
const request = new GetUserRequest();
request.setId('123');
client.getUser(request, {}, (err, response) => {
  if (err) return console.error(err);
  console.log(response.getName());
});

⚡ Performance: Speed vs Convenience

flatbuffers offers zero-copy reads: you access data directly from the buffer without full deserialization.

// flatbuffers: Read without full decode (`user` is the flatc-generated module)
const bb = new flatbuffers.ByteBuffer(buf);
const alice = user.User.getRootAsUser(bb);
console.log(alice.name(), alice.age()); // Direct access, no deserialization step

avsc and protobufjs provide fast encode/decode with clean object interfaces.

// avsc: Fast object round-trip
const obj = type.fromBuffer(type.toBuffer({ name: 'Alice', age: 30 }));

// protobufjs: Similar ergonomics
const obj = User.toObject(User.decode(buf));

google-protobuf is correct but slower due to its class-based API and larger runtime.

msgpack-lite is fast for simple objects but lacks validation—garbage in, garbage out.
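
A minimal sketch of that failure mode:

// msgpack-lite: nothing validates the shape of encoded or decoded data
const msgpack = require('msgpack-lite');

// A typo ('agee') round-trips silently; no schema exists to catch it
const buf = msgpack.encode({ name: 'Alice', agee: 30 });
const obj = msgpack.decode(buf);
console.log(obj.age); // undefined, and the error surfaces far from its cause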

🌐 Browser Compatibility and Bundle Impact

All packages work in modern browsers, but bundle sizes vary:

  • msgpack-lite (~20 KB minified) is the lightest.
  • avsc (~40 KB) includes full schema validation.
  • protobufjs (~60 KB) supports dynamic loading and reflection.
  • google-protobuf (~100+ KB) includes the full runtime.
  • flatbuffers (~30 KB) but requires manual memory management.
  • grpc-web adds ~50 KB plus your chosen Protobuf library.

If you’re targeting low-end mobile devices, msgpack-lite or avsc may be preferable over Protobuf options.

🔌 Integration with Backend Systems

  • Use google-protobuf or protobufjs if your backend uses gRPC or Protobuf.
  • Use avsc if your data platform (e.g., Kafka, Hadoop) uses Avro.
  • Use flatbuffers if your game server or embedded system already uses it.
  • Use msgpack-lite for internal microservices where schemas evolve rapidly.
  • Use grpc-web only if you’ve committed to gRPC and have a gRPC-Web proxy (like Envoy) in place.

🛠️ Error Handling and Debugging

  • avsc throws clear errors on schema mismatch.
  • protobufjs provides verify() to catch issues before encoding (both this and avsc's behavior are sketched below).
  • google-protobuf fails silently on missing fields (returns default values).
  • flatbuffers offers no runtime validation—errors manifest as corrupted data.
  • msgpack-lite never validates; decoding malformed buffers throws generic errors.
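
A minimal sketch contrasting the first two behaviors, reusing the User schemas from earlier:

// avsc: a schema mismatch throws at encode time
try {
  type.toBuffer({ name: 'Alice', age: 'thirty' });
} catch (err) {
  console.error(err.message); // clear type error
}

// protobufjs: verify() returns an error string instead of throwing
const errMsg = User.verify({ name: 'Alice', age: 'thirty' });
if (errMsg) console.error(errMsg); // e.g. "age: integer expected"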

🔄 Real-World Recommendations

Scenario 1: Analytics Dashboard with Kafka Backend

You’re consuming Avro-encoded events from Kafka via a WebSocket gateway.

  • Best choice: avsc
  • Why? Native Avro support ensures correctness and leverages logical types (see the sketch below).
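
A minimal sketch of avsc's documented logical-type hook; the DateType class is our own illustration:

const avro = require('avsc');

// Map Avro's timestamp-millis logical type onto JavaScript Date
class DateType extends avro.types.LogicalType {
  _fromValue(val) { return new Date(val); }
  _toValue(date) { return date instanceof Date ? +date : undefined; }
}

const ts = avro.Type.forSchema(
  { type: 'long', logicalType: 'timestamp-millis' },
  { logicalTypes: { 'timestamp-millis': DateType } }
);

const buf = ts.toBuffer(new Date());
console.log(ts.fromBuffer(buf) instanceof Date); // true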

Scenario 2: Real-Time Multiplayer Game

You need sub-millisecond latency for player state updates.

  • Best choice: flatbuffers
  • Why? Zero-copy access minimizes GC pressure and CPU usage.

Scenario 3: Enterprise App with gRPC Microservices

Your backend exposes gRPC services, and you want to call them from React.

  • Best choice: grpc-web + protobufjs
  • Why? protobufjs gives smaller bundles and better dev experience than google-protobuf.

Scenario 4: Internal Admin Tool with Rapid Iteration

Your team changes data shapes weekly and hates writing schemas.

  • Best choice: msgpack-lite
  • Why? No schema = no friction, and payloads are still smaller than JSON.

📊 Summary Table

| Package | Schema Required | Zero-Copy | Bundle Size | Browser-Friendly | Best For |
|---|---|---|---|---|---|
| avsc | ✅ (Avro) | ❌ | Medium | ✅ | Data pipelines, Kafka apps |
| flatbuffers | ✅ (.fbs) | ✅ | Small | ✅ (with care) | Games, real-time systems |
| google-protobuf | ✅ (.proto) | ❌ | Large | ✅ | Official gRPC compatibility |
| grpc-web | N/A | N/A | Medium + Protobuf | ✅ | Calling gRPC from browsers |
| msgpack-lite | ❌ | ❌ | Very Small | ✅ | Internal APIs, rapid prototyping |
| protobufjs | ✅ (.proto) | ❌ | Medium | ✅ | High-performance Protobuf in browsers |

💡 Final Advice

Don’t pick a binary format just because it’s “faster.” Ask:

  • Does my backend already use one of these?
  • Do I need strict contracts or flexibility?
  • Am I optimizing for bundle size, CPU, or developer velocity?

If you’re starting fresh and don’t need gRPC, msgpack-lite or avsc offer the best balance of simplicity and efficiency. If you’re in a Protobuf ecosystem, protobufjs is almost always better than google-protobuf for frontend use. And never use grpc-web unless you’ve confirmed your infrastructure supports it—you’ll save yourself weeks of debugging.

How to Choose: protobufjs vs google-protobuf vs avsc vs msgpack-lite vs grpc-web vs flatbuffers

  • protobufjs:

    Choose protobufjs for high-performance Protocol Buffer handling in both Node.js and browsers, especially when bundle size and speed matter. It supports dynamic loading of .proto files and generates cleaner JavaScript objects than google-protobuf, making it better suited for frontend-heavy applications using gRPC or Protobuf over HTTP.

  • google-protobuf:

    Choose google-protobuf if you’re using Protocol Buffers and need official compatibility with Google’s toolchain, especially when working with gRPC services generated by protoc. It’s reliable and well-maintained, but produces larger bundles and slower runtime performance compared to alternatives like protobufjs.

  • avsc:

    Choose avsc if you're working with Apache Avro schemas and need fast, schema-aware serialization with built-in validation and support for logical types (like timestamps and decimals). It’s ideal for data pipelines or analytics platforms where Avro is already standardized, but less suitable if your team lacks Avro expertise or if you need minimal bundle size in the browser.

  • msgpack-lite:

    Choose msgpack-lite for a lightweight, schema-less alternative to JSON that offers smaller payloads and faster parsing without requiring predefined schemas. It’s great for internal APIs or caching layers where type safety isn’t enforced, but avoid it if you need strict contract validation or interoperability with strongly typed backend systems.

  • grpc-web:

    Choose grpc-web only when you need to call gRPC services directly from a browser-based frontend. It requires a gRPC-Web proxy (like Envoy) on the backend and depends on a Protocol Buffer implementation (google-protobuf or protobufjs) for message handling. Avoid it if your backend exposes REST or GraphQL APIs instead.

  • flatbuffers:

    Choose flatbuffers when ultra-low latency and zero-copy deserialization are critical—such as in real-time games, IoT dashboards, or high-frequency trading UIs. Be prepared to manage schema evolution manually and accept a steeper learning curve due to its unconventional API and lack of automatic object mapping.

README for protobufjs

protobuf.js

Protocol Buffers are a language-neutral, platform-neutral, extensible way of serializing structured data for use in communications protocols, data storage, and more, originally designed at Google.

protobuf.js is a pure JavaScript implementation with TypeScript support for Node.js and the browser. It's easy to use, does not sacrifice on performance, has good conformance and works out of the box with .proto files!

Installation

Node.js

npm install protobufjs --save

// Static code + Reflection + .proto parser
var protobuf = require("protobufjs");

// Static code + Reflection
var protobuf = require("protobufjs/light");

// Static code only
var protobuf = require("protobufjs/minimal");

The optional command line utility to generate static code and reflection bundles lives in the protobufjs-cli package and can be installed separately:

npm install protobufjs-cli --save-dev

Browsers

Pick the variant matching your needs and replace the version tag with the exact release your project depends upon. For example, to use the minified full variant:

<script src="//cdn.jsdelivr.net/npm/protobufjs@7.X.X/dist/protobuf.min.js"></script>
| Distribution | Location |
|---|---|
| Full | https://cdn.jsdelivr.net/npm/protobufjs/dist/ |
| Light | https://cdn.jsdelivr.net/npm/protobufjs/dist/light/ |
| Minimal | https://cdn.jsdelivr.net/npm/protobufjs/dist/minimal/ |

All variants support CommonJS and AMD loaders and export globally as window.protobuf.

Usage

Because JavaScript is a dynamically typed language, protobuf.js utilizes the concept of a valid message in order to provide the best possible performance (and, as a side product, proper typings):

Valid message

A valid message is an object (1) not missing any required fields and (2) exclusively composed of JS types understood by the wire format writer.

There are two possible types of valid messages and the encoder is able to work with both of these for convenience:

  • Message instances (explicit instances of message classes with default values on their prototype) naturally satisfy the requirements of a valid message and
  • Plain JavaScript objects that just so happen to be composed in a way satisfying the requirements of a valid message as well.

In a nutshell, the wire format writer understands the following types:

| Field type | Expected JS type (create, encode) | Conversion (fromObject) |
|---|---|---|
| s-/u-/int32, s-/fixed32 | number (32 bit integer) | `value \| 0` if signed, `value >>> 0` if unsigned |
| s-/u-/int64, s-/fixed64 | Long-like (optimal), number (53 bit integer) | `Long.fromValue(value)` with long.js, `parseInt(value, 10)` otherwise |
| float, double | number | `Number(value)` |
| bool | boolean | `Boolean(value)` |
| string | string | `String(value)` |
| bytes | Uint8Array (optimal), Buffer (optimal under node), Array.<number> (8 bit integers) | `base64.decode(value)` if a string, an object with non-zero `.length` is assumed to be buffer-like |
| enum | number (32 bit integer) | Looks up the numeric id if a string |
| message | Valid message | `Message.fromObject(value)` |
| repeated T | Array<T> | Copy |
| map<K, V> | Object<K,V> | Copy |
  • Explicit undefined and null are considered as not set if the field is optional.
  • Maps are objects where the key is the string representation of the respective value or an 8 characters long hash string for Long-likes.
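
Both kinds of valid message can be passed to the encoder. A minimal sketch, assuming an AwesomeMessage type with a single string field awesomeField:

// A message instance and a structurally valid plain object encode identically
var instance = AwesomeMessage.create({ awesomeField: "hello" });
var plain = { awesomeField: "hello" };

var buf1 = AwesomeMessage.encode(instance).finish();
var buf2 = AwesomeMessage.encode(plain).finish(); // same bytes as buf1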

Toolset

With that in mind and again for performance reasons, each message class provides a distinct set of methods with each method doing just one thing. This avoids unnecessary assertions / redundant operations where performance is a concern but also forces a user to perform verification (of plain JavaScript objects that might just so happen to be a valid message) explicitly where necessary - for example when dealing with user input.

Note that Message below refers to any message class.

  • Message.verify(message: Object): null|string
    verifies that a plain JavaScript object satisfies the requirements of a valid message and thus can be encoded without issues. Instead of throwing, it returns the error message as a string, if any.

    var payload = "invalid (not an object)";
    var err = AwesomeMessage.verify(payload);
    if (err)
      throw Error(err);
    
  • Message.encode(message: Message|Object [, writer: Writer]): Writer
    encodes a message instance or valid plain JavaScript object. This method does not implicitly verify the message and it's up to the user to make sure that the payload is a valid message.

    var buffer = AwesomeMessage.encode(message).finish();
    
  • Message.encodeDelimited(message: Message|Object [, writer: Writer]): Writer
    works like Message.encode but additionally prepends the length of the message as a varint.

  • Message.decode(reader: Reader|Uint8Array): Message
    decodes a buffer to a message instance. If required fields are missing, it throws a util.ProtocolError with an instance property set to the so far decoded message. If the wire format is invalid, it throws an Error.

    try {
      var decodedMessage = AwesomeMessage.decode(buffer);
    } catch (e) {
        if (e instanceof protobuf.util.ProtocolError) {
          // e.instance holds the so far decoded message with missing required fields
        } else {
          // wire format is invalid
        }
    }
    
  • Message.decodeDelimited(reader: Reader|Uint8Array): Message
    works like Message.decode but additionally reads the length of the message prepended as a varint.
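
    For example, a minimal round trip using the delimited variants (message as above):

    var buffer = AwesomeMessage.encodeDelimited(message).finish();
    var decoded = AwesomeMessage.decodeDelimited(buffer);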

  • Message.create(properties: Object): Message
    creates a new message instance from a set of properties that satisfy the requirements of a valid message. Where applicable, it is recommended to prefer Message.create over Message.fromObject because it doesn't perform possibly redundant conversion.

    var message = AwesomeMessage.create({ awesomeField: "AwesomeString" });
    
  • Message.fromObject(object: Object): Message
    converts any non-valid plain JavaScript object to a message instance using the conversion steps outlined within the table above.

    var message = AwesomeMessage.fromObject({ awesomeField: 42 });
    // converts awesomeField to a string
    
  • Message.toObject(message: Message [, options: ConversionOptions]): Object
    converts a message instance to an arbitrary plain JavaScript object for interoperability with other libraries or storage. The resulting plain JavaScript object might still satisfy the requirements of a valid message depending on the actual conversion options specified, but most of the time it does not.

    var object = AwesomeMessage.toObject(message, {
      enums: String,  // enums as string names
      longs: String,  // longs as strings (requires long.js)
      bytes: String,  // bytes as base64 encoded strings
      defaults: true, // includes default values
      arrays: true,   // populates empty arrays (repeated fields) even if defaults=false
      objects: true,  // populates empty objects (map fields) even if defaults=false
      oneofs: true    // includes virtual oneof fields set to the present field's name
    });
    

For reference, the following diagram aims to display relationships between the different methods and the concept of a valid message:

Toolset Diagram

In other words: verify indicates whether calling create or encode directly on the plain object will succeed and produce a valid message. fromObject, on the other hand, does conversion from a broader range of plain objects to create valid messages.

Examples

Using .proto files

It is possible to load existing .proto files using the full library, which parses and compiles the definitions to ready to use (reflection-based) message classes:

// awesome.proto
syntax = "proto3";
package awesomepackage;

message AwesomeMessage {
    string awesome_field = 1; // becomes awesomeField
}

protobuf.load("awesome.proto", function(err, root) {
    if (err)
        throw err;

    // Obtain a message type
    var AwesomeMessage = root.lookupType("awesomepackage.AwesomeMessage");

    // Exemplary payload
    var payload = { awesomeField: "AwesomeString" };

    // Verify the payload if necessary (i.e. when possibly incomplete or invalid)
    var errMsg = AwesomeMessage.verify(payload);
    if (errMsg)
        throw Error(errMsg);

    // Create a new message
    var message = AwesomeMessage.create(payload); // or use .fromObject if conversion is necessary

    // Encode a message to an Uint8Array (browser) or Buffer (node)
    var buffer = AwesomeMessage.encode(message).finish();
    // ... do something with buffer

    // Decode an Uint8Array (browser) or Buffer (node) to a message
    var message = AwesomeMessage.decode(buffer);
    // ... do something with message

    // If the application uses length-delimited buffers, there is also encodeDelimited and decodeDelimited.

    // Maybe convert the message back to a plain object
    var object = AwesomeMessage.toObject(message, {
        longs: String,
        enums: String,
        bytes: String,
        // see ConversionOptions
    });
});

Additionally, promise syntax can be used by omitting the callback, if preferred:

protobuf.load("awesome.proto")
    .then(function(root) {
       ...
    });

Using JSON descriptors

The library utilizes JSON descriptors that are equivalent to a .proto definition. For example, the following is identical to the .proto definition seen above:

// awesome.json
{
  "nested": {
    "awesomepackage": {
      "nested": {
        "AwesomeMessage": {
          "fields": {
            "awesomeField": {
              "type": "string",
              "id": 1
            }
          }
        }
      }
    }
  }
}

JSON descriptors closely resemble the internal reflection structure:

| Type (T) | Extends | Type-specific properties |
|---|---|---|
| *ReflectionObject* | | options |
| *Namespace* | *ReflectionObject* | nested |
| Root | *Namespace* | nested |
| Type | *Namespace* | **fields** |
| Enum | *ReflectionObject* | **values** |
| Field | *ReflectionObject* | rule, **type**, **id** |
| MapField | Field | **keyType** |
| OneOf | *ReflectionObject* | **oneof** (array of field names) |
| Service | *Namespace* | **methods** |
| Method | *ReflectionObject* | type, **requestType**, **responseType**, requestStream, responseStream |
  • Bold properties are required. Italic types are abstract.
  • T.fromJSON(name, json) creates the respective reflection object from a JSON descriptor
  • T#toJSON() creates a JSON descriptor from the respective reflection object (its name is used as the key within the parent); a round-trip sketch follows below
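
A minimal round-trip sketch (jsonDescriptor as in the inline-loading example below):

var root = protobuf.Root.fromJSON(jsonDescriptor); // reflection from a descriptor
var json = root.toJSON();                          // and back to a plain descriptor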

Exclusively using JSON descriptors instead of .proto files enables the use of just the light library (the parser isn't required in this case).

A JSON descriptor can either be loaded the usual way:

protobuf.load("awesome.json", function(err, root) {
    if (err) throw err;

    // Continue at "Obtain a message type" above
});

Or it can be loaded inline:

var jsonDescriptor = require("./awesome.json"); // exemplary for node

var root = protobuf.Root.fromJSON(jsonDescriptor);

// Continue at "Obtain a message type" above

Using reflection only

Both the full and the light library include full reflection support. One could, for example, define the .proto definitions seen in the examples above using just reflection:

...
var Root  = protobuf.Root,
    Type  = protobuf.Type,
    Field = protobuf.Field;

var AwesomeMessage = new Type("AwesomeMessage").add(new Field("awesomeField", 1, "string"));

var root = new Root().define("awesomepackage").add(AwesomeMessage);

// Continue at "Create a new message" above
...

Detailed information on the reflection structure is available within the API documentation.

Using custom classes

Message classes can also be extended with custom functionality and it is also possible to register a custom constructor with a reflected message type:

...

// Define a custom constructor
function AwesomeMessage(properties) {
    // custom initialization code
    ...
}

// Register the custom constructor with its reflected type (*)
root.lookupType("awesomepackage.AwesomeMessage").ctor = AwesomeMessage;

// Define custom functionality
AwesomeMessage.customStaticMethod = function() { ... };
AwesomeMessage.prototype.customInstanceMethod = function() { ... };

// Continue at "Create a new message" above

(*) Besides referencing its reflected type through AwesomeMessage.$type and AwesomeMessage#$type, the respective custom class is automatically populated with:

  • AwesomeMessage.create
  • AwesomeMessage.encode and AwesomeMessage.encodeDelimited
  • AwesomeMessage.decode and AwesomeMessage.decodeDelimited
  • AwesomeMessage.verify
  • AwesomeMessage.fromObject, AwesomeMessage.toObject and AwesomeMessage#toJSON

Afterwards, decoded messages of this type are instanceof AwesomeMessage.

Alternatively, it is also possible to reuse and extend the internal constructor if custom initialization code is not required:

...

// Reuse the internal constructor
var AwesomeMessage = root.lookupType("awesomepackage.AwesomeMessage").ctor;

// Define custom functionality
AwesomeMessage.customStaticMethod = function() { ... };
AwesomeMessage.prototype.customInstanceMethod = function() { ... };

// Continue at "Create a new message" above

Using services

The library also supports consuming services but it doesn't make any assumptions about the actual transport channel. Instead, a user must provide a suitable RPC implementation, which is an asynchronous function that takes the reflected service method, the binary request and a node-style callback as its parameters:

function rpcImpl(method, requestData, callback) {
    // perform the request using an HTTP request or a WebSocket for example
    var responseData = ...;
    // and call the callback with the binary response afterwards:
    callback(null, responseData);
}

Below is a working example of such an implementation using the grpc npm package:

const grpc = require('grpc')

const Client = grpc.makeGenericClientConstructor({})
const client = new Client(
  grpcServerUrl,
  grpc.credentials.createInsecure()
)

const rpcImpl = function(method, requestData, callback) {
  client.makeUnaryRequest(
    method.name,
    arg => arg,
    arg => arg,
    requestData,
    callback
  )
}

Example:

// greeter.proto
syntax = "proto3";

service Greeter {
    rpc SayHello (HelloRequest) returns (HelloReply) {}
}

message HelloRequest {
    string name = 1;
}

message HelloReply {
    string message = 1;
}
...
var Greeter = root.lookup("Greeter");
var greeter = Greeter.create(/* see above */ rpcImpl, /* request delimited? */ false, /* response delimited? */ false);

greeter.sayHello({ name: 'you' }, function(err, response) {
    console.log('Greeting:', response.message);
});

Services also support promises:

greeter.sayHello({ name: 'you' })
    .then(function(response) {
        console.log('Greeting:', response.message);
    });

There is also an example for streaming RPC.

Note that the service API is meant for clients. Implementing a server-side endpoint pretty much always requires transport channel (i.e. http, websocket, etc.) specific code with the only common denominator being that it decodes and encodes messages.
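
For illustration only (not part of the library), a minimal Node.js HTTP endpoint built around that common denominator might look like this, with HelloRequest and HelloReply looked up via root.lookupType from the greeter.proto above:

var http = require("http");

http.createServer(function (req, res) {
  var chunks = [];
  req.on("data", function (chunk) { chunks.push(chunk); });
  req.on("end", function () {
    // the transport-specific part ends here: decode the request, encode the reply
    var request = HelloRequest.decode(Buffer.concat(chunks));
    var reply = HelloReply.create({ message: "Hello, " + request.name });
    res.end(Buffer.from(HelloReply.encode(reply).finish()));
  });
}).listen(8080);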

Usage with TypeScript

The library ships with its own type definitions and modern editors like Visual Studio Code will automatically detect and use them for code completion.

The npm package depends on @types/node because of Buffer and @types/long because of Long. If you are not building for node and/or not using long.js, it should be safe to exclude them manually.

Using the JS API

The API shown above works pretty much the same with TypeScript. However, because everything is typed, accessing fields on instances of dynamically generated message classes requires either using bracket-notation (i.e. message["awesomeField"]) or explicit casts. Alternatively, it is possible to use a typings file generated for its static counterpart.

import { load } from "protobufjs"; // respectively "./node_modules/protobufjs"

load("awesome.proto", function(err, root) {
  if (err)
    throw err;

  // example code
  const AwesomeMessage = root.lookupType("awesomepackage.AwesomeMessage");

  let message = AwesomeMessage.create({ awesomeField: "hello" });
  console.log(`message = ${JSON.stringify(message)}`);

  let buffer = AwesomeMessage.encode(message).finish();
  console.log(`buffer = ${Array.prototype.toString.call(buffer)}`);

  let decoded = AwesomeMessage.decode(buffer);
  console.log(`decoded = ${JSON.stringify(decoded)}`);
});

Using generated static code

If you generated static code to bundle.js using the CLI and its type definitions to bundle.d.ts, then you can just do:

import { AwesomeMessage } from "./bundle.js";

// example code
let message = AwesomeMessage.create({ awesomeField: "hello" });
let buffer  = AwesomeMessage.encode(message).finish();
let decoded = AwesomeMessage.decode(buffer);

Using decorators

The library also includes an early implementation of decorators.

Note that decorators are an experimental feature in TypeScript and that declaration order is important depending on the JS target. For example, @Field.d(2, AwesomeArrayMessage) requires that AwesomeArrayMessage has been defined earlier when targeting ES5.

import { Message, Type, Field, OneOf } from "protobufjs/light"; // respectively "./node_modules/protobufjs/light.js"

export class AwesomeSubMessage extends Message<AwesomeSubMessage> {

  @Field.d(1, "string")
  public awesomeString: string;

}

export enum AwesomeEnum {
  ONE = 1,
  TWO = 2
}

@Type.d("SuperAwesomeMessage")
export class AwesomeMessage extends Message<AwesomeMessage> {

  @Field.d(1, "string", "optional", "awesome default string")
  public awesomeField: string;

  @Field.d(2, AwesomeSubMessage)
  public awesomeSubMessage: AwesomeSubMessage;

  @Field.d(3, AwesomeEnum, "optional", AwesomeEnum.ONE)
  public awesomeEnum: AwesomeEnum;

  @OneOf.d("awesomeSubMessage", "awesomeEnum")
  public which: string;

}

// example code
let message = new AwesomeMessage({ awesomeField: "hello" });
let buffer  = AwesomeMessage.encode(message).finish();
let decoded = AwesomeMessage.decode(buffer);

Supported decorators are:

  • Type.d(typeName?: string)   (optional)
    annotates a class as a protobuf message type. If typeName is not specified, the constructor's runtime function name is used for the reflected type.

  • Field.d<T>(fieldId: number, fieldType: string | Constructor<T>, fieldRule?: "optional" | "required" | "repeated", defaultValue?: T)
    annotates a property as a protobuf field with the specified id and protobuf type.

  • MapField.d<T extends { [key: string]: any }>(fieldId: number, fieldKeyType: string, fieldValueType: string | Constructor<{}>)
    annotates a property as a protobuf map field with the specified id, protobuf key and value type.

  • OneOf.d<T extends string>(...fieldNames: string[])
    annotates a property as a protobuf oneof covering the specified fields.

Other notes:

  • Decorated types reside in protobuf.roots["decorated"] using a flat structure, so no duplicate names.
  • Enums are copied to a reflected enum with a generic name on decorator evaluation because referenced enum objects have no runtime name the decorator could use.
  • Default values must be specified as arguments to the decorator instead of using a property initializer for proper prototype behavior.
  • Property names on decorated classes must not be renamed on compile time (i.e. by a minifier) because decorators just receive the original field name as a string.

ProTip! Not as pretty, but you can use decorators in plain JavaScript as well.

Additional documentation

  • Protocol Buffers
  • protobuf.js
  • Community

Performance

The package includes a benchmark that compares protobuf.js performance to native JSON (as far as this is possible) and Google's JS implementation. On an i7-2600K running node 6.9.1 it yields:

benchmarking encoding performance ...

protobuf.js (reflect) x 541,707 ops/sec ±1.13% (87 runs sampled)
protobuf.js (static) x 548,134 ops/sec ±1.38% (89 runs sampled)
JSON (string) x 318,076 ops/sec ±0.63% (93 runs sampled)
JSON (buffer) x 179,165 ops/sec ±2.26% (91 runs sampled)
google-protobuf x 74,406 ops/sec ±0.85% (86 runs sampled)

   protobuf.js (static) was fastest
  protobuf.js (reflect) was 0.9% ops/sec slower (factor 1.0)
          JSON (string) was 41.5% ops/sec slower (factor 1.7)
          JSON (buffer) was 67.6% ops/sec slower (factor 3.1)
        google-protobuf was 86.4% ops/sec slower (factor 7.3)

benchmarking decoding performance ...

protobuf.js (reflect) x 1,383,981 ops/sec ±0.88% (93 runs sampled)
protobuf.js (static) x 1,378,925 ops/sec ±0.81% (93 runs sampled)
JSON (string) x 302,444 ops/sec ±0.81% (93 runs sampled)
JSON (buffer) x 264,882 ops/sec ±0.81% (93 runs sampled)
google-protobuf x 179,180 ops/sec ±0.64% (94 runs sampled)

  protobuf.js (reflect) was fastest
   protobuf.js (static) was 0.3% ops/sec slower (factor 1.0)
          JSON (string) was 78.1% ops/sec slower (factor 4.6)
          JSON (buffer) was 80.8% ops/sec slower (factor 5.2)
        google-protobuf was 87.0% ops/sec slower (factor 7.7)

benchmarking combined performance ...

protobuf.js (reflect) x 275,900 ops/sec ±0.78% (90 runs sampled)
protobuf.js (static) x 290,096 ops/sec ±0.96% (90 runs sampled)
JSON (string) x 129,381 ops/sec ±0.77% (90 runs sampled)
JSON (buffer) x 91,051 ops/sec ±0.94% (90 runs sampled)
google-protobuf x 42,050 ops/sec ±0.85% (91 runs sampled)

   protobuf.js (static) was fastest
  protobuf.js (reflect) was 4.7% ops/sec slower (factor 1.0)
          JSON (string) was 55.3% ops/sec slower (factor 2.2)
          JSON (buffer) was 68.6% ops/sec slower (factor 3.2)
        google-protobuf was 85.5% ops/sec slower (factor 6.9)

These results are achieved by

  • generating type-specific encoders, decoders, verifiers and converters at runtime
  • configuring the reader/writer interface according to the environment
  • using node-specific functionality where beneficial and, of course
  • avoiding unnecessary operations through splitting up the toolset.

You can also run the benchmark ...

$> npm run bench

and the profiler yourself (the latter requires a recent version of node):

$> npm run prof <encode|decode|encode-browser|decode-browser> [iterations=10000000]

Note that as of this writing, the benchmark suite performs significantly slower on node 7.2.0 compared to 6.9.1 because moths.

Compatibility

  • Works in all modern and not-so-modern browsers except IE8.
  • Because the internals of this package do not rely on google/protobuf/descriptor.proto, options are parsed and presented literally.
  • If typed arrays are not supported by the environment, plain arrays will be used instead.
  • Support for pre-ES5 environments (except IE8) can be achieved by using a polyfill.
  • Support for Content Security Policy-restricted environments (like Chrome extensions without unsafe-eval) can be achieved by generating and using static code instead.
  • If a proper way to work with 64 bit values (uint64, int64 etc.) is required, just install long.js alongside this library. All 64 bit numbers will then be returned as a Long instance instead of a possibly unsafe JavaScript number (see the sketch below).
  • For descriptor.proto interoperability, see ext/descriptor
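
A minimal sketch (hypothetical message and field names):

// With long.js installed, 64 bit fields decode to Long instances
var message = MyMessage.decode(buffer);   // assumes an int64 field `bigValue`
var exact  = message.bigValue.toString(); // exact decimal string
var approx = message.bigValue.toNumber(); // may lose precision above 2^53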

Building

To build the library or its components yourself, clone it from GitHub and install the development dependencies:

$> git clone https://github.com/protobufjs/protobuf.js.git
$> cd protobuf.js
$> npm install

Building the respective development and production versions with their respective source maps to dist/:

$> npm run build

Building the documentation to docs/:

$> npm run docs

Building the TypeScript definition to index.d.ts:

$> npm run build:types

Browserify integration

By default, protobuf.js integrates into any browserify build-process without requiring any optional modules. Hence:

  • If int64 support is required, explicitly require the long module somewhere in your project as it will be excluded otherwise. This assumes that a global require function is present that protobuf.js can call to obtain the long module.

    If there is no global require function present after bundling, it's also possible to assign the long module programmatically:

    var Long = ...;
    
    protobuf.util.Long = Long;
    protobuf.configure();
    
  • If you have any special requirements, there is the bundler for reference.

License: BSD 3-Clause License