Ably now supports the Pusher and Pubnub protocols, in addition to the native Ably protocol. The native protocol is usually the best choice, but there are times when using another protocol may be more appropriate or convenient:
- You may have an IoT device that has MQTT built-in
- You want to migrate to Ably, but have already integrated Pubnub or Pusher client libraries into your product, and don’t want to swap them all out for Ably libraries at once
- You’d like to mix and match client libraries from different providers, or use a platform that doesn’t yet have a native Ably library, since different providers support different sets of platforms
We’ve built protocol adapters into the Ably service to make all these protocols interoperate easily.
Today we’re releasing the Pusher adapter and the Pubnub adapter. The MQTT adapter isn’t quite ready for release yet, but will follow in a couple of months. As for other protocols, we’ll build adapters in response to demand: if you have a protocol you want us to support, let us know!
For a demonstration, check out https://realtime-pong.herokuapp.com/, which takes a game of Pong borrowed from Pubnub (thanks!) and creates five instances of it, each using a different realtime backend: an Ably client (direct), a Pusher client with Ably, and a Pubnub client with Ably, plus, just for fun, a Pusher client (direct) and a Pubnub client (direct). All five are simultaneously controllable from the same controller, so you can see how well they work. (The Ably-backed instances all use the same Ably channel behind the scenes, to demonstrate interoperability.)
Note that we still recommend using native Ably client libraries where possible. While the adapters give you some of the advantages of Ably over Pusher or Pubnub (e.g. transparent pay-what-you-use pricing and message queues), many others (e.g. connection state continuity, fallback host support, history, flexible channel namespaces, powerful token authentication) require the Ably client libraries. For example, Pubnub client libraries can only receive messages through long polling, so they will never be as efficient at subscribing to high volumes of messages as native Ably client libraries, which use websockets as the primary transport, with fallbacks as needed.
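As a back-of-the-envelope illustration of that transport difference (a sketch only: the counts below stand in for real network behaviour, not Pubnub’s or Ably’s actual wire protocols):

```python
# Rough model of delivery cost: with long polling, each batch of messages
# costs a full HTTP round trip, because the poll completes and must be
# reissued; a websocket pays one handshake and then streams over a single
# persistent connection. (Illustrative model, not measured behaviour.)

def long_poll_round_trips(message_batches: int) -> int:
    return message_batches  # one request/response cycle per batch delivered

def websocket_round_trips(message_batches: int) -> int:
    return 1                # one upgrade handshake, then a streamed feed

# For a high-volume subscriber the gap grows linearly with message volume:
assert long_poll_round_trips(1000) == 1000
assert websocket_round_trips(1000) == 1
```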
How they work
Protocol adapters use different endpoints from the default Ably ones, which route traffic to a protocol adapter layer that runs in each of our datacenters; for example, Pubnub clients use the pubnub.ably.io endpoint. The adapters, each a separate Elixir service, run as middleware between our routers and the core Ably service, translating incoming requests into the Ably protocol and forwarding them on, and translating anything received from Ably back into the client’s protocol. Latency-based DNS ensures that a client connecting to pubnub.ably.io is routed to the closest datacenter.
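Conceptually, the translation step looks something like the sketch below. This is illustrative only: the real adapters are Elixir services running inside Ably’s infrastructure, and both message shapes here are invented for the example (they are not the actual Pubnub or Ably wire formats).

```python
def translate_pubnub_publish(pubnub_request: dict) -> dict:
    """Rewrite a Pubnub-style publish into an Ably-style protocol message.
    Both dict shapes are hypothetical, chosen only to show the mapping."""
    return {
        "action": "publish",                        # Ably-side operation
        "channel": pubnub_request["channel"],       # channel names map 1:1
        "messages": [{"data": pubnub_request["message"]}],
    }

# A publish arriving at pubnub.ably.io would, in this model, be rewritten
# like this before being forwarded to the core Ably service:
ably_message = translate_pubnub_publish(
    {"channel": "pong", "message": {"paddle_y": 42}}
)
```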
Of course, some things are easier to translate than others. The Pusher protocol, like the Ably protocol, is a stateful, connection- and channel-oriented websocket protocol (supporting a strict subset of Ably features), so the translation layer is quite light. The Pubnub protocol is very different: rather than being connection-oriented, it operates through stateless long polls, so there is a bigger impedance mismatch and the adapter has to do more. Even with the REST API, since every Pubnub operation must be translated into an equivalent set of Ably operations, some operations can be expensive, both in the time they take to return and in the number of API requests they count against your package quota. For example, a Pubnub ‘global herenow’ request (which returns every presence member in every active channel in your app) requires the adapter to request the list of active channels and then make a presence request for each of those channels, for a total of n+1 requests (for n active channels). But that’s a pathological example; most operations map across more cleanly.
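The n+1 pattern can be sketched as follows. The function names and data shapes are hypothetical; only the request-counting structure reflects the description above.

```python
def global_herenow(list_active_channels, get_channel_presence):
    """Compose a 'global herenow' from per-channel operations, counting
    how many backend requests that composition costs."""
    counter = {"requests": 0}

    def counted(fn, *args):
        counter["requests"] += 1
        return fn(*args)

    channels = counted(list_active_channels)            # request 1: channel list
    members = {ch: counted(get_channel_presence, ch)    # requests 2 .. n+1
               for ch in channels}
    return members, counter["requests"]

# Toy backend with three active channels:
presence = {"pong-1": ["alice"], "pong-2": ["bob"], "pong-3": []}
members, requests = global_herenow(lambda: list(presence), presence.get)
assert requests == 1 + len(presence)                    # n + 1, here 4
```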