Integrating Nstream into Your Data Pipeline with Kafka, Akka, or Pulsar

More than 80% of large-scale technical integration projects fail, often because of unrecognized complexity, unrealistic deadlines, and unforeseen technology changes.

However, organizations don’t need to overhaul their tech stack to benefit from streaming data applications. Nstream’s versatility allows developers to build a streaming data application from scratch or to integrate one into their current technology stack.

Building a streaming data application from scratch

Nstream is the fastest way to build streaming data applications that enable real-time visibility, live modeling of complex business operations, and proactive decision-making at scale.

Nstream is powered by SwimOS, an open-source software platform architected as a distributed operating system designed with developers in mind. Anyone can use SwimOS’s configurable templates to build stateful, real-time data applications from the ground up. While intermediate programming skills are helpful for using SwimOS, developers do not need to have any streaming data programming experience.

Nstream also enables you to build streaming data applications without needing other data systems, such as stream processing frameworks, additional databases, additional application servers, or data visualization software. As a result, you can build a streaming application within minutes, not months.

How to integrate Nstream with an existing technology stack

If you’re not starting from the ground up, integrating Nstream with architectures that include Kafka and/or Flink is quite simple. The Nstream Platform uses configurable data connectors and pre-built application components to integrate into many popular data streaming technologies seamlessly.

The Nstream Platform is built from first principles to reduce duplicated design work and complexity. As a result, this vertically integrated software stack can quickly run in production at scale with enterprise customers without sacrificing performance. With Nstream, a successful integration can take as little as a few hours or days instead of weeks.

When using SwimOS as a library, developers can also call the Kafka APIs directly from their Java code and populate Swim state from subscription callbacks. The same process applies when Flink outputs to Kafka.
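The callback pattern described above can be sketched in plain Java. Note that `FakeSubscription` and `VehicleAgent` are hypothetical stand-ins invented for illustration; the real Kafka client and SwimOS Web Agent APIs differ, and this is only a minimal sketch of how a subscription callback can populate per-entity state.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiConsumer;

// Hypothetical stand-in for a Swim Web Agent holding per-entity state.
class VehicleAgent {
    final Map<String, Integer> state = new ConcurrentHashMap<>();
    void onEvent(String field, int value) { state.put(field, value); }
}

// Hypothetical stand-in for a Kafka subscription that delivers records to callbacks.
class FakeSubscription {
    private final List<BiConsumer<String, Integer>> callbacks = new ArrayList<>();
    void onRecord(BiConsumer<String, Integer> cb) { callbacks.add(cb); }
    void deliver(String key, int value) { callbacks.forEach(cb -> cb.accept(key, value)); }
}

public class KafkaToSwimSketch {
    // Wire the subscription so each incoming record updates agent state.
    public static VehicleAgent wire(FakeSubscription sub) {
        VehicleAgent agent = new VehicleAgent();
        sub.onRecord(agent::onEvent);
        return agent;
    }

    public static void main(String[] args) {
        FakeSubscription sub = new FakeSubscription();
        VehicleAgent agent = wire(sub);
        sub.deliver("speed", 42); // a record arriving from the topic
        System.out.println(agent.state.get("speed"));
    }
}
```

In a real deployment, the `deliver` call would be driven by the Kafka consumer's poll loop rather than invoked directly.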

This streamlined integration process is possible because Nstream already provides support for:

  • Data sources, including Java Message Service, Kafka (open source or Confluent), MongoDB, NATS, Pulsar, RabbitMQ, relational databases (MySQL, Oracle, Postgres, etc.), and REST APIs.
  • Data toolkits like Akka.
  • Data formats, including Avro, CSV, JSON, Protobuf, and XML.

The Nstream Platform is also open and extensible, allowing developers to integrate with any third-party tools (for example, customers have built integrations with proprietary socket APIs and OBD-II software).

For organizations not using one of the streaming technologies mentioned above, Nstream provides simple APIs for creating additional plugins that integrate other streaming technologies with the SwimOS platform.

Using Nstream to enhance existing architectures

Nstream isn’t meant to replace your existing data architecture or to operate as a clunky add-on. Streaming data applications with Nstream can seamlessly integrate into your existing architecture to further enhance your current capabilities.

Here are several examples of architectures that Nstream can build upon.

Nstream + Kafka

Fully managed Kafka services help an organization transition from batch to real-time data processing. Kafka brokers act as a useful buffer between the real world and applications, but stateful operations like data aggregations and joins using stream processing operators can quickly become expensive or intractable.

To unlock the full potential of streaming data, organizations can turn to Nstream to surface valuable insights from relational data in real time without fetching or querying external state for each event being processed. It is the easiest way to build applications that continuously analyze streaming data from Apache Kafka.

Nstream + Akka

Unlike Kafka, Akka has a brokerless architecture. However, Akka itself still has limitations that prevent it from operating in true real time on its own: Akka actors are only accessible through the ActorContext, so a client that wants to observe an actor (e.g., another application or a browser) cannot reach it directly.

Nstream enhances Akka applications by enabling the automatic linking of distributed state. Web Agents (or stateful entities) in Nstream are similar to Akka Actors, as Web Agents provide a stateful, concurrent computational model with thread-safe access to state. This allows applications built on Akka to achieve real-time tracking of individual entities.

If you’re using Akka for a microservice, you can emit events into SwimOS/Nstream. In this way, Akka acts as a data source. Creating a Web Agent that notifies an Akka Actor by sending a message to its ActorRef is also possible.
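The two directions described above can be sketched in plain Java. Here `FakeActorRef` and `OrderAgent` are hypothetical stand-ins invented for illustration, not the real Akka or Nstream APIs: an event from the Akka side updates Web Agent state, and the agent in turn notifies an observing actor, analogous to sending a message to its ActorRef.

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical stand-in for an Akka ActorRef: messages land in a mailbox queue.
class FakeActorRef {
    final Queue<String> mailbox = new ConcurrentLinkedQueue<>();
    void tell(String msg) { mailbox.add(msg); }
}

// Hypothetical stand-in for a Swim Web Agent that receives events from an
// Akka microservice and notifies an observing actor when its state changes.
class OrderAgent {
    final Map<String, String> state = new ConcurrentHashMap<>();
    private final FakeActorRef observer;
    OrderAgent(FakeActorRef observer) { this.observer = observer; }

    // Called from the Akka side: the microservice emits an event into Swim.
    void onAkkaEvent(String orderId, String status) {
        state.put(orderId, status);
        // Notify the observing actor, analogous to sending to its ActorRef.
        observer.tell(orderId + ":" + status);
    }
}

public class AkkaBridgeSketch {
    public static void main(String[] args) {
        FakeActorRef ref = new FakeActorRef();
        OrderAgent agent = new OrderAgent(ref);
        agent.onAkkaEvent("order-7", "shipped");
        System.out.println(ref.mailbox.peek());
    }
}
```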

Nstream + Pulsar

Pulsar services provide a stateless approach to processing event streams and analyzing data to surface insights. Like Kafka, Pulsar requires an event broker, which can then be connected to Nstream. This allows businesses to execute business logic on data from multiple topics in real time.

Nstream applications use streaming joins, allowing you to join, filter, and tag multiple streams of data at scale while avoiding the high latency that comes from traditional stream-to-stream joins using typical data architectures.
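To make the idea of a streaming join concrete, here is a minimal, self-contained sketch in plain Java (not the Nstream API): state for each side of the join is kept in memory per key, so an event can be joined the moment its counterpart arrives instead of querying an external store.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Minimal sketch of a stateful streaming join: events from two streams
// (e.g., two Pulsar topics) are joined per key as they arrive.
public class StreamJoinSketch {
    private final Map<String, String> left = new HashMap<>();
    private final Map<String, String> right = new HashMap<>();

    // Record an event from the left stream; returns a result if both sides are present.
    Optional<String> onLeft(String key, String value) {
        left.put(key, value);
        return join(key);
    }

    // Record an event from the right stream; returns a result if both sides are present.
    Optional<String> onRight(String key, String value) {
        right.put(key, value);
        return join(key);
    }

    private Optional<String> join(String key) {
        if (left.containsKey(key) && right.containsKey(key)) {
            return Optional.of(left.get(key) + "|" + right.get(key));
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        StreamJoinSketch j = new StreamJoinSketch();
        j.onLeft("truck-3", "lat=40.7");                       // location topic
        System.out.println(j.onRight("truck-3", "fuel=55%"));  // telemetry topic
    }
}
```

A production system would additionally bound this state (e.g., with time windows or eviction); the sketch only illustrates why keeping join state local avoids the per-event lookups that make traditional stream-to-stream joins slow.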

Adapt Nstream to your infrastructure

When creating real-time streaming applications, Nstream doesn’t want — or need — to replace every tool you already have. Instead, the platform can work within your existing ecosystem. Watch this video to see how Kafka, Flink, and SwimOS work together to provide real-time data visibility for a retail company.