Architecture


I.S.K. (Iceberg Service for Kafka) is an Iceberg projection over real-time, event-based data stored in operational systems. Materialised at runtime, I.S.K. decouples the physical structure of the data from its logical, table-based representation. With I.S.K. you can:

  • Access up-to-date (zero-latency) data from analytical applications.

  • Impose structure on read in your analytical applications.

  • Logically repartition data with no physical data movement.

  • Surface multiple “views” on the data with different characteristics defined at runtime.

  • Take advantage of analytical practices such as indexing and column statistics to achieve significant performance gains.
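For example, the sketch below queries a Kafka topic as an Iceberg table through I.S.K. using PyIceberg. The catalog URI, API key, namespace and table name are placeholders, not real Streambased endpoints:

  from pyiceberg.catalog import load_catalog

  # Hypothetical I.S.K. REST catalog endpoint and credentials.
  catalog = load_catalog(
      "isk",
      type="rest",
      uri="https://isk.example.streambased.cloud",
      token="<your-api-key>",
  )

  # The Kafka cluster appears as a namespace, the topic as a table.
  table = catalog.load_table("my_kafka_cluster.payments")

  # Scan the topic as a table; the Iceberg projection is materialised at runtime.
  df = table.scan().to_arrow()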

High level architecture

Iceberg Architecture duality

  • A custom REST Iceberg catalog implementation generates the metadata file on the fly during the LoadTable call, based on the underlying streaming system topic.

  • The metadata file content is returned by the catalog on the fly during this operation; metadata files are ephemeral and are never actually stored anywhere.

  • An ephemeral manifest list holds the references to the manifest files.

  • Ephemeral manifest files are computed at request time with the data splits relevant to the query, allowing per-query optimisation of data partitioning, efficient pruning and schema on read.

  • Ephemeral data files are served over an S3-compatible interface but are in reality backed by data read directly off the source streaming system, with per-request pruning and splits.
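As a sketch of the first two points, a LoadTable call against a hypothetical I.S.K. endpoint (the host, namespace, table and API key are placeholders; the route follows the standard Iceberg REST catalog specification):

  import requests

  # Hypothetical I.S.K. endpoint; the route is the standard Iceberg REST
  # catalog LoadTable operation (GET /v1/namespaces/{ns}/tables/{table}).
  resp = requests.get(
      "https://isk.example.streambased.cloud/v1/namespaces/my_kafka_cluster/tables/payments",
      headers={"Authorization": "Bearer <your-api-key>"},
  )
  resp.raise_for_status()

  # The "metadata" in the response was generated on the fly from the topic;
  # no metadata file was read from storage to serve this call.
  metadata = resp.json()["metadata"]
  print(metadata["current-schema-id"])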

Stream Table Duality

I.S.K. maps common streaming concepts to Iceberg-specific terms. This mapping means that event streaming data from Apache Pulsar or Apache Kafka can be seamlessly represented in the tools and systems that consume Iceberg data. The table below shows these mappings:

Kafka Unit    | I.S.K. Unit | Description
------------- | ----------- | -----------
Cluster       | Namespace   | Each Kafka or Pulsar cluster is represented as an Iceberg namespace to indicate the separation of resources between them.
Topic         | Table       | As logical collections of data points, topics and tables are equivalent.
Message       | Row         | An individual message within a topic can be viewed as a row within a table when combined with a schema that indicates a consistent structure.
Message field | Column      | A field within an event streaming message represents a single attribute of a data point, similar to a column within a table row in Iceberg.
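The mapping can be observed directly through the catalog API. A minimal sketch, reusing the hypothetical "isk" catalog from the earlier example:

  from pyiceberg.catalog import load_catalog

  catalog = load_catalog("isk", type="rest", uri="https://isk.example.streambased.cloud")

  # Each Kafka or Pulsar cluster is listed as an Iceberg namespace...
  for namespace in catalog.list_namespaces():
      print("cluster:", namespace)
      # ...and each topic in that cluster is listed as a table.
      for identifier in catalog.list_tables(namespace):
          print("  topic:", identifier)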

Indexing

The optimal performance characteristics for consumers of event streaming systems and for the systems that typically consume Apache Iceberg tables are very different. Event streaming consumers expect to iterate regularly through a small volume of messages, process the message content and move on to the next batch immediately. Analytical workloads typically operate in a much less regular pattern over far higher volumes of data.

To address this disparity, I.S.K. employs a set of indexing techniques to selectively read only the data required by a query. For instance, given the predicate “WHERE customer_name = ‘X’”, I.S.K.'s indexing will drastically reduce the number of messages read from the underlying event streaming system by pruning the read requirement to only those messages that contain the required field value.
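As a sketch of the effect, the same predicate expressed as a PyIceberg row filter, assuming the hypothetical "isk" catalog and an "orders" topic from the earlier examples:

  from pyiceberg.catalog import load_catalog

  catalog = load_catalog("isk", type="rest", uri="https://isk.example.streambased.cloud")
  table = catalog.load_table("my_kafka_cluster.orders")

  # The filter is pushed down to I.S.K., which uses its indexes to prune the
  # scan to only the Kafka messages that can contain customer_name = 'X'.
  scan = table.scan(row_filter="customer_name = 'X'")
  print(scan.to_arrow().num_rows)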

The above is far from an exhaustive account of the performance-enhancing techniques employed by I.S.K., and the set is evolving as new use cases are addressed.

Logical Architecture diagram

Request processing flow:

  1. Background indexing of data in the streaming system maintains an up-to-date index of events.

  2. When an Iceberg query (e.g. Spark SQL SELECT … FROM DataTopic WHERE customer_name='X') is executed:

    1. The analytical application queries I.S.K. for table metadata.

    2. I.S.K. generates the table metadata at runtime by querying the Streambased indexer service.

    3. I.S.K. returns the table metadata to the analytical application.

  3. The table metadata returned in step 2 contains the locations of manifest files (S3 paths):

    1. The analytical application reads the manifest files directly from the storage service (S.S.K.).

    2. S.S.K. queries I.S.K. for the manifest files; I.S.K. generates them with partitions and splits that enable efficient data pruning specific to the query.

    3. S.S.K. returns the generated manifests to the analytical application.

  4. The analytical application reads data files from storage based on the manifest files received above:

    1. S.S.K. fetches data from the source streaming system, applying skips and splits so that only the data matching the index and query predicate is read.

    2. S.S.K. aggregates individual events into data files in memory, in a streaming fashion (a whole data file is never held in memory).

    3. S.S.K. returns the events, batched as data files, to the analytical application.
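Putting the flow together, a minimal sketch of a Spark session configured against hypothetical I.S.K. and S.S.K. endpoints (the URIs and names are placeholders; the configuration keys are the standard Iceberg-on-Spark catalog properties, and the Iceberg Spark runtime jar is assumed to be on the classpath):

  from pyspark.sql import SparkSession

  spark = (
      SparkSession.builder
      .config("spark.sql.catalog.isk", "org.apache.iceberg.spark.SparkCatalog")
      .config("spark.sql.catalog.isk.type", "rest")
      .config("spark.sql.catalog.isk.uri", "https://isk.example.streambased.cloud")
      .config("spark.sql.catalog.isk.s3.endpoint", "https://ssk.example.streambased.cloud")
      .getOrCreate()
  )

  # Steps 2-4 all happen behind this one query: table metadata, manifest
  # lists, manifest files and data files are generated on the fly and the
  # events are read directly from the source Kafka topic.
  spark.sql(
      "SELECT * FROM isk.my_kafka_cluster.DataTopic WHERE customer_name = 'X'"
  ).show()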