Query across Kafka & Iceberg

To follow this page, we advise running the demo first.

In the last section we saw Kafka data and Iceberg data living separately, exposed through three logical views: a view of Kafka, a view of Iceberg, and a combined view of Kafka & Iceberg. Each of these logical views allows us to query the dataset using SQL.

The hotset

Within the Jupyter notebook, executing the SQL query below shows the tables in the hotset database. We're going to surface Kafka data in an Iceberg table format.

spark.sql("SHOW TABLES IN hotset").show()

Running this we can see:

+---------+------------+-----------+
|namespace|   tableName|isTemporary|
+---------+------------+-----------+
|   hotset|    accounts|      false|
|   hotset|   customers|      false|
|   hotset|transactions|      false|
+---------+------------+-----------+

The rows of our tableName column match the different topics in our Kafka cluster (again, found in AKHQ).

Let's see five messages from the transactions topic by running:

spark.sql("SELECT * FROM isk.hotset.transactions LIMIT 5").show()

You are now looking at a Kafka topic viewed as an Iceberg table.

+--------------------+---------+---------------+-----------------+--------+--------------------+--------------------+---------------+------------+--------------------+
|       TransactionID|AccountID|TransactionType|TransactionAmount|BranchID|CustomerFlaggedFraud|     TransactionTime|kafka_partition|kafka_offset|            kafka_ts|
+--------------------+---------+---------------+-----------------+--------+--------------------+--------------------+---------------+------------+--------------------+
|20391fe0-0237-235...|     5070|     Withdrawal|          5534.48|     129|               false| 2025-10-23 12:34:46|              0|      500000|2025-11-28 13:09:...|
|dfba7354-8e95-9a6...|     8905|     Withdrawal|          3628.58|     116|               false|2025-10-23 12:34:...|              0|      500001|2025-11-28 13:09:...|
|5e9fde7d-5a1c-d3b...|     7707|        Deposit|           257.14|     121|                true|2025-10-23 12:34:...|              0|      500002|2025-11-28 13:09:...|
|c4ba5a1b-0828-277...|     1997|        Deposit|          8249.96|     126|                true|2025-10-23 12:34:...|              0|      500003|2025-11-28 13:09:...|
|f704e321-f062-04c...|     9343|        Deposit|          7337.52|     130|                true|2025-10-23 12:34:...|              0|      500004|2025-11-28 13:09:...|
+--------------------+---------+---------------+-----------------+--------+--------------------+--------------------+---------------+------------+--------------------+

Note: as this demo uses live, generated data, your Kafka messages will differ from those displayed throughout this guide. The metadata columns do not change, however, nor does the branches table within Iceberg.

The view-based approach took the query, identified the relevant Kafka data, and served it back in an Iceberg-friendly format.

We can also see the Kafka topic's schema by describing the hotset:
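The exact cell isn't reproduced here; as a sketch, Spark SQL's standard DESCRIBE command against the same isk catalog used above would look like this:

spark.sql("DESCRIBE TABLE isk.hotset.transactions").show(truncate=False)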

Notice the additional metadata columns: kafka_partition, kafka_offset and kafka_ts.

Kafka metadata is also available in Iceberg tables, giving you greater visibility over your data.
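For example, these metadata columns can be used directly in queries. A minimal sketch (the column names come from the output above; the partition filter value is illustrative only):

spark.sql("SELECT TransactionID, kafka_partition, kafka_offset FROM isk.hotset.transactions WHERE kafka_partition = 0 LIMIT 5").show()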

The coldset

Remember, our Iceberg data currently lives within MinIO, and as expected we can see these tables too:
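A sketch of the listing query; the coldset database name is an assumption based on the naming used so far:

spark.sql("SHOW TABLES IN coldset").show()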

The branches table has no corresponding Kafka topic; it lives exclusively in MinIO, alongside the other folders:

(MinIO folder listing from the previous page.)

Merged

Now, let's look at the data merged from both Kafka and Iceberg. To begin, we run the following query:
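A sketch of the query, assuming a merged database sits alongside hotset and coldset in the isk catalog:

spark.sql("SELECT * FROM isk.merged.transactions LIMIT 5").show()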

We'll see a result in the same shape as the hotset table above, now combining rows from both Kafka and Iceberg.

Looking at the message count is another excellent way of visualising the combined data. Run these three queries in one cell:
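A minimal sketch of that cell; the coldset and merged database names are assumptions carried over from above:

# Count transactions in the coldset, the merged view, then the hotset.
spark.sql("SELECT COUNT(*) FROM isk.coldset.transactions").show()
spark.sql("SELECT COUNT(*) FROM isk.merged.transactions").show()
spark.sql("SELECT COUNT(*) FROM isk.hotset.transactions").show()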

First, we see how many transactions have been stored in the coldset.

Second, we see how many messages we have when we merge the hotset with the coldset.

The difference between the coldset and merged counts represents the number of messages stored in Kafka. The final output shows us the contents of the hotset.

Note the difference of 10 between 21,650 and 21,660. As this is a real-time demo, 10 new messages arrived between the second and third SQL queries! Your own numbers will differ, but they will demonstrate the same point.

To round this off, let's display 10 messages from the transactions topic within the merged table:
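A sketch of the final query, again assuming the merged database name used above:

spark.sql("SELECT * FROM isk.merged.transactions LIMIT 10").show()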

Above, we have seen how Streambased gives you a unified view of your data, combining live Kafka messages with historic Iceberg data.

But you'll likely also want to make dashboards with this data. That is easily achieved with Streambased too, as we'll explore next.
