
Craxel on Indexing Algorithms for Real-time Knowledge Graphs, New Hires and European Expansion, AI at the Edge, and in the Next Decade


A little while back we were excited to share the news of our investment in Craxel, introducing its high-performance Black Forest knowledge platform, which leverages cutting-edge algorithms to efficiently deliver fast, contextualised data to AI applications.


That blog post included the first part of a Q&A with the company’s Founder and CEO David Enga, which outlined Black Forest’s capabilities and the O(1) indexing algorithm that creates multidimensional knowledge graphs, as well as target applications for the platform. Below, the Q&A concludes with a deeper dive into Black Forest and O(1), along with news of a new hire, Craxel’s expansion into Europe, and planned business focuses for 2026. There’s also some peering into the next decade.

 

Can you delve a bit deeper into the O(1) algorithm and why it is important to Black Forest and its performance?


The O(1) algorithm is the core breakthrough behind Black Forest. It’s a constant-time, multi-dimensional hash and indexing architecture that lets the platform index and organise complex data (vectors, triples, timelines, spatial and semantic) as it arrives, making it immediately addressable. That’s what powers everything else Black Forest can do: real-time ingest, fast vector search, real-time knowledge graph creation, and secure encrypted queries, without the scaling costs and delays of legacy systems.


Legacy systems slow down as dataset size grows. Ours doesn’t. We’ve eliminated the bottlenecks that have defined the last 50 years of data infrastructure. Craxel’s algorithm doesn’t care how big the dataset is; it decouples performance from total dataset size, so the time to receive a query response is directly related to the number of results for the query, not the size of the overall dataset.
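
Craxel’s patented algorithm isn’t public, but the scaling property described here can be illustrated with a toy Python sketch: a multidimensional hash index in which insert and point lookup each touch a single bucket, so cost tracks the number of matching results rather than total dataset size. All names are illustrative and this is not Craxel’s implementation.

```python
# Toy sketch of a constant-time multidimensional index.
# Illustrative only: Craxel's patented O(1) algorithm is not public,
# and this is not its implementation, just the scaling idea.
from collections import defaultdict

class MultiDimHashIndex:
    """Buckets records by a hash of their (coarsened) coordinates.

    Insert and point lookup touch one bucket each, so their cost is
    independent of how many records the index holds overall.
    """
    def __init__(self, cell_size=1.0):
        self.cell_size = cell_size
        self.buckets = defaultdict(list)

    def _key(self, coords):
        # Quantise each dimension into a grid cell; the tuple of cell
        # ids is the bucket key. O(d) for d dimensions, O(1) in records.
        return tuple(int(c // self.cell_size) for c in coords)

    def insert(self, coords, record):
        self.buckets[self._key(coords)].append((coords, record))

    def query(self, coords):
        # Cost is proportional to the bucket's population (the number
        # of nearby candidates), not to the total dataset size.
        return [r for c, r in self.buckets[self._key(coords)] if c == coords]

# Usage: a million inserts later, a point query still inspects one bucket.
idx = MultiDimHashIndex(cell_size=0.5)
idx.insert((12.3, 45.6, 7.8), {"id": "sat-001"})
print(idx.query((12.3, 45.6, 7.8)))  # -> [{'id': 'sat-001'}]
```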


Craxel’s indexing technology enables highly selective queries without brute force or a need for massive computation. That’s how the Black Forest engine powers real-time knowledge graphs, fast vector search, and encrypted query, all with lower cost and latency. It replaces inefficient legacy indexing structures with something fundamentally more efficient, mathematically elegant, massively parallel, and built for the scale of modern data environments.


Everyone else scales by adding more hardware. We scaled by using better math. So Black Forest customers see dramatic reductions in infrastructure: fewer servers, less power consumption, and less storage overhead. We’ve replaced pure compute with a constant-time indexing model that allows the system to scale efficiently even as data volumes grow exponentially.


You mentioned that the O(1) algorithm creates knowledge graphs at line speed. Can you explain what a knowledge graph is?


One of the biggest challenges in AI is maintaining context as data volumes grow and decisions need to happen faster. Most systems store information, but they do not preserve the relationships between pieces of information in a way that can be accessed immediately. That means context has to be reconstructed after the fact, which takes time and compute. As geopolitical and commercial competition accelerates, that delay becomes a real disadvantage.


A knowledge graph solves this by storing entities and their relationships directly, along with time and provenance, so context already exists as part of the data. This is vital in critical areas like space situational awareness, where you need to continuously understand how satellites, debris, and sensor observations relate. Or, more down to earth, in retail, where agent-driven systems must understand relationships between customers, inventory, and supply chains in real time.


What makes Black Forest different is that the knowledge graph is built and indexed at the moment data is written, and it stays immediately queryable regardless of scale. Traditional systems slow down and require reindexing as they grow. With Black Forest, ingest, indexing, and retrieval all occur in constant time, so new information becomes operational immediately. That allows AI systems to act on complete, current context, which is increasingly what determines who moves first and who falls behind.
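
As a rough illustration of a graph that stores relationships with time and provenance, and indexes them at the moment they are written, here is a minimal in-memory sketch. The structure and names are hypothetical, not Black Forest’s actual data model:

```python
# Minimal sketch of a knowledge graph whose edges carry time and
# provenance, and which is indexed as each fact is written.
# Hypothetical structure; not Black Forest's actual data model.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    subject: str      # e.g. "sat-001"
    predicate: str    # e.g. "observed_near"
    obj: str          # e.g. "debris-42"
    timestamp: float  # when the relationship held
    source: str       # provenance: which sensor/system asserted it

class KnowledgeGraph:
    def __init__(self):
        # Adjacency indexes updated on every write, so a new fact is
        # queryable immediately; no separate batch-indexing step.
        self.out_edges = defaultdict(list)
        self.in_edges = defaultdict(list)

    def write(self, edge: Edge):
        self.out_edges[edge.subject].append(edge)
        self.in_edges[edge.obj].append(edge)

    def neighbours(self, entity: str):
        return self.out_edges[entity] + self.in_edges[entity]

g = KnowledgeGraph()
g.write(Edge("sat-001", "observed_near", "debris-42", 1700000000.0, "radar-7"))
print(g.neighbours("sat-001"))  # context available the moment it is written
```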


Why does Black Forest create Knowledge Graphs from its data inputs?


AI systems and human analysts both rely on context to understand what’s actually happening, and knowledge graphs are the most effective way to capture that context by preserving the relationships between data as it’s created.


Black Forest builds multidimensional knowledge graphs at line speed, fusing timelines, relationships, entities, and vector embeddings as the data arrives. The graph is built continuously as part of ingest, not reconstructed later through batch indexing or post-processing.


Fraud is a good example. It’s not enough to know that a transaction occurred. You need to understand how it relates to prior activity across accounts, devices, locations, and behavior. Traditional relational databases store those events, but uncovering relationships requires complex joins and processing that become slower as data grows. Most knowledge graph systems still rely on separate indexing pipelines, which means the graph is always catching up to the data.


That delay limits their usefulness. Fraud needs to be detected as it happens, not after the fact. Black Forest builds and indexes the knowledge graph continuously, so those relationships are immediately visible and queryable at any scale. That allows organisations to identify patterns and act in real time, which is becoming increasingly important as data volumes continue to grow.
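
To make the fraud example concrete, here is a toy sketch of relating a new transaction to prior activity through direct index lookups rather than relational joins. The field names and logic are invented for illustration:

```python
# Toy illustration of the fraud pattern: relate a new transaction to
# prior activity via direct index lookups instead of relational joins.
# Hypothetical names and fields; not Craxel's implementation.
from collections import defaultdict

by_device = defaultdict(list)   # device_id -> prior transactions
by_account = defaultdict(list)  # account_id -> prior transactions

def ingest(txn):
    """Index the transaction on arrival; its relationships are
    immediately queryable, with no post-hoc join or reindexing pass."""
    related = by_device[txn["device"]] + by_account[txn["account"]]
    by_device[txn["device"]].append(txn)
    by_account[txn["account"]].append(txn)
    return related  # prior activity sharing a device or account

ingest({"id": 1, "account": "A", "device": "phone-9", "amount": 40})
hits = ingest({"id": 2, "account": "B", "device": "phone-9", "amount": 900})
print(hits)  # txn 1 surfaces instantly: same device, different account
```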


What mechanisms exist for AI agents and applications to access the data ingested into Black Forest, including the Knowledge Graphs that are created?


Black Forest provides a unified, AI-ready knowledge asset that connects fragmented data sources and gives applications direct access to structured information for analytics and decision-making. It maps, indexes, and contextualises data across all sources, offering relationship-centric and ontology-aware retrieval that delivers unified results to applications and agents.


Data stored in Black Forest knowledge graphs can be accessed and retrieved through high-performance querying via a secure API integration, specifically designed to handle massive-scale, multi-dimensional time-series data. Since the platform enables instant indexing of records as they are ingested, they are immediately available for retrieval and analysis.


For legacy systems, SQL access is also supported. Black Forest can store specific ingested data as relational tables and can also provide a single point of access via SQL to source datasets if required.
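
The post doesn’t detail Black Forest’s SQL interface, so the following is a hedged sketch of the general pattern: an application issuing a relationship query against a relational view over a standard Python DB-API connection. The table and column names are invented, and sqlite3 stands in only so the example runs standalone:

```python
# Hypothetical example of querying a relational view of ingested data
# over a standard SQL connection. Table/column names are invented for
# illustration; consult Craxel's documentation for the real schema.
# sqlite3 is used here only so the sketch runs standalone.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE observations (entity TEXT, related TEXT, ts REAL)")
conn.execute("INSERT INTO observations VALUES ('sat-001', 'debris-42', 1.7e9)")

# The kind of relationship query an application might issue via the
# single SQL point of access described above.
rows = conn.execute(
    "SELECT related, ts FROM observations WHERE entity = ? ORDER BY ts DESC",
    ("sat-001",),
).fetchall()
print(rows)  # -> [('debris-42', 1700000000.0)]
```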


You’ve been recruiting recently, including for a new office in Europe. Can you share some more about your recent expansion and plans for 2026?


Yes! Our expansion is being driven directly by customer deployments and the need to support operational systems in multiple regions. A key part of that effort has been bringing on Gary Connolly to help lead our European presence. Gary has spent his career working at the forefront of high-speed data infrastructure, helping some of the world’s largest organisations solve problems where microseconds matter. He understands both the technical and operational realities of deploying systems that have to perform reliably at extreme scale, and he brings long-standing relationships across sectors that are now facing similar challenges with AI and data growth.


His role is focused on establishing a technical and customer presence in Europe so we can support deployments locally and work directly with organisations integrating Black Forest into mission-critical environments. We are seeing strong demand in sectors where data sovereignty, latency, and scale are not optional, including space, defense, and financial infrastructure.


For 2026, the priority is deliberate growth around active deployments. We are continuing to add engineering and deployment capability and expanding partnerships that allow Black Forest to operate within sovereign cloud, on-premise, and Edge environments. The common theme across these customers is that they have reached the limits of conventional data architecture and need a fundamentally more scalable and efficient way to maintain and retrieve context in real time.


Apart from geographic expansion, broadly where do you expect to be investing your energy and resources this coming year?


Most of our investment is going into scaling deployments and expanding the team to support customers operating at very large data volumes. These environments involve continuous ingest from operational systems, sensors, and transaction streams, where data needs to be correlated and retrieved immediately without slowing down as it grows.


We are seeing strong growth both through major systems integrators in the public sector and directly with large private sector organisations. Across both, the challenge is the same: conventional architectures rely on compute-heavy indexing, data duplication, and ever-increasing compute just to keep systems responsive, which is becoming too costly and increasingly non-performant at scale. Black Forest allows them to ingest, correlate, and securely search massive datasets in constant time, giving them a fundamentally more scalable foundation.


It is also significantly more cost efficient. Because our patented indexing architecture knows exactly where data resides, retrieval does not depend on scanning or large compute clusters. It functions more like a GPS for data. That precision allows organisations to dramatically reduce cloud infrastructure and compute costs while maintaining immediate access to fully connected data. This is critical as organisations are being forced to scale data access and performance while controlling costs, not expanding them.


You list Edge and IoT as solution areas for Black Forest. What are your expectations in coming months for more distributed, or decentralised, AI?


There are practical limits to centralising everything. Many of the fastest-growing data sources exist at the Edge, including satellites, autonomous systems, industrial sensors, and everyday devices like smartphones, which continuously generate location, image, and transaction data. Moving all of that data to centralised cloud infrastructure introduces delay, bandwidth constraints, and significant cost.


What we are seeing is a shift toward a hybrid model, where systems at the Edge need the ability to ingest, organise, and correlate their own data locally so it can be used immediately.


Black Forest was designed for that environment. Because ingest, indexing, and retrieval occur in constant time, systems can maintain a fully connected view of their data at the point of collection and synchronise only what is necessary. This is critical in national security, where forward sensors and space assets must correlate activity in real time; in cybersecurity, where threats must be detected at the point of network activity; and in retail and logistics, where stores and fulfillment systems must continuously respond to live demand. It allows organisations at the Edge to act immediately while reducing dependence on centralised systems and the cost of moving massive volumes of data.
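
As a rough sketch of that hybrid pattern, the toy Python below indexes each record locally on ingest so it is usable immediately, and queues only records matching a sync policy for upstream transfer. It is illustrative only, and does not represent Craxel’s synchronisation design:

```python
# Rough sketch of the hybrid Edge pattern: each node indexes its own
# data locally for immediate use and forwards only records that match
# a sync policy. Illustrative only; not Craxel's synchronisation design.
from collections import defaultdict

class EdgeNode:
    def __init__(self, should_sync):
        self.local_index = defaultdict(list)  # key -> records, queried locally
        self.outbox = []                      # minimal set shipped upstream
        self.should_sync = should_sync

    def ingest(self, key, record):
        # Indexed at the point of collection: usable immediately,
        # with no round trip to a central cloud.
        self.local_index[key].append(record)
        if self.should_sync(record):
            self.outbox.append((key, record))

node = EdgeNode(should_sync=lambda r: r.get("severity", 0) >= 8)
node.ingest("sensor-3", {"reading": 0.2, "severity": 1})  # stays local
node.ingest("sensor-3", {"reading": 9.9, "severity": 9})  # queued for sync
print(len(node.outbox))  # -> 1: only the high-severity record moves
```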


Now, beyond the year ahead, can you turn your imagination to the next decade? What does the AI-driven world of 2030 and beyond look like?

By 2030, autonomous systems will be part of everyday infrastructure. Self-driving vehicles, robotic logistics and manufacturing, persistent sensing from space, and autonomous cyber and defense systems will all be operating continuously. Those systems will depend on the ability to ingest and understand massive amounts of data in real time. The organisations and nations that can do that reliably will have a clear operational and economic advantage.


At the same time, there is a growing physical constraint around energy. The current model of storing data in multiple places and repeatedly reprocessing it just to make it usable consumes enormous amounts of compute and power. As data volumes continue to grow, reducing the energy and infrastructure required to manage and access that data will become a national and commercial priority.


Craxel’s role is to provide a more efficient foundation. Black Forest organises data so it is immediately accessible as it is created, without requiring constant reindexing or large compute overhead to find it later. That allows organisations to operate at massive scale while using significantly less infrastructure and energy. As autonomy, robotics, and space-based systems expand, the ability to maintain instant, efficient access to data will increasingly separate organisations that can operate at global scale from those that cannot.


 
 
 