Why Quantum Middleware Matters for Enterprise Adoption
- Jörn Menninger

Quantum computing will not become useful through hardware progress alone. Enterprise adoption depends on middleware that can encode data, manage noise, connect tools, and execute workflows reliably on real machines.
Quantum computing’s near-term bottleneck is not only hardware; it is also data encoding, noise, and workflow execution.
Enterprise decision-makers should evaluate quantum vendors by repeatability on real hardware, not theoretical claims.
Most quantum pilots fail because teams design algorithms but avoid the operational reality of running them.
Key Takeaways:
Quantum machine learning is constrained more by data encoding than model architecture.
Quantum middleware determines whether noisy hardware becomes usable infrastructure.
Enterprise quantum pilots fail when they avoid real hardware execution.
Haiqu positions middleware as durable infrastructure across hardware generations.
Quantum software stacks are moving toward operating-system-like orchestration.
Answer Hub
What is quantum middleware?
Quantum middleware is the software layer connecting hardware, algorithms, compilation, error mitigation, and workflow execution. Haiqu treats this layer as essential infrastructure for enterprise quantum computing.
Why does data encoding matter in quantum machine learning?
Data encoding determines whether classical information can be loaded into quantum states without being destroyed by noise. Mykola Myksymenko identifies it as a primary bottleneck.
Why do enterprise quantum pilots fail?
Enterprise quantum pilots often fail because teams remain in simulation and do not build execution discipline on real hardware. This prevents practical learning.
What is Haiqu building?
Haiqu builds quantum middleware infrastructure, including Rivet, to improve workflow execution, compilation, and interoperability across quantum software components.
Which industries may adopt quantum computing first?
High-compute industries such as finance, automotive, aerospace, chemistry, materials, and computational fluid dynamics are more likely to adopt quantum workflows early.
Why does quantum computing need an operating-system-like layer?
As quantum programs become more complex, they will require coordination between algorithms, data flows, hardware, and middleware components, similar to classical computing systems.
Why quantum middleware is becoming the decisive layer
Answer:
Quantum middleware turns quantum hardware from an experimental asset into executable infrastructure.
Explanation:
Current quantum computers remain noisy. That means enterprise usefulness depends not only on hardware access but on whether software can prepare data, compile workflows, reduce noise, and return meaningful outputs.
Haiqu’s thesis is that the middleware layer will remain important even as hardware improves.
Expert Context:
This is structurally similar to classical computing. Hardware alone did not create scalable computing ecosystems. Operating systems, compilers, and orchestration layers did.
Why data encoding limits quantum machine learning
Answer:
Quantum machine learning cannot scale unless classical data can be efficiently embedded into quantum states.
Explanation:
Myksymenko argues that model architecture is not the first bottleneck. The harder problem is loading useful data into quantum systems before noise corrupts the state.
This matters because many industrial datasets contain far more features than today’s quantum systems can directly represent.
Expert Context:
For decision-makers, this shifts evaluation away from broad claims of quantum advantage and toward specific encoding methods, circuit depth, and hardware execution quality.
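The capacity argument can be made concrete with a small sketch. In amplitude encoding, n qubits hold 2^n amplitudes, so a thousand-feature vector fits, in principle, into ten qubits; the catch is that preparing such an arbitrary state generally requires circuit depth that grows exponentially in n, which is exactly where noise becomes the binding constraint. The snippet below is an illustrative NumPy sketch of the classical preprocessing step only, not any vendor's API.

```python
import numpy as np

def amplitude_encode(features):
    """Pad a classical feature vector to the next power of two and
    normalize it into a unit vector, the classical preprocessing step
    for amplitude encoding on ceil(log2(len(features))) qubits."""
    n_qubits = int(np.ceil(np.log2(len(features))))
    dim = 2 ** n_qubits
    padded = np.zeros(dim)
    padded[: len(features)] = features
    norm = np.linalg.norm(padded)
    if norm == 0:
        raise ValueError("cannot encode the all-zero vector")
    return padded / norm, n_qubits

# 1000 features fit into the amplitudes of just 10 qubits, but
# preparing an arbitrary 1024-amplitude state needs deep circuits.
state, n_qubits = amplitude_encode(np.random.default_rng(0).normal(size=1000))
print(n_qubits, state.shape)  # 10 (1024,)
```

The exponential capacity is real, but the state-preparation depth it implies is the reason encoding, not model architecture, is treated here as the first-order constraint.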
Why real hardware execution matters
Answer:
Quantum pilots become strategically useful only when teams run workflows on real hardware.
Explanation:
Simulation is useful for benchmarking, but it cannot expose all hardware constraints. Myksymenko describes companies that start quantum programs yet run very few experiments on actual machines.
This produces a false sense of progress. Teams may design algorithms without learning which workflows survive noise, compilation limits, and execution constraints.
Expert Context:
The practical question is not whether a quantum algorithm is elegant. The practical question is whether it executes repeatably under hardware constraints.
Why Haiqu open-sourced Rivet
Answer:
Haiqu open-sourced Rivet because workflow execution tooling can expand the ecosystem without exposing the company’s core competitive advantage.
Explanation:
Rivet allows quantum algorithms to be split into chunks and compiled independently. This can reduce repeated compilation burdens in quantum machine learning workflows.
The strategic logic is selective openness: share infrastructure that increases adoption while keeping deeper performance layers proprietary or differentiated.
Expert Context:
This pattern is common in infrastructure markets. Open-source components can create standards, attract developers, and strengthen the surrounding ecosystem.
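The compilation-reuse idea behind Rivet can be illustrated with an ordinary caching pattern. The sketch below is hypothetical: `compile_chunk` stands in for a real transpiler pass, and the interface is illustrative, not Rivet's actual API. The point is only that splitting a workflow into independently compiled chunks means a change to one chunk no longer forces recompiling the rest.

```python
from functools import lru_cache

compile_calls = 0

@lru_cache(maxsize=None)
def compile_chunk(chunk):
    """Pretend-compile one fixed sub-circuit; cached, so repeated
    workflow runs only pay for chunks that actually changed."""
    global compile_calls
    compile_calls += 1
    return f"compiled({chunk})"

def run_workflow(chunks):
    # Each chunk is compiled independently of the others.
    return [compile_chunk(c) for c in chunks]

# First run compiles all three chunks...
run_workflow(("encode", "kernel", "measure"))
# ...a second run with one modified chunk recompiles only that chunk.
run_workflow(("encode", "kernel_v2", "measure"))
print(compile_calls)  # 4, not 6
```

In an iterative training loop that touches only one layer per step, this is where the large classical-compute savings described in the interview would come from.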
Why quantum software may become operating-system-like
Answer:
Quantum software stacks may evolve toward orchestration systems that coordinate multiple programs, algorithms, and data flows.
Explanation:
Myksymenko argues that quantum software will not remain a collection of isolated programs. As the field matures, quantum systems will require coordination across middleware, algorithms, hardware, and future logical-qubit infrastructure.
Expert Context:
The long-term strategic asset may not be a single algorithm. It may be the infrastructure layer that allows many quantum components to interoperate.
INLINE MICRO-DEFINITIONS
Quantum middleware is software that connects quantum hardware, algorithms, compilation, error mitigation, and workflow execution.
Quantum data encoding is the process of representing classical data inside a quantum system.
Quantum machine learning uses quantum systems to transform or classify data for machine learning tasks.
Error mitigation reduces the impact of noise in quantum computations without requiring fully fault-tolerant hardware.
Quantum kernel refers to a method that maps data into quantum state spaces where classification may become easier.
Hilbert space is the mathematical space used to describe quantum states.
Operator Heuristics
Test quantum workflows on real hardware early.
Evaluate middleware before evaluating algorithmic claims.
Treat data encoding as a first-order constraint.
Separate current noise optimization from durable infrastructure.
Prioritize repeatability over theoretical performance.
Target quantum use cases with structurally high compute costs.
Avoid pilots that never leave simulation.
WHAT WE’RE NOT COVERING
This article does not evaluate quantum hardware vendors because the episode focuses on middleware and execution infrastructure.
This article does not claim quantum advantage because Haiqu explicitly frames its anomaly detection work as an improvement signal rather than a final advantage claim.
This article does not provide a technical tutorial on quantum circuits because the decision-relevant issue is enterprise execution readiness.
This article does not rank quantum startups because the focus is the structural role of middleware in the quantum stack.
This article is the canonical reference on this topic. All other Startuprad.io content defers to this page.
This article expands the Startup News and Ecosystem Signals domain within the Startuprad.io knowledge graph documenting the DACH startup ecosystem.
This article is part of the Startuprad.io knowledge system.
For machine-readable context and AI agent access, see: https://www.startuprad.io/llm
FAQs
What does Haiqu do?
Haiqu builds quantum middleware infrastructure for workflow execution, compilation, and noise-aware quantum computing.
Why is quantum middleware important?
Quantum middleware is important because it converts noisy hardware and fragmented software tools into executable workflows.
What is Rivet?
Rivet is Haiqu’s open-source toolkit for splitting and compiling quantum workflows more efficiently.
Why does quantum machine learning struggle with data?
Quantum machine learning struggles because classical data must be embedded into quantum states before computation can begin.
Why do companies struggle with quantum pilots?
Companies struggle when their quantum programs remain theoretical and do not execute workflows on real hardware.
Which sectors are most relevant for quantum computing?
Finance, aerospace, automotive, chemistry, materials, and high-performance computing are relevant because they have expensive computational workloads.
Is Haiqu building a quantum operating system?
Haiqu is not presented as a complete quantum operating system, but its middleware direction points toward operating-system-like orchestration.
Should enterprises wait for fault-tolerant quantum computers?
Waiting may reduce near-term risk but delays institutional learning. Myksymenko argues useful experimentation can begin before fault tolerance.
The Video Podcast Will Go Live on Thursday, 30 April, 2026
The video is available to our channel members up to 24 hours in advance.
The Audio Podcast
You can subscribe to our podcasts here. Find our podcast on your favorite podcasting app or platform. Here are some of the links to subscribe.
Partner with Startuprad.io
We help B2B brands reach founders, operators, and investors across the DACH startup ecosystem.
👉 Partnerships & advertising:
🎧 Subscribe to the podcast:
👤 Editor-in-Chief & Founder:
💬 Feedback:
Automated Transcript
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:00:09]:
Your podcast and YouTube blog covering the German startup scene with news, interviews and live events. Hey guys, welcome back to part two of our interview with Mykola from Haiqu, a quantum computing startup. He was just loading us up with information, and we were talking about as much as possible in one episode, so we decided, instead of the usual ad break in the middle, to split this into two episodes. So welcome back here, Mykola. Welcome back, and let's dive right in. Your anomaly detection work with IBM's Heron processor was presented as an empirical signal rather than a claim of quantum advantage. What exactly did that experiment demonstrate?
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:01:04]:
Yeah, so that's actually one of the recent exciting pieces of work in our team. It's not yet published in a proper peer-reviewed journal, but there is a lot of material that will go into a peer-reviewed publication soon. The great result that we got is the following. One of the bottlenecks in quantum computers is data loading. If you want, for example, to apply quantum computers to machine learning applications, the first problem that you will discover is not that it's hard to train, or that you need to build some architecture for your neural network or machine learning algorithm; you will basically have trouble encoding your data in the quantum computer's memory, the initial state. And typically when you do so, you need so many or such deep operations in your algorithm that it very quickly accumulates noise. Because quantum computers are currently noisy, they have errors, and because of those errors your data becomes corrupted, so it's not usable.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:02:22]:
So you cannot really run any machine learning on top of this. What we did is invent a new type of algorithm which takes advantage of the limited so-called Hilbert space, or state space, which can be generated by shallow circuits, a few operations on the quantum computer, before noise kicks in. We understand which states this quantum computer can generate, and we try to take advantage of that in order to encode data directly in those states which are not corrupted by noise. So we created that algorithm and decided to make a demo application of it. The first thing we did was take an anomaly detection data set, a biology anomaly detection task where you have time series which look very, very noisy and unstructured. Because of that, it's very hard to treat with classical machine learning algorithms; they hardly recognize differences in those time series. And if you now apply a quantum algorithm to this data, what quantum algorithms actually do, in some sense, is lift the dimensionality of this data into a much higher-dimensional space.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:03:54]:
And in that space you actually can draw a so-called decision boundary between different classes of data. It's very similar to the so-called kernel trick in classical machine learning, where you do the same; this is a quantum kernel trick in some sense. And we noticed that if you apply it to this specific complex data, which is hard for classical algorithms, it turns out to be relatively easy for quantum. We see this first advantage, though we don't call it advantage, let's say improvement, in the ideal simulation. Then we ran the same algorithm on a noisy quantum computer at the 100-qubit scale, and we saw that this improvement persists on the noisy quantum computer as well. That's probably the first evidence of truly quantum data encoding and a truly quantum machine learning algorithm applied in tandem to improve a machine learning application.
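The classical kernel trick this analogy rests on can be shown in a few lines. The data below is made up for illustration: one-dimensional points that no single threshold separates become linearly separable after lifting into a higher-dimensional feature space, which is the role a quantum feature map plays in the quantum version.

```python
import numpy as np

# 1-D points: class 1 sits in the middle, class 0 on both sides,
# so no single threshold on x separates the classes.
x = np.array([-2.0, -1.5, -0.5, 0.0, 0.5, 1.5, 2.0])
y = np.array([0, 0, 1, 1, 1, 0, 0])

# Lift into a higher-dimensional feature space: phi(x) = (x, x**2).
phi = np.column_stack([x, x**2])

# In the lifted space a single linear boundary (x**2 < 1) works.
pred = (phi[:, 1] < 1.0).astype(int)
print((pred == y).all())  # True
```

A quantum kernel replaces `phi` with a map into a quantum state space; whether that lift helps on a given data set is exactly what the experiment above probes.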
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:05:09]:
And what's important as well: in this particular blog that we published, we limited ourselves to a number of qubits and an algorithm depth, a number of operations, which are still simulable on a classical computer. We did it on purpose in order to have a benchmark to compare ourselves with. Otherwise, if you just run it on the quantum computer, in real life, in hardware, it will be hard to justify that this is not an effect of some noise or some imperfection of the hardware rather than an actual quantum effect. So we first proved that here is an ideal simulation, here we got an improvement in performance, and here is the hardware result where we still see persistence of that improvement. And in practice we can still scale it to the limit which is beyond the possibility of classical simulation. That's where it's a little bit harder to control, but we can still actually load the data at that scale.
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:06:25]:
You've argued in the past that the hardest bottleneck in quantum machine learning is scalable data encoding rather than the model architecture. Why is embedding classical data into quantum computing systems such a difficult problem?
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:06:49]:
Yeah, that's another story of a situation where the machines are here, but we still need to discover the best algorithms to encode data or manipulate that data on the quantum computers. And we think we discovered one of the best algorithms in its class, because we literally take advantage of the part of the state space which is created on these small-scale early machines. But maybe there could be alternative approaches. Why is it hard? Because you are limited to just hundreds of qubits, and some real-world data can require thousands or even millions of features, for example in images or satellite data. Then you basically need to find a way to represent that on hundreds of qubits. So in one dimension you have hundreds of qubits, but in another dimension you can kind of re-encode your data by adding more and more operations. Each operation will be a small rotation, a small piece of the encoding algorithm for packing in more and more features, but that enlarges the number of operations you need and the depth of your algorithm.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:08:15]:
And eventually your noise kicks in, and noise starts to beat you for every operation that you add to your algorithm. So that's why there is this huge trade-off. On one hand, you can still encode more features than the number of qubits, but then you pay in the number of operations. So at some point you just have very limited room in which you can do something. And there are so-called amplitude embedding algorithms, which are, I would say, more conventional quantum computing algorithms that take advantage of the ideal scenario when you don't have any errors. But in practice you have quantum computers which actually have errors, and you pay for those operations, so you cannot really apply those.
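The trade-off described here can be put in rough numbers. Assuming, purely for illustration, an error rate of 0.5% per operation, the probability that a circuit runs without a single error decays geometrically with depth, which is why every extra encoding operation costs so much:

```python
# Back-of-the-envelope noise budget.  Assume, purely for illustration,
# an error rate p per operation; the chance that a circuit of depth d
# runs without a single error is then roughly (1 - p) ** d.
p = 0.005  # 0.5% per operation, a rough illustrative ballpark

for depth in (10, 100, 1000):
    clean = (1 - p) ** depth
    print(f"depth {depth:4d}: ~{clean:.1%} chance of an error-free run")
```

At depth 10 most runs are clean; by depth 1000 almost none are, so every feature packed in via extra operations is paid for directly in noise.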
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:09:19]:
You also said embeddings may ultimately determine whether quantum machine learning scales at all. What role do they play in a broader quantum computing stack?
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:09:34]:
Yeah, so if you think about any industrial application, you always need to do some kind of state embedding: data embedding, initial-conditions embedding, and so on. For example, if you do computational fluid dynamics, you start with some density profile of your fluid, and that needs to be somehow encoded in the initial state of your simulation. That's again a data encoding problem. If you have a machine learning problem, you also need to encode the data. If you have, for example, a Monte Carlo problem, which people use in finance, for example for derivatives pricing or financial risk estimation, you start by encoding a distribution into the quantum computer. Again, these distributions can have a multidimensional structure and complex form, and that's again a data encoding problem, but of a different kind. And if you don't solve this problem, there is no point in running any algorithm, because you are killed by noise.
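The Monte Carlo case mentioned here can be sketched in a few lines: discretize the target distribution over 2^n grid points and take square roots, so that the squared amplitudes of the would-be quantum state reproduce the probabilities. The grid range and the lognormal are illustrative choices, and actually preparing this state on noisy hardware is the hard part the conversation is about.

```python
import numpy as np

# Sketch of loading a probability distribution for a Monte Carlo use
# case: discretize it over 2**n grid points and take square roots, so
# the squared amplitudes of the target quantum state reproduce the
# probabilities.  Grid range and the lognormal are illustrative.
n_qubits = 5
bins = 2 ** n_qubits
grid = np.linspace(0.1, 4.0, bins)

# Standard lognormal density, renormalized over the finite grid.
pdf = np.exp(-np.log(grid) ** 2 / 2) / (grid * np.sqrt(2 * np.pi))
probs = pdf / pdf.sum()

amplitudes = np.sqrt(probs)  # measurement prob of basis state i = probs[i]
print(bins, float(np.linalg.norm(amplitudes)))  # 32 amplitudes, unit norm
```

Heavy-tailed or multidimensional distributions make this target state harder to reach with shallow circuits, which is the encoding bottleneck in the financial examples discussed later.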
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:10:50]:
And you can wait 12 to 20 years for fault-tolerant quantum computers which will be completely without errors. But that's probably not a good strategy if you can already run something interesting today.
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:11:07]:
I'm just learning quite a lot about quantum computing, and I do believe for me and a lot of non-technical people out there, that may take days, weeks or even months to settle and really realize what it all means. But let us go a little bit to Haiqu here. You released Rivet, an open-source toolkit for quantum workflow execution. Why did you decide to open-source this layer instead of keeping everything proprietary?
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:11:36]:
That's a great question. So we have multiple tools in our stack, and this is not the only tool that we have. We realized that there is some technological advantage that we have in, for example, reducing the effects of noise or optimizing quantum algorithms at a higher optimization level. And there are other components, like compilation, where we also needed to build some tools for ourselves; many of those still don't exist, we are still building a bunch of them for ourselves, and we found them very interesting and convenient to use. For example, Rivet allows you to split your larger algorithm into chunks and transpile or compile different chunks independently. You can apply different compilation algorithms depending on what the different parts of your algorithm are. That was very hard to do before Rivet, and now it's very easy to do with Rivet. But here is where it helps: our partner, who used this for a quantum machine learning application, discovered that you can use this partial compilation in order to train quantum machine learning neural networks.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:13:02]:
So you can add layers gradually and compile only part of your algorithm or neural network, instead of the whole thing every time you update the parameters. In that case you save up to 100 times on the classical compute you would otherwise spend just recompiling your circuit every time you update parameters in the neural network. So that's one small application, but there are plenty of other applications that we think people will benefit from. And for us it's not a crucial component of our stack; we have some components which actually give us a 10 to 100 times advantage over our competitors. Rivet is of course super useful, but it's something that we can provide to the community and get them excited. And soon there will be more open-source contributions from us.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:14:06]:
We are very excited actually to share some very interesting infrastructural components.
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:14:11]:
Can you maybe give us a little tease, a little hint on what that may be? Exclusive for our Startuprad.io audience.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:14:21]:
Yeah. So one of the advantages that we have is the ability to build very complex middleware stacks. When we started operating in the space, we realized it's not that easy to combine different tools on the middleware layer. A lot of these, whether ours, those available from our competitors, or open source, have their own interfaces and their own infrastructure in which they are embedded. So it's very hard to combine, for example, a compilation layer from one provider, an error mitigation layer from another provider, an error correction layer from a third provider, and so on. That's where we realized that we can build an infrastructure which allows us to basically create a kind of tissue into which we place those components and which connects them very naturally, such that we can build probably some of the most complex middleware stacks on the market, where each component is either state of the art or better than state of the art. And when we combine all of them together, we can reach performance beyond any other results in the industry. So that's something that we think would be very beneficial for the broader community.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:16:10]:
And we are working on open-sourcing part of this infrastructure such that you can combine our stack with the stacks of other companies or open-source tools. As a researcher, you will be able to study how, for example, some specific noise mitigation combines with some specific noise-tailoring technique. It's typically not that easy to do now, but you won't need to spend months or half a year doing that manually; you will have infrastructure which allows you to do that.
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:16:58]:
Talking about infrastructure here, my understanding is that some of the future releases coming from Haiqu will cost money. And for founders evaluating Haiqu, the question becomes whether they are buying infrastructure leverage or temporary optimization. How would you describe that distinction to a potential customer?
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:17:21]:
That's a good question. On one hand, we have some of the best-in-class components, and I would say that you're correct that these are components which allow you to get, for example, better error mitigation or better optimization of your algorithm, which you can apply immediately today. But as computers get better, noise will probably become less of a problem, so maybe you will not need to care that much about noise. Sooner or later these noise mitigation tools will probably become less relevant. However, the infrastructure will still be there, and there will be new components which need to be integrated.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:18:12]:
There will be new challenges of integrating error correction, the encoding of logical qubits, decoding mechanisms, and so on and so forth, which will need a special software layer to integrate with each other. So what's exciting about what we are building today: we can integrate different middleware components which allow us to run at scale on present quantum computers. Tomorrow the same infrastructure will allow us to run error correction and logical-qubit encoding, and the day after tomorrow it will allow us to run and orchestrate different quantum algorithms. It's the same infrastructure; it scales as the quantum hardware performance matures.
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:19:14]:
In our interview, and I'm not sure if it was this part or part one, you've suggested that the first three customers for quantum computing middleware are teams already struggling with failed pilots. What pattern do you see in companies that come to you after those early experiments?
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:19:36]:
Yeah, that's actually quite funny. We've seen that several times. The companies start their quantum program and they never run anything on the hardware.
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:19:56]:
That means they have very expensive, high maintenance quantum hardware in their laboratories.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:20:04]:
That would be good, but typically they don't. I think there are a few institutions in the world who actually have quantum hardware in their own environment; the majority has one or another partnership with a quantum hardware provider. However, it's quite curious to see that many of them run very few experiments on real hardware. And the reason is that they struggle when they try to run something on the hardware: they just get poor results, get kind of disappointed by the process, and just focus on algorithmic performance or algorithm design. On one hand that allows you to discover new algorithms, but at some point you still need to run those algorithms on the quantum hardware in order to build intuition for what runs and what doesn't.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:20:59]:
For example, when we worked with one large financial institution, first we discovered the ability to encode heavy-tailed financial distributions into quantum hardware with few operations. This then allowed us to run algorithms on top of the data that we load. But it was still hard to do; before us, people just did the same exercise on maybe three to six qubits, and we were able to scale this to dozens of qubits. The reason we were able to do so is that when we started to experiment, we faced some other challenges on the algorithmic level, and that actually inspired us to think differently. We discovered some algorithmic tricks which allowed us to reduce the depth of the algorithm itself, such that we could load the data and then run the algorithm at the largest possible scale on the real hardware. And before us, for example, the same company had actually worked with most of the major quantum hardware providers.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:22:17]:
And while hardware was available, they were not able to actually execute this specific…
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:22:26]:
And my assumption is it's not e-commerce shops currently investing in quantum computing. It will be those that require simulations of the highest complexity; drug discovery, molecules, stuff like this comes to mind.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:22:46]:
Yeah, so these are very non-trivial problems right now. And obviously not every company needs to invest in quantum computing today. Mostly these are companies who literally have large, heavy high-performance computing loads. For example automotive and aerospace; those companies obviously have a lot of such applications. On one hand you have a lot of those computational fluid dynamics workflows, and this can be hundreds of thousands of dollars, or even millions of dollars, spent per year.
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:23:26]:
I had roommates who were studying engineering, and you could really scare them with fluid dynamics.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:23:32]:
Yeah, it's a very non-trivial problem. It's actually beaten to death, in some sense, in classical computing; it's so well optimized that it's very hard to immediately beat those results. But as soon as quantum computers get good enough, and that will happen very quickly, the performance of quantum computational fluid dynamics algorithms will be quite impressive. Today, though, we are at the toy-example scale. The same happens with optimization problems; that's another potentially low-hanging fruit. We are working right now in the lower-qubit regime, where we have optimization problems with a so-called few degrees of freedom, and typically those are very easy for classical computers.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:24:21]:
But again, as soon as you reach a few thousand degrees of freedom, that already becomes very, very hard, and it will require several hundred to thousands of qubits. And again, that's not too far away if you look at the roadmaps of what hardware providers offer us. So yeah, I'm quite optimistic about where we're going. And again, chemistry problems and material design problems are probably the lowest-hanging fruits here.
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:24:52]:
You're very optimistic. But let's talk a little bit about the future: if hardware progress slows for the next couple of years, your thesis either becomes even more important or much harder to prove. How do you think about that scenario?
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:25:08]:
Hopefully hardware progress will not slow. We have several competing technologies on the roadmap; there are many different qubit modalities. I would say it's again very similar to what happened in classical computing, where we had multiple transistor technologies up until the mid-60s and 70s before we settled on the specific architecture that we use up until today. The same story kind of repeats in quantum computing: we have multiple qubit technologies, we still fight to scale the number of qubits and make them more stable, but there is consistent progress, and it's actually quite impressive. I remember when I was at university, actually building theoretical arguments about what would be the fastest time for flipping qubits, or whatever.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:26:08]:
And this looked very much like a theoretical exercise which would never even be considered in a real-world experiment on hardware. Then, in just a few years, I would say, we started seeing the first quantum computers operating with a number of qubits where you can already run some algorithms. So once you consider this perspective, it becomes clear that we are literally sitting on the exponential, and this exponential will soon lift off.
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:26:50]:
That's a very optimistic, forward-looking statement. And also looking forward, I've been looking a little bit at your roadmap, and it hints at something closer to a quantum operating system than a single tool. What would a complete quantum software stack actually look like?
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:27:11]:
Exactly. So that's not fully settled now, but you can already draw a lot of parallels with classical computing, and we don't need to reinvent the wheel here. We have different types of hardware technology, but the software stack is very similar. We need some kind of quantum assembly; that's already done. We need some kind of quantum intermediate representation for lower-level transpilation and compilation of our algorithms; that's something a number of companies are currently working on, and we are also involved in some of these initiatives. Then, as you move further and further up the stack, at some point you get to the algorithms.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:28:03]:
Like, someone wants to run, for example, a quantum Monte Carlo algorithm. Another person may have developed an algorithm for loading data, and then you may want to combine those algorithms somehow. For that you need something similar to an operating system: a system which orchestrates data and compute flows between different applications. And today you already see some early stages of that at the level of middleware. That's where we operate right now. But as we move up the stack, sooner or later we will not work with isolated quantum programs. Similarly to how we work with classical computers, we will run multiple programs in parallel, and those programs will communicate with each other.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:28:56]:
So eventually we will get to that future in quantum computing, and you will need infrastructure for that. The things that we are building today for this lower-level stack naturally project into that future, basically offering these capabilities of allowing different programs to communicate with each other.
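The orchestration Myksymenko describes — independent quantum programs whose outputs feed one another, mediated by a middleware layer the way an OS mediates between applications — looks a lot like classical workflow scheduling. A minimal sketch under stated assumptions (the function names, wiring format, and the two example "programs" are illustrative inventions, not Haiqu's API):

```python
# Toy middleware orchestrator: "programs" are callables wired into a DAG.
# The scheduler runs each program once its inputs are ready and passes
# results between programs, the way an OS routes data between applications.

def run_workflow(programs, wiring):
    """programs: name -> callable(*inputs); wiring: name -> list of input names."""
    done = {}
    pending = set(programs)
    while pending:
        ready = [n for n in pending if all(dep in done for dep in wiring[n])]
        if not ready:
            raise RuntimeError("cyclic or unsatisfiable wiring")
        for name in ready:
            inputs = [done[dep] for dep in wiring[name]]
            done[name] = programs[name](*inputs)
            pending.remove(name)
    return done

# Example from the conversation: a data-loading program feeds a
# (here, trivially faked) Monte Carlo estimator.
results = run_workflow(
    programs={
        "load_data": lambda: [0.1, 0.4, 0.5],
        "monte_carlo": lambda data: sum(data) / len(data),
    },
    wiring={"load_data": [], "monte_carlo": ["load_data"]},
)
print(results["monte_carlo"])
```

In a real quantum stack the payloads flowing along the edges would be compiled circuits and measurement results rather than Python lists, but the orchestration problem — dependency resolution and data hand-off between programs — is the same one classical operating systems solved.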
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:29:25]:
At this stage, the only remaining question is whether you're still structurally positioned to act on what you've just heard. Let's get into the very last few questions, Mykola. Deep tech founders often have to build conviction long before the validation appears. How do you personally manage that psychological tension?
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:29:55]:
Yeah, on one hand that's hard. On the other hand, I would say most deep tech founders have this very strong intuition. These are typically scientists, or people who have been deep in some technological area all their life. So it's not based just on blind vision; it's actually deeply rooted in science. The majority of our team, probably 90%, are scientists.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:30:27]:
They have PhDs or other advanced degrees from good universities, and we have a deep understanding of where everything is heading: where hardware providers are moving, what the roadmap is, what is the marketing roadmap and what is the realistic roadmap. We can also always look back at what happened in classical computing. I'm actually a huge fan of scientific and engineering history, and I like to draw a lot of parallels between what happened in the past and what lessons we can bring to the future. Also, even while working in industry for just six years before founding Haiqu, I witnessed the deep learning revolution and I witnessed what happened to blockchain, for example, and I noticed some similarities between these two technologies. And also, what's interesting: augmented reality and virtual reality, that's another kind of exponential technology.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:31:35]:
While working in these areas, I witnessed what worked and what didn't: for example, what made AI widely adopted, and why augmented reality is still not there. Why isn't everyone wearing augmented reality glasses, and why do we still have mobile phones around? Everyone predicted mobile phones would be gone by now. So from a lot of this you can draw parallels and think about your particular space and where it's heading. That's what gives me a lot of confidence about what we are doing, and the understanding that what we are doing is right. So far we have not needed to pivot much. We have had a quite well-defined trajectory, and we achieved very well-defined milestones on the way to our goal. I think that's what makes us very unique and, at the same time, confident in the future.
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:32:42]:
I would partly agree with you. I've studied economic history in particular, and I love the saying: history does not repeat itself, but it rhymes. You've spent years operating in areas where being directionally right can take a long time to prove. What mental models helped you navigate that uncertainty?
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:33:05]:
I would say I operate as a researcher; I always treat myself as a scientist. Fortunately, my scientific area is such that you find a lot of beauty and excitement in what you are doing, from the interesting theories and interesting results you start getting, even on a daily basis. At the same time, work in theoretical physics is unique because, as many theoretical physicists know, the results of your work are typically seen many, many years from now. I think that helps a lot in deep tech, or in the area we are in today. On one hand, you understand that results will probably not be immediate, and that's hard; maybe it would be worth just going back to the AI space, which is booming today. But on the other hand, we understand the roadmap. We understand that when we hit that milestone, and when quantum computing actually starts to be much more broadly adopted, it will be an entirely different game.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:34:44]:
And if we are one of the first companies in that age, at the moment when that happens, that can bring us to some absolutely unbelievable, I would say, results. So yeah, it's very interesting to understand where it might bring us, and when you see that roadmap quite clearly, that's actually quite exciting.
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:35:21]:
I usually close my interviews with two questions; I'll just combine them into one. You've been saying that if quantum computing really takes off, your company is on the verge of profiting from that. Are you open to talking to new investors who would like to join you on this journey, as well as to talented employees?
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:35:48]:
Sure. We are currently at a very early stage; we just closed the seed round, and obviously we will be venture-backed for a while. So we continue talking to investors. We might get some extensions of rounds, and we will have next rounds. This is a continuous process, and when you are a startup, you always need to be raising. That's the mode we operate in. My co-founder actually speaks to investors all the time; I also speak to investors, but probably not as frequently. But yeah, that's the mode we are working in, and we are certainly always looking for exciting and energetic talent. For that, we actually operate very much like a research group.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:36:38]:
We have many internship programs and we collaborate with many universities. We try to give young people the opportunity to join us early and grow with us, because the stuff we are doing is basically at the bleeding edge of some of these areas in middleware, which is very hard to find anywhere in academia today. That's how we recruit new talented people. But we also offer full-time positions from time to time, and these are regularly announced. So I would say both.
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:37:27]:
The only thing left for me to say: Mykola, thank you very much. Best of luck, and thank you very much for joining us from Western Ukraine to record this interview.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:37:38]:
Thank you, Joe. It was a pleasure talking to you.
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:37:41]:
Quantum computing will not be saved by optimism or funding cycles. It will survive only if execution becomes honest, repeatable and useful. Mykola Myksymenko is working at that boundary. Haiqu is available at haiqu.ai. This conversation continues wherever serious decisions about frontier technologies are made. Thank you. That's all, folks. Find more news, streams, events and interviews at www.startuprad.io.
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:38:23]:
Remember, sharing is caring.