Startup News April 2026: Why DACH Venture Capital Is Leaving SaaS
- Jörn Menninger
- 44 minutes ago
- 24 min read

What Is This About?
DACH venture capital is no longer rewarding software abstraction by default. Capital is moving toward startups tied to procurement, physical infrastructure, strategic autonomy, defense, space, industrial AI, and institutional finance.
Founders, investors, and operators must now evaluate whether a startup has access to real institutional budgets.
The common mistake is treating this as a funding cycle rather than a structural capital reallocation.
Key Claims
DACH venture capital is rotating from software toward physical infrastructure.
Munich’s funding lead reflects industrial and deep tech concentration.
German defense procurement is creating startup-scale pathways.
European space infrastructure is becoming venture-backed.
Frankfurt is emerging as a credible European tech IPO venue.
Answer Hub
Why is DACH venture capital shifting away from SaaS?
As of 2026, DACH venture capital is shifting away from generic SaaS because investors are prioritizing companies tied to procurement budgets, infrastructure demand, and strategic sectors such as defense, space, and industrial AI.
Why is Munich overtaking Berlin in startup funding?
Munich is gaining structural advantage because defense, robotics, industrial AI, and deep tech align more closely with its industrial base than with Berlin’s historical B2C and software strengths.
Why does procurement matter for startups?
Procurement matters because it identifies a real buyer with a budget. In 2026, DACH startups with government or enterprise procurement alignment are better positioned than companies relying on speculative software adoption.
Why is German defense tech investable now?
German defense tech is investable because expanded defense budgets and drone procurement frameworks give startups a clearer path from prototype to contract.
Why is European space tech attracting venture capital?
European space tech is attracting capital because sovereign launch, orbital logistics, and payload infrastructure are shifting from policy documents into venture-backed execution.
Why does Frankfurt matter for European tech IPOs?
Frankfurt matters because European technology companies are reassessing listing venues amid weak London liquidity and US market uncertainty.
Note: You can find all of our 2025 news coverage from our pillar here: https://www.startuprad.io/post/dach-startup-ecosystem-2025-the-ultimate-hub
DACH venture capital is rotating toward physical infrastructure
Answer
DACH venture capital is reallocating toward sectors where products touch the physical world and buyers already have budgets.
The April 2026 Startuprad.io news cycle identifies defense, space, sovereign technology, industrial AI, tokenized finance, and regulated infrastructure as the strongest capital signals.
This does not mean software is irrelevant. It means software must increasingly attach to infrastructure, operations, procurement, or institutional workflows.
Expert Context
The relevant distinction is not software versus hardware. The relevant distinction is budgeted demand versus speculative adoption.
Munich’s funding lead over Berlin reflects structural change
Answer
Munich’s advantage reflects the funding market’s shift toward industrial AI, robotics, defense, space, and deep tech.
Berlin remains important for fintech and software infrastructure. Munich is advantaged where industrial capability, engineering density, university pipelines, and defense-adjacent networks matter.
This makes Munich’s lead more than a temporary funding anomaly.
Expert Context
Capital follows category relevance. If the dominant categories change, the dominant geography can change with them.
Germany is building a procurement pathway for defense startups
Answer
Germany’s drone procurement frameworks show how startups can move from prototype to major contract exposure.
The supplier base combines legacy defense primes such as Rheinmetall with startups such as Helsing and Stark Defense. This mix matters because it gives Germany both industrial capacity and venture-backed speed.
Expert Context
Defense tech is not simply a funding theme. It is a procurement architecture.
European space is moving from policy ambition to venture-backed execution
Answer
Isar Aerospace and Pave Space show that European space infrastructure is becoming investable beyond policy rhetoric.
Isar Aerospace represents sovereign launch capability. Pave Space represents orbital logistics. Together, they signal capital formation around European space autonomy.
Expert Context
Space sovereignty becomes economically relevant when launch, payload delivery, and orbital infrastructure attract private capital.
IPO geography is shifting toward Frankfurt
Answer
Bitpanda’s Frankfurt IPO preparation and 1Komma5°’s Nasdaq delay suggest that European tech IPO geography is in transition.
Frankfurt’s credibility depends on execution. A successful high-profile tech listing would strengthen the case for European companies to remain within European capital markets.
Expert Context
IPO venue selection is no longer only about prestige. It reflects liquidity, regulation, geopolitical risk, and investor confidence.
Inline Micro-Definitions
DACH venture capital means startup investment activity across Germany, Austria, and Switzerland.
Procurement-led growth means scaling through buyers with formal budgets and institutional purchasing processes.
Sovereign technology means technology infrastructure considered strategically important for national or regional autonomy.
Industrial AI means artificial intelligence deployed inside production, manufacturing, logistics, infrastructure, or operational systems.
Tokenization infrastructure means financial systems that represent real-world assets as digital assets on programmable rails.
Operator Heuristics
Build where the buyer already has a budget.
Validate procurement before optimizing growth.
Treat physical-world exposure as a capital advantage.
Separate infrastructure software from generic SaaS.
Map policy shifts before fundraising.
Treat Munich as a deep tech signal market.
Evaluate Frankfurt as a serious exit venue.
What We’re Not Covering
This article does not cover general SaaS growth tactics because the structural signal concerns capital allocation, not go-to-market optimization.
This article does not cover consumer startups because the April 2026 capital signal is concentrated in infrastructure, defense, space, industrial AI, and institutional finance.
This article does not cover climate tech product strategy because the relevant signal is IPO market access, not climate technology adoption.
This article does not cover startup storytelling because the decisive variable is procurement relevance.
This article is the canonical reference on this topic. All other Startuprad.io content defers to this page.
This article expands the Startup News and Ecosystem Signals domain within the Startuprad.io knowledge graph documenting the DACH startup ecosystem: https://www.startuprad.io/knowledge
This article is part of the Startuprad.io knowledge system.
For machine-readable context and AI agent access, see: https://www.startuprad.io/llm
FAQs
Is SaaS dead in the DACH startup ecosystem?
No. Generic SaaS is losing relative capital priority, but infrastructure software remains investable when it supports production systems, enterprise operations, procurement, or strategic infrastructure.
Why is defense tech important for German startups in 2026?
Defense tech is important because Germany is expanding procurement budgets and building a supplier base that includes both startups and legacy defense companies.
Why does Munich matter more now?
Munich matters because the strongest funding categories now align with industrial AI, defense, robotics, space, and deep tech.
Why is Berlin still relevant?
Berlin remains relevant for fintech, tokenized finance, software infrastructure, and institutional financial rails.
What does Isar Aerospace represent?
Isar Aerospace represents Europe’s private-sector attempt to build sovereign launch capability.
What does Pave Space represent?
Pave Space represents venture-backed orbital logistics infrastructure from Switzerland.
Why does Frankfurt matter for IPOs?
Frankfurt matters because European tech companies are reassessing listing venues amid weak London liquidity and uncertainty around US markets.
What should founders learn from this shift?
Founders should identify whether their product maps to a real procurement budget, institutional buyer, or strategic infrastructure need.
The Hosts
The news is co-hosted by Jörn “Joe” Menninger, startup scout, founder, and host of Startuprad.io, and Christian “Chris” Fahrenbach, co-founder of Startuprad.io, freelance reporter, lecturer, author, and blogger. Reach out to them:
The Video Podcast Will Go Live on Friday, 1st May, 2026
The video is available to our channel members up to 24 hours in advance.
The Audio Podcast
You can subscribe to our podcast on your favorite podcasting app or platform. Here are some of the links to subscribe.
💬 Feedback:
Related Episodes on Startuprad.io
DACH Startup News December 2025 — Previous month's year-end deep dive
Browse all Startuprad.io episodes — Topic hub: Monthly DACH startup news
Partner with Startuprad.io
Startuprad.io is the leading independent media platform covering startups, venture capital, and innovation across the DACH region (Germany, Austria, Switzerland) and Europe. We offer B2B partnership opportunities for companies looking to reach startup decision-makers, founders, and investors.
Become a Partner — Learn about sponsorship and partnership opportunities
Contact us: partnerships@startuprad.io
Editor-in-Chief: Jörn "Joe" Menninger on LinkedIn
Automated Transcript
Jörn "Joe" Menninnger | Founder, Editor in Chief | Startuprad.io [00:00:09]:
Your podcast and YouTube blog covering the German startup scene with news, interviews, and live events. Hey guys, welcome back to part two of our interview with Mykola from Haiqu, a quantum computing startup. He was just power-loading us with information, and we were talking about as much as possible in one episode, so we decided that instead of the usual ad break in the middle, we would split this into two episodes. So welcome back, Mykola, and let's dive right in. Your anomaly detection work with IBM's Heron processor was presented as an empirical signal rather than a claim of quantum advantage. What exactly did that experiment demonstrate?
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:01:04]:
Yeah, so that's actually one of the recent exciting pieces of work from our team. It's not yet published in a proper peer-reviewed journal; however, there is a lot of material that will go into a peer-reviewed publication soon. But the great result that we got is the following. One of the bottlenecks in quantum computers is data loading. If you want, for example, to apply quantum computers to machine learning applications, the first problem you will discover is not that it's hard to train, or that you need to build some neural network architecture or machine learning algorithm; you will basically have trouble encoding your data into the quantum computer's memory, the initial state. And typically when you do so, you need so deep, or so many, operations in your algorithm that it very quickly accumulates noise. Because quantum computers are currently noisy, they have errors, and because of those errors your data becomes corrupted, so it's not operable.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:02:22]:
So you cannot really run any machine learning on top of this. So what we did: we invented a new type of algorithm which takes advantage of the limited so-called Hilbert space, or state space, which can be generated by shallow circuits, few operations, on the quantum computer before noise kicks in. We understand which states this quantum computer can generate, and we try to take advantage of that in order to encode data directly in those states which are not corrupted by noise. So we created that algorithm and decided to make a demo application of it. The first thing we did was take an anomaly detection data set, a biology data set, where you have time series which look very, very noisy and unstructured. Because of that, it's very hard to treat with classical machine learning algorithms; they hardly recognize differences in those time series. And if you now apply a quantum algorithm to this data, what quantum algorithms actually do, in some sense, is lift the dimensionality of this data into a much higher-dimensional space.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:03:54]:
And in that space you actually can draw a so-called decision boundary between different classes of data. It's very similar to the classical machine learning kernel trick, where you do the same; this is a quantum kernel trick in some sense. And we noticed that if you apply it to this specific complex data, which is hard for a classical algorithm, it turns out to be relatively easy for quantum. We see this first (we don't call it advantage, let's say improvement) in the ideal simulation. Then we ran the same algorithm on a noisy quantum computer at the 100-qubit scale, and we saw that this improvement persists on the noisy quantum computer as well. That's probably the first evidence of truly quantum data encoding and a truly quantum machine learning algorithm applied in tandem to improve a machine learning application.
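Editor's aside: the classical kernel trick Mykola references can be sketched in a few lines. This is a toy classical analogue under illustrative assumptions, not Haiqu's quantum encoding: two classes that no single threshold separates in one dimension become separable after lifting into a higher-dimensional feature space.

```python
import numpy as np

# Two 1D classes that no single threshold separates:
# class A sits near 0, class B sits on both sides of it.
a = np.array([-0.5, -0.2, 0.1, 0.4])   # class A
b = np.array([-2.0, -1.5, 1.6, 2.1])   # class B

def lift(x):
    """Lift 1D points into the 2D feature space (x, x**2)."""
    return np.stack([x, x**2], axis=1)

A, B = lift(a), lift(b)

# In the lifted space the second coordinate (x**2) alone
# separates the classes with a simple horizontal boundary.
boundary = 1.0
print(all(p[1] < boundary for p in A))   # True: class A below
print(all(p[1] > boundary for p in B))   # True: class B above
```

A quantum feature map plays the same role, except the lifted space is the exponentially large state space of the qubits, which is what makes the encoding step, not the model, the bottleneck.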
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:05:09]:
And what's important as well: in this particular blog that we published, we limited ourselves to a number of qubits and a depth (number of operations) of the algorithm which are still simulable on a classical computer. We did that on purpose, in order to have a benchmark to compare ourselves with. Otherwise, if you just run it on the quantum computer, in real life, in hardware, it will be hard to justify that this is not an effect of some noise or some imperfection of the hardware rather than an actual quantum effect. So we first proved: here is an ideal simulation, here we got an improvement in performance, and here is the hardware result where we still see persistence of that improvement. In practice we can still scale it beyond the limit of what is possible with classical simulation. That's where it's a little bit harder to control, but we can still actually load the data at that scale.
Jörn "Joe" Menninnger | Founder, Editor in Chief | Startuprad.io [00:06:25]:
You've argued in the past that the hardest bottleneck in quantum machine learning is scalable data encoding rather than the model architecture. Why is embedding classical data into quantum computing systems such a difficult problem?
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:06:49]:
Yeah, that's another case of a situation where the machines are here, but we still need to discover the algorithms, the best algorithms, to encode data or manipulate that data on the quantum computers. And we think we discovered one of the best algorithms in its class, because we literally take advantage of the part of the state space which is created on these small-scale early machines. But maybe there could be alternative approaches. So why is it hard? Because you are limited to just hundreds of qubits, and some real-world data can require thousands or even millions of features, for example in images or satellite data. Then you basically need to find a way to represent that on hundreds of qubits. In one dimension you have hundreds of qubits, but in another dimension you can kind of re-encode your data by adding more and more operations. Each operation will be a small rotation, a small encoding step for more and more features, but that enlarges the number of operations you need to use and the depth of your algorithm.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:08:15]:
And eventually your noise kicks in, and noise starts to beat you for every operation that you add to your algorithm. So that's why there is this huge trade-off: on one hand you can still encode more features than the number of qubits, but then you pay for it in the number of operations. So at some point you just have very limited room in which you can do something. And there are so-called amplitude embedding algorithms, which are, I would say, more classical quantum computing algorithms that take advantage of the ideal scenario where you don't have any errors. But in practice you have quantum computers which actually have errors, and you pay for those operations, so you cannot really apply those.
Jörn "Joe" Menninnger | Founder, Editor in Chief | Startuprad.io [00:09:19]:
You also said embeddings may ultimately determine whether quantum machine learning scales at all. What role do they play in a broader quantum computing stack?
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:09:34]:
Yeah, so if you think about any industrial application, you always need to do some kind of state embedding: data embedding, initial conditions embedding, and so on. For example, if you do computational fluid dynamics, you start with some density profile of your fluid, and that needs to be somehow encoded in the initial state of your simulation. That's again a data encoding problem. If you have a machine learning problem, you also need to encode the data. If you have, for example, a Monte Carlo problem, which people use in finance for derivatives pricing or financial risk estimation, you start by encoding a distribution into the quantum computer. Again, these distributions can have multidimensional structure and complex form, and that's again a data encoding problem, but of a different kind. And if you don't solve this problem, there is no point in running any algorithm, because you are killed by noise.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:10:50]:
And you can wait 12 to 20 years for fault-tolerant quantum computers which will be completely without errors. But that's probably not a good strategy if you can already run something interesting today.
Jörn "Joe" Menninnger | Founder, Editor in Chief | Startuprad.io [00:11:07]:
I'm just learning quite a lot about quantum computing, and I do believe for me and a lot of non-technical people out there, that may take days, weeks, or even months to settle and really realize what that all means. But let us go a little bit to Haiqu here. You released Rivet, an open source toolkit for quantum workflow execution. Why did you decide to open source this layer instead of keeping everything proprietary?
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:11:36]:
That's a great question. So we have multiple tools in our stack, and this is not the only tool that we have. We realized that there is some technological advantage that we have in, for example, reducing the effects of noise or optimizing quantum algorithms at a higher optimization level. And there are other components, like compilation, where we also needed to build some tools for ourselves; many of those still don't exist, we are still building a bunch of them for ourselves, and we found them very interesting and convenient to use. For example, Rivet allows you to split your larger algorithm into chunks and transpile or compile different chunks independently. And you can apply different compilation algorithms depending on what the different parts of your algorithm are. That was very hard to do before Rivet, and now it's very easy to do with Rivet. Now, where does it help? Our partner, who used this for a quantum machine learning application, discovered that you can use this partial compilation in order to train quantum machine learning neural networks.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:13:02]:
So you can add layers gradually and compile only part of your algorithm or neural network, instead of the whole thing every time you update the parameters. In that case you save up to 100 times on the classical compute which you would otherwise spend just recompiling your circuit every time you update the parameters of the neural network. That's one small application, but there are plenty of other applications that we think people will benefit from. And for us it's not a crucial component of our stack; we have some components which actually give us a 10 to 100 times advantage over our competitors. Rivet is of course super useful, but it's something that we can provide to the community and get them excited. And soon there will be more open source contributions from us.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:14:06]:
We are very excited actually to share some very interesting infrastructural components.
Jörn "Joe" Menninnger | Founder, Editor in Chief | Startuprad.io [00:14:11]:
Can you maybe give us a little tease, a little hint on what that may be? Exclusive for our Startuprad.io audience.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:14:21]:
Yeah. So one of the advantages that we have is the ability to build very complex middleware stacks. When we started operating in this space, we realized it's not that easy to combine different tools on the middleware layer. A lot of these tools, whether ours, those available from our competitors, or open source, have their own interfaces and their own infrastructure in which they are embedded. So it's very hard to combine, for example, a compilation layer from one provider, an error mitigation layer from another provider, an error correction layer from a third provider, and so on. That's where we realized that we can build an infrastructure which allows us to basically create a kind of tissue into which we place those components and which connects them very naturally, such that we can build probably some of the most complex middleware stacks on the market, where each component is either state of the art or better than state of the art. And when we combine all of them together, we can reach performance beyond any other results in the industry. So that's something that we think would be very beneficial for the broader community.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:16:10]:
And we are working on open sourcing part of this infrastructure, such that you can combine our stack with the stacks of other companies or open source tools. And as a researcher, you will be able to study how, for example, some specific noise mitigation combines with some specific noise tailoring technique. That's typically not easy to do now; you won't need to spend months or half a year doing it manually, you will have infrastructure which will allow you to do that.
Jörn "Joe" Menninnger | Founder, Editor in Chief | Startuprad.io [00:16:58]:
Talking about infrastructure here, my understanding is some of the future releases coming from Haiqu will cost money. And for founders evaluating Haiqu, the question becomes whether they are buying infrastructure leverage or temporary optimization. How would you describe that distinction to a potential customer?
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:17:21]:
That's a good question. So on one hand we have some of the best-in-class components, and I would say that you're correct that these are components which allow you to get, for example, better error mitigation or better optimization of your algorithm, which you can apply immediately today. But as computers get better, noise will probably become less of a problem, so maybe you will not need to care that much about noise. Sooner or later these noise mitigation tools will probably become less relevant. However, the infrastructure will still be there. So there will be new components which need to be integrated.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:18:12]:
There will be new challenges of integrating error correction, encoding of logical qubits, decoding mechanisms, and so on, which will need a special software layer to integrate with each other. So what's exciting about what we are building today: we can integrate different middleware components which allow us to run at scale on present quantum computers. Tomorrow, the same infrastructure will allow us to run error correction and logical qubit encoding, and the day after tomorrow it will allow us to run and orchestrate different quantum algorithms. It's the same infrastructure; it scales as we mature along with quantum hardware performance.
Jörn "Joe" Menninnger | Founder, Editor in Chief | Startuprad.io [00:19:14]:
In our interview, I'm not sure if it was this part or part one. You've suggested that the first three customers for quantum computing middleware are teams already struggling with failed pilots. What pattern do you see in companies that come to you after those early experiments?
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:19:36]:
Yeah, that's actually quite funny. We've seen that several times. The companies start their quantum program and they never run anything on the hardware.
Jörn "Joe" Menninnger | Founder, Editor in Chief | Startuprad.io [00:19:56]:
That means they have very expensive, high maintenance quantum hardware in their laboratories.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:20:04]:
That's good when they have it, but typically they don't. I think there are few institutions in the world who actually have quantum hardware in their own environment; the majority have one or another partnership with a quantum hardware provider. However, it's quite curious to see that many of them run very few experiments on real hardware. The reason is that they struggle when they try to run something on the hardware: they just get poor results, get disappointed by the process, and focus instead on algorithmic performance or algorithm design. On one hand that allows you to discover new algorithms, but at some point you still need to run those algorithms on the quantum hardware in order to build intuition about what runs and what doesn't.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:20:59]:
For example, when we worked with one large financial institution, first we discovered the ability to encode heavy-tailed financial distributions into quantum hardware with few operations. This then allowed us to run algorithms on top of the data that we load. But it was still hard to do; before us, people just did the same exercise on maybe three to six qubits, and we were able to scale this to dozens of qubits. The reason we were able to do so is that when we started to experiment, we faced some other challenges on the algorithmic level, and that actually inspired us to think differently. We discovered some algorithmic tricks which allowed us to reduce the depth of the algorithm itself, such that we could load the data and then run the algorithm at the largest possible scale on real hardware. Before us, for example, the same company actually worked with most of the major quantum hardware providers.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:22:17]:
And while hardware was available, they were not able to particularly execute this specific
Jörn "Joe" Menninnger | Founder, Editor in Chief | Startuprad.io [00:22:26]:
And my assumption is it's not e-commerce shops currently investing in quantum computing. It will be those that require simulations of the highest complexity: drug discovery, molecules, stuff like this comes to mind.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:22:46]:
Yeah, so these are very non-trivial problems right now. And obviously not every company needs to invest in quantum computing today. Mostly these are companies who literally have large, heavy high-performance computing loads. For example, automotive and aerospace companies obviously have a lot of such applications. They have a lot of computational fluid dynamics workflows, and this can be hundreds of thousands of dollars, or even millions of dollars, spent per year.
Jörn "Joe" Menninnger | Founder, Editor in Chief | Startuprad.io [00:23:26]:
I had roommates who were studying engineering, and you could really scare them with fluid dynamics.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:23:32]:
Yeah, it's a very non-trivial problem. It's actually been beaten to death, in some sense, in classical computing; it's so well optimized that it's very hard to beat those results immediately. But as soon as quantum computers get good enough, that will happen very quickly, and the performance of quantum computational fluid dynamics algorithms will be quite impressive. Today, though, we are at the toy-example scale. The same happens with optimization problems; that's another potentially low-hanging fruit. We are working right now in the lower-qubit regime, where we have optimization problems with so-called few degrees of freedom, and typically those are very easy for classical computers.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:24:21]:
But again, as soon as you reach a few thousand degrees of freedom, that already becomes very, very hard, and it will require several hundred to thousands of qubits. Again, that's not too far away if you look at the roadmaps that hardware providers offer us. So yeah, I'm quite optimistic about where we're going. And again, chemistry problems and material design problems are probably the lowest-hanging fruits here.
Jörn "Joe" Menninnger | Founder, Editor in Chief | Startuprad.io [00:24:52]:
You're very optimistic. But let's talk a little bit about the future: if hardware progress slows for the next couple of years, your thesis either becomes even more important or much harder to prove. How do you think about that scenario?
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:25:08]:
Hopefully hardware progress will not slow. We have several competing technologies on the roadmap; there are many different qubit modalities. I would say it's very similar to what happened in classical computing, where we had multiple transistor technologies up until the mid-60s and 70s, before we settled on the specific architecture that we are using up until today. The same story kind of repeats in quantum computing: we have multiple qubit technologies, we are still fighting to scale the number of qubits and make them more stable, but there is consistent progress, and it's actually quite impressive. I remember when I was in university, building theoretical arguments about what the fastest time of flipping qubits would be, or whatever.
Mykola Myksymenko | Co-Founder & CTO | Haiqu [00:26:08]:
And this looked very much like a theoretical exercise which would never even be considered in a real-world experiment on some hardware. And then, in just a few years I would say, we started seeing the first quantum computers operating with a number of qubits where you can already run some algorithms. So once you consider this perspective, everything becomes clear: we are literally sitting on the exponent, and this exponent will actually soon lift off.
Jörn "Joe" Menninnger | Founder, Editor in Chief | Startuprad.io [00:26:50]:
That's a very optimistic, forward-looking statement. And also looking forward, I've been looking a little bit at your roadmap, and this hints at something closer to a quantum operating system than a single tool. What would a complete quantum software stack actually look like?
Mykola Maksymenko | Co-Founder & CTO | Haiqu [00:27:11]:
Exactly. That's not fully settled yet, but you can already draw a lot of parallels with classical computing, and we don't need to reinvent the wheel here. We have different types of hardware technology, but the software stack is very similar. We need some kind of quantum assembly; that's already done. We need some kind of quantum intermediate representation for lower-level transpilation and compilation of our algorithms; that's something a number of companies are currently working on, and we are also involved in some of these initiatives. Then, as you move up and up the stack, at some point you get to the algorithms.
Mykola Maksymenko | Co-Founder & CTO | Haiqu [00:28:03]:
For example, someone wants to run a quantum Monte Carlo algorithm. Another person may have developed an algorithm for loading data, and you want to combine those algorithms somehow. For that you need something similar to an operating system: a system that orchestrates data and compute flows between different applications. Today you already see early stages of that at the level of middleware; that's where we operate right now. But as we move up the stack, sooner or later we will not work with isolated quantum programs. Similarly to how we work with classical computers, we will run multiple programs in parallel, and those programs will communicate with each other.
Mykola Maksymenko | Co-Founder & CTO | Haiqu [00:28:56]:
So eventually we will get to that future in quantum computing, and you will need infrastructure for it. The things we are building today for the lower levels of the stack naturally project into that future, offering exactly these capabilities of allowing different programs to communicate with each other.
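As a rough illustration of the layering described above (a "quantum assembly", a transpilation step down to a hardware basis, and a middleware layer that composes independent programs), here is a minimal toy sketch in Python. It is not a real quantum SDK and not Haiqu's implementation: the gate names, the basis set, and the composition rule are illustrative assumptions only.

```python
# Toy sketch of a layered quantum software stack. NOT a real quantum SDK:
# gate names, basis set, and composition rule are illustrative assumptions.

# Layer 1, "quantum assembly": a program is an ordered list of
# (gate_name, qubit_indices) tuples.
loader = [("h", (0,)), ("cx", (0, 1))]   # hypothetical data-loading program
kernel = [("h", (0,))]                   # hypothetical Monte Carlo kernel

def transpile(program, basis=frozenset({"rz", "sx", "cx"})):
    """Layer 2: lower high-level gates to a hardware basis set.
    Only 'h' has a decomposition here (H ~ RZ * SX * RZ up to phase)."""
    lowered = []
    for gate, qubits in program:
        if gate == "h":
            lowered += [("rz", qubits), ("sx", qubits), ("rz", qubits)]
        elif gate in basis:
            lowered.append((gate, qubits))
        else:
            raise ValueError(f"no decomposition for gate {gate!r}")
    return lowered

def compose(*programs):
    """Layer 3, the 'operating system' / middleware role: orchestrate
    several independent programs into one schedule, offsetting qubit
    indices so the programs do not collide."""
    merged, offset = [], 0
    for prog in programs:
        used = {q for _, qubits in prog for q in qubits}
        merged += [(g, tuple(q + offset for q in qubits)) for g, qubits in prog]
        offset += (max(used) + 1) if used else 0
    return merged

stack_output = compose(transpile(loader), transpile(kernel))
# The kernel's qubit 0 is remapped to qubit 2, after the loader's two
# qubits, and every gate is now expressed in the hardware basis.
```

In real stacks, these roles are played by, for example, OpenQASM at the assembly level and QIR (Quantum Intermediate Representation) at the compilation level, with vendor middleware handling orchestration.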
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:29:25]:
At this stage, the only remaining question is whether you're still structurally positioned to act on what you've just heard. Guys out there, let's get into the very last few questions. Mykola, deep tech founders often have to build conviction long before the validation appears. How do you personally manage that psychological tension?
Mykola Maksymenko | Co-Founder & CTO | Haiqu [00:29:55]:
Yeah. On one hand, that's hard. On the other hand, I would say most deep tech founders have this very strong intuition. These are typically scientists, or people who have been deep in some technological area all their lives. So it's not based just on blind vision; it's deeply rooted in science. The majority of our team, probably 90%, are scientists.
Mykola Maksymenko | Co-Founder & CTO | Haiqu [00:30:27]:
They have PhDs or other advanced degrees from good universities, and we have a deep understanding of where everything is heading: where hardware providers are moving, what the roadmap is, what is the marketing roadmap versus the realistic roadmap. We can also always look back at what happened in classical computing. I'm actually a huge fan of scientific and engineering history; I like to draw a lot of parallels between what happened in the past and what lessons we can bring into the future. Also, even while working in industry for just six years before founding Haiqu, I witnessed the deep learning revolution, and I witnessed what happened to blockchain, for example, and I noticed some similarities between those two technologies. What's also interesting is augmented reality and virtual reality; that's another kind of exponential technology.
Mykola Maksymenko | Co-Founder & CTO | Haiqu [00:31:35]:
While working in these areas, I witnessed what worked and what didn't: for example, what made AI widely adopted, and why augmented reality is still not there. Why isn't everyone wearing augmented reality glasses while we still have mobile phones around? Everyone predicted there would be no mobile phones by now. So you can draw a lot of these parallels and think about your particular space and where it's heading. That, I would say, is what gives me a lot of confidence that what we are doing is right. So far we have not needed to pivot much; we had a quite well-defined trajectory, and we achieved very well-defined milestones on the way to our goal. I think that's what makes us very unique and, at the same time, confident in the future.
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:32:42]:
I would partly agree with you. I've learned, especially from economic history, to love the saying: history does not repeat itself, but it rhymes. You've spent years operating in areas where being directionally right can take a long time to prove. What mental models helped you navigate that uncertainty?
Mykola Maksymenko | Co-Founder & CTO | Haiqu [00:33:05]:
I would say I operate as a researcher; I always treat myself as a scientist. Fortunately, my scientific area is one where you find a lot of beauty and excitement in what you are doing, in the interesting theories and interesting results that you start getting, even on a daily basis. At the same time, I think work in theoretical physics is unique because, as many theoretical physicists know, the results of your work are typically seen many, many years from now. I think that helps a lot in deep tech, or in the area we are in today. On one hand, you understand that the results will probably not be immediate, and that's hard; maybe it would be easier to just go back to the AI space, which is booming today. But on the other hand, we understand the roadmap. We understand that when we hit that milestone, and when quantum computing actually starts to be much more broadly adopted, it will be an entirely different game.
Mykola Maksymenko | Co-Founder & CTO | Haiqu [00:34:44]:
And if we are one of the first companies in that age, at the moment that happens, it can bring us to some absolutely unbelievable results, I would say. So it's very interesting to understand where it might bring us, and when you see that roadmap quite clearly, that's actually quite exciting.
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:35:21]:
I usually close my interviews with two questions; I'll combine them into one. You've been saying that if quantum computing really takes off, your company is on the verge of profiting from it. Are you open to talking to new investors who would like to join you on this journey, as well as to talented employees? Sure.
Mykola Maksymenko | Co-Founder & CTO | Haiqu [00:35:48]:
We are currently at a very early stage; we just closed our seed round, and obviously we will be venture-backed for a while. So we continue talking to investors. We might do extensions of rounds, and we will have next rounds. This is a continuous process, and when you are a startup, you always need to be raising. That's the mode we operate in. My co-founder actually speaks to investors all the time; I also speak to investors, though probably not as frequently. And we are certainly always looking for exciting and energetic talent. For that, we actually operate very much like a research group.
Mykola Maksymenko | Co-Founder & CTO | Haiqu [00:36:38]:
We have many internship programs, and we collaborate with many universities. We try to give young people the opportunity to join us early and grow with us, because the things we are doing are at the bleeding edge of areas in middleware that are very hard to find anywhere in academia today. That's how we recruit new talented people. But we also offer full-time positions from time to time, and these are regularly announced. So I would say both.
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:37:27]:
The only thing left for me to say: Mykola, thank you very much. Best of luck, and thank you very much for joining us from Western Ukraine to record this interview.
Mykola Maksymenko | Co-Founder & CTO | Haiqu [00:37:38]:
Thank you, Joe. It was a pleasure talking to you.
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:37:41]:
Quantum computing will not be saved by optimism or funding cycles. It will survive only if execution becomes honest, repeatable, and useful. Mykola Maksymenko is working at that boundary. Haiqu is available at haiqu.ai. This conversation continues wherever serious decisions about frontier technologies are made. Thank you. That's all, folks. Find more news streams, events, and interviews at www.startuprad.io.
Jörn "Joe" Menninger | Founder, Editor in Chief | Startuprad.io [00:38:23]:
Remember: sharing is caring.