Fluence Co-Founder on Why Cloudless Computing Is the Future

Decentralized computing has surged to the forefront as industries demand greater flexibility and autonomy beyond the reach of traditional cloud giants. Fluence, co-founded by Evgeny Ponomarev, is boldly challenging the dominance of AWS, Azure, and Google Cloud with its concept of “cloudless computing”—aggregating global compute power through an open, decentralized network.

BeInCrypto interviewed Ponomarev to explore the rationale behind this groundbreaking approach, discussing hurdles faced, industry reactions, tokenomics, and the future of decentralized infrastructure. Ponomarev shared strategies Fluence employs to deliver cost savings, strengthen community governance, and integrate with Web3 protocols.

Defining Cloudless Computing and Market Gaps

At Fluence, we work in decentralized computing. We’ve built what we call a “cloudless” platform—a term we use to describe our alternative to traditional cloud providers like AWS, Azure, or Google Cloud. Essentially, it’s a decentralized physical infrastructure network (DePIN) that offers developers and companies access to compute resources without relying on centralized cloud services.

Think of it as the Uber or Airbnb of the cloud. Instead of relying on a single provider, our platform aggregates compute resources from a wide range of independent sources. It’s an open-source, permissionless protocol that enables developers to tap into these resources.

Industries, Use Cases, and Web3 Focus

Right now, we primarily target the Web3 market, which is a key focus of this conference. In the Web3 space, a core use case involves running nodes. Whether it’s layer-one chains, layer-two networks, rollups, or other blockchain solutions, they all rely on nodes—essentially instances of databases.

People run these nodes in the cloud, on bare metal, or even on personal computers at times. What we offer is a reliable alternative to traditional cloud infrastructure for running these workloads. Specifically, node operators are a critical market segment for us within the Web3 ecosystem.

But essentially, you can use the platform to run any kind of workload you would run on a traditional cloud—backends, databases, game servers, web apps, and more.

Fluence’s Differentiation in the DePIN Space

As I said, we are focusing purely on computing. In DePIN, you can see a wide variety of people and projects doing different things. But the common model is the same: crowdsourcing resources from multiple providers, whether consumers with end-user devices or more professional companies, then aggregating those resources, packaging them into products, and finding customers for them.

For us, the goal is to provide compute resources. But when you provide compute, there’s also a storage component, along with other pieces that get packaged into different cloud services. Our primary direction, however, is compute infrastructure, because we see a huge and growing demand for compute resources, especially with the rise of AI.

So we start by providing compute for node operators, but then we’ll grow much wider and also cover AI use cases.

Challenges in Building Decentralized Orchestration

The challenge in building decentralized orchestration became clear through our early experiments with various solutions, as we tried to determine the right approach to cloud infrastructure.

At some point, we were trying to build peer-to-peer orchestration of different computations happening on different hardware, nodes, or user devices. But then we realized that lower-level solutions, like basic virtualization of resources, were missing. So we decided to deliver those first, and that’s what we have now.

We’ll get back to orchestration after that. Essentially, orchestration is about reducing vendor lock-in. Currently, large cloud providers dominate the market, and many companies rely entirely on a single provider to run their applications. This creates what’s known as “platform risk” or “vendor risk.” If the provider decides to de-platform you, ban you, or change pricing, your entire business could be at risk.

So whenever you use a decentralized platform instead of a centralized cloud, your business continuity and sustainability are better protected.

FLT Tokenomics and Network Incentives

The FLT token plays a crucial role in securing the compute power and hardware within the network. Basically, any new hardware added on the supply side of the network must be staked on by token holders, and there’s a cryptoeconomic incentive to prove that this hardware is online, available, and delivers a certain level of performance.

Staking gives providers skin in the game: they risk losing their stake if they fail to meet their commitments on hardware availability and performance. On the flip side, providers who successfully deliver on their commitments earn staking rewards.
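To make those mechanics concrete, here is a minimal sketch of the stake-and-slash loop. The parameter names and numbers are illustrative assumptions, not Fluence’s actual protocol values.

```python
# Minimal sketch of a stake-backed availability incentive.
# All names and parameters are illustrative, not Fluence's actual protocol.

from dataclasses import dataclass

@dataclass
class ProviderStake:
    staked_flt: float        # FLT staked against this hardware
    uptime_target: float     # committed availability, e.g. 0.99
    reward_rate: float       # reward per epoch as a fraction of stake
    slash_rate: float        # penalty per epoch as a fraction of stake

    def settle_epoch(self, measured_uptime: float) -> float:
        """Return the FLT delta for one epoch: a reward if the
        availability commitment was met, a slash otherwise."""
        if measured_uptime >= self.uptime_target:
            return self.staked_flt * self.reward_rate
        return -self.staked_flt * self.slash_rate

stake = ProviderStake(staked_flt=10_000, uptime_target=0.99,
                      reward_rate=0.001, slash_rate=0.01)
print(stake.settle_epoch(0.995))  #  10.0 FLT earned
print(stake.settle_epoch(0.97))   # -100.0 FLT slashed
```

Note the asymmetry in this sketch: rewards accrue gradually, while a missed commitment costs a larger slice of the stake at once, which is what makes the commitment credible.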

We’re also exploring additional ways to enhance the utility of FLT. For instance, by enabling loans collateralized in FLT, we could allow providers to borrow against their token rewards. This would enable them to quickly acquire new hardware, connect it to the network, and earn more rewards. Over time, they can borrow against these rewards to scale even faster.

On the customer side, FLT can also be used to subsidize prices for customers. We’re going to bring more mechanics like this in the future.

Developer and Enterprise Reactions to Decentralized Compute

From the customer side, what’s important for us is to build a product that offers an experience similar to centralized cloud platforms. It’s decentralized, but the user experience remains the same.

That’s why developers are very open and positive about us — it’s very easy for them to switch. If they have workloads in the cloud, they can simply host them here. There’s nothing special or unusual about the developer experience.

You simply deploy your workloads to our virtual machines and virtual servers, using SSH keys and standard authorization processes. Essentially, the experience is similar.
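For illustration, a deployment can look like any standard SSH workflow. The sketch below uses the paramiko library; the hostname, user, key path, and commands are hypothetical placeholders, not Fluence-specific tooling.

```python
# Sketch of deploying to a rented VM over SSH, as on any cloud.
# Hostname, user, key path, and commands are hypothetical.

import os
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname="vm.example.com", username="deploy",
               key_filename=os.path.expanduser("~/.ssh/id_ed25519"))

# Pull and restart the app exactly as you would on a centralized cloud VM.
stdin, stdout, stderr = client.exec_command(
    "docker pull myapp:latest && docker restart myapp"
)
print(stdout.read().decode())
client.close()
```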

Of course, there’s always the argument that this is a new platform — it’s a startup — so a certain level of trust is required.

It’s a new business. When you switch from a big, established company with 20 years of history to a new one, there’s naturally a trust factor involved. We’re working to bridge that gap by being present everywhere, staying open and transparent, providing quick support, and even offering financial assistance when needed, especially for bringing smaller projects onto our platform.

It’s all part of a typical onboarding process.

Cost Structure and Pricing Advantages

Yeah, it’s funny how many people don’t realize how much cloud platforms charge on top of the actual economics of the hardware. Essentially, the margins in the cloud business are huge.

If you simply buy a server directly from a manufacturer and run a business model where the server pays itself back over two or three years, you can offer prices several times lower than what traditional cloud providers charge. What they charge for is largely their brand, the “free” credits they offer — which they ultimately recover by charging more — and hundreds of additional services they try to push, leading you to pay even more.
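A back-of-the-envelope calculation makes the point. All figures below are illustrative assumptions, not actual market prices.

```python
# Back-of-the-envelope server economics; all figures are illustrative.

server_cost = 10_000          # upfront hardware cost, USD
payback_months = 30           # pay the server back over ~2.5 years
opex_per_month = 150          # power, rack space, bandwidth, upkeep

break_even = server_cost / payback_months + opex_per_month
print(f"break-even price: ${break_even:.0f}/month")      # ~$483/month

cloud_price = 1_500           # hypothetical hyperscaler price, similar specs
print(f"cloud markup: {cloud_price / break_even:.1f}x")  # ~3.1x
```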

What we do is different: we offer a permissionless protocol managed by an on-chain DAO. We don’t take any fees. The economics are purely based on the provider’s side, with reasonable — not excessive — margins.

We allow providers to access the customer base directly, so they don’t need to handle sales themselves. That’s why they’re comfortable operating with thinner margins — their only focus is running the hardware.

We simply take the lowest price they offer and pass it directly to the customer — no middlemen, no added margins. That’s the whole magic: it’s purely about the real economics of hardware and compute.

Web3 and AI Partnerships for a Decentralized Stack

We have a fairly significant pipeline of companies in the Web3 space. It’s mainly node operators or companies offering what’s called “node-as-a-service,” meaning they provide their end users with the ability to deploy nodes for different protocols with just one click.

We’re supporting them with compute infrastructure under the hood—essentially, these nodes are running on our servers. We have several names lined up in the pipeline. I’m not sure I can share them just yet, but we’ll be announcing them soon.

Supporting AI/LLM Workloads and GPU Roadmap

Right now, we’re focused on CPU servers only, which aren’t suitable for AI inference or training. However, we’re planning to add GPUs soon.

Our providers already have many GPUs and are constantly asking if they can connect them. We’re working to ensure that when we do offer GPUs, they will be available at some of the best prices on the market. Once everything is ready, we’ll publish the offerings and make them available for users.

Essentially, to onboard LLMs and support inference use cases, all you really need is access to GPU capacity and, ideally, some additional UX layers to simplify the developer experience. This is already on our roadmap.

AI in general is driving huge demand for GPU hardware, but it’s also increasing demand for CPU hardware — because tasks like data processing, data labeling, and dataset preparation are all critical steps before training a model.

There are also workloads known as AI agents, which are essentially bots that use AI models. Running these bots primarily requires CPU servers, while calling or interacting with the models requires GPU servers.

So, you always need both CPU servers and GPU servers to fully support these types of applications.

DAO Governance and Community Engagement

We have a fairly standard DAO model. It’s based on on-chain voting, with certain thresholds in place. For example, you need to have a delegated amount of voting power to create a proposal, and a proposal must receive a minimum number of votes in order to pass.

Execution happens on-chain, but we also have an off-chain legal structure in place. A governance committee is responsible for overseeing and facilitating the execution process.

We follow a model where the governance committee is elected by the community every one to two years.

Overall, this model is quite standard — there’s nothing particularly new or unusual about it. It’s a typical Web3 DAO model where voting is token-weighted — people vote based on the number of tokens they hold.
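Stripped to its logic, that model reduces to a couple of checks. The thresholds in this sketch are invented for illustration; the real parameters live in the on-chain governance contracts.

```python
# Token-weighted DAO voting with a proposal threshold and a quorum.
# Thresholds here are made up; the real parameters live on-chain.

PROPOSAL_THRESHOLD = 100_000   # delegated voting power needed to propose
QUORUM = 1_000_000             # minimum total votes for a valid outcome

def can_propose(delegated_power: float) -> bool:
    """A proposal can only be created above the delegation threshold."""
    return delegated_power >= PROPOSAL_THRESHOLD

def tally(votes: list[tuple[float, bool]]) -> str:
    """Each vote is (token_weight, in_favor); voting is token-weighted."""
    total = sum(weight for weight, _ in votes)
    if total < QUORUM:
        return "failed: quorum not reached"
    in_favor = sum(weight for weight, yes in votes if yes)
    return "passed" if in_favor > total / 2 else "rejected"

print(can_propose(250_000))                        # True
print(tally([(900_000, True), (400_000, False)]))  # passed
```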

Decentralization for Enterprise: SLAs, Certification, and Fiat

Enterprise users typically require three key things: first, a Service Level Agreement (SLA), which guarantees the availability of services.

Second, they need providers to have relevant certifications — security and compliance standards like SOC 2 or ISO 27001. Most of our providers already hold these certifications, and we are currently focusing on working primarily with hardware providers who meet these standards.

Third, of course, they want to pay in fiat, as they are enterprises operating in the Web2 world, not in Web3. We are making sure we have all the necessary systems in place to successfully onboard large enterprises from the Web2 space.

We’re making good progress in this area. When it comes to SLAs, there are several ways to address them. One approach is to enter into a legal agreement with the enterprise, clearly outlining and guaranteeing service availability.

We’re also working on an on-chain SLA, which would essentially function like a legal agreement. In this model, providers would commit on-chain to guaranteeing a certain level of service availability to customers.

Everything would be recorded in a smart contract, with clear rules: for example, if a provider fails to meet a 99% SLA, they would be required to refund a portion of the payment to the customer.
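As a sketch of how such a rule might be encoded, with Python standing in for the smart contract and the refund fraction as a hypothetical parameter:

```python
# Sketch of an on-chain SLA refund rule; Python stands in for the
# smart contract. The refund fraction is a hypothetical parameter.

SLA_TARGET = 0.99        # availability the provider committed to
REFUND_FRACTION = 0.25   # share of the payment returned on a breach

def settle_sla(payment: float, measured_uptime: float) -> float:
    """Return how much of the customer's payment the provider
    must refund for this billing period."""
    if measured_uptime >= SLA_TARGET:
        return 0.0
    return payment * REFUND_FRACTION

print(settle_sla(1_000, 0.985))  # 250.0 refunded to the customer
```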

As for fiat payments, there’s only so much we can do — it’s a limitation we’re aware of.

If enterprises want to pay in fiat, we accept it. We then convert the fiat into stablecoins and fund the corresponding smart contracts.

This step is unavoidable at the moment — we haven’t found a way around it yet. However, we believe that in the future, stablecoin adoption will continue to grow, which will help solve this issue, at least partially.

Inevitability of Decentralized Cloud and Fluence’s Future

I don’t think there was a single moment when we realized this needed to exist. It was more the overall growth of the Web3 and crypto movement that made it clear.

Decentralized models have shown that they can sometimes be more efficient and can significantly lower the barriers to entry for many services and technologies.

Especially when combined with the dramatic growth in compute demand driven by AI, we believe it’s essential to deliver easy and affordable access to compute resources to a much wider audience — something traditional cloud platforms often stand in the way of.

Their models come with KYC requirements, credit card barriers, and a primary obligation to serve shareholders. In contrast, we offer an alternative: a DAO-managed, permissionless infrastructure model.

Final Thoughts

We’re inviting people to join our upcoming beta for virtual servers and are currently collecting applications — you can sign up on our website.

We’re excited to see more people try it out, share their feedback, and help us grow this open and permissionless compute infrastructure for humanity.
