How does Kinesis Network achieve savings?

The compute market is highly competitive, with razor-thin margins. How, then, can Kinesis offer meaningful savings? Below, we explore some of the business and technological strategies we employ to reduce costs and pass the savings on to our customers.

Business: Economies of Scale

Bargaining Power

As Kinesis expands, its bargaining power increases significantly. A single customer requesting 5 machines from a vendor is not the same as Kinesis negotiating for 5,000 machines. At such a large scale, we secure bulk discounts or exclusive business deals, enhancing cost efficiency.

More Predictability

With larger workloads and a wider variety of customers, we can average out a stable baseline of demand. This allows us to commit to longer-term plans (e.g. reserved instances, savings plans, etc.) that individual customers cannot commit to on their own.

Joint Ventures / Partnerships

Our scale also opens the door to joint ventures and partnerships, where we can achieve even higher savings than retail discount agreements allow.

Tech: Serverless Architecture

Access to Opportunistic or Underutilized Resources

Kinesis Portal abstracts away the underlying complexity, allowing workloads to run on the most economical resources available without any additional considerations on the front end. The most economical resources might come from:

Spot Instances

These are instances offered at up to 90% savings by top cloud providers such as AWS and Azure. The catch is that, unlike dedicated instances, they can be reclaimed with as little as two minutes’ notice. A customer would normally need to handle these interruptions themselves, which is a non-trivial engineering effort. Thankfully, Kinesis Network handles these resource movements seamlessly.
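To give a sense of the engineering effort Kinesis absorbs here, below is a minimal sketch (not Kinesis Portal code) of what a workload would otherwise have to do on its own: poll the EC2 instance-metadata spot/instance-action path, which AWS populates roughly two minutes before reclaiming a spot instance. The drain_and_checkpoint helper is a hypothetical placeholder, and the sketch assumes IMDSv1-style metadata access (IMDSv2 would additionally require a session token).

```python
import time
import urllib.error
import urllib.request

# AWS publishes a pending spot interruption at this instance-metadata path
# roughly two minutes before the instance is reclaimed.
METADATA_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

def spot_interruption_pending() -> bool:
    """Return True if an interruption notice has been posted for this instance."""
    try:
        with urllib.request.urlopen(METADATA_URL, timeout=1) as resp:
            return resp.status == 200  # body describes the action and its time
    except urllib.error.HTTPError:
        return False  # 404 means no interruption is currently scheduled
    except urllib.error.URLError:
        return False  # metadata service unreachable (e.g. not running on EC2)

def drain_and_checkpoint() -> None:
    """Hypothetical placeholder: stop accepting work, checkpoint, notify the scheduler."""
    print("Spot interruption notice received; draining workload...")

if __name__ == "__main__":
    while True:
        if spot_interruption_pending():
            drain_and_checkpoint()
            break
        time.sleep(5)  # poll every few seconds; the warning window is only ~2 minutes
```

With Kinesis, this polling, checkpointing, and rescheduling logic is handled by the platform rather than by each customer.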

Under-Capitalized Resources

There is a ton of underutilized hardware out there, such as:

  • Gaming PCs

  • Ex-Ethereum mining rigs

  • Media production houses with top-end graphics cards

  • Data centers with idle machines on the shelves, awaiting customers (AWS and Azure offer such machines as “spot” instances; many other vendors lack the infrastructure to monetize these idle resources, so Kinesis enables them to do so)

Since Kinesis Portal can run workloads seamlessly across the globe, we can make use of resources that are otherwise hard or impossible for regular retail customers to reach.

Pooling Resources Together

Most CPU workloads come nowhere near 100% utilization. In fact, research shows that typical workloads sit between 1% and 35% utilization, yet their owners pay for the whole machine, for the whole month. With Kinesis’s serverless architecture, we can run workloads from multiple sources on the same machine, so each machine is utilized far more efficiently.
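As a toy illustration of the effect (with made-up utilization numbers and prices, and a simple first-fit packing pass rather than Kinesis’s actual scheduler), packing many low-utilization workloads onto shared machines requires far fewer machines than giving each workload its own box:

```python
# Toy back-of-the-envelope calculation: illustrative numbers, not Kinesis pricing.
MACHINE_COST_PER_MONTH = 100.0   # assumed list price of one machine
TARGET_UTILIZATION = 0.80        # leave headroom instead of packing to 100%

# Average CPU utilization of ten independent workloads (as fractions of one machine).
workloads = [0.05, 0.10, 0.02, 0.30, 0.15, 0.08, 0.20, 0.12, 0.03, 0.25]

dedicated_cost = len(workloads) * MACHINE_COST_PER_MONTH

# First-fit-decreasing bin packing: place each workload on the first machine with room.
machines = []  # each entry is the total utilization already placed on that machine
for load in sorted(workloads, reverse=True):
    for i, used in enumerate(machines):
        if used + load <= TARGET_UTILIZATION:
            machines[i] += load
            break
    else:
        machines.append(load)

pooled_cost = len(machines) * MACHINE_COST_PER_MONTH
print(f"Dedicated: {len(workloads)} machines -> ${dedicated_cost:.0f}/month")
print(f"Pooled:    {len(machines)} machines -> ${pooled_cost:.0f}/month")
print(f"Savings:   {100 * (1 - pooled_cost / dedicated_cost):.0f}%")
```

In this toy example, ten workloads averaging 13% utilization fit comfortably on two shared machines, an 80% reduction in machines paid for.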

Who Will Save the Most with Kinesis

Customers who take advantage of more of the cost-saving strategies above will enjoy greater savings. As always, please remember that the total value of Kinesis comes from more than cost savings, so even where we cannot offer cost savings, we can still offer tremendous value.

Who Benefits Most with CPU and GPU Workloads?

  • They are currently on AWS or Azure, but are looking for a similarly respectable cloud provider at a lower cost (in this case, Kinesis can run their workloads across a wider variety of data centers, unlocking seamless savings).

  • They have a large number of microservices and interactive workloads with average CPU or GPU utilization below 50%. Unless they are heavy number crunchers (for example, running large batch jobs), they will very likely fit this description. This way, instead of paying for the whole month, they pay only for the CPU/GPU cycles they actually consume.

  • They are willing to make long-term commitments. This lets us commit to reserved instances ourselves and pass on the savings.

  • Their workloads are Docker images on Ubuntu and can run on a variety of hardware and locations.

  • Their workloads can be divided into smaller chunks that run asynchronously and independently. Very large map-reduce and ML training problems are good examples.

  • GPU-specific: their models fit into 16 GB or 24 GB of VRAM. This way, we can run their models on a much wider variety of hardware, unlocking even more savings (a quick way to estimate this is sketched below).
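For the GPU point above, a rough back-of-the-envelope estimate is usually enough to check whether a model clears the 16 GB or 24 GB bar. The 20% overhead factor and the example model size below are illustrative assumptions, not Kinesis requirements; real footprints vary with batch size, sequence length, and framework.

```python
def fits_in_vram(num_params: float, bytes_per_param: int, vram_gb: int,
                 overhead: float = 0.20) -> bool:
    """Rough estimate: weights plus ~20% overhead for activations, KV cache, etc."""
    needed_gb = num_params * bytes_per_param * (1 + overhead) / 1e9
    return needed_gb <= vram_gb

# A 7B-parameter model in fp16 (2 bytes/param) needs roughly 14 GB plus overhead,
# so it squeezes into a 24 GB card but not a 16 GB one; quantized to 8-bit it fits both.
print(fits_in_vram(7e9, bytes_per_param=2, vram_gb=16))  # False
print(fits_in_vram(7e9, bytes_per_param=2, vram_gb=24))  # True
print(fits_in_vram(7e9, bytes_per_param=1, vram_gb=16))  # True
```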
