What You Can Do with Kinesis
Run AI and LLM workloads
Deploy, fine-tune, and serve machine learning models on high-performance GPU infrastructure, with the flexibility to scale as demand changes.

Build and operate data pipelines
Execute batch and streaming workloads efficiently, adapting compute resources to changing data volumes.

Optimize cost and utilization
Choose between on-demand, usage-based compute and pre-configured resources, and gain a unified view of performance and spend across your workloads.

Leverage existing infrastructure
Connect your own compute resources to Kinesis and manage them alongside platform-provided capacity, all within a single control plane.

Standardize your runtime environment
Use container-based deployments to ensure consistency across development, testing, and production environments.