Show HN: Shadeform – Single Platform and API for Provisioning GPUs
9 by edgoode | 1 comment on Hacker News.
Hi HN, we are Ed, Zach, and Ronald, creators of Shadeform ( https://ift.tt/aCZQoMf ), a GPU marketplace for viewing live availability and prices across the GPU market, and for deploying and reserving on-demand instances. We have aggregated 8+ GPU providers into a single platform and API, so you can easily provision instances like A100s and H100s wherever they are available.

From our experience working at AWS and Azure, we believe the cloud could evolve from all-encompassing hyperscalers (AWS, Azure, GCP) toward specialized clouds for high-performance use cases. After the launch of ChatGPT, we noticed GPU capacity thinning across both the major providers and the emerging GPU and HPC clouds, so we decided it was the right time to build a single interface for IaaS across clouds. With the explosion of Llama 2 and open-source models, we are seeing individuals, startups, and organizations struggling to access A100s and H100s for model fine-tuning, training, and inference. This encouraged us to help everyone access compute and gain flexibility in their cloud infra.

Right now, we've built a platform that lets users find GPU availability and launch instances from a unified interface. Our long-term goal is to build a hardwareless GPU cloud where you can leverage managed ML services to train and run inference in different clouds, reducing vendor lock-in.

We shipped a few features to help teams access GPUs today:

- a "single pane of glass" for GPU availability and prices;
- a "single control plane" for provisioning GPUs in any cloud through our platform and API;
- a reservation system that monitors real-time availability and launches GPUs as soon as they become available.

Next up, we're building multi-cloud load-balanced inference, streamlining self-hosting of open-source models, and more.

You can try our platform at https://ift.tt/hDxOlF3 . You can provision instances in your own accounts by adding your cloud credentials and API keys, or you can leverage "ShadeCloud" and provision GPUs in our accounts. If you deploy in your account, it is free. If you deploy in our accounts, we charge a 5% platform fee.

We'd love your feedback on how we're approaching this problem. What do you think?
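The reservation system described above (monitor real-time availability, launch as soon as capacity appears) boils down to a poll-and-launch loop. Below is a minimal sketch of that pattern, not Shadeform's actual implementation: `check_availability` and `launch_instance` are hypothetical stand-ins for whatever provider calls sit behind the API.

```python
import time
from typing import Callable, Optional


def reserve_gpu(
    check_availability: Callable[[str], bool],
    launch_instance: Callable[[str], str],
    gpu_type: str,
    poll_interval_s: float = 30.0,
    max_attempts: int = 120,
) -> Optional[str]:
    """Poll for capacity and launch an instance as soon as it appears.

    Returns the launched instance's id, or None if no capacity was
    found within max_attempts polls.
    """
    for _ in range(max_attempts):
        if check_availability(gpu_type):
            # Capacity found: launch immediately so it isn't lost to
            # another buyer, and hand back the instance id.
            return launch_instance(gpu_type)
        time.sleep(poll_interval_s)
    return None  # gave up after max_attempts polls
```

For example, `reserve_gpu(check, launch, "H100")` would block until `check("H100")` first returns True, then return whatever id `launch("H100")` produces. A production version would add jitter/backoff to the polling and handle the race where capacity vanishes between the check and the launch call.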