Felafax recently launched!

Launch YC: Felafax: Expanding AI Infra beyond NVIDIA

"Building open-source AI platform for next-generation AI hardware, reducing ML training costs by 30%."

TL;DR: They are building an open-source AI platform for non-NVIDIA hardware. Try it at felafax.ai or check out their GitHub!


Founded by Nikhil Sonti & Nithin Sonti


👋 Introduction

Nikhil and Nithin are the twin brothers behind Felafax AI. Before this, they spent half a decade at Google and Meta building AI infrastructure. Drawing on that experience, they are creating an ML stack from the ground up. The goal is to deliver high performance and an easy workflow for training models on non-NVIDIA hardware like Google TPUs, AWS Trainium, and AMD and Intel GPUs.

🧨 The Problem

  • The ML ecosystem for non-NVIDIA hardware is underdeveloped, even though alternative chipsets like Google TPUs offer a much better price-to-performance ratio: TPUs are 30% cheaper to use.
  • The cloud layer for spinning up AI workloads is painful. Training requires installing the right low-level dependencies (the infamous CUDA errors), attaching persistent storage, waiting 20 minutes for the machine to boot up… the list goes on.
  • Models are getting bigger (like Llama 405B) and don't fit on a single GPU, requiring complex multi-GPU orchestration.

🥳 The Solution

Felafax is launching a cloud layer to make it easy to spin up AI training clusters of any size, from 8 TPU cores to 2048 cores. They provide:

  • Effortless Setup: Out-of-the-box templates for PyTorch XLA and JAX to get you up and running quickly.
  • LLaMa Fine-tuning, Simplified: Dive straight into fine-tuning LLaMa 3.1 models (8B, 70B, and 405B) with pre-built notebooks. They've handled the tricky multi-TPU orchestration for you (the sketch below gives a sense of what that involves in raw JAX).
[Felafax demo GIF]
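
To give a concrete sense of the multi-device work those notebooks hide, here is a minimal, purely illustrative JAX sketch of sharding a large weight matrix across the attached TPU cores. This is not Felafax's code; the shapes, mesh layout, and function names are made up for the example.

```python
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec

# Lay out whatever accelerators are attached (e.g. 8 TPU cores) as a 1-D mesh.
devices = np.array(jax.devices())
mesh = Mesh(devices, axis_names=("model",))

# Split a weight matrix column-wise across the mesh so no single core has to
# hold the whole thing (hypothetical shapes, chosen only for illustration).
weights = jnp.zeros((8192, 8192), dtype=jnp.bfloat16)
weights = jax.device_put(weights, NamedSharding(mesh, PartitionSpec(None, "model")))

# A jit-compiled step runs on all cores at once; XLA inserts the cross-device
# communication automatically.
@jax.jit
def forward(x, w):
    return x @ w

x = jnp.ones((4, 8192), dtype=jnp.bfloat16)
y = forward(x, weights)
print(y.sharding)  # the output stays sharded across the same mesh
```

Multiply this by optimizer state, checkpointing, and host coordination, and it becomes clear why a pre-built notebook is handy for the 70B and 405B models.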

In the coming weeks, they will also launch an open-source AI platform built on top of JAX and OpenXLA (an alternative to NVIDIA's CUDA stack). It will support AI training across a variety of non-NVIDIA hardware (Google TPU, AWS Trainium, AMD and Intel GPUs) and offer the same performance as NVIDIA at 30% lower cost. Follow them on Twitter, LinkedIn, and GitHub for updates!
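
For a sense of why building on JAX and OpenXLA makes that hardware swap possible, here is a small illustrative sketch (not taken from their platform): the same jitted function compiles through XLA to whichever backend the host exposes, with no CUDA-specific code anywhere.

```python
import jax
import jax.numpy as jnp

# A toy training-style computation; XLA lowers it to TPU, GPU, or CPU code
# depending on what JAX finds on the machine.
@jax.jit
def loss(w, x, y):
    pred = jnp.tanh(x @ w)
    return jnp.mean((pred - y) ** 2)

grad_fn = jax.jit(jax.grad(loss))

w = jnp.zeros((16, 1))
x = jnp.ones((32, 16))
y = jnp.ones((32, 1))

print(jax.default_backend())   # "tpu", "gpu", or "cpu", depending on the host
print(grad_fn(w, x, y).shape)  # (16, 1) on any of those backends
```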

🙏 How You Can Help

  1. Try their seamless cloud layer for spinning up VMs for AI training (you get $200 in credits to get started): app.felafax.ai
  2. Try fine-tuning LLaMa 3.1 models for your use case.
  3. If you are an ML startup or an enterprise that would like a seamless platform for your in-house ML training, reach out to them (calendar).


Learn More

🌐 Visit felafax.ai to learn more
🌟 Star them on GitHub

👣 Follow Felafax on LinkedIn & X

Posted October 11, 2024 in Launch