Agora Labs (Fine-Tuning to Inference Pipeline)

Fine-tune an open-source model and deploy for inference on Akash!

  • AI / Robotics

Using our updated version of the Akash Console, a user can take Llama-2-7B, upload their proprietary data via Storj, and fine-tune the model. We store model checkpoints so that if a provider drops during a lease, training progress and weights are preserved and the job can be redeployed with ease. Finally, we automatically host the fine-tuned Llama-2 model for inference and expose it for interaction via a Gradio UI.
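
The sketch below illustrates the general shape of such a pipeline, not the Agora Labs implementation itself: fine-tune Llama-2-7B with periodic checkpoints (so progress survives a provider dropping the lease), then serve the resulting model behind a Gradio UI. The model ID, data path, checkpoint directory, and hyperparameters are illustrative assumptions; the proprietary dataset is assumed to have already been pulled down locally (e.g. from a Storj bucket) as a JSONL file with a `text` field.

```python
# Minimal sketch, assuming a Hugging Face fine-tuning stack; not the exact Agora Labs code.
import os

import gradio as gr
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, pipeline)
from transformers.trainer_utils import get_last_checkpoint

BASE_MODEL = "meta-llama/Llama-2-7b-hf"   # assumed base model ID
DATA_PATH = "data/proprietary.jsonl"      # assumed local copy of data uploaded via Storj
CHECKPOINT_DIR = "checkpoints/llama2-ft"  # persisted so a dropped lease can resume

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

def tokenize(batch):
    # Assumes each record has a "text" field; labels mirror inputs for causal LM training.
    out = tokenizer(batch["text"], truncation=True, max_length=512, padding="max_length")
    out["labels"] = out["input_ids"].copy()
    return out

dataset = load_dataset("json", data_files=DATA_PATH, split="train").map(tokenize, batched=True)

args = TrainingArguments(
    output_dir=CHECKPOINT_DIR,
    per_device_train_batch_size=1,
    num_train_epochs=1,
    save_strategy="steps",   # write checkpoints periodically...
    save_steps=200,          # ...so weights and progress survive a provider dropping mid-lease
    save_total_limit=3,
    logging_steps=50,
)

trainer = Trainer(model=model, args=args, train_dataset=dataset)

# Resume from the latest checkpoint if one exists (e.g. after redeploying to a new provider).
last_ckpt = get_last_checkpoint(CHECKPOINT_DIR) if os.path.isdir(CHECKPOINT_DIR) else None
trainer.train(resume_from_checkpoint=last_ckpt)
trainer.save_model(f"{CHECKPOINT_DIR}/final")

# Host the fine-tuned model for inference behind a simple Gradio UI.
generator = pipeline("text-generation", model=f"{CHECKPOINT_DIR}/final", tokenizer=tokenizer)

def chat(prompt):
    return generator(prompt, max_new_tokens=256)[0]["generated_text"]

gr.Interface(fn=chat, inputs="text", outputs="text",
             title="Fine-tuned Llama-2-7B").launch(server_name="0.0.0.0", server_port=7860)
```

In a deployment like the one described, the checkpoint directory would live on storage that outlives any single lease, so a replacement provider can pick up training from `resume_from_checkpoint` rather than starting over.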