A month after launching, here's what we've learned
Lots done, a lot more to do. One month after launching, here's our analysis of how it went and what we learned.
Written by: Phily Hayes, March 8, 2024
When we started building Zerve, we had a couple of perspectives on the space, learned both through our own experience and through hundreds of interviews we conducted over the course of about a year. First, the tools available to data scientists force them to choose between stability and flexibility: exploring data in VS Code is a nightmare, and trying to deploy a notebook is practically delusional. Second, the only people generating value from data today are doing it with code. In other words, 100% of AI projects require code in one way or another.
That’s why we built Zerve, and that’s why we made it free for our users.
We launched on January 30 of this year, and in the first week thousands of people signed up to use Zerve. We got tons of great feedback, both directly and on social media, which let us know that we are on the right track. Here are some things we've learned along the way.
The community recognizes the problem
Everyone who signed up for Zerve got a direct email from me, because we really wanted to understand their experience with Zerve and their needs in tooling. Going into the launch, we thought we knew our users very well. We had run a small-scale beta test earlier in 2023 and learned a lot from that experience, so this time we were looking for confirmation of what we'd built.
We’ve gotten lots of great feedback. Things like, “Zerve AI is like Miro for developers, except cooler. 😎” and “It’s like the Google Colab upgrade we’ve all been waiting for.” These comments are particularly awesome because they could have come straight from the pitches we’ve given about our vision; now people we had never spoken to were seeing the problem, and Zerve, just like we did.
We have long called the state of tooling in the data science space the “Data Science Stockholm Syndrome,” and it definitely seems like people are waking up to the limitations of their tools. Since the seminal talk “I don’t like notebooks” was posted in October 2018, highlighting some of these issues, people have begun to realize that better tools are needed to do serious data science. Our launch further validated this.
Self-hosting is an absolutely critical part of data science
Scores of data science tools have been built in the cloud over the last decade, and it always surprised me when their makers would stutter and stumble when asked about self-hosting. At the end of the day, nobody wants their data to leave their cloud environment and go to a third party to be analyzed or processed. What’s more, vendors shouldn’t want this either: it creates all sorts of problems around data security and around working with larger, more sophisticated organizations.
That’s why we built Zerve to be self-hosted from the beginning. We’re on the AWS Marketplace, and a full self-hosted installation takes less than 10 minutes.
The feedback we got from the launch (on our cloud environment) proved to us that people are very hesitant to send their data out to a third party, so we’ve had lots of conversations about getting people up and running in their own environments so that they can use Zerve for real projects on real data.
We also often hear that offering Zerve locally is important to the community. Zerve is, however, unlikely to offer this. That’s not to diminish how important it is to some folks, but for us, local installs are not how we see ourselves elevating the impact of data science. There are lots of reasons, but the most crucial is true collaboration, whether within the team or with stakeholders like data engineers: the benefits we’ve built are centred around streamlining the journey from prototype to production, using foundational cloud technologies to make handover, or direct deployment, infinitely more straightforward.
🐛 Bugs 🐞
Of course there have been issues, but the community has a real sense of wanting to like what we are doing, and the patience shown to us while we got things resolved has been really awesome. Additionally, the community has been quick to point out gaps in what we are doing: some we knew about, and some new ones. It’s been insanely valuable to meet all of these people, understand their ‘why’, and add the desired functionality where it aligns with our vision.
Generative AI is everywhere
Early on, I thought that generative AI was largely hype from a business perspective, but I was completely wrong about that. Nearly everyone we speak to is working on some sort of LLM or other generative AI project. Discovering and tapping into the capabilities of these powerful new models is a fundamental leap forward.
The bad news is that the tooling for fine-tuning and rolling out these models is very challenging. Between the DevOps issues of utilizing GPUs and the costs associated with compute, many organizations have stumbled out of the gate on these projects. Because Zerve provides easy, targeted, serverless GPU support, we resolve both of these issues pretty seamlessly.
That’s why next month we’ll be launching a host of new features, including integrations with Hugging Face and Amazon Bedrock, GPU support, and more. These features make Zerve incredibly well positioned to be THE IDE for Gen AI.
Stay tuned for details of that launch on our website and our Product Hunt page.