
Open source Dracarys models ignite generative AI-powered coding


For fans of the HBO series Game of Thrones, the term “Dracarys” has a very specific meaning. Dracarys is the word used to command a dragon to breathe fire.

While there are no literal dragons in the world of generative AI, thanks to Abacus.ai the term Dracarys now has a second meaning: it is the name of a new family of open large language models (LLMs) for coding.

Abacus.ai is an AI model development platform and tools vendor that is no stranger to using the names of fictional dragons for its technology. Back in February, the company released Smaug-72B. Smaug is the name of the dragon from the classic fantasy book The Hobbit. Whereas Smaug is a general-purpose LLM, Dracarys is designed to optimize coding tasks.

For its initial release, Abacus.ai has applied its so-called “Dracarys recipe” to the 70B parameter class of models. The recipe involves optimized fine-tuning, among other techniques.

“It’s a combination of training dataset and fine-tuning techniques that improve the coding abilities of any open-source LLM,” Bindu Reddy, CEO and co-founder of Abacus.ai, told VentureBeat. “We have demonstrated that it improves both Qwen-2 72B and Llama-3.1 70B.”

Gen AI for coding tasks is a growing space

The overall market for gen AI in the application development and coding space is an area full of activity.

The early pioneer in the space was GitHub Copilot, which helps developers with code completion and application development tasks. Multiple startups, including Tabnine and Replit, have also been building features that bring the power of LLMs to developers.

Then of course there are the LLM vendors themselves. Dracarys provides a fine-tuned version of Meta’s Llama 3.1 general-purpose model. Anthropic’s Claude 3.5 Sonnet has also emerged in 2024 as a popular and competent LLM for coding.

“Claude 3.5 is a very good coding model, but it’s a closed-source model,” Reddy said. “Our recipe improves the open-source models, and Dracarys-72B-Instruct is the best coding model in its class.”

The numbers behind Dracarys and its AI coding capabilities

According to LiveBench benchmarks for the new models, there is a marked improvement with the Dracarys recipe.

LiveBench gives a coding score of 32.67 to the meta-llama-3.1-70b-instruct turbo model; the Dracarys-tuned version boosts that to 35.23. For Qwen2 the results are even better: the existing qwen2-72b-instruct model has a coding score of 32.38, and the Dracarys recipe lifts that to 38.95.
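Taken together, the LiveBench figures above translate into the following absolute and relative gains. A quick calculation (using only the scores reported here; higher is better) makes the comparison concrete:

```python
# LiveBench coding scores as reported for the base and Dracarys-tuned models
scores = {
    "meta-llama-3.1-70b-instruct": {"base": 32.67, "dracarys": 35.23},
    "qwen2-72b-instruct": {"base": 32.38, "dracarys": 38.95},
}

for model, s in scores.items():
    delta = s["dracarys"] - s["base"]          # absolute score gain
    pct = 100 * delta / s["base"]              # relative improvement
    print(f"{model}: +{delta:.2f} points ({pct:.1f}% relative)")
```

In other words, the recipe yields roughly a 2.6-point gain for the Llama-based model and a 6.6-point gain (about a 20% relative improvement) for the Qwen2-based model.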

While Qwen2 and Llama 3.1 are the only models that currently have the Dracarys recipe, Abacus.ai has plans for more models in the future.

“We will also be releasing the Dracarys versions for Deepseek-coder and Llama-3.1 400b,” Reddy said.

How Dracarys will help enterprise coding

There are several ways that developers and enterprises can potentially benefit from the improved coding performance that Dracarys promises.

Abacus.ai currently provides the model weights on Hugging Face for both the Llama and Qwen2-based models. Reddy noted that the fine-tuned models are also now available as part of Abacus.ai’s Enterprise offering. 

“They are great options for enterprises who don’t want to send their data to public APIs such as OpenAI and Gemini,” Reddy said. “We will also make Dracarys available on our extremely popular ChatLLM service that is meant for small teams and professionals if there is sufficient interest.”
