
Article Series: Practical Applications of Generative AI

Generative AI (GenAI) has become a major component of the artificial intelligence (AI) and machine learning (ML) industry. AI models have been developed that can generate realistic text, speech, images, and even videos. Using these models, anyone can now automate many tasks that previously required extensive and skilled human labor.

However, using GenAI comes with challenges and risks. While text-generating models, often known as large language models or LLMs, can perform many natural language tasks “out of the box,” they often require careful crafting of their input; this is known as “prompt engineering” and is often a key ingredient to any application using an LLM.
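As a minimal illustration (not taken from the series itself), prompt engineering often amounts to wrapping the user's input in a structured template that sets a role, supplies grounding context, and constrains the output. The function below is a hypothetical sketch of that pattern:

```python
def build_prompt(question: str, context: str) -> str:
    """Wrap a user question in a structured prompt template:
    a role instruction, grounding context, and an explicit
    output constraint to reduce off-topic answers."""
    return (
        "You are a concise technical assistant.\n"
        "Use only the context below to answer; if the answer is not "
        "in the context, say \"I don't know\".\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

# Example usage: the assembled string is what gets sent to the LLM.
prompt = build_prompt("What is an LLM?", "LLMs are large language models.")
```

The exact template wording is application-specific; the point is that the structure (role, context, constraint) is engineered deliberately rather than passing raw user input to the model.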

Some businesses are reluctant to adopt LLMs because of their associated risks. For example, LLMs are known to “hallucinate”: generate convincing but factually false responses. There are also data-privacy concerns, as many of the most popular offerings, such as ChatGPT, require sending data to a third party. Fortunately, there are mitigations for these risks, such as grounding responses in trusted sources and self-hosting models to keep data in-house.

One of the most troubling downsides of GenAI is its ability to quickly produce convincing but false information. We’ve mentioned LLMs and their hallucinations, but there are also deliberate misuses. Speech models can “clone” a speaker’s voice, providing phone-based scammers with a tool for imitating someone trusted by their victims. Image-generating models can generate “deep fakes”: photo-realistic images of events that never happened.

As GenAI becomes more common and is used in more applications, the development community will need to learn about the models’ abilities, risks, and limitations.

In the InfoQ “Practical Applications of Generative AI” article series, we present real-world solutions and hands-on practices from leading GenAI practitioners in the industry.

You can download the entire series collated in PDF format, in the associated eMag.

 

Series Contents

3. Navigating LLM Deployment: Tips, Tricks, and Techniques

This article focuses on self-hosted LLMs and how to get the best performance from them. The author provides best practices for overcoming challenges posed by model size, GPU scarcity, and a rapidly evolving field.

Article by: Meryem Arik

To be released: week of September 23, 2024

4. Virtual Panel: What to Consider when Adopting Large Language Models

This virtual panel brings four of our authors together to discuss topics such as: how to choose between an API-based vs. self-hosted LLM, when to fine-tune an LLM, how to mitigate LLM risks, and what non-technical changes organizations need to make when adopting LLMs.

Panelists: Meryem Arik, Tingyi Li, Numa Dhamani, Maggie Engler

Hosted by: Anthony Alford

To be released: week of September 30, 2024
