Generative AI with Large Language Models — New Hands-on Course by DeepLearning.AI and AWS




Generative AI has taken the world by storm, and we’re starting to see the next wave of widespread AI adoption, with the potential for every customer experience and application to be reinvented with generative AI. Generative AI lets you create new content and ideas including conversations, stories, images, videos, and music. Generative AI is powered by very large machine learning models that are pre-trained on vast amounts of data, commonly referred to as foundation models (FMs).

A subset of FMs called large language models (LLMs) are trained on trillions of words across many natural-language tasks. These LLMs can understand, learn, and generate text that’s nearly indistinguishable from text produced by humans. And not only that, LLMs can also engage in interactive conversations, answer questions, summarize dialogs and documents, and provide recommendations. They can power applications across many tasks and industries, including creative writing for marketing, summarizing documents for legal, market research for financial services, simulating clinical trials for healthcare, and code writing for software development.

Companies are moving rapidly to integrate generative AI into their products and services. This increases the demand for data scientists and engineers who understand generative AI and how to apply LLMs to solve business use cases.

This is why I’m excited to announce that DeepLearning.AI and AWS are jointly launching a new hands-on course, Generative AI with Large Language Models, on Coursera’s education platform that prepares data scientists and engineers to become experts in selecting, training, fine-tuning, and deploying LLMs for real-world applications.

DeepLearning.AI was founded in 2017 by machine learning and education pioneer Andrew Ng with the mission to grow and connect the global AI community by delivering world-class AI education.

Generative AI with large language models

DeepLearning.AI teamed up with generative AI specialists from AWS including Chris Fregly, Shelbee Eigenbrode, Mike Chambers, and me to develop and deliver this course for data scientists and engineers who want to learn how to build generative AI applications with LLMs. We developed the content for this course under the guidance of Andrew Ng and with input from various industry experts and applied scientists at Amazon, AWS, and Hugging Face.

Course Highlights
This is the first comprehensive Coursera course focused on LLMs that details the typical generative AI project lifecycle, including scoping the problem, choosing an LLM, adapting the LLM to your domain, optimizing the model for deployment, and integrating into business applications. The course not only focuses on the practical aspects of generative AI but also highlights the science behind LLMs and why they’re effective.

The on-demand course is broken down into three weeks of content with approximately 16 hours of videos, quizzes, labs, and extra readings. The hands-on labs, hosted by AWS Partner Vocareum, let you apply the techniques directly in an AWS environment provided with the course and include all resources needed to work with the LLMs and explore their effectiveness.

In just three weeks, the course prepares you to use generative AI for business and real-world applications. Let’s take a quick look at each week’s content.

Week 1 – Generative AI use cases, project lifecycle, and model pre-training
In week 1, you’ll examine the transformer architecture that powers many LLMs, see how these models are trained, and consider the compute resources required to develop them. You’ll also explore how to guide model output at inference time using prompt engineering and by specifying generative configuration settings.

In the first hands-on lab, you’ll construct and compare different prompts for a given generative task. In this case, you’ll summarize conversations between multiple people. For example, imagine summarizing support conversations between you and your customers. You’ll explore prompt engineering techniques, try different generative configuration parameters, and experiment with various sampling strategies to gain intuition on how to improve the generated model responses.
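To build intuition for what those generative configuration parameters do, here is a minimal, self-contained sketch of temperature scaling and top-k sampling over next-token logits. The token vocabulary and logit values are made up for illustration; the lab itself works with real LLMs in the provided AWS environment.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=None, seed=None):
    """Sample a token from raw logits using temperature scaling
    and optional top-k filtering (two common generative configs)."""
    # Temperature < 1 sharpens the distribution; > 1 flattens it.
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    # Keep only the k most likely tokens if top_k is set.
    if top_k is not None:
        kept = sorted(scaled, key=scaled.get, reverse=True)[:top_k]
        scaled = {tok: scaled[tok] for tok in kept}
    # Softmax over the remaining logits (shift by max for stability).
    max_l = max(scaled.values())
    exps = {tok: math.exp(l - max_l) for tok, l in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    rng = random.Random(seed)
    return rng.choices(list(probs), weights=list(probs.values()))[0]

# Toy next-token logits; lowering temperature makes the top token dominate.
logits = {"meeting": 3.2, "call": 2.9, "pizza": 0.1, "the": 1.5}
print(sample_next_token(logits, temperature=0.7, top_k=2))
```

With `top_k=2`, only "meeting" and "call" can ever be sampled; at a very low temperature the most likely token wins almost every time, which approximates greedy decoding.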

Week 2 – Fine-tuning, parameter-efficient fine-tuning (PEFT), and model evaluation
In week 2, you’ll explore options for adapting pre-trained models to specific tasks and datasets through a process called fine-tuning. A variant of fine-tuning, called parameter-efficient fine-tuning (PEFT), lets you fine-tune very large models using much smaller resources, often a single GPU. You’ll also learn about the metrics used to evaluate and compare the performance of LLMs.

In the second lab, you’ll get hands-on with parameter-efficient fine-tuning (PEFT) and compare the results to prompt engineering from the first lab. This side-by-side comparison will help you gain intuition into the qualitative and quantitative impact of different techniques for adapting an LLM to your domain-specific datasets and use cases.
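One way to see why PEFT fits on a single GPU is to count parameters. The sketch below assumes a LoRA-style method (one popular PEFT technique): the frozen weight matrix W stays untouched, and only a low-rank update B @ A is trained. The layer dimensions are illustrative, not taken from any specific model.

```python
def lora_param_counts(d_in, d_out, rank):
    """Compare full fine-tuning of one weight matrix (d_in x d_out)
    with a LoRA-style low-rank update W' = W + B @ A, where only
    B (d_in x rank) and A (rank x d_out) are trained."""
    full = d_in * d_out                  # every weight updated
    lora = d_in * rank + rank * d_out    # adapter weights only
    return full, lora

full, lora = lora_param_counts(d_in=4096, d_out=4096, rank=8)
print(f"full: {full:,}  lora: {lora:,}  trainable fraction: {lora / full:.2%}")
```

For this hypothetical 4096×4096 layer with rank 8, the adapter trains well under 1% of the layer’s weights, which is what makes single-GPU fine-tuning of very large models feasible.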

Week 3 – Fine-tuning with reinforcement learning from human feedback (RLHF), retrieval-augmented generation (RAG), and LangChain
In week 3, you’ll make the LLM responses more humanlike and align them with human preferences using a technique called reinforcement learning from human feedback (RLHF). RLHF is key to improving the model’s honesty, harmlessness, and helpfulness. You’ll also explore techniques such as retrieval-augmented generation (RAG) and libraries such as LangChain that allow the LLM to integrate with custom data sources and APIs to improve the model’s response further.
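The core idea of RAG can be sketched in a few lines: retrieve the document most relevant to the user’s question and prepend it to the prompt so the LLM can ground its answer in it. Real RAG systems use learned embeddings and vector databases; this toy version uses bag-of-words cosine similarity purely for illustration, and the documents and prompt template are made up.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents):
    """Return the document most similar to the query."""
    q = Counter(query.lower().split())
    return max(documents, key=lambda d: cosine(q, Counter(d.lower().split())))

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is open Monday through Friday.",
]
question = "How long do refunds take?"
context = retrieve(question, docs)
# The retrieved context is injected into the prompt sent to the LLM.
prompt = f"Answer using this context: {context}\nQuestion: {question}"
```

Libraries such as LangChain wrap this retrieve-then-prompt pattern, along with connectors to data sources and APIs, so you don’t hand-roll it in practice.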

In the final lab, you’ll get hands-on with RLHF. You’ll fine-tune the LLM using a reward model and a reinforcement-learning algorithm called proximal policy optimization (PPO) to increase the harmlessness of your model responses. Finally, you’ll evaluate the model’s harmlessness before and after the RLHF process to gain intuition into the impact of RLHF on aligning an LLM with human values and preferences.
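The key piece of PPO is its clipped surrogate objective, which lets the policy chase reward-model scores without drifting too far from the original model in a single update. Here is a minimal sketch of that objective for one action; the numbers are illustrative, and the lab itself applies PPO through full training libraries rather than this formula in isolation.

```python
def ppo_clipped_objective(ratio, advantage, epsilon=0.2):
    """PPO's clipped surrogate objective for a single action:
    min(r * A, clip(r, 1 - eps, 1 + eps) * A), where r is the
    probability ratio between the new and old policies and A is
    the advantage (here, driven by the reward model's score)."""
    clipped = max(1 - epsilon, min(ratio, 1 + epsilon))
    return min(ratio * advantage, clipped * advantage)

# The reward model favored the new response (advantage > 0), but the
# probability ratio 1.5 exceeds 1 + eps, so the objective is capped.
print(ppo_clipped_objective(ratio=1.5, advantage=2.0))
```

The cap is what keeps each RLHF update conservative: large jumps in token probabilities earn no extra objective value, whether the advantage is positive or negative.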

Enroll Today
Generative AI with Large Language Models is an on-demand, three-week course for data scientists and engineers who want to learn how to build generative AI applications with LLMs.

Enroll for Generative AI with Large Language Models today.

— Antje
