Developing Generative AI Applications on AWS
Do you want to use generative AI to enhance customer experiences and solve challenging business problems? Developing Generative AI Applications on AWS teaches you how to create generative AI applications.
During this two-day course, an instructor experienced in Python walks you through the fundamentals, benefits, and key terminology of generative artificial intelligence from a developer's perspective. You will learn the fundamentals of prompt engineering, how to plan a generative AI project, and more, so that you can build generative AI applications with AWS services. By the end of the course, you will have the knowledge and skills required to create applications that can answer questions, generate and summarize text, and interact with users through a chatbot interface.
After completing the Developing Generative AI Applications on AWS course, students will be able to:
- Describe generative AI and how it aligns to machine learning
- Define the importance of generative AI and explain its potential risks and benefits
- Identify business value from generative AI use cases
- Discuss the technical foundations and key terminology for generative AI
- Explain the steps for planning a generative AI project
- Identify some of the risks and mitigations when using generative AI
- Understand how Amazon Bedrock works
- Familiarize yourself with basic concepts of Amazon Bedrock
- Recognize the benefits of Amazon Bedrock
- List typical use cases for Amazon Bedrock
- Describe the typical architecture associated with an Amazon Bedrock solution
- Understand the cost structure of Amazon Bedrock
- Implement a demonstration of Amazon Bedrock in the AWS Management Console
- Define prompt engineering and apply general best practices when interacting with FMs
- Identify the basic types of prompt techniques, including zero-shot and few-shot learning (a brief sketch contrasting the two appears after this list)
- Apply advanced prompt techniques when necessary for your use case
- Identify which prompt techniques are best suited for specific models
- Identify potential prompt misuses
- Analyze potential bias in FM responses and design prompts that mitigate that bias
- Identify the components of a generative AI application and how to customize a foundation model (FM)
- Describe Amazon Bedrock foundation models, inference parameters, and key Amazon Bedrock APIs (see the boto3 sketch after this list)
- Identify Amazon Web Services (AWS) offerings that help with monitoring, securing, and governing your Amazon Bedrock applications
- Describe how to integrate LangChain with large language models (LLMs), prompt templates, chains, chat models, text embeddings models, document loaders, retrievers, and Agents for Amazon Bedrock (a LangChain chain sketch appears after this list)
- Describe architecture patterns that can be implemented with Amazon Bedrock for building generative AI applications
- Apply the concepts to build and test sample use cases that leverage the various Amazon Bedrock models, LangChain, and the Retrieval Augmented Generation (RAG) approach
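To give a flavor of the prompt-engineering objectives above, here is a minimal sketch contrasting a zero-shot prompt with a few-shot prompt. The review texts and sentiment labels are invented purely for illustration.

```python
# A minimal sketch of zero-shot vs. few-shot prompting (illustrative text only;
# the reviews and labels below are hypothetical).

# Zero-shot: the model receives the task description but no worked examples.
zero_shot_prompt = (
    "Classify the sentiment of the following customer review as Positive or Negative.\n"
    "Review: The checkout process was slow and support never replied.\n"
    "Sentiment:"
)

# Few-shot: a handful of labeled examples precede the new input, which often
# steers the model toward the expected label set and output format.
few_shot_prompt = (
    "Classify the sentiment of each customer review as Positive or Negative.\n\n"
    "Review: Delivery was fast and the packaging was perfect.\n"
    "Sentiment: Positive\n\n"
    "Review: The item arrived broken and the refund took weeks.\n"
    "Sentiment: Negative\n\n"
    "Review: The checkout process was slow and support never replied.\n"
    "Sentiment:"
)
```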
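As a taste of the Amazon Bedrock API material, here is a minimal sketch of invoking a Bedrock text model with the AWS SDK for Python (boto3), including a few inference parameters. The model ID, Region, and parameter values are assumptions for illustration; model access and credentials must already be set up in your account.

```python
# A minimal sketch of calling an Amazon Bedrock foundation model with boto3.
# The model ID, Region, and parameter values are assumptions; available models
# and their request schemas vary by account and AWS Region.
import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",  # assumed model ID for illustration
    body=json.dumps({
        "inputText": "Summarize the benefits of managed foundation model services in two sentences.",
        "textGenerationConfig": {
            "maxTokenCount": 256,  # inference parameter: cap on response length
            "temperature": 0.5,    # inference parameter: sampling randomness
            "topP": 0.9,           # inference parameter: nucleus sampling cutoff
        },
    }),
)

result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```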
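Finally, as an example of the LangChain integration objective, here is a minimal sketch that chains a prompt template with a Bedrock-hosted chat model. The langchain_aws package, model ID, and parameter values are assumptions; LangChain's Bedrock integrations evolve, so treat this as a sketch rather than a definitive recipe.

```python
# A minimal sketch of composing a LangChain prompt template with an Amazon
# Bedrock chat model. Package names and the model ID are assumptions.
from langchain_aws import ChatBedrock
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

# Bedrock-hosted chat model (availability depends on your account and Region).
llm = ChatBedrock(
    model_id="anthropic.claude-3-haiku-20240307-v1:0",
    model_kwargs={"temperature": 0.3, "max_tokens": 300},
)

# Prompt template with a single input variable.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise assistant that answers questions about AWS services."),
    ("human", "{question}"),
])

# Compose prompt -> model -> string output into one runnable chain.
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"question": "What kinds of tasks can a foundation model handle?"}))
```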