Course Overview:

With the incredible capabilities of Large Language Models (LLMs), enterprises are eager to integrate them into their products and applications for use cases like text generation, large-scale document analysis, and chatbot assistants. 

The fastest way to begin leveraging LLMs is through prompt engineering, a foundational technique that underpins advanced methods such as Retrieval-Augmented Generation (RAG) and Parameter-Efficient Fine-Tuning (PEFT). 

In this instructor-led workshop, participants will work with NVIDIA’s NIM deployment of Llama 3.1 alongside the popular LangChain library. Through practical, hands-on projects, learners will gain the skills to design, build, and deploy powerful LLM-based applications using prompt engineering. 

After completing this course, participants will be able to:

  • Apply iterative prompt engineering best practices to create robust LLM applications.
  • Use LangChain to organize, compose, and optimize LLM workflows.
  • Write Python application code for generative tasks, document analysis, and chatbot development.
  • Build applications leveraging structured outputs from LLMs.
  • Develop and integrate LLM-powered agents capable of tool use and real-time data integration.
  • Deploy NVIDIA LLM NIM with Llama 3.1 for scalable enterprise-ready applications.


Key Features:

  • Hands-On Prompt Engineering 

    • Practical exercises in crafting and refining prompts. 
    • Build applications covering generation, analysis, and chatbots. 
  • LangChain & NVIDIA Ecosystem Integration 

    • Learn how to build reusable, composable LLM workflows. 
    • Use NVIDIA NIM to deploy Llama 3.1 efficiently. 
  • Real-World Use Cases 

    • Document tagging, structured output, chatbots with persona-driven roles, and tool integration. 
  • Project-Based Learning 

    • Multiple mini-projects after each module. 
    • Final assessment project integrating prompts, LangChain, structured outputs, and agentic tools. 
  • Certification of Competency 

    • Earn an NVIDIA DLI certificate to validate your practical skills. 

Who Should Attend?

  • Python developers exploring LLM application development
  • Data scientists and AI engineers
  • Solution architects designing enterprise generative AI solutions
  • Technical professionals seeking to integrate LLMs into workflows

Prerequisites:

  • Intermediate-level Python programming skills
  • Basic understanding of LLM fundamentals (tokenization, embeddings, etc.)
  • Familiarity with APIs and JSON-based workflows recommended
Why Choose CloudThat as Your Training Partner?

  • NVIDIA-Certified Training Partner with expertise in Generative AI.
  • Industry-recognized trainers with real-world AI/ML experience.
  • Hands-on learning with GPU-powered cloud environments provided.
  • Customized learning paths for both beginners and advanced professionals.
  • Interactive sessions with live coding, mini-projects, and Q&A.
  • Career support through guidance on AI roles and certification pathways.
  • Always up-to-date content reflecting NVIDIA’s cutting-edge LLM technologies.

    Course Outline:

    • Orient to the main workshop topics, schedule, and prerequisites.
    • Learn why prompt engineering is core to interacting with Large Language Models (LLMs).
    • Discuss how prompt engineering can be used to develop many classes of LLM-based applications.
    • Learn about NVIDIA LLM NIM, used to deploy the Llama 3.1 LLM used in the workshop.

    • Get familiar with the workshop environment.
    • Create and view responses from your first prompts using the OpenAI API and LangChain.
    • Learn how to stream LLM responses and send prompts to LLMs in batches, comparing the performance differences.
    • Begin practicing the process of iterative prompt development.
    • Create and use your first prompt templates.
    • Complete a mini-project where you perform a combination of analysis and generative tasks on a batch of inputs.
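The batch-of-inputs pattern above can be sketched as follows. This is an illustrative stand-in, not workshop code: `fake_llm` and the keyword-based classification are stubs for a real model call (in the workshop, a NIM-hosted Llama 3.1 invoked via LangChain), and `string.Template` plays the role of a LangChain prompt template.

```python
# Illustrative sketch: a reusable prompt template applied over a batch of
# inputs. `fake_llm` is a stand-in for a real LLM call; everything here is
# plain Python so the pattern is visible without any external dependency.

from string import Template

# A reusable prompt template, analogous to LangChain's ChatPromptTemplate.
sentiment_template = Template(
    "Classify the sentiment of the following review as positive or negative.\n"
    "Review: $review\nSentiment:"
)

def fake_llm(prompt: str) -> str:
    """Stub model: real code would send `prompt` to the LLM endpoint."""
    return "positive" if "love" in prompt.lower() else "negative"

reviews = [
    "I love this product, it exceeded my expectations.",
    "Terrible battery life and poor build quality.",
]

# Batch the work: format each input through the template, then invoke.
prompts = [sentiment_template.substitute(review=r) for r in reviews]
results = [fake_llm(p) for p in prompts]
print(results)  # → ['positive', 'negative']
```

With a real model, the list comprehension over `fake_llm` would become a single batched call, which is where the performance comparison in this module comes from.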

    • Learn about LangChain runnables, and the ability to compose them into chains using LangChain Expression Language (LCEL).
    • Write custom functions and convert them into runnables that can be included in LangChain chains.
    • Compose multiple LCEL chains into a single larger application chain.
    • Exploit opportunities for parallel work by composing parallel LCEL chains.
    • Complete a mini-project where you perform a combination of analysis and generative tasks on a batch of inputs using LCEL and parallel execution.
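The composition and parallelism ideas in this module can be sketched in plain Python. This is an analogy, not LCEL itself: `chain` mimics what LCEL's `|` operator does with runnables, and `parallel` mimics `RunnableParallel`, using thread-pool fan-out in place of LangChain's scheduler.

```python
# Illustrative sketch of chain composition and parallel fan-out, mirroring
# what LCEL does with the `|` operator and RunnableParallel. Plain Python
# stand-ins are used; no model call is made.

from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def chain(*steps):
    """Compose callables left-to-right, like `step1 | step2` in LCEL."""
    return lambda x: reduce(lambda acc, f: f(acc), steps, x)

# Two independent "analysis" branches over the same input.
word_count = chain(str.split, len)
upper_case = chain(str.strip, str.upper)

def parallel(**branches):
    """Run named branches concurrently and collect results in a dict,
    the same output shape RunnableParallel produces."""
    def run(x):
        with ThreadPoolExecutor() as pool:
            futures = {name: pool.submit(fn, x) for name, fn in branches.items()}
            return {name: f.result() for name, f in futures.items()}
    return run

pipeline = parallel(words=word_count, shouted=upper_case)
print(pipeline("  hello parallel chains  "))
# → {'words': 3, 'shouted': 'HELLO PARALLEL CHAINS'}
```

The payoff is the same as in LCEL: independent branches over one input run concurrently instead of sequentially, which matters when each branch is a slow LLM call.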

    • Learn about two of the core chat message types, human and AI messages, and how to use them explicitly in application code.
    • Provide chat models with instructive examples by way of a technique called few-shot prompting.
    • Work explicitly with the system message, which will allow you to define an overarching persona and role for your chat models.
    • Use chain-of-thought prompting to augment your LLM's ability to perform tasks requiring complex reasoning.
    • Manage messages to retain conversation history and enable chatbot functionality.
    • Do a mini-project where you build a simple yet flexible chatbot application capable of assuming a variety of roles.
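The chatbot skeleton from this module can be sketched as follows. The message roles (`system`, `user`, `assistant`) follow the common chat-API convention; `fake_chat_model` is a stub, where the real workshop code would call Llama 3.1 through LangChain.

```python
# Illustrative chatbot skeleton: a system message sets the persona, and the
# growing message list retains conversation history. `fake_chat_model` is a
# stand-in for a real chat model.

def fake_chat_model(messages: list) -> str:
    """Stub: a real model would condition on the whole message list."""
    last_user = next(m for m in reversed(messages) if m["role"] == "user")
    return f"({len(messages)} messages seen) You said: {last_user['content']}"

class Chatbot:
    def __init__(self, persona: str):
        # The system message defines an overarching role for the model.
        self.messages = [{"role": "system", "content": persona}]

    def ask(self, text: str) -> str:
        self.messages.append({"role": "user", "content": text})
        reply = fake_chat_model(self.messages)
        # Appending the AI message back is what gives the bot "memory".
        self.messages.append({"role": "assistant", "content": reply})
        return reply

bot = Chatbot("You are a concise Python tutor.")
bot.ask("What is a list?")
print(bot.ask("And a tuple?"))  # the history now holds both turns
```

Swapping the persona string is all it takes to make the same code assume a different role, which is the flexibility this module's mini-project exercises.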

    • Explore some basic methods for using LLMs to generate structured data in batch for downstream use.
    • Generate structured output through a combination of Pydantic classes and LangChain's `JsonOutputParser`.
    • Learn how to extract data from long-form text and tag it according to categories you specify.
    • Do a mini-project where you use structured data generation techniques to perform data extraction and document tagging on an unstructured text document.
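The shape of structured-output generation can be sketched with the standard library alone. Here `json` plus a `dataclass` stand in for the Pydantic-class-plus-`JsonOutputParser` combination the workshop actually uses, and `fake_llm` returns canned JSON in place of a real generation.

```python
# Illustrative sketch of structured output: instruct the model to emit JSON
# matching a schema, then parse and validate it for downstream use.

import json
from dataclasses import dataclass

@dataclass
class DocumentTags:
    title: str
    topics: list
    sentiment: str

SCHEMA_PROMPT = (
    "Extract the title, topics, and overall sentiment from the document.\n"
    'Respond ONLY with JSON like {"title": ..., "topics": [...], "sentiment": ...}'
)

def fake_llm(prompt: str) -> str:
    """Stub: a real model would generate this JSON from the document text."""
    return '{"title": "Q3 Report", "topics": ["revenue", "growth"], "sentiment": "positive"}'

raw = fake_llm(SCHEMA_PROMPT)
tags = DocumentTags(**json.loads(raw))  # parse, then light validation via the dataclass
print(tags.topics)  # → ['revenue', 'growth']
```

The point of the pattern: once the output is parsed into a typed object rather than free text, it can be batched into databases, filters, or tagging pipelines.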

    • Create LLM-external functionality called tools, and make your LLM aware of their availability for use.
    • Create an agent capable of reasoning about when tool use is appropriate, and integrating the result of tool use into its responses.
    • Do a mini-project where you create an LLM agent capable of utilizing external API calls to augment its responses with real-time data.
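The agent loop in this module can be sketched as below. The keyword-based routing and the hard-coded city are deliberate simplifications: in a real tool-calling agent the LLM itself decides when to call a tool and with what arguments, and `get_current_temperature` would hit a live API rather than return a fixed value.

```python
# Illustrative sketch of an agent loop: the "model" decides whether a tool is
# needed, the tool runs outside the LLM, and its result is folded back into
# the final answer. The routing here is a keyword stub standing in for the
# LLM's own tool-call decision.

def get_current_temperature(city: str) -> str:
    """A tool: in a real app this would call a live weather API."""
    return f"21°C in {city}"

TOOLS = {"get_current_temperature": get_current_temperature}

def fake_agent(question: str) -> str:
    # Step 1: "reason" about whether a tool applies (stubbed with a keyword).
    if "temperature" in question.lower():
        result = TOOLS["get_current_temperature"]("Paris")
        # Step 2: integrate the tool result into the response.
        return f"Using live data: it is currently {result}."
    return "I can answer that directly, no tool needed."

print(fake_agent("What is the temperature right now?"))
# → Using live data: it is currently 21°C in Paris.
```

Registering tools in a dict keyed by name mirrors how tool-calling frameworks expose a catalog of callable functions to the model.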

    • Review key learnings and answer questions.
    • Earn a certificate of competency for the workshop.
    • Complete the workshop survey.
    • Get recommendations for the next steps to take in your learning journey.

    Certification Details:

      Participants who complete all modules and the final project will receive an NVIDIA DLI Certificate of Competency in LLM Application Development.

    Course ID: 26350

    FAQs:

    Who should attend this workshop?
    Developers, data scientists, and AI engineers interested in applying prompt engineering for enterprise-ready LLM solutions.

    What prior knowledge is required?
    Basic familiarity with LLM concepts is recommended, but deep ML expertise is not required.

    Which tools and frameworks are covered?
    NVIDIA NIM, LangChain, Llama 3.1, Pydantic, and agentic frameworks.

    What is the format and duration of the workshop?
    A 1-day, instructor-led workshop with lectures, hands-on labs, and coding projects.

    Does the workshop include hands-on practice?
    Yes, every module includes coding labs and mini-projects, culminating in a final integrated application.

    How valuable are these skills in the job market?
    AI/ML professionals with LLM expertise typically earn 30–50% higher salaries than traditional software engineers, depending on role and region.

    Is a certificate provided?
    Yes, an NVIDIA DLI Certificate of Competency upon successful completion.

    What equipment do I need?
    A laptop with Chrome/Firefox. NVIDIA provides cloud-based GPU-accelerated servers for hands-on labs.

    Is the workshop suitable for beginners?
    It is designed for Python developers with intermediate coding skills; prior LLM exposure is helpful but not mandatory.

    How will this workshop benefit my career?
    It provides practical, industry-relevant skills in Generative AI and prompt engineering, opening opportunities in AI engineering, solution architecture, and applied NLP.
