Course Overview of Building LLM Applications With Prompt Engineering:

With the incredible capabilities of large language models (LLMs), enterprises are eager to integrate them into their products and internal applications for a wide variety of use cases, including (but not limited to) text generation, large-scale document analysis, and chatbot assistants.

The fastest way to begin leveraging LLMs for diverse tasks is by using modern prompt engineering techniques. These techniques are also foundational for more advanced LLM-based methods such as Retrieval-Augmented Generation (RAG) and Parameter-Efficient Fine-Tuning (PEFT). In this workshop, learners will work with an NVIDIA language model NIM, powered by the open-source Llama‑3.1 large language model, alongside the popular LangChain library. The workshop will provide a foundational skill set for building a range of LLM-based applications using prompt engineering.

After completing Building LLM Applications With Prompt Engineering, participants will be able to:

  • Apply iterative prompt engineering best practices to create LLM-based applications for a variety of language-related tasks.
  • Use LangChain proficiently to organize and compose LLM workflows.
  • Write application code that harnesses LLMs for generative tasks, document analysis, chatbot applications, and more.

Key Features of Building LLM Applications With Prompt Engineering:

  • Instructor-led, hands-on workshop focused on practical prompt engineering.

  • Use of NVIDIA language model NIM (powered by Llama‑3.1) and LangChain.

  • Coverage of structured outputs, LCEL & runnables, and tool/agent patterns.

  • Mini-projects for batch processing, chatbots with roles/personas, and structured extraction.

  • Realistic exercises in streaming, batching, and iterative prompt development.

  • Final assessment and guidance on next steps in your learning journey.

Who Should Attend?

  • Developers building LLM-based applications and intelligent systems.
  • AI/ML engineers designing automation, retrieval, or chat workflows.
  • Technical professionals who want practical skills in prompt engineering.
  • Teams seeking repeatable LLM patterns using LangChain and NIM.

Prerequisites of Building LLM Applications With Prompt Engineering:

This workshop is intended for intermediate Python developers who are familiar with LLM fundamentals.

Why choose CloudThat as your training partner?

  • Expert-led instruction focused on real-world prompt engineering.
  • Hands-on labs with NVIDIA NIM, Llama‑3.1, and LangChain.
  • Structured approach to LCEL, runnables, structured outputs, tools, and agents.
  • Mini-projects that translate directly to production-ready patterns.
  • Mentorship, Q&A, and guidance on next steps.
  • Regular updates aligned with modern LLM and LangChain best practices.
  • Trusted by enterprise teams for practical, scalable AI training.

Course Outline:

  • Orient to the main workshop topics, schedule, and prerequisites.
  • Learn why prompt engineering is core to interacting with Large Language Models (LLMs).
  • Discuss how prompt engineering can be used to develop many classes of LLM-based applications.
  • Learn about NVIDIA LLM NIM, which deploys the Llama 3.1 model used throughout the workshop.

  • Get familiar with the workshop environment.
  • Create and view responses from your first prompts using the OpenAI API and LangChain.
  • Learn how to stream LLM responses and send prompts in batches, comparing performance.
  • Begin practicing iterative prompt development.
  • Create and use your first prompt templates.
  • Mini-project: perform a combination of analysis and generative tasks on a batch of inputs.

  • Learn about LangChain runnables and compose them into chains using LangChain Expression Language (LCEL).
  • Write custom functions and convert them into runnables that can be included in LangChain chains.
  • Compose multiple LCEL chains into a single larger application chain.
  • Exploit opportunities for parallel work by composing parallel LCEL chains.
  • Mini-project: batch analysis and generation with LCEL parallel execution.

  • Learn the two core chat message types, human and AI, and how to use them explicitly in application code.
  • Provide instructive examples via few-shot prompting.
  • Work explicitly with the system message to define persona and role.
  • Use chain-of-thought prompting to improve complex reasoning tasks.
  • Manage conversation history for chatbot functionality.
  • Mini-project: build a flexible chatbot capable of assuming multiple roles.

  • Explore methods for generating structured data in batch for downstream use.
  • Generate structured output using Pydantic classes and LangChain’s JsonOutputParser.
  • Learn how to extract and tag data from long-form text.
  • Mini-project: perform data extraction and document tagging on an unstructured text document.

  • Create LLM-external functionality (“tools”) and make your LLM aware of them.
  • Create an agent that decides when to use tools and integrates tool results in responses.
  • Mini-project: build an LLM agent that calls external APIs to augment responses with real-time data.

  • Review key learnings and answer questions.
  • Earn a certificate of competency for the workshop.
  • Complete the workshop survey.
  • Get recommendations for the next steps in your learning journey.

Certification Details:

    Participants who complete the assessment will earn a certificate of competency for the workshop.

Course ID: 27003

FAQs:

Is the workshop instructor-led?
Yes, it’s an instructor-led workshop with live guidance and Q&A.

How long is the workshop?
8 hours (single-day workshop format).

Which tools and topics are covered?
NVIDIA NIM (with Llama 3.1) and LangChain, plus LCEL, structured outputs, and tools/agents.

What experience level is expected?
Intermediate: intended for developers with Python experience and LLM fundamentals.

Does the workshop include an assessment and certificate?
Yes, there is a final assessment with a certificate of competency upon completion.
