Course Overview of Instructor‑Led Workshop: Generative AI with Diffusion Models

Thanks to advances in computing power and scientific methods, generative AI is more accessible than ever. Its applications span creative content generation, data augmentation, simulation and planning, anomaly detection, drug discovery, and personalized recommendations. This instructor‑led workshop takes a deeper look at denoising diffusion models, a leading approach behind modern text‑to‑image systems. Learners will architect and train U‑Nets, implement forward and reverse diffusion, apply key optimizations, and integrate CLIP encodings to control outputs from English prompts.

After completing Generative AI with Diffusion Models, participants will be able to:

  • Build a U‑Net to generate images from pure noise.
  • Improve image quality using the denoising diffusion process.
  • Control image output with context embeddings.
  • Generate images from English text prompts using CLIP.

Key Features of Generative AI with Diffusion Models:

  • Hands‑on construction of U‑Net and diffusion pipelines.

  • Practical optimizations: GroupNorm, GELU, Rearrange Pooling, sinusoidal position embeddings.

  • Classifier‑free guidance for conditional generation.

  • CLIP‑based text conditioning for prompt‑to‑image workflows.

  • Clear, modular progression from U‑Net to full diffusion systems.

  • Instructor‑led format with real‑time guidance.

Who Should Attend Generative AI with Diffusion Models?

  • Developers and ML practitioners implementing generative imaging systems.
  • Data scientists exploring diffusion for augmentation and simulation.
  • Researchers and engineers building text‑to‑image applications.
  • Teams evaluating modern generative AI workflows for production.

Prerequisites of Generative AI with Diffusion Models:

  • Basic understanding of deep learning concepts.
  • Familiarity with a deep learning framework (TensorFlow, PyTorch, or Keras); this course uses PyTorch.

Why choose CloudThat as your training partner?

  • Expert‑led instruction with step‑by‑step building of diffusion pipelines.
  • Practical labs on U‑Nets, diffusion, optimization tricks, and guidance methods.
  • Clear integration of CLIP for prompt‑driven image generation.
  • Emphasis on reproducible engineering practices and modular code.
  • Mentorship, Q&A, and guidance on next steps after the workshop.
  • Up‑to‑date curriculum reflecting modern generative AI best practices.
  • Trusted by enterprise teams for hands‑on, production‑oriented AI training.

Course Outline of Generative AI with Diffusion Models:

  • Build a U‑Net architecture.
  • Train a model to remove noise from an image.
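The two steps above can be sketched in PyTorch. The following is a minimal illustration, not the course's exact architecture: a tiny two‑level U‑Net (the class name `TinyUNet` and all layer sizes are assumptions for illustration) that takes a noisy image and predicts the noise it contains.

```python
import torch
from torch import nn

class TinyUNet(nn.Module):
    """Minimal sketch of a U-Net: one downsampling level, one upsampling level."""
    def __init__(self, ch=16):
        super().__init__()
        self.down1 = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.down2 = nn.Sequential(nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU())
        self.up1 = nn.Sequential(nn.ConvTranspose2d(ch * 2, ch, 2, stride=2), nn.ReLU())
        self.out = nn.Conv2d(ch * 2, 1, 3, padding=1)  # ch * 2 channels after the skip concat

    def forward(self, x):
        d1 = self.down1(x)                            # full-resolution features
        d2 = self.down2(d1)                           # downsampled path
        u1 = self.up1(d2)                             # upsample back to input resolution
        return self.out(torch.cat([u1, d1], dim=1))   # skip connection, then project

noisy = torch.randn(2, 1, 28, 28)
pred_noise = TinyUNet()(noisy)  # same shape as the input: (2, 1, 28, 28)
```

Training then minimizes a loss (typically MSE) between `pred_noise` and the noise that was actually added to the clean image.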

  • Define the forward diffusion function.
  • Update the U‑Net architecture to accommodate a timestep.
  • Define a reverse diffusion function.
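The forward diffusion function from the steps above can be written in closed form. This is a minimal sketch assuming a linear beta schedule; the schedule constants (`1e-4`, `0.02`, `T = 200`) are illustrative choices, not values prescribed by the course.

```python
import torch

# Linear beta schedule (illustrative constants) and its cumulative products.
T = 200
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)  # cumulative signal retention per timestep

def forward_diffusion(x0, t):
    """Sample x_t ~ N(sqrt(alpha_bar_t) * x0, (1 - alpha_bar_t) * I) in one step."""
    noise = torch.randn_like(x0)
    ab = alpha_bar[t].view(-1, 1, 1, 1)
    xt = ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise
    return xt, noise  # the sampled noise is the U-Net's training target

x0 = torch.ones(4, 1, 28, 28)
xt, eps = forward_diffusion(x0, torch.tensor([0, 50, 100, 199]))
```

The reverse diffusion function inverts one step at a time: it uses the U‑Net's noise prediction at timestep t to estimate a slightly less noisy image at timestep t − 1.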

  • Implement Group Normalization.
  • Implement GELU.
  • Implement Rearrange Pooling.
  • Implement Sinusoidal Position Embeddings.
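Of the optimizations above, sinusoidal position embeddings are the one that changes how the model sees the timestep. This is a minimal sketch (the class name `SinusoidalEmbedding` and the dimension 32 are assumptions): each integer timestep is mapped to a dense vector of sines and cosines at geometrically spaced frequencies, the same idea used for positions in transformers.

```python
import math
import torch
from torch import nn

class SinusoidalEmbedding(nn.Module):
    """Map integer timesteps to dense sin/cos feature vectors."""
    def __init__(self, dim):
        super().__init__()
        self.dim = dim

    def forward(self, t):
        half = self.dim // 2
        # Geometrically spaced frequencies from 1 down to ~1/10000.
        freqs = torch.exp(-math.log(10000.0) * torch.arange(half) / half)
        angles = t.float()[:, None] * freqs[None, :]          # (batch, half)
        return torch.cat([angles.sin(), angles.cos()], dim=-1)  # (batch, dim)

emb = SinusoidalEmbedding(32)(torch.tensor([0, 10, 100]))  # shape (3, 32)
```

Because nearby timesteps get similar embeddings, the U‑Net can smoothly interpolate its denoising behavior across noise levels.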

  • Add categorical embeddings to a U‑Net.
  • Train a model with a Bernoulli mask.
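The Bernoulli mask above is the training half of classifier‑free guidance. This is a minimal sketch (function names `drop_labels` and `guided_noise` are illustrative): during training the mask randomly zeroes the label embedding so one network learns both conditional and unconditional denoising, and at sampling time the two predictions are blended.

```python
import torch

def drop_labels(label_emb, drop_prob=0.1):
    """Randomly zero whole label embeddings with probability drop_prob."""
    keep = torch.bernoulli(torch.full((label_emb.shape[0], 1), 1.0 - drop_prob))
    return label_emb * keep  # zeroed rows act as the "no label" context

def guided_noise(eps_cond, eps_uncond, w=2.0):
    # w = 0 recovers the plain conditional prediction; larger w pushes
    # samples harder toward the conditioning signal.
    return (1.0 + w) * eps_cond - w * eps_uncond

eps_c, eps_u = torch.randn(2, 1, 28, 28), torch.randn(2, 1, 28, 28)
eps = guided_noise(eps_c, eps_u, w=2.0)
```

The guidance weight w trades diversity for fidelity to the condition, which is why it is usually exposed as a user‑tunable sampling parameter.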

  • Learn how to use CLIP encodings.
  • Use CLIP to create a text‑to‑image neural network.
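Conceptually, CLIP conditioning replaces the categorical label with a text encoding. This is a minimal sketch of that wiring only: the real workflow obtains the encoding from a pretrained CLIP text encoder, while here a random 512‑dimensional vector stands in for a CLIP encoding of a prompt (the dimensions and the `text_proj` layer are assumptions for illustration).

```python
import torch
from torch import nn

clip_dim, ctx_dim = 512, 32
text_proj = nn.Linear(clip_dim, ctx_dim)  # project CLIP space to the model's context size

clip_encoding = torch.randn(1, clip_dim)  # stand-in for a real CLIP text encoding
context = text_proj(clip_encoding)        # (1, 32) conditioning vector
```

Inside the U‑Net, this context vector is typically combined with the timestep embedding at each resolution level, so the same denoiser steers its output toward whatever the prompt describes.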

Certification Details:

  • Participants who complete the assessment and final review earn a certificate of competency for the workshop.


Course ID: 26998


FAQs:

  • How long is the workshop? 8 hours (single‑day format).
  • Which tools are used? PyTorch for modeling and training; CLIP for text‑to‑image conditioning.
  • What are the prerequisites? Basic deep learning knowledge and familiarity with a major framework; PyTorch is used in this course.
  • Can I control the generated images? Yes, via classifier‑free guidance and CLIP encodings for text‑conditioned outputs.
  • Is there a certificate? Yes, upon completing the assessment and final review.
