LLMs for IT Professionals: Building, Fine-tuning, and Deploying

Take your development skills to the next level with practical AI training designed for experienced developers. In this microcredential, you’ll dive into large language models (LLMs), transformers, Python AI tools, and real-world datasets while learning how to build, fine-tune, and deploy AI applications.

The program guides you through end-to-end pipelines, API integration, testing and QA, and best practices for responsible AI deployment, so you can confidently bring AI into your projects. By the end of the course, you’ll have the practical expertise to create robust, real-world LLM applications, integrate them seamlessly into your workflows, and drive AI innovation in your team or organization.

Applications now closed

About this course

Course overview

  • Hands-On LLM Development: Gain practical expertise in developing and deploying Large Language Model (LLM)-based applications, including working with transformers, datasets, and Python-based AI models.
  • Fine-Tuning and Integration: Master techniques for fine-tuning LLMs, optimizing performance, and integrating them into applications through APIs and custom solutions.
  • Deployment and Monitoring: Explore strategies for securely deploying, testing, and monitoring LLMs in production environments, ensuring robust and reliable performance.
  • Ethical and Responsible AI: Address ethical considerations in AI, including bias detection and mitigation, while applying responsible AI practices.

Modules

  • Module 1: Introduction and Fundamentals
    • Understand transformer architecture, including key components like attention mechanisms and encoder-decoder structures.
    • Explore popular transformer models such as BERT, GPT, and T5, along with real-world applications.
    • Learn the capabilities and training methodologies of LLMs, and how they differ from traditional ML/NLP models.
    • Hands-on assignment: Use pre-trained LLMs for tasks like text generation, summarization, and sentiment analysis (a minimal illustrative sketch follows the module list).
  • Module 2: Building and Fine-Tuning LLMs
    • Learn to source, clean, and preprocess datasets, applying tokenization, padding, and batch processing techniques.
    • Fine-tune LLMs on custom datasets and optimize performance using parameter-efficient techniques such as adapter layers, LoRA, and QLoRA.
    • Hands-on assignment: Fine-tune an LLM and address common optimization challenges (a tokenization and LoRA configuration sketch follows the module list).
  • Module 3: Leveraging LLMs Through APIs and Integration
    • Access and utilize API-based LLM services, understanding authentication, rate limits, and usage costs.
    • Build LLM-powered applications like chatbots and summarizers, evaluating output quality with metrics such as BLEU, ROUGE, and perplexity.
    • Integrate LLMs into backend and frontend applications, ensuring secure API usage and data privacy.
    • Hands-on assignment: Create a functional application using an LLM API and deploy it in an end-to-end pipeline (an illustrative API call follows the module list).
  • Module 4: Utilization of RAG and MCP
    • Understand the architecture and use-cases of Retrieval-Augmented Generation (RAG), including how it combines knowledge retrieval with LLM-based generation.
    • Learn to set up and integrate RAG frameworks with existing LLMs to enhance factual accuracy and reduce hallucinations.
    • Explore Memory-Contextual Prompting (MCP) techniques to provide dynamic, personalized responses in applications like chatbots and assistants.
    • Compare RAG and MCP with other architectures, evaluating trade-offs in performance, scalability, and deployment.
    • Hands-on assignment: Build a simple RAG- or MCP-enhanced application using either an open-source framework or an API-based knowledge retrieval system (a minimal retrieval-augmented prompting sketch follows the module list).
  • Module 5: Good Practices and Ethics
    • Address ethical considerations in AI, including data privacy, fairness, and bias detection/mitigation.
    • Learn responsible AI practices to ensure ethical and reliable model deployment.
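
To give a flavour of the hands-on work, the sketches below are illustrative only and are not course material. The first relates to Module 1: it applies pre-trained models to sentiment analysis and summarization through the Hugging Face transformers pipeline API. The summarization checkpoint named here is an assumed, commonly used default, not one prescribed by the course.

```python
# Illustrative sketch for Module 1: pre-trained models via the `transformers`
# pipeline API. Model choices are assumed defaults, not course requirements.
from transformers import pipeline

# Sentiment analysis with the pipeline's default pre-trained classifier
sentiment = pipeline("sentiment-analysis")
print(sentiment("The onboarding session was clear and well organized."))

# Abstractive summarization with a small pre-trained encoder-decoder model
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
article = (
    "Large language models are trained on massive text corpora and can be "
    "adapted to downstream tasks such as summarization, translation, and "
    "question answering with little or no task-specific training."
)
print(summarizer(article, max_length=40, min_length=10, do_sample=False))
```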
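
For Module 2, the next sketch shows tokenization with padding and truncation, then attaches a LoRA adapter with the peft library so that only a small fraction of parameters is trained. The base checkpoint (gpt2) and all hyperparameters are illustrative assumptions.

```python
# Illustrative sketch for Module 2: tokenization and a LoRA adapter via `peft`.
# The base model and hyperparameters are assumptions, not course-set values.
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = "gpt2"                         # stand-in base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token   # GPT-2 ships without a pad token

# Tokenize a small batch with padding and truncation
texts = ["Ticket: VPN drops every hour.", "Ticket: password reset request."]
batch = tokenizer(texts, padding=True, truncation=True, max_length=64,
                  return_tensors="pt")

# Wrap the model with LoRA adapters so fine-tuning updates few parameters
model = AutoModelForCausalLM.from_pretrained(base_model)
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                         target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # shows the reduced trainable share
```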
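
For Module 3, the sketch below calls a hosted LLM over a generic REST API with an authentication header and a simple back-off when the service returns HTTP 429 (rate limited). The endpoint URL, environment variable, and response field are hypothetical placeholders; every real provider's API differs in the details.

```python
# Illustrative sketch for Module 3: calling an API-based LLM service.
# Endpoint, env var, and payload/response shapes are hypothetical.
import os
import time
import requests

API_URL = "https://api.example-llm.com/v1/generate"   # hypothetical endpoint
API_KEY = os.environ["EXAMPLE_LLM_API_KEY"]           # never hard-code keys

def generate(prompt: str, retries: int = 3) -> str:
    headers = {"Authorization": f"Bearer {API_KEY}"}
    payload = {"prompt": prompt, "max_tokens": 200}
    for attempt in range(retries):
        resp = requests.post(API_URL, json=payload, headers=headers, timeout=30)
        if resp.status_code == 429:          # rate limited: back off and retry
            time.sleep(2 ** attempt)
            continue
        resp.raise_for_status()
        return resp.json()["text"]           # field name varies by provider
    raise RuntimeError("Rate limit not cleared after retries")

print(generate("Summarize this incident report in two sentences: ..."))
```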
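
For Module 4, the final sketch illustrates only the retrieve-then-generate pattern behind RAG: a toy TF-IDF retriever selects the most relevant passage, which is then spliced into the prompt sent to an LLM. Production systems typically use dense embeddings and a vector store; the documents and question here are invented examples.

```python
# Illustrative sketch for Module 4: a toy retrieval step for RAG using TF-IDF.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Password resets are handled through the self-service portal.",
    "VPN access requires multi-factor authentication to be enabled.",
    "Quarterly maintenance windows are announced two weeks in advance.",
]
question = "How do I reset my password?"

# Score each document against the question and keep the best match
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([question])
scores = cosine_similarity(query_vector, doc_vectors)[0]
best_passage = documents[scores.argmax()]

# Ground the generation step in the retrieved context
prompt = (
    "Answer the question using only the context below.\n"
    f"Context: {best_passage}\n"
    f"Question: {question}"
)
print(prompt)   # this prompt would then be sent to an LLM of your choice
```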

Course schedule

Live sessions are mandatory; however, if you require an exception, please speak with the facilitator.
  • Week 1 | Participant Onboarding
    • No live sessions
  • Week 2 | Module 1: Introduction and Fundamentals
    • January 13 & 15, 2026 (9:00AM - 12:00PM)
  • Week 3 | Module 2: Building and Fine-Tuning LLMs
    • January 20 & 22, 2026 (9:00AM - 12:00PM)
  • Week 4 | Break
    • No live sessions
  • Week 5 | Module 3: Leveraging LLMs Through APIs and Integration
    • February 3 & 5, 2026 (9:00AM - 12:00PM)
  • Week 6 | Module 4: Utilization of RAG and MCP
    • February 10 & 12, 2026 (9:00AM - 12:00PM)
  • Week 7 | Module 5: Good Practices and Ethics
    • February 17 & 19, 2026 (9:00AM - 12:00PM)
  • Weeks 8-9 | Capstone Project
    • February 24, 2026 (9:00AM - 12:00PM)
  • Weeks 10-11 | Grading & Project Feedback

Course requirements

This program leverages open-source tools and libraries, such as Python, Hugging Face, PyTorch, and TensorFlow, to provide accessible and practical learning experiences. No paid subscriptions are required.
  • Other software and hardware required:
    • Google account
    • Internet access
    • Laptop or desktop device (macOS or Windows)

This course is designed for IT, Programming, and Development professionals.

This course is for:

  • AI and Software Developers: Professionals with Python or programming experience who want to enhance their skills in building and deploying LLM-based applications for workplace solutions.
  • Data Scientists and Analysts: Practitioners aiming to apply advanced LLM techniques, such as fine-tuning and data preparation, to create impactful AI-driven insights in their organizations.
  • Tech Professionals Transitioning to AI: Engineers or IT specialists looking to upskill in AI to bring transformative LLM applications to their workplace.

Have you experimented with AI tools, have Python or programming experience, and want to dive deeper? This course is designed for you!

Application requirements

All applications will be reviewed by a committee, and only up to 30 participants will be accepted into this program. The following elements will be reviewed and scored:

  • Professional experience and its relevance to your current role
  • Experience with Python and other programming languages
  • Your motivation for AI training and upskilling
  • Potential impact on your productivity and professional work after completion of this course
  • Past experience with online learning and time commitment towards completing the course

*Please note that only residents of Nova Scotia will be accepted into the program.

Scholarship details

Applications close December 2, 2025 at 12:00pm AST. Applicants will be accepted on a rolling basis.

The full course fee is $739+HST; however, thanks to generous funding support from ACOA, Nova Scotians can access scholarship pricing: $89+HST for DNS members and $129+HST for non-members. Apply below to access this pricing.

Your Instructors

Adnane Ait Nasser
AI Digital Research Consultant | ACENET

Yashar Monfared
Digital Research Consultant, Engineering | ACENET

Akshay Ghosh
Digital Research Consultant | ACENET

Fred Allen, MEd
Professional Studies Manager | StFX
