Edge AI Engineering - Weekly Labs

Week 1: Introduction and Setup

Lab 1: Raspberry Pi Configuration

Objectives:

  • Install Raspberry Pi OS using Raspberry Pi Imager
  • Configure basic settings (hostname, SSH, WiFi)
  • Learn essential Linux commands
  • Manage files between your computer and Raspberry Pi

Instructions:

  1. Download Raspberry Pi Imager on your computer
  2. Configure OS settings (enable SSH, set hostname, WiFi credentials)
  3. Boot your Raspberry Pi and confirm connectivity
  4. Learn how to use SSH for remote access
  5. Transfer files using SCP or FileZilla
  6. Update your Raspberry Pi OS (sudo apt update && sudo apt upgrade)
  7. Practice basic Linux commands (ls, cd, mkdir, cp, mv)
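
For steps 4 and 5, a typical session looks like the following, assuming the default user pi and the hostname you set in the Imager:

    ssh pi@raspberrypi.local                    # open a remote shell (use your hostname)
    scp notes.txt pi@raspberrypi.local:~/       # copy a file to the Pi
    scp pi@raspberrypi.local:~/photo.jpg .      # copy a file back to your computer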

Deliverable: Screenshot showing successful SSH connection to your Raspberry Pi

Lab 2: Development Environment Setup

Objectives:

  • Set up Python environment for development
  • Configure remote development tools
  • Install essential libraries
  • Test camera functionality

Instructions:

  1. Install Python essentials: pip install jupyter matplotlib numpy pillow

  2. Configure Jupyter Notebook for remote access:

    jupyter notebook --generate-config
    jupyter notebook --ip=0.0.0.0 --no-browser
  3. Connect the camera module (USB or CSI) to your Raspberry Pi

  4. Test camera functionality using command-line tools

  5. Write a simple Python script to capture an image

Deliverable: A simple Python script that captures and displays an image from your camera, plus a screenshot showing a successful image capture


Week 2: Image Classification Fundamentals

Lab 3: Working with Pre-trained Models

Objectives:

  • Install TensorFlow Lite runtime
  • Download and run MobileNet V2 model
  • Process and classify images
  • Understand model inputs and outputs

Instructions:

  1. Install TensorFlow Lite runtime: pip install tflite-runtime

  2. Download MobileNet V2 model:

    wget https://storage.googleapis.com/download.tensorflow.org/models/tflite_11_05_08/mobilenet_v2_1.0_224_quant.tgz
    tar xzf mobilenet_v2_1.0_224_quant.tgz
  3. Download labels file

  4. Create a Python script that:

    • Loads the TFLite model
    • Processes input images to 224x224 format
    • Runs inference on test images
    • Displays top-5 predicted classes with confidence scores
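
A minimal sketch of that script, assuming the quantized model from step 2 and a labels.txt with one class name per line:

    # classify.py -- top-5 image classification with tflite_runtime
    import numpy as np
    from PIL import Image
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path="mobilenet_v2_1.0_224_quant.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # the quantized MobileNet V2 expects a 224x224 RGB uint8 input
    img = Image.open("test.jpg").convert("RGB").resize((224, 224))
    interpreter.set_tensor(inp["index"], np.expand_dims(np.asarray(img, dtype=np.uint8), 0))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]

    labels = [line.strip() for line in open("labels.txt")]
    for i in np.argsort(scores)[::-1][:5]:            # top-5 classes
        print(f"{labels[i]}: {scores[i] / 255:.2f}")  # dequantize uint8 scores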

Deliverable: Python script that successfully classifies sample images with MobileNet V2 and a screenshot showing a successful result

Lab 4: Custom Dataset Creation

Objectives:

  • Create a simple custom dataset using Raspberry Pi camera
  • Organize images into classes
  • Prepare dataset for model training

Instructions:

  1. Create a web interface for image capture:
    • Use Flask to create a simple web server
    • Set up camera preview and capture functionality
    • Save captured images with appropriate filenames
  2. Capture at least 50 images per class for 3 classes
  3. Organize the dataset into an appropriate directory structure
  4. Document your dataset creation process
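
A sketch of the capture server for step 1, assuming OpenCV for the camera; saving under dataset/<class>/ gives you the directory structure step 3 asks for:

    # capture_server.py -- minimal Flask endpoint for dataset capture
    import os, time
    import cv2
    from flask import Flask

    app = Flask(__name__)
    cam = cv2.VideoCapture(0)

    @app.route("/capture/<label>")
    def capture(label):
        ok, frame = cam.read()
        if not ok:
            return "camera error", 500
        os.makedirs(f"dataset/{label}", exist_ok=True)
        path = f"dataset/{label}/{int(time.time() * 1000)}.jpg"
        cv2.imwrite(path, frame)
        return f"saved {path}"

    app.run(host="0.0.0.0", port=5000)    # then visit http://<pi-ip>:5000/capture/<class>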

Deliverable: Structured dataset with at least 3 classes and 50 images per class


Week 3: Custom Image Classification

Lab 5: Edge Impulse Model Training

Objectives:

  • Create an Edge Impulse project
  • Upload and process the dataset
  • Design and train a transfer learning model
  • Evaluate model performance

Instructions:

  1. Create an Edge Impulse account and a new project
  2. Upload your custom dataset
  3. Create an impulse design:
    • Set image size to 160x160
    • Use Transfer Learning for feature extraction
  4. Generate features for all images
  5. Train model using MobileNet V2
  6. Analyze model performance (accuracy, confusion matrix)
  7. Test model on validation data

Deliverable: Edge Impulse project link and screenshot of model performance metrics

Lab 6: Model Deployment to Raspberry Pi

Objectives:

  • Export trained model to TFLite format
  • Deploy model to Raspberry Pi
  • Create a real-time inference application
  • Optimize inference speed

Instructions:

  1. Export model as TensorFlow Lite (.tflite)
  2. Transfer the model to Raspberry Pi
  3. Create a Python application that:
    • Captures live images from the camera
    • Preprocesses images for the model
    • Runs inference and displays results
    • Shows confidence scores
  4. Implement a web interface for real-time classification
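
For step 3, the Lab 3 script wrapped around a video stream is most of the work; a sketch, assuming your exported model and a matching labels.txt:

    # live_classify.py -- continuous inference on camera frames (sketch)
    import cv2
    import numpy as np
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path="model.tflite")   # your exported model
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    h, w = inp["shape"][1:3]                 # model input size, e.g. 160x160
    labels = [line.strip() for line in open("labels.txt")]

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        x = cv2.resize(frame, (w, h))[:, :, ::-1]   # BGR -> RGB
        # float models may additionally need scaling, e.g. x / 255.0
        interpreter.set_tensor(inp["index"], np.expand_dims(x.astype(inp["dtype"]), 0))
        interpreter.invoke()
        scores = interpreter.get_tensor(out["index"])[0]
        best = int(np.argmax(scores))
        print(labels[best], float(scores[best]))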

Deliverable: Python script for real-time image classification with your custom model and a screenshot showing a successful result


Week 4: Object Detection Fundamentals

Lab 7: Pre-trained Object Detection

Objectives:

  • Understand object detection architecture
  • Run pre-trained SSD-MobileNet model
  • Process detection outputs
  • Visualize detected objects

Instructions:

  1. Download the pre-trained SSD-MobileNet V1 model
  2. Create a Python script that:
    • Loads the model and labels
    • Preprocesses input images
    • Runs inference
    • Extracts bounding boxes, classes, and scores
    • Implements Non-Maximum Suppression (NMS)
    • Visualizes detections with bounding boxes
  3. Test on various images with multiple objects
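
NMS keeps the highest-scoring box and discards any remaining box that overlaps it beyond an IoU threshold; a plain NumPy sketch:

    # nms.py -- greedy Non-Maximum Suppression over [x1, y1, x2, y2] boxes
    import numpy as np

    def nms(boxes, scores, iou_thresh=0.5):
        order = np.argsort(scores)[::-1]     # best score first
        keep = []
        while order.size > 0:
            i = order[0]
            keep.append(i)
            rest = order[1:]
            # intersection of box i with all remaining boxes
            xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
            yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
            xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
            yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
            inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
            area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
            area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
            iou = inter / (area_i + area_r - inter)
            order = rest[iou <= iou_thresh]  # drop heavy overlaps, keep the rest
        return keep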

Deliverable: Python script that performs and visualizes object detection on test images and a screenshot showing a successful result

Lab 8: EfficientDet and FOMO Models

Objectives:

  • Compare different object detection architectures
  • Implement EfficientDet and FOMO models
  • Analyze performance differences
  • Understand trade-offs between models

Instructions:

  1. Download the EfficientDet Lite0 model
  2. Implement inference with EfficientDet
  3. Compare with SSD-MobileNet implementation
  4. Learn about FOMO (Faster Objects, More Objects)
  5. Analyze trade-offs in accuracy vs. speed
  6. Measure inference time on Raspberry Pi
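
For step 6, average over many runs and discard the first (cold) call; a small helper:

    # benchmark.py -- average inference latency over n runs
    import time

    def bench(run_inference, n=50):
        run_inference()                          # warm-up; the first call is often slower
        t0 = time.perf_counter()
        for _ in range(n):
            run_inference()
        dt = (time.perf_counter() - t0) / n
        print(f"{dt * 1000:.1f} ms/inference ({1 / dt:.1f} FPS)")

    # usage: bench(interpreter.invoke)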

Deliverable: Comparison report of SSD-MobileNet vs. EfficientDet with performance metrics and visualized results


Week 5: Custom Object Detection

Lab 9: Dataset Creation and Annotation

Objectives:

  • Create an object detection dataset
  • Learn annotation techniques
  • Prepare dataset for model training

Instructions:

  1. Capture at least 100 images containing objects to detect
  2. Upload images to Roboflow or a similar annotation tool
  3. Create bounding box annotations for each object
  4. Apply data augmentation (rotation, brightness adjustment)
  5. Export dataset in YOLO format
  6. Document the annotation process

Deliverable: Annotated dataset with at least 2 object classes and 100 total images

Lab 10: Training Models in Edge Impulse

Objectives:

  • Upload annotated dataset to Edge Impulse
  • Train SSD MobileNet object detection model
  • Evaluate model performance
  • Export model for deployment

Instructions:

  1. Create a new Edge Impulse project for object detection
  2. Upload annotated dataset (train/test splits)
  3. Create object detection impulse
  4. Train SSD MobileNet model
  5. Evaluate model performance
  6. Export model as TensorFlow Lite

Deliverable: Edge Impulse project link with trained object detection model and performance metrics


Week 6: Advanced Object Detection

Lab 11: FOMO Model Training

Objectives:

  • Understand the benefits of the FOMO architecture
  • Train FOMO model on Edge Impulse
  • Compare performance with SSD MobileNet
  • Deploy optimized model to Raspberry Pi

Instructions:

  1. Create a new impulse in Edge Impulse using the same dataset
  2. Train FOMO model instead of SSD MobileNet
  3. Compare inference speed and accuracy
  4. Deploy both models to Raspberry Pi
  5. Create an application that can switch between models
  6. Measure and document performance differences

Deliverable: Python application that compares SSD MobileNet vs. FOMO performance in real-time

Lab 12: YOLO Implementation

Objectives:

  • Install and configure Ultralytics YOLO
  • Convert models to optimized NCNN format
  • Create real-time detection application
  • Implement object counting

Instructions:

  1. Install Ultralytics: pip install ultralytics
  2. Download and test a nano ("n") YOLO model (see the sketch after this list)
  3. Export model to NCNN format for optimization
  4. Create Python script for real-time detection
  5. Implement object counting algorithm
  6. Add visualization of counts over time
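
Steps 1-4 in miniature with the Ultralytics API (the model filename depends on the YOLO release you choose):

    # yolo_count.py -- NCNN export and per-image object counting (sketch)
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")             # nano model; downloaded on first use
    model.export(format="ncnn")            # writes an NCNN model directory
    model = YOLO("yolov8n_ncnn_model")     # reload the optimized export

    results = model("test.jpg")            # single-image inference
    boxes = results[0].boxes
    print(f"{len(boxes)} objects detected")
    for cls_id in boxes.cls.tolist():      # per-class counting
        print(results[0].names[int(cls_id)])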

Deliverable: Python application for real-time object detection and counting using YOLO


Week 7: Object Counting Project

Lab 13: Custom YOLO Training

Objectives:

  • Train YOLO on a custom dataset
  • Optimize model for edge deployment
  • Create a complete application for object counting

Instructions:

  1. Train the YOLO model on your custom dataset
    • Use Google Colab for training if needed
    • Set appropriate hyperparameters
  2. Export optimized model for Raspberry Pi
  3. Create a Python application that:
    • Captures video feed
    • Detects objects using YOLO
    • Counts objects over time
    • Logs results to a database
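
The training itself (step 1) is a few lines of the Ultralytics API; on Colab the same calls pick up the GPU automatically (data.yaml is the file from your Roboflow export):

    # train_yolo.py -- fine-tune a nano YOLO model on a custom dataset (sketch)
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")                            # start from pretrained weights
    model.train(data="data.yaml", epochs=100, imgsz=640)  # tune epochs/imgsz to your dataset
    model.export(format="ncnn")                           # optimized export for the Pi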

Deliverable: Complete object counting application with data logging

Lab 14: Fixed-Function AI Integration (Optional)

Objectives:

  • Integrate multiple AI models into a single application
  • Create a dashboard for visualization
  • Optimize application for long-term deployment

Instructions:

  1. Create an integration application that combines:
    • Object detection capabilities
    • Classification for detected objects
    • Counting and tracking over time
  2. Implement a simple web dashboard for visualization
  3. Add performance monitoring
  4. Configure application for startup at boot

Deliverable: Integrated application combining multiple AI capabilities with visualization dashboard


Week 8: Introduction to Generative AI

Lab 15: Raspberry Pi Configuration for SLMs

Objectives:

  • Optimize Raspberry Pi for running Small Language Models
  • Install an active cooling solution
  • Configure memory and swap
  • Install essential libraries

Instructions:

  1. Install active cooling solution on Raspberry Pi 5

  2. Optimize system configuration:

    • Turn swap off: sudo dphys-swapfile swapoff
    • Edit /etc/dphys-swapfile and set CONF_SWAPSIZE=2048
    • Rebuild and re-enable: sudo dphys-swapfile setup && sudo dphys-swapfile swapon
  3. Install dependencies:

    sudo apt update
    sudo apt install build-essential python3-dev

Deliverable: Screenshot showing the increased swap configuration and temperature monitoring during a stress test

Lab 16: Ollama Installation and Testing

Objectives:

  • Install Ollama framework
  • Pull and test Small Language Models
  • Benchmark model performance
  • Monitor resource usage

Instructions:

  1. Install Ollama:

    curl -fsSL https://ollama.com/install.sh | sh
  2. Pull several models, for example:

    ollama pull llama3.2:1b
    ollama pull gemma:2b
    ollama pull phi3:latest
  3. Run a basic inference test with each model

  4. Measure and compare:

    • Load time
    • Inference speed (tokens/sec)
    • Memory usage
    • Temperature
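
The --verbose flag makes ollama print load time and eval rate (tokens/sec) after each reply, and standard tools cover the rest; for example:

    ollama run llama3.2:1b "Explain edge AI in one sentence." --verbose
    free -h                     # memory usage
    vcgencmd measure_temp       # SoC temperature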

Deliverable: Benchmark report comparing performance metrics of different SLM models on your Raspberry Pi


Week 9: SLM Python Integration

Lab 17: Ollama Python Library

Objectives:

  • Use the Ollama Python library
  • Create interactive applications
  • Process SLM responses programmatically
  • Handle multiple conversation turns

Instructions:

  1. Install Ollama Python library: pip install ollama
  2. Create Python script to:
    • Connect to Ollama API
    • Send prompts to models
    • Process and format responses
    • Handle conversation context
  3. Implement proper error handling
  4. Create a simple interactive CLI application
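
The core of steps 2-4 in a sketch, assuming a model you have already pulled:

    # chat_cli.py -- minimal multi-turn CLI on the Ollama Python library
    import ollama

    history = []                                     # conversation context
    while True:
        try:
            user = input("> ")
        except (EOFError, KeyboardInterrupt):
            break
        history.append({"role": "user", "content": user})
        try:
            reply = ollama.chat(model="llama3.2:1b", messages=history)
        except Exception as exc:                     # e.g. Ollama server not running
            print("error:", exc)
            history.pop()
            continue
        content = reply["message"]["content"]
        history.append({"role": "assistant", "content": content})
        print(content)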

Deliverable: Python script demonstrating Ollama library usage with conversation handling

Lab 18: Function Calling and Structured Outputs

Objectives:

  • Implement function calling with SLMs
  • Create applications with structured outputs
  • Build validation mechanisms
  • Handle image inputs

Instructions:

  1. Install required libraries:

    pip install pydantic instructor openai
  2. Create Pydantic models for structured outputs

  3. Implement function calling with Instructor:

    client = instructor.patch(
        OpenAI(base_url="http://localhost:11434/v1", api_key="ollama"),
        mode=instructor.Mode.JSON,
    )
  4. Build a distance calculator application that uses the SLM for city/country recognition (see the sketch after this list)

  5. Add image input processing
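
Putting steps 2-4 together, a sketch (the CityPair schema and its field names are illustrative):

    # distance_assistant.py -- structured extraction with Instructor over Ollama (sketch)
    import instructor
    from openai import OpenAI
    from pydantic import BaseModel

    class CityPair(BaseModel):            # illustrative schema for the distance app
        city_a: str
        country_a: str
        city_b: str
        country_b: str

    client = instructor.patch(
        OpenAI(base_url="http://localhost:11434/v1", api_key="ollama"),
        mode=instructor.Mode.JSON,
    )

    pair = client.chat.completions.create(
        model="llama3.2:1b",
        response_model=CityPair,          # Instructor validates the reply against the schema
        messages=[{"role": "user", "content": "How far is Paris from Tokyo?"}],
    )
    print(pair)                           # a validated CityPair instance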

Deliverable: Python application that uses function calling for structured interaction with SLMs


Week 10: Retrieval-Augmented Generation

Lab 19: RAG Fundamentals

Objectives:

  • Understand RAG architecture
  • Create vector database
  • Implement embedding generation
  • Build simple RAG system

Instructions:

  1. Install required libraries:

    pip install langchain chromadb
  2. Create a simple dataset with text documents

  3. Implement document splitting and chunking

  4. Generate embeddings using Ollama

  5. Store embeddings in ChromaDB

  6. Create query system
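
A sketch of steps 3-6 (LangChain's package layout shifts between versions; these imports assume a recent langchain with langchain-community installed, and an embedding model such as nomic-embed-text pulled in Ollama):

    # rag_min.py -- chunk, embed via Ollama, store in Chroma, query (sketch)
    from langchain_text_splitters import RecursiveCharacterTextSplitter
    from langchain_community.embeddings import OllamaEmbeddings
    from langchain_community.vectorstores import Chroma

    text = open("docs.txt").read()
    splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
    chunks = splitter.split_text(text)                        # step 3: chunking

    embeddings = OllamaEmbeddings(model="nomic-embed-text")   # steps 4-5: embed + store
    db = Chroma.from_texts(chunks, embedding=embeddings)

    for doc in db.similarity_search("What does the setup require?", k=3):  # step 6: query
        print(doc.page_content[:120], "...")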

Deliverable: Python implementation of a basic RAG system with simple text documents

Lab 20: Advanced RAG

Objectives:

  • Optimize RAG for edge devices
  • Implement more efficient retrieval
  • Create a specialized knowledge base
  • Build validation mechanisms

Instructions:

  1. Create a specialized knowledge base (e.g., technical documentation)
  2. Implement optimized embedding generation
  3. Fine-tune retrieval parameters
  4. Add response validation
  5. Create a persistent vector store
  6. Benchmark performance

Deliverable: Optimized RAG implementation with specialized knowledge base and performance analysis


Week 11: Vision-Language Models

Lab 21: Florence-2 Setup

Objectives:

  • Install Florence-2 model
  • Configure environment
  • Run basic inference tests
  • Understand model capabilities

Instructions:

  1. Install required dependencies:

    pip install transformers torch torchvision torchaudio
    pip install timm einops
    pip install autodistill-florence-2
  2. Download model and test basic functionality

  3. Run image captioning test

  4. Measure performance (memory usage, inference time)
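
A captioning sketch following the Florence-2 model card (the model loads custom code, hence trust_remote_code=True; treat this as a starting point rather than a reference implementation):

    # florence_caption.py -- basic <CAPTION> task with Florence-2 (sketch)
    from PIL import Image
    from transformers import AutoModelForCausalLM, AutoProcessor

    model_id = "microsoft/Florence-2-base"
    processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

    task = "<CAPTION>"                    # other task prompts: <DETAILED_CAPTION>, <OD>, ...
    image = Image.open("test.jpg").convert("RGB")
    inputs = processor(text=task, images=image, return_tensors="pt")
    ids = model.generate(input_ids=inputs["input_ids"],
                         pixel_values=inputs["pixel_values"],
                         max_new_tokens=128)
    raw = processor.batch_decode(ids, skip_special_tokens=False)[0]
    print(processor.post_process_generation(raw, task=task, image_size=image.size))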

Deliverable: Python script demonstrating basic Florence-2 functionality with performance metrics

Lab 22: Vision Tasks with Florence-2

Objectives:

  • Implement various vision tasks
  • Create applications for captioning, detection, grounding
  • Optimize performance
  • Combine tasks

Instructions:

  1. Implement image captioning:
    • Basic caption generation
    • Detailed caption generation
  2. Implement object detection:
    • Bounding box visualization
    • Multiple object detection
  3. Implement visual grounding:
    • Highlight specific objects based on text prompts
  4. Create segmentation application
  5. Measure the performance of each task

Deliverable: Python application demonstrating multiple vision tasks with Florence-2 and performance analysis


Week 12: Physical Computing Basics

Lab 23: Sensor and Actuator Integration

Objectives:

  • Connect digital sensors
  • Read environmental data
  • Control LEDs and actuators
  • Create a data collection system

Instructions:

  1. Connect hardware components:

    • DHT22 temperature/humidity sensor
    • BMP280 pressure sensor
    • LEDs (red, yellow, green)
    • Push button
  2. Install required libraries:

    pip install adafruit-circuitpython-dht adafruit-circuitpython-bmp280
  3. Create a Python script to read sensor data

  4. Implement LED control based on conditions

  5. Create visualization of sensor data
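
A reading-and-control sketch for steps 3-4, assuming the DHT22 data pin on GPIO4 and LEDs on GPIO17/GPIO27; match the pin numbers to your wiring:

    # monitor.py -- read DHT22 + BMP280 and drive LEDs on a threshold (sketch)
    import time
    import adafruit_bmp280
    import adafruit_dht
    import board
    from gpiozero import LED

    dht = adafruit_dht.DHT22(board.D4)                      # data pin assumed on GPIO4
    bmp = adafruit_bmp280.Adafruit_BMP280_I2C(board.I2C())  # add address=0x76 if needed
    green, red = LED(17), LED(27)                           # pins assumed; adjust to wiring

    while True:
        try:
            t, h = dht.temperature, dht.humidity
            if t is not None:
                print(f"{t:.1f} C  {h:.1f} %RH  {bmp.pressure:.1f} hPa")
                if t > 28:                                  # illustrative threshold
                    red.on()
                    green.off()
                else:
                    green.on()
                    red.off()
        except RuntimeError as exc:                         # DHT reads fail intermittently
            print("read error:", exc)
        time.sleep(2)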

Deliverable: Python application for reading sensor data and controlling actuators with visualization

Lab 24: Jupyter Notebook Integration

Objectives:

  • Use Jupyter Notebook for physical computing
  • Create interactive widgets
  • Visualize sensor data in real-time
  • Control actuators from a notebook

Instructions:

  1. Install ipywidgets: pip install ipywidgets
  2. Create a Jupyter Notebook for sensor data collection
  3. Implement interactive widgets for control
  4. Create real-time visualization
  5. Build a dashboard with multiple data views
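
The widget pattern for steps 3-4 in a minimal sketch; swap the print call for your gpiozero code:

    # In a notebook cell: an interactive control plus a live readout (sketch)
    import ipywidgets as widgets
    from IPython.display import display

    led_toggle = widgets.ToggleButton(description="LED")
    readout = widgets.Label("waiting for data...")

    def on_toggle(change):
        print("LED on" if change["new"] else "LED off")   # replace with led.on()/led.off()

    led_toggle.observe(on_toggle, names="value")
    display(led_toggle, readout)

    # from your sensor loop, update the label in place:
    readout.value = "23.5 C"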

Deliverable: Jupyter Notebook with interactive widgets for sensor monitoring and actuator control


Week 13: SLM-Physical Computing Integration

Lab 25: Basic SLM Analysis

Objectives:

  • Integrate SLMs with sensor data
  • Create analysis application
  • Implement decision-making logic
  • Control actuators based on SLM responses

Instructions:

  1. Create a Python application that:
    • Collects sensor data
    • Formats data for SLM prompt
    • Sends prompt to model
    • Parses response
    • Controls actuators based on response
  2. Implement multiple analysis modes
  3. Add error handling for SLM responses
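
The heart of step 1 is prompt construction plus a defensive parse of the reply; a sketch (the one-word protocol and model name are assumptions):

    # slm_monitor.py -- have an SLM judge sensor readings, then act on it (sketch)
    import ollama

    def analyze(temp_c, humidity):
        prompt = (f"Temperature: {temp_c:.1f} C, humidity: {humidity:.0f} %. "
                  "Answer with exactly one word: NORMAL or ALERT.")
        reply = ollama.chat(model="llama3.2:1b",
                            messages=[{"role": "user", "content": prompt}])
        verdict = reply["message"]["content"].strip().upper()
        return "ALERT" if "ALERT" in verdict else "NORMAL"   # defensive parse

    status = analyze(31.2, 40)
    print(status)    # drive the red/green LEDs from this result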

Deliverable: Python application integrating SLMs with physical sensors and actuators

Lab 26: SLM-IoT Control System

Objectives:

  • Create a complete IoT monitoring system
  • Implement natural language interaction
  • Add data logging and analysis
  • Create web interface

Instructions:

  1. Build a complete system with:
    • Sensor data collection
    • SLM-based analysis
    • Natural language command processing
    • Data logging to a database
    • Web interface for interaction
  2. Implement multiple SLM models
  3. Add historical data analysis
  4. Create visualization dashboard

Deliverable: Complete IoT monitoring system with SLM integration and web interface


Week 14: Advanced Edge AI Techniques

Lab 27: Building Agents

Objectives:

  • Create agent architecture
  • Implement tool usage
  • Build decision-making system
  • Handle complex tasks

Instructions:

  1. Implement calculator agent:
    • Create a query routing system
    • Implement tool functions
    • Build decision-making logic
  2. Create knowledge router:
    • Implement web search integration
    • Build classification system
    • Handle time-based queries
  3. Measure and optimize performance
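
A routing skeleton for step 1; the keyword router is deliberately crude, and a common upgrade is to let the SLM itself classify the query first:

    # agent_router.py -- route a query to a tool or to the SLM (sketch)
    import ollama

    def calculator(expr):
        # demo only -- restrict builtins; validate input properly in real use
        return eval(expr, {"__builtins__": {}})

    def route(query):
        if any(op in query for op in "+-*/"):    # crude arithmetic detection
            return f"= {calculator(query)}"
        reply = ollama.chat(model="llama3.2:1b",
                            messages=[{"role": "user", "content": query}])
        return reply["message"]["content"]

    print(route("12 * 7"))                       # tool path
    print(route("Why deploy AI at the edge?"))   # model path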

Deliverable: Python implementation of agent architecture with tool usage and decision routing

Lab 28: Advanced Prompting and Validation

Objectives:

  • Implement chain-of-thought prompting
  • Create few-shot learning examples
  • Build task decomposition system
  • Implement response validation

Instructions:

  1. Create examples for different prompting strategies
  2. Implement chain-of-thought framework
  3. Build few-shot learning templates
  4. Create a task decomposition system
  5. Implement validation mechanisms
  6. Compare the effectiveness of different strategies

Deliverable: Python implementation demonstrating different prompting strategies with performance comparison


Week 15: Final Project Integration

Lab 29: Agentic RAG System

Objectives:

  • Combine agent architecture with RAG
  • Create a complete knowledge system
  • Implement advanced validation
  • Build query optimization

Instructions:

  1. Create a complete agentic RAG system:
    • Build knowledge database
    • Implement agent architecture
    • Add tool functions
    • Create validation mechanisms
    • Optimize retrieval
  2. Test with complex queries
  3. Measure performance
  4. Create visualization of system components

Deliverable: Complete agentic RAG system with documentation and performance analysis

Lab 30: Final Project

Objectives:

  • Design and implement a comprehensive Edge AI system
  • Combine multiple techniques
  • Create complete documentation
  • Present project

Instructions:

  1. Design final project combining:
    • Computer vision capabilities
    • SLM integration
    • Physical computing
    • Advanced techniques (RAG, agents, etc.)
  2. Implement complete system
  3. Create documentation
  4. Measure performance
  5. Prepare presentation

Deliverable: Complete final project with documentation, code, and presentation


Hardware Requirements

Basic Setup (Weeks 1-7)

  • Raspberry Pi Zero 2 W or Raspberry Pi 5
  • MicroSD card (32GB+)
  • Camera module (USB webcam or Pi camera)
  • Power supply

Generative AI (Weeks 8-15)

  • Raspberry Pi 5 (8GB RAM recommended)
  • Active cooling solution
  • MicroSD card (64GB+ recommended)

Physical Computing (Weeks 12-15)

  • DHT22 temperature/humidity sensor
  • BMP280 pressure/temperature sensor
  • LEDs (red, yellow, green)
  • Push button
  • Resistors (4.7kΩ, 330Ω)
  • Jumper wires
  • Breadboard

Software Requirements

Development Environment

  • Raspberry Pi OS (64-bit)
  • Python 3.9+
  • Jupyter Notebook
  • SSH client

Computer Vision and DL

  • TensorFlow Lite runtime
  • OpenCV
  • Edge Impulse
  • Ultralytics

Generative AI

  • Ollama
  • Transformers
  • PyTorch
  • ChromaDB
  • LangChain
  • Pydantic
  • Instructor

Physical Computing

  • GPIO Zero
  • Adafruit CircuitPython libraries

Assessment Criteria

Each lab will be evaluated based on:

  1. Functionality (40%): Does the implementation work as specified?
  2. Code Quality (20%): Is the code well-structured, documented, and efficient?
  3. Documentation (20%): Are the process and results documented?
  4. Analysis (20%): Is there a thoughtful analysis of results and performance?

The final project will be evaluated based on:

  1. Integration (30%): How well different components are integrated
  2. Innovation (20%): Novel approaches or applications
  3. Implementation (30%): Overall quality and functionality
  4. Presentation (20%): Clear explanation and demonstration

Tips for Success

  1. Start Early: These labs build on each other. Falling behind makes later labs more difficult.
  2. Document As You Go: Take notes, screenshots, and document issues/solutions.
  3. Optimize Resources: SLMs and VLMs require careful resource management.
  4. Collaborate: Discuss approaches with classmates while ensuring individual work.
  5. Backup Regularly: Create backups of your SD card after significant progress.
  6. Measure Performance: Always benchmark and optimize your implementations.
  7. Ask Questions: If you’re stuck, ask for help early rather than falling behind.