Edge AI Engineering - Weekly Labs
Week 1: Introduction and Setup
Lab 1: Raspberry Pi Configuration
Objectives:
- Install Raspberry Pi OS using Raspberry Pi Imager
- Configure basic settings (hostname, SSH, WiFi)
- Learn essential Linux commands
- Manage files between your computer and Raspberry Pi
Instructions:
- Download Raspberry Pi Imager on your computer
- Configure OS settings (enable SSH, set hostname, WiFi credentials)
- Boot your Raspberry Pi and confirm connectivity
- Learn how to use SSH for remote access
- Transfer files using SCP (e.g., scp file.txt pi@raspberrypi.local:~ — hostname and path are illustrative) or FileZilla
- Update your Raspberry Pi OS: sudo apt update && sudo apt upgrade
- Practice basic Linux commands (ls, cd, mkdir, cp, mv)
Deliverable: Screenshot showing successful SSH connection to your Raspberry Pi
Lab 2: Development Environment Setup
Objectives:
- Set up Python environment for development
- Configure remote development tools
- Install essential libraries
- Test camera functionality
Instructions:
Install Python essentials:
pip install jupyter matplotlib numpy pillow
Configure Jupyter Notebook for remote access:
pip install jupyter
jupyter notebook --generate-config
jupyter notebook --ip=0.0.0.0 --no-browser
Connect the camera module (USB or CSI) to your Raspberry Pi
Test camera functionality using command-line tools
Write a simple Python script to capture an image
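A minimal capture sketch, assuming a USB webcam at index 0 and OpenCV installed (pip install opencv-python); with a CSI camera module, the picamera2 library is the usual route:
import cv2

cap = cv2.VideoCapture(0)              # open the default camera (assumed index 0)
ret, frame = cap.read()                # grab a single frame
cap.release()
if ret:
    cv2.imwrite("capture.jpg", frame)  # save the frame to disk
    print("Saved capture.jpg")
else:
    print("Camera read failed")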
Deliverable: A simple Python script that captures and displays an image from your camera, plus a screenshot showing a successful image capture
Week 2: Image Classification Fundamentals
Lab 3: Working with Pre-trained Models
Objectives:
- Install TensorFlow Lite runtime
- Download and run MobileNet V2 model
- Process and classify images
- Understand model inputs and outputs
Instructions:
Install TensorFlow Lite runtime:
pip install tflite-runtime
Download MobileNet V2 model:
wget https://storage.googleapis.com/download.tensorflow.org/models/tflite_11_05_08/mobilenet_v2_1.0_224_quant.tgz
tar xzf mobilenet_v2_1.0_224_quant.tgz
Download the corresponding labels file
Create a Python script (a sketch follows this list) that:
- Loads the TFLite model
- Processes input images to 224x224 format
- Runs inference on test images
- Displays top-5 predicted classes with confidence scores
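A minimal sketch of such a script, assuming the quantized model and a labels file sit in the working directory (file names are illustrative):
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="mobilenet_v2_1.0_224_quant.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# The quantized model expects a 224x224 uint8 RGB tensor.
img = Image.open("test.jpg").convert("RGB").resize((224, 224))
interpreter.set_tensor(inp["index"], np.expand_dims(np.array(img, dtype=np.uint8), 0))
interpreter.invoke()

scores = interpreter.get_tensor(out["index"])[0]
labels = [line.strip() for line in open("labels.txt")]
for i in scores.argsort()[-5:][::-1]:              # top-5 predictions
    print(f"{labels[i]}: {scores[i] / 255:.2f}")   # uint8 scores scaled to ~[0, 1]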
Deliverable: Python script that successfully classifies sample images with MobileNet V2, plus a screenshot showing a successful result
Lab 4: Custom Dataset Creation
Objectives:
- Create a simple custom dataset using Raspberry Pi camera
- Organize images into classes
- Prepare dataset for model training
Instructions:
- Create a web interface for image capture (see the sketch after this list):
- Use Flask to create a simple web server
- Set up camera preview and capture functionality
- Save captured images with appropriate filenames
- Capture at least 50 images per class for 3 classes
- Organize the dataset into an appropriate directory structure
- Document your dataset creation process
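A minimal Flask sketch for the capture endpoint, assuming a USB camera and pip install flask opencv-python (routes and paths are illustrative):
import os
import time
import cv2
from flask import Flask

app = Flask(__name__)

@app.route("/capture/<label>")
def capture(label):
    # Save one frame into a per-class folder, e.g. dataset/<label>/<timestamp>.jpg
    os.makedirs(f"dataset/{label}", exist_ok=True)
    cap = cv2.VideoCapture(0)
    ret, frame = cap.read()
    cap.release()
    if not ret:
        return "Camera read failed", 500
    path = f"dataset/{label}/{int(time.time())}.jpg"
    cv2.imwrite(path, frame)
    return f"Saved {path}"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

Visiting http://<pi-address>:5000/capture/class_a from a browser then stores one image per request into that class's folder.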
Deliverable: Structured dataset with at least 3 classes and 50 images per class
Week 3: Custom Image Classification
Lab 5: Edge Impulse Model Training
Objectives:
- Create an Edge Impulse project
- Upload and process the dataset
- Design and train a transfer learning model
- Evaluate model performance
Instructions:
- Create an Edge Impulse account and a new project
- Upload your custom dataset
- Create an impulse design:
- Set image size to 160x160
- Use Transfer Learning for feature extraction
- Generate features for all images
- Train model using MobileNet V2
- Analyze model performance (accuracy, confusion matrix)
- Test model on validation data
Deliverable: Edge Impulse project link and screenshot of model performance metrics
Lab 6: Model Deployment to Raspberry Pi
Objectives:
- Export trained model to TFLite format
- Deploy model to Raspberry Pi
- Create a real-time inference application
- Optimize inference speed
Instructions:
- Export model as TensorFlow Lite (.tflite)
- Transfer the model to Raspberry Pi
- Create a Python application (sketched below) that:
- Captures live images from the camera
- Preprocesses images for the model
- Runs inference and displays results
- Shows confidence scores
- Implement a web interface for real-time classification
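A sketch of the inference loop, assuming model.tflite and labels.txt exported from Edge Impulse (file names are illustrative; check whether your export expects float or quantized input):
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
h, w = inp["shape"][1], inp["shape"][2]
labels = [line.strip() for line in open("labels.txt")]

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    img = cv2.cvtColor(cv2.resize(frame, (w, h)), cv2.COLOR_BGR2RGB)
    if inp["dtype"] == np.float32:                   # float model: normalize to [0, 1]
        tensor = np.expand_dims(img.astype(np.float32) / 255.0, 0)
    else:                                            # quantized model: raw uint8
        tensor = np.expand_dims(img.astype(np.uint8), 0)
    interpreter.set_tensor(inp["index"], tensor)
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    top = int(np.argmax(scores))
    cv2.putText(frame, labels[top], (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("classification", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()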
Deliverable: Python script for real-time image classification with your custom model, plus a screenshot showing a successful result
Week 4: Object Detection Fundamentals
Lab 7: Pre-trained Object Detection
Objectives:
- Understand object detection architecture
- Run pre-trained SSD-MobileNet model
- Process detection outputs
- Visualize detected objects
Instructions:
- Download the pre-trained SSD-MobileNet V1 model
- Create a Python script (the detection-parsing step is sketched after this list) that:
- Loads the model and labels
- Preprocesses input images
- Runs inference
- Extracts bounding boxes, classes, and scores
- Implements Non-Maximum Suppression (NMS)
- Visualizes detections with bounding boxes
- Test on various images with multiple objects
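A sketch of the box-extraction and drawing step, assuming the common SSD-MobileNet TFLite output ordering of boxes, classes, scores (verify against interpreter.get_output_details() for your model):
import cv2

def draw_detections(frame, interpreter, labels, threshold=0.5):
    outs = interpreter.get_output_details()
    boxes = interpreter.get_tensor(outs[0]["index"])[0]    # [ymin, xmin, ymax, xmax], normalized
    classes = interpreter.get_tensor(outs[1]["index"])[0]
    scores = interpreter.get_tensor(outs[2]["index"])[0]
    h, w = frame.shape[:2]
    for box, cls, score in zip(boxes, classes, scores):
        if score < threshold:
            continue
        y1, x1, y2, x2 = (box * [h, w, h, w]).astype(int).tolist()
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, f"{labels[int(cls)]}: {score:.2f}", (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
    return frame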
Deliverable: Python script that performs and visualizes object detection on test images, plus a screenshot showing a successful result.
Lab 8: EfficientDet and FOMO Models
Objectives:
- Compare different object detection architectures
- Implement EfficientDet and FOMO models
- Analyze performance differences
- Understand trade-offs between models
Instructions:
- Download the EfficientDet Lite0 model
- Implement inference with EfficientDet
- Compare with SSD-MobileNet implementation
- Learn about FOMO (Faster Objects, More Objects)
- Analyze trade-offs in accuracy vs. speed
- Measure inference time on Raspberry Pi
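A simple latency helper you can reuse for both models (a sketch; it averages wall-clock time over repeated invocations):
import time

def mean_latency_ms(interpreter, input_tensor, runs=50):
    inp = interpreter.get_input_details()[0]
    times = []
    for _ in range(runs):
        interpreter.set_tensor(inp["index"], input_tensor)
        start = time.perf_counter()
        interpreter.invoke()                          # time only the inference call
        times.append(time.perf_counter() - start)
    return 1000 * sum(times) / len(times)             # mean latency in milliseconds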
Deliverable: Comparison report of SSD-MobileNet vs. EfficientDet with performance metrics and visualized results
Week 5: Custom Object Detection
Lab 9: Dataset Creation and Annotation
Objectives:
- Create an object detection dataset
- Learn annotation techniques
- Prepare dataset for model training
Instructions:
- Capture at least 100 images containing objects to detect
- Upload images to Roboflow or a similar annotation tool
- Create bounding box annotations for each object
- Apply data augmentation (rotation, brightness adjustment)
- Export dataset in YOLO format
- Document the annotation process
Deliverable: Annotated dataset with at least 2 object classes and 100 total images
Lab 10: Training Models in Edge Impulse
Objectives:
- Upload annotated dataset to Edge Impulse
- Train SSD MobileNet object detection model
- Evaluate model performance
- Export model for deployment
Instructions:
- Create a new Edge Impulse project for object detection
- Upload annotated dataset (train/test splits)
- Create object detection impulse
- Train SSD MobileNet model
- Evaluate model performance
- Export model as TensorFlow Lite
Deliverable: Edge Impulse project link with trained object detection model and performance metrics
Week 6: Advanced Object Detection
Lab 11: FOMO Model Training
Objectives:
- Understand FOMO architecture benefits
- Train FOMO model on Edge Impulse
- Compare performance with SSD MobileNet
- Deploy optimized model to Raspberry Pi
Instructions:
- Create a new impulse in Edge Impulse using the same dataset
- Train FOMO model instead of SSD MobileNet
- Compare inference speed and accuracy
- Deploy both models to Raspberry Pi
- Create an application that can switch between models
- Measure and document performance differences
Deliverable: Python application that compares SSD MobileNet vs. FOMO performance in real-time
Lab 12: YOLO Implementation
Objectives:
- Install and configure Ultralytics YOLO
- Convert models to optimized NCNN format
- Create real-time detection application
- Implement object counting
Instructions:
- Install Ultralytics:
pip install ultralytics
- Download and test a YOLO nano ("n") model
- Export model to NCNN format for optimization
- Create Python script for real-time detection
- Implement object counting algorithm
- Add visualization of counts over time
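A real-time detection-and-counting sketch, assuming pip install ultralytics (the nano weights download automatically on first use; the model file name is illustrative, and you can swap in your NCNN export):
from collections import Counter
import cv2
from ultralytics import YOLO

model = YOLO("yolo11n.pt")              # nano model; replace with your exported model
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    result = model(frame, verbose=False)[0]
    counts = Counter(result.names[int(c)] for c in result.boxes.cls)
    annotated = result.plot()           # frame with boxes and labels drawn
    cv2.putText(annotated, str(dict(counts)), (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow("YOLO", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()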
Deliverable: Python application for real-time object detection and counting using YOLO
Week 7: Object Counting Project
Lab 13: Custom YOLO Training
Objectives:
- Train YOLO on a custom dataset
- Optimize model for edge deployment
- Create a complete application for object counting
Instructions:
- Train the YOLO model on your custom dataset
- Use Google Colab for training if needed
- Set appropriate hyperparameters
- Export optimized model for Raspberry Pi
- Create a Python application that:
- Captures video feed
- Detects objects using YOLO
- Counts objects over time
- Logs results to a database
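For the logging step, a small SQLite sketch is usually enough (the schema is illustrative):
import sqlite3
import time

conn = sqlite3.connect("counts.db")
conn.execute("CREATE TABLE IF NOT EXISTS counts (ts REAL, label TEXT, n INTEGER)")

def log_counts(counts):
    # counts: dict mapping class label -> number detected in the current frame
    rows = [(time.time(), label, n) for label, n in counts.items()]
    conn.executemany("INSERT INTO counts VALUES (?, ?, ?)", rows)
    conn.commit()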
Deliverable: Complete object counting application with data logging
Lab 14: Fixed-Function AI Integration (Optional)
Objectives:
- Integrate multiple AI models into a single application
- Create a dashboard for visualization
- Optimize application for long-term deployment
Instructions:
- Create an integration application that combines:
- Object detection capabilities
- Classification for detected objects
- Counting and tracking over time
- Implement a simple web dashboard for visualization
- Add performance monitoring
- Configure application for startup at boot
Deliverable: Integrated application combining multiple AI capabilities with visualization dashboard
Week 8: Introduction to Generative AI
Lab 15: Raspberry Pi Configuration for SLMs
Objectives:
- Optimize Raspberry Pi for running Small Language Models
- Install an active cooling solution
- Configure memory and swap
- Install essential libraries
Instructions:
Install an active cooling solution on the Raspberry Pi 5
Optimize system configuration:
- Increase swap memory:
sudo dphys-swapfile swapoff
Edit /etc/dphys-swapfile and set CONF_SWAPSIZE to 2048
sudo dphys-swapfile setup && sudo dphys-swapfile swapon
Install dependencies:
sudo apt update
sudo apt install build-essential python3-dev
Deliverable: Screenshot showing system configuration with increased swap and temperature monitor during stress test
Lab 16: Ollama Installation and Testing
Objectives:
- Install Ollama framework
- Pull and test Small Language Models
- Benchmark model performance
- Monitor resource usage
Instructions:
Install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
Pull several models, for example:
ollama pull llama3.2:1b
ollama pull gemma:2b
ollama pull phi3:latest
Run a basic inference test with each model
Measure and compare:
- Load time
- Inference speed (tokens/sec)
- Memory usage
- Temperature
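One convenient way to gather these numbers: ollama run llama3.2:1b --verbose prints load time and eval rate (tokens/sec) after each response, and vcgencmd measure_temp reports the SoC temperature while the model runs.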
Deliverable: Benchmark report comparing performance metrics of different SLM models on your Raspberry Pi
Week 9: SLM Python Integration
Lab 17: Ollama Python Library
Objectives:
- Use the Ollama Python library
- Create interactive applications
- Process SLM responses programmatically
- Handle multiple conversation turns
Instructions:
- Install Ollama Python library:
pip install ollama
- Create Python script to:
- Connect to Ollama API
- Send prompts to models
- Process and format responses
- Handle conversation context
- Implement proper error handling
- Create a simple interactive CLI application
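A minimal sketch of the CLI loop, assuming the Ollama server is running locally and llama3.2:1b has been pulled:
import ollama

messages = []                           # conversation context across turns
while True:
    try:
        user = input("You: ")
    except (EOFError, KeyboardInterrupt):
        break
    messages.append({"role": "user", "content": user})
    try:
        reply = ollama.chat(model="llama3.2:1b", messages=messages)
    except Exception as err:            # e.g., server not reachable
        print(f"Ollama error: {err}")
        continue
    content = reply["message"]["content"]
    messages.append({"role": "assistant", "content": content})
    print(f"Model: {content}")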
Deliverable: Python script demonstrating Ollama library usage with conversation handling
Lab 18: Function Calling and Structured Outputs
Objectives:
- Implement function calling with SLMs
- Create applications with structured outputs
- Build validation mechanisms
- Handle image inputs
Instructions:
Install required libraries:
pip install pydantic instructor openai
Create Pydantic models for structured outputs
Implement function calling with Instructor:
client = instructor.patch(
    OpenAI(base_url="http://localhost:11434/v1", api_key="ollama"),
    mode=instructor.Mode.JSON,
)
Build a distance calculator application that uses the SLM for city/country recognition
Add image input processing
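A sketch tying the pieces together with an illustrative Pydantic schema (the model name and prompt are assumptions):
import instructor
from openai import OpenAI
from pydantic import BaseModel

class CityInfo(BaseModel):              # illustrative schema for structured output
    city: str
    country: str

client = instructor.patch(
    OpenAI(base_url="http://localhost:11434/v1", api_key="ollama"),
    mode=instructor.Mode.JSON,
)
info = client.chat.completions.create(
    model="llama3.2:1b",
    response_model=CityInfo,            # Instructor validates the reply against the schema
    messages=[{"role": "user", "content": "Santiago is the capital of Chile."}],
)
print(info.city, info.country)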
Deliverable: Python application that uses function calling for structured interaction with SLMs
Week 10: Retrieval-Augmented Generation
Lab 19: RAG Fundamentals
Objectives:
- Understand RAG architecture
- Create vector database
- Implement embedding generation
- Build simple RAG system
Instructions:
Install required libraries:
pip install langchain chromadb
Create a simple dataset with text documents
Implement document splitting and chunking
Generate embeddings using Ollama
Store embeddings in ChromaDB
Create query system
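A compact end-to-end sketch using ChromaDB directly with Ollama embeddings (assumes ollama pull nomic-embed-text and llama3.2:1b; all names and the sample chunks are illustrative):
import chromadb
import ollama

client = chromadb.PersistentClient(path="rag_db")
collection = client.get_or_create_collection("docs")

def embed(text):
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

# Index a few document chunks.
chunks = ["Edge AI runs models on-device.", "RAG retrieves context before generating."]
collection.add(ids=[str(i) for i in range(len(chunks))],
               documents=chunks,
               embeddings=[embed(c) for c in chunks])

# Query: retrieve the closest chunk and let the SLM answer with it as context.
question = "What does RAG do?"
hit = collection.query(query_embeddings=[embed(question)], n_results=1)
context = hit["documents"][0][0]
answer = ollama.chat(model="llama3.2:1b", messages=[
    {"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"}])
print(answer["message"]["content"])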
Deliverable: Python implementation of a basic RAG system with simple text documents
Lab 20: Advanced RAG
Objectives:
- Optimize RAG for edge devices
- Implement more efficient retrieval
- Create a specialized knowledge base
- Build validation mechanisms
Instructions:
- Create a specialized knowledge base (e.g., technical documentation)
- Implement optimized embedding generation
- Fine-tune retrieval parameters
- Add response validation
- Create a persistent vector store
- Benchmark performance
Deliverable: Optimized RAG implementation with specialized knowledge base and performance analysis
Week 11: Vision-Language Models
Lab 21: Florence-2 Setup
Objectives:
- Install Florence-2 model
- Configure environment
- Run basic inference tests
- Understand model capabilities
Instructions:
Install required dependencies:
pip install transformers torch torchvision torchaudio
pip install timm einops
pip install autodistill-florence-2
Download model and test basic functionality
Run image captioning test
Measure performance (memory usage, inference time)
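A captioning sketch following the Hugging Face model card (model ID and task token per the card; trust_remote_code is required, and the image path is illustrative):
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Florence-2-base"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("test.jpg").convert("RGB")
task = "<CAPTION>"
inputs = processor(text=task, images=image, return_tensors="pt")
ids = model.generate(input_ids=inputs["input_ids"],
                     pixel_values=inputs["pixel_values"],
                     max_new_tokens=128)
text = processor.batch_decode(ids, skip_special_tokens=False)[0]
print(processor.post_process_generation(text, task=task,
                                        image_size=(image.width, image.height)))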
Deliverable: Python script demonstrating basic Florence-2 functionality with performance metrics
Lab 22: Vision Tasks with Florence-2
Objectives:
- Implement various vision tasks
- Create applications for captioning, detection, grounding
- Optimize performance
- Combine tasks
Instructions:
- Implement image captioning:
- Basic caption generation
- Detailed caption generation
- Implement object detection:
- Bounding box visualization
- Multiple object detection
- Implement visual grounding:
- Highlight specific objects based on text prompts
- Create segmentation application
- Measure the performance of each task
Deliverable: Python application demonstrating multiple vision tasks with Florence-2 and performance analysis
Week 12: Physical Computing Basics
Lab 23: Sensor and Actuator Integration
Objectives:
- Connect digital sensors
- Read environmental data
- Control LEDs and actuators
- Create a data collection system
Instructions:
Connect hardware components:
- DHT22 temperature/humidity sensor
- BMP280 pressure sensor
- LEDs (red, yellow, green)
- Push button
Install required libraries:
pip install adafruit-circuitpython-dht adafruit-circuitpython-bmp280
Create a Python script to read sensor data
Implement LED control based on conditions
Create visualization of sensor data
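A reading-and-control sketch, assuming the DHT22 data pin on GPIO4, an LED on GPIO17, and the Blinka layer installed (pin numbers and the 25 °C threshold are illustrative):
import time
import board
import adafruit_dht
from gpiozero import LED

dht = adafruit_dht.DHT22(board.D4)
led = LED(17)

while True:
    try:
        t, h = dht.temperature, dht.humidity
        if t is not None and h is not None:
            print(f"{t:.1f} C  {h:.1f} %")
            led.value = t > 25             # LED on above the threshold
    except RuntimeError as err:            # DHT22 reads fail intermittently
        print(f"Read error: {err}")
    time.sleep(2)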
Deliverable: Python application for reading sensor data and controlling actuators with visualization
Lab 24: Jupyter Notebook Integration
Objectives:
- Use Jupyter Notebook for physical computing
- Create interactive widgets
- Visualize sensor data in real-time
- Control actuators from a notebook
Instructions:
- Install ipywidgets:
pip install ipywidgets
- Create a Jupyter Notebook for sensor data collection
- Implement interactive widgets for control
- Create real-time visualization
- Build a dashboard with multiple data views
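A widget sketch for a notebook cell (an LED on GPIO17 is an assumption; the same observe pattern extends to sliders and live sensor readouts):
import ipywidgets as widgets
from IPython.display import display
from gpiozero import LED

led = LED(17)
toggle = widgets.ToggleButton(description="LED")

def on_toggle(change):
    # change["new"] is the widget's new boolean value
    led.on() if change["new"] else led.off()

toggle.observe(on_toggle, names="value")
display(toggle)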
Deliverable: Jupyter Notebook with interactive widgets for sensor monitoring and actuator control
Week 13: SLM-Physical Computing Integration
Lab 25: Basic SLM Analysis
Objectives:
- Integrate SLMs with sensor data
- Create analysis application
- Implement decision-making logic
- Control actuators based on SLM responses
Instructions:
- Create a Python application (sketched below) that:
- Collects sensor data
- Formats data for SLM prompt
- Sends prompt to model
- Parses response
- Controls actuators based on response
- Implement multiple analysis modes
- Add error handling for SLM responses
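A minimal sketch of the whole loop, assuming Ollama with llama3.2:1b and an alert LED on GPIO17 (prompt wording and the sample readings are illustrative):
import ollama
from gpiozero import LED

alarm = LED(17)
t, h = 29.5, 71.0                       # replace with live sensor readings

prompt = (f"Temperature {t} C, humidity {h} %. "
          "Answer with exactly one word: NORMAL or ALERT.")
reply = ollama.chat(model="llama3.2:1b",
                    messages=[{"role": "user", "content": prompt}])
verdict = reply["message"]["content"].strip().upper()
alarm.on() if "ALERT" in verdict else alarm.off()   # act on the model's verdict
print(verdict)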
Deliverable: Python application integrating SLMs with physical sensors and actuators
Lab 26: SLM-IoT Control System
Objectives:
- Create a complete IoT monitoring system
- Implement natural language interaction
- Add data logging and analysis
- Create web interface
Instructions:
- Build a complete system with:
- Sensor data collection
- SLM-based analysis
- Natural language command processing
- Data logging to the database
- Web interface for interaction
- Implement multiple SLM models
- Add historical data analysis
- Create visualization dashboard
Deliverable: Complete IoT monitoring system with SLM integration and web interface
Week 14: Advanced Edge AI Techniques
Lab 27: Building Agents
Objectives:
- Create agent architecture
- Implement tool usage
- Build decision-making system
- Handle complex tasks
Instructions:
- Implement a calculator agent (a routing sketch follows this list):
- Create a query routing system
- Implement tool functions
- Build decision-making logic
- Create knowledge router:
- Implement web search integration
- Build classification system
- Handle time-based queries
- Measure and optimize performance
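A sketch of the routing idea: arithmetic goes to a calculator tool, everything else to the SLM (the keyword rule and model name are illustrative):
import ollama

def calculator(expression):
    # eval with stripped builtins is a demo shortcut, not production-safe
    return str(eval(expression, {"__builtins__": {}}))

def route(query):
    # Naive keyword routing: any arithmetic operator sends the query to the tool.
    if any(op in query for op in "+-*/"):
        return calculator(query)
    reply = ollama.chat(model="llama3.2:1b",
                        messages=[{"role": "user", "content": query}])
    return reply["message"]["content"]

print(route("12 * (3 + 4)"))
print(route("What is edge AI?"))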
Deliverable: Python implementation of agent architecture with tool usage and decision routing
Lab 28: Advanced Prompting and Validation
Objectives:
- Implement chain-of-thought prompting
- Create few-shot learning examples
- Build task decomposition system
- Implement response validation
Instructions:
- Create examples for different prompting strategies
- Implement chain-of-thought framework
- Build few-shot learning templates
- Create a task decomposition system
- Implement validation mechanisms
- Compare the effectiveness of different strategies
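Two illustrative templates to seed the comparison (contents are assumptions, not prescribed prompts):
# Chain-of-thought: ask the model to reason before answering.
cot_prompt = ("A sensor logs 12 readings per minute. How many readings "
              "in 2.5 hours? Think step by step, then state the final number.")

# Few-shot: show labeled examples, then the case to classify.
few_shot_prompt = ("Classify the reading as LOW, OK, or HIGH.\n"
                   "Reading: 15 C -> LOW\n"
                   "Reading: 22 C -> OK\n"
                   "Reading: 31 C -> HIGH\n"
                   "Reading: 27 C ->")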
Deliverable: Python implementation demonstrating different prompting strategies with performance comparison
Week 15: Final Project Integration
Lab 29: Agentic RAG System
Objectives:
- Combine agent architecture with RAG
- Create a complete knowledge system
- Implement advanced validation
- Build query optimization
Instructions:
- Create a complete agentic RAG system:
- Build knowledge database
- Implement agent architecture
- Add tool functions
- Create validation mechanisms
- Optimize retrieval
- Test with complex queries
- Measure performance
- Create visualization of system components
Deliverable: Complete agentic RAG system with documentation and performance analysis
Lab 30: Final Project
Objectives:
- Design and implement a comprehensive Edge AI system
- Combine multiple techniques
- Create complete documentation
- Present project
Instructions:
- Design final project combining:
- Computer vision capabilities
- SLM integration
- Physical computing
- Advanced techniques (RAG, agents, etc.)
- Implement complete system
- Create documentation
- Measure performance
- Prepare presentation
Deliverable: Complete final project with documentation, code, and presentation
Hardware Requirements
Basic Setup (Weeks 1-7)
- Raspberry Pi Zero 2W or Pi 5
- MicroSD card (32GB+)
- Camera module (USB webcam or Pi camera)
- Power supply
Generative AI (Weeks 8-15)
- Raspberry Pi 5 (8GB RAM recommended)
- Active cooling solution
- MicroSD card (64GB+ recommended)
Physical Computing (Weeks 12-15)
- DHT22 temperature/humidity sensor
- BMP280 pressure/temperature sensor
- LEDs (red, yellow, green)
- Push button
- Resistors (4.7kΩ, 330Ω)
- Jumper wires
- Breadboard
Software Requirements
Development Environment
- Raspberry Pi OS (64-bit)
- Python 3.9+
- Jupyter Notebook
- SSH client
Computer Vision and DL
- TensorFlow Lite runtime
- OpenCV
- Edge Impulse
- Ultralytics
Generative AI
- Ollama
- Transformers
- PyTorch
- ChromaDB
- LangChain
- Pydantic
- Instructor
Physical Computing
- GPIO Zero
- Adafruit CircuitPython libraries
Assessment Criteria
Each lab will be evaluated based on:
- Functionality (40%): Does the implementation work as specified?
- Code Quality (20%): Is the code well-structured, documented, and efficient?
- Documentation (20%): Are the process and results documented?
- Analysis (20%): Is there a thoughtful analysis of results and performance?
The final project will be evaluated based on:
- Integration (30%): How well different components are integrated
- Innovation (20%): Novel approaches or applications
- Implementation (30%): Overall quality and functionality
- Presentation (20%): Clear explanation and demonstration
Tips for Success
- Start Early: These labs build on each other. Falling behind makes later labs more difficult.
- Document As You Go: Take notes, screenshots, and document issues/solutions.
- Optimize Resources: SLMs and VLMs require careful resource management.
- Collaborate: Discuss approaches with classmates while ensuring individual work.
- Backup Regularly: Create backups of your SD card after significant progress.
- Measure Performance: Always benchmark and optimize your implementations.
- Ask Questions: If you’re stuck, ask for help early rather than falling behind.