Month 3 Box - AI Deep Dive

Lesson 2: Exploring the AI HAT+ and Its Capabilities

Resources can be found here -> https://github.com/HowardCraft/Academy/tree/main/Month3/Lesson2

Here is the location of the script mentioned around the 5 to 6 minute mark: https://github.com/HowardCraft/Academy/blob/main/Month3/Lesson2/TF/install_tflite_dependencies.sh
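One way to get that script onto your Pi is to clone the whole lesson repo rather than downloading the file from the GitHub page (the blob URL above points at the web view, not the raw file). A minimal sketch, assuming the repo layout shown in the URL:

```shell
# Clone the Academy repo and run the dependency installer from the lesson folder.
git clone https://github.com/HowardCraft/Academy.git
cd Academy/Month3/Lesson2/TF
chmod +x install_tflite_dependencies.sh
./install_tflite_dependencies.sh
```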


Now that your Raspberry Pi is powered up, it’s time to unlock the real magic: the AI HAT+. In this lesson, we’ll explore what makes this tiny accelerator so powerful—and we’ll put it to work with real AI demos using sample images and video.


What You’ll Learn Today:

  • What the AI HAT+ is and how it boosts your Raspberry Pi's performance
  • Why running AI locally (on-device) is faster, safer, and more responsive
  • How to classify objects in static images using preloaded models
  • How to test AI inference on a sample video clip


Why Local AI Rocks:

✅ No internet required

✅ Lightning-fast performance

✅ Private data stays on your device

✅ Zero cloud costs


Perfect for smart cameras, home automation, and offline voice assistants.


Frameworks You'll Use:

  • ONNX Runtime for object detection
  • TensorFlow Lite with MobileNetV2 for image classification
  • OpenVINO (optional, for Intel optimization)
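To give you a feel for the TensorFlow Lite side, here is a minimal classification sketch. The model and label file names are placeholders (use the files from the lesson resources), and it assumes `tflite-runtime`, NumPy, and Pillow are installed by the setup script:

```python
# Minimal TFLite image-classification sketch. Model and label paths are
# placeholders -- substitute the files from the lesson resources.
import numpy as np


def top_k(scores, labels, k=3):
    """Return the k highest-scoring (label, score) pairs, best first."""
    idx = np.argsort(scores)[::-1][:k]
    return [(labels[i], float(scores[i])) for i in idx]


def classify(image_path, model_path="mobilenet_v2.tflite", label_path="labels.txt"):
    # Heavy imports kept local so top_k() is usable without TFLite installed.
    from PIL import Image
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # MobileNetV2 expects a fixed-size RGB input (typically 224x224).
    _, h, w, _ = inp["shape"]
    img = np.asarray(Image.open(image_path).convert("RGB").resize((w, h)))
    if inp["dtype"] == np.float32:          # float model: scale pixels to [-1, 1]
        img = (img.astype(np.float32) - 127.5) / 127.5

    interpreter.set_tensor(inp["index"], img[np.newaxis, ...])
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]

    labels = [line.strip() for line in open(label_path)]
    return top_k(scores, labels)
```

Calling `classify("banana.jpg")` returns something like `[("banana", 0.93), ...]`, which is exactly the "label + confidence %" pair you'll show on the touchscreen.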


Today’s Tasks:

  • Install AI HAT+ drivers and dependencies [🖼 Step-by-step visuals suggested here]
  • Run image classification with TFLite (banana, cat, person, etc.)
  • View results on your LCD touchscreen with confidence percentages
  • Run video inference to preview real-time performance
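For the video task, a simple way to preview real-time performance is to time each frame and report frames per second. This sketch assumes OpenCV (`opencv-python`) is available; `run_inference` is a placeholder for whichever model call you use (TFLite or ONNX):

```python
# Time inference over a sample clip and report average FPS.
# run_inference is a stand-in for your actual model call.
import time


def average_fps(frame_times):
    """Mean frames-per-second from a list of per-frame durations (seconds)."""
    if not frame_times:
        return 0.0
    return len(frame_times) / sum(frame_times)


def benchmark_video(path, run_inference, max_frames=100):
    import cv2  # local import: only needed when actually reading video

    cap = cv2.VideoCapture(path)
    times = []
    while cap.isOpened() and len(times) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        start = time.perf_counter()
        run_inference(frame)                 # your TFLite/ONNX call here
        times.append(time.perf_counter() - start)
    cap.release()
    return average_fps(times)
```

If the accelerator is working, the reported FPS should be noticeably higher than a CPU-only run of the same model.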


Troubleshooting Tips:

  • Confirm the correct model format (ONNX or TFLite)
  • Check CPU usage with htop; low CPU load during inference suggests the AI accelerator, not the CPU, is doing the work
  • Make sure drivers were installed using our official setup script
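The checks above can be scripted. The kernel-module name below is an assumption (the AI HAT+ uses a Hailo accelerator, so a loaded module matching "hailo" is a reasonable sign the driver installed); adjust it to whatever the official setup script installs:

```shell
# Quick sanity checks. The "hailo" module name is an assumption -- adjust
# to match what the official setup script actually installs.

check_model_format() {
  # Echo the model format inferred from the file extension.
  case "$1" in
    *.tflite) echo "tflite" ;;
    *.onnx)   echo "onnx" ;;
    *)        echo "unknown" ;;
  esac
}

# Is a Hailo kernel module loaded?
lsmod | grep -i hailo || echo "no hailo module found - rerun the setup script"
```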


Suggested Visuals:

  • Diagram of how the AI HAT+ connects to the Pi [🖼 Insert Hardware Diagram Here]
  • Sample output: "Image → Classification → Confidence %" overlay [🖼 Screenshot or animation]
  • Video clip detection in action (e.g., person walking) [🖼 Demo with bounding boxes]


Homework:

  • Try swapping in your own video or image to test the model’s flexibility
  • Screenshot your results and post them in our community Discord channel
  • Make sure your touchscreen displays output clearly and accurately


Up Next:

You’ve tested AI on files—now we’ll switch to real-time video using your Pi’s camera module. You’ll build a live AI vision system that sees the world in real time!