Publication 2025/7 – Real-time FFB ripeness detection using IoT-enabled YOLOv8n on Raspberry Pi 4 edge devices for precision agriculture


The palm oil industry plays a critical role in Malaysia’s agricultural economy, where accurate and timely harvesting of Fresh Fruit Bunches (FFB) directly impacts oil yield and quality. Traditionally, ripeness assessment relies heavily on manual inspection, which is subjective, labor-intensive, and inconsistent under varying field conditions.

To address these challenges, our research introduces a real-time, IoT-enabled Edge AI system for automatic palm oil FFB ripeness detection using a YOLOv8n deep learning model deployed on a Raspberry Pi 4. The system enables on-site intelligence, minimizes data transmission latency, and supports smarter plantation decision-making.

Figure 1: Block Diagram of the Proposed System

The proposed system integrates computer vision, edge computing, and IoT services into a compact and deployable architecture. The core components include:

      1. Camera Module: Captures real-time images of palm oil FFB either on-tree or post-harvest (on the ground).

      2. Edge Device (Raspberry Pi 4): Executes the YOLOv8n model locally for real-time inference.

      3. Deep Learning Model (YOLOv8n): Detects and classifies palm fruit ripeness stages.

      4. IoT Communication Layer: Transmits detection results and metadata to the cloud.

      5. Web-Based Dashboard: Visualizes ripeness distribution and system performance for plantation managers.

By processing data at the edge, the system significantly reduces dependence on cloud computing while maintaining high detection accuracy.

Model Training and Testing Workflow

Figure 2: Training and Testing Workflow

A supervised deep learning approach was adopted for model development:

  1. Dataset Preparation
    Palm oil FFB images were collected under real plantation conditions and annotated according to ripeness stages.

  2. Annotation and Augmentation
    Image labeling and dataset management were performed using Roboflow.
    Data augmentation techniques were applied to improve generalization and prevent overfitting, including:

    • Image flipping

    • Rotation

    • Brightness and exposure adjustment

  3. Model Fine-Tuning
    A pre-trained YOLOv8n model was fine-tuned on the custom dataset to balance accuracy and computational efficiency.

  4. Training and Validation
    The dataset was split into training, validation, and testing subsets to ensure reliable performance evaluation.

This workflow ensures that the trained model is robust to real-world lighting variations and plantation environments.
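
As a rough guide for readers who want to reproduce a similar workflow, the sketch below shows fine-tuning, validation, and export with the Ultralytics Python API. The dataset path, epoch count, image size, and export format are illustrative assumptions, not the exact settings used in this work.

from ultralytics import YOLO

# Start from the pre-trained YOLOv8n weights
model = YOLO("yolov8n.pt")

# Fine-tune on the annotated FFB dataset (data.yaml exported from Roboflow);
# the epoch count and image size below are placeholder values
model.train(data="ffb_dataset/data.yaml", epochs=100, imgsz=640)

# Validate on the held-out split to obtain precision, recall, and mAP
metrics = model.val()

# Export a lightweight model file for deployment on the Raspberry Pi 4
model.export(format="onnx")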

Model Evaluation Indicators

Figure 3: Model Evaluation Metrics (Precision, Recall, mAP)

To evaluate the effectiveness of the proposed model, several standard performance metrics were used:

      1. Precision – Measures the accuracy of positive detections.

      2. Recall – Indicates the model’s ability to detect all relevant ripe fruit instances.

      3. Mean Average Precision (mAP) – Summarizes detection performance across confidence thresholds.
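
As a quick illustration of the first two metrics, consider hypothetical detection counts at a fixed confidence threshold (the numbers below are made up purely for illustration):

# Hypothetical counts at one confidence threshold (illustrative only)
TP = 90   # correct FFB detections
FP = 10   # detections that did not correspond to a real FFB of that class
FN = 15   # FFB instances the model missed

precision = TP / (TP + FP)   # 0.90: how trustworthy each detection is
recall = TP / (TP + FN)      # ~0.86: how many real instances were found
print(precision, recall)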

The YOLOv8n model achieved high precision and recall, demonstrating reliable detection while maintaining real-time inference capability on a resource-constrained edge device.

Compared to earlier YOLOv4 and YOLOv5 implementations reported in the literature, the proposed approach delivers a balanced trade-off between accuracy, speed, and deployment efficiency, making it well-suited for Edge AI applications.

Real-Time Deployment Scenarios

The system was evaluated under two practical plantation scenarios:

      1. On-tree FFB detection (pre-harvest)

      2. On-ground FFB detection (post-harvest)

Both scenarios reflect real operational conditions and validate the system’s adaptability for field deployment.

IoT Dashboard for Plantation Monitoring

Figure 4: Web-Based Monitoring Dashboard

A web-based dashboard was developed to support estate plantation management, providing:

      1. Real-time visualization of detected FFB ripeness levels

      2. Historical data tracking

      3. Summary statistics for harvesting planning

      4. System status monitoring

The dashboard enables plantation managers to make data-driven decisions without requiring technical expertise in AI or computer vision.

Why Edge AI Matters for Agriculture

Deploying AI directly on edge devices offers several advantages, among them:

      1. Low latency – Immediate decision-making without cloud delays

      2. Reduced bandwidth usage – Only essential data is transmitted

      3. Improved privacy – Images remain local to the device

      4. Scalability – Suitable for large plantation deployments

This makes Edge AI particularly attractive for remote and resource-limited agricultural environments.

STEM Learning Impact 

This project has been a highly engaging learning experience, providing students and researchers with hands-on exposure to image processing, Artificial Intelligence (AI), and Edge Computing. By working with YOLO-based object detection, participants gained practical insight into how AI models learn visual patterns, process real-world images, and make intelligent decisions. YOLO, as a deep learning–based computer vision algorithm, represents a core component of modern AI applications used in industry today.

Through this project, learners developed essential 21st-century STEM skills, including data annotation, model training and evaluation, system integration, and deployment on resource-constrained edge devices. Importantly, participants were able to see how AI moves beyond theory into real-world problem solving, particularly within the context of precision agriculture.

Looking ahead, this work provides a strong foundation for expanding AI-driven characterization methods, such as fruit size estimation, defect detection, and maturity analysis. The system can also be adapted for wider agricultural applications, including crop monitoring, yield estimation, and decision support for plantation management.

From a technology perspective, future efforts will explore deployment on diverse processors and edge platforms, enabling comparisons across low-power embedded systems and AI accelerators. This approach supports the development of cost-effective, scalable, and energy-efficient AI solutions, making advanced technology more accessible to local communities and industries.

Overall, this project demonstrates how STEM-based AI initiatives can nurture innovation, strengthen digital competencies, and contribute to sustainable agricultural practices—aligning with national priorities in Industry 4.0, smart farming, and digital transformation.

Reflections and Future Directions

Working on this project has been both technically enriching and intellectually exciting, as it allowed us to deepen our understanding of image processing and Artificial Intelligence (AI) through hands-on experimentation. In particular, implementing YOLO-based object detection, a deep learning approach within the broader AI domain, provided valuable insights into how modern computer vision models learn visual features, make real-time predictions, and operate under resource constraints on edge devices.

Beyond ripeness classification, this project opens opportunities to expand AI-driven characterization methods, including size estimation, defect detection, maturity grading, and yield prediction. We also look forward to extending these techniques to broader agricultural applications, such as crop health monitoring, disease detection, and automated harvesting support.

From a systems perspective, future work will explore optimization and deployment across different processors and edge computing platforms, including alternative single-board computers, AI accelerators, and low-power embedded systems. This aligns with the growing need for scalable, cost-effective, and energy-efficient Edge AI solutions in real-world agricultural environments.

Overall, this project highlights how integrating image processing, deep learning, and edge computing can drive practical innovation in precision agriculture while serving as a strong learning platform for AI and embedded system development.

Acknowledgement

This work was conducted at the UMPSA STEM Lab, Faculty of Electrical and Electronics Engineering Technology, Universiti Malaysia Pahang Al-Sultan Abdullah (UMPSA), with support from academic, industry, and student collaborators.

BTE1522 DRE2213 – Week 11 BME280 – Cloud and Local IoT Visualisation

This week, Week 11, we reached an important milestone in the IoT learning journey. Building upon the foundations established in Weeks 9 and 10, this week’s activity focused on visualising sensor data through dashboards, using two different approaches:

      1. A cloud-hosted dashboard using Adafruit IO

      2. A self-hosted dashboard using HTML served directly from the Raspberry Pi Pico W (LilEx3)

By the end of this session, you are no longer just reading sensors; you have designed a complete IoT data pipeline, from sensing to networking to visualisation.

This week, we shift our attention from collecting data to presenting data.

Using the BME280 environmental sensor, you are able to work with:

        1. Temperature

        2. Humidity

        3. Atmospheric pressure

The same sensor data was then visualised using two different dashboard approaches, highlighting important design choices in IoT systems.

Approach 1: Cloud Dashboard Using Adafruit IO – Refer to Act 7 in TINTA and Google Classroom

This method introduces students to cloud-based IoT platforms, a common industry practice.

Key concepts:

        1. WiFi connectivity

        2. MQTT protocol

        3. Publishing data to a third-party server

        4. Remote access and visualisation

Code Explanation (Adafruit IO Method)

from machine import Pin, I2C
import network
import time
from umqtt.simple import MQTTClient
import bme280
      • Imports modules for hardware control, networking, MQTT communication, and the BME280 sensor.

i2c = I2C(1, sda=Pin(2), scl=Pin(3), freq=400000)
bme = bme280.BME280(i2c=i2c)
      • Initializes the I2C bus and the BME280 sensor.

wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect(wifi_ssid, wifi_password)
      • Connects the Pico W to a WiFi network.

mqtt_client = MQTTClient(
    client_id=mqtt_client_id,
    server=mqtt_host,
    user=mqtt_username,
    password=mqtt_password)
      • Configures the MQTT client for communication with Adafruit IO.

values = bme.values
temp = values[0]
mqtt_client.publish(temp_feed, temp)
      • Reads sensor values and publishes temperature data to the cloud dashboard.

This approach shows how sensor data can be accessed anywhere in the world, but depends on external services and internet connectivity.
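
Putting the excerpts above together, a minimal end-to-end publishing loop might look like the following sketch. The WiFi credentials, Adafruit IO username/key, and feed name are placeholders, and the call to mqtt_client.connect() (not shown in the excerpts) is assumed before publishing.

import time
import network
import bme280
from machine import Pin, I2C
from umqtt.simple import MQTTClient

# Placeholder credentials -- replace with your own WiFi and Adafruit IO details
wifi_ssid, wifi_password = "YOUR_SSID", "YOUR_PASSWORD"
mqtt_host = "io.adafruit.com"
mqtt_username = "YOUR_AIO_USERNAME"
mqtt_password = "YOUR_AIO_KEY"
mqtt_client_id = "picow-bme280"
temp_feed = mqtt_username + "/feeds/temperature"

# BME280 on I2C1 (GP2 = SDA, GP3 = SCL), as in the excerpts above
i2c = I2C(1, sda=Pin(2), scl=Pin(3), freq=400000)
bme = bme280.BME280(i2c=i2c)

# Join the WiFi network and wait for a connection
wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect(wifi_ssid, wifi_password)
while not wlan.isconnected():
    time.sleep(0.5)

# Connect to Adafruit IO over MQTT and publish the temperature every 10 s
mqtt_client = MQTTClient(
    client_id=mqtt_client_id,
    server=mqtt_host,
    user=mqtt_username,
    password=mqtt_password)
mqtt_client.connect()
while True:
    temp = bme.values[0]                 # formatted temperature string
    mqtt_client.publish(temp_feed, temp)
    time.sleep(10)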

Approach 2: Self-Hosted HTML Dashboard on Pico W

This method shifts learning toward edge computing and embedded web servers.

Key concepts:

        1. HTTP client–server model

        2. Serving HTML from a microcontroller

        3. JSON data exchange

        4. JavaScript-based live updates

        5. Local network dashboards

 

Code Explanation (HTML Dashboard Method)

import socket
      • Enables the Pico W to act as a web server.

html = """<html>...</html>"""
      • Stores the dashboard webpage directly in Python memory.

s = socket.socket()
s.bind(('0.0.0.0', 80))
s.listen(1)
      • Starts an HTTP server on port 80.

if "/data" in request:
      • Distinguishes between:

        • Page requests (/)

        • Data requests (/data)

values = bme.values
      • Reads temperature, humidity, and pressure in real time.

fetch('/data')
      • JavaScript on the webpage periodically requests new sensor data and updates the display without refreshing the page.

This approach emphasizes system integration, where the device itself becomes the dashboard — similar to ground stations and embedded monitoring panels.
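
For reference, a complete minimal version of this self-hosted dashboard is sketched below. It assumes the WiFi connection from Approach 1 has already been established; the page layout, element IDs, and two-second polling interval are illustrative choices, not the exact code used in class.

import socket
import json
import bme280
from machine import Pin, I2C

# BME280 on the same I2C pins used earlier in these notes
i2c = I2C(1, sda=Pin(2), scl=Pin(3), freq=400000)
bme = bme280.BME280(i2c=i2c)

# Dashboard page: JavaScript polls /data every 2 s and updates the readings
html = """<html><body>
<h2>LilEx3 BME280 Dashboard</h2>
<p>Temperature: <span id="t">-</span></p>
<p>Pressure: <span id="p">-</span></p>
<p>Humidity: <span id="h">-</span></p>
<script>
setInterval(function() {
  fetch('/data').then(r => r.json()).then(d => {
    document.getElementById('t').textContent = d.t;
    document.getElementById('p').textContent = d.p;
    document.getElementById('h').textContent = d.h;
  });
}, 2000);
</script>
</body></html>"""

# Listen for HTTP requests on port 80 (WiFi connection assumed already up)
s = socket.socket()
s.bind(('0.0.0.0', 80))
s.listen(1)

while True:
    conn, addr = s.accept()
    request = conn.recv(1024).decode()
    if "/data" in request:
        # Data request: return the latest readings as JSON
        t, p, h = bme.values   # formatted temperature, pressure, humidity strings
        body = json.dumps({"t": t, "p": p, "h": h})
        conn.send("HTTP/1.1 200 OK\r\nContent-Type: application/json\r\n\r\n" + body)
    else:
        # Page request: serve the dashboard HTML
        conn.send("HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n" + html)
    conn.close()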

Comparing Both Dashboard Approaches

Feature | Adafruit IO | HTML on Pico W
Hosting | Cloud | Local (device)
Internet required | Yes | Local WiFi only
Protocol | MQTT | HTTP
Complexity | Lower | Higher
Control | Limited | Full
Educational value | Intro to IoT cloud | Full-stack IoT

Both approaches are valuable, and understanding when to use each is an important engineering skill.

Bringing It All Together

By connecting:

    1. Weeks 9 & 10 (MPU6050 motion sensing & data logging)

    2. Week 11 (IoT dashboards and networking)

you are now capable of:

      1. Interfacing multiple sensors

      2. Logging and processing data

      3. Transmitting data over networks

      4. Designing dashboards (cloud and local)

      5. Building complete IoT systems

At this stage, you are no longer following isolated tutorials, but are now ready to design and execute your own IoT projects.

BTE1522 DRE2213 – Week 9 and 10 MPU6050

Dear DRE and BTE-ian,

Notes on serial data communication | Notes on reading MPU6050 data.

This week, you worked through one of the most exciting aspects of embedded systems and sensor-based computing: collecting, processing, and logging motion data using the MPU6050 sensor. Working with the LilEx3, our in-house Raspberry Pi Pico–based picosatellite simulator, you explored how real satellites interpret motion, orientation, and attitude information through microcontrollers and built-in algorithms.

This activity was designed not only to strengthen understanding of Python programming on microcontrollers, but also to demonstrate how sensor data can be captured, logged, and interpreted, a fundamental skill in IoT, robotics, aerospace, and scientific computing.

1. Introducing the MPU6050 Sensor

The MPU6050 combines a 3-axis accelerometer and 3-axis gyroscope, allowing us to detect:

        1. Linear acceleration (AX, AY, AZ)

        2. Angular velocity (GX, GY, GZ)

        3. Motion patterns

        4. Orientation of a device in space

In satellite engineering, this type of sensor is crucial for:

        1. Attitude determination

        2. Stabilisation

        3. Orientation control

        4. Deployment sequence monitoring

For our LiLex3 picosatellite simulator, this data helps you to understand how satellites “sense” their position and respond to environmental changes.

2. Python Programming on the Raspberry Pi Pico

To accomplish the task, you wrote MicroPython code to:

      1. Initialise the I2C communication bus

      2. Read real-time sensor values

      3. Display values on the Thonny console

      4. Log data into a .txt file for later analysis

This hands-on exercise strengthened key Python concepts:

  1. Variables & Data Types
    • You handled multiple numeric readings and stored them in variables such as ax, ay, az.
  2. Functions & Modular Code
    • You used functions like mpu.values() and learned how functions can return multiple sensor readings at once.
  3. Loops
    • A continuous while True: loop was used to collect real-time data every second.
  4. File Handling
    • One of the most important skills today was learning how to open, write, and save data to a file, which is essential for logging experiments. Example snippet:
      file = open("data.txt", "a")
      file.write(f"{count},{ax},{ay},{az},{gx},{gy},{gz}\n")
      file.flush()
    • This allowed the Pico to create a growing dataset, which you can later open in Excel for plotting or further analysis.
  5. Printing to Console
    • The real-time values were also displayed in the Thonny console, helping you visualize live changes as you physically moved the LiLex3 module.
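
Tying these concepts together, a complete logging loop might look like the sketch below. The driver module name, its constructor, and the I2C pins are assumptions (use the wiring and the MPU6050 driver provided in class); only the mpu.values() call and the file-logging lines come from the notes above.

import time
from machine import Pin, I2C
import mpu6050   # assumed name of the MPU6050 driver saved on the Pico

# I2C pins are an assumption -- follow the LilEx3 wiring used in class
i2c = I2C(1, sda=Pin(2), scl=Pin(3), freq=400000)
mpu = mpu6050.MPU6050(i2c)   # assumed constructor of the class driver

file = open("data.txt", "a")   # append so earlier runs are kept
count = 0
while True:
    ax, ay, az, gx, gy, gz = mpu.values()   # six readings at once, as in the notes
    print(count, ax, ay, az, gx, gy, gz)    # live view in the Thonny console
    file.write(f"{count},{ax},{ay},{az},{gx},{gy},{gz}\n")
    file.flush()                            # push the line to flash storage
    count += 1
    time.sleep(1)                           # one sample per second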

3. Experiencing Motion: Determining Roll, Pitch, and Yaw

Rather than just reading “raw numbers,” you were tasked with interpreting the meaning behind the MPU6050 readings.

Through controlled physical movement of the LiLex3:

        1. Pitch changed when tilting forward/backward

        2. Roll changed when tilting left/right

        3. Yaw changed when rotating horizontally (similar to turning a compass)

By observing accelerometer and gyroscope patterns, you began to understand how flight controllers, drones, and satellites estimate their orientation in space.

This experience reinforces why MPU data is vital in aerospace applications:

      1. CubeSat attitude determination

      2. Drone flight stabilization

      3. Rocket telemetry

      4. Robotics navigation

      5. VR/AR motion tracking

You were then encouraged to record the sensor readings corresponding to specific movements and attempt simple calculations of roll, pitch, and yaw using standard trigonometric formulas (e.g., atan2).
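
As a starting point for those calculations, roll and pitch can be estimated from the accelerometer alone with the standard atan2 formulas. The sketch below assumes ax, ay, and az are accelerations in g along the X, Y, and Z axes; yaw cannot be obtained from the accelerometer alone and needs the gyroscope or a magnetometer.

import math

# Assumed inputs: ax, ay, az from the MPU6050, expressed in g
ax, ay, az = 0.02, 0.45, 0.89   # example reading while the module is tilted

roll = math.degrees(math.atan2(ay, az))                          # left/right tilt
pitch = math.degrees(math.atan2(-ax, math.sqrt(ay*ay + az*az)))  # forward/backward tilt
print("roll:", roll, "pitch:", pitch)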

4. Data Logging: Building a Dataset for Analysis

One of the biggest takeaways was the importance of data logging.

By saving values into a .txt file, you learned how to:

        1. Record experimental data
        2. Align timestamps and readings
        3. Import the file into Excel
        4. Plot sensor graphs (AX vs. time, pitch changes, etc.)
        5. Observe patterns corresponding to movement

This introduces you to real scientific data workflows used in:

        1. Research experiments

        2. IoT sensor monitoring

        3. Engineering testing

        4. Satellite mission data collection

The logged dataset becomes the “flight log” for your miniature picosatellite simulator.
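
If you prefer a scripted alternative to Excel, the same logged file can be plotted on a PC with a short Python script. Pandas and matplotlib are assumed to be installed, and the column names simply mirror the order written by the logging code.

# Runs on a PC, not on the Pico
import pandas as pd
import matplotlib.pyplot as plt

cols = ["count", "ax", "ay", "az", "gx", "gy", "gz"]
data = pd.read_csv("data.txt", names=cols)   # the logged file has no header row

data.plot(x="count", y=["ax", "ay", "az"], title="Acceleration vs. sample")
plt.xlabel("Sample (1 per second)")
plt.ylabel("Acceleration")
plt.show()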

5. Conclusion: Why Today’s Activity Matters

Today’s class was not just about wiring a sensor and reading numbers. It was about understanding how real systems sense, interpret, and record the world around them.

You learned:

        1. Embedded Python programming

        2. Real-time sensor acquisition

        3. Data logging techniques

        4. Interpreting physical motion through numerical patterns

        5. Satellite-style orientation measurement

By the end of the session, every student had generated their own dataset and gained insight into how satellites determine roll, pitch, and yaw—all through hands-on experimentation with the LiLex3 and MPU6050.

This activity bridges classroom concepts with real aerospace and IoT engineering, preparing you for more advanced missions involving filtering (Kalman), attitude determination, and flight-control algorithms.