Discussion: UMPSA STEM Lab – PPD Pekan

Presented the 2026 program line-up to PPD Pekan, a valued collaborator and supporter since 2016.
We are excited about the upcoming TalentCorp 2026 initiatives scheduled for June and July. These will include workshops for students and teachers focusing on Computational Thinking through block programming and AI, as well as Digital Making with Arduino, Python, Raspberry Pi, and drones. The program will also feature Edge AI applications such as data analytics and AI image processing, alongside digital literacy and inclusive STEM outreach through PKI. Teacher professional development will be a key component, complemented by training sessions, symposiums, and pedagogical research. Additionally, we aim to strengthen networking through PAJSK at the national level.
Looking forward to engaging in more programs and creating impactful STEM experiences for all.

DRE2213 – Week 13 Project Demonstration – SULAM

Bringing Python, IoT, and Physical Computing Together =)

Today’s class marked an important milestone for DRE2213 – Programming and Data Structure, as students presented their final projects developed using Raspberry Pi and Python, with a strong focus on environmental sensing using the BME280 sensor. The SULAM session showcased not only technical competence, but also how far the students have progressed in applying programming concepts to real-world systems – in this case, a monitoring system for Perpustakaan UMPSA Pekan.

I am truly impressed by the level of achievement demonstrated by the students. Each group successfully implemented a complete IoT-based system, covering three essential components of modern embedded and data-driven applications.

1. Closed-Loop Sensor Integration

Students demonstrated their ability to build closed-loop systems by interfacing the BME280 temperature, humidity, and pressure sensor with the Raspberry Pi. Based on predefined threshold values, the system was able to trigger actuators such as LEDs and buzzers, reinforcing key concepts in sensor reading, decision-making logic, and control flow in Python.
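
As an illustration of this pattern, here is a minimal closed-loop sketch assuming the community smbus2 and bme280 Python packages together with RPi.GPIO; the pin numbers and the 30 °C threshold are illustrative, not taken from any particular student project.

```python
# Minimal closed-loop sketch: read the BME280, compare against a threshold,
# and drive an LED and buzzer. Assumes the community smbus2/bme280 packages
# and RPi.GPIO; pin numbers and the 30 °C threshold are illustrative.
import time
import smbus2
import bme280
import RPi.GPIO as GPIO

I2C_ADDRESS = 0x76            # common BME280 I2C address
LED_PIN, BUZZER_PIN = 17, 27  # BCM pin numbers (illustrative)
TEMP_THRESHOLD_C = 30.0       # example decision threshold

bus = smbus2.SMBus(1)
calibration = bme280.load_calibration_params(bus, I2C_ADDRESS)

GPIO.setmode(GPIO.BCM)
GPIO.setup([LED_PIN, BUZZER_PIN], GPIO.OUT)

try:
    while True:
        sample = bme280.sample(bus, I2C_ADDRESS, calibration)
        too_hot = sample.temperature > TEMP_THRESHOLD_C
        GPIO.output(LED_PIN, too_hot)     # sensor -> decision -> actuator
        GPIO.output(BUZZER_PIN, too_hot)
        print(f"{sample.temperature:.1f} C  {sample.humidity:.1f} %  {sample.pressure:.1f} hPa")
        time.sleep(2)
finally:
    GPIO.cleanup()
```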

2. Data Logging and Management

Another highlight was the diversity in data management approaches. Some groups opted for cloud-based databases such as Firebase, while others used Google Sheets or local storage solutions. This exposed students to different data structures, data persistence methods, and practical considerations in handling sensor data over time.
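
For the local-storage route, a minimal sketch might look like the following; the file name and column layout are illustrative.

```python
# Minimal sketch of local persistence: append timestamped readings to a CSV file.
# File name and column layout are illustrative.
import csv
import os
from datetime import datetime

LOG_FILE = "bme280_log.csv"

def log_reading(temperature, humidity, pressure, path=LOG_FILE):
    """Append one timestamped sample; write a header row if the file is new."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(["timestamp", "temperature_c", "humidity_pct", "pressure_hpa"])
        writer.writerow([datetime.now().isoformat(timespec="seconds"),
                         temperature, humidity, pressure])

log_reading(27.4, 61.2, 1009.8)   # example values
```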

3. Dashboard Development and Visualization

Students also demonstrated creativity and flexibility in building dashboards to visualize sensor data. A wide range of tools was used, including:

  • HTML-based dashboards

  • Adafruit IO

  • Flask web applications

  • Streamlit dashboards

This variety reflects students’ growing confidence in selecting appropriate tools and frameworks to communicate data effectively.
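
As a flavor of the Flask route, here is a minimal sketch of a web endpoint that serves the latest logged reading as JSON; it assumes the CSV layout from the logging sketch above, and the route name is illustrative.

```python
# Minimal sketch of a Flask endpoint serving the latest logged reading as JSON.
# Assumes the CSV layout from the logging sketch above; route name is illustrative.
import csv
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/latest")
def latest():
    with open("bme280_log.csv") as f:
        rows = list(csv.DictReader(f))
    return jsonify(rows[-1] if rows else {})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)   # reachable from other devices on the LAN
```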

From Games to Physical Systems – A Meaningful Learning Journey

At the beginning of this course, students were introduced to Python programming through a slider game developed using Pygame. This approach allowed them to grasp fundamental programming concepts—such as variables, loops, conditionals, and functions—within a digital and interactive environment.
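
For a sense of that starting point, a minimal Pygame loop in this spirit might look like the following; the window size, colors, and controls are illustrative rather than the actual course game.

```python
# Minimal Pygame "slider" loop in the spirit of the course intro: variables,
# a game loop, conditionals, and a small function. Window size, colors, and
# controls are illustrative, not the actual course game.
import pygame

pygame.init()
screen = pygame.display.set_mode((400, 100))
clock = pygame.time.Clock()
x, speed = 180, 4   # slider position and step per frame

def clamp(value, low, high):
    """Keep the slider inside the window."""
    return max(low, min(high, value))

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    keys = pygame.key.get_pressed()
    if keys[pygame.K_LEFT]:
        x -= speed
    if keys[pygame.K_RIGHT]:
        x += speed
    x = clamp(x, 0, 360)
    screen.fill((30, 30, 30))
    pygame.draw.rect(screen, (0, 200, 120), (x, 40, 40, 20))
    pygame.display.flip()
    clock.tick(60)

pygame.quit()
```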

As the course progressed, students transitioned from digital game development to physical computing projects, applying the same programming principles to real hardware and sensors. This combination of digital embodiment (game development) and physical embodiment (IoT systems) provided a strong foundation for understanding how software interacts with the real world.

Learning programming in an interactive and hands-on manner enables students to truly understand what their code is doing. Instead of writing abstract programs, they can see, hear, and measure the outcomes of their code—whether it is a game reacting to user input or a sensor triggering a buzzer based on environmental conditions.

Closing Reflections

Today’s presentations clearly demonstrated that interactive, project-based learning is an effective way to teach programming and data structures. By engaging with both digital and physical systems, students developed not only technical skills but also problem-solving confidence and design thinking.

Well done to all DRE2213 students on your excellent work. Your projects reflect strong effort, creativity, and meaningful learning. Keep building, keep experimenting, and keep pushing the boundaries of what you can create with Python and Raspberry Pi.


Publication 2025/7 – Real-time FFB ripeness detection using IoT-enabled YOLOv8n on Raspberry Pi 4 edge devices for precision agriculture


The palm oil industry plays a critical role in Malaysia’s agricultural economy, where accurate and timely harvesting of Fresh Fruit Bunches (FFB) directly impacts oil yield and quality. Traditionally, ripeness assessment relies heavily on manual inspection, which is subjective, labor-intensive, and inconsistent under varying field conditions.

To address these challenges, our research introduces a real-time, IoT-enabled Edge AI system for automatic palm oil FFB ripeness detection using a YOLOv8n deep learning model deployed on a Raspberry Pi 4. The system enables on-site intelligence, minimizes data transmission latency, and supports smarter plantation decision-making.

Figure 1: Block Diagram of the Proposed System

The proposed system integrates computer vision, edge computing, and IoT services into a compact and deployable architecture. The core components include:

      1. Camera Module: Captures real-time images of palm oil FFB either on-tree or post-harvest (on the ground).

      2. Edge Device (Raspberry Pi 4): Executes the YOLOv8n model locally for real-time inference.

      3. Deep Learning Model (YOLOv8n): Detects and classifies palm fruit ripeness stages.

      4. IoT Communication Layer: Transmits detection results and metadata to the cloud.

      5. Web-Based Dashboard: Visualizes ripeness distribution and system performance for plantation managers.

By processing data at the edge, the system significantly reduces dependence on cloud computing while maintaining high detection accuracy.
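
As an illustration of the edge-inference step, here is a minimal sketch using the Ultralytics YOLOv8 API with OpenCV; the weights file name is an assumed stand-in for the trained model, and printing detections stands in for the actual IoT hand-off.

```python
# Minimal sketch of on-device inference with Ultralytics YOLOv8 and OpenCV.
# 'ffb_ripeness.pt' is an assumed name standing in for the trained YOLOv8n weights.
from ultralytics import YOLO
import cv2

model = YOLO("ffb_ripeness.pt")   # fine-tuned YOLOv8n weights (assumed file name)
cap = cv2.VideoCapture(0)         # camera module or USB camera on the Pi

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]   # local inference, no cloud round-trip
    for box in result.boxes:
        label = result.names[int(box.cls)]
        print(f"{label}: {float(box.conf):.2f}")   # hand off to the IoT layer here
    cv2.imshow("FFB ripeness detection", result.plot())
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```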

Model Training and Testing Workflow

Figure 2: Training and Testing Workflow

A supervised deep learning approach was adopted for model development:

  1. Dataset Preparation
    Palm oil FFB images were collected under real plantation conditions and annotated according to ripeness stages.

  2. Annotation and Augmentation
    Image labeling and dataset management were performed using Roboflow.
    To improve generalization and prevent overfitting, data augmentation techniques were applied, including:

    • Image flipping

    • Rotation

    • Brightness and exposure adjustment

  3. Model Fine-Tuning
    A pre-trained YOLOv8n model was fine-tuned on the custom dataset to balance accuracy and computational efficiency.

  4. Training and Validation
    The dataset was split into training, validation, and testing subsets to ensure reliable performance evaluation.

This workflow ensures that the trained model is robust to real-world lighting variations and plantation environments.
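
For readers who want to reproduce a similar workflow, a minimal fine-tuning sketch with the Ultralytics API might look like the following; the dataset path and hyperparameters are illustrative, not the paper's actual training configuration.

```python
# Minimal fine-tuning sketch with the Ultralytics API; dataset path and
# hyperparameters are illustrative, not the paper's actual configuration.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")           # start from pre-trained YOLOv8n weights
model.train(
    data="ffb_dataset/data.yaml",    # Roboflow-exported YOLO dataset (assumed path)
    epochs=100,
    imgsz=640,
    batch=16,
)
metrics = model.val()                # evaluate on the validation split
print(metrics.box.map)               # mAP50-95 on the validation set
```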

Model Evaluation Indicators

Figure 3: Model Evaluation Metrics (Precision, Recall, mAP)

To evaluate the effectiveness of the proposed model, several standard performance metrics were used:

      1. Precision – Measures the accuracy of positive detections.

      2. Recall – Indicates the model’s ability to detect all relevant ripe fruit instances.

      3. Mean Average Precision (mAP) – Averages precision over the precision–recall curve (obtained by sweeping confidence thresholds) and across classes, summarizing overall detection performance.
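
For reference, these metrics have standard definitions in terms of true positives (TP), false positives (FP), and false negatives (FN), where AP_i denotes the average precision of class i over N classes:

```latex
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
\mathrm{mAP} = \frac{1}{N}\sum_{i=1}^{N} \mathrm{AP}_i
```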

The YOLOv8n model achieved high precision and recall, demonstrating reliable detection while maintaining real-time inference capability on a resource-constrained edge device.

Compared to earlier YOLOv4 and YOLOv5 implementations reported in the literature, the proposed approach delivers a balanced trade-off between accuracy, speed, and deployment efficiency, making it well-suited for Edge AI applications.

Real-Time Deployment Scenarios

The system was evaluated under two practical plantation scenarios:

      1. On-tree FFB detection (pre-harvest)

      2. On-ground FFB detection (post-harvest)

Both scenarios reflect real operational conditions and validate the system’s adaptability for field deployment.

IoT Dashboard for Plantation Monitoring

Figure 4: Web-Based Monitoring Dashboard

A web-based dashboard was developed to support plantation estate management, providing:

      1. Real-time visualization of detected FFB ripeness levels

      2. Historical data tracking

      3. Summary statistics for harvest planning

      4. System status monitoring

The dashboard enables plantation managers to make data-driven decisions without requiring technical expertise in AI or computer vision.
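
The writeup does not name the transport used between the edge device and the dashboard; as one plausible implementation of that communication layer, a minimal MQTT publishing sketch (paho-mqtt 2.x) might look like the following, with the broker address, topic, and payload fields as placeholders.

```python
# One plausible implementation of the IoT communication layer using MQTT
# (paho-mqtt 2.x). Broker host, topic, and payload fields are placeholders.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.connect("broker.example.com", 1883)   # assumed broker address
client.loop_start()

detection = {
    "class": "ripe",                  # example ripeness label
    "confidence": 0.91,
    "timestamp": "2025-07-01T10:00:00",
}
info = client.publish("plantation/ffb/detections", json.dumps(detection), qos=1)
info.wait_for_publish()               # block until the broker acknowledges

client.loop_stop()
client.disconnect()
```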

Why Edge AI Matters for Agriculture

Deploying AI directly on edge devices offers several advantages, among them:

      1. Low latency – Immediate decision-making without cloud delays

      2. Reduced bandwidth usage – Only essential data is transmitted

      3. Improved privacy – Images remain local to the device

      4. Scalability – Suitable for large plantation deployments

This makes Edge AI particularly attractive for remote and resource-limited agricultural environments.

STEM Learning Impact 

This project has been a highly engaging learning experience, providing students and researchers with hands-on exposure to image processing, Artificial Intelligence (AI), and Edge Computing. By working with YOLO-based object detection, participants gained practical insight into how AI models learn visual patterns, process real-world images, and make intelligent decisions. YOLO, as a deep learning–based computer vision algorithm, represents a core component of modern AI applications used in industry today.

Through this project, learners developed essential 21st-century STEM skills, including data annotation, model training and evaluation, system integration, and deployment on resource-constrained edge devices. Importantly, participants were able to see how AI moves beyond theory into real-world problem solving, particularly within the context of precision agriculture.

Looking ahead, this work provides a strong foundation for expanding AI-driven characterization methods, such as fruit size estimation, defect detection, and maturity analysis. The system can also be adapted for wider agricultural applications, including crop monitoring, yield estimation, and decision support for plantation management.

From a technology perspective, future efforts will explore deployment on diverse processors and edge platforms, enabling comparisons across low-power embedded systems and AI accelerators. This approach supports the development of cost-effective, scalable, and energy-efficient AI solutions, making advanced technology more accessible to local communities and industries.

Overall, this project demonstrates how STEM-based AI initiatives can nurture innovation, strengthen digital competencies, and contribute to sustainable agricultural practices—aligning with national priorities in Industry 4.0, smart farming, and digital transformation.

Reflections and Future Directions

Working on this project has been both technically enriching and intellectually exciting, as it allowed us to deepen our understanding of image processing and Artificial Intelligence (AI) through hands-on experimentation. In particular, implementing YOLO-based object detection, a deep learning approach within the broader AI domain, provided valuable insights into how modern computer vision models learn visual features, make real-time predictions, and operate under resource constraints on edge devices.

Beyond ripeness classification, this project opens opportunities to expand AI-driven characterization methods, including size estimation, defect detection, maturity grading, and yield prediction. We also look forward to extending these techniques to broader agricultural applications, such as crop health monitoring, disease detection, and automated harvesting support.

From a systems perspective, future work will explore optimization and deployment across different processors and edge computing platforms, including alternative single-board computers, AI accelerators, and low-power embedded systems. This aligns with the growing need for scalable, cost-effective, and energy-efficient Edge AI solutions in real-world agricultural environments.

Overall, this project highlights how integrating image processing, deep learning, and edge computing can drive practical innovation in precision agriculture while serving as a strong learning platform for AI and embedded system development.

Acknowledgement

This work was conducted at the UMPSA STEM Lab, Faculty of Electrical and Electronics Engineering Technology, Universiti Malaysia Pahang Al-Sultan Abdullah (UMPSA), with support from academic, industry, and student collaborators.