Monitor the Beehives

  • Writer: Ziwei Zhu
  • Nov 26, 2018
  • 1 min read

Updated: Jan 17, 2019

GitHub Link: https://github.com/ziwei-zhu/YOLO-V3_WaggleNet.git

Testing the Trained Model on a Real-Time Video Stream

From a practical standpoint, bee farmers are deeply concerned with real-time monitoring of their beehives. One-time object classification performed after data collection is complete cannot meet this demand for synchronization. Even if massive numbers of pictures and videos can be collected and processed through pipelines, doing so still consumes tremendous resources, including memory, computation, and bandwidth. Moreover, models trained on static pictures adapt poorly to the dynamic movement of objects, since an object's size and shading change drastically as it moves. The features of moving objects cannot be generalized from a few static models.

To address these concerns, I turn to real-time detection for monitoring the conditions inside beehives. YOLO V3 (You Only Look Once, Version 3) is the newest and one of the most widely used real-time detection models; it applies a pre-trained model to a live video stream. YOLO V3 takes a large set of static images as input to produce a model that can adapt to objects of different sizes in a video. Object detection is synchronized with the input stream, so the system can give instantaneous feedback on the conditions in the beehives. This report describes the complete process of training and deploying the YOLO V3 neural network with a single self-defined class and applying it to practical object detection at a real beehive site.
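
Before walking through the training pipeline, here is a minimal sketch of what the final real-time stage looks like, using OpenCV's DNN module to run darknet-format YOLO V3 weights on a live stream. The file names yolov3-bee.cfg and yolov3-bee_final.weights and the single "bee" class are my assumptions for illustration; the actual files are the outputs of the training described in the repository linked above.

```python
import cv2
import numpy as np

# Hypothetical file names: the single-class darknet config and the
# weights produced by training; the "bee" label is also an assumption.
CFG = "yolov3-bee.cfg"
WEIGHTS = "yolov3-bee_final.weights"
CONF_THRESH, NMS_THRESH = 0.5, 0.4

net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
layers = net.getLayerNames()
# getUnconnectedOutLayers() gives 1-based indices of the three YOLO output layers
out_layers = [layers[i - 1] for i in np.array(net.getUnconnectedOutLayers()).flatten()]

cap = cv2.VideoCapture(0)  # 0 = default camera; a file path or RTSP URL also works
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]

    # Scale pixels to [0, 1] and resize to the 416x416 input the default cfg expects
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)

    boxes, scores = [], []
    for output in net.forward(out_layers):
        for det in output:
            conf = float(det[5])  # only one class, so a single class score per box
            if conf > CONF_THRESH:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                scores.append(conf)

    # Non-maximum suppression drops overlapping duplicate detections
    for i in np.array(cv2.dnn.NMSBoxes(boxes, scores, CONF_THRESH, NMS_THRESH)).flatten():
        x, y, bw, bh = boxes[i]
        cv2.rectangle(frame, (x, y), (x + bw, y + bh), (0, 255, 0), 2)

    cv2.imshow("bee detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

For the single self-defined class, the standard darknet recipe is to set classes=1 in each of the three [yolo] sections of the .cfg and filters=18 in the convolutional layer immediately before each one, since every YOLO layer predicts 3 anchors × (1 class + 5 box/objectness values).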
