
CN112381044A - Method and device for detecting ingestion state of fish - Google Patents

Method and device for detecting ingestion state of fish

Info

Publication number
CN112381044A
CN112381044A (Application CN202011364526.XA)
Authority
CN
China
Prior art keywords
fish
feeding
fish school
texture
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011364526.XA
Other languages
Chinese (zh)
Inventor
刘春红
孔庆辰
叶荣珂
李道亮
陈英义
段青玲
张玉泉
蔡振鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Agricultural University
Original Assignee
China Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Agricultural University
Priority to CN202011364526.XA
Publication of CN112381044A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/02Agriculture; Fishing; Forestry; Mining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Agronomy & Crop Science (AREA)
  • Marine Sciences & Fisheries (AREA)
  • Mining & Mineral Resources (AREA)
  • Animal Husbandry (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of aquaculture, and particularly relates to a method for detecting the ingestion state of fish. The detection method comprises the following steps: obtaining the texture features and shape features of the fish school while it is feeding; and inputting these texture and shape features into a preset deep forest model to obtain the ingestion state of the fish school to be detected. The method constructs a deep forest model from the texture and shape features of the feeding fish school together with the corresponding ingestion state, so that the ingestion state of a fish school can be judged directly from the texture and shape features of its feeding pictures, with an accuracy of about 95%. The method provides theoretical support and a practical solution for the accurate feeding of fish.

Description

Method and device for detecting ingestion state of fish
Technical Field
The invention relates to the technical field of aquaculture, in particular to a method and a device for detecting the ingestion state of fishes.
Background
Currently, the development of intensive aquaculture has increased the proportion of feed in total cost, and both insufficient and excessive feeding during production can affect the normal growth of the fish. Insufficient feeding reduces the weight of the fish and lowers the economic benefit. Excessive feeding causes economic loss, and the uneaten feed pollutes the environment and causes eutrophication of the water. In a recirculating aquaculture system, the control of accurate fish feeding is therefore subject to higher requirements.
In the study of fish feeding behavior, the prior art mainly comprises two methods: indirect detection and direct detection. The indirect detection method estimates the ingestion state of the fish school by detecting the amount of residual bait in the water in real time. However, actual recirculating aquaculture plants often have weak light and dense fish schools, which makes separating the residual bait from captured pictures extremely difficult. Chinese patent application No. CN201710238952.0, entitled "Automatic feeding and water quality monitoring control system and method for aquaculture", uses the OTSU and EM algorithms to calculate the amount of residual bait, but because of overlapping bait and occlusion by the fish school and aquatic debris, the measurement accuracy is unstable and errors arise easily. The direct detection method, by contrast, judges the feeding state from images of the fish school itself; it offers higher precision than traditional detection, but it usually requires a large sample size and excessive hyper-parameters, which makes practical operation difficult. Chinese patent application No. CN201910805322.6, entitled "Method for accurately determining the bait feeding amount for river crab cultivation", photographs river crabs with an underwater camera to train a multilayer convolutional neural network; the training time is long, and the hyper-parameters to be set and the required sample size are both excessive.
Disclosure of Invention
The invention provides a method and a device for detecting the ingestion state of fish, aiming to solve the technical problems in the prior art that the measurement error of the indirect detection method is large while the direct detection method needs a large sample size and excessive hyper-parameters.
In a first aspect, the present invention provides a method for detecting a feeding state of fish, comprising:
photographing to obtain a fish school feeding picture;
extracting texture features and shape features of the fish school feeding picture;
inputting the texture features and the shape features into a preset deep forest model to obtain the ingestion state of the fish school to be detected;
wherein the preset deep forest model is trained on the ingestion states of fish schools and the corresponding texture and shape features during feeding; the ingestion states include satiety, low hunger, moderate hunger and high hunger.
Further, extracting the texture features and shape features of the fish school feeding picture specifically comprises:
removing the background from the fish school feeding picture, establishing a gray level co-occurrence matrix, and taking the entropy of the gray level co-occurrence matrix as the texture feature;
and taking the circularity of the segmented fish-school region as the shape feature.
Further, inputting the texture features and the shape features into the preset deep forest model to obtain the ingestion state of the fish school to be detected specifically comprises the following steps:
scanning the texture features and the shape features with a multi-granularity scanning module to obtain feature vectors;
inputting the feature vectors into a random forest and a completely random forest in the multi-granularity scanning module and outputting classification vectors;
and concatenating the classification vectors with the texture and shape features and inputting them into the preset deep forest model to obtain the ingestion state of the fish school to be detected.
Further, the preset deep forest model is constructed by the following method:
photographing fish schools in different feeding states to obtain a plurality of fish school feeding pictures and the feeding state corresponding to each picture;
removing the background from each fish school feeding picture and establishing the corresponding gray level co-occurrence matrix;
processing the gray level co-occurrence matrix of each fish school feeding picture, extracting the entropy of the matrix as the texture feature and the circularity of the segmented region as the shape feature;
and taking the texture feature, shape feature and ingestion state of each fish school feeding picture as one training sample, thereby obtaining a plurality of training samples with which the preset deep forest model is trained.
Further, removing the background from each fish school feeding picture and establishing the corresponding gray level co-occurrence matrix specifically comprises:
extracting the image background from the plurality of fish school feeding pictures with the mean background method, subtracting the background from each picture with the background difference method to extract the moving foreground, and establishing the corresponding gray level co-occurrence matrix.
Further, extracting the moving foreground and establishing the corresponding gray level co-occurrence matrix specifically comprises:
setting a threshold T, binarizing the moving foreground pixel by pixel to obtain a binarized image, performing connectivity analysis to obtain a picture containing only the fish bodies, and establishing the corresponding gray level co-occurrence matrix for that picture.
Further, training the preset deep forest model with the plurality of training samples specifically comprises the following steps:
selecting 3 sliding windows to scan the training samples, obtaining 3 types of feature vectors of different dimensions; inputting the feature vectors into a random forest and a completely random forest to output classification vectors; and concatenating all the classification vectors into a cascade forest module for training;
each layer of the cascade forest module consists of at least 1 random forest and at least 1 completely random forest; after the classification vectors are input, each layer learns new classification vectors, and the new classification vectors of each layer are concatenated with that layer's input and passed to the next layer for learning.
Each layer of the cascade forest module may contain several random forests and several completely random forests; increasing the number of forests improves the accuracy of the model faster, but lengthens the modeling time.
Further, in the cascade forest module, starting from the second layer, cross-validation is performed after each layer finishes learning to check the performance of the model; if the overall accuracy improves by less than 1%, training of the preset deep forest model is complete; otherwise, learning continues with the next layer.
Further, the ingestion state is specifically:
(1) satiety: fish are in a satiated state and do not ingest food;
(2) low hunger: the fish eat only the food in front of them and do not swim to feed;
(3) moderate hunger: the fish swim to feed and return to their original position after feeding;
(4) high hunger: the fish swim among the food and compete for it.
The method provided by the invention can detect the ingestion state of various fishes, such as tilapia, mackerel and mirror carp.
In a second aspect, the present invention provides a fish feeding state detection apparatus comprising:
the image acquisition module is used for photographing to obtain a fish school feeding picture;
the characteristic extraction module is used for extracting the texture characteristics and the shape characteristics of the fish school feeding picture;
the ingestion state detection module is used for inputting the texture features and the shape features into a preset deep forest model to obtain the ingestion state of the fish school to be detected;
wherein the preset deep forest model is trained on the ingestion states of fish schools and the corresponding texture and shape features during feeding; the ingestion states include satiety, low hunger, moderate hunger and high hunger.
The invention has the following beneficial effects:
according to the method, the texture features and the shape features of the fish school during feeding and the corresponding feeding state are combined to construct the deep forest model, and then the feeding state of the fish school can be judged directly through the texture features and the shape features of the fish school feeding picture. Compared with the prior art, the method has the advantages of accurate judgment result, less required input parameters and sample amount, and provides theoretical support and a solution for accurate feeding of the fishes.
Drawings
FIG. 1 is a flow chart of a method for detecting a feeding state of fish according to the present invention.
Detailed Description
The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
Example 1
This embodiment provides a method for classifying and detecting the feeding state of fish based on a deep forest. The flow chart is shown in FIG. 1. The method specifically comprises the following steps:
and S1, shooting fish schools with different feeding states to obtain a plurality of fish school feeding pictures and a feeding state corresponding to each fish school feeding picture.
Specifically, tilapia were reared in 4 feeding ponds for 4 weeks before the experiment (30 fish per pond, 138 ± 4 g each) and fed floating pellets twice a day (8:00 and 16:00) at a daily feeding rate of 2% of body weight. The dissolved oxygen content was maintained in the range of 5.0 ± 1.0 mg/L and the water temperature at 20-25 °C so that the fish adapted to the experimental culture environment. The feeding plan was adjusted 2 days before the experiment so that the fish presented 4 different hunger states (1: satiety, the fish do not eat; 2: low hunger, the fish eat only the food in front of them and do not swim to feed; 3: moderate hunger, the fish swim to feed and return to their original position after feeding; 4: high hunger, the fish swim among the food and compete for it).
A Nikon D90 camera was mounted above the center of the tank, facing down toward the water surface (120 ± 0.2 cm), and captured 24-bit RGB (red, green, blue) video at a resolution of 1280 × 720 pixels and 24 frames/second. Clear frames were then picked out and labeled as samples.
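By way of illustration only (not part of the original disclosure), frame extraction from such a recording can be sketched in Python with OpenCV; the file name and the one-frame-per-second sampling stride are assumptions, and in the experiment the clear frames were selected and labeled by hand:

import cv2

cap = cv2.VideoCapture("tank_recording.mp4")  # hypothetical file name
frames, k = [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if k % 24 == 0:  # roughly one frame per second at 24 frames/second
        frames.append(frame)
    k += 1
cap.release()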
S2, removing the background from each fish school feeding picture and establishing the corresponding gray level co-occurrence matrix.
The background of the original fish school pictures is extracted with the mean background method: the pixel values at corresponding points of a plurality of pictures are summed and averaged to obtain the background template:
B(x,y) = (1/N) Σ_{i=1}^{N} f_i(x,y)
Then image segmentation is performed: using the background difference method, the continuously updated background model is subtracted from the current frame image to extract the moving foreground target. The steps are as follows:
(1) Calculate the difference: D_n(x,y) = |f_n(x,y) − B(x,y)|
(2) Set a threshold T and binarize the pixel points one by one to obtain a binarized image. Then perform connectivity analysis on the binarized image to finally obtain an image containing the complete moving target, i.e., the image containing only the fish bodies:
R_n(x,y) = 255 if D_n(x,y) > T, and R_n(x,y) = 0 otherwise
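Steps (1)-(2) can be sketched as follows (a minimal Python/OpenCV illustration, not the patent's implementation; the function names, the default threshold T and the minimum component area are assumptions):

import cv2
import numpy as np

def mean_background(frames):
    # Mean background method: average the pixel values of N frames
    return np.mean(np.stack(frames), axis=0).astype(np.float32)

def extract_fish_only(frame, background, T=30, min_area=50):
    # (1) difference: D_n(x,y) = |f_n(x,y) - B(x,y)|
    diff = cv2.absdiff(frame.astype(np.float32), background)
    gray = cv2.cvtColor(diff.astype(np.uint8), cv2.COLOR_BGR2GRAY)
    # (2) binarize pixel by pixel with threshold T
    _, binary = cv2.threshold(gray, T, 255, cv2.THRESH_BINARY)
    # connectivity analysis: keep only components large enough to be fish
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    mask = np.zeros_like(binary)
    for c in range(1, n):  # label 0 is the background
        if stats[c, cv2.CC_STAT_AREA] >= min_area:
            mask[labels == c] = 255
    # picture containing only the fish bodies
    return cv2.bitwise_and(frame, frame, mask=mask)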
(3) Calculate the gray level co-occurrence matrix of the image:
p(i, j, δ, θ) = { [(x, y), (x+Δx, y+Δy)] | f(x, y) = i, f(x+Δx, y+Δy) = j; x = 0, 1, …, N_x − 1; y = 0, 1, …, N_y − 1 }
where i, j = 0, 1, …, L−1; x and y are the pixel coordinates in the image; L is the number of gray levels of the image; and N_x, N_y are the numbers of rows and columns of the image, respectively.
The probability p(i, j, δ, θ) that two pixels separated by distance δ have gray values i and j is calculated in the four usual directions 0°, 45°, 90° and 135°. When δ = 1, the pixel-pair offsets (Δx, Δy) are (with x to the right and y upward):
① horizontal direction (0°): (Δx, Δy) = (1, 0);
② vertical direction (90°): (Δx, Δy) = (0, 1);
③ 45° direction: (Δx, Δy) = (1, 1);
④ 135° direction: (Δx, Δy) = (−1, 1).
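For illustration, the four-direction matrices for δ = 1 can be computed with scikit-image (a sketch under the assumption that the image is first quantized to a reduced number of gray levels; graycomatrix takes its angles in radians and uses its own offset convention):

import numpy as np
from skimage.feature import graycomatrix

def glcm_four_directions(gray_img, levels=64):
    # quantize to `levels` gray levels so the co-occurrence matrix stays small
    img = (gray_img.astype(np.float32) / 256.0 * levels).astype(np.uint8)
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]  # 0, 45, 90, 135 degrees
    # normed=True returns the probabilities p(i, j, delta, theta)
    return graycomatrix(img, distances=[1], angles=angles,
                        levels=levels, symmetric=True, normed=True)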
S3: extracting texture features and shape features
The texture features of the target picture are calculated from the obtained gray level co-occurrence matrix; the following five feature values are obtained:
first second moment
Figure BDA0002805036910000073
The second moment reflects the uniformity degree of the gray level distribution and the thickness degree of the texture of the fish school image. Because it is the sum of squares of each element of the gray level co-occurrence matrix, also called energy, and is used to reflect the convergence degree of fish school in the target fish school picture.
② Contrast (moment of inertia)
CON = Σ_i Σ_j (i − j)² p(i, j)
Contrast reflects the clarity and depth of the texture grooves of the target fish school image. When the fish school shows intense snatching behavior, the texture grooves are deep and the contrast is correspondingly large.
③ Correlation
COR = [Σ_i Σ_j (i · j) p(i, j) − u_1 u_2] / (σ_1 σ_2)
where u_1, u_2, σ_1, σ_2 are respectively defined as:
u_1 = Σ_i i Σ_j p(i, j)
u_2 = Σ_j j Σ_i p(i, j)
σ_1² = Σ_i (i − u_1)² Σ_j p(i, j)
σ_2² = Σ_j (j − u_2)² Σ_i p(i, j)
Correlation measures the similarity of the gray level co-occurrence matrix established from the fish school images in the row or column direction.
④ Entropy
ENT = −Σ_i Σ_j p(i, j) log p(i, j)
Entropy reflects the complexity or non-uniformity of the texture in the image. If the texture is complex, the entropy is large, indicating more complex fish school behavior; conversely, if the gray levels in the image are distributed uniformly, the elements of the co-occurrence matrix differ greatly in size and the entropy is small.
⑤ Inverse difference moment
IDM = Σ_i Σ_j p(i, j) / (1 + (i − j)²)
Comparison of the texture effects of the pictures shows that entropy best represents the complexity of fish school behavior: a complex texture yields a large entropy, indicating complex behavior, while uniformly distributed gray levels yield a small entropy. Entropy is therefore selected as the texture feature of the target fish school picture.
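The five feature values can be computed directly from a normalized co-occurrence matrix p, as in the following sketch (consistent with the formulas above; in this method only the entropy is retained as the texture feature):

import numpy as np

def texture_features(p):
    # p: normalized GLCM of shape (L, L)
    i, j = np.indices(p.shape)
    asm = np.sum(p ** 2)                           # 1. angular second moment (energy)
    contrast = np.sum((i - j) ** 2 * p)            # 2. contrast (moment of inertia)
    u1, u2 = np.sum(i * p), np.sum(j * p)          # row / column means
    s1 = np.sqrt(np.sum((i - u1) ** 2 * p))        # row / column standard deviations
    s2 = np.sqrt(np.sum((j - u2) ** 2 * p))
    correlation = (np.sum(i * j * p) - u1 * u2) / (s1 * s2)  # 3. correlation
    nz = p[p > 0]
    entropy = -np.sum(nz * np.log(nz))             # 4. entropy (the selected feature)
    idm = np.sum(p / (1.0 + (i - j) ** 2))         # 5. inverse difference moment
    return asm, contrast, correlation, entropy, idm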
The steps of extracting the shape feature are as follows:
The shape parameter characterizes the shape of the image: the larger the shape parameter, the lower the compactness of the image and the more scattered it is, indicating that the fish school is competing for food and that its edge image has become scattered. Circularity is used for the calculation:
R = 4πS/L²
where S is the area and L is the perimeter. The perimeter is computed as the number of pixel points on the image edge, and the area as the number of all pixel points within the region of interest.
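The circularity computation can be sketched as follows (assuming the binary fish-only mask from step S2; OpenCV's contour area and arc length stand in for the pixel counts described above):

import cv2
import numpy as np

def circularity(mask):
    # shape feature R = 4*pi*S / L^2 for the largest foreground region
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    c = max(contours, key=cv2.contourArea)
    S = cv2.contourArea(c)               # area: pixels inside the region
    L = cv2.arcLength(c, closed=True)    # perimeter: pixels along the edge
    return 4.0 * np.pi * S / (L ** 2)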
S5: and constructing a depth forest detection model according to the texture characteristics and the shape characteristics.
(1) Multi-granularity scanning is performed first. Three sliding windows of different sizes are selected and used to generate, by scanning, three types of feature vectors of different dimensions. The feature vectors are input into two different random forests in the multi-granularity scanning module (one random forest and one completely random forest, each containing 500 decision trees) to output a number of 4-dimensional classification vectors, and all the classification vectors are concatenated to obtain the result of the multi-granularity scanning stage.
(2) The cascade forest stage follows, in which each layer consists of 2 random forests and 2 completely random forests, each containing 500 decision trees. The classification vectors from the multi-granularity scanning stage are input into the 4 forests of the first layer for learning, yielding 4 four-dimensional classification vectors; these 4 classification vectors are then concatenated with the classification vectors of the multi-granularity scanning stage and input into the forests of the next layer for learning.
(3) The validation set is used for a cross-validation phase. After the training of each layer, cross-validation checks the performance of the model: if the overall accuracy has improved by more than 1%, continuing to deepen the cascade forest can still improve the model, and training of the next layer proceeds; if the overall accuracy has improved by less than 1%, the training process terminates and the current model is the final model.
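A compressed sketch of the cascade-forest stage is given below, using scikit-learn ensembles (RandomForestClassifier stands in for the random forests and ExtraTreesClassifier with max_features=1 for the completely random forests; the 2 + 2 layer layout, the 500 trees per forest and the 1% stopping rule follow the text, while the multi-granularity scanning stage and the k-fold generation of class vectors used by full gcForest implementations are omitted for brevity):

import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

class CascadeForest:
    # minimal cascade forest: each layer = 2 random + 2 completely random forests
    def __init__(self, n_trees=500, max_layers=10, tol=0.01):
        self.n_trees, self.max_layers, self.tol = n_trees, max_layers, tol
        self.layers = []

    def _new_layer(self):
        return ([RandomForestClassifier(n_estimators=self.n_trees) for _ in range(2)] +
                [ExtraTreesClassifier(n_estimators=self.n_trees, max_features=1)
                 for _ in range(2)])

    def fit(self, X, y):
        feats, best = X, 0.0
        for _ in range(self.max_layers):
            layer = self._new_layer()
            for f in layer:
                f.fit(feats, y)
            # cross-validation checks the performance after each layer
            acc = np.mean([cross_val_score(f, feats, y, cv=3).mean() for f in layer])
            self.layers.append(layer)
            if len(self.layers) > 1 and acc - best < self.tol:
                break  # overall accuracy improved by less than 1%: stop deepening
            best = max(best, acc)
            # concatenate the new class vectors with the layer input for the next layer
            feats = np.hstack([X] + [f.predict_proba(feats) for f in layer])
        return self

    def predict(self, X):
        feats = X
        for layer in self.layers[:-1]:
            feats = np.hstack([X] + [f.predict_proba(feats) for f in layer])
        probs = np.mean([f.predict_proba(feats) for f in self.layers[-1]], axis=0)
        return np.argmax(probs, axis=1)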
TABLE 1 Deep forest model [published as an image in the original document; contents not recoverable]
S6: and (4) classifying and measuring the fish ingestion pictures according to the deep forest detection model and obtaining the ingestion state.
And (5) determining the ingestion picture sample of the fish school to be detected by the deep forest model established in the step 5. And repeating the processes of S1-S3 on the fish school feeding picture sample, inputting the texture features and the shape features into a deep forest regression prediction model to obtain the fish school feeding state category, and determining the fish school feeding state according to the category.
Namely 1, the fish is in a satiety state and cannot eat food; 2: the fish is in a low-degree hunger state, and the fish only eats foods before the fish and does not swim to eat the foods; 3, the fish is in a moderate hunger state, swims to eat, and returns to the original position after eating; 4, the fish is in a high hunger state, and the fish swims among foods and strives for the foods.
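Putting S1-S6 together, classifying a new picture might look like this (illustrative only; train_X, train_y, new_frame, background and the helper functions from the earlier sketches are assumptions):

import cv2
import numpy as np

def picture_features(frame, background):
    # two features per picture: GLCM entropy (texture) and circularity (shape)
    fish_only = extract_fish_only(frame, background)
    gray = cv2.cvtColor(fish_only, cv2.COLOR_BGR2GRAY)
    glcm = glcm_four_directions(gray)
    # average the entropy over the four directions
    ent = np.mean([texture_features(glcm[:, :, 0, a])[3] for a in range(4)])
    mask = (gray > 0).astype(np.uint8) * 255
    return np.array([ent, circularity(mask)])

model = CascadeForest().fit(train_X, train_y)  # labels: 0 satiety ... 3 high hunger
state = model.predict(picture_features(new_frame, background).reshape(1, -1))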
The above method was used to detect the feeding state of fish; the results are shown in Table 2:
TABLE 2 Test results of the deep forest method on fish feeding states

Feeding state      Training samples   Test samples   Correct   Accuracy
Satiety            1400               600            570       95.00%
Low hunger         1400               600            555       92.50%
Moderate hunger    1400               600            559       93.17%
High hunger        1400               600            583       97.17%
Although the invention has been described in detail hereinabove with respect to a general description and specific embodiments thereof, it will be apparent to those skilled in the art that modifications or improvements may be made thereto based on the invention. Accordingly, such modifications and improvements are intended to be within the scope of the invention as claimed.

Claims (10)

1. A method for detecting a feeding state of fish, comprising:
photographing to obtain a fish school feeding picture;
extracting texture features and shape features of the fish school feeding picture;
inputting the texture features and the shape features into a preset deep forest model to obtain the ingestion state of the fish school to be detected;
wherein the preset deep forest model is trained on the ingestion states of fish schools and the corresponding texture and shape features during feeding; the ingestion states include satiety, low hunger, moderate hunger and high hunger.
2. The detection method according to claim 1, wherein extracting the texture features and shape features of the fish school feeding picture specifically comprises:
removing the background from the fish school feeding picture, establishing a gray level co-occurrence matrix, and taking the entropy of the gray level co-occurrence matrix as the texture feature;
and taking the circularity of the segmented fish-school region as the shape feature.
3. The detection method according to claim 1 or 2,
wherein inputting the texture features and the shape features into the preset deep forest model to obtain the ingestion state of the fish school to be detected specifically comprises the following steps:
scanning the texture features and the shape features with a multi-granularity scanning module to obtain feature vectors;
inputting the feature vectors into a random forest and a completely random forest in the multi-granularity scanning module and outputting classification vectors;
and concatenating the classification vectors with the texture and shape features and inputting them into the preset deep forest model to obtain the ingestion state of the fish school to be detected.
4. The detection method according to claim 1, wherein the preset deep forest model is constructed by the following method:
photographing fish schools in different feeding states to obtain a plurality of fish school feeding pictures and the feeding state corresponding to each picture;
removing the background from each fish school feeding picture and establishing the corresponding gray level co-occurrence matrix;
processing the gray level co-occurrence matrix of each fish school feeding picture, extracting the entropy of the matrix as the texture feature and the circularity of the segmented region as the shape feature;
and taking the texture feature, shape feature and ingestion state of each fish school feeding picture as one training sample, thereby obtaining a plurality of training samples with which the preset deep forest model is trained.
5. The detection method according to claim 4, wherein removing the background from each fish school feeding picture and establishing the corresponding gray level co-occurrence matrix specifically comprises:
extracting the image background from the plurality of fish school feeding pictures with the mean background method, subtracting the background from each picture with the background difference method to extract the moving foreground, and establishing the corresponding gray level co-occurrence matrix.
6. The detection method according to claim 5, wherein extracting the moving foreground and establishing the corresponding gray level co-occurrence matrix specifically comprises:
setting a threshold T, binarizing the moving foreground pixel by pixel to obtain a binarized image, performing connectivity analysis to obtain a picture containing only the fish bodies, and establishing the corresponding gray level co-occurrence matrix for that picture.
7. The detection method according to claim 4, wherein training the preset deep forest model with the plurality of training samples specifically comprises:
selecting 3 sliding windows to scan the training samples, obtaining 3 types of feature vectors of different dimensions; inputting the feature vectors into 1 random forest and 1 completely random forest to output classification vectors; and concatenating all the classification vectors into a cascade forest module for training;
wherein each layer of the cascade forest module consists of at least 2 random forests and at least 2 completely random forests; after the classification vectors are input, each layer learns new classification vectors, and the new classification vectors of each layer are concatenated with that layer's input and passed to the next layer for learning.
8. The detection method according to claim 7, wherein in the cascade forest module, starting from the second layer, cross-validation is performed after each layer finishes learning to check the performance of the model; if the overall accuracy improves by less than 1%, training of the preset deep forest model is complete; otherwise, learning continues with the next layer.
9. The method according to any one of claims 1 to 8, wherein the ingestion states are specifically:
(1) satiety: the fish are in a satiated state and do not ingest food;
(2) low hunger: the fish eat only the food in front of them and do not swim to feed;
(3) moderate hunger: the fish swim to feed and return to their original position after feeding;
(4) high hunger: the fish swim among the food and compete for it.
10. A fish feeding state detection apparatus, comprising:
the image acquisition module is used for photographing to obtain a fish school feeding picture;
the characteristic extraction module is used for extracting the texture characteristics and the shape characteristics of the fish school feeding picture;
the ingestion state detection module is used for inputting the texture features and the shape features into a preset deep forest model to obtain the ingestion state of the fish school to be detected;
wherein the preset deep forest model is trained on the ingestion states of fish schools and the corresponding texture and shape features during feeding; the ingestion states include satiety, low hunger, moderate hunger and high hunger.
CN202011364526.XA 2020-11-27 2020-11-27 Method and device for detecting ingestion state of fish Pending CN112381044A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011364526.XA CN112381044A (en) 2020-11-27 2020-11-27 Method and device for detecting ingestion state of fish


Publications (1)

Publication Number Publication Date
CN112381044A (en) 2021-02-19

Family

ID=74588583

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011364526.XA Pending CN112381044A (en) 2020-11-27 2020-11-27 Method and device for detecting ingestion state of fish

Country Status (1)

Country Link
CN (1) CN112381044A (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020031268A1 (en) * 2001-09-28 2002-03-14 Xerox Corporation Picture/graphics classification system and method
CN108445746A (en) * 2018-01-25 2018-08-24 北京农业信息技术研究中心 A kind of intelligence feeds control method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LOADING_123: "Deep Forest: a deep model that is not a neural network (Zhou Zhihua)", HTTPS://BLOG.CSDN.NET/LOADING_123/ARTICLE/DETAILS/78860344 *
GUO Qiang: "Research on a computer-vision-based method for detecting the feeding state of mirror carp in recirculating aquaculture", China Master's Theses Full-text Database (Agricultural Science and Technology) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113841650A (en) * 2021-10-15 2021-12-28 天津科技大学 Intelligent bait feeding system for outdoor aquaculture pond and control method thereof
CN114467824A (en) * 2022-03-04 2022-05-13 上海海洋大学 Intelligent bait casting boat

Similar Documents

Publication Publication Date Title
Saberioon et al. Automated within tank fish mass estimation using infrared reflection system
An et al. Application of computer vision in fish intelligent feeding system—A review
Le et al. An automated fish counting algorithm in aquaculture based on image processing
CN112381044A (en) Method and device for detecting ingestion state of fish
Lainez et al. Automated fingerlings counting using convolutional neural network
CN112598713A (en) Offshore submarine fish detection and tracking statistical method based on deep learning
Labuguen et al. Automated fish fry counting and schooling behavior analysis using computer vision
CN107610122B (en) Micro-CT-based single-grain cereal internal insect pest detection method
CN114863263B (en) Snakehead fish detection method for blocking in class based on cross-scale hierarchical feature fusion
CN112634202A (en) Method, device and system for detecting behavior of polyculture fish shoal based on YOLOv3-Lite
CN112712518A (en) Fish counting method, fish counting device, electronic equipment and storage medium
Zhang et al. Intelligent fish feeding based on machine vision: A review
CN113420614A (en) Method for identifying mildewed peanuts by using near-infrared hyperspectral images based on deep learning algorithm
CN115100513B (en) Method and system for estimating food intake of breeding object based on computer vision recognition
CN115512215A (en) Underwater biological monitoring method and device and storage medium
CN112417378A (en) Eriocheir sinensis quality estimation method based on unmanned aerial vehicle image processing
Lai et al. Automatic measuring shrimp body length using CNN and an underwater imaging system
Setiawan et al. Shrimp body weight estimation in aquaculture ponds using morphometric features based on underwater image analysis and machine learning approach
Taparhudee et al. Application of unmanned aerial vehicle (UAV) with area image analysis of red tilapia weight estimation in river-based cage culture
Liu et al. Research progress of computer vision technology in abnormal fish detection
CN113222889B (en) Industrial aquaculture counting method and device for aquaculture under high-resolution image
CN115100688A (en) Fish resource rapid identification method and system based on deep learning
CN114612454A (en) Fish feeding state detection method
Liu et al. Evaluation of body weight of sea cucumber Apostichopus japonicus by computer vision
Cao et al. Research on counting algorithm of residual feeds in aquaculture based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210219)