Captive Animal Behavior Study by Video Analysis
Figure 1. The two experimental environment configurations for the rat tracking and trajectory analysis application: first stage (a) and second stage (b).
Figure 2. The laboratory environment for the neuroscience experiment: (a) lighted cage and (b) dark scene.
Figure 3. The arena is seen by four video cameras. Two cameras offer a view of the entire landscape from two different angles (a,b). The other two cameras observe the ground of the general view (c,d).
Figure 4. Motion features generation. The successive frames (a–c) generate the target image shown in (d): I_mov = D(E(⌊I_a − I_b⌋ − ⌊I_b − I_c⌋)).
Figure 5. (a) Original frame; (b) binary frame; (c) binary frame column projection and the middle cage left (b_1) and right (b_2) walls.
Figure 6. (a) Current frame; (b) binary frame; (c) erosion result; (d) rat and rat head localization results; (e) rat head detection result illustrated in the current frame.
Figure 7. (a) Original frame; (b) image generated to compute the cage parameters; (c) working frame; at its bottom, the image also displays the cage and plastic box extremities c_11, c_12, c_21, c_22, b_1, b_2; (d) tracking result.
Figure 8. (a) Current frame; (b) Canny features image of the current frame; (c) movement features image; (d) localization result provided by the analysis of the movement features and Canny edge images; (e) localization result visualized in the current frame.
Figure 9. Head orientation computation result. The red rectangle frames the sensor array antenna and the white line indicates the head orientation.
Figure 10. First filtering stage of the ending path. (a,d) depict the right end of the first sequence of the current camera; (c,f) show the left extremity of the second sequence of the same camera; (b) shows the beginning of an interleaved sequence of the complementary camera; (e) indicates the beginning of the current camera sequence replacing the interleaved sequence of the complementary camera.
Figure 11. The start of a ‘stress running’ sequence is depicted in (a) and its end in (b). The sequence is followed by a segment labeled ‘stress walking 1’, whose start is depicted in (c) and end in (d). The white rectangle frames the panda and the red line indicates the linear path the panda traveled for the specific stress sequence.
Figure 12. Four positions of a sequence labeled ‘stress walking 2’: (a) the sequence start, (d) the sequence end, and (b,c) intermediate positions on the path. The rectangle frames the panda and the red line indicates the linear path the panda traveled for the specific stress sequence.
Figure 13. Four positions of a sequence labeled ‘stress walking 3’: (a) the sequence start, (d) the sequence end, and (b,c) intermediate positions on the path. The rectangle frames the panda and the red line shows the panda trajectory from the sequence start to the depicted position.
Figure 14. First frame (a) and last frame (b) of a sequence labeled ‘stress stationary’. When the panda is seized in the area behind the two stones and the movement features cluster tends to elongate to the north, the search area is restricted to the zone depicted by the red rectangle.
Figure 15. Main window of the application for animal activity recognition.
Figure 16. Stress episode detection: processing results in Excel format.
Abstract
1. Introduction
2. Experimental Environments and Localization Methods
- Select three successive frames I_a, I_b, and I_c;
- Compute the movement image I_mov = D(E(⌊I_a − I_b⌋ − ⌊I_b − I_c⌋)), where D and E represent the Dilation and Erosion morphological operations, respectively;
- In I_mov, delete all movement pixels external to the region of interest (this step is application-dependent);
- Through a ‘counting-box’ method, identify the most relevant movement cluster in I_mov (a sketch of this procedure is given below).
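As an illustration of this procedure, the following is a minimal sketch assuming OpenCV and NumPy, grayscale uint8 frames, and arbitrarily chosen kernel size, box size, and threshold (none of these values come from the experiments); the absolute frame difference stands in for the bracketed differences of the formula above.

```python
import cv2
import numpy as np

def motion_image(frame_a, frame_b, frame_c, kernel_size=3):
    # I_mov = D(E(|I_a - I_b| - |I_b - I_c|)): subtract the two successive
    # absolute frame differences, then erode (E) and dilate (D).
    d_ab = cv2.absdiff(frame_a, frame_b)
    d_bc = cv2.absdiff(frame_b, frame_c)
    diff = cv2.subtract(d_ab, d_bc)                 # saturating subtraction, stays >= 0
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    return cv2.dilate(cv2.erode(diff, kernel), kernel)

def densest_box(i_mov, box=32, thr=20):
    # Hypothetical 'counting-box' step: count above-threshold movement pixels
    # in box x box cells and return the top-left corner of the densest cell.
    mask = (i_mov > thr).astype(np.uint8)
    h, w = mask.shape
    best_count, best_cell = -1, (0, 0)
    for y in range(0, h - box + 1, box):
        for x in range(0, w - box + 1, box):
            count = int(mask[y:y + box, x:x + box].sum())
            if count > best_count:
                best_count, best_cell = count, (x, y)
    return best_cell, best_count
```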
3. Rat Behavior Analysis
3.1. Head Localization
3.2. Compartment Visits Number and Visit Duration
3.3. Results and Conclusions
4. Animal Tracking Method Used in Neuroscience Experiments
4.1. Target Localization
4.2. Head Orientation
4.3. Results and Conclusions
- Unequal brightness in lighted scenes and also when the light was turned off; during the lights-off episodes, the monkey’s shadow could yield spurious motion features;
- The manner in which the experimental work was managed: frequently the monkey’s movement was tracked from outside the cage, so the experimenter’s motion produced useless movement features; occasionally, the monkey was fed from outside the box, which likewise generated features not belonging to the monkey;
- Abrupt shifts of the box brightness, because the monkey could accidentally turn the light on/off;
- The monkey motion is highly nonlinear.
5. Stress Sequences Identification of a Zoo Panda Bear by Movement Image Analysis
- Motion domain selection: to discard uninformative motion features (e.g., visitors’ motions, flaring on walls or ceiling), a useful-information polygon was set up for every view. Afterward, during the tracking stage, only the motion features inside the polygon were analyzed by means of a ‘ray casting’ test (a point-in-polygon sketch follows this list);
- The coordinates of the panda bear and zoo caregiver entry/exit gates inside the information polygon were defined for every view;
- The full-visibility zones were delimited for each view. In these zones it is possible that, after a valid motion cluster recognition, no motion features are found; this can be explained by a relaxation or freezing moment of the panda. A no-motion attribute was introduced to avoid restarting the tracking;
- For each visible area of a view, a list of the coordinates of the corresponding areas of the other cameras was synthesized.
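The ‘ray casting’ selection mentioned above can be realized with the classic point-in-polygon test. The sketch below is a plain-Python illustration under that assumption; the useful-information polygon is given as a list of (x, y) vertices, and the function names are illustrative only.

```python
def point_in_polygon(x, y, polygon):
    # Cast a horizontal ray from (x, y) and count how many polygon edges it
    # crosses; an odd count means the point lies inside the polygon.
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge straddles the ray's line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:                            # crossing lies to the right of the point
                inside = not inside
    return inside

def keep_features_inside(features, polygon):
    # Keep only the motion features that fall inside the useful-information polygon.
    return [(fx, fy) for (fx, fy) in features if point_in_polygon(fx, fy, polygon)]
```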
5.1. Localization at Task Level
- (a) features_nmb > high_features_nmb_thr. In this case, if the height of the movement rectangle is specific to the zoo technician and the rectangle is seized in the zoo technician gate area, an attribute indicating the technician presence is set: zoo_technician = 1. This parameter is used in the overall trajectory synthesis, where only panda frames are considered;
- (b) features_nmb > features_nmb_thr, and the movement cluster is identified in the gate areas;
- (c) features_nmb > features_nmb_thr, and the movement cluster is identified in the area of the previous valid position. This case is possible only if the preceding tracking session provided valid results (a sketch of these acceptance cases follows the list).
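A compact sketch of the three acceptance cases is given below. The threshold values, the rectangle format (x1, y1, x2, y2), and the helper overlaps() are illustrative assumptions, not values taken from the experiments.

```python
def overlaps(r1, r2):
    # Axis-aligned rectangle intersection test; rectangles are (x1, y1, x2, y2).
    return r1[0] < r2[2] and r2[0] < r1[2] and r1[1] < r2[3] and r2[1] < r1[3]

def accept_cluster(features_nmb, cluster_rect, gate_areas, technician_gate,
                   prev_valid_rect, features_nmb_thr=150, high_features_nmb_thr=600):
    # Returns (valid_movement, zoo_technician) for the current frame.
    if features_nmb > high_features_nmb_thr and overlaps(cluster_rect, technician_gate):
        return True, 1          # case (a): strong movement at the technician gate
    if features_nmb > features_nmb_thr:
        if any(overlaps(cluster_rect, gate) for gate in gate_areas):
            return True, 0      # case (b): movement cluster in a gate area
        if prev_valid_rect is not None and overlaps(cluster_rect, prev_valid_rect):
            return True, 0      # case (c): movement around the previous valid position
    return False, 0
```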
- In the preceding frame the bear was detected in an observable zone; the current frame was then marked as freezing and the tracking continued;
- The previous frame was labeled as freezing, and the current frame was also labeled as freezing;
- Otherwise, the frame was marked as no_information and the tracking was canceled.
- If the panda bear was seen in the door area after an exit, the search zone for the next video frame was restricted to the same door area; this specific search continued as long as the condition held;
- If the bear was seen in a visible area other than the door zone after a series of movement frames, and the current motion rectangle was almost in the same position as the set of preceding rectangles, then the current frame was marked as freezing;
- Otherwise, the no_information attribute is set (the decision logic is sketched after this list).
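These rules amount to a small per-frame decision. The sketch below expresses them under a few assumptions: positions are rectangle centers, the distance threshold max_shift is arbitrary, and the helper and attribute names only mirror those used in the text.

```python
import math

def rect_center(rect):
    # Center of an (x1, y1, x2, y2) rectangle.
    return ((rect[0] + rect[2]) / 2.0, (rect[1] + rect[3]) / 2.0)

def frame_attribute(motion_rect, prev_rects, prev_attr, prev_in_visible_zone,
                    after_exit, in_door_area, max_shift=15.0):
    # Decide the attribute of the current frame from the weak/absent movement rules.
    if after_exit and in_door_area:
        return 'door_search'                 # keep restricting the search to the door area
    if motion_rect is None:
        if prev_in_visible_zone or prev_attr == 'freezing':
            return 'freezing'                # bear was visible before, no motion now
        return 'no_information'              # otherwise the tracking is canceled
    if prev_rects:
        cx, cy = rect_center(motion_rect)
        if all(math.hypot(cx - px, cy - py) < max_shift
               for (px, py) in map(rect_center, prev_rects)):
            return 'freezing'                # cluster barely moves: freezing
    return 'moving'
```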
5.2. Trajectory Synthesis
5.3. Trajectory Analysis
- (1) For each frame gap of unfilled positions bounded by useful information from the same camera, proceed as follows: the parameter structure of each intermediate frame of the gap is filled with the movement mass-center coordinates, and its attribute is labeled as moving if at least one of the bounding positions is marked as moving; otherwise, the new positions are marked as freezing;
- (2) For every element of the updated list, filter the motion attribute by counting the number of occurrences of each attribute type in a window centered on the current position; the position attribute becomes the attribute type with the most votes;
- (3) For every pair of consecutive positions with the same valid camera number, assess the local velocity from the displacement of the movement mass center between the two frames; the velocity is then weighted by the camera number and by the camera resolution;
- (4) For every valid position in the final list, filter the velocity by averaging the velocities of the valid positions in a window centered on the current position (the sketch after this list illustrates steps (2) and (3)).
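Steps (2) and (3) can be illustrated as follows. The position record fields ('frame', 'camera', 'x', 'y'), the frame rate, and the window size are assumptions, and the camera-number and resolution weighting of step (3) is omitted for brevity.

```python
import math
from collections import Counter

def filter_attributes(attrs, half_window=5):
    # Step (2): replace each movement attribute by the majority vote
    # in a window centered on the current position.
    voted = []
    for i in range(len(attrs)):
        lo, hi = max(0, i - half_window), min(len(attrs), i + half_window + 1)
        voted.append(Counter(attrs[lo:hi]).most_common(1)[0][0])
    return voted

def local_velocities(positions, fps=25.0):
    # Step (3): velocity from the displacement of the movement mass center
    # between consecutive positions seen by the same camera.
    velocities = [0.0] * len(positions)
    for i in range(1, len(positions)):
        p, q = positions[i - 1], positions[i]
        if p['camera'] != q['camera']:
            continue                          # only pairs provided by the same camera
        dt = (q['frame'] - p['frame']) / fps  # elapsed time in seconds
        if dt > 0:
            velocities[i] = math.hypot(q['x'] - p['x'], q['y'] - p['y']) / dt
    return velocities
```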
- Positions in the list with v_i > thr_v1 are labeled as stress running;
- Every position in the list without a stress attribute, with thr_v2 < v_i ≤ thr_v1, that is part of a moving sequence following a resting period in a full-visibility zone is marked as stress walking 1;
- Every position in the list without a stress attribute, with thr_v2 < v_i ≤ thr_v1, that is part of a sequence following a still period in one of the garden shelters is marked as stress walking 2;
- Every position in the list without a stress attribute, with thr_v3 < v_i ≤ thr_v2, that is part of a sequence following a still period in one of the garden shelters, and whose path sequence has at least one height, is marked as stress walking 3. Here, a height is defined as a point located at a significant distance from the line approximating the path section;
- Tag with the attribute stress stationary the frame series with a duration greater than 5 s in which the panda bear bathes in the basin and is viewed by camera 8; every position in such a frame series must have the motion center of gravity situated close to the basin (a sketch of this velocity-based labeling follows the list).
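The labeling rules above reduce to the velocity-threshold scheme sketched below. The boolean context lists (resting in a full-visibility zone, still period in a shelter, presence of a ‘height’ on the path section) are assumed precomputed, the thresholds satisfy thr_v1 > thr_v2 > thr_v3, and the ‘stress stationary’ rule, which depends on the basin area of camera 8, is left out.

```python
def label_stress(velocities, after_rest_full_view, after_still_in_shelter, has_height,
                 thr_v1, thr_v2, thr_v3):
    # One label (or None) per position, following the velocity thresholds of Section 5.3.
    labels = []
    for i, v in enumerate(velocities):
        if v > thr_v1:
            labels.append('stress running')
        elif thr_v2 < v <= thr_v1 and after_rest_full_view[i]:
            labels.append('stress walking 1')
        elif thr_v2 < v <= thr_v1 and after_still_in_shelter[i]:
            labels.append('stress walking 2')
        elif thr_v3 < v <= thr_v2 and after_still_in_shelter[i] and has_height[i]:
            labels.append('stress walking 3')
        else:
            labels.append(None)
    return labels
```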
5.4. Results and Conclusions
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
Algorithm A1. Task trajectory synthesis procedure, denoted tracking_task
Input: iFrame = current frame of the specific view; camera_nmb = current camera index; start_Frame = initial frame number of the processed sequence;
Output: tracking rectangle coordinates / frame file
01. movement attribute initializations;
02. compute the gray level average of the current frame;
03. if view gray_level_average[i] is too low or too high
04. return (-1, -1);
05. if current view is low resolution then low_resolution = 1;
06. localize the movement cluster (procedure of Section 2, completed with a ‘ray casting’ selection and, if the panda was in a full-visibility zone, with a search area around the previous position);
07. if not_first_time_localization == 0
08. if the number of movement features is less than high_features_nmb_thr then
09. if zoo_technician == 1 and the movement cluster is localized in the area of the gate where the panda was fostered during the technician session then
10. not_first_time_localization = 1; valid_movement = 1; zoo_technician = 0;
else
11. if the movement is in the gates area or around the previous valid position then
12. not_first_time_localization = 1; valid_movement = 1;
else
13. valid_movement = 0; end if (lines 11, 09)
else
14. if zoo_technician == 0
15. if the current frame is the first movement frame of the sequence and the movement is strong and around the zoo technician gate then
16. zoo_technician = 1; not_first_time_localization = 1; valid_movement = 1;
else
17. if the movement is in the gates area or around the previous valid position then
18. not_first_time_localization = 1; valid_movement = 1;
else
19. valid_movement = 0; end if (lines 17, 15)
else
20. if the movement is around the area of the gate where the panda was fostered during the technician session and the last movement was also detected in the area of the technician gate then
21. zoo_technician = 0; not_first_time_localization = 1; valid_movement = 1;
else
22. not_first_time_localization = 1; valid_movement = 1; end ifs (lines 20, 14, 08)
else
23. if the number of movement features is less than low_features_nmb_thr then
24. maintain_gate_search = 0;
25. goto line 63; // no or weak movement analysis; end if (line 23)
27. if the number of movement features is less than features_nmb_thr and (iFrame - last_information_frame > frame_gap_thr or maintain_gate_search == 1) then
28. if the features are not located in the gate areas of the scene then
29. maintain_gate_search = 0; goto line 63; // no or weak movement analysis;
30. else
31. maintain_gate_search = 1; valid_movement = 1; end if (line 28)
32. else
33. maintain_gate_search = 0; valid_movement = 1; end ifs (lines 27, 07)
34. if valid_movement == 1 then
35. if one of the following conditions is true
1. a suddenly bursting configuration with more than half of the movement features far away from the current movement;
2. the current movement cluster is far away from the position in full-view areas where the panda was previously sleeping;
3. the current movement cluster is not in the area of the zoo technician gate and the last exit was that of the technician (zoo_technician == 1)
36. then valid_movement = 0;
37. if iFrame - last_information_frame > frame_gap_thr then
38. not_first_time_localization = 0; end ifs (lines 37, 35, 34)
39. if valid_movement == 0 then
40. if zoo_technician == 1
41. return (-1, -1); end if (line 40)
42.
43. scroll down all the processed information/frame;
44. ; end if (line 39)
45. if panda_sleeping == 0 and the last five movement rectangles are each included in the preceding one and the movement features are decreasing then
46. if the number of movement features is less than a threshold then
47. panda_sleeping = 1; end if (lines 46, 45)
48. if panda_sleeping == 1
49. if the current movement rectangle is not included in the previous one but is not far away from it then
50. panda_sleeping = 0;
51. scroll down all the processed information/frame; last_information_frame = iFrame;
52. else
53. set the current movement rectangle to the previous one and perform a search for movement features.
54. if the number of movement features is greater than low_features_nmb_thr then
55. panda_sleeping = 0;
56. scroll down all the processed information/frame; last_information_frame = iFrame;
57. else
58. valid_movement = 0; end if (lines 54, 49, 48)
59. if zoo_technician == 1
60. return (-1, -1);
61. else
62. ; end if (line 59)
63. if iFrame - last_information_frame > frame_gap_thr and panda_sleeping == 0 and zoo_technician == 0 then
64. not_first_time_localization = 0; // initialize the tracking procedure
65. if zoo_technician == 1 and the previously computed target coordinates were in the technician gate area then
66. zoo_technician = 0; end if (line 65)
67. if panda_sleeping == 1 then
68. if the last position chosen by the fusion process is of a different view than the current one and there is no match with the current view position then
69. panda_sleeping = 0;
70. else
71. if there is weak movement, check for small displacements
72. if small displacements then
73. update the new position; panda_sleeping = 0; last_information_frame = iFrame;
74.
75. end ifs (lines 72, 71, 68)
76. else
// check for sleep condition
77. if iFrame - last_information_frame == 1 and panda_sleeping == 0 then
78. if the computed pose in the current view is in a suitable area for freezing then
79. if the last position chosen by the fusion process is of a different view than the current one and there is a match with the current view position then
80. if in the previous five frames there were no significant movements (relative to the low_resolution parameter), or the panda in the last frame was detected in a full-view area (for instance, on a high platform) then
81. panda_sleeping = 1; end ifs (lines 80, 79, 78, 77, 67)
82. if panda_sleeping == 1
83. if zoo_technician == 1
84. ;
85. else
86. ; end if
87. scroll down all the processed information/frame;
88.
89. else
90. ;
91. scroll down all the processed information/frame;
92. ; end if (lines 82, 63)
Appendix B
Algorithm A2. Trajectory synthesis procedure
Input: Video sequences (1–4)
Output: List of tracking rectangle coordinates of the most visible position from all views / frame file; frame number of the analyzed position and the movement attribute (moving, freezing).
01. Choose the camera (view) set to be included in the project; generate the result lists: one list for each project camera and one list for the fused trajectory. For each project video frame an entry is allocated in the list, including: camera number, localization coordinates, and movement attributes.
02. Establish the tracking mode: parallel (two to four camera tasks) or sequential (for debugging purposes);
03. for each project camera do
04. set the polygon of useful view of the entire camera scene;
05. set the areas of the scene doors through which the panda or the zoo technician could enter or leave the scene;
06. set the areas where the panda could not be considered asleep in case of a lack of movement features for the current camera;
07. set the areas where the movement features should be erased because they are generated by the time and frame counters or by the water waves for the views including water ponds (mainly the fourth view);
08. end for
09. for each frame, denoted iFrame, of the current video sequence (start_Frame to end_Frame) do
10. if iFrame - start_Frame < 3, compute the gray level average (denoted gray_level_average_current) for each view;
11. if at least one gray_level_average_current is too low or too high (the image is entirely black or white)
12. start_Frame = iFrame;
13. continue; end if
14. fill the list gray_level_average[3] of each view:
task(i).gray_level_average[2] = task(i).gray_level_average[1]
task(i).gray_level_average[1] = task(i).gray_level_average[0]
task(i).gray_level_average[0] = task(i).gray_level_average_current;
15. continue; end if
16. if parallel_mode, launch tracking_task(iFrame, camera_nmb, start_Frame) for all the cameras in the project;
17. else
18. for each camera in the project execute tracking_task(iFrame, camera_nmb, start_Frame); end for, end if
19. if at least one of the tasks returns a value related to the zoo technician presence in the scene, do not consider the current frame in the fusion process; end if
20. if a task returns non-valid coordinates, other than those signaling the technician presence, do not consider it in the fusion process; end if
21. if only one task returns a valid position, retain its view as the most suitable view; end if
22. if at least two tasks return valid positions, retain the most suitable view considering the area localization weight and the previous view in the fused views chain;
23. if the previous selection is near the border of a full-view area, neighboring another full-view area, and the movement speed is fast (many movement features), do not switch views
else
24. if the previous selection is near the fading view border of a full-view area and at least one of the other available views is in a full-view area, consider that one instead of the current view; end if (lines 24, 23, 22)
25. end for;
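As a rough illustration of the per-frame fusion choice (lines 19–22 of Algorithm A2), the sketch below keeps the view with the highest area-localization weight among the tasks that returned a valid position, preferring the previously selected view on ties. The scoring helper area_weight() and the result format are assumptions; technician frames are assumed to be filtered out before this call.

```python
def fuse_frame(task_results, area_weight, previous_view):
    # task_results maps camera number -> (x, y) for a valid position, None otherwise.
    valid = {cam: pos for cam, pos in task_results.items() if pos is not None}
    if not valid:
        return None                                   # no usable view for this frame
    if len(valid) == 1:
        return next(iter(valid.items()))              # single valid view: keep it
    # Otherwise pick the view with the highest area-localization weight,
    # preferring the previously selected view on ties.
    best_cam = max(valid, key=lambda cam: (area_weight(cam, valid[cam]),
                                           cam == previous_view))
    return best_cam, valid[best_cam]
```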
Appendix C
Algorithm A3. List filtering and analysis
Input: List of tracking rectangle coordinates of the most visible position from all views / frame file; frame number of the analyzed position and the movement attribute (moving, freezing, no_information).
Output: Excel sheet pointing to the stress episodes of the analyzed sequence.
01. for each project camera do
02. detect in the final list the pairs of intervals with the same current camera having more entries than each contained sub-interval. The sub-intervals can be only of the selected camera or of the complementary camera (complementary cameras: 6 and 7, respectively 5 and 8);
03. if there is at least one pair of such intervals do
04. for each interval delimited by the pair of intervals
05. for each sub-interval of the complementary camera do
06. if there are positions of the selected camera delimited at the left by the left side of the current sub-interval and at the right by the left side of the next sub-interval of the selected camera then
07. replace the current sub-interval of the complementary camera with all the specified positions of the selected camera;
08. else
09. eliminate the current sub-interval from the list by setting its positions with a null number and the freezing_or_missing attribute; end if, for, for, if, for
10. for each project camera do
11. detect the pairs of intervals with the same current camera having more entries than each contained sub-interval. The sub-intervals can be only of the selected camera or of the opposite camera (pairs of opposite cameras: (6,5), (6,8), respectively (7,5) and (7,8)); in addition, the second interval of the pair is followed by a significant interval of the same opposite camera as the current interval;
12. if there is at least one pair of such intervals do
13. for each interval delimited by the pair of intervals
14. for each sub-interval of the opposite camera do
15. if there are positions of the selected camera delimited at the left by the left side of the current sub-interval and at the right by the left side of the next sub-interval of the selected camera then
16. replace the current sub-interval of the opposite camera with all the specified positions of the selected camera;
17. else
18. eliminate the current sub-interval from the list by setting its positions with a null number and the freezing_or_missing attribute; end if, for, for, if, for
19. if both general view cameras (6 and 7) are in the project do
20. for each entry k of the camera positions list (there are two lists) compute a visibility parameter as follows:
21. if the panda is in the foreground in a full-visibility area: visibility_ci(k) = 1 end if
22. if the panda is in the foreground in a non-full-visibility area: visibility_ci(k) = 0 end if
23. if the panda is in the background
24. if camera_nmb == 7 then visibility_ci(k) = -1
25. else
26. if the new position is included in an excepted area, visibility_ci(k) = 1;
27. else visibility_ci(k) = -1 end ifs, for
28. if there is at least one switch to the other general camera with visibility_ci(k) = 1, followed by a sequence containing a re-switch to the former selection but this time with visibility_ci(k) = -1, and again a switch to the best-view camera
29. for each such sequence do
30. replace the sub-interval of the weak-view camera with all the not-selected positions of the best-view camera; end for, if
31. if there is at least one switch from camera 6 to camera 7 with visibility_c6(k) = 1, followed by a sequence containing a re-switch to the former selection of camera 6 with visibility_ci(k) = 1, and again a switch to the best-view camera then
32. count, from the end of the included sequence of camera 6, the consecutive positions with visibility_ci(k) = -1
33. for each such sequence do
34. if the counter is not null
35. replace the last counter positions of the interval of the weak-view camera with all the not-selected positions of the best-view camera; end if, for, if, if (line 19)
36. detect in the current final list the sequences of transitions c8->c7->c8->c7.
37. for each transition where the camera 7 area is fully visible and located behind the three-level platform do
38. replace the second c8 sequence with the not-selected positions of camera 7 in the same frame interval as the c8 sequence to be eliminated; end for
39. detect in the current final list the sequences of transitions c7->c6 for periods when only one or both of these cameras provide information;
40. for each transition where the camera 7 area is fully visible and located behind the three-level platform do
41. if the first entry of the c6 segment has the ‘pause’ attribute, replace the c6 sequence with the not-selected positions of camera 7 in the same frame interval as the c6 sequence to be eliminated; end if, for
42. for each project camera do
43. detect the sequences starting and ending with the same camera that contain only intervals of the complementary camera alternating with those of the starting selected camera of the sequence (complementary cameras: c6 and c7, respectively c5 and c8);
44. for each sequence compute the duration of selection for each camera in the current sequence and select as winner the camera that held the selection for the longer time
45. for each sequence interval of the camera to be replaced do
46. if there are positions of the selected camera delimited at the left by the left side of the current sub-interval and at the right by the left side of the next sub-interval of the selected camera then
47. replace the current sub-interval of the complementary camera with all the specified positions of the selected camera;
48. else
49. refill the interval to be replaced with the last selected position of the winner camera and set the still attribute for all positions; end if, for, for, for
50. for each project foreground camera do
51. detect a sequence of consistent length with a mostly central list interval in a full-visibility area of the current camera;
52. continue searching as long as the next sequence in the final list is short and consists of low-visibility positions of a general view camera;
53. if there is no such sequence goto 51 (continue searching from the current position)
54. else
55. if the next sequence is of consistent length with a mostly central list interval in a full-visibility area of the complementary camera
56. replace the sequences found in step 52 with the most appropriate sequence from the lists of the foreground cameras;
57. goto 51 (continue searching from the current position);
58. else
59. goto 51 (continue searching from the current position); end if, if, for
60. for each sequence in the fused list with the same camera but with a gap of unfilled positions, complete the gap according to step (1) of the procedure description; end for
61. for every position of the updated list, filter the motion attribute by counting the number of occurrences of each attribute type in a window centered on the current position; at the end, the position attribute is the attribute class with the most votes; end for
62. for every pair with the same valid camera number, calculate the local velocity according to step (3) of the procedure description; end for
63. for every valid position in the list, filter the velocity by averaging the velocities of the valid positions in a window centered on the current position; end for
64. Label with the attribute stress running each position in the list with v_i > thr_v1
65. Label with the attribute stress walking 1 each position in the list, without a stress attribute, with thr_v2 < v_i ≤ thr_v1, that is part of a sequence following a resting period in a full-visibility area;
66. Label with the attribute stress walking 2 each position in the list, without a stress attribute, with thr_v2 < v_i ≤ thr_v1, that is part of a sequence following a still period in one of the garden shelters;
67. Label with the attribute stress walking 3 each position in the list, without a stress attribute, with thr_v3 < v_i ≤ thr_v2, that is part of a sequence following a still period in one of the garden shelters, and whose path sequence has at least one height;
68. Label with the attribute stress stationary the episodes with a duration of more than 5 s in which the panda bathes in the basin and is viewed by camera 8. Each position in such a series must have the motion center of gravity situated near the basin;
69. Synthesize an Excel sheet indicating the sequences (start frame, end frame) with a stress attribute.
| Score | Walking 1 | Walking 2 | Walking 3 |
|---|---|---|---|
| Before improvement | 94% | 97% | 95% |
| After improvement | 96% | 98% | 97% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).