Animal Kingdom
Animal Kingdom is a large and diverse dataset that provides multiple annotated tasks to enable a more thorough understanding of natural animal behaviors. The wild animal footage in the dataset was recorded at different times of day across an extensive range of environments, with variations in backgrounds, viewpoints, illumination and weather conditions. More specifically, the dataset contains 50 hours of annotated videos for the video grounding task (localizing relevant animal behavior segments in long videos), 30K video sequences for the fine-grained multi-label action recognition task, and 33K frames for the pose estimation task, covering a diverse range of animals with 850 species across 6 major animal classes.
Video primary data
Folder structure: touchcomm-psci/data/primary/
  video_expt1-annotations-exported-txt/: txt files exported from the ELAN annotation software, named "T{toucherID}_R{receiverID}.txt"
  video_expt1-collated.csv: annotations collated from the exported txt files
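Since the toucher and receiver IDs are encoded in the txt file names, they can be recovered with a simple pattern match. A minimal Python sketch, assuming the folder layout above (the content of the ELAN exports is not parsed here):

```python
# List the exported ELAN annotation files and pull the toucher/receiver IDs
# out of the "T{toucherID}_R{receiverID}.txt" names described above.
import re
from pathlib import Path

ANNOT_DIR = Path("touchcomm-psci/data/primary/video_expt1-annotations-exported-txt")

for txt_file in sorted(ANNOT_DIR.glob("T*_R*.txt")):
    match = re.match(r"T(?P<toucher>[^_]+)_R(?P<receiver>[^_]+)\.txt", txt_file.name)
    if match:
        print(txt_file.name, "-> toucher:", match["toucher"],
              "receiver:", match["receiver"])
```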
Animal Re-Identification from Video
Repository of annotated videos, images and extracted features of multiple animals.

1. Videos
The videos are available in the file "videos.zip". The original videos included in this repository have been sourced from Pixabay under the Pixabay License (free for commercial use, no attribution required). The video data is summarised below:

Short Name       | Video Name                     | # Frames | Size        | # Bounding boxes | # Identities
Pigs             | Pigs_49651_960_540_500f.mp4    | 500      | (960, 540)  | 6184             | 26
Koi fish         | Koi_5652_952_540.mp4           | 536      | (952, 540)  | 1635             | 9
Pigeons (curb)   | Pigeons_8234_1280_720.mp4      | 443      | (1280, 720) | 4700             | 16
Pigeons (ground) | Pigeons_4927_960_540_600f.mp4  | 600      | (960, 540)  | 3079             | 17
Pigeons (square) | Pigeons_29033_960_540_300f.mp4 | 300      | (960, 540)  | 4892             | 28

2. Annotated videos
The annotated videos are available in the file "annotated_videos.zip":
Annotated_Pigs_49651_960_540_500f.mp4 (annotation contributed by Lucy Kuncheva)
Annotated_Koi_5652_952_540.mp4 (annotation contributed by Lucy Kuncheva)
Annotated_Pigeons_8234_1270_720.mp4 (annotation contributed by Wilf Langdon)
Annotated_Pigeons_4927_960_540_600f.mp4 (annotation contributed by Frank Krzyzowski)
Annotated_Pigeons_29033_960_540_300f.mp4 (annotation contributed by Owen West)

3. Images
The individual images are in the file "images.zip". For each video, all the images are in the corresponding folder. Inside, there is a folder for each individual with all its images. The filename of each image includes the frame number.

4. Frames information
The correspondence between images and frames in the videos is in the file "frames.zip". The prefixes "h1_" and "h2_" denote, respectively, the first and second halves of the videos. The columns in these files are:
x, y: coordinates in pixels of the top left corner of the bounding box
width, height: dimensions of the bounding box in pixels
frame: frame number
max_w, max_h
label: the label (class) number
image: file name

5. Extracted features
Files with the extracted features are in "features.zip". The prefixes "h1_" and "h2_" denote, respectively, the data corresponding to the first and second halves of the videos. Five representations are used:
RGB: RGB moments
HOG: Histogram of Oriented Gradients
LBP: Local Binary Patterns
AE: autoencoders
MN2: features extracted from a Keras MobileNetV2 model pre-trained on ImageNet
The representation appears as a postfix in the file names. In each csv file, each image appears as a row: the feature values are followed by the label (class) number.

6. Source code
Sample code (MATLAB & Python) is available at https://github.com/admirable-ubu/animal-recognition
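Given the row layout described in section 5 (feature values followed by the label), one such csv can be loaded and split into features and labels in a few lines. A minimal sketch; the exact file name below is illustrative and assumes the "h1_" prefix / representation postfix naming convention:

```python
# Load one extracted-feature csv: each row holds the feature values,
# with the label (class) number in the last column.
import numpy as np

data = np.loadtxt("h1_Pigs_49651_960_540_500f_HOG.csv", delimiter=",")  # illustrative name
X, y = data[:, :-1], data[:, -1].astype(int)   # features, then label column
print(X.shape, "feature rows,", len(np.unique(y)), "identities")
```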
Rat 7M Videos Subject 2 - Day 2
All videos for Subject 2, Day 2 in Rat 7M. The video naming scheme is s{subject_id}-d{recording_day}-camera{camera_id}-{starting_frame_idx}.mp4. Subject ID, recording day, and camera ID match videos to the data in the motion capture and camera calibration parameter .mat files. Videos are provided in 3500-frame chunks, with the index of the starting frame in each file denoted by the {starting_frame_idx} portion of the filename. Using the 'frames' field in the 'cameras' struct inside 'mocap.mat', the corresponding video file and frame index can be calculated by using frame_idx // 3500 to get the {starting_frame_idx} and frame_idx % 3500 to get the frame in that file.
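The mapping from a mocap frame index to a video file and in-file frame can be expressed directly from the formulas above. A minimal sketch with placeholder subject/day/camera values:

```python
CHUNK = 3500  # frames per video chunk

def locate_frame(frame_idx, subject_id=2, recording_day=2, camera_id=1):
    # Per the description: frame_idx // 3500 gives {starting_frame_idx} and
    # frame_idx % 3500 gives the frame within that file. (If the filenames
    # instead encode the absolute starting frame 0, 3500, 7000, ...,
    # multiply the quotient by CHUNK.)
    starting_frame_idx = frame_idx // CHUNK
    frame_in_file = frame_idx % CHUNK
    filename = f"s{subject_id}-d{recording_day}-camera{camera_id}-{starting_frame_idx}.mp4"
    return filename, frame_in_file

print(locate_frame(7200))  # -> ('s2-d2-camera1-2.mp4', 200)
```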
Rat 7M Videos Subject 4 - Day 1
All videos for Subject 4, Day 1 in Rat 7M. The video naming scheme is s{subject_id}-d{recording_day}-camera{camera_id}-{starting_frame_idx}.mp4. Subject ID, recording day, and camera ID match videos to the data in the motion capture and camera calibration parameter .mat files. Videos are provided in 3500-frame chunks, with the index of the starting frame in each file denoted by the {starting_frame_idx} portion of the filename. Using the 'frames' field in the 'cameras' struct inside 'mocap.mat', the corresponding video file and frame index can be calculated by using frame_idx // 3500 to get the {starting_frame_idx} and frame_idx % 3500 to get the frame in that file.
A video dataset of a wooden box assembly.
A video dataset of a 9-step wooden box assembly process involving 17 subjects. 62 video files were collected, with a total size of 30 GB and a total duration of 13 hours. Each video is complemented with temporal annotations that indicate the starting and ending timestamps of each work step in the assembly process.
GeonewsDataSet_2018_to_2020 (Geonews: Timely Geological Events Videos)
Geonews YouTube Metrics for 2018 and 2020. This Geonews supplementary dataset covers YouTube metrics for Geonews videos and other general geoscience educational videos in 2018 and 2020. To view the videos, see the UTD Geoscience Studio YouTube channel. Metadata: list of Geonews videos; list of non-Geonews videos; feature comparison between Geonews and non-Geonews videos. If you have used this dataset, please cite the paper: Wang, N., Clowdus, Z., Sealander, A., & Stern, R. (2021). Geonews: Timely Geoscience Educational YouTube Videos about Recent Geologic Events. Geoscience Communication Discussions, 1-26. https://doi.org/10.5194/gc-2021-38 © Author(s) 2021. This work is distributed under the Creative Commons Attribution 4.0 License.
Using Video Footage for Observing Honey Bee Behaviour at Hive Entrances
Video-to-Model Data Set
This data set belongs to the paper "Video-to-Model: Unsupervised Trace Extraction from Videos for Process Discovery and Conformance Checking in Manual Assembly", submitted on March 24, 2020, to the 18th International Conference on Business Process Management (BPM). Abstract: Manual activities are often hidden deep down in discrete manufacturing processes. For the elicitation and optimization of process behavior, complete information about the execution of manual activities is required. Thus, an approach is presented for extracting execution-level information from videos of manual assembly. The goal is the generation of a log that can be used in state-of-the-art process mining tools. The test bed for the system was lightweight and scalable, consisting of an assembly workstation equipped with a single RGB camera recording only the hand movements of the worker from above. A neural-network-based real-time object classifier was trained to detect the worker's hands. The hand detector delivers the input for an algorithm that generates trajectories reflecting the movement paths of the hands. Those trajectories are automatically assigned to work steps using the position of material boxes on the assembly shelf as reference points and hierarchical clustering of similar behaviors with dynamic time warping. The system was evaluated in a task-based study with ten participants in a laboratory, but under realistic conditions. The generated logs were loaded into the process mining toolkit ProM to discover the underlying process model and to detect deviations from both instructions and ground truth using conformance checking. The results show that process mining delivers insights about the assembly process and the system's precision. The data set contains the generated and the annotated logs based on the video material gathered during the user study. In addition, the Petri nets from the process discovery and conformance checking conducted with ProM (http://www.promtools.org) and the reference nets modeled with Yasper (http://www.yasper.org/) are provided.
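As an illustration of the clustering step mentioned in the abstract (hierarchical clustering of trajectories under a dynamic-time-warping distance), here is a minimal sketch on toy 2-D trajectories. This shows the general technique only, not the authors' implementation; the trajectories and cluster count are placeholders:

```python
# Group hand trajectories by similarity: pairwise DTW distances feed an
# average-linkage hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping over 2-D point sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy trajectories: two short horizontal strokes and one diagonal movement.
trajs = [np.array([[0, 0], [1, 0], [2, 0]], float),
         np.array([[0, 0], [1, 0], [2, 0], [3, 0]], float),
         np.array([[0, 0], [2, 2], [4, 4]], float)]

# Condensed pairwise DTW distance vector -> average-linkage clustering.
dists = [dtw(trajs[i], trajs[j])
         for i in range(len(trajs)) for j in range(i + 1, len(trajs))]
labels = fcluster(linkage(np.array(dists), method="average"), t=2, criterion="maxclust")
print(labels)  # the two horizontal strokes should share a cluster
```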
Strategies for Using Websites to Support Programming and Their Impact on Source Code: Video files.
An observation study was conducted with undergraduate students. The study includes a video analysis of coding activities, an analysis of the source code written during coding, and a structured interview with participants. This dataset contains video recordings of participants coding with the websites. It also contains the interview questions along with transcriptions. In addition, it contains the source code resulting from the participants.
Animal Kingdom
Animal_Kingdom provides video data of a wide variety of animals, covering 850 species across 6 major animal classes, for the pose estimation task, including 50 hours of annotated video. Published at CVPR 2022: https://sutdcv.github.io/Animal-Kingdom/
Funny bikes: a symmetrical study of urban space, vehicular units and mobility through the voyeuristic spokesperson of a video lens
This paper presents methodological considerations from a comparative, symmetrical video analysis of cyclist practices in Gothenburg and Toulouse. Video recording pays as much attention to the properties of bicycles as to the characteristics of people; it takes into account the pragmatic and situated dimension and, thus, allows a generalised symmetry. From there, visual methods enable us to submit the collected material to a double treatment: a quantitative analysis of observed bicycles and a qualitative ethnomethodological analysis of bike rental sequences. To better understand the logic and challenges of our method, we present it alongside an analogy with a famous film equivalent – the strategy used by the film-maker Michael Haneke in his heuristic film(s), Funny Games. Despite objectives and content that are obviously completely at odds with one another, the Funny Games film(s) and our own videos share at least five interesting features: twin films, static shots, photomontage, silent films, and rewinding.
Pairs of RF--dataset 2
Dataset that contains trials where we filmed freely swimming zebrafish pairs.
NITYMED
130 videos are available, captured in Patras, Greece, showing drivers in real, moving cars under nighttime conditions, where drowsiness detection is most important. The participating drivers are 11 males and 10 females with different features (hair color, beard, glasses, etc). The videos are split into 2 categories:
Yawning: the drivers yawn 3 times in each video, lasting approximately 15-25 seconds (107 videos)
Microsleep: the drivers talk, look around and have microsleeps in videos lasting approximately 2 minutes (21 videos)
This dataset can be used to test and compare algorithms and models for drowsiness detection under nighttime conditions. Other face, mouth and eye tracking applications can also be tested using this dataset. The illumination is natural, with a slight boost from the lowest interior car lights to simulate the lighting conditions on an avenue, since most of the videos were captured on a dark, uncrowded road for safety reasons. All videos are mp4, 25 frames/sec and mute, and are offered in two resolutions:
HDTV720: 1280 (width) x 720 (height), total dataset size ~700 MB (available in Kaggle)
FULL: 1920 (width) x 1080 (height), total dataset size ~1.6 GB
Rat 7M Videos Subject 5 - Day 2
All videos for Subject 5, Day 2 in Rat 7M. The video naming scheme is s{subject_id}-d{recording_day}-camera{camera_id}-{starting_frame_idx}.mp4. Subject ID, recording day, and camera ID match videos to the data in the motion capture and camera calibration parameter .mat files. Videos are provided in 3500-frame chunks, with the index of the starting frame in each file denoted by the {starting_frame_idx} portion of the filename. Using the 'frames' field in the 'cameras' struct inside 'mocap.mat', the corresponding video file and frame index can be calculated by using frame_idx // 3500 to get the {starting_frame_idx} and frame_idx % 3500 to get the frame in that file.
Rat 7M Videos Subject 1 - Day 1
All videos for Subject 1, Day 1 of Rat 7M. The video naming scheme is s{subject_id}-d{recording_day}-camera{camera_id}-{starting_frame_idx}.mp4. Subject ID, recording day, and camera ID match videos to the data in the motion capture and camera calibration parameter .mat files. Videos are provided in 3500-frame chunks, with the index of the starting frame in each file denoted by the {starting_frame_idx} portion of the filename. Using the 'frames' field in the 'cameras' struct inside 'mocap.mat', the corresponding video file and frame index can be calculated by using frame_idx // 3500 to get the {starting_frame_idx} and frame_idx % 3500 to get the frame in that file.
Frequency and intensity of pain symptoms detected during classic massage sessions of selected body parts in purebred Arabian racing horses
ABSTRACT We analysed the frequency of symptoms and degree of muscle pain in selected body parts of racing horses assessed during classic massage sessions. The influence of the horses' sex on the obtained results was considered. The potential for the early detection of pain in horses by analysing their behaviour and cardiac parameters during a massage session was also evaluated. The study was conducted on 20 three-year-old purebred Arabian horses during one racing season. In the racing season, cyclic classic massage sessions were performed, during which the frequency of symptoms and the degree of pain in the neck, back, croup, front limbs, and hind limbs were analysed. A behavioural assessment of the horses was conducted, and cardiac parameters were analysed. During massage, the frequency of pain symptoms in the front limbs amounted to 26, while in the croup it did not exceed 6. The studied horses were most susceptible to pain in the front limbs and in the back, with greater severity in stallions than in mares. An assessment of the frequency and severity of pain symptoms should not be based on changes in the behaviour of horses or on cardiac parameters (HR and LF:HF ratio) during massage sessions. However, these methods can be applied after pain reactions intensify. Meanwhile, qualified masseurs can diagnose slight muscle pain during massage sessions.
Data from: Automated peak detection method for behavioral event identification: detecting Balaenoptera musculus and Grampus griseus feeding attempts
The desire of animal behaviorists for more flexible methods of conducting inter-study and inter-specific comparisons and meta-analysis of various animal behaviors compelled us to design an automated, animal behavior peak detection method that is potentially generalizable to a wide variety of data types, animals, and behaviors. We detected the times of feeding attempts by 12 Risso’s dolphins (Grampus griseus) and 36 blue whales (Balaenoptera musculus) using the norm-jerk (rate of change of acceleration) time series. The automated peak detection algorithm identified median true-positive rates of 0.881 for blue whale lunges and 0.410 for Risso’s dolphin prey capture attempts, with median false-positive rates of 0.096 and 0.007 and median miss rates of 0.113 and 0.314, respectively. Our study demonstrates that our peak detection method is efficient at automatically detecting animal behaviors from multisensor tag data with high accuracy for behaviors that are appropriately characterized by the data time series.
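A norm-jerk peak detector in the spirit of the described method can be sketched with standard tools. This is an illustration, not the authors' exact algorithm; the accelerometer array, sampling rate, and thresholds below are placeholders:

```python
# Compute the norm-jerk (magnitude of the rate of change of acceleration)
# from a triaxial accelerometer time series and detect peaks in it.
import numpy as np
from scipy.signal import find_peaks

def norm_jerk_peaks(acc, fs, height=20.0, min_sep_s=10.0):
    jerk = np.diff(acc, axis=0) * fs            # rate of change of acceleration
    norm_jerk = np.linalg.norm(jerk, axis=1)    # magnitude of the jerk vector
    peaks, _ = find_peaks(norm_jerk, height=height, distance=int(min_sep_s * fs))
    return peaks / fs                            # peak times in seconds

# Example on synthetic data: two injected spikes should be recovered.
rng = np.random.default_rng(0)
fs = 25.0
acc = rng.normal(0, 0.05, size=(int(600 * fs), 3))  # 10 min of low-level noise
acc[2500] += 2.0
acc[9000] += 2.0
print(norm_jerk_peaks(acc, fs))  # approx. [100., 360.] seconds
```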
Fingertip Videos for Heart Rate estimation
Videos of fingertips were recorded with a Redmi Note 8 smartphone camera. Data was collected from 24 participants, both male and female, in the age group of 5 to 77 years. Each participant's finger was illuminated with a torch light right next to the camera. The camera lens was then completely covered with the fingertip. Video was recorded for 20 seconds. The recording frequency of the camera was 30 frames per second (fps). Simultaneously, the ground-truth HR was recorded using an Andesfit Health pulse oximeter. The dataset contains HR readings from 59 to 119 bpm. This dataset can be used to extract the PPG waveform from the videos. The ground-truth heart rate recorded in the excel file can be used as a reference to validate your estimations.
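A common way to use such recordings is to average the red channel per frame to obtain a PPG-like signal and take its dominant spectral peak as the heart rate. A minimal sketch; the file name is a placeholder, and the 59-119 bpm search band follows the description above:

```python
# Extract a PPG-like signal from a fingertip video and estimate heart rate
# from the dominant frequency within the plausible bpm range.
import cv2
import numpy as np

cap = cv2.VideoCapture("fingertip_recording.mp4")  # placeholder path
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0            # fall back to the stated 30 fps
signal = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    signal.append(frame[:, :, 2].mean())           # mean red channel (BGR order)
cap.release()

signal = np.asarray(signal) - np.mean(signal)      # remove the DC component
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
spectrum = np.abs(np.fft.rfft(signal))
band = (freqs >= 59 / 60) & (freqs <= 119 / 60)    # 59-119 bpm expressed in Hz
hr_bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(f"Estimated heart rate: {hr_bpm:.1f} bpm")
```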
Food position on the table (spatula)
Videos and csv files resulting from tracking the food position in the RGB videos of the experiments (food flipping with a spatula).