OPPORTUNITY Activity Recognition
Donated on 6/8/2012
The OPPORTUNITY Dataset for Human Activity Recognition from Wearable, Object, and Ambient Sensors is a dataset devised to benchmark human activity recognition algorithms (classification, automatic data segmentation, sensor fusion, feature extraction, etc.).
Dataset Characteristics
Multivariate, Time-Series
Subject Area
Computer Science
Associated Tasks
Classification
Feature Type
Real
# Instances
2551
# Features
-
Dataset Information
Additional Information
The OPPORTUNITY Dataset for Human Activity Recognition from Wearable, Object, and Ambient Sensors is a dataset devised to benchmark human activity recognition algorithms (classification, automatic data segmentation, sensor fusion, feature extraction, etc.). A subset of this dataset was used for the "OPPORTUNITY Activity Recognition Challenge" organized for the 2011 IEEE Conference on Systems, Man and Cybernetics Workshop on "Robust machine learning techniques for human activity recognition".

The dataset comprises the readings of motion sensors recorded while users executed typical daily activities:

* Body-worn sensors: 7 inertial measurement units, 12 3D acceleration sensors, 4 3D localization tags
* Object sensors: 12 objects with 3D acceleration and 2D rate of turn
* Ambient sensors: 13 switches and 8 3D acceleration sensors
* Recordings: 4 users, 6 runs per user. Of these, 5 are Activity of Daily Living (ADL) runs characterized by a natural execution of daily activities; the 6th run is a "drill" run in which users execute a scripted sequence of activities.
* Annotations/classes: the activities of the user in the scenario are annotated on several levels: "modes of locomotion" classes; low-level actions relating 13 actions to 23 objects; 17 mid-level gesture classes; and 5 high-level activity classes.

** Recording scenario **

The activity recognition environment and scenario were designed to generate many activity primitives, yet in a realistic manner. Subjects operated in a room simulating a studio flat with a deckchair, a kitchen, doors giving access to the outside, a coffee machine, a table and a chair. We achieved a natural execution of activities by instructing users to follow a high-level script while leaving them free to decide how to achieve the high-level goals. We furthermore encouraged them to perform as naturally as possible, with all the variations they were used to. For each subject we recorded 6 different runs. Five of them, termed activity of daily living (ADL) runs, followed the scenario detailed below. The remaining one, a drill run, was designed to generate a large number of activity instances. Each ADL run consists of temporally unfolding situations; in each situation (e.g. preparing a sandwich), a large number of action primitives occur (e.g. reach for bread, move to bread cutter, operate bread cutter).

* ADL run *

The ADL run consists of the following temporally unfolding situations:

1. Start: lying on the deckchair, get up
2. Groom: move in the room, check that all the objects are in the right places in the drawers and on shelves
3. Relax: go outside and have a walk around the building
4. Prepare coffee: prepare a coffee with milk and sugar using the coffee machine
5. Drink coffee: take coffee sips, move around in the environment
6. Prepare sandwich: with bread, cheese and salami, using the bread cutter and various knives and plates
7. Eat sandwich
8. Cleanup: put the objects used back to their original place or into the dishwasher, clean up the table
9. Break: lie on the deckchair

* Drill run *

The drill run consists of 20 repetitions of the following sequence of activities:

* Open then close the fridge
* Open then close the dishwasher
* Open then close 3 drawers (at different heights)
* Open then close door 1
* Open then close door 2
* Toggle the lights on then off
* Clean the table
* Drink while standing
* Drink while seated

** Annotations **

The annotations are done on five "tracks". One track contains modes of locomotion (e.g. sitting, standing, walking). Two other tracks indicate the actions of the left and right hand (e.g. reach, grasp, release) and the object they apply to (e.g. milk, switch, door). The fourth track indicates the high-level activities (e.g. prepare sandwich). The high-level activities relate to the situations listed in the description of the ADL runs as follows (the numbers in parentheses refer to the situations above): relaxing (1, 9), early morning (2, 3), coffee time (4, 5), sandwich time (6, 7), cleanup (8). The mid-level gesture annotations are generated automatically from the low-level hand actions and provide a coarser characterization of the user's activities; for instance, the low-level annotations 'reach door' and 'open door' are combined into a single 'open door' mid-level annotation. The mid-level annotations comprise actions of the left and right hand indiscriminately; in practice, however, the users mostly interacted with the environment with their right hand. We recommend using the mid-level annotations in first attempts with this dataset.

** Applications **

This dataset offers a rich playground to assess methods such as:

* Classification, (semi-)supervised machine learning
* Automatic segmentation
* Unsupervised structure discovery
* Data imputation
* Multi-modal sensor fusion
* Sensor network research
* Transfer learning, multitask learning
* Sensor selection
* Feature extraction
* Classifier calibration and adaptation

** Baseline benchmarks **

Baseline benchmarks for the OPPORTUNITY Activity Recognition Challenge subset of the dataset are available in reference [2]. Scripts to replicate the benchmarks are provided in the package.
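The runs are distributed as plain-text .dat files (see Dataset Files below); the exact column layout (time stamp, sensor channels, annotation tracks) is documented in the package itself. As a minimal sketch only, assuming whitespace-separated values with missing readings encoded as NaN (an assumption to verify against the package documentation), one run could be loaded like this:

import pandas as pd

# Minimal sketch for loading one ADL run from the extracted package.
# Assumptions: whitespace-separated plain text, no header row, missing
# sensor readings encoded as NaN. Check the format description in the
# package before relying on this.
run = pd.read_csv(
    "OpportunityUCIDataset/dataset/S1-ADL1.dat",
    sep=r"\s+",   # whitespace-separated columns
    header=None,  # the files carry no header row
)

print(run.shape)          # (samples, columns) for this run
print(run.isna().mean())  # fraction of missing values per column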
Has Missing Values?
Yes
Introductory Paper
By D. Roggen, Alberto Calatroni, M. Rossi, Thomas Holleczek, Kilian Förster, G. Tröster, P. Lukowicz, D. Bannach, Gerald Pirkl, A. Ferscha, Jakob Doppler, Clemens Holzmann, Marc Kurz, G. Holl, Ricardo Chavarriaga, Hesam Sagha, Hamidreza Bayati, Marco Creatura, J. Millán. 2010
Published in International Conference on Networked Sensing Systems
Variables Table
The repository lists 242 variables (raw sensor channels); individual variable names, roles, types, units and descriptions are not filled in on this page. See the Additional Variable Information below and the format description in the dataset package for the attribute layout.
Additional Variable Information
The dataset comprises the readings of motion sensors recorded while users executed typical daily activities. The detailed format is described in the package. The attributes correspond to raw sensor readings; there are 242 attributes in total.

* Body-worn sensors (145 attributes) *

The body-worn sensors include 7 inertial measurement units and 12 3D acceleration sensors. The inertial measurement units provide readings of 3D acceleration, 3D rate of turn, 3D magnetic field, and the orientation of the sensor with respect to a world coordinate system in quaternions. Five units are on the upper body and two are mounted on the user's shoes. The acceleration sensors provide 3D acceleration and are mounted on the upper body, hip and leg. Four tags for an ultra-wideband localization system are placed on the left/right front/back side of the shoulder.

* Object sensors (60 attributes) *

12 objects are instrumented with wireless sensors measuring 3D acceleration and 2D rate of turn. This makes it possible to detect which objects are used, and possibly also how they are used.

* Ambient sensors (37 attributes) *

Ambient sensors include 13 switches and 8 3D acceleration sensors in drawers, kitchen appliances and doors. The reed switches are placed in triplets on the fridge, the dishwasher, drawer 2 and drawer 3; they may be used to detect three states of the furniture element: closed, half open, and fully open. The acceleration sensors may allow assessing whether an element of furniture is used, and whether it is being opened or closed.
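Because the 242 attributes group into body-worn, object and ambient channels, a common first step is to slice the feature matrix by sensor group. The sketch below is only illustrative: the group sizes (145/60/37) come from the description above, but the index ranges are placeholder assumptions and must be replaced with the real column ordering from the package's format description.

import numpy as np

# Hypothetical index ranges -- replace with the actual column indices from
# the package documentation. Only the group sizes (145/60/37) are taken
# from the dataset description.
BODY_WORN_COLS = np.arange(0, 145)    # 145 body-worn attributes
OBJECT_COLS    = np.arange(145, 205)  # 60 object attributes
AMBIENT_COLS   = np.arange(205, 242)  # 37 ambient attributes

def split_by_sensor_group(X):
    """Split a (samples, 242) feature matrix into the three sensor groups."""
    X = np.asarray(X)
    return {
        "body_worn": X[:, BODY_WORN_COLS],
        "object":    X[:, OBJECT_COLS],
        "ambient":   X[:, AMBIENT_COLS],
    }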
Dataset Files
File | Size |
---|---|
OpportunityUCIDataset/dataset/S3-Drill.dat | 68.8 MB |
OpportunityUCIDataset/dataset/S1-Drill.dat | 53.7 MB |
OpportunityUCIDataset/dataset/S2-Drill.dat | 51.9 MB |
OpportunityUCIDataset/dataset/S1-ADL1.dat | 49.4 MB |
OpportunityUCIDataset/dataset/S4-Drill.dat | 44 MB |
Showing 5 of 88 files.
pip install ucimlrepo

from ucimlrepo import fetch_ucirepo

# fetch dataset
opportunity_activity_recognition = fetch_ucirepo(id=226)

# data (as pandas dataframes)
X = opportunity_activity_recognition.data.features
y = opportunity_activity_recognition.data.targets

# metadata
print(opportunity_activity_recognition.metadata)

# variable information
print(opportunity_activity_recognition.variables)
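Since the recordings contain missing values (sensor dropouts), some imputation is usually needed before the data can be fed to a classifier. A minimal sketch follows, assuming X and y were fetched as above and that simple per-column interpolation and a random train/test split are acceptable for a first smoke test; neither choice is prescribed by the dataset.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Fill sensor dropouts by interpolating along time, then filling any
# leading/trailing gaps. This is only one of many reasonable choices.
X_filled = X.interpolate(limit_direction="both")

# If several annotation tracks are exposed as targets, pick one column
# (assumption: the first one) for a single-label baseline.
y_single = y.iloc[:, 0]

# Note: a random split leaks temporal context between train and test;
# per-run or per-subject splits give a sounder evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X_filled, y_single, test_size=0.3, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))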
Roggen, D., Calatroni, A., Nguyen-Dinh, L., Chavarriaga, R., & Sagha, H. (2010). OPPORTUNITY Activity Recognition [Dataset]. UCI Machine Learning Repository. https://doi.org/10.24432/C5M027.
Creators
Daniel Roggen
Alberto Calatroni
Long-Van Nguyen-Dinh
Ricardo Chavarriaga
Hesam Sagha
DOI
10.24432/C5M027
License
This dataset is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.
This allows for the sharing and adaptation of the dataset for any purpose, provided that appropriate credit is given.