
Projects

Our current projects are listed below.


Industrial Automation and Optimisation: Human Robot Collaboration, Co-robotics and Autonomous Systems

An initiative supported by the East Midlands Enterprise Universities and the UKRAS Network.

MeSAPro: Medium-sized AGV for soft fruit production

Partnership between L-CAS, the Lloyd's Register Foundation and the University of York, 2020-2022

The MeSAPro project is a demonstrator project of the Assuring Autonomy International Programme. It is investigating how to assure the safety of our autonomous soft fruit picking robots; in other words, how to make certain that our robots do not injure people. The work is being carried out in conjunction with the RASBerry programme. MeSAPro will assure autonomy and safe operation through three specific objectives: 1) defining safety requirements for the sense-understand-decide-act components of a soft-fruit picking robot; 2) developing methods to detect deviations from safe behaviour and ways to mitigate the effect of such deviations; and 3) formally verifying the sensing, understanding and deciding components of the robots.

Contact: Simon Parsons (PI), Marc Hanheide

DARKO: Dynamic Agile Production Robots That Learn and Optimize Knowledge and Operations

H2020 RIA, 2021-2024

This is a collaborative project, involving academic institutions and industrial partners across five European countries. DARKO aims to develop a new generation of agile intralogistics robots with energy-efficient actuators for highly dynamic motion, which can operate safely within changing environments, have predictive planning capabilities, and are aware of human intentions. The University of Lincoln will lead the project’s work package on Human-Robot Spatial Interaction (HRSI), including tasks for qualitative spatial modelling of human motion and causal inference for safe human-robot interaction. It will also contribute to other tasks for multi-sensor perception, human motion prediction, and robot intent communication.
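
To make the HRSI task concrete, below is a small illustrative sketch (an assumption for exposition, not DARKO code) of one building block often used in qualitative spatial modelling of human-robot motion: the basic Qualitative Trajectory Calculus (QTC-B) relation between two moving agents.

```python
# Illustrative sketch only (not DARKO code): the basic QTC-B relation.
# Each symbol says whether an agent moves towards ('-'), away from ('+'),
# or neutrally ('0') with respect to the other agent's previous position.
import math

def _sign(delta, eps=1e-3):
    return '-' if delta < -eps else '+' if delta > eps else '0'

def qtc_b(p1_prev, p1, p2_prev, p2):
    """QTC-B symbols for two agents given consecutive 2D positions."""
    d_before = math.dist(p1_prev, p2_prev)
    d1 = math.dist(p1, p2_prev) - d_before   # agent 1 w.r.t. agent 2
    d2 = math.dist(p2, p1_prev) - d_before   # agent 2 w.r.t. agent 1
    return _sign(d1), _sign(d2)

# A robot stepping towards a standing person yields ('-', '0').
print(qtc_b((0.0, 0.0), (0.2, 0.0), (2.0, 0.0), (2.0, 0.0)))
```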

Contact: Nicola Bellotto (PI), Tom Duckett, Marc Hanheide


EPSRC Centre for Doctoral Training in Agri-Food Robotics: AgriFoRwArdS

EPSRC, 2019-2028

AgriFoRwArdS is the world’s first Centre for Doctoral Training (CDT) in Agri-Food Robotics. The Centre brings together a unique collaboration of leading researchers from the Universities of Lincoln, Cambridge and East Anglia, located at the heart of the UK agri-food business, together with the Manufacturing Technology Centre, supported by leading industrial partners and stakeholders. This project will advance the state of the art by creating the largest global cohort of robotics and autonomous systems (RAS) specialists and leaders focused on the Agri-Food sector. This will include 50 PhD scholarships in projects co-designed with industry to give the UK global leadership in RAS across critical and essential sectors of the world economy, expanding the UK’s science and engineering base whilst driving industrial productivity and mitigating the environmental and societal impacts of the currently available solutions. In terms of wider impact, the RAS challenges that need to be overcome in the agri-food sector will have further application across multiple sectors involving field robotics and/or robotics in manufacturing.

Contact: Marc Hanheide


Lincoln Agri-Robotics (LAR)

Research England, Expanding Excellence in England Fund, 2019-2022.

This project creates Lincoln Agri-Robotics, a global centre of excellence in agri-robotics in the UK. This includes research into autonomous agri-robots that can efficiently tend, harvest and quality-control high-value crops with reduced human intervention, improving agricultural productivity and environmental sustainability, and addressing the demands of a growing population. Lincoln Agri-Robotics brings together world-leading expertise in robotics, artificial intelligence and agriculture, based at the site of the University's working farm at Riseholme Campus. Lincoln Agri-Robotics expands the successful interdisciplinary collaboration between two of the University's leading research groups: the Lincoln Institute for Agri-Food Technology (LIAT) and the Lincoln Centre for Autonomous Systems (L-CAS).

Contact: Elizabeth Sklar


Mobile Robotic Platforms for Inspection and Harvesting in Agricultural Areas (BACCHUS)

H2020 RIA, 2020-2022


With a growing population and climate change, agricultural productivity growth is unlikely to meet the increased demand for food. Besides the increasing pressure to produce more, there is an overall need for higher-quality and more sustainable cultivation. Precision agriculture combined with intelligent robotic technologies can push in that direction. The incorporation of such technologies into agricultural production not only benefits productivity but also improves the working conditions of farmers and labourers. Intelligent systems are becoming the go-to solution for precision agriculture, while a large number of farming operations are already transitioning to full autonomy. Smart, automated and selective harvesting, in particular, can considerably improve production by leaving unripe produce in the field to mature. However, achieving such automation requires significant progress in the cognitive and mechatronic capabilities of the robotic agents replacing human workers in these tasks, especially where human-like actions are required. The BACCHUS intelligent mobile robotic system aims to reproduce hand-harvesting operations while taking the manual legwork out by operating autonomously at four different levels: i) performing robot navigation with quality performance guarantees in order to inspect the crops and collect data from the agricultural area through an embedded sensorial system; ii) performing bimanual harvesting operations with the needed finesse using a modular robotic platform; iii) employing additive manufacturing to adjust the robot gripper to the geometry of different crops; and iv) presenting advanced cognitive capabilities and decision-making skills. The envisioned system will be demonstrated and evaluated in the challenging vineyard environment, inspecting different types of vines and harvesting grape bunches of different varieties in a human-like manner.

Contact: Marc Hanheide


Advanced Robotic Breast Examination Intelligent System (ARTEMIS)

Cancer Research UK, 2020-2021.

Early breast cancer detection has a significant impact on extending survival and improving quality of life. Young women with breast cancer account for 7% of all breast cancers. Diagnostic delays are common, and these patients often present at an advanced stage, with more aggressive disease that is less responsive to hormonal therapy. ARTEMIS aims to develop an intelligent robotic system that will help early breast cancer detection. We will develop intelligent algorithms and a soft robotic system for this project. ARTEMIS is a collaborative one-year project funded by Cancer Research UK. Partners in this project include the University of Lincoln, University of Bristol, University of Bradford and Imperial College London.

Contact: Amir Ghalamzan


HaptiC-guidEd mobiLe manipuLatiOn (CELLO)

Flexible Partnership Funding, NCNR, 2020-2021.

CELLO will develop a haptic-guided control strategy for mobile manipulation tasks useful for remote handling in extreme environments. Telemanipulation, which is trusted by conservative industries, is still the only means of performing many robotic tasks. Teleoperating a manipulator with many degrees of freedom is extremely difficult and imposes a high cognitive load on the human operator. Haptic-guided control systems (HGCS) have been successfully utilised for telemanipulation in lab settings where the slave manipulator is located on a bench, and have shown a significantly reduced cognitive load on the human operator during teleoperation. CELLO will develop an HGCS for mobile manipulation tasks, which will help a human operator to teleoperate a mobile robot and the manipulator mounted on it. We will test our HGCS strategy on a Thorvald platform equipped with two Panda robots manufactured by Franka Emika.

Contact: Amir Ghalamzan, Marc Hanheide, Gerhard Neumann & Manolis Chiou


CoRSA: Co-manipulated Robotic Training and Skill Assistance for Telemanipulation in Nuclear Settings

Flexible Partnership Funding, NCNR, 2020-2021.

CoRSA aims to develop a novel training facility for teleoperators, providing variable degrees of autonomy alongside robotic assistance. Our approach relies on the adaptive shared control paradigm to teach a human how to better use a teleoperation system. CoRSA aims to develop methods to identify human skill levels during bilateral master-slave teleoperation, and to build advanced autonomy solutions for a co-manipulated robotic trainer, combining autonomous and teleoperation methods using state-of-the-art variable autonomy and shared control approaches.

Contact: Ayse Kucukyilmaz, Gerhard Neumann & Marc Hanheide


HEAP: Human-Guided Learning and Benchmarking of Robotic Heap Sorting

ERA-NET CHIST-ERA, 2019-2022.

The HEAP project is a European consortium investigating robotic sorting of unstructured heaps of unknown objects. The consortium consists of the University of Lincoln (leading), TU Wien (Markus Vincze), IDIAP in Switzerland (Jean-Marc Odobez), INRIA Nancy (Serena Ivaldi) and IIT in Italy (Lorenzo Natale). Our team will investigate novel robot manipulation and machine learning algorithms that can learn from human guidance and shared control. Robotic heap sorting is of interest for many applications, such as nuclear decommissioning, recycling and manufacturing. The work will also be highly aligned with the National Centre for Nuclear Robotics (NCNR). The consortium aims to build an end-to-end benchmarking framework, which includes rigorous scientific methodology and experimental tools for application in realistic scenarios. Benchmark scenarios will be developed with off-the-shelf manipulators and grippers, allowing the creation of an affordable setup that can be easily reproduced both physically and in simulation.

Contact: Gerhard Neumann & Ayse Kucukyilmaz


ILIAD – Intra-Logistics with Integrated Automatic Deployment: safe and scalable fleets in shared spaces

Horizon 2020 Research and Innovation Action, 2017-20.

The ILIAD project is driven by the application needs of fleets of robots that operate in intralogistics applications with a high demand on flexibility in environments shared with humans. In particular, the project aims to enable automatic deployment of a fleet of autonomous forklift trucks, which will continuously optimise its performance over time by learning from collected data. The University of Lincoln’s contributions to the project will include ensuring long-term operation of the ILIAD system, maintaining its environment representations over time while learning and predicting activity patterns of human co-workers; developing qualitative models for human-robot spatial interaction; systems architecture development and systems integration; and managing experimental work at test facilities, including the University’s National Centre for Food Manufacturing. ILIAD is a collaborative project involving four academic institutes and five industrial partners across four European countries.

Contact: Tom Duckett, Grzegorz Cielniak & Marc Hanheide


STEP2DYNA: Spatial-temporal information processing for collision detection in dynamic environments

Horizon 2020 EU.1.3.3. – Stimulating innovation by means of cross-fertilisation of knowledge

In the real world, collisions happen every second, often resulting in serious accidents and fatalities. Autonomous unmanned aerial vehicles (UAVs) have demonstrated great potential in serving human society, such as delivering goods to households and precision farming, but are restricted due to a lack of collision detection capability. The current approaches to collision detection, such as radar, laser-based LiDAR and GPS, are far from acceptable in terms of reliability, energy consumption and size. A new type of low-cost, low-energy, miniaturised collision detection sensor is badly needed, not only to save millions of people's lives but also to make autonomous UAVs and robots safe to serve human society. The STEP2DYNA consortium proposes an innovative bio-inspired solution for collision detection in dynamic environments. It takes advantage of the low-cost spatial-temporal and parallel computing capacity of visual neural systems and will realise a new chip specifically for collision detection in dynamic environments.
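
A toy illustration of the bio-inspired idea (our sketch for exposition, not the consortium's neural model): an expanding, approaching object produces rapidly growing inter-frame luminance change, so a crude collision alarm can be raised by thresholding the growth of that motion energy.

```python
# Toy sketch, not the consortium's model: threshold the growth of
# inter-frame motion energy as a crude looming/collision cue.
import numpy as np

def motion_energy(prev_frame, frame):
    """Summed absolute inter-frame difference (grayscale float arrays)."""
    return float(np.abs(frame - prev_frame).sum())

def collision_alarms(frames, growth=1.2, min_activity=1.0):
    """Indices where motion energy keeps growing frame over frame."""
    alarms, prev_e = [], None
    for i in range(1, len(frames)):
        e = motion_energy(frames[i - 1], frames[i])
        if prev_e and e > growth * prev_e and e > min_activity:
            alarms.append(i)
        prev_e = e
    return alarms

# Synthetic "approach": change accelerates every frame -> repeated alarms.
frames = [np.full((8, 8), i * i, dtype=float) for i in range(6)]
print(collision_alarms(frames))   # [2, 3, 4, 5]
```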

Contact: Shigang Yue (PI)


NCNR – National Centre for Nuclear Robotics

EPSRC RAI Hub, robotics for extreme environments, 2017-2021.


The National Centre for Nuclear Robotics (NCNR) is a multi-disciplinary EPSRC RAI (Robotics and Artificial Intelligence) Hub consisting of most of the UK's leading nuclear robotics experts, including the Universities of Birmingham, Queen Mary, Essex, Bristol, Edinburgh, Lancaster and Lincoln. Under this project, more than 40 postdoctoral and PhD researchers form a team to develop cutting-edge scientific solutions to all aspects of nuclear robotics, such as sensor and manipulator design, computer vision, robotic grasping and manipulation, mobile robotics, intuitive user interfaces and shared autonomy.

At the University of Lincoln, we will develop new machine learning algorithms for several crucial applications in nuclear robotics, such as waste handling, cell decommissioning and site monitoring with mobile robots. Clean-up and decommissioning of nuclear waste is one of the biggest challenges for current and future generations, with enormous predicted costs (up to £200Bn over the next hundred years). Moreover, recent disasters such as Fukushima have shown the crucial importance of robotics technology for monitoring and intervention, which is still missing to date. Our team will focus on algorithms for vision-guided robot grasping and manipulation, cutting, shared control and semi-autonomy, mobile robot navigation, and outdoor mapping and navigation, with a strong focus on machine learning and adaptation techniques. A dedicated bimanual robot arm platform, mounted on a mobile base, is being developed, to be operated using shared autonomy, teleoperation and augmented reality concepts developed in the project.

Contact: Gerhard Neumann & Marc Hanheide


Lindsey – A robot tour guide

Bilateral Collaboration with Lincolnshire County Council

Lindsey is a completely autonomous robot, developed in a partnership between Lincolnshire County Council and L-CAS, that delivers tours to visitors of The Collection museum in Lincoln, employing state-of-the-art adaptation and learning from user interactions.

Contact: Marc Hanheide


RASBerry – Robotics and Autonomous Systems for Berry Production

Innovate UK, Innovation in health and life sciences round 1, 2017-19

The RASBerry project (Robotics and Autonomous Systems for Berry Production) will develop autonomous fleets of robots for the horticultural industry. In particular, the project will consider strawberry production in polytunnels. The first major objective is to support in-field transportation operations to aid and complement human fruit pickers. A dedicated mobile platform is being developed, together with software components for fleet management, in-field navigation and mapping, long-term operation, and safe human-robot collaboration. A solution for autonomous in-field transportation will significantly decrease strawberry production costs, address labour shortages and be the first step towards fully autonomous robotic systems for berry production.

RASBerry is a collaboration between the Norwegian University of Life Sciences (NMBU), University of Lincoln, Saga Robotics, CBS Ltd., Berry Gardens Growers and Ekeberg Myhrene. Further funds are provided by Innovasjon Norge, NFR Forny and Innovate UK.

Contact:  Marc Hanheide & Grzegorz Cielniak


Robo-Pick: Robots for Autonomous Mushroom Picking

Innovate UK

This project aims to develop a new robotic picking system to harvest fresh mushrooms, reducing labour demands by ca. 66%. The work will be carried out by a consortium comprising: Littleport Mushroom Farms, a major UK mushroom supplier; ABB, a major UK-based robotics supplier; Stelram, a small specialist UK developer of robotic solutions; and the University of Lincoln, a leading research group focusing on robotic applications in the food industry.

The project will integrate novel soft robotic actuators, vision systems and data analysis with autonomous robots, and will deliver an end-to-end solution to a problem that has challenged the industry for many years. It will greatly increase the competitiveness of UK production, and the outcomes are directly transferable to many sectors of the UK food and manufacturing industries.

Contact: Gerhard Neumann (PI)


3D Vision-based Crop-Weed Discrimination for Automated Weeding Operations

BBSRC and Innovate UK (Agri-Tech Catalyst, Early Stage Feasibility), 2016-18

This project will investigate the technical foundations for the next generation of robotic weeding machinery, enabling selective and accurate treatment of specific weeds. The proposed technology is a novel combination of low-cost 3D sensing and learning software together with a suitable weed destruction technique. The proposed developments will lead to more efficient cultural weeding equipment resulting in better management of weeds and reduced input use, bringing several benefits to food producers, sellers and society. The technical objectives of the project include detection of plants using a low-cost 3D camera vision system, discrimination of target crop plants from weeds at different growth stages, providing an intuitive system training interface for rapid deployment, development of a proof-of-concept weed destruction technique and finally integration and evaluation of the developed technology in automated weeding products.
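
As a purely illustrative sketch of the discrimination step (the features, centroids and scaling below are invented, not the project's method), plants segmented from a 3D view could be classified by simple descriptors such as canopy height and footprint area against per-class centroids learned from a few labelled examples:

```python
# Toy sketch with invented features/centroids, not the project's method:
# nearest-centroid classification of segmented plants by 3D descriptors.
import math

CENTROIDS = {                  # (canopy height m, footprint m^2) per class
    'crop': (0.12, 0.004),
    'weed': (0.05, 0.001),
}

def classify(height_m, area_m2):
    def dist(label):
        h, a = CENTROIDS[label]
        # Scale the area term so both features contribute comparably.
        return math.hypot(height_m - h, (area_m2 - a) * 100.0)
    return min(CENTROIDS, key=dist)

print(classify(0.11, 0.0035))   # -> 'crop'
```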

Contact:  Grzegorz Cielniak & Tom Duckett


Application of machine learning and high speed 3D vision algorithms for real time detection of fruit

Collaborative Training Partnership for Fruit Crop Research funded by BBSRC and Industry, 2017-21

The main objective of this project is to deploy novel machine learning technologies to detect, locate and measure (size and colour) fruit in real time. This work fundamentally underpins the development of all crop-picking robots. The project will develop advanced machine learning algorithms to measure, identify and detect fruit in real time and in 3D. The algorithms will be trainable (so that a range of fruit types can be identified) and will provide world x, y, z coordinates of the fruit. A similar system was developed for broccoli (Kusumam, 2017), which showed that 3D cameras could be deployed in field environments. The new challenge for this project will be to minimise the processing requirements to identify fruit whilst maximising processing speed and recognition fidelity. The project will initially focus on strawberries and is anticipated to extend to apples.
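
As an illustration of how a detection plus a depth reading yields the 3D coordinate mentioned above, here is a minimal back-projection sketch using the standard pinhole camera model (the intrinsics and pixel values are made up; a real RGB-D calibration would supply them):

```python
# Minimal back-projection sketch (illustrative intrinsics, not real ones).
def pixel_to_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth in metres to camera-frame XYZ."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m

# E.g. a strawberry detected at pixel (412, 300), 0.85 m from the camera,
# with made-up intrinsics for a 640x480 RGB-D sensor.
print(pixel_to_xyz(412, 300, 0.85, fx=570.0, fy=570.0, cx=320.0, cy=240.0))
```

A fixed transform from the camera frame to the vehicle or world frame would then give the world coordinate of the fruit.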

Contact:  Grzegorz Cielniak


Development and Demonstration of an Automated, Selective Broccoli Harvester

Agriculture and Horticulture Development Board, 2017-21


Broccoli is one of the world's largest vegetable crops, and almost all of it is currently harvested by hand. Development of an automated harvester would help to increase productivity and improve growers' ability to control production costs. In this project, we will develop 3D imaging technologies to accurately identify broccoli plants in the field in all light conditions, including at night; accurately measure the size of each plant head and compare it against pre-agreed criteria in order to establish whether or not it is suitable for cutting; and establish the precise location of each broccoli head selected for cutting. Working together with industry partners, we will integrate the developed technologies into a prototype robotic system for automatic selective harvesting of broccoli, which can work for long periods to cut, lift and collect heads of the preferred size.
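
A minimal sketch of the size-grading step described above, assuming the hard part (segmenting a head's 3D points) is already done; the measurement is a crude horizontal-extent estimate and the thresholds are illustrative, not the project's pre-agreed criteria:

```python
# Illustrative sketch: grade a segmented broccoli head by a crude
# horizontal-extent diameter estimate (thresholds are made up).
import numpy as np

def head_diameter_m(points_xyz):
    """Approximate diameter as the mean x/y extent of the head's points."""
    extent = points_xyz.max(axis=0) - points_xyz.min(axis=0)
    return float((extent[0] + extent[1]) / 2.0)

def ready_to_cut(points_xyz, min_d=0.10, max_d=0.18):
    return min_d <= head_diameter_m(points_xyz) <= max_d

rng = np.random.default_rng(0)
head = rng.uniform(-0.06, 0.06, size=(500, 3))   # synthetic ~12 cm cluster
print(ready_to_cut(head))   # True
```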

Contact: Tom Duckett & Grzegorz Cielniak 


Autonomous Field Rover for Agricultural Research

Research Investment Fund (RIF), University of Lincoln, 2016-18
BBSRC Seeding Catalyst, 2017-18

This project targets the challenge of developing technology for autonomous soil sampling, extending the state of the art for soil quality assessment and correlating measured soil variability with crop yield. We therefore propose to develop a novel autonomous vehicle for the agri-industry which can amalgamate multiple sensors to develop highly precise soil fertility maps. This cross-disciplinary project is based within the newly formed Lincoln Institute for Agri-Food Technology and includes contributions from the Schools of Computer Science, Engineering, Life Sciences and the National Centre for Food Manufacturing at the University of Lincoln.

Contact:  Grzegorz Cielniak & Tom Duckett


Autonomous Conversational Systems


We investigate interactive machines that perceive, act, communicate, and learn. This is motivated by the fact that communicating with multiple modalities such as speaking, touching and pointing is a natural and efficient form of interaction among humans. Our approach is three-fold. First, we treat (multimodal) perception, action and communication jointly rather than independently. Second, we aim to increase the autonomy of intelligent conversational systems by reducing the amount of human intervention (development-wise), in order to train machines from example interactions rather than from interactions contrived for the purpose of system training. Third, we use conversational systems as a test bed to challenge machine learning algorithms in their application to real-world systems that can operate in realistic scenarios, beyond lab environments. Example applications include robots playing games and personal assistants providing information in hands-free scenarios.

Contact: Heriberto Cuayahuitl (PI)


Cognitive Social Robotics

Internally funded, University of Lincoln & NVIDIA hardware grant


To be useful in the real world, robots will need to be social in order to interact with and help people effectively. The robots should take into account the cognitive abilities and limitations of their human partners, just as people do when interacting with others. Doing so involves the combined application of artificial intelligence, cognitive science, psychology, and robotics. This research develops autonomous, cognitively capable, and social robots that interact with people in real environments to achieve measurable benefits. Evaluation work is conducted in the real world wherever possible, in the domains of education, healthcare, and other collaborative applications.

Contact: Paul Baxter (PI)


Completed Projects

Synthesis of remote sensing and novel ground truth sensors to develop high-resolution soil moisture forecasts in China and the UK

Science and Technology Facilities Council (STFC) – Newton Fund: Earth Observation, Modelling and Autonomous Systems For Agri-Tech In China, 2016-19.

The availability of water is a key driver of agricultural productivity, and the impact of water availability on global food production is seen as a key global risk and challenge. This project seeks to develop agri-tech solutions to help alleviate the issue of water availability in agriculture, and for producers to ultimately drive water use efficiency. Currently, there is no system to measure soil moisture distribution accurately across a field, and the resolution of remote sensing has not been sufficient for agricultural applications or for local water management to reduce flood risk. The project will deploy two new sensors (one static, one mobile) within China that measure soil moisture content as a function of the albedo of cosmically generated fast neutrons (COSMOS sensor, designed by Hydroinnova, US). The project coordinates the expertise of four key groups: the University of Lincoln (robotics, mapping and deployment of autonomous vehicles); the Institute of Ecology and Agrometeorology (IEAM) of the Chinese Academy of Meteorological Sciences, University of Information Science & Technology; the Centre for Ecology and Hydrology (Wallingford); and the School of Geography and Earth Sciences, Aberystwyth University.

Contact: Grzegorz Cielniak & Tom Duckett


FLOBOT: Floor Washing Robot for Professional Users

EU Horizon 2020, Innovation Action, 645376


FLOBOT is a collaborative project, involving academic institutes and industrial partners across five European countries. The project will develop a floor washing robot for industrial, commercial, civil and service premises, such as supermarkets and airports. Floor washing tasks have many demanding aspects, including autonomy of operation, navigation and path optimisation, safety with regard to humans and goods, interaction with human personnel, and easy set-up and reprogramming. FLOBOT addresses these problems by integrating existing and new solutions to produce a professional floor washing robot for wide areas. Our research contribution in this project focuses on robot perception, based on laser range-finder and RGB-D sensors, for human detection, tracking and motion analysis in dynamic environments. Primary tasks include developing novel algorithms and approaches for the acquisition, maintenance and refinement of multiple human motion trajectories for collision avoidance and path optimisation, as well as integration of these algorithms with the robot navigation and on-board floor inspection system.
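
For readers unfamiliar with the tracking component, the following is a generic constant-velocity Kalman filter sketch of the kind commonly used to maintain person trajectories from laser/RGB-D detections. It is an assumption for illustration, not FLOBOT's actual tracker, and all noise values are tuning guesses:

```python
# Generic constant-velocity Kalman filter sketch (not FLOBOT's tracker).
import numpy as np

dt = 0.1                                   # detector period (s), assumed
F = np.array([[1, 0, dt, 0],               # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                # only position is measured
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01                       # process noise (tuning guess)
R = np.eye(2) * 0.05                       # measurement noise (tuning guess)
x, P = np.zeros(4), np.eye(4)

def track_step(z):
    """One predict/update cycle given a detected position z = (x, y)."""
    global x, P
    x = F @ x                              # predict
    P = F @ P @ F.T + Q
    innov = np.asarray(z, dtype=float) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ innov                      # update
    P = (np.eye(4) - K @ H) @ P
    return x[:2], x[2:]                    # position, velocity estimates

pos, vel = track_step((1.0, 2.0))
```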

Team: Nicola Bellotto (PI), Tom Duckett (Co-I), Zhi Yan


ENRICHME: Enabling Robot and assisted living environment for Independent Care and Health Monitoring of the Elderly

EU Horizon 2020, Research & Innovation Action, 643691


ENRICHME is a collaborative project, involving academic institutes, industrial partners and charity organisations across six European countries. It tackles the progressive decline of cognitive capacity in the ageing population by proposing an integrated platform for Ambient Assisted Living (AAL) with a mobile robot for long-term human monitoring and interaction, which helps the elderly to remain independent and active for longer. The system will contribute to and build on recent advances in mobile robotics and AAL, exploiting new non-invasive techniques for physiological and activity monitoring, as well as adaptive human-robot interaction, to provide services in support of mental fitness and social inclusion. Our research contribution in this project focuses on robot perception and ambient intelligence for human tracking and identity verification, as well as physiological and long-term activity monitoring of the elderly at home. Primary tasks include developing novel algorithms and approaches for the acquisition, maintenance and refinement of models that describe human motion behaviours over extended periods, as well as integration of these algorithms with the AAL system.

Team: Nicola Bellotto (PI), Shigang Yue (Co-I), Tryphon Lambrou (Co-I), Serhan Cosar, Manuel Fernandez-Carmona


Learn-Cars: Structured Deep Learning for Autonomous Driving

Funded by Toyota Europe

We will follow a data-driven approach to achieve human-like driving styles with human-level adaptability and personalisation to the human driver/passenger. We will estimate driving controllers from collected experience and extract a library of different maneuvers from demonstrated data. We will use the maneuver library to plan the trajectory of the car by switching between different maneuvers. We will also use optimal control and reinforcement learning techniques to improve the individual maneuvers such that they generalise to unseen situations and possibly even outperform human drivers. In particular, we will concentrate on learning to resolve dangerous situations, such as avoiding an unexpected obstacle. An important research question for using maneuver libraries is how to switch between maneuvers. The system should produce as few switches as possible to generate a smooth driving behaviour. Moreover, we need to incorporate high-dimensional sensory input from the environment in the switching decision. To do so, we will investigate the use of deep learning techniques.
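
The switching question can be made concrete with a small sketch (invented costs, not the project's method): choose the maneuver with the lowest predicted cost, but charge a penalty for leaving the currently active maneuver, so the planner produces as few switches as possible.

```python
# Sketch with invented costs: lowest-cost maneuver wins, but switching
# away from the active maneuver is penalised to keep behaviour smooth.
def select_maneuver(costs, current, switch_penalty=0.5):
    """costs: dict of maneuver name -> predicted cost for this cycle."""
    return min(costs, key=lambda m:
               costs[m] + (0.0 if m == current else switch_penalty))

print(select_maneuver({'lane_keep': 1.0, 'evade': 0.8}, 'lane_keep'))  # lane_keep
print(select_maneuver({'lane_keep': 1.0, 'evade': 0.3}, 'lane_keep'))  # evade
```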

Contact: Gerhard Neumann (PI)


ActiVis: Active Vision with Human-in-the-Loop for the Visually Impaired

Google Faculty Research Award, Winter 2015

The research proposed in this project is driven by the need for independent mobility for the visually impaired. It addresses the fundamental problem of active vision with human-in-the-loop, which allows for an improved navigation experience, including real-time categorisation of indoor environments with a handheld RGB-D camera. This is particularly challenging due to the unpredictability of human motion and sensor uncertainty. While visual-inertial systems can be used to estimate the position of a handheld camera, the camera must often also be pointed towards observable objects and features to facilitate particular navigation tasks, e.g. to enable place categorisation. An attention mechanism for purposeful perception, which drives human actions to focus on surrounding points of interest, is therefore needed. This project proposes a novel active vision system with human-in-the-loop that anticipates, guides and adapts to the actions of a moving user, implemented and validated on a mobile device to aid the indoor navigation of the visually impaired.

Team: Nicola Bellotto (PI), Grzegorz Cielniak (Co-I), Jaycee Lock


FInCoR: Facilitating Individualised Collaboration with Robots

Research Investment Fund (RIF), University of Lincoln

The FInCoR project will investigate novel ways to facilitate individualised Human-Robot Collaboration through long-term adaptation at the level of joint tasks. This will enable robots to work with humans more effectively in scenarios such as high-value manufacturing and assistive care. Imagine a robot helping to assemble a car's dashboard more effectively, knowing that it is working with a left-handed person; or a robot assisting an elderly employee in a car factory who is skilled in fitting a speedometer, but requires a third hand holding the heavy mounting frame in place. Despite significant progress in human-robot collaboration, today's robotic systems still lack the ability to adjust to an individual's needs. FInCoR will overcome this limitation by developing online, in-situ adaptation, putting the "human in the loop". It will bring together flexible task representations (e.g. Markov decision processes), machine learning (e.g. reinforcement learning), advanced robot perception (e.g. body tracking), and robot control (e.g. reactive planning) to make progress from pre-scripted tasks to individualised models. These models account for the preferences, abilities, and limitations of each individual human through long-term adaptation. Hence, FInCoR will enable processes known from human-human collaboration, such as two colleagues working together and learning more about each other's strengths, preferences, and strategies, to take place in human-robot teams.
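
As a toy illustration of the "adapt to the individual via reinforcement" idea (a bandit-style sketch under invented rewards, far simpler than FInCoR's models), a robot could learn from feedback whether a particular collaborator prefers left- or right-handed handovers:

```python
# Toy bandit-style sketch (invented task and rewards, far simpler than
# FInCoR's models): learn an individual's preferred handover side.
import random

actions = ['hand_over_left', 'hand_over_right']
Q = {a: 0.0 for a in actions}
alpha, epsilon = 0.2, 0.1

def person_feedback(action):
    """Hypothetical left-handed collaborator rewards left handovers."""
    return 1.0 if action == 'hand_over_left' else -0.5

for _ in range(200):
    a = random.choice(actions) if random.random() < epsilon \
        else max(Q, key=Q.get)
    Q[a] += alpha * (person_feedback(a) - Q[a])   # incremental update

print(max(Q, key=Q.get))   # -> 'hand_over_left'
```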

Contact: Marc Hanheide & Peter Lightbody


STRANDS: Spatio-Temporal Representations and Activities for Cognitive Control in Long-Term Scenarios

EU FP7 Integrating Project, 600623

STRANDS will produce intelligent mobile robots that are able to run for months in dynamic human environments. We will provide robots with the longevity and behavioural robustness necessary to make them truly useful assistants in a wide range of domains. Such long-lived robots will be able to learn from a wider range of experiences than has previously been possible, creating a whole new generation of autonomous systems able to extract and exploit the structure in their worlds.

L-CAS scientists in STRANDS are part of a European team of researchers and companies, contributing their unique expertise in long-term mapping and behaviour generation in integrated robotic systems. Our robot developed in STRANDS is called "Linda".

Contact: Tom Duckett & Marc Hanheide


Mobile Robotics for Ambient Assisted Living

Research Investment Fund (RIF), University of Lincoln

The life span of ordinary people is increasing steadily, and many developed countries, including the UK, are facing the big challenge of dealing with an ageing population at greater risk of impairments and cognitive disorders, which hinder their quality of life. Early detection and monitoring of human activities of daily living (ADLs) is important in order to identify potential health problems and apply corrective strategies as soon as possible. In this context, the main aim of the current research is to monitor human activities in an ambient assisted living (AAL) environment, using a mobile robot for 3D perception, high-level reasoning and representation of such activities. The robot will enable constant but discreet monitoring of people in need of home care, complementing other fixed monitoring systems and proactively engaging in case of emergency. The goal of this research will be achieved by developing novel qualitative models of ADLs, including new techniques for 3D sensing of human motion and RFID-based object recognition. This research will be further extended by new solutions in long-term human monitoring for anomaly detection.
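
One simple way to picture the anomaly-detection extension (an illustrative sketch, not the project's qualitative models) is a statistical test over logged activity durations, flagging a day that deviates strongly from the person's long-term baseline:

```python
# Illustrative sketch: flag a logged activity duration that deviates
# strongly (z-score) from the person's long-term baseline.
import statistics

def is_anomalous(history_minutes, today_minutes, z_thresh=3.0):
    mu = statistics.mean(history_minutes)
    sd = statistics.stdev(history_minutes)
    return sd > 0 and abs(today_minutes - mu) / sd > z_thresh

sleep_log = [470, 455, 480, 462, 475, 468, 473]   # minutes per night
print(is_anomalous(sleep_log, 180))               # True: unusually short
```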

Contact: Nicola Bellotto (PI), Tom Duckett & Serhan Cosar (Co-Is), Claudio Coppola


HAZCEPT: Towards zero road accidents – nature inspired hazard perception

EU FP7-PEOPLE-2012-IRSES


The number of road traffic fatalities worldwide has recently reached 1.3 million each year, with between 20 and 50 million injuries being caused by road accidents. In theory, all accidents can be avoided. Studies have shown that more than 90% of road accidents are caused by or related to human error. Developing an efficient system that can detect hazardous situations robustly is the key to reducing road accidents. The HAZCEPT consortium will focus on automatic hazard scene recognition for safe driving. HAZCEPT will address hazard recognition from three aspects: the lower visual level, the cognitive level, and drivers' factors in the safe driving loop.

Contact: Shigang Yue (PI)


LIVCODE: Life like information processing for robust collision detection

EU FP7-PEOPLE-2011-IRSES, Coordinator

Animals are especially good at collision avoidance, even in a dense swarm. In the future, every kind of man-made moving machine, such as ground vehicles, robots, UAVs, aeroplanes, boats, and even moving toys, should have the same ability to avoid collision with other things, if a robust collision detection sensor is available. The six partners of this EU FP7 project, from the UK, Germany, Japan and China, will look further into insects' visual pathways and take inspiration from animal vision systems to explore robust embedded solutions for vision-based collision detection for future intelligent machines.

Contact: Shigang Yue (PI)


3D Vision Assisted Robotic Harvesting of Broccoli

BBSRC and Innovate UK (Agri-Tech Catalyst, Early Stage Feasibility)

There is an urgent need to reduce the costs of production of field brassica crops, in particular broccoli. Labour costs are a significant proportion of overall production costs. High labour usage also drives complex management and potentially social issues. In this project, we will test whether 3D camera technology can be used to identify and select broccoli which is ready to harvest within commercial crops. This will provide a key underpinning step towards the development of a fully automatic, camera-guided robotic harvesting system for broccoli. The commercial benefits are highly significant, as the broccoli crop is one of the world's largest vegetable crops, and almost all of it is manually harvested.

Contact: Tom Duckett & Grzegorz Cielniak


Trainable Vision-based Anomaly Detection and Diagnosis (TADD)

Technology Strategy Board funded Technology Inspired CR&D – ICT Project

Market demand for automation of food processing and packaging is increasing, leading to a demand for increased automation of industrial quality control (QC) procedures. This project is developing a multi-purpose intelligent software technology using computer vision and machine learning for automatic detection and diagnosis of faulty products, including raw, processed and packaged food products. The developed vision systems are user-trainable, requiring minimal set-up to work with a wide variety of products and processes. The technology will be refined and evaluated by testing in automated QC equipment in the food industry.

Contact: Tom Duckett


EYE2E: Building visual brains for fast human machine interaction

In the real world, many animals possess almost perfect sensory systems for fast and efficient interactions within dynamic environments. Vision, as an evolved organ, plays a significant role in the survival of many animal species. The mechanisms in biological visual pathways provide good models for developing artificial vision systems. The four partners of this consortium will work together to explore biological visual systems at both lower and higher levels, through modelling, simulation, integration and realisation in chips, to investigate fast image processing methodologies for human-machine interaction through VLSI chip design and robotic experiments.

Contact: Shigang Yue


Autonomous Control of Agricultural Sprayers

Technology Strategy Board funded CR&D Feasibility Study

This project aims to improve the efficiency of agricultural spraying vehicles by developing a robust control system for the height of the spraying booms, using laser sensing to model the 3D structure of the crop canopy and terrain ahead of the vehicle. This new technology will enable greater autonomy in agricultural sprayers, enhance and simplify interaction between the driver and the vehicle, and result in an optimised spraying process.
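
As a sketch of the control idea (gains and the sensing interface are invented for illustration, not the project's controller), the boom could be regulated towards a target height above the sensed canopy with a simple PID loop:

```python
# Illustrative PID sketch (gains and sensing interface are made up).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

pid = PID(kp=2.0, ki=0.1, kd=0.4, dt=0.05)
target_height = 0.5                  # metres above the sensed canopy
measured = 0.62                      # e.g. from the look-ahead laser scan
boom_cmd = pid.update(target_height - measured)   # actuator command
```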

Contact: Grzegorz Cielniak & Tom Duckett


MultiDS: Multi-Domain Dialogue System Using Deep Reinforcement Learning

Funded by Samsung Electronics

Dialogue systems that interact with humans are still confined to restricted vocabularies and tasks, and rigid interactions. The latter is of particular interest to multi-domain dialogue systems because users at times deviate from the system's expected behaviour (e.g. asking for restaurant information while searching for a hotel). The MultiDS project aims to overcome these limitations through the following research objectives: (1) create a novel learning algorithm for multi-domain (spoken) dialogue management; (2) create a multi-domain dialogue system with support for flexible navigation across domains; and (3) evaluate the proposed dialogue system using spoken interaction.
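
For illustration only (a skeletal sketch, not the MultiDS system), deep RL dialogue management of this kind typically maps a dialogue-state vector to one of several domain actions via learned Q-values with epsilon-greedy exploration; the action names below are hypothetical:

```python
# Skeletal sketch, not the MultiDS system: epsilon-greedy action selection
# over hypothetical multi-domain dialogue actions.
import random

ACTIONS = ['ask_hotel_area', 'ask_restaurant_food', 'offer_venue', 'goodbye']

def q_values(state_vector):
    """Stand-in for a trained network's Q-value outputs."""
    return [random.random() for _ in ACTIONS]

def select_action(state_vector, epsilon=0.05):
    if random.random() < epsilon:              # explore
        return random.choice(ACTIONS)
    q = q_values(state_vector)                 # exploit
    return ACTIONS[q.index(max(q))]

print(select_action([0.0, 1.0, 0.3]))
```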

Contact: Heriberto Cuayahuitl