Learning Applied to Ground Vehicles: Enhancing Ground Vehicle Performance through Computer Vision Learning
Ebook · 114 pages · 1 hour


About this ebook

What is Learning Applied to Ground Vehicles


The Learning Applied to Ground Vehicles (LAGR) program, which ran from 2004 to 2008, was designed to accelerate the development of autonomous, perception-based, off-road navigation in robotic unmanned ground vehicles (UGVs). LAGR was funded by DARPA, a research agency of the United States Department of Defense.


How you will benefit


(I) Insights and validations about the following topics:


Chapter 1: DARPA LAGR Program


Chapter 2: DARPA


Chapter 3: Autonomous robot


Chapter 4: Military robot


Chapter 5: DARPA Grand Challenge


Chapter 6: Unmanned ground vehicle


Chapter 7: European Land-Robot Trial


Chapter 8: Mobile robot


Chapter 9: Crusher (robot)


Chapter 10: National Robotics Engineering Center


(II) Answers to the public's top questions about learning applied to ground vehicles.


(III) Real-world examples of the use of learning applied to ground vehicles in many fields.


Who this book is for


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of Learning Applied to Ground Vehicles.

Language: English
Release date: May 5, 2024


    Book preview

    Learning Applied to Ground Vehicles - Fouad Sabry

    Chapter 1: DARPA LAGR Program

    The Learning Applied to Ground Vehicles (LAGR) program, which lasted from 2004 to 2008, aimed to expedite autonomous, perception-based, off-road navigation in robotic unmanned ground vehicles (UGVs). DARPA, the research arm of the United States Department of Defense, supported LAGR.

    While mobile robots (e.g., Shakey) have existed since the 1960s, progress in developing robots that can travel outdoors, off-road, and across terrain with numerous obstacles has been gradual. Moreover, there were no defined criteria in place to monitor progress. Separate research teams fielded robotic vehicles in unrehearsed government tests that measured average speed and the number of required operator interventions along a set course with widely spaced waypoints. These tests revealed the tremendous difficulty of off-road navigation. Although the PerceptOR vehicles were equipped with cutting-edge sensors and algorithms at the turn of the 21st century, the limited range of their perception technology caused them to become stranded in natural dead ends, and their reliance on predetermined behaviors prevented them from adapting to unforeseen events. Except on essentially open terrain with few obstructions, or along dirt roads, the PerceptOR vehicles could not navigate without frequent operator assistance.

    The LAGR initiative was designed to build on the PerceptOR approach while addressing the technological shortcomings revealed by the PerceptOR tests.

    LAGR's primary objective was to expedite the off-road navigation of UGVs. Additional synergistic objectives included (1) establishing a benchmarking methodology for measuring the progress of autonomous robots operating in unstructured environments, (2) advancing machine vision and thereby enabling long-range perception, and (3) increasing the number of institutions and individuals able to contribute to cutting-edge UGV research.

    Instead of focusing on new hardware, the LAGR program was intended to emphasize the development of new science for robot perception and control. Thus, it was decided to provide the LAGR researchers, who were members of competing teams, with a fleet of identical, relatively simple robots, allowing them to focus on algorithm development. Each team was provided with two robots of the standard design. They created new software for these robots and then sent the code to a government testing team, which ran the software on government robots at several test courses. These courses were dispersed around the United States and were previously unknown to the teams. In this manner, all teams' code could be tested under virtually identical conditions. After an initial start-up period, the code development/test cycle was repeated roughly once per month.

    The Carnegie Mellon University National Robotics Engineering Center (CMU NREC) designed and built the standard robot.

    The vehicles’ computers were preloaded with a modular Baseline perception and navigation system, essentially the same system that CMU NREC had created for the PerceptOR program, which was considered to represent the state of the art at the inception of LAGR.

    The modularity of the Baseline system allowed the researchers to replace portions of the Baseline code with their own modules and still have a fully functional navigation system, without having to develop one from scratch.

    Thus, for example, a team could compare its own obstacle-detection module's performance to that of the Baseline code while holding all other variables constant.

    The Baseline code also acted as a constant reference — in every software environment and at any time, teams’ code could be compared to the Baseline code.
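The swap-one-module-at-a-time design described above can be sketched as a plug-in pipeline. This is a minimal, hypothetical illustration, not LAGR code: the module names, thresholds, and data shapes are invented, and the real Baseline system was far richer.

```python
# Hypothetical sketch of a modular navigation pipeline in the spirit of the
# LAGR Baseline system. A team swaps in one module (obstacle detection) while
# every other stage stays fixed, so results are directly comparable.
from typing import Callable, Dict, List

Pipeline = Dict[str, Callable]

def baseline_obstacle_detector(cells: List[float]) -> List[bool]:
    # Placeholder "Baseline" module: flag cells above a fixed cost threshold.
    return [c > 0.5 for c in cells]

def team_obstacle_detector(cells: List[float]) -> List[bool]:
    # A team's replacement module with different behavior (illustrative).
    return [c > 0.7 for c in cells]

def build_pipeline(detector: Callable = baseline_obstacle_detector) -> Pipeline:
    # Only the detector varies; the (toy) planner is held constant.
    return {
        "detect_obstacles": detector,
        "plan_path": lambda costmap: [i for i, blocked in enumerate(costmap)
                                      if not blocked],
    }

def run(pipeline: Pipeline, cells: List[float]) -> List[int]:
    costmap = pipeline["detect_obstacles"](cells)
    return pipeline["plan_path"](costmap)

cells = [0.1, 0.6, 0.9, 0.3]
print(run(build_pipeline(), cells))                        # Baseline result
print(run(build_pipeline(team_obstacle_detector), cells))  # team module swapped in
```

Because the planner and input are identical in both runs, any difference in the output is attributable to the swapped detector alone, which is the comparison the Baseline architecture made possible.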

    This rapid cycle gave the Government team and the performer teams immediate feedback, and enabled the Government team to build test courses that stressed specific perception tasks and whose difficulty was expected to challenge, but not overwhelm, the performers' current capabilities.

    Teams were not required to submit new code for each test, but they commonly did.

    Despite this latitude, some teams regarded the rapid test cycle as detrimental to their long-term development and would have preferred a longer break between tests.

    To proceed to Phase II, each team was required to modify the Baseline code so as to pass the final three Phase I Government tests: on average, robots executing the team's code had to be at least 10% faster than those executing the original Baseline code.

    This rather modest Go/No-Go metric was chosen to allow teams to pursue risky yet promising techniques that might not be fully developed in the program's first 18 months.

    All eight teams met this metric. The objective for Phase II was for teams to reach twice the Baseline speed on later tests, and a few achieved this.

    Note that the Phase I Go/No-Go criterion was designed so that teams were not competing for a fixed number of Phase II slots: anywhere from zero to eight teams could pass.
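The Go/No-Go criterion above reduces to a simple average-speed comparison. The sketch below shows one plausible reading of it; the function name, the per-test speed figures, and the averaging scheme are illustrative assumptions, not the program's actual scoring procedure.

```python
# Illustrative check of the Phase I Go/No-Go criterion: a team passes if,
# averaged over the final three tests, its robots ran at least 10% faster
# than robots running the unmodified Baseline code.

def passes_phase_one(team_speeds, baseline_speeds, margin=0.10):
    """team_speeds / baseline_speeds: average course speeds (m/s), one per test."""
    team_avg = sum(team_speeds) / len(team_speeds)
    baseline_avg = sum(baseline_speeds) / len(baseline_speeds)
    return team_avg >= (1.0 + margin) * baseline_avg

# Hypothetical results for three final Phase I tests.
print(passes_phase_one([1.25, 1.10, 1.30], [1.00, 1.05, 0.95]))  # True
print(passes_phase_one([1.00, 1.02, 0.98], [1.00, 1.05, 0.95]))  # False
```

Note how low the bar is: a team averaging 1.10 m/s against a 1.00 m/s Baseline just clears it, which is consistent with the program's intent to reward risky approaches rather than eliminate teams.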

    This DARPA method was intended to promote cooperation and even code exchange across the teams.

    In Phase I, the first 18 months of LAGR, eight teams were identified as performers. The teams represented Applied Perception (PI Mark Ollis), Georgia Tech (PI Tucker Balch), Jet Propulsion Laboratory (PI Larry Matthies), Net-Scale Technologies (PI Uri Muller), NIST (PI James Albus), Stanford University (PI Sebastian Thrun), SRI International (PI Robert Bolles), and the University of Pennsylvania (PI Daniel Lee).

    The Stanford team left at the conclusion of Phase I to work on the DARPA Grand Challenge; it was replaced by the University of Colorado, Boulder team (PI Greg Grudic). Also during Phase II, the NIST team withdrew from the competition and focused on combining the best software components from each team into a single solution. Roger Bostelman became the effort's leader.

    The LAGR vehicle, roughly the size of a grocery store shopping cart, was intended to be easy to operate; sophisticated motor control was the province of a sister DARPA program, Learning Locomotion. The vehicle was battery powered, with two independently driven wheelchair motors in front and two caster wheels in the rear. When the front wheels were driven in the same direction, the robot moved forward or backward; when they were driven in different directions, it turned.
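The drive arrangement just described is a standard differential drive, and its motion can be sketched with the usual unicycle-model kinematics. This is a generic textbook model, not LAGR software; the wheel-base and time-step values are invented for illustration.

```python
# Minimal differential-drive kinematics matching the LAGR vehicle description:
# two independently driven front wheels; equal speeds move the robot straight,
# unequal speeds turn it. Wheel base and time step are illustrative values.
import math

def diff_drive_step(x, y, theta, v_left, v_right, wheel_base=0.6, dt=0.1):
    """Advance pose (x, y in meters, theta in radians) by one time step."""
    v = (v_left + v_right) / 2.0             # forward speed of the midpoint
    omega = (v_right - v_left) / wheel_base  # turning rate
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Equal wheel speeds: straight-line motion along the heading.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = diff_drive_step(*pose, v_left=1.0, v_right=1.0)
print(pose)  # roughly (1.0, 0.0, 0.0)

# Opposite wheel speeds: the robot spins in place.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = diff_drive_step(*pose, v_left=-0.5, v_right=0.5)
print(pose)  # x and y stay at 0; heading increases
```

The caster wheels in the rear carry load but do not affect these kinematics, which is why the two front-wheel speeds alone determine the motion.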

    The LAGR vehicle's $30,000 price allowed for the construction of a fleet that could be distributed to many teams, expanding the pool of researchers beyond those who had previously participated in DARPA robotics programs. The vehicle's top speed of roughly 3 miles per hour and relatively modest weight of 100 kg posed a far lower safety risk than the vehicles used in past autonomous ground vehicle programs, substantially reducing the funding each team needed to operate its robot.

    Despite this, the LAGR vehicles were highly advanced machinery. Their sensor suite comprised two sets of stereo cameras, an accelerometer, a bumper sensor, wheel encoders, and a global positioning system. The vehicle also featured three user-programmable computers.

    The robots' acquisition of learned behaviors was a pillar of the program. In addition, the program employed passive optical equipment for long-range scene analysis.
