Learning Applied to Ground Vehicles: Enhancing Ground Vehicle Performance through Computer Vision Learning
By Fouad Sabry
About this ebook
What is Learning Applied to Ground Vehicles
The Learning Applied to Ground Vehicles (LAGR) initiative, which ran from 2004 to 2008, was designed to expedite the development of autonomous, perception-based, off-road navigation in robotic unmanned ground vehicles (UGVs). LAGR was funded by DARPA, a research agency of the United States Department of Defense.
How you will benefit
(I) Insights and validations regarding the following topics:
Chapter 1: DARPA LAGR Program
Chapter 2: DARPA
Chapter 3: Autonomous robot
Chapter 4: Military robot
Chapter 5: DARPA Grand Challenge
Chapter 6: Unmanned ground vehicle
Chapter 7: European Land-Robot Trial
Chapter 8: Mobile robot
Chapter 9: Crusher (robot)
Chapter 10: National Robotics Engineering Center
(II) Answers to the public's top questions about learning applied to ground vehicles.
(III) Real-world examples of the use of learning applied to ground vehicles across many fields.
Who this book is for
Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and those who want to go beyond basic knowledge or information for any kind of Learning Applied to Ground Vehicles.
Book preview
Learning Applied to Ground Vehicles - Fouad Sabry
Chapter 1: DARPA LAGR Program
The Learning Applied to Ground Vehicles (LAGR) program, which lasted from 2004 to 2008, aimed to expedite autonomous, perception-based, off-road navigation in robotic unmanned ground vehicles (UGVs). DARPA, the research arm of the United States Department of Defense, supported LAGR.
While mobile robots (e.g., Shakey) have existed since the 1960s, progress in developing robots that can travel outdoors, off-road, and over terrain with numerous obstacles had been gradual. In fact, there were no defined criteria in place for monitoring progress. In the earlier DARPA PerceptOR program, separate research teams fielded robotic vehicles in unrehearsed government tests that measured average speed and the number of required operator interventions along a set course with widely spaced waypoints. These tests revealed the tremendous difficulties of off-road navigation. Although the PerceptOR vehicles were equipped with state-of-the-art sensors and algorithms at the turn of the 21st century, the limited range of their perception technology caused them to become stranded in natural dead ends, and their reliance on predetermined behaviors prevented them from adapting to unforeseen events. Except on essentially open terrain with few obstructions, or along dirt roads, the PerceptOR vehicles could not navigate without frequent, repeated operator assistance.
The LAGR initiative was designed to build on the PerceptOR approach while addressing the technological issues revealed by the PerceptOR testing.
LAGR's primary objective was to expedite the off-road navigation of UGVs. Additional synergistic objectives included (1) establishing a benchmarking methodology for measuring the progress of autonomous robots operating in unstructured environments, (2) advancing machine vision and thereby enabling long-range perception, and (3) increasing the number of institutions and individuals able to contribute to cutting-edge UGV research.
Instead of focusing on new hardware, the LAGR program was intended to emphasize the development of new science for robot perception and control. It was therefore decided to provide the LAGR researchers, who were members of competing teams, with a fleet of identical, relatively simple robots, allowing them to focus on algorithm development. Each team was provided with two robots of the standard design. The teams created new software for these robots and then sent the code to a government testing team, which ran the software on government robots at several test courses. These courses were dispersed around the United States and were previously unknown to the teams. In this manner, all teams' code could be tested under virtually identical conditions. After an initial start-up period, the code development/test cycle was repeated roughly once per month.
The Carnegie Mellon University National Robotics Engineering Center (CMU NREC) designed and built the standard robot.
The vehicles' computers were preloaded with a modular Baseline perception and navigation system, essentially the same system that CMU NREC had created for the PerceptOR program, which was considered to represent the state of the art at the inception of LAGR.
The modularity of the Baseline system allowed researchers to replace portions of the Baseline code with their own modules and still have a fully functional navigation system, without having to build one from scratch.
Thus, for example, a team was able to compare its own obstacle detection module's performance to that of the Baseline code while holding all other variables constant.
The Baseline code also acted as a constant reference — in every software environment and at any time, teams’ code could be compared to the Baseline code.
This rapid cycle gave the Government team and the performer teams immediate feedback, and enabled the Government team to build test courses that stressed specific perception tasks and whose difficulty was expected to challenge, but not overwhelm, the performers' current capabilities.
Teams were not required to submit new code for each test, but they commonly did.
Despite this latitude, some teams regarded the rapid test cycle as detrimental to their long-term development and would have preferred a longer interval between tests.
To proceed to Phase II, each team was required to modify the Baseline code so that, on the final three Phase I Government tests, robots executing the team's code were on average at least 10% faster than those executing the original Baseline code.
This rather modest Go/No Go metric was chosen to allow teams to pursue risky yet promising techniques that might not be fully developed within the program's first 18 months.
All eight teams met this metric. In Phase II, the objective was for teams to achieve twice the speed of the Baseline on later tests, and a few teams achieved this.
Note that the Phase I Go/No Go criterion was designed so that teams were not competing for a fixed number of Phase II slots: any number of teams, from zero to eight, could pass.
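The Go/No Go criterion described above can be sketched in a few lines of code. This is an illustrative reading of the rule (team average speed at least 10% above the Baseline average over the final three tests); the function name, variable names, and speed values are hypothetical, not taken from the program's actual scoring procedure.

```python
# Illustrative sketch of the Phase I Go/No Go check: a team passes if,
# averaged over the final three Government tests, its robots are at
# least 10% faster than the Baseline. All names and values here are
# hypothetical, for explanation only.

def passes_go_no_go(team_speeds, baseline_speeds, margin=0.10):
    """Return True if the team's average speed beats the Baseline average by `margin`."""
    team_avg = sum(team_speeds) / len(team_speeds)
    baseline_avg = sum(baseline_speeds) / len(baseline_speeds)
    return team_avg >= (1.0 + margin) * baseline_avg
```

Note that because any team clearing the 10% bar advanced, this check is applied to each team independently rather than ranking teams against one another.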
This DARPA method was intended to promote cooperation and even code exchange across the teams.
In Phase I, the first 18 months of LAGR, eight teams were identified as performers. The teams represented Applied Perception (PI Mark Ollis), Georgia Tech (PI Tucker Balch), Jet Propulsion Laboratory (PI Larry Matthies), Net-Scale Technologies (PI Uri Muller), NIST (PI James Albus), Stanford University (PI Sebastian Thrun), SRI International (PI Robert Bolles), and the University of Pennsylvania (PI Daniel Lee).
The Stanford team left at the conclusion of Phase I to work on the DARPA Grand Challenge and was replaced by a team from the University of Colorado, Boulder (PI Greg Grudic). Also during Phase II, the NIST team withdrew from the competition to focus on combining the best software components from each team into a single system; Roger Bostelman led that effort.
The LAGR vehicle, roughly the size of a grocery-store shopping cart, was intended to be simple to operate; sophisticated motor control was the focus of Learning Locomotion, a sister DARPA program. The vehicle was battery powered, with two independently driven wheelchair motors at the front and two caster wheels at the rear. When the front wheels were driven in the same direction, the robot moved forward or backward; when they were driven in different directions, the robot turned.
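The drive arrangement just described is standard differential-drive kinematics, and can be sketched as follows. The function name and the track width value are illustrative assumptions, not the LAGR vehicle's actual geometry.

```python
# Minimal differential-drive kinematics sketch for a LAGR-style robot:
# two independently driven front wheels; equal wheel speeds move the
# robot straight, unequal speeds turn it. The track width (distance
# between the wheels) is an assumed value for illustration.

def body_velocity(v_left, v_right, track_width=0.6):
    """Map left/right wheel speeds (m/s) to (linear m/s, angular rad/s) velocity."""
    v = (v_left + v_right) / 2.0              # forward speed of the robot's center
    omega = (v_right - v_left) / track_width  # positive omega = counterclockwise turn
    return v, omega
```

Equal wheel speeds yield zero angular velocity (straight-line motion), while equal and opposite speeds yield zero linear velocity (spinning in place), matching the behavior described above.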
The LAGR vehicle's $30,000 price allowed the construction of a fleet that could be distributed to a number of teams, expanding the pool of researchers beyond those who had previously participated in DARPA robotics programs. The vehicle's top speed of roughly 3 miles per hour and modest weight of about 100 kg meant that it posed a significantly lower safety risk than the vehicles used in past autonomous ground vehicle programs, substantially reducing the funding each team needed to operate its robot.
Despite this, the LAGR vehicles were sophisticated machines. Their sensor suite comprised two pairs of stereo cameras, an accelerometer, a bumper sensor, wheel encoders, and a global positioning system. Each vehicle also featured three user-programmable computers.
Learned behaviors were a pillar of the program. In addition, the program employed passive optical equipment for long-range scene analysis.
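The long-range perception challenge mentioned above stems from a basic property of stereo vision: depth follows the pinhole-stereo relation Z = f·B/d, so range resolution degrades rapidly as disparity shrinks at distance. The sketch below illustrates this relation; the focal length and baseline values are assumed for illustration, not the LAGR cameras' actual calibration.

```python
# Sketch of the standard pinhole-stereo depth relation underlying
# long-range perception from a stereo camera pair: depth Z = f * B / d,
# where f is the focal length in pixels, B the baseline between the two
# cameras in meters, and d the disparity in pixels. The default
# parameters are illustrative, not actual LAGR calibration values.

def depth_from_disparity(disparity_px, focal_px=500.0, baseline_m=0.12):
    """Return depth in meters for a given stereo disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Because depth grows as disparity shrinks, a one-pixel matching error at long range corresponds to a large depth error, which is why extending perception beyond the reliable stereo range, for example by learning terrain appearance, was a central theme of the program.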