Article

AutoDRIVE: A Comprehensive, Flexible and Integrated Digital Twin Ecosystem for Autonomous Driving Research & Education

Tanmay Samak, Chinmay Samak, Sivanathan Kandhasamy, Venkat Krovi and Ming Xie
1 Autonomous Systems Lab (ASL), Department of Mechatronics Engineering, SRM Institute of Science and Technology (SRMIST), Kattankulathur 603203, Tamil Nadu, India
2 Automation, Robotics and Mechatronics Lab (ARMLab), Department of Automotive Engineering, Clemson University International Center for Automotive Research (CU-ICAR), Greenville, SC 29607, USA
3 School of Mechanical and Aerospace Engineering, Nanyang Technological University (NTU), Singapore 639798, Singapore
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Robotics 2023, 12(3), 77; https://doi.org/10.3390/robotics12030077
Submission received: 23 April 2023 / Revised: 20 May 2023 / Accepted: 23 May 2023 / Published: 26 May 2023
(This article belongs to the Special Issue Mechatronics Systems and Robots)

Abstract

Prototyping and validating hardware–software components, sub-systems and systems within the intelligent transportation system-of-systems framework requires a modular yet flexible and open-access ecosystem. This work presents our attempt to develop such a comprehensive research and education ecosystem, called AutoDRIVE, for synergistically prototyping, simulating and deploying cyber-physical solutions pertaining to autonomous driving as well as smart city management. AutoDRIVE features both software and hardware-in-the-loop testing interfaces with openly accessible scaled vehicle and infrastructure components. The ecosystem is compatible with a variety of development frameworks, and supports both single- and multi-agent paradigms through local as well as distributed computing. Most critically, AutoDRIVE is intended to be modularly expandable to explore emergent technologies, and this work highlights various complementary features and capabilities of the proposed ecosystem by demonstrating four such deployment use-cases: (i) autonomous parking using a probabilistic robotics approach for mapping, localization, path-planning and control; (ii) behavioral cloning using computer vision and deep imitation learning; (iii) intersection traversal using vehicle-to-vehicle communication and deep reinforcement learning; and (iv) smart city management using vehicle-to-infrastructure communication and the internet-of-things.

1. Introduction

Advancing the field of connected autonomous vehicles (CAVs) [1] requires scientific and technological research in conjunction with comprehensive education of methods and tools to overcome existing challenges and prepare the next generation of practitioners.
Since meaningful verification and validation (V&V) efforts demand end-to-end stress testing across scales at the component, sub-system and system levels, there is a strong incentive to create and exploit varying grades of virtual (simulation-based) and physical (hardware-in-the-loop) testing platforms that alleviate the monetary, spatial, temporal and safety constraints associated with rapid-prototyping of CAV solutions. In a research setting, such platforms can accelerate the process of designing experiments, recording datasets, and iteratively prototyping and validating autonomy solutions. In an educational setting, such platforms can aid in designing interactive demonstrations, hands-on assignments, projects and competitions.
However, existing platforms for this purpose are observed to limit the throughput of developing and validating connected autonomy solutions. Firstly, most of these platforms lack the integrity required to promote hardware-software co-development; some only offer software simulation tools (e.g., [2,3,4]), while others only provide scaled physical vehicles (e.g., [5,6,7,8]) to test autonomy algorithms. Such isolated platforms not only decelerate the prototyping phase due to compatibility issues, but also adversely affect the validation phase involving simulation to real-world (sim2real) deployments. Secondly, most of these platforms focus specifically on vehicles rather than a holistic intelligent transportation ecosystem involving infrastructure, traffic elements and peer agents, which limits their applications. Thirdly, some of these platforms (e.g., [9,10]) are domain-specific with limited sensing modalities, stringent design requirements and/or fixed development frameworks; some (e.g., [11]) even lack a high-level computation unit and are merely teleoperated from a remote server to execute the intended mission.
This work does not propose “yet another” research and education platform targeting selective aspects of autonomous driving technology. Rather, AutoDRIVE (refer Figure 1) aims to provide a cyber-physical ecosystem that is:
  • Comprehensive: The ecosystem offers a scaled car-like vehicle with abundant sensors, which supports single- as well as multi-agent algorithms with or without vehicle-to-vehicle (V2V) communication. It also provides a modular infrastructure development kit comprising various environment modules, traffic elements and surveillance elements, which supports internet-of-things (IoT) and vehicle-to-infrastructure (V2I) communication. On the software front, the ecosystem hosts a high-fidelity simulator and supports the development of autonomous driving as well as smart city solutions.
  • Flexible: The ecosystem offers modular hardware components, a convenient high-fidelity simulator, and extensive software development support, which enable end-users to flexibly prototype and validate their autonomy solutions right out of the box. Additionally, the completely open-hardware, open-software architecture of the ecosystem allows users to adapt any of the existing hardware (including the design of the vehicle as well as the infrastructure modules) and/or software (including the codebase of the development framework as well as the simulator) to better fit their use-cases.
  • Integrated: The ecosystem hosts a tightly coupled trio, comprising AutoDRIVE Devkit (to flexibly develop connected autonomy solutions), AutoDRIVE Simulator (to virtually prototype and test them under a variety of conditions and edge-cases), and AutoDRIVE Testbed (to deploy and validate them in controlled real-world settings). The harmony among these three platforms not only enhances the hardware–software co-development of autonomy solutions, but also helps to seamlessly bridge the gap between software simulation and hardware deployment for the verification and validation of these safety-critical systems.
This work also describes sample use-cases of four emergent applications in the field of CAVs, with each exploiting, and thus exhibiting, distinct features and capabilities of the AutoDRIVE Ecosystem. These include state-of-the-art implementations such as autonomous parking, behavioral cloning and intersection traversal, along with a novel implementation of smart city management.

2. State of the Art

The deployment of full-scale CAV solutions generally requires extensive verification and validation, which poses several challenges, especially in university settings. The time, expense, resources and expertise that full-scale testing demands, together with its infrastructural requirements and the safety of the personnel and property involved, often hinder research and educational progress. Consequently, the past decade has witnessed many university-based deployments exploring the development of scaled autonomous vehicles. As described in Table 1, such vehicles include the MIT Racecar [5], AutoRally [6], F1TENTH [7], Multi-agent System for non-Holonomic Racing (MuSHR) [8], Optimal RC Racing (ORCA) Project [11], Delft Scaled Vehicle (DSV) [12], and Berkeley Autonomous Race Car (BARC) [13], to name a few. Some of the other community-driven platforms for autonomous driving include HyphaROS RaceCar [9] and Donkey Car [10], both of which are application-specific: the former targets map-based navigation, and the latter vision-aided imitation learning. Apart from these, commercial products such as QCar [14] by Quanser and DeepRacer [15] by Amazon Web Services (AWS) are now entering the market. However, most of these products are expensive and employ some form of proprietary hardware and/or software, which restricts their openness and flexibility to the community and introduces potential issues such as warranty voids and vendor lock-in. Some of the other scaled platforms for autonomy research and education include Duckietown [16], TurtleBot3 [17] and Pheeno [18]. However, the differentially driven robots proposed by these platforms/ecosystems fail to fully satisfy the community requirements for a kinodynamically constrained car-like vehicle.
In terms of comprehensiveness, some of these platforms lack diverse sensing modalities, some lack adequate computational power, some lack an Ackermann steering mechanism, and most lack active or passive infrastructural elements. Only a few satisfy the prominent community requirements, but they can prove to be prohibitively expensive for university programs.
In terms of flexibility, most of these platforms, if not all, use commercial-off-the-shelf (COTS) radio-controlled (RC) cars as their base chassis, which (a) are quite expensive; (b) may not be available all around the world; and (c) limit research on the “mechatronics engineering” front, which is equally important for cyber-physical systems such as CAVs. Additionally, most of these platforms only support a specific software framework, such as the Robot Operating System (ROS) [19], which inherently creates a skillset dependency for the end-users. Furthermore, providing assets and plugins for pre-packaged simulators such as Gazebo [20] or OpenAI Gym [21] environments offers only limited flexibility to the users in terms of designing and running simulated scenarios.
In terms of integrity, some of these platforms do not support simulation in any form, some ROS-based ones support kinematic/dynamic simulation using RViz [22] and/or Gazebo, while others offer task-specific Gym environments for imitation/reinforcement learning, none of which is ideal.

3. AutoDRIVE Testbed

AutoDRIVE Testbed is a hardware platform featuring a native scaled vehicle along with a modular and reconfigurable infrastructure development kit for deploying and validating autonomy algorithms in controlled real-world settings. It adopts a completely open-hardware and open-software architecture to push the “systems engineering and integration” front with regard to CAVs.

3.1. Vehicle

AutoDRIVE’s native vehicle, named Nigel (refer Figure 2A,B), offers realistic driving and steering actuation, a comprehensive sensor suite, high-performance computational resources, and a standard vehicular lighting system.

3.1.1. Chassis

Nigel is a 1:14 scale model vehicle comprising four modular platforms, each housing distinct components of the vehicle. It adopts a rear-wheel-drive, Ackermann-steered mechanism (refer Figure 2C) and therefore resembles an actual car in terms of kinodynamic constraints.

3.1.2. Power Electronics

Nigel is powered using an 11.1 V 5200 mAh lithium-polymer (LiPo) battery, whose health is monitored by a voltage checker. A 10 A rated buck converter steps down the voltage to 5 V, which is then routed, via a 3 A rated master switch, to all the electrical sub-systems including a 20 A rated motor driver module.

3.1.3. Sensor Suite

Nigel hosts a comprehensive sensor suite comprising throttle and steering sensors (actuator feedback), 1920 CPR incremental encoders (wheel rotation/velocity), a three-axis indoor positioning system (IPS) using fiducial markers (mm/cm-level accurate small-scale positioning, analogous to m-level accurate full-scale GNSS), a 9-axis IMU (raw inertial data and calibrated AHRS data using the Madgwick/Mahony filter), two 62.2° FOV cameras with 3.04 mm focal length (front and/or rear RGB frames) and a 7–10 Hz, 360° FOV LIDAR with 12 m range and 1° angular resolution (2D laser scan).

3.1.4. Computation, Communication and Software

Nigel adopts the NVIDIA Jetson Nano Developer Kit (B01) for most of its high-level computation (autonomy algorithms), communication (V2V and V2I) and software installation (JetPack SDK, ROS Melodic and AutoDRIVE Devkit). It also hosts an Arduino Nano (running the vehicle firmware) for acquiring and filtering raw sensor data and controlling the actuators and lights.

3.1.5. Actuators

Nigel is provided with two 6 V, 160 RPM rated, 120:1 DC geared motors to drive its rear wheels, and a 9.4 kgf·cm servo motor to steer its front wheels; the steering actuator is saturated at ±30° w.r.t. the zero-steer value. All the actuators are operated at 5 V, which translates to a maximum speed of ∼130 RPM for driving (∼0.267 m/s @ vehicle) and ∼0.19 s/60° for steering (∼0.805 rad/s @ vehicle).

3.1.6. Lights and Indicators

Nigel’s lighting system comprises dual-mode headlights, automated taillights, triple-mode turning indicators and automated reverse indicators.

3.2. Infrastructure

AutoDRIVE offers a modular and reconfigurable infrastructure development kit (refer Figure 3) for rapidly designing and prototyping custom driving scenarios. This kit includes a range of environment modules, traffic elements and surveillance elements, along with several preconfigured maps.

3.2.1. Environment Modules

Environment modules include static layouts and objects meant for rapidly designing custom scenarios. Apart from these, experts may also choose to design scaled real-world or imaginary scenarios using third-party tools, and import them into the AutoDRIVE Ecosystem.
  • Terrain Modules: These define off-road segments of the environment. AutoDRIVE currently supports five terrains with tunable physical properties (refer Figure 3A).
  • Road Kits: These enable the reconfigurable construction of drivable segments of the environment. AutoDRIVE currently supports 1, 2, 4 and 6 lane road kits, each having 8 different modules (refer Figure 3B).
  • Obstruction Modules: These 3D objects define static obstacles within the scene. AutoDRIVE currently supports two such modules (refer Figure 3C).

3.2.2. Traffic Elements

Traffic elements (refer Figure 3D) define traffic laws within a particular driving scenario, thereby governing the traffic flow. AutoDRIVE currently supports modular traffic signs and lights. These modules support IoT and V2I communication technologies, and can be therefore integrated with AutoDRIVE Smart City Manager (SCM).

3.2.3. Surveillance Elements

AutoDRIVE features a surveillance element called AutoDRIVE Eye, which views the entire scene from a bird’s-eye perspective. This element is also integrated with AutoDRIVE SCM and, upon calibration of its intrinsic parameters, can estimate the 2D pose of each vehicle within the map by detecting and tracking the AprilTag markers attached to them; this functionality is illustrated in Figure 3E (notice the roof-mounted camera).

3.2.4. Preconfigured Maps

AutoDRIVE currently offers four preconfigured maps (refer Figure 3F). The Parking School is designed specifically for autonomous parking applications, wherein construction boxes define static obstacles and all the available free space is drivable. The Driving School covers driving over straight roads, driving over curved roads and crossing an intersection. The Intersection School is designed specifically for intersection traversal applications, wherein lane bounds play an important role. Finally, the Tiny Town is a comprehensive driving scenario that covers every infrastructure element currently available in AutoDRIVE.

4. AutoDRIVE Simulator

AutoDRIVE Simulator [23,24] acts as the digital twin of the AutoDRIVE Testbed. It is primarily targeted towards the virtual prototyping of autonomy solutions, either for variability testing or as a part of a recursive simulation-deployment workflow, but can also be used for synthetic data generation.

4.1. Vehicle Dynamics Simulation

The vehicle is jointly modelled (refer Figure 4A) as a rigid body and as a collection of sprung masses ${}^iM$, such that the total mass of the rigid body is $M = \sum_i {}^iM$. The two representations are linked through the rigid-body center of mass, $X_{COM} = \frac{\sum_i {}^iM\,{}^iX}{\sum_i {}^iM}$, where ${}^iX$ are the sprung-mass coordinates.
The suspension force acting on each of the sprung masses is ${}^iM\,{}^i\ddot{Z} + {}^iB\,({}^i\dot{Z} - {}^i\dot{z}) + {}^iK\,({}^iZ - {}^iz)$, where ${}^iZ$ and ${}^iz$ are the displacements of the sprung and unsprung masses, and ${}^iB$ and ${}^iK$ are the damping and spring coefficients of the $i$-th suspension, respectively.
The wheels of the vehicle are also modelled as rigid bodies of mass $m$ that are acted upon by gravitational and suspension forces: ${}^im\,{}^i\ddot{z} + {}^iB\,({}^i\dot{z} - {}^i\dot{Z}) + {}^iK\,({}^iz - {}^iZ)$.
The tire forces are computed based on the friction curve of each tire, ${}^iF_{t_x} = F({}^iS_x)$ and ${}^iF_{t_y} = F({}^iS_y)$, where ${}^iS_x$ and ${}^iS_y$ are the longitudinal and lateral slips of the $i$-th tire, respectively. Here, the friction curve is approximated by a two-piece cubic spline, $F(S) = \begin{cases} f_0(S), & S_0 \le S < S_e \\ f_1(S), & S_e \le S < S_a \end{cases}$, where $f_k(S) = a_k\,S^3 + b_k\,S^2 + c_k\,S + d_k$ is a cubic polynomial function. The first segment of the spline starts at zero $(S_0, F_0)$ and reaches the extremum point $(S_e, F_e)$, while the other segment starts at the extremum point $(S_e, F_e)$ and saturates at the asymptote point $(S_a, F_a)$, as shown in the inset of Figure 4A.
The tire slip is itself affected by various factors, including the tire stiffness ${}^iC_\alpha$, steering angle $\delta$, wheel speeds ${}^i\omega$, suspension forces ${}^iF_s$ and rigid-body momentum ${}^iP$, all of which affect the longitudinal and/or lateral components of the vehicle’s linear velocity. The longitudinal slip ${}^iS_x$ of the $i$-th tire is computed by comparing the longitudinal component of the surface velocity of the $i$-th wheel (i.e., the longitudinal linear velocity of the vehicle), $v_x$, with the angular velocity ${}^i\omega$ of the $i$-th wheel: ${}^iS_x = \frac{{}^ir\,{}^i\omega - v_x}{v_x}$. The lateral slip ${}^iS_y$ of the $i$-th tire depends on the angle between the direction in which the tire is pointing and the direction in which it is moving, commonly called the slip angle $\alpha$; it is computed by comparing the longitudinal, $v_x$ (a.k.a. forward velocity), and lateral, $v_y$ (a.k.a. side-slip velocity), components of the vehicle’s linear velocity: ${}^iS_y = \tan(\alpha) = \frac{v_y}{v_x}$.
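For concreteness, the following Python sketch mirrors the slip and friction-curve computation described above. The spline coefficients, wheel radius and slip values are purely illustrative assumptions; they are not the calibrated parameters used by the simulator.

```python
import numpy as np

def cubic(coeffs, s):
    """Evaluate f_k(S) = a*S^3 + b*S^2 + c*S + d."""
    a, b, c, d = coeffs
    return a * s**3 + b * s**2 + c * s + d

def friction_curve(slip, s_extremum, s_asymptote, seg0, seg1):
    """Two-piece cubic spline F(S): seg0 spans [S0, Se), seg1 spans [Se, Sa)."""
    s = min(abs(slip), s_asymptote)
    force = cubic(seg0, s) if s < s_extremum else cubic(seg1, s)
    return np.sign(slip) * force

def longitudinal_slip(wheel_radius, wheel_omega, v_x, eps=1e-3):
    """iS_x = (r * omega - v_x) / v_x, guarded against division by zero."""
    return (wheel_radius * wheel_omega - v_x) / max(abs(v_x), eps)

def lateral_slip(v_x, v_y, eps=1e-3):
    """iS_y = tan(alpha) = v_y / v_x, guarded against division by zero."""
    return v_y / max(abs(v_x), eps)

# Illustrative (uncalibrated) spline segments and a sample slip computation
seg0 = (-25.0, 0.0, 10.0, 0.0)   # rises from (S0, F0) = (0, 0) toward the extremum
seg1 = (0.0, 0.0, -0.5, 1.4)     # decays from the extremum toward the asymptote
Sx = longitudinal_slip(wheel_radius=0.0325, wheel_omega=25.0, v_x=0.25)
Fx = friction_curve(Sx, s_extremum=0.2, s_asymptote=1.0, seg0=seg0, seg1=seg1)
```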

4.2. Sensor Simulation

The simulated vehicle is provided with the same sensing modalities as its real-world counterpart. The throttle ( τ ) and steering ( δ ) sensors are simulated through a simple feedback loop.
The incremental encoders are simulated by measuring the rotation of the rear wheels (i.e., the output shafts of the driving actuators): ${}^iN_{ticks} = {}^iPPR \times {}^iGR \times {}^iN_{rev}$, where ${}^iN_{ticks}$ represents the ticks measured by the $i$-th encoder, ${}^iPPR$ is the base resolution (pulses per revolution) of the $i$-th encoder, ${}^iGR$ is the gear ratio of the $i$-th motor, and ${}^iN_{rev}$ represents the number of revolutions of the output shaft of the $i$-th motor.
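As a quick illustration of this relation, the snippet below computes the tick count for a hypothetical encoder; the 16 PPR base resolution is an assumption chosen so that, combined with the 120:1 gearbox from Section 3.1.5, it reproduces the 1920 CPR figure quoted in Section 3.1.3.

```python
def encoder_ticks(ppr, gear_ratio, output_shaft_revolutions):
    """iN_ticks = iPPR * iGR * iN_rev, truncated to whole ticks."""
    return int(ppr * gear_ratio * output_shaft_revolutions)

# Half a revolution of the output shaft with an assumed 16 PPR base resolution and
# the 120:1 gearbox (i.e., 16 * 120 = 1920 counts per output-shaft revolution)
ticks = encoder_ticks(ppr=16, gear_ratio=120, output_shaft_revolutions=0.5)  # -> 960
```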
The IPS and IMU are simulated based on temporally coherent rigid-body transform updates of the vehicle $\{v\}$ w.r.t. the world $\{w\}$: ${}^wT_v = \begin{bmatrix} R_{3 \times 3} & t_{3 \times 1} \\ 0_{1 \times 3} & 1 \end{bmatrix} \in SE(3)$. The IPS provides the 3-DOF positional coordinates $\{x, y, z\}$ of the vehicle, whereas the IMU provides the linear accelerations $\{a_x, a_y, a_z\}$, angular velocities $\{\omega_x, \omega_y, \omega_z\}$ and 3-DOF orientation of the vehicle, either as Euler angles $\{\phi_x, \theta_y, \psi_z\}$ or as a quaternion $\{q_0, q_1, q_2, q_3\}$.
The LIDAR is simulated using iterative ray-casting, $\text{raycast}\{{}^wT_l,\ \vec{R},\ r_{max}\}\ \forall\ \theta \in [\theta_{min} : \theta_{res} : \theta_{max}]$, at a ∼7 Hz update rate, where ${}^wT_l = {}^wT_v \cdot {}^vT_l \in SE(3)$ is the transform of the LIDAR $\{l\}$ w.r.t. the vehicle $\{v\}$ w.r.t. the world $\{w\}$, $\vec{R} = \left[\,r_{max}\sin(\theta)\ \ r_{min}\cos(\theta)\ \ 0\,\right]^T$ is the direction vector of each ray-cast, $r_{min} = 0.15$ m and $r_{max} = 12$ m are, respectively, the minimum and maximum linear ranges of the LIDAR, $\theta_{min} = 0°$ and $\theta_{max} = 360°$ are, respectively, the minimum and maximum angular ranges of the LIDAR, and $\theta_{res} = 1°$ is its angular resolution. The laser-scan ranges are recorded by checking the ray-cast hits and thresholding against the minimum linear range of the LIDAR, $\texttt{ranges}[i] = \begin{cases} \texttt{hit.dist}, & \texttt{ray}[i].\texttt{hit} \wedge \texttt{hit.dist} \ge r_{min} \\ \infty, & \text{otherwise} \end{cases}$, where $\texttt{ray.hit}$ is a Boolean flag indicating whether a ray-cast hits any collider in the scene, and $\texttt{hit.dist} = \sqrt{(x_{hit}-x_{ray})^2 + (y_{hit}-y_{ray})^2 + (z_{hit}-z_{ray})^2}$ is the Euclidean distance from the source of the ray-cast $\{x_{ray}, y_{ray}, z_{ray}\}$ to the hit point $\{x_{hit}, y_{hit}, z_{hit}\}$.
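The sweep-and-threshold logic above can be sketched as follows; the ray_hit_test callback is a placeholder standing in for the engine's physics ray-cast query, so this illustrates the procedure rather than reproducing the simulator's actual implementation.

```python
import numpy as np

def simulate_lidar(T_wl, ray_hit_test, r_min=0.15, r_max=12.0,
                   theta_min=0.0, theta_max=360.0, theta_res=1.0):
    """Iterative 2D ray-casting LIDAR model.

    T_wl         : 4x4 homogeneous transform of the LIDAR frame w.r.t. the world
    ray_hit_test : callback(origin, direction, r_max) -> hit distance or None,
                   standing in for the engine's physics ray-cast query
    """
    origin = T_wl[:3, 3]
    ranges = []
    for theta in np.arange(theta_min, theta_max, theta_res):
        # Ray direction in the LIDAR frame, rotated into the world frame
        d_local = np.array([np.sin(np.radians(theta)), np.cos(np.radians(theta)), 0.0])
        d_world = T_wl[:3, :3] @ d_local
        dist = ray_hit_test(origin, d_world, r_max)
        if dist is not None and dist >= r_min:
            ranges.append(dist)          # valid return within [r_min, r_max]
        else:
            ranges.append(float("inf"))  # miss, or closer than the minimum range
    return ranges

# Example: a degenerate scene with no obstacles, so every ray misses
scan = simulate_lidar(np.eye(4), ray_hit_test=lambda origin, direction, r_max: None)
```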
The simulated physical cameras are parameterized by their focal length ($f$ = 3.04 mm), sensor size ($\{s_x, s_y\}$ = {3.68, 2.76} mm), target resolution (default 720p) and the distances to the near and far clipping planes ($N$ = 0.01 m and $F$ = 1000 m). The viewport-rendering pipeline for the simulated cameras works in three stages. First, the camera view matrix $V \in SE(3)$ is computed by taking the relative homogeneous transform of the camera $\{c\}$ w.r.t. the world $\{w\}$: $V = \begin{bmatrix} r_{00} & r_{01} & r_{02} & t_0 \\ r_{10} & r_{11} & r_{12} & t_1 \\ r_{20} & r_{21} & r_{22} & t_2 \\ 0 & 0 & 0 & 1 \end{bmatrix}$, where $r_{ij}$ and $t_i$ denote the rotational and translational components, respectively. Next, the camera projection matrix $P \in \mathbb{R}^{4 \times 4}$, which projects world coordinates to image-space coordinates, is computed as $P = \begin{bmatrix} \frac{2N}{R-L} & 0 & \frac{R+L}{R-L} & 0 \\ 0 & \frac{2N}{T-B} & \frac{T+B}{T-B} & 0 \\ 0 & 0 & -\frac{F+N}{F-N} & -\frac{2FN}{F-N} \\ 0 & 0 & -1 & 0 \end{bmatrix}$, where $N$ and $F$ denote the distances to the near and far clipping planes of the camera, respectively, and $L$, $R$, $T$ and $B$ denote the left, right, top and bottom offsets of the sensor, respectively. The camera parameters $\{f, s_x, s_y\}$ are related to the projection-matrix terms through $f = \frac{2N}{R-L}$, $a = \frac{s_y}{s_x}$ and $\frac{f}{a} = \frac{2N}{T-B}$. The perspective projection from the simulated camera’s viewport is then given by $C = P\,V\,W$, where $C = [x_c\ y_c\ z_c\ w_c]^T$ represents the image-space coordinates and $W = [x_w\ y_w\ z_w\ w_w]^T$ represents the world coordinates. Finally, this camera projection is converted into normalized device coordinates (NDC) by performing a perspective divide (i.e., dividing throughout by $w_c$), scaled and shifted into a viewport projection, and rasterized using the platform’s graphics API (e.g., DirectX on Windows, Metal on macOS and Vulkan on Linux). Additionally, a post-processing step simulates the lens and film effects of the camera, such as lens distortion, depth of field, exposure, ambient occlusion, contact shadows, bloom, motion blur, film grain and chromatic aberration.
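A minimal sketch of this three-stage pipeline (view matrix, projection matrix, perspective divide) is given below, assuming the usual OpenGL-style pinhole-to-frustum conventions; the engine's exact matrix layout and clip-space conventions differ across graphics APIs, so the code is illustrative rather than a reproduction of the simulator's internals.

```python
import numpy as np

def view_matrix(T_wc):
    """World-to-camera view matrix, i.e., the inverse of the camera pose T_wc."""
    R, t = T_wc[:3, :3], T_wc[:3, 3]
    V = np.eye(4)
    V[:3, :3] = R.T
    V[:3, 3] = -R.T @ t
    return V

def projection_matrix(f_mm=3.04, sx_mm=3.68, sy_mm=2.76, N=0.01, F=1000.0):
    """Perspective frustum built from focal length and sensor size (mm), near/far planes (m)."""
    R_off = N * (sx_mm / 2.0) / f_mm   # half-width of the near plane (pinhole geometry)
    T_off = N * (sy_mm / 2.0) / f_mm   # half-height of the near plane
    L_off, B_off = -R_off, -T_off
    return np.array([
        [2*N/(R_off-L_off), 0.0,               (R_off+L_off)/(R_off-L_off),  0.0],
        [0.0,               2*N/(T_off-B_off), (T_off+B_off)/(T_off-B_off),  0.0],
        [0.0,               0.0,               -(F+N)/(F-N),                -2*F*N/(F-N)],
        [0.0,               0.0,               -1.0,                         0.0],
    ])

def project_to_ndc(world_point, V, P):
    """C = P V W followed by the perspective divide to normalized device coordinates."""
    W = np.append(world_point, 1.0)
    C = P @ V @ W
    return C[:3] / C[3]

# Example: a point 1 m in front of a camera at the world origin (looking down -Z)
ndc = project_to_ndc(np.array([0.0, 0.0, -1.0]), view_matrix(np.eye(4)), projection_matrix())
```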

4.3. Actuator Simulation

The vehicle is actuated using two driving actuators and a steering actuator, the response delays and saturation limits of which are matched with their real-world counterparts by tuning their torque profiles and actuation limits, respectively.
The driving actuators drive the rear wheels by applying a torque ${}^i\tau_{drive} = {}^iI_w\,{}^i\dot{\omega}_w$, where ${}^iI_w = \frac{1}{2}\,{}^im_w\,{}^ir_w^2$ is the moment of inertia, ${}^i\dot{\omega}_w$ is the angular acceleration, ${}^im_w$ is the mass and ${}^ir_w$ is the radius of the $i$-th wheel. Additionally, the holding torque of the driving actuators is simulated by applying an idle motor torque equivalent to the braking torque: ${}^i\tau_{idle} = {}^i\tau_{brake}$.
The front wheels are steered using a steering actuator, which produces a torque proportional to the required angular acceleration: $\tau_{steer} = I_{steer}\,\dot{\omega}_{steer}$. The individual turning angles, $\delta_l$ and $\delta_r$, for the left and right wheels, respectively, are calculated based on the commanded steering angle $\delta$, using the Ackermann steering geometry defined by the wheelbase $l$ and track width $w$, as follows: $\delta_l = \tan^{-1}\left(\frac{2\,l\,\tan(\delta)}{2\,l + w\,\tan(\delta)}\right)$ and $\delta_r = \tan^{-1}\left(\frac{2\,l\,\tan(\delta)}{2\,l - w\,\tan(\delta)}\right)$.
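The Ackermann relation above translates directly into code; the wheelbase and track-width values in the example are illustrative placeholders, not Nigel's measured geometry.

```python
import math

def ackermann_angles(delta, wheelbase, track_width):
    """Left/right wheel steering angles (rad) for a commanded steering angle delta (rad)."""
    t = math.tan(delta)
    delta_left = math.atan2(2 * wheelbase * t, 2 * wheelbase + track_width * t)
    delta_right = math.atan2(2 * wheelbase * t, 2 * wheelbase - track_width * t)
    return delta_left, delta_right

# Example with illustrative geometry (not Nigel's measured dimensions)
dl, dr = ackermann_angles(math.radians(20), wheelbase=0.14, track_width=0.10)
```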

4.4. Infrastructure Simulation

Simulated environments can be set up in one of the following ways:
  • AutoDRIVE IDK: The modular and reconfigurable infrastructure development kit (IDK) can be used to create custom scenarios and maps by setting up the terrain modules, road networks, obstruction modules and traffic elements. These assets are present within the simulator source files. Particularly, the preconfigured maps depicted in Figure 3F(i,iii,iv) were constructed using the AutoDRIVE IDK.
  • Plug-In Scenarios: AutoDRIVE Simulator supports third-party tools (e.g., RoadRunner [25]) and modular open-source architecture (MOSA) standards (e.g., OpenSCENARIO [26], OpenDRIVE [27], etc.) that enable extensibility. Additionally, users can import a wide array of plugins, packages and assets in a variety of industry-standard formats (FBX, OBJ, SKP, 3DS, USD, etc.) for developing or customizing driving scenarios. Furthermore, the graphics textures designed for AutoDRIVE Testbed can first be imported into the simulator before large-scale printing and real-world setup. The preconfigured map depicted in Figure 3F(ii) was designed using third-party graphics editing software, imported into AutoDRIVE Simulator, and finally printed and set up using AutoDRIVE Testbed.
  • Unity Terrain: Being built atop the Unity game engine, AutoDRIVE Simulator natively supports scenario design and development using Unity Terrain [28]. Users can define the terrain mesh, texture, heightmap, vegetation, skybox, wind, etc., to design on-road/off-road scenarios and perform variability testing.
At every time step, the simulator performs mesh–mesh interference detection and computes the contact forces, frictional forces and momentum transfer, along with the linear and angular drag acting on the rigid-bodies (refer Figure 4B).

4.5. Simulator Features

AutoDRIVE Simulator is developed atop the Unity [29] game engine, which employs PhysX [30] to simulate the multi-threaded framerate-independent kinematics and dynamics of all the physical entities and exploits the High-Definition Render Pipeline (HDRP) [31] along with the Post-Processing Stack [32] to render photorealistic graphics.
The simulator features an interactive graphical user interface (GUI) consisting of the Menu Panel on the left-hand side and Heads-Up Display (HUD) on the right-hand side. Figure 4C depicts the simulator’s GUI with both the panels enabled. The Menu Panel hosts input fields and buttons to configure and control various features of the simulator (refer Figure 4D). This includes controls for the communication bridge, along with a series of buttons for (a) toggling between manual and autonomous driving modes for the ego vehicle; (b) switching between the available scene cameras, with each providing a distinct view; (c) altering the graphics quality to match the quality–performance trade-off; (d) toggling the scene light to simulate day and night driving conditions; (e) resetting the scene to initial conditions; and (f) quitting the simulator application. The HUD Panel, on the other hand, displays prominent simulation parameters along with vehicle status and sensory data in real-time. It also hosts a time-synchronized data-recording functionality, which can be used to export vehicle as well as infrastructure data for a specific run, thereby fostering data-driven approaches aimed at autonomous driving and smart city management.
The simulator natively supports C# scripting, which can be leveraged to customize existing and/or introduce new features, functionalities, modules, behaviors, physics, graphics, communication bridges, and APIs, and even set up co-simulation frameworks with other simulation tools.
Finally, it is worth mentioning that the simulator, in its source form, was integrated with various plugins and packages, such as the Unity ML Agents Toolkit [33], a machine learning framework for developing and deploying deep imitation/reinforcement learning-based applications directly from within the simulator.

5. AutoDRIVE Devkit

AutoDRIVE Devkit is a collection of software packages, application programming interfaces (APIs) and tools, which enables the flexible development of autonomous driving, as well as smart city management algorithms targeting the testbed and/or simulator. It supports both local as well as distributed computing, thereby allowing for the development of both centralized and decentralized autonomy algorithms.

5.1. Autonomous Driving Software Stack

The autonomous driving software stack (ADSS) aids in the development of autonomy algorithms specifically targeting the vehicle. It can be used to develop single as well as multi-agent autonomous driving algorithms.

5.1.1. ROS Package

AutoDRIVE ROS package supports the flexible development of modular autonomy algorithms. It can be installed on an ROS-compatible workstation for interfacing with the simulator (locally/remotely), or directly on Nigel’s on-board computer for hardware deployment.

5.1.2. Scripting APIs

AutoDRIVE Devkit currently offers scripting APIs for Python and C++, which can be exploited to develop high-performance autonomy algorithms without ROS as an intermediary. Such code can be interfaced with the simulator (locally/remotely), or deployed directly on Nigel’s on-board computer for hardware validation.
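To convey how a Python-based autonomy loop against the Devkit might be structured, the following sketch uses a mock vehicle handle; the class and method names (MockVehicle, get_state, set_command) are illustrative placeholders and not the Devkit's actual API identifiers.

```python
import time

class MockVehicle:
    """Stand-in mock of a Devkit vehicle handle (illustrative only, not the real API)."""
    def __init__(self):
        self._x = 0.0            # simulated longitudinal position (m)
        self._throttle = 0.0
    def get_state(self):
        # Advance the mock vehicle at up to ~0.267 m/s (Section 3.1.5) per 50 ms step
        self._x += self._throttle * 0.267 * 0.05
        return {"ips": (self._x, 0.0, 0.0)}
    def set_command(self, throttle, steering):
        self._throttle = throttle

def drive_distance(vehicle, distance, rate_hz=20):
    """Toy behavior: drive forward until the IPS reports the target distance."""
    x0 = vehicle.get_state()["ips"][0]
    while vehicle.get_state()["ips"][0] - x0 < distance:
        vehicle.set_command(throttle=0.5, steering=0.0)
        time.sleep(1.0 / rate_hz)
    vehicle.set_command(throttle=0.0, steering=0.0)

drive_distance(MockVehicle(), distance=0.5)
```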

5.2. Smart City Software Stack

The smart city software stack (SCSS) aids in the development of autonomy algorithms specifically targeting the infrastructure. It can work in tandem with ADSS to develop smart city applications pertaining to traffic management.

5.2.1. SCM Server

AutoDRIVE Devkit offers a centralized Smart City Manager (SCM) server to monitor and control various “smart” elements. The server hosts a database to keep track of all the vehicles along with the active and passive traffic elements within a particular scene.

5.2.2. SCM Webapp

AutoDRIVE SCM hosts an interactive webapp, which allows users to connect with the database to monitor and control the traffic flow in real-time.

6. Demonstration Case-Studies

This work showcases key features and capabilities of the AutoDRIVE Ecosystem through four carefully shortlisted case-studies (refer Table 2). Although this paper cannot provide exhaustive details of any particular demonstration, interested readers are referred to the accompanying technical report [34].
It is to be noted that the presented demonstrations are by no means exhaustive; the AutoDRIVE Ecosystem can be employed to develop, simulate and deploy a much wider array of applications, including (but not limited to) synthetic/real/hybrid data collection and labeling; traditional (deterministic/probabilistic, classical/optimal, etc.) as well as modern (deep imitation/reinforcement/hybrid learning, etc.) algorithms for perception, state estimation, path/motion planning and motion control; modular as well as end-to-end autonomy stacks; and benchmarking existing solutions (i.e., education) or innovating novel scientific and technological approaches for autonomy (i.e., research).

6.1. Autonomous Parking

This demonstration leveraged AutoDRIVE’s ROS-enabled capabilities to demonstrate autonomous parking (refer Figure 5A). First, the vehicle mapped its surroundings using the Hector SLAM algorithm [35] (refer Figure 5B). It could then localize itself against this known static map using range-flow-based odometry [36] (refer Figure 5C) and an adaptive particle filter algorithm [37] (refer Figure 5D). For autonomous navigation, the vehicle planned a feasible global path from its current pose to parking pose using the A* algorithm [38], while also re-planning its local trajectory for dynamic collision avoidance using the timed-elastic-band approach [39]. A proportional controller generated driving (throttle/brake) and steering commands for the vehicle to track the local trajectory (refer Figure 5E).
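The final tracking stage can be pictured with a minimal proportional-control sketch such as the one below; the gains, error definitions and saturation limits are illustrative assumptions rather than the tuned controller used in this demonstration.

```python
import math

def proportional_tracking_command(pose, waypoint, k_v=0.5, k_s=2.0,
                                  max_throttle=1.0, max_steer=math.radians(30)):
    """P-controller mapping pose error w.r.t. a local-trajectory waypoint to commands.

    pose     : (x, y, yaw) of the vehicle in the map frame
    waypoint : (x, y) of the current point of the local trajectory
    """
    x, y, yaw = pose
    dx, dy = waypoint[0] - x, waypoint[1] - y
    distance_error = math.hypot(dx, dy)
    heading_error = math.atan2(dy, dx) - yaw
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))  # wrap to [-pi, pi]
    throttle = max(-max_throttle, min(max_throttle, k_v * distance_error))
    steering = max(-max_steer, min(max_steer, k_s * heading_error))
    return throttle, steering

# Example: vehicle at the origin facing +x, waypoint slightly ahead and to the left
cmd = proportional_tracking_command((0.0, 0.0, 0.0), (0.5, 0.1))
```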
Future work in this direction can include benchmarking various state-of-the-art and/or novel algorithms for mapping, localization, path-planning and motion control.

6.2. Behavioral Cloning

This demonstration was based on [40], wherein the objective was to employ a convolutional neural network (CNN) [41] to clone the end-to-end driving behavior of a human. As such, AutoDRIVE Simulator was exploited to record five laps’ worth of temporally coherent, labeled manual driving data, which were balanced, augmented and pre-processed using standard computer vision techniques to train a six-layer-deep CNN (refer Figure 6A). After training for four epochs with a learning rate of 1e-3 using the Adam optimizer [42], the training and validation losses converged stably without over/under-fitting. The trained model was then deployed back into the virtual world to validate its performance through activation and prediction analyses (refer Figure 6B).
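For reference, a minimal sketch of such a training setup is shown below using TensorFlow/Keras; the layer widths, kernel sizes and input resolution are illustrative assumptions, since the text only specifies a six-layer-deep CNN trained for four epochs with the Adam optimizer at a learning rate of 1e-3.

```python
import tensorflow as tf

def build_driving_model(input_shape=(66, 200, 3)):
    """Six weighted layers (3 conv + 3 dense) regressing a steering command from an image."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(24, 5, strides=2, activation="elu"),
        tf.keras.layers.Conv2D(36, 5, strides=2, activation="elu"),
        tf.keras.layers.Conv2D(48, 5, strides=2, activation="elu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="elu"),
        tf.keras.layers.Dense(16, activation="elu"),
        tf.keras.layers.Dense(1),  # predicted steering command
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="mse")
    return model

# model = build_driving_model()
# model.fit(train_images, train_steering, validation_data=(val_images, val_steering),
#           epochs=4, batch_size=32)
```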
Further, the same model was transferred to AutoDRIVE Testbed to validate the sim2real capability of the ecosystem through activation and prediction analyses (refer Figure 6C). To obtain a zero-shot sim2real transition: (i) the physical and visual aspects of virtual world were set up to be as close to the real world as possible, and (ii) the exhaustive data augmentation pipeline implicitly performed domain randomization. However, further investigation is required to comment on and improve the robustness of sim2real transfer considering sensor simulation, vehicle modeling and scenario representation. Finally, although this work adopted a coupled-control law for vehicle motion smoothing, a potential improvement could be to investigate independent actuation smoothing techniques such as low-pass filters or proximal bounds.

6.3. Intersection Traversal

Inspired by [43], this work demonstrates single- and multi-agent (refer Figure 7A) intersection traversal using deep reinforcement learning (refer Figure 7B). Each agent collected a vectorized observation $o_t^i = \{g^i, \tilde{p}^i, \tilde{\psi}^i, \tilde{v}^i\}_t$, comprising its own relative goal location $g_t^i$, along with the relative location $\tilde{p}_t^i$, relative yaw $\tilde{\psi}_t^i$ and velocity $\tilde{v}_t^i$ of its peers obtained through V2V communication. The action space of each agent, $a_t^i$, comprised a discretized steering command $\delta_t^i \in \{-1, 0, 1\}$ and a constant throttle ($\tau_t^i = 80\%$). An extrinsic reward function, $r_t^i = \begin{cases} r_{goal} = +1, & \text{upon successfully traversing the intersection} \\ r_{collision} = -0.425\,\|g_t^i\|_2, & \text{upon collision} \end{cases}$, kept the agents in check while training a three-layer, fully connected neural-network-based policy $\pi_\theta(a_t \mid o_t)$ using the PPO algorithm [44] (refer Figure 7C). This ultimately resulted in all the agents being able to safely traverse the intersection (refer Figure 7D). Although we could not implement this application in the real world owing to monetary constraints, investigating the sim2real transition of this application would be a natural progression of this work. Additionally, the application of actuation smoothing techniques could be a potential improvement.
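The extrinsic reward described above can be sketched as follows; the goal/collision predicates are supplied by the environment, and the zero reward on all other time steps is an assumption of this sketch.

```python
import math

def extrinsic_reward(goal_reached, collided, relative_goal_xy):
    """+1 upon reaching the goal; a penalty scaled by the remaining distance to the
    goal (coefficient 0.425) upon collision; zero otherwise (sketch assumption)."""
    if goal_reached:
        return 1.0
    if collided:
        return -0.425 * math.hypot(*relative_goal_xy)
    return 0.0

# Example: a collision 0.67 m away from the goal incurs a penalty of about -0.285
r = extrinsic_reward(goal_reached=False, collided=True, relative_goal_xy=(0.6, 0.3))
```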

6.4. Smart City Management

This novel use-case of smart city traffic management was made possible by AutoDRIVE’s V2I and IoT capabilities. As depicted in Figure 8A, the SCM server hosted a database to keep track of all the traffic elements and acted as a high-level behavior planner for the ego vehicle. Upon detecting the respective traffic signs and lights, it switched the vehicle behavior by setting the appropriate throttle and steering trims, which were then passed on to a proximally optimal predictive (POP) controller coupled with an adaptive longitudinal controller (ALC) [45]. Finally, the SCM server teleoperated the vehicle to achieve the mission objective (refer Figure 8B).
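The behavior-switching logic can be pictured with the following sketch; the traffic-element names and trim values are hypothetical placeholders, and the actual SCM server additionally maintains the database and V2I links described above.

```python
# Hypothetical mapping from detected traffic elements to throttle/steering trims;
# element names and trim values are illustrative placeholders, not SCM's actual schema.
BEHAVIOR_TRIMS = {
    "red_light":        {"throttle_trim": 0.0, "steering_trim": 0.0},   # stop
    "yellow_light":     {"throttle_trim": 0.0, "steering_trim": 0.0},   # stop
    "green_light":      {"throttle_trim": 1.0, "steering_trim": 0.0},   # resume
    "left_curve_sign":  {"throttle_trim": 0.8, "steering_trim": -0.5},  # slow down, bias left
    "right_curve_sign": {"throttle_trim": 0.8, "steering_trim": 0.5},   # slow down, bias right
}

def plan_behavior(detected_elements):
    """Return trims for the highest-priority detected element (lights before signs)."""
    for element in ("red_light", "yellow_light", "green_light",
                    "left_curve_sign", "right_curve_sign"):
        if element in detected_elements:
            return BEHAVIOR_TRIMS[element]
    return {"throttle_trim": 1.0, "steering_trim": 0.0}  # default: drive nominally

trims = plan_behavior({"red_light", "left_curve_sign"})
```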
A natural progression of this work would be to implement a multi-agent scenario, preferably within a mixed-reality digital-twin setting, to investigate different strategies for efficient smart city traffic management.

7. Summary

AutoDRIVE was developed with the aim of tightly integrating real and virtual worlds into a common toolchain, without compromising the comprehensiveness, flexibility and accessibility required for prototyping and validating autonomy solutions. It has numerous applications, which are bound to increase as the ecosystem is upgraded. Potential improvements include support for heterogeneous vehicles and robotic pedestrians, full-scale vehicles and environments, expanded API support, and extended reality capabilities, to name a few. We hope that the community benefits from adopting this ecosystem for education, research or anything in between.

Author Contributions

Conceptualization, T.S. and C.S.; methodology, T.S. and C.S.; software, T.S. and C.S.; validation, C.S., T.S. and S.K.; investigation, C.S. and T.S.; resources, T.S., C.S., S.K. and V.K.; writing—original draft preparation, C.S. and T.S.; writing—review and editing, T.S., C.S., S.K., V.K. and M.X.; visualization, T.S. and C.S.; supervision, S.K., V.K. and M.X.; project administration, S.K., V.K. and M.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

AutoDRIVE Ecosystem is openly accessible. Website: https://AutoDRIVE-Ecosystem.github.io (accessed on 26 November 2022); GitHub: https://github.com/AutoDRIVE-Ecosystem (accessed on 26 November 2022); YouTube: https://www.youtube.com/@AutoDRIVE-Ecosystem (accessed on 25 February 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yurtsever, E.; Lambert, J.; Carballo, A.; Takeda, K. A Survey of Autonomous Driving: Common Practices and Emerging Technologies. IEEE Access 2020, 8, 58443–58469. [Google Scholar] [CrossRef]
  2. Dosovitskiy, A.; Ros, G.; Codevilla, F.; Lopez, A.; Koltun, V. CARLA: An Open Urban Driving Simulator. Proc. Mach. Learn. Res. 2017, 78, 1–16. [Google Scholar]
  3. Rong, G.; Shin, B.H.; Tabatabaee, H.; Lu, Q.; Lemke, S.; Možeiko, M.; Boise, E.; Uhm, G.; Gerow, M.; Mehta, S.; et al. LGSVL Simulator: A High Fidelity Simulator for Autonomous Driving. In Proceedings of the 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece, 20–23 September 2020; pp. 1–6. [Google Scholar] [CrossRef]
  4. Shah, S.; Dey, D.; Lovett, C.; Kapoor, A. AirSim: High-Fidelity Visual and Physical Simulation for Autonomous Vehicles. In Field and Service Robotics; Hutter, M., Siegwart, R., Eds.; Springer: Cham, Switzerland, 2018; pp. 621–635. [Google Scholar]
  5. Karaman, S.; Anders, A.; Boulet, M.; Connor, J.; Gregson, K.; Guerra, W.; Guldner, O.; Mohamoud, M.; Plancher, B.; Shin, R.; et al. Project-based, collaborative, algorithmic robotics for high school students: Programming self-driving race cars at MIT. In Proceedings of the 2017 IEEE Integrated STEM Education Conference (ISEC), Princeton, NJ, USA, 11 March 2017; pp. 195–203. [Google Scholar] [CrossRef]
  6. Goldfain, B.; Drews, P.; You, C.; Barulic, M.; Velev, O.; Tsiotras, P.; Rehg, J.M. AutoRally: An Open Platform for Aggressive Autonomous Driving. IEEE Control. Syst. Mag. 2019, 39, 26–55. [Google Scholar] [CrossRef]
  7. O’Kelly, M.; Sukhil, V.; Abbas, H.; Harkins, J.; Kao, C.; Pant, Y.V.; Mangharam, R.; Agarwal, D.; Behl, M.; Burgio, P.; et al. F1/10: An Open-Source Autonomous Cyber-Physical Platform. arXiv 2019, arXiv:1901.08567. [Google Scholar]
  8. Srinivasa, S.S.; Lancaster, P.; Michalove, J.; Schmittle, M.; Summers, C.; Rockett, M.; Smith, J.R.; Choudhury, S.; Mavrogiannis, C.; Sadeghi, F. MuSHR: A Low-Cost, Open-Source Robotic Racecar for Education and Research. arXiv 2019, arXiv:1908.08031. [Google Scholar]
  9. HyphaROS Workshop. HyphaROS Racecar. Available online: https://github.com/Hypha-ROS/hypharos_racecar (accessed on 13 February 2021).
  10. Donkey Community. An Open-Source DIY Self-Driving Platform for Small-Scale Cars. Available online: https://www.donkeycar.com (accessed on 21 February 2021).
  11. Automatic Control Laboratory, ETH Zürich. ORCA (Optimal RC Racing) Project; ETH Zürich: Zürich, Switzerland; Available online: https://control.ee.ethz.ch/research/team-projects/autonomous-rc-car-racing.html (accessed on 12 March 2021).
  12. Kalidien, T.; van der Burg, P.; Mulder, A.; Rietveld, E.; Vonk, M.; Hellendoorn, H.; Alirezaei, M. Design and Development of the Delft Scaled Vehicle: A Platform for Autonomous Driving Tests. Bachelor’s Thesis, Delft Center for Systems & Control, Delft University of Technology, Delft, The Netherlands, 2017. [Google Scholar]
  13. Pappas, J.; Yuan, C.H.; Lu, C.S.; Nassar, N.; Miller, A.; van Leeuwen, S.; Borrelli, F. Berkeley Autonomous Race Car (BARC). Available online: https://sites.google.com/site/berkeleybarcproject (accessed on 1 March 2021).
  14. Quanser Consulting Inc. QCar–A Sensor-Rich Autonomous Vehicle; Quanser Consulting Inc.: Markham, ON, Canada; Available online: https://www.quanser.com/products/qcar (accessed on 15 March 2021).
  15. Amazon Web Services. AWS DeepRacer. Available online: https://aws.amazon.com/deepracer (accessed on 15 March 2021).
  16. Paull, L.; Tani, J.; Ahn, H.; Alonso-Mora, J.; Carlone, L.; Cap, M.; Chen, Y.F.; Choi, C.; Dusek, J.; Fang, Y.; et al. Duckietown: An Open, Inexpensive and Flexible Platform for Autonomy Education and Research. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 1497–1504. [Google Scholar] [CrossRef]
  17. Robotis Inc. TurtleBot3; Robotis Inc.: Beijing, China; Available online: https://emanual.robotis.com/docs/en/platform/turtlebot3/overview (accessed on 17 March 2021).
  18. Wilson, S.; Gameros, R.; Sheely, M.; Lin, M.; Dover, K.; Gevorkyan, R.; Haberland, M.; Bertozzi, A.; Berman, S. Pheeno, A Versatile Swarm Robotic Research and Education Platform. IEEE Robot. Autom. Lett. 2016, 1, 884–891. [Google Scholar] [CrossRef]
  19. Quigley, M.; Conley, K.; Gerkey, B.; Faust, J.; Foote, T.; Leibs, J.; Wheeler, R.; Ng, A. ROS: An open-source Robot Operating System. In Proceedings of the ICRA 2009 Workshop on Open Source Software, Kobe, Japan, 12–17 May 2009; Volume 3. [Google Scholar]
  20. Koenig, N.P.; Howard, A. Design and use paradigms for Gazebo, an open-source multi-robot simulator. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No.04CH37566), Sendai, Japan, 28 September–2 October 2004; Volume 3, pp. 2149–2154. [Google Scholar] [CrossRef]
  21. Brockman, G.; Cheung, V.; Pettersson, L.; Schneider, J.; Schulman, J.; Tang, J.; Zaremba, W. OpenAI Gym. arXiv 2016, arXiv:1606.01540. [Google Scholar]
  22. Hershberger, D.; Gossow, D.; Faust, J. RViz: 3D Visualization Tool for ROS. Available online: http://wiki.ros.org/rviz (accessed on 23 March 2021).
  23. Samak, T.V.; Samak, C.V.; Xie, M. AutoDRIVE Simulator: A Simulator for Scaled Autonomous Vehicle Research and Education. In Proceedings of the 2021 2nd International Conference on Control, Robotics and Intelligent System, CCRIS’21, Qingdao, China, 20–22 August 2021; pp. 1–5. [Google Scholar] [CrossRef]
  24. Samak, T.V.; Samak, C.V. AutoDRIVE Simulator–Technical Report. arXiv 2022, arXiv:2211.07022. [Google Scholar] [CrossRef]
  25. Mathworks Inc. RoadRunner; Mathworks Inc.: Natick, MA, USA; Available online: https://www.mathworks.com/products/roadrunner.html (accessed on 27 March 2021).
  26. Association for Standardization of Automation and Measuring Systems (ASAM). OpenSCENARIO; ASAM: Dresden, Germany; Available online: https://www.asam.net/standards/detail/openscenario (accessed on 30 March 2021).
  27. Association for Standardization of Automation and Measuring Systems (ASAM). OpenDRIVE; ASAM: Dresden, Germany; Available online: https://www.asam.net/standards/detail/opendrive (accessed on 30 March 2021).
  28. Unity Technologies. Unity Terrain; Unity Technologies: San Francisco, CA, USA; Available online: https://docs.unity3d.com/Manual/script-Terrain.html (accessed on 22 February 2021).
  29. Unity Technologies. Unity; Unity Technologies: San Francisco, CA, USA; Available online: https://unity.com (accessed on 26 January 2021).
  30. NVIDIA GameWorks. NVIDIA PhysX SDK 4.1; NVIDIA GameWorks: Seattle, WA, USA; Available online: https://github.com/NVIDIAGameWorks/PhysX-3.4 (accessed on 28 January 2021).
  31. Unity Technologies Technical Marketing. Unity Scriptable Render Pipeline; Unity Technologies: San Francisco, CA, USA; Available online: https://github.com/UnityTechnologies/ScriptableRenderPipeline (accessed on 31 January 2021).
  32. Unity Technologies. Post-Processing Stack v2; Unity Technologies: San Francisco, CA, USA; Available online: https://github.com/Unity-Technologies/PostProcessing (accessed on 31 January 2021).
  33. Juliani, A.; Berges, V.P.; Teng, E.; Cohen, A.; Harper, J.; Elion, C.; Goy, C.; Gao, Y.; Henry, H.; Mattar, M.; et al. Unity: A General Platform for Intelligent Agents. arXiv 2018, arXiv:1809.02627. [Google Scholar] [CrossRef]
  34. Samak, T.V.; Samak, C.V. AutoDRIVE–Technical Report. arXiv 2022, arXiv:2211.08475. [Google Scholar] [CrossRef]
  35. Kohlbrecher, S.; von Stryk, O.; Meyer, J.; Klingauf, U. A Flexible and Scalable SLAM System with Full 3D Motion Estimation. In Proceedings of the 2011 IEEE International Symposium on Safety, Security, and Rescue Robotics, Kyoto, Japan, 1–5 November 2011; pp. 155–160. [Google Scholar] [CrossRef]
  36. Jaimez, M.; Monroy, J.G.; Gonzalez-Jimenez, J. Planar Odometry from a Radial Laser Scanner. A Range Flow-based Approach. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 4479–4485. [Google Scholar] [CrossRef]
  37. Fox, D. KLD-Sampling: Adaptive Particle Filters. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 3–8 December 2001; Dietterich, T., Becker, S., Ghahramani, Z., Eds.; MIT Press: Cambridge, MA, USA, 2001; Volume 14. [Google Scholar]
  38. Hart, P.E.; Nilsson, N.J.; Raphael, B. A Formal Basis for the Heuristic Determination of Minimum Cost Paths. IEEE Trans. Syst. Sci. Cybern. 1968, 4, 100–107. [Google Scholar] [CrossRef]
  39. Rösmann, C.; Hoffmann, F.; Bertram, T. Kinodynamic Trajectory Optimization and Control for Car-Like Robots. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 5681–5686. [Google Scholar] [CrossRef]
  40. Samak, T.V.; Samak, C.V.; Kandhasamy, S. Robust Behavioral Cloning for Autonomous Vehicles Using End-to-End Imitation Learning. SAE Int. J. Connect. Autom. Veh. 2021, 4, 279–295. [Google Scholar] [CrossRef]
  41. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; Pereira, F., Burges, C., Bottou, L., Weinberger, K., Eds.; Curran Associates, Inc.: New York, NY, USA, 2012; Volume 25. [Google Scholar]
  42. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar] [CrossRef]
  43. Sivanathan, K.; Vinayagam, B.K.; Samak, T.; Samak, C. Decentralized Motion Planning for Multi-Robot Navigation using Deep Reinforcement Learning. In Proceedings of the 2020 3rd International Conference on Intelligent Sustainable Systems (ICISS), Thoothukudi, India, 3–5 December 2020; pp. 709–716. [Google Scholar] [CrossRef]
  44. Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; Klimov, O. Proximal Policy Optimization Algorithms. arXiv 2017, arXiv:1707.06347. [Google Scholar] [CrossRef]
  45. Samak, C.V.; Samak, T.V.; Kandhasamy, S. Proximally Optimal Predictive Control Algorithm for Path Tracking of Self-Driving Cars. In Proceedings of the Advances in Robotics—5th International Conference of The Robotics Society, AIR2021, Kanpur, Uttar Pradesh, India, 30 June–4 July 2021; pp. 1–5. [Google Scholar] [CrossRef]
Figure 1. High-level overview of the AutoDRIVE Ecosystem, depicting the key modules and their interactions within the three platforms of the ecosystem, viz. AutoDRIVE Testbed, AutoDRIVE Simulator and AutoDRIVE Devkit.
Figure 2. Native vehicle (Nigel) of the AutoDRIVE Ecosystem: (A) high-level vehicle architecture; (B) various components and sub-systems of the vehicle; (C) open-source chassis of the vehicle adopting rear-wheel drive, Ackermann steered actuation mechanism.
Figure 3. Modular and reconfigurable infrastructure development kit of the AutoDRIVE Ecosystem: (A) terrain modules—(i) asphalt, (ii) dirt, (iii) lawn, (iv) snow, (v) water; (B) road kit—(i) road patch, (ii) straight road, (iii) dead-end, (iv) curved road, (v) 3-way intersection, (vi) 4-way intersection, (vii) parking lot, (viii) roadside parking; (C) obstruction modules—(i) construction box, (ii) traffic cone; (D) traffic elements—(i) traffic light, (ii) stop sign, (iii) give way sign, (iv) regulatory sign, (v) cautionary sign, (vi) informatory sign; (E) surveillance elements—vehicle localization using the AutoDRIVE Eye; (F) preconfigured maps—(i) Parking School, (ii) Driving School, (iii) Intersection School, (iv) Tiny Town.
Figure 4. Native simulation platform of the AutoDRIVE Ecosystem: (A) simulation of vehicle dynamics, sensors and actuators; (B) simulation of infrastructure dynamics and interaction physics; (C) graphical user interface of the simulator; (D) simulator features—(i) Driver’s Eye camera, (ii) Bird’s Eye camera, (iii) God’s Eye camera, (iv) scene light enabled, (v) scene light disabled, (vi) low-quality graphics, (vii) high-quality graphics, (viii) ultra-quality graphics.
Figure 5. Autonomous parking: (A) high-level architecture of the autonomy algorithm; (B–E) respectively depict temporal analysis of simultaneous localization and mapping, odometry, localization, and navigation modules—(i) physical vehicle driving in real-world settings, (ii) visualization of software algorithm; note the additional boxes acting as unmapped obstacles in (E). Video: https://youtu.be/oBqIZZA0wkc (accessed on 9 May 2021).
Figure 6. Behavioral cloning: (A) high-level architecture of the training and deployment pipelines; (B,C) depict behavioral analysis of the autonomous vehicle in virtual and real-world settings, respectively—(i) trajectory tracked by the vehicle, (ii) a sample pre-processed camera frame fed as input to the neural network, (iii), (iv) and (v) depict activation maps of the first, second and third convolutional layers of the neural network, respectively, (vi) salient activations from all the activation maps, (vii) neural network prediction analysis for one complete lap. Video: https://youtu.be/rejpoogaXOE (accessed on 11 May 2021).
Figure 7. Intersection traversal: (A) learning scenario descriptions; (B) deep reinforcement learning architecture; (C,D), respectively, depict training and deployment results—(i) single-agent learning scenario, (ii) multi-agent learning scenario. Video: https://youtu.be/AEFJbDzOpcM (accessed on 8 April 2021).
Figure 8. Smart city management: (A) high-level architecture of the autonomy algorithm; (B) snapshot instances from simulation—(i) vehicle observing left-curve sign, (ii) vehicle observing right-curve sign, (iii) vehicle crossing the intersection, (iv) vehicle stopping at red light, (v) vehicle stopping at yellow light, (vi) vehicle resuming on green light; the traffic lights are toggled manually. Video: https://youtu.be/fnxOpV1gFXo (accessed on 5 May 2021).
Table 1. Comparative analysis of scaled platforms/ecosystems for autonomy research and education.

| Platform/Ecosystem | Scale | Cost * | High-Level Compute | Low-Level Compute | Dedicated Simulator |
|---|---|---|---|---|---|
| AutoDRIVE | 1:14 | $450 | Jetson Nano | Arduino Nano | AutoDRIVE Simulator |
| MIT Racecar | 1:10 | $2600 | Jetson TX2 | VESC | Gazebo |
| AutoRally | 1:5 | $23,300 | Custom | Teensy LC/Arduino Micro | Gazebo |
| F1TENTH | 1:10 | $3260 | Jetson TX2 | VESC 6MkV | RViz/Gazebo |
| DSV | 1:10 | $1000 | ODROID-XU4 | Arduino (Mega + Uno) | — |
| MuSHR | 1:10 | $930 | Jetson Nano | Turnigy SK8-ESC | RViz |
| HyphaROS RaceCar | 1:10 | $600 | ODROID-XU4 | RC ESC TBLE-02S | — |
| Donkey Car | 1:16 | $370 | Raspberry Pi | ESC | Gym |
| BARC | 1:10 | $1030 | ODROID-XU4 | Arduino Nano | — |
| ORCA | 1:43 | $960 | None | ARM Cortex M4 μC | — |
| QCar | 1:10 | $20,000 | Jetson TX2 | Proprietary | Simulink |
| AWS DeepRacer | 1:18 | $400 | Proprietary | Proprietary | Gym |
| Duckietown | N/A | $450 | Raspberry Pi/Jetson Nano | None | Gym |
| TurtleBot3 | N/A | $590 | Raspberry Pi | OpenCR | Gazebo |
| Pheeno | N/A | $350 | Raspberry Pi | Arduino Pro Mini | — |

* All cost values are ceiled to the nearest $10. The full comparison additionally covers open-hardware/open-software status, sensing modalities (throttle, steering, wheel encoders, GPS/IPS, IMU, LIDAR, camera), actuation mechanism (Ackermann-steered vs. differential-drive/skid-steered), multi-agent support, V2X support (V2V, V2I) and API support (C++, Python, ROS, MATLAB/Simulink, webapp) for each platform.
Table 2. Decision matrix for choosing the demonstration case-studies.

| Autonomy Algorithm | Platform Exploited | Development Framework | Autonomy Stack | Science and Technology Demonstrated | Agents Involved | Sensors Employed | Actuators Controlled |
|---|---|---|---|---|---|---|---|
| Autonomous Parking | AutoDRIVE Testbed | AutoDRIVE ROS Package (Python, C++) | Modular (Perception, Planning and Control) | Teleoperation, SLAM, Probabilistic Map-Based Localization, Global Planning, Local Planning, Motion Control, Static/Dynamic Collision Avoidance | Single-Agent System | LIDAR | Driving Actuators, Steering Actuator |
| Behavioral Cloning | AutoDRIVE Simulator, AutoDRIVE Testbed | AutoDRIVE Python API (Python) | End-to-End (Sensorimotor Policy) | Computer Vision, Deep Imitation Learning, Lane Keeping, Sim2Real Transition | Single-Agent System | Front Camera | Driving Actuators, Steering Actuator |
| Intersection Traversal | AutoDRIVE Simulator | Unity ML-Agents (C#) | End-to-End (Sensorimotor Policy) | V2V Communication, Deep Reinforcement Learning, Dynamic Collision Avoidance, Multi-Agent Cooperation and Coordination | Multi-Agent System | Incremental Encoders, IPS, IMU | Steering Actuator (Constant Throttle) |
| Smart City Management | AutoDRIVE Simulator | AutoDRIVE Webapp API (Python) | Modular (Surveillance, Planning and Control) | V2I Communication, IoT, Centralized Control and Coordination | Single-Agent System | None | Driving Actuators, Steering Actuator |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

