Sensors are used for estimating the state of the autonomous system as well as its operating environment. As shown in the chapter on intelligent control, sensor data processing might vary from architecture to architecture.
In general, sensors provide information by measuring the same phenomena as the sensors of biological systems – light, sound, physical orientation, etc.
==== Ultrasonic sensors ====
Sonar sensors differ mainly by the wavelength of the impulse. Depending on the particular configuration, the measurable distance and the propagation speed of the impulse change. It must be emphasized that the speed of sound differs across environments (in terms of density), at different altitudes, and at different temperatures. Usually, the time difference is measured by the on-board controller.
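As a rough sketch of this calculation (the function names below are illustrative, not from any particular sensor library), the distance follows from the measured round-trip time and a temperature-corrected speed of sound:

<code python>
def speed_of_sound(temperature_c):
    # Approximate speed of sound in air (m/s) for a given temperature (deg C)
    return 331.3 + 0.606 * temperature_c

def sonar_distance(echo_time_s, temperature_c=20.0):
    # The pulse travels to the obstacle and back, so the one-way
    # distance is half of the total path
    return speed_of_sound(temperature_c) * echo_time_s / 2.0

# Example: a 5.8 ms round trip at 20 deg C corresponds to roughly 1 m
print(sonar_distance(0.0058))  # ~0.996 m
</code>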
These sensors are used as simple contactless bump sensors or, in more complex scenarios, as “sound radars” that reveal the robot's environment in detail, especially in high-density environments like water in underwater applications.
==== Lidars ====
Lidar (light detection and ranging) sensors are very widely used in autonomous systems. In the same way as sonars, Lidars exploit time differences. However, they might use other measuring techniques as well. Therefore, several types of Lidar sensors might be used in autonomous systems:
**Pulse Lidars** use the time-of-flight principle in the same way as sonars do. Knowing the speed of light gives enough information to calculate the distance to the object hit by the laser ray. Another mechanism used in scanning lasers is a rotating prism, which enables control of the angle of the emitted laser pulse. Thereby both angle and distance might be estimated, which provides the data to calculate the relative position of the object hit by the laser ray.
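To illustrate (a minimal sketch; the beam angle would come from the prism position, and the function name is hypothetical), the angle and the time of flight convert into a relative 2D position as follows:

<code python>
import math

C = 299_792_458.0  # speed of light, m/s

def pulse_lidar_point(round_trip_s, beam_angle_rad):
    # Range from the pulse round-trip time (half the total light path),
    # then a polar-to-Cartesian conversion using the beam angle
    distance = C * round_trip_s / 2.0
    return distance * math.cos(beam_angle_rad), distance * math.sin(beam_angle_rad)

# Example: a 66.7 ns round trip at a 30 degree beam angle -> object ~10 m away
print(pulse_lidar_point(66.7e-9, math.radians(30)))  # (~8.66, ~5.00)
</code>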
**Continuous-wave** amplitude modulated (CWAM) Lidars emit a continuous signal with modulated amplitude and estimate the distance from the phase shift between the emitted and the reflected modulation.
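A minimal sketch of this phase-shift ranging, assuming a single modulation frequency (the helper below is illustrative): one full 2π shift corresponds to a round trip of one modulation wavelength, so the unambiguous range is c / (2·f_mod).

<code python>
import math

C = 299_792_458.0  # speed of light, m/s

def cwam_distance(phase_shift_rad, mod_freq_hz):
    # Distance from the phase shift of the amplitude-modulation envelope
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

# Example: a pi/2 phase shift at 10 MHz modulation -> ~3.75 m
print(cwam_distance(math.pi / 2, 10e6))
</code>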
**Continuous-wave** frequency modulated (CWFM) Lidars mix the emitted and reflected signals via heterodyning (a method of mixing two frequencies). Using the resulting frequency shifts, it is possible to estimate object motion speed and direction.
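As a sketch of the speed estimate (an illustrative function, not a vendor API): the reflected wave is Doppler-shifted by 2·v/λ, so the measured shift converts directly into radial velocity, with the sign giving the direction of motion.

<code python>
def doppler_radial_velocity(freq_shift_hz, wavelength_m):
    # The factor 2 accounts for the round trip of the reflected wave;
    # positive velocity means the object is approaching
    return freq_shift_hz * wavelength_m / 2.0

# Example: a 12.9 MHz shift at a 1550 nm laser wavelength -> ~10 m/s
print(doppler_radial_velocity(12.9e6, 1550e-9))
</code>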
Since the laser ray is very compact, the sensing resolution is much higher than sonar sensors could provide. Another advantage is relative energy efficiency, enabling the use of Lidars even to scan objects at significant distances. Currently, the market provides single-beam Lidars and 2D/3D scanning Lidars; even 4D Lidars are in development to provide object motion data along with simple distance.
==== Radars ====
Radars use radio signals and their features to estimate the distance to an object, its speed, and its direction of motion. Mainly two types of radars are used in autonomous systems – pulsed radars and frequency modulation radars.
**Pulsed radars**, in the same way as sonars or pulse Lidars, use the time difference between emitted and received signal pulses, enabling estimation of the distance to the detected object.
**Frequency modulated continuous wave (FMCW) radars** use a frequency modulated signal, which might vary from 30 GHz to 300 GHz. The emitted signal is mixed with the received signal to produce a so-called intermediate frequency (IF) signal. The IF signal is used to estimate object range, speed, and direction. Dedicated high-resolution FMCW radars are used to receive radar images, enabling not only detection but also recognition of the detected objects. Sometimes these radars are called broad-band radars or imaging radars. Currently, mainly broad-band radars are used in combination with multiple receiving antennas, enabling effective operation with different frequencies.
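For a linear chirp, the range estimate from the IF signal reduces to one formula: the echo delay appears as a constant beat frequency proportional to the range. A minimal sketch (parameter names are illustrative):

<code python>
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(if_freq_hz, chirp_duration_s, bandwidth_hz):
    # The transmit frequency sweeps `bandwidth_hz` in `chirp_duration_s`;
    # the round-trip delay shifts it against the echo, producing a beat
    # (IF) frequency proportional to the object range
    return C * if_freq_hz * chirp_duration_s / (2.0 * bandwidth_hz)

# Example: 100 kHz beat, 50 us chirp sweeping 300 MHz -> ~2.5 m
print(fmcw_range(100e3, 50e-6, 300e6))
</code>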
| < | < | ||
| Line 53: | Line 53: | ||
==== Digital cameras ====
Digital cameras, like web cameras, are used to acquire visual information about the environment. Several camera configurations are used in autonomous systems:
**Single-camera solutions** use a single digital camera to obtain a series of frames, which enables recognizing an object in each frame and comparing its position relative to the autonomous system, and thus estimating the object's relative speed and displacement throughout the series of frames. This is the simplest and the most imprecise solution due to the imperfection of cameras, limited frames per second, the sensitivity of the given sensor, and other parameters.
**Stereo vision systems** use two horizontally aligned cameras, which are time-synchronized (frames are taken simultaneously). Time synchronization minimizes the difference between frames. Horizontal alignment allows observing a distant object from a slightly different angle, which creates a slightly different frame. These differences – binocular disparity – allow calculating a point's location in the 3D environment, like the human brain does when working with natural vision sensors – the eyes. Acquisition of the third dimension requires additional calculations and, inevitably, additional on-board computing power.
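The disparity-to-depth step itself is one line. A minimal sketch for rectified, horizontally aligned cameras (the focal length in pixels and the baseline are assumed calibration values):

<code python>
def stereo_depth(disparity_px, focal_length_px, baseline_m):
    # Depth is inversely proportional to the pixel shift (disparity)
    # of the same point between the left and right frames
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# Example: 700 px focal length, 12 cm baseline, 20 px disparity -> 4.2 m
print(stereo_depth(20, 700, 0.12))
</code>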
Unfortunately, frame-based cameras have several inherent drawbacks:
* Motion blur – caused by motion and sensor sensitivity. The lower the sensitivity, the stronger the blur effects that might be observed. Blurred images decrease object detection and distance estimation precision;
* Lens distortion – distorts images in an unpredictable way as a result of manufacturing imperfections;
* Frames per second – the fewer frames per second, the less accurate the derived estimates will be;
* Changes of light conditions from one frame to another, which complicates the overall processing. One of the obvious results is changes in colors, which reduces the usability of the frames.
**Event-based cameras** allow avoiding all of the mentioned disadvantages at the cost of more complicated data processing. The essence of the working principle is similar to the natural light-sensing retina in the eyes of biological systems, where only differences in light intensity are reported instead of the whole frame. Thus, motion blur as a phenomenon, and the related unwanted effects, are absent. Therefore, these cameras might be an excellent option for visual autonomous-system pose-estimation applications. Unfortunately, such sensors are still comparatively expensive and less widespread than conventional frame cameras.
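A sketch of the per-pixel event generation principle (a simplified model with made-up names, not a driver API): an event is emitted only when the log-intensity change at a pixel exceeds a contrast threshold, so static parts of the scene produce no data at all.

<code python>
import math

def detect_events(ref_log_intensity, new_intensity, threshold=0.2):
    # ref_log_intensity: per-pixel log intensity at the last event
    # new_intensity: current per-pixel intensity readings
    events = []
    for i, new in enumerate(new_intensity):
        new_log = math.log(new)
        if abs(new_log - ref_log_intensity[i]) >= threshold:
            events.append((i, 1 if new_log > ref_log_intensity[i] else -1))
            ref_log_intensity[i] = new_log  # reset the pixel reference
    return events

# Example: only pixel 1 changed enough to trigger an event
ref = [math.log(100.0)] * 3
print(detect_events(ref, [101.0, 140.0, 99.0]))  # [(1, 1)]
</code>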
| < | < | ||
| Line 72: | Line 72: | ||
| </ | </ | ||
Figure: frames from an experimental agriculture robot that uses multiple time-synchronized sensors – a single RGB camera (upper-left), among others.
==== Inertial Measurement Unit (IMU) ====
IMUs are the core of modern autonomous systems' inertial navigation, typically combining accelerometers and gyroscopes to estimate the system's motion and orientation.
Today's IMUs exploit different technical solutions, of which the most affordable are MEMS (Micro-Electro-Mechanical System) devices. MEMS gyroscopes use lithographically constructed versions of one or more vibrating mechanisms, i.e. tuning forks, vibrating wheels, or resonant solids of various designs ((https://
This design uses the Coriolis effect – a vibrating body tends to maintain its vibration plane even if its supporting body plane changes (i.e. the autonomous system has moved). As a result, measurable forces are created, which are used to estimate the angular rate.
Currently, the most precise gyroscopic sensor available is the fiber-optic gyroscope (FOG), which exploits the Sagnac effect ((http://
The basic principle is the use of two laser beams injected into a fiber-optic channel of significant length (e.g. 5 km). Due to the Sagnac effect, if the sensor is rotating, one of the beams experiences a slightly shorter path, which results in a phase shift. The phase shift is measured using the interferometry method, which results in an angular velocity estimate.
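As a worked sketch of the Sagnac relation (the coil parameters are made up for illustration), the phase shift for a fiber coil is Δφ = 2π·L·D·Ω / (λ·c), where L is the fiber length, D the coil diameter, Ω the rotation rate, and λ the laser wavelength:

<code python>
import math

C = 299_792_458.0  # speed of light, m/s

def sagnac_phase_shift(rotation_rate_rad_s, fiber_length_m, coil_diameter_m,
                       wavelength_m=1550e-9):
    # Phase difference between the two counter-propagating beams
    return (2.0 * math.pi * fiber_length_m * coil_diameter_m
            * rotation_rate_rad_s) / (wavelength_m * C)

# Example: 5 km of fiber on a 10 cm coil rotating at 1 deg/s -> ~0.12 rad
print(sagnac_phase_shift(math.radians(1.0), 5000.0, 0.10))
</code>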
Despite the various measuring methods, IMUs suffer from an inherent problem – error accumulation: because position and orientation are obtained by integrating the measured rates, even a tiny constant bias grows into an unbounded error over time, which is why IMU data is usually fused with other sensors.
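The effect is easy to reproduce numerically. In the sketch below (the bias value is chosen purely for illustration), a stationary system with a small constant gyro bias accumulates a large heading error simply through integration:

<code python>
def integrate_heading(gyro_rates_rad_s, dt_s, bias_rad_s=0.001):
    # Dead-reckoned heading: integration also accumulates the unknown
    # bias, so the error grows linearly with time
    heading = 0.0
    for rate in gyro_rates_rad_s:
        heading += (rate + bias_rad_s) * dt_s
    return heading

# A stationary system (true rate 0) sampled at 100 Hz for 10 minutes
samples = [0.0] * (100 * 600)
print(integrate_heading(samples, 0.01))  # ~0.6 rad (~34 deg) of pure drift
</code>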
==== Rotary encoders ====
Rotary encoders are widely used in ground systems, providing an additional, relatively precise and reliable source of displacement estimates. The main purpose of the sensor is to provide output data on wheel or shaft angular displacement. There are two main types of rotary encoders – absolute and incremental ((https://
As with all sensors, rotary encoders are built on several main technologies:
* **Mechanical** – in essence, they are potentiometers whose output changes with the rotation angle;
* **Optical** – an opto-pair detects a signal reflected from the rotating disk (mounted on the shaft) or light passing through slits in the disk;
* **On- and off-axis magnetic** – these types of sensors use stationary or rotary magnets and exploit the Hall effect to sense changes in a magnetic field.
Using one of these designs, absolute encoders encode every single rotation angle with a unique code, while incremental encoders produce a series of impulses. In the case of incremental encoding, usually quadrature A/B phase shifts are used to determine both direction and displacement.
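A minimal quadrature decoder sketch (a software model of the principle, not a specific chip interface): because the A and B channels are 90° out of phase, the order of their transitions encodes the rotation direction.

<code python>
# Valid (prev_A, prev_B, A, B) transitions and their step direction
STEP = {
    (0, 0, 1, 0): +1, (1, 0, 1, 1): +1, (1, 1, 0, 1): +1, (0, 1, 0, 0): +1,
    (0, 0, 0, 1): -1, (0, 1, 1, 1): -1, (1, 1, 1, 0): -1, (1, 0, 0, 0): -1,
}

def decode_quadrature(samples):
    # Signed tick count from a sequence of (A, B) channel samples
    position = 0
    prev = samples[0]
    for a, b in samples[1:]:
        position += STEP.get(prev + (a, b), 0)  # 0 = no or invalid change
        prev = (a, b)
    return position

# Four forward steps followed by one backward step -> net 3 ticks
signal = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0), (0, 1)]
print(decode_quadrature(signal))  # 3
</code>

Given the encoder's ticks per revolution and the wheel radius, the tick count converts directly into linear displacement (2·π·r·ticks / ticks_per_revolution).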
| < | < | ||
| Line 99: | Line 99: | ||
==== SLAM ====
In autonomous systems, as discussed previously, sensors are used to acquire important data about the system itself and its surrounding environment. Given a map of the environment, this data allows the system to estimate its own location within it.
However, in reality, a predefined reliable map is very rarely available. Therefore, the map has to be constructed during the exploration of the environment, while the system's location within the partially built map is estimated at the same time – hence the name simultaneous localization and mapping (SLAM).
The overall process might be split into several steps:
* Sensing the environment before any action is executed. This helps to acquire the first data and probably find some distinctive features, like corners in an office or crossings in an open-traffic application;
* Execution of motion, which provides motion data from the IMU, rotary encoders, or other internal sensors that report the actual motion of the system. One might imagine this step as moving for a short enough time with closed eyes;
* Location calculation and map update – this step is the most complicated, since it combines the previously acquired map data with the sensed motion data and uses them to update the map (a minimal sketch of the motion-update part follows below).
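A minimal sketch of the motion-update (prediction) part of such a loop, using dead reckoning from the internal sensors (a full SLAM system would follow it with a correction step that matches new sensor readings against the map, e.g. with a Kalman or particle filter):

<code python>
import math

def predict_pose(x, y, theta, distance, dtheta):
    # Update the pose estimate from odometry: encoder-measured distance
    # and gyro-measured heading change
    theta += dtheta
    x += distance * math.cos(theta)
    y += distance * math.sin(theta)
    return x, y, theta

# Drive 1 m, turn 90 degrees left, drive 1 m again
pose = (0.0, 0.0, 0.0)
pose = predict_pose(*pose, 1.0, 0.0)
pose = predict_pose(*pose, 1.0, math.radians(90))
print(pose)  # approximately (1.0, 1.0, 1.5708)
</code>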