Hardware Platform
NAO is a programmable, 57-cm tall humanoid robot with the following key components:
- Body with 25 degrees of freedom (DOF) whose key elements are electric motors and actuators
- Sensor network, including 2 cameras, 4 microphones, a sonar rangefinder, 2 IR emitters and receivers, 1 inertial board, 9 tactile sensors, and 8 pressure sensors
- Various communication devices, including a voice synthesizer, LED lights, and 2 high-fidelity speakers
- Intel ATOM 1.6 GHz CPU (located in the head) that runs a Linux kernel and supports Aldebaran’s proprietary middleware, NAOqi (see the connection sketch after this list)
- Second CPU (located in the torso)
- 27.6-watt-hour battery that provides NAO with 1.5 or more hours of autonomy, depending on usage
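Everything on the robot is exposed through NAOqi modules that can be called either on board or from a remote PC. As a minimal sketch of what that looks like with the Python SDK (the IP address below is a placeholder for your own robot; 9559 is the default NAOqi port):

```python
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"   # placeholder: replace with your NAO's address
PORT = 9559                 # default NAOqi port

# Each NAOqi module (text-to-speech, motion, memory, ...) is reached
# through a proxy object.
tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
tts.say("Hello, I am NAO")
```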
Motion
Omnidirectional walking:
NAO's walking uses a simple dynamic model (linear inverted
pendulum) and quadratic programming, and it is stabilized using feedback from the joint
sensors. This makes walking robust and resistant to small disturbances, and
torso oscillations in the frontal and lateral planes are absorbed. NAO can walk
on a variety of floor surfaces, such as carpeted, tiled, and wooden floors, and
can transition between these surfaces while walking.
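To give a rough idea of how the walk is driven in practice, here is a sketch using the ALMotion proxy from the Python SDK; the IP address is a placeholder, and moveTo is the walk call as documented in recent NAOqi versions:

```python
from naoqi import ALProxy

ROBOT_IP, PORT = "192.168.1.10", 9559   # placeholders for your robot

motion = ALProxy("ALMotion", ROBOT_IP, PORT)
posture = ALProxy("ALRobotPosture", ROBOT_IP, PORT)

motion.wakeUp()                        # turn stiffness on
posture.goToPosture("StandInit", 0.5)  # a safe starting posture

# Omnidirectional walk: x (forward, m), y (lateral, m), theta (rotation, rad)
motion.moveTo(0.3, 0.0, 0.0)           # 30 cm forward
motion.moveTo(0.0, 0.2, 0.0)           # 20 cm to the left
motion.moveTo(0.0, 0.0, 1.57)          # quarter turn in place
```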
Whole body motion:
NAO's motion module is based on generalized inverse
kinematics, which handles Cartesian coordinates, joint control, balance,
redundancy, and task priority. This means that when NAO is asked to extend its
arm, it bends over because its arm and leg joints are taken into account. NAO
will stop its movement if needed to maintain balance.
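As an illustration of how this looks through the API, here is a sketch that asks the right arm to move 10 cm forward while balance is kept as a constraint. It assumes the Python SDK and the whole-body calls (wbEnable, wbFootState, wbEnableBalanceConstraint, setPositions) as documented in recent NAOqi releases; the frame and axis-mask constants are written as literals:

```python
import time
from naoqi import ALProxy

ROBOT_IP, PORT = "192.168.1.10", 9559   # placeholders for your robot

motion = ALProxy("ALMotion", ROBOT_IP, PORT)
motion.wakeUp()

# Enable the whole-body balancer so the reaching task is solved together
# with a balance constraint on both legs.
motion.wbEnable(True)
motion.wbFootState("Fixed", "Legs")
motion.wbEnableBalanceConstraint(True, "Legs")

FRAME_TORSO = 0
AXIS_MASK_POSITION = 7   # control x, y, z only; leave the orientation free

# Take the right arm's current 6D position and ask for a point 10 cm
# further forward along the torso x axis.
target = list(motion.getPosition("RArm", FRAME_TORSO, True))
target[0] += 0.10

# Non-blocking Cartesian command: the generalized IK spreads the motion
# over the available joints while respecting the balance constraint.
motion.setPositions("RArm", FRAME_TORSO, target, 0.3, AXIS_MASK_POSITION)

time.sleep(3.0)
motion.wbEnable(False)
```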
Fall Manager:
The Fall Manager protects NAO when it falls. Its main
function is to detect when NAO's center of mass (CoM) shifts outside the
support polygon. The support polygon is determined by the position of the foot
or feet in contact with the ground. When a fall is detected, all motion tasks
are killed and, depending on the fall direction, NAO's arms assume a protective
position, the CoM is lowered, and the robot's stiffness is reduced to zero.
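The reflex itself is built in, but it can be queried and toggled from the API. A small sketch, assuming the Python SDK and the getFallManagerEnabled/setFallManagerEnabled calls and ALMemory event names used in recent NAOqi versions:

```python
from naoqi import ALProxy

ROBOT_IP, PORT = "192.168.1.10", 9559   # placeholders for your robot

motion = ALProxy("ALMotion", ROBOT_IP, PORT)

# The fall manager reflex is on by default; check and (re-)enable it.
# Disabling it is at your own risk.
print("Fall manager enabled: %s" % motion.getFallManagerEnabled())
motion.setFallManagerEnabled(True)

# When the reflex fires, ALMotion raises ALMemory events ("robotIsFalling",
# "robotHasFallen") that your own modules can subscribe to, for example to
# stop application-level behaviors before the protective posture kicks in.
```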
Vision
NAO sees using two 920p cameras, which can capture up to 30
images per second, and can track, learn, and recognize images and faces.
The first camera, located on NAO’s forehead, scans the
horizon, while the second, located at mouth level, scans the immediate
surroundings.
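To give a feel for the API, here is a hedged sketch of grabbing a single frame from the top camera through the ALVideoDevice module; the IP address is a placeholder, and the subscribeCamera call and the constants for VGA resolution and RGB color space follow recent NAOqi documentation:

```python
from naoqi import ALProxy

ROBOT_IP, PORT = "192.168.1.10", 9559   # placeholders for your robot

video = ALProxy("ALVideoDevice", ROBOT_IP, PORT)

# Top camera = 0, bottom camera = 1; resolution 2 = VGA; color space 11 = RGB.
handle = video.subscribeCamera("frame_grabber", 0, 2, 11, 30)
try:
    frame = video.getImageRemote(handle)   # [width, height, ..., raw bytes, ...]
    width, height, raw = frame[0], frame[1], frame[6]
    print("Got a %dx%d frame (%d bytes)" % (width, height, len(raw)))
finally:
    video.unsubscribe(handle)
```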
The software lets you retrieve photos and video streams of
what NAO sees. But eyes are only useful if you can interpret what you see.
That’s why NAO contains a set of algorithms for detecting
and recognizing faces and shapes. NAO can recognize who is talking to it,
find a ball, or even more complex objects.
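As a sketch of how detection results come back, assuming the Python SDK and the subscription call and "FaceDetected" memory layout described in the ALFaceDetection documentation:

```python
import time
from naoqi import ALProxy

ROBOT_IP, PORT = "192.168.1.10", 9559   # placeholders for your robot

faces = ALProxy("ALFaceDetection", ROBOT_IP, PORT)
memory = ALProxy("ALMemory", ROBOT_IP, PORT)

# Start the detector; results are published to ALMemory every 500 ms.
faces.subscribe("face_demo", 500, 0.0)
try:
    for _ in range(20):
        data = memory.getData("FaceDetected")
        # Layout: [timestamp, [face_0, face_1, ..., recognition_info], ...]
        if data and len(data) >= 2 and len(data[1]) > 1:
            print("Detected %d face(s)" % (len(data[1]) - 1))
        time.sleep(0.5)
finally:
    faces.unsubscribe("face_demo")
```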
These algorithms have been specially developed, with
constant attention to using a minimum of processor resources.
Furthermore,
NAO’s SDK lets you develop your own modules to interface with OpenCV (the Open
Source Computer Vision library originally developed by Intel).
Since you can execute modules on NAO or transfer them to a
PC connected to NAO, you can easily use the OpenCV display functions to develop
and test your algorithms with image feedback.
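For example, a remote module running on the PC might pull frames from ALVideoDevice and hand them straight to OpenCV for display, along these lines (a sketch, assuming NumPy and OpenCV's Python bindings are installed; color space 13 is BGR, which is the order OpenCV expects):

```python
import numpy as np
import cv2
from naoqi import ALProxy

ROBOT_IP, PORT = "192.168.1.10", 9559   # placeholders for your robot

video = ALProxy("ALVideoDevice", ROBOT_IP, PORT)
handle = video.subscribeCamera("cv_viewer", 0, 2, 13, 30)   # top camera, VGA, BGR
try:
    frame = video.getImageRemote(handle)
    width, height, raw = frame[0], frame[1], frame[6]
    # Wrap the raw buffer in a NumPy array so OpenCV can work on it directly.
    img = np.frombuffer(raw, dtype=np.uint8).reshape((height, width, 3))
    cv2.imshow("NAO top camera", img)
    cv2.waitKey(0)
finally:
    video.unsubscribe(handle)
    cv2.destroyAllWindows()
```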
Tactile Sensors:
Besides cameras and microphones, NAO is fitted with
capacitive sensors positioned on top of its head in three sections and on its
hands.
You can therefore give NAO information through touch:
pressing once to tell it to shut down, for example, or using the sensors as a
series of buttons to trigger an associated action.
The system comes with LED lights that indicate the type of
contact. You can also program complex sequences.
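A sketch of that button-style use, assuming the Python SDK and the standard ALMemory keys for the three head sections; the colors and spoken feedback are only illustrative:

```python
import time
from naoqi import ALProxy

ROBOT_IP, PORT = "192.168.1.10", 9559   # placeholders for your robot

memory = ALProxy("ALMemory", ROBOT_IP, PORT)
leds = ALProxy("ALLeds", ROBOT_IP, PORT)
tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)

# Treat the three head sections as buttons: poll their memory keys and
# acknowledge contact with the face LEDs and a spoken message.
SECTIONS = [
    ("front",  "Device/SubDeviceList/Head/Touch/Front/Sensor/Value",  0x00FF0000),
    ("middle", "Device/SubDeviceList/Head/Touch/Middle/Sensor/Value", 0x0000FF00),
    ("rear",   "Device/SubDeviceList/Head/Touch/Rear/Sensor/Value",   0x000000FF),
]

for _ in range(100):                       # poll for roughly 20 seconds
    for name, key, color in SECTIONS:
        if memory.getData(key) > 0.5:      # 1.0 while the section is touched
            leds.fadeRGB("FaceLeds", color, 0.3)
            tts.say("%s section touched" % name)
    time.sleep(0.2)
```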
Sonar Rangefinders:
NAO is equipped with two sonar channels: two transmitters
and two receivers.
They allow NAO to estimate the distances to obstacles in its
environment. The detection range is 0–70 cm.
Below 15 cm, there is no distance information; NAO only knows
that an object is present.
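A sketch of reading those measurements, assuming the Python SDK; the ALSonar subscription and the US/Left and US/Right memory keys follow the NAOqi documentation:

```python
import time
from naoqi import ALProxy

ROBOT_IP, PORT = "192.168.1.10", 9559   # placeholders for your robot

sonar = ALProxy("ALSonar", ROBOT_IP, PORT)
memory = ALProxy("ALMemory", ROBOT_IP, PORT)

# Start the emitters/receivers; measurements are written to ALMemory.
sonar.subscribe("sonar_demo")
try:
    for _ in range(10):
        left = memory.getData("Device/SubDeviceList/US/Left/Sensor/Value")
        right = memory.getData("Device/SubDeviceList/US/Right/Sensor/Value")
        print("Left: %.2f m   Right: %.2f m" % (left, right))
        time.sleep(0.5)
finally:
    sonar.unsubscribe("sonar_demo")
```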