Sensing addresses an AI's need to investigate and understand the game world around it. The first step is to add information to the world that the AI can detect and process. The AI can then be rigged with sensors designed to perform that detection.
Aspects are object attributes that can be added to any GameObject and detected by a Sensor. Add Aspects to GameObjects with the Add Aspect options in the RAIN menu. Adding an Aspect to an object automatically adds an Entity and a Decoration as well. Entities are components designed to be detectable by Sensors; they signal that an object has one or more Decorations to be sensed. Aspects are special Decorations that include both an Aspect Name and an associated Sensation.
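The containment relationship among these pieces can be sketched in plain code. The classes below are a conceptual model only; the names mirror the RAIN components described above, but the fields and types are illustrative assumptions rather than the actual RAIN{one} API:

    using System.Collections.Generic;

    // Conceptual model only -- names mirror RAIN components, but these are
    // not the actual RAIN{one} types.

    // A Decoration is a piece of sensable information attached to an object.
    public class Decoration { }

    // An Aspect is a special Decoration pairing a name with a sensation.
    public class Aspect : Decoration
    {
        public string AspectName;   // e.g., "flammable"
        public string Sensation;    // e.g., "visual"
    }

    // An Entity marks a GameObject as detectable and signals that it
    // carries one or more Decorations for sensors to find.
    public class Entity
    {
        public List<Decoration> Decorations = new List<Decoration>();
    }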
Think of Aspects as a way to tag GameObjects with information that tells the AI about an object's important characteristics. For example, trees and bushes could carry the Aspects 'green' and 'flammable', while enemy players might be tagged 'enemytype1'. Any information transmitted from the world to a sensor is conveyed by Aspects.
Aspect information is categorized by sensation. Sensors can be built to detect all sensations or only specific ones. The default sensations are visual, tactile, auditory, olfactory, and taste, but any word can be used as a custom sensation.
Sensors allow an AI to detect Aspects in the scene. There are three kinds of sensor in RAIN{one}: BoxSensor, ExpandingBoxSensor, and SphereSensor. Each of these sensors uses a trigger collider to detect collisions with Entity GameObjects and check their Aspects. Using the Add Sensor options in the RAIN menu adds a GameObject with a Rigidbody, a sensor component of the appropriate type, and a collider that matches the sensor, as the sketch below illustrates. Using the Create AI options in the RAIN menu adds an ExpandingBoxSensor to the new AI in the same way.
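For reference, the hierarchy those menu options produce resembles what the following hand-written Unity setup would build. The Rigidbody and trigger collider use standard Unity APIs; the sensor component itself is left as a commented placeholder because its exact type name in code is an assumption:

    using UnityEngine;

    public class SensorSetupSketch : MonoBehaviour
    {
        void Start()
        {
            // Roughly what "Add Sensor" produces: a child GameObject carrying
            // a Rigidbody, a sensor component, and a matching trigger collider.
            GameObject sensorObject = new GameObject("Sensor");
            sensorObject.transform.parent = transform;
            sensorObject.transform.localPosition = Vector3.zero;

            // A kinematic Rigidbody lets the trigger collider register
            // collisions without being moved by physics (illustrative choice).
            Rigidbody body = sensorObject.AddComponent<Rigidbody>();
            body.isKinematic = true;
            body.useGravity = false;

            // A sphere trigger collider, matching a SphereSensor.
            SphereCollider trigger = sensorObject.AddComponent<SphereCollider>();
            trigger.isTrigger = true;
            trigger.radius = 5f;  // illustrative detection radius

            // The RAIN sensor component would be added here, e.g.:
            // sensorObject.AddComponent<SphereSensor>();  // type name assumed
        }
    }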
Each sensor has one or more sensations that narrow the field of Aspects it should detect. A sensor with the visual sensation, for instance, can detect visual Aspects but not auditory ones. This allows an AI to have differently configured sensors for different sensations.
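In effect, the sensation acts as a filter applied during detection. The check below is a simplified assumption about the matching logic, not RAIN{one}'s internal implementation:

    using System.Collections.Generic;

    public static class SensationFilter
    {
        // A sensor detects an aspect only when the aspect's sensation is
        // among the sensations the sensor was built for (assumed logic).
        public static bool Matches(HashSet<string> sensorSensations,
                                   string aspectSensation)
        {
            return sensorSensations.Contains(aspectSensation);
        }
    }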
Objects detected by Sensors are stored in a special Belief cache associated with the Sensor. To access information about sensed objects, use the GetObjectWithAspect() and GetAllObjectsWithAspect() calls to retrieve matching objects. Alternatively, when using behavior trees you can include a Detect node, which automatically binds a detected object to a variable in the AI's ActionContext. See the API documentation for more information about dealing with sensors in code.
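A minimal usage sketch follows. The two method names come from this page, but the receiver, parameters, and return types are assumptions (captured in the placeholder interface below), so consult the API documentation for the real declarations:

    using System.Collections.Generic;
    using UnityEngine;

    // Hypothetical shape of the sensor query API, inferred from the method
    // names above; the real RAIN{one} declarations may differ.
    public interface ISensorQueries
    {
        GameObject GetObjectWithAspect(string aspectName);
        IList<GameObject> GetAllObjectsWithAspect(string aspectName);
    }

    public class SensingExample : MonoBehaviour
    {
        public void CheckBeliefs(ISensorQueries sensor)
        {
            // Retrieve one sensed object carrying the "enemytype1" aspect.
            GameObject enemy = sensor.GetObjectWithAspect("enemytype1");
            if (enemy != null)
                Debug.Log("Enemy sensed: " + enemy.name);

            // Retrieve every sensed object carrying the "flammable" aspect.
            foreach (GameObject obj in sensor.GetAllObjectsWithAspect("flammable"))
                Debug.Log("Flammable object sensed: " + obj.name);
        }
    }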
Although all three sensors perform detection through collisions, each also offers three options for adding a line-of-sight raycast test as an additional detection check.
Note: When using Raycasting, it is important to remember that AI Sensors don't distinguish between aspects/colliders defined within the AI (the AI itself) and aspects/colliders outside the AI. When specifying a Raycast Location, ensure that the Raycast begins outside any colliders and does not pass through the AI itself.
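In plain Unity terms, one way to honor that advice is to start the ray from a point offset outside the AI's own colliders and to exclude the AI's layer with a mask. The sketch below uses only standard Unity physics calls; the offset and mask values are illustrative assumptions:

    using UnityEngine;

    public class LineOfSightCheck : MonoBehaviour
    {
        // Layers the ray may hit; exclude the AI's own layer so the ray
        // cannot collide with the AI itself (illustrative default: all layers).
        public LayerMask visibilityMask = ~0;

        // Start the ray slightly outside the AI's colliders, e.g., at eye
        // height and nudged forward (illustrative offset).
        public Vector3 raycastOffset = new Vector3(0f, 1.6f, 0.5f);

        public bool HasLineOfSight(Transform target)
        {
            Vector3 origin = transform.TransformPoint(raycastOffset);
            Vector3 toTarget = target.position - origin;

            RaycastHit hit;
            if (Physics.Raycast(origin, toTarget.normalized, out hit,
                                toTarget.magnitude, visibilityMask))
            {
                // Line of sight holds only if the first thing hit is the target.
                return hit.transform == target;
            }

            return true;  // Nothing between the ray origin and the target.
        }
    }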