The Vision Sensor allows robots to see things. What it sees are blobs of color - which colors it looks for is up to us to configure. When using the Vision Sensor, the flow works like this: the program asks the camera to find matches for a particular color signature or color code, the camera saves information on up to four objects it saw, and the program can then ask for details about any of those objects.
Vision Sensor support was added to VEX IQ with firmware version 2.1.1 in November of 2018, so be sure to check if your firmware needs to be updated. Older firmware will not recognize the Vision Sensor.
Before the camera can recognize colors, it must be taught which colors to look for. First, choose which port on the robot's brain you would like to connect the camera to. Click on the small gear icon opposite the port number you will be plugging the camera into. A VEX IQ device list should show up:
Select the Vision Sensor from the list, but don't close the device list yet.
To configure the camera, we're going to need the horsepower of a big computer so we can see what the camera sees in real time. The IQ Brain isn't fast enough to transfer live video back to our monitor, so we're going to plug the camera into the computer via USB. You can use the same micro USB cord used for the IQ Brain, and you don't have to unplug the IQ Smart Cable from the Vision Sensor. Once the Vision Sensor is plugged into the computer, click on the "Configure Vision Sensor..." button at the bottom left of the device list window. The Vision Sensor configuration window should appear:
To set a signature, start by pointing the Vision Sensor at a sample of the target color. This is best done in the same lighting that the robot will be operating in. If there is nowhere convenient to set the camera down, you can click the Freeze button at the lower right of the image pane to hold a frame stationary while you work. Once you have a stable image, click and drag on the image to create a red rectangle around a swatch of the desired color:
If you selected a discernible block of color for the signature, the "Set" buttons will change to green. Click one of the Set buttons to save a raw signature based on the selected region to the corresponding signature slot. After doing so, the red box will be cleared, the Set buttons will change back to blue, and your new color signature will be highlighted in the frame wherever it appears, along with some basic information on each identifiable blob of that color the camera could find:
After a signature is saved, you can give it a name and adjust how selective the camera should be. To set a name, click on the existing name (s1-7 by default), highlight the old name, and type in a new one. Press Enter or click outside the naming box to save the name. To set the tolerance, click on the bidirectional arrow (↔) to the right of the signature. This will open another box with a tolerance slider in it. Drag the slider to the right to make the Vision Sensor more tolerant - that is, to make it accept more colors similar to the raw color. Drag the slider to the left to make it less tolerant (more selective) and more likely to reject colors that are close-to-but-not-quite matching. You can see what the camera considers matching or non-matching in the video feed (or frozen frame) on the left side. The difference between tolerant and intolerant settings is shown in the following two screen grabs. For comparison, the default tolerance is 3.0.
If you can't get a signature to be reliable through region selection and tolerance adjustment alone, you may have to adjust the image brightness. Image brightness affects everything the camera sees, not just one signature or code, so adjust it sparingly if you are trying to configure multiple signatures! It is configured with the bidirectional arrow (↔) located to the right of the Brightness label.
The Vision Sensor can also look for codes constructed from multiple color signatures. To define a code, there must be at least two signatures defined. Once you have them, you can switch to the Codes tab (at the bottom in the picture below) and define codes. To do so, click on one of the white fields reading "Enter Code...", then enter your code. Codes are of the format #,#[,#[,#[,#]]], where the #'s are the IDs of signatures from 1-7. A signature's ID is based on its position in the list of signatures. A code must contain a minimum of two signatures and can accept up to five. Signatures can repeat within a code but cannot appear next to each other, so 1,2,1 would be okay but 1,1,2 would not. After a code is defined, you can give it a name in the gray box, and the camera will start highlighting the code in the video feed whenever it identifies it. Note that a signature recognized as part of a code will not also show up as a distinct signature object, so use codes wisely.
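To make the format concrete, here are a few example code definitions checked against the rules above:

```
1,2        valid   - two signatures, the minimum
1,2,1      valid   - signature 1 repeats, but not back-to-back
1,1,2      invalid - signature 1 appears twice in a row
1,2,3,4,5  valid   - five signatures, the maximum
```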
Once the camera is configured, it's time to build code to use it. Here are three blocks and one example snippet:
The first of these blocks is the most important for using a Vision Sensor. It asks the camera what it is currently seeing that matches the selected signature or code, then saves the response for use by later blocks. This should usually be the first block in any section of Vision Sensor-related work.
This checks whether an object was seen and saved for the specified index (from 0-3, with 0 as the first to be filled) the last time the Vision Sensor Get Objects block was used. If an object was saved for the index in question, this block evaluates as True; if not, it evaluates as False. The result can change after every use of Vision Sensor Get Objects, so always check that the object you want to investigate exists before trying to do things with it!
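The slot-filling behavior is easy to model in ordinary Python. The sketch below is purely conceptual - it involves no Robot Mesh Studio API - and just shows why checking existence amounts to asking whether a slot was filled:

```python
# Conceptual model of the four object slots - plain Python, no hardware.
# The sensor fills slots starting at index 0, so "object exists at index i"
# simply means the last snapshot recorded more than i objects.
recorded = ["blob_a", "blob_b"]  # pretend the last Get Objects found 2 blobs

def object_exists(index):
    return 0 <= index <= 3 and index < len(recorded)

print(object_exists(0))  # True:  slot 0 was filled
print(object_exists(2))  # False: only slots 0 and 1 were filled
```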
This block gets information on the objects saved by Vision Sensor Get Objects. It takes the same indexes as Vision Sensor Object Exists (0-3), where 0 is the first possible object. There are several possible properties that can be investigated by this block:
This snippet is included as an example of all of the pieces working together, the way you might use them in a program. It starts by saving information from the Vision Sensor with the Vision Sensor Get Objects block, then checks each of the potential objects (max 4) it might have recorded. If it finds a recorded object, it reports the x location of the object's center. Other actions could include figuring out if an object is to the robot's left or right and turning towards it, or advancing towards an object and picking it up once it appears close enough.
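For readers who prefer text to blocks, here is the same pattern written out in Python. This is a conceptual sketch, not Robot Mesh Studio's actual generated code: MockVision simulates the sensor so the sketch runs anywhere, and names like get_objects, object_exists, object_property, and center_x are assumptions chosen to mirror the blocks' wording.

```python
# Conceptual Python sketch of the snippet described above. MockVision
# stands in for the real Vision Sensor; on a robot, the equivalent calls
# would come from your configured vision device.

class MockVision:
    def __init__(self):
        self._objects = []  # up to 4 blobs from the most recent snapshot

    def get_objects(self, signature):
        # "Vision Sensor Get Objects": snapshot and record up to 4 matches.
        # Fake data stands in for what the camera would report.
        self._objects = [
            {"center_x": 120, "center_y": 80, "width": 30, "height": 24},
            {"center_x": 210, "center_y": 95, "width": 12, "height": 10},
        ]

    def object_exists(self, index):
        # "Vision Sensor Object Exists": was slot 0-3 filled last snapshot?
        return 0 <= index < len(self._objects)

    def object_property(self, index, name):
        # The property block: read one saved detail of a recorded object.
        return self._objects[index][name]

vision = MockVision()
vision.get_objects("SIG_GREEN")      # 1) ask the camera for matches

for i in range(4):                   # 2) walk all four possible slots
    if vision.object_exists(i):      # 3) check existence before reading!
        print("object", i, "center x =", vision.object_property(i, "center_x"))
```

The shape carries over to the real blocks regardless of the exact names: snapshot first, then guard every property read with an existence check.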
There are example projects on Robot Mesh Studio that accomplish some common tasks with a Vision Sensor. Here are links to a couple of them: