| Copyright | (c) 2013-2021 Brendan Hay |
|---|---|
| License | Mozilla Public License, v. 2.0. |
| Maintainer | Brendan Hay <brendan.g.hay+amazonka@gmail.com> |
| Stability | auto-generated |
| Portability | non-portable (GHC extensions) |
| Safe Haskell | None |
Detects instances of real-world entities within an image (JPEG or PNG) provided as input. This includes objects like flower, tree, and table; events like wedding, graduation, and birthday party; and concepts like landscape, evening, and nature.
For an example, see Analyzing Images Stored in an Amazon S3 Bucket in the Amazon Rekognition Developer Guide.
DetectLabels does not support the detection of activities. However, activity detection is supported for label detection in videos. For more information, see StartLabelDetection in the Amazon Rekognition Developer Guide.
You pass the input image as base64-encoded image bytes or as a reference to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. The image must be either a PNG or JPEG formatted file.
For each object, scene, and concept, the API returns one or more labels. Each label provides the object name and the level of confidence that the image contains the object. For example, suppose the input image has a lighthouse, the sea, and a rock. The response includes all three labels, one for each object.
{Name: lighthouse, Confidence: 98.4629}
{Name: rock, Confidence: 79.2097}
{Name: sea, Confidence: 75.061}
In the preceding example, the operation returns one label for each of the three objects. The operation can also return multiple labels for the same object in the image. For example, if the input image shows a flower (for example, a tulip), the operation might return the following three labels.
{Name: flower, Confidence: 99.0562}
{Name: plant, Confidence: 99.0562}
{Name: tulip, Confidence: 99.0562}
In this example, the detection algorithm more precisely identifies the flower as a tulip.
In response, the API returns an array of labels. In addition, the response includes the orientation correction. Optionally, you can specify MinConfidence to control the confidence threshold for the labels returned. The default is 55%. You can also add the MaxLabels parameter to limit the number of labels returned.
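As a minimal sketch of setting both optional parameters on a request built with the newDetectLabels smart constructor (assuming the (&) and (?~) operators from Control.Lens; any van Laarhoven lens library works with the generated lenses):

```haskell
import Control.Lens ((&), (?~))
import Amazonka.Rekognition

-- Request at most 10 labels, each with at least 75% confidence
-- (instead of the 55% default). `img` is the mandatory input Image.
labelRequest :: Image -> DetectLabels
labelRequest img =
  newDetectLabels img
    & detectLabels_minConfidence ?~ 75
    & detectLabels_maxLabels ?~ 10
```

Leaving either lens untouched keeps the service defaults: a 55% minimum confidence and no cap on the number of labels.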
If the object detected is a person, the operation doesn't provide the same facial details that the DetectFaces operation provides.
DetectLabels returns bounding boxes for instances of common object labels in an array of Instance objects. An Instance object contains a BoundingBox object, giving the location of the label on the image, along with the confidence with which the bounding box was detected.
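A small sketch of walking those nested values, assuming the label_instances, instance_boundingBox, and instance_confidence lenses from Amazonka.Rekognition.Types together with Control.Lens folds:

```haskell
import Control.Lens ((^.), (^..), _Just, folded)
import Amazonka.Rekognition

-- Collect each detected instance's bounding box and detection confidence
-- for a single label.
labelBoxes :: Label -> [(Maybe BoundingBox, Maybe Double)]
labelBoxes lbl =
  [ (inst ^. instance_boundingBox, inst ^. instance_confidence)
  | inst <- lbl ^.. label_instances . _Just . folded
  ]
```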
DetectLabels also returns a hierarchical taxonomy of detected labels. For example, a detected car might be assigned the label Car. The label Car has two parent labels: Vehicle (its parent) and Transportation (its grandparent). The response returns the entire list of ancestors for a label. Each ancestor is a unique label in the response. In the previous example, Car, Vehicle, and Transportation are returned as unique labels in the response.
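A sketch of reading that taxonomy, assuming the label_parents and parent_name lenses from Amazonka.Rekognition.Types:

```haskell
import Control.Lens ((^..), _Just, folded)
import Data.Text (Text)
import Amazonka.Rekognition

-- Names of the ancestors of a label, e.g. ["Vehicle", "Transportation"]
-- for a label named "Car".
ancestorNames :: Label -> [Text]
ancestorNames lbl =
  lbl ^.. label_parents . _Just . folded . parent_name . _Just
```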
This is a stateless API operation. That is, the operation does not persist any data.
This operation requires permissions to perform the rekognition:DetectLabels action.
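Putting the pieces together, here is a minimal end-to-end sketch. It assumes the amazonka 2.x entry points (newEnv, discover, runResourceT, send; older releases spell credential discovery as newEnv Discover), the Image and S3Object smart constructors and lenses from Amazonka.Rekognition.Types, and Control.Lens operators; the bucket and key names are placeholders.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Amazonka
import Amazonka.Rekognition
import Control.Lens ((&), (?~), (^.))
import Data.Foldable (for_)
import Data.Maybe (fromMaybe)

main :: IO ()
main = do
  -- Pick up credentials and region from the environment
  -- (assumption: amazonka 2.x; older releases use `newEnv Discover`).
  env <- Amazonka.newEnv Amazonka.discover

  -- Reference an image already stored in S3; bucket and key are placeholders.
  let s3ref =
        newS3Object
          & s3Object_bucket ?~ "my-photo-bucket"
          & s3Object_name ?~ "photos/lighthouse.jpg"
      img = newImage & image_s3Object ?~ s3ref
      req = newDetectLabels img & detectLabels_maxLabels ?~ 10

  resp <- Amazonka.runResourceT (Amazonka.send env req)

  -- Print each detected label's name and confidence.
  for_ (fromMaybe [] (resp ^. detectLabelsResponse_labels)) $ \lbl ->
    print (lbl ^. label_name, lbl ^. label_confidence)
```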
Synopsis
- data DetectLabels = DetectLabels' {}
- newDetectLabels :: Image -> DetectLabels
- detectLabels_minConfidence :: Lens' DetectLabels (Maybe Double)
- detectLabels_maxLabels :: Lens' DetectLabels (Maybe Natural)
- detectLabels_image :: Lens' DetectLabels Image
- data DetectLabelsResponse = DetectLabelsResponse' {}
- newDetectLabelsResponse :: Int -> DetectLabelsResponse
- detectLabelsResponse_labels :: Lens' DetectLabelsResponse (Maybe [Label])
- detectLabelsResponse_orientationCorrection :: Lens' DetectLabelsResponse (Maybe OrientationCorrection)
- detectLabelsResponse_labelModelVersion :: Lens' DetectLabelsResponse (Maybe Text)
- detectLabelsResponse_httpStatus :: Lens' DetectLabelsResponse Int
Creating a Request
data DetectLabels Source #
See: newDetectLabels smart constructor.
DetectLabels'
  minConfidence :: Maybe Double
  maxLabels :: Maybe Natural
  image :: Image
newDetectLabels :: Image -> DetectLabels Source #

Create a value of DetectLabels with all optional fields omitted.
Use generic-lens or optics to modify other optional fields.
The following record fields are available, with the corresponding lenses provided for backwards compatibility:
$sel:minConfidence:DetectLabels', detectLabels_minConfidence - Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn't return any labels with confidence lower than this specified value.
If MinConfidence is not specified, the operation returns labels with a confidence value greater than or equal to 55 percent.
$sel:maxLabels:DetectLabels', detectLabels_maxLabels - Maximum number of labels you want the service to return in the response. The service returns the specified number of highest confidence labels.
$sel:image:DetectLabels', detectLabels_image - The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. Images stored in an S3 Bucket do not need to be base64-encoded.
If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.
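A sketch of the two ways to populate this field, assuming the newImage and newS3Object smart constructors and the image_s3Object, image_bytes, s3Object_bucket, and s3Object_name lenses from Amazonka.Rekognition.Types; the module location of the Base64 wrapper is an assumption about the amazonka core, and the bucket, key, and file path are placeholders.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))
import qualified Data.ByteString as BS
import Amazonka.Data (Base64 (..)) -- assumption: the Base64 wrapper lives here in amazonka 2.x
import Amazonka.Rekognition

-- Variant 1: point at an object that is already in S3 (no base64 involved).
imageFromS3 :: Image
imageFromS3 = newImage & image_s3Object ?~ s3ref
  where
    s3ref =
      newS3Object
        & s3Object_bucket ?~ "my-photo-bucket" -- placeholder bucket
        & s3Object_name ?~ "photos/tulip.png"  -- placeholder key

-- Variant 2: pass raw image bytes; the SDK base64-encodes them on the wire,
-- so no manual encoding is needed.
imageFromFile :: FilePath -> IO Image
imageFromFile path = do
  bs <- BS.readFile path
  pure (newImage & image_bytes ?~ Base64 bs)
```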
Request Lenses
detectLabels_minConfidence :: Lens' DetectLabels (Maybe Double) Source #
Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn't return any labels with confidence lower than this specified value.
If MinConfidence is not specified, the operation returns labels with a confidence value greater than or equal to 55 percent.
detectLabels_maxLabels :: Lens' DetectLabels (Maybe Natural) Source #
Maximum number of labels you want the service to return in the response. The service returns the specified number of highest confidence labels.
detectLabels_image :: Lens' DetectLabels Image Source #
The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. Images stored in an S3 Bucket do not need to be base64-encoded.
If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.
Destructuring the Response
data DetectLabelsResponse Source #
See: newDetectLabelsResponse smart constructor.
DetectLabelsResponse'
  labels :: Maybe [Label]
  orientationCorrection :: Maybe OrientationCorrection
  labelModelVersion :: Maybe Text
  httpStatus :: Int
newDetectLabelsResponse Source #
Create a value of DetectLabelsResponse with all optional fields omitted.
Use generic-lens or optics to modify other optional fields.
The following record fields are available, with the corresponding lenses provided for backwards compatibility:
$sel:labels:DetectLabelsResponse', detectLabelsResponse_labels - An array of labels for the real-world objects detected.
$sel:orientationCorrection:DetectLabelsResponse', detectLabelsResponse_orientationCorrection - The value of OrientationCorrection is always null.
If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image's orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don't contain Exif metadata.
Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren't translated and represent the object locations before the image is rotated.
$sel:labelModelVersion:DetectLabelsResponse', detectLabelsResponse_labelModelVersion - Version number of the label detection model that was used to detect labels.
$sel:httpStatus:DetectLabelsResponse', detectLabelsResponse_httpStatus - The response's HTTP status code.
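A small sketch of pulling those fields out of a response with the lenses above (assuming the view operator from Control.Lens):

```haskell
import Control.Lens ((^.))
import Data.Maybe (fromMaybe)
import Data.Text (Text)
import Amazonka.Rekognition

-- Summarise a response: HTTP status, model version (if reported), and how
-- many labels came back. OrientationCorrection is documented to always be
-- Nothing, so it is not inspected here.
summarise :: DetectLabelsResponse -> (Int, Maybe Text, Int)
summarise resp =
  ( resp ^. detectLabelsResponse_httpStatus
  , resp ^. detectLabelsResponse_labelModelVersion
  , length (fromMaybe [] (resp ^. detectLabelsResponse_labels))
  )
```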
Response Lenses
detectLabelsResponse_labels :: Lens' DetectLabelsResponse (Maybe [Label]) Source #
An array of labels for the real-world objects detected.
detectLabelsResponse_orientationCorrection :: Lens' DetectLabelsResponse (Maybe OrientationCorrection) Source #
The value of OrientationCorrection is always null.
If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image's orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don't contain Exif metadata.
Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren't translated and represent the object locations before the image is rotated.
detectLabelsResponse_labelModelVersion :: Lens' DetectLabelsResponse (Maybe Text) Source #
Version number of the label detection model that was used to detect labels.
detectLabelsResponse_httpStatus :: Lens' DetectLabelsResponse Int Source #
The response's HTTP status code.
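As a final sketch, the response lenses can also drive client-side filtering, here keeping only labels at or above a caller-chosen confidence (the label_confidence lens from Amazonka.Rekognition.Types is assumed):

```haskell
import Control.Lens ((^.))
import Data.Maybe (fromMaybe)
import Amazonka.Rekognition

-- Keep only labels whose confidence meets a stricter threshold than the
-- MinConfidence sent with the request.
labelsAbove :: Double -> DetectLabelsResponse -> [Label]
labelsAbove threshold resp =
  [ lbl
  | lbl <- fromMaybe [] (resp ^. detectLabelsResponse_labels)
  , maybe False (>= threshold) (lbl ^. label_confidence)
  ]
```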