libZSservicesZSamazonka-rekognitionZSamazonka-rekognition
Copyright    (c) 2013-2021 Brendan Hay
License      Mozilla Public License, v. 2.0.
Maintainer   Brendan Hay <brendan.g.hay+amazonka@gmail.com>
Stability    auto-generated
Portability  non-portable (GHC extensions)
Safe Haskell None

Amazonka.Rekognition.DetectFaces

Description

Detects faces within an image that is provided as input.

DetectFaces detects the 100 largest faces in the image. For each face detected, the operation returns face details. These details include a bounding box of the face, a confidence value (that the bounding box contains a face), and a fixed set of attributes such as facial landmarks (for example, coordinates of eye and mouth), presence of beard, sunglasses, and so on.

The face-detection algorithm is most effective on frontal faces. For non-frontal or obscured faces, the algorithm might not detect the faces or might detect faces with lower confidence.

You pass the input image either as base64-encoded image bytes or as a reference to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. The image must be either a PNG or JPEG formatted file.

This is a stateless API operation. That is, the operation does not persist any data.

This operation requires permissions to perform the rekognition:DetectFaces action.
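
As a rough usage sketch (not part of the generated reference): a request is built with the smart constructors re-exported from Amazonka.Rekognition and dispatched with the core amazonka machinery. The bucket and key below are placeholders, the newEnv/discover/send plumbing is assumed from the amazonka-2.x interface, and the Attribute_ALL pattern follows the generated enum naming convention; check these against the versions you are using.

    {-# LANGUAGE OverloadedStrings #-}

    import qualified Amazonka
    import Amazonka.Rekognition
    import Control.Lens ((&), (?~), (^.))
    import Control.Monad.Trans.Resource (runResourceT)

    main :: IO ()
    main = do
      -- Region and credential discovery (amazonka-2.x style, assumed).
      env <- Amazonka.newEnv Amazonka.discover

      -- Reference an image that is already stored in S3 (placeholder names).
      let s3obj = newS3Object
                    & s3Object_bucket ?~ "my-example-bucket"
                    & s3Object_name   ?~ "photos/group.jpg"
          img   = newImage & image_s3Object ?~ s3obj

          -- Ask for every facial attribute instead of the DEFAULT subset.
          req   = newDetectFaces img
                    & detectFaces_attributes ?~ [Attribute_ALL]

      resp <- runResourceT (Amazonka.send env req)
      print (resp ^. detectFacesResponse_faceDetails)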

Synopsis

Creating a Request

data DetectFaces Source #

See: newDetectFaces smart constructor.

Constructors

DetectFaces' 

Fields

  • attributes :: Maybe [Attribute]

    An array of facial attributes you want to be returned. This can be the default list of attributes or all attributes. If you don't specify a value for Attributes or if you specify ["DEFAULT"], the API returns the following subset of facial attributes: BoundingBox, Confidence, Pose, Quality, and Landmarks. If you provide ["ALL"], all facial attributes are returned, but the operation takes longer to complete.

    If you provide both ["ALL", "DEFAULT"], the service uses a logical AND operator to determine which attributes to return (in this case, all attributes).

  • image :: Image

    The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.

    If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.

Instances

Instances details
Eq DetectFaces Source # 
Instance details

Defined in Amazonka.Rekognition.DetectFaces

Read DetectFaces Source # 
Instance details

Defined in Amazonka.Rekognition.DetectFaces

Show DetectFaces Source # 
Instance details

Defined in Amazonka.Rekognition.DetectFaces

Generic DetectFaces Source # 
Instance details

Defined in Amazonka.Rekognition.DetectFaces

Associated Types

type Rep DetectFaces :: Type -> Type #

NFData DetectFaces Source # 
Instance details

Defined in Amazonka.Rekognition.DetectFaces

Methods

rnf :: DetectFaces -> () #

Hashable DetectFaces Source # 
Instance details

Defined in Amazonka.Rekognition.DetectFaces

ToJSON DetectFaces Source # 
Instance details

Defined in Amazonka.Rekognition.DetectFaces

AWSRequest DetectFaces Source # 
Instance details

Defined in Amazonka.Rekognition.DetectFaces

Associated Types

type AWSResponse DetectFaces #

ToHeaders DetectFaces Source # 
Instance details

Defined in Amazonka.Rekognition.DetectFaces

Methods

toHeaders :: DetectFaces -> [Header] #

ToPath DetectFaces Source # 
Instance details

Defined in Amazonka.Rekognition.DetectFaces

ToQuery DetectFaces Source # 
Instance details

Defined in Amazonka.Rekognition.DetectFaces

type Rep DetectFaces Source # 
Instance details

Defined in Amazonka.Rekognition.DetectFaces

type Rep DetectFaces = D1 ('MetaData "DetectFaces" "Amazonka.Rekognition.DetectFaces" "libZSservicesZSamazonka-rekognitionZSamazonka-rekognition" 'False) (C1 ('MetaCons "DetectFaces'" 'PrefixI 'True) (S1 ('MetaSel ('Just "attributes") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [Attribute])) :*: S1 ('MetaSel ('Just "image") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Image)))
type AWSResponse DetectFaces Source # 
Instance details

Defined in Amazonka.Rekognition.DetectFaces

newDetectFaces Source #

Create a value of DetectFaces with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:attributes:DetectFaces', detectFaces_attributes - An array of facial attributes you want to be returned. This can be the default list of attributes or all attributes. If you don't specify a value for Attributes or if you specify ["DEFAULT"], the API returns the following subset of facial attributes: BoundingBox, Confidence, Pose, Quality, and Landmarks. If you provide ["ALL"], all facial attributes are returned, but the operation takes longer to complete.

If you provide both ["ALL", "DEFAULT"], the service uses a logical AND operator to determine which attributes to return (in this case, all attributes).

$sel:image:DetectFaces', detectFaces_image - The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.

If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.
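
As a hedged illustration of the construction pattern above: the only required argument is the image, and the optional attributes field can then be set either with the provided lens or through generic-lens field access. The Data.Generics.Labels import below supplies the #attributes label, and Attribute_DEFAULT follows the generated enum naming; both are assumptions to verify against your versions.

    {-# LANGUAGE OverloadedLabels #-}

    import Amazonka.Rekognition
    import Control.Lens ((&), (?~))
    import Data.Generics.Labels ()  -- IsLabel-based field lenses from generic-lens

    -- | Given an already-built Image, request only the DEFAULT attribute subset.
    defaultRequest :: Image -> DetectFaces
    defaultRequest img =
      newDetectFaces img
        & #attributes ?~ [Attribute_DEFAULT]  -- generic-lens field access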

Request Lenses

detectFaces_attributes :: Lens' DetectFaces (Maybe [Attribute]) Source #

An array of facial attributes you want to be returned. This can be the default list of attributes or all attributes. If you don't specify a value for Attributes or if you specify ["DEFAULT"], the API returns the following subset of facial attributes: BoundingBox, Confidence, Pose, Quality, and Landmarks. If you provide ["ALL"], all facial attributes are returned, but the operation takes longer to complete.

If you provide both ["ALL", "DEFAULT"], the service uses a logical AND operator to determine which attributes to return (in this case, all attributes).

detectFaces_image :: Lens' DetectFaces Image Source #

The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.

If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.
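
For illustration (a sketch under the same assumptions as the examples above), the two request lenses compose with ordinary lens operators, so an existing request can be adjusted without rebuilding it:

    import Amazonka.Rekognition
    import Control.Lens ((&), (.~), (?~))

    -- | Upgrade an existing request to return every facial attribute.
    requestAllAttributes :: DetectFaces -> DetectFaces
    requestAllAttributes req = req & detectFaces_attributes ?~ [Attribute_ALL]

    -- | Point an existing request at a different input image.
    withImage :: Image -> DetectFaces -> DetectFaces
    withImage img req = req & detectFaces_image .~ img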

Destructuring the Response

data DetectFacesResponse Source #

See: newDetectFacesResponse smart constructor.

Constructors

DetectFacesResponse' 

Fields

  • orientationCorrection :: Maybe OrientationCorrection

    The value of OrientationCorrection is always null.

    If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image's orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don't contain Exif metadata.

    Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren't translated and represent the object locations before the image is rotated.

  • faceDetails :: Maybe [FaceDetail]

    Details of each face found in the image.

  • httpStatus :: Int

    The response's HTTP status code.

Instances

Instances details
Eq DetectFacesResponse Source # 
Instance details

Defined in Amazonka.Rekognition.DetectFaces

Read DetectFacesResponse Source # 
Instance details

Defined in Amazonka.Rekognition.DetectFaces

Show DetectFacesResponse Source # 
Instance details

Defined in Amazonka.Rekognition.DetectFaces

Generic DetectFacesResponse Source # 
Instance details

Defined in Amazonka.Rekognition.DetectFaces

Associated Types

type Rep DetectFacesResponse :: Type -> Type #

NFData DetectFacesResponse Source # 
Instance details

Defined in Amazonka.Rekognition.DetectFaces

Methods

rnf :: DetectFacesResponse -> () #

type Rep DetectFacesResponse Source # 
Instance details

Defined in Amazonka.Rekognition.DetectFaces

type Rep DetectFacesResponse = D1 ('MetaData "DetectFacesResponse" "Amazonka.Rekognition.DetectFaces" "libZSservicesZSamazonka-rekognitionZSamazonka-rekognition" 'False) (C1 ('MetaCons "DetectFacesResponse'" 'PrefixI 'True) (S1 ('MetaSel ('Just "orientationCorrection") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe OrientationCorrection)) :*: (S1 ('MetaSel ('Just "faceDetails") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [FaceDetail])) :*: S1 ('MetaSel ('Just "httpStatus") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Int))))

newDetectFacesResponse Source #

Create a value of DetectFacesResponse with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:orientationCorrection:DetectFacesResponse', detectFacesResponse_orientationCorrection - The value of OrientationCorrection is always null.

If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image's orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don't contain Exif metadata.

Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren't translated and represent the object locations before the image is rotated.

$sel:faceDetails:DetectFacesResponse', detectFacesResponse_faceDetails - Details of each face found in the image.

$sel:httpStatus:DetectFacesResponse', detectFacesResponse_httpStatus - The response's HTTP status code.
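
Because newDetectFacesResponse needs only the HTTP status, response values are straightforward to stub out in tests. A minimal sketch, using the field and lens names documented in this module:

    import Amazonka.Rekognition
    import Control.Lens ((&), (?~))

    -- | A canned response with no detected faces, e.g. for exercising
    -- handler code without calling the service.
    emptyResponse :: DetectFacesResponse
    emptyResponse =
      newDetectFacesResponse 200
        & detectFacesResponse_faceDetails ?~ []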

Response Lenses

detectFacesResponse_orientationCorrection :: Lens' DetectFacesResponse (Maybe OrientationCorrection) Source #

The value of OrientationCorrection is always null.

If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image's orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don't contain Exif metadata.

Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren't translated and represent the object locations before the image is rotated.

detectFacesResponse_faceDetails :: Lens' DetectFacesResponse (Maybe [FaceDetail]) Source #

Details of each face found in the image.
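
As a closing sketch (same hedges as above; faceDetail_confidence is assumed to follow the generated lens naming for the FaceDetail type), the response lenses compose into folds, for example to keep only faces whose detection confidence clears a threshold:

    import Amazonka.Rekognition
    import Control.Lens ((^.), (^..), filtered, folded)
    import Data.Maybe (fromMaybe)

    -- | Faces whose confidence is at least the given threshold (in percent).
    confidentFaces :: Double -> DetectFacesResponse -> [FaceDetail]
    confidentFaces threshold resp =
      resp ^.. detectFacesResponse_faceDetails . folded . folded
             . filtered (\fd -> fromMaybe 0 (fd ^. faceDetail_confidence) >= threshold)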