Copyright | (c) 2013-2021 Brendan Hay |
---|---|
License | Mozilla Public License, v. 2.0. |
Maintainer | Brendan Hay <brendan.g.hay+amazonka@gmail.com> |
Stability | auto-generated |
Portability | non-portable (GHC extensions) |
Safe Haskell | None |
Compares a face in the source input image with each of the 100 largest faces detected in the target input image.
If the source image contains multiple faces, the service detects the largest face and compares it with each face detected in the target image.
CompareFaces uses machine learning algorithms, which are probabilistic. A false negative is an incorrect prediction that a face in the target image has a low similarity confidence score when compared to the face in the source image. To reduce the probability of false negatives, we recommend that you compare the target image against multiple source images. If you plan to use CompareFaces to make a decision that impacts an individual's rights, privacy, or access to services, we recommend that you pass the result to a human for review and further validation before taking action.
You pass the input and target images either as base64-encoded image bytes or as references to images in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes isn't supported. The image must be formatted as a PNG or JPEG file.
In response, the operation returns an array of face matches ordered by similarity score in descending order. For each face match, the response provides a bounding box of the face, facial landmarks, pose details (pitch, roll, and yaw), quality (brightness and sharpness), and confidence value (indicating the level of confidence that the bounding box contains a face). The response also provides a similarity score, which indicates how closely the faces match.
By default, only faces with a similarity score of greater than or equal to 80% are returned in the response. You can change this value by specifying the SimilarityThreshold parameter.
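For example, to keep only strong matches you can raise the threshold through the generated lens. A minimal sketch, assuming amazonka-rekognition 2.x lens names:

```haskell
import Amazonka.Rekognition
import Control.Lens ((&), (?~))

-- Only matches with a similarity score of at least 90% are returned.
strictCompare :: Image -> Image -> CompareFaces
strictCompare src tgt =
  newCompareFaces src tgt
    & compareFaces_similarityThreshold ?~ 90
```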
CompareFaces also returns an array of faces that don't match the source image. For each face, it returns a bounding box, confidence value, landmarks, pose details, and quality. The response also returns information about the face in the source image, including the bounding box of the face and confidence value.
The QualityFilter input parameter allows you to filter out detected faces that don't meet a required quality bar. The quality bar is based on a variety of common use cases. Use QualityFilter to set the quality bar by specifying LOW, MEDIUM, or HIGH. If you do not want to filter detected faces, specify NONE. The default value is NONE.
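As a sketch, the filter values render as pattern synonyms under the amazonka 2.x naming scheme (names assumed from that scheme):

```haskell
import Amazonka.Rekognition
import Control.Lens ((&), (?~))

-- Ask Amazon Rekognition to choose the quality bar itself.
withAutoQuality :: CompareFaces -> CompareFaces
withAutoQuality req =
  req & compareFaces_qualityFilter ?~ QualityFilter_AUTO
```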
If the image doesn't contain Exif metadata, CompareFaces returns orientation information for the source and target images. Use these values to display the images with the correct image orientation.
If no faces are detected in the source or target images, CompareFaces returns an InvalidParameterException error.
This is a stateless API operation. That is, data returned by this operation doesn't persist.
For an example, see Comparing Faces in Images in the Amazon Rekognition Developer Guide.
This operation requires permissions to perform the rekognition:CompareFaces action.
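Putting the pieces together, a minimal end-to-end sketch with amazonka 2.x follows; the bucket and object keys are hypothetical, and credentials are discovered from the environment:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Amazonka (discover, newEnv, runResourceT, send)
import Amazonka.Rekognition
import Control.Lens ((&), (?~), (^.))
import Data.Text (Text)

-- Reference an image stored in S3 (bucket and key are hypothetical).
s3Image :: Text -> Text -> Image
s3Image bucket key =
  newImage
    & image_s3Object
      ?~ (newS3Object & s3Object_bucket ?~ bucket
                      & s3Object_name   ?~ key)

main :: IO ()
main = do
  env <- newEnv discover
  let req =
        newCompareFaces
          (s3Image "my-bucket" "faces/source.jpg")
          (s3Image "my-bucket" "faces/target.jpg")
          & compareFaces_similarityThreshold ?~ 90
  resp <- runResourceT (send env req)
  print (resp ^. compareFacesResponse_faceMatches)
```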
Synopsis
- data CompareFaces = CompareFaces' {}
- newCompareFaces :: Image -> Image -> CompareFaces
- compareFaces_qualityFilter :: Lens' CompareFaces (Maybe QualityFilter)
- compareFaces_similarityThreshold :: Lens' CompareFaces (Maybe Double)
- compareFaces_sourceImage :: Lens' CompareFaces Image
- compareFaces_targetImage :: Lens' CompareFaces Image
- data CompareFacesResponse = CompareFacesResponse' {}
- newCompareFacesResponse :: Int -> CompareFacesResponse
- compareFacesResponse_faceMatches :: Lens' CompareFacesResponse (Maybe [CompareFacesMatch])
- compareFacesResponse_unmatchedFaces :: Lens' CompareFacesResponse (Maybe [ComparedFace])
- compareFacesResponse_targetImageOrientationCorrection :: Lens' CompareFacesResponse (Maybe OrientationCorrection)
- compareFacesResponse_sourceImageOrientationCorrection :: Lens' CompareFacesResponse (Maybe OrientationCorrection)
- compareFacesResponse_sourceImageFace :: Lens' CompareFacesResponse (Maybe ComparedSourceImageFace)
- compareFacesResponse_httpStatus :: Lens' CompareFacesResponse Int
Creating a Request
data CompareFaces Source #
See: newCompareFaces smart constructor.
Constructors: CompareFaces'
newCompareFaces :: Image -> Image -> CompareFaces Source #

Create a value of CompareFaces with all optional fields omitted.
Use generic-lens or optics to modify other optional fields.
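For instance, the record fields listed below can also be reached through OverloadedLabels; a sketch, assuming the generic-lens package:

```haskell
{-# LANGUAGE OverloadedLabels #-}

import Amazonka.Rekognition
import Control.Lens ((&), (?~))
import Data.Generics.Labels ()  -- IsLabel instances for record fields

-- Adjust an optional field by its record name instead of the generated lens.
tweak :: CompareFaces -> CompareFaces
tweak req = req & #similarityThreshold ?~ 80
```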
The following record fields are available, with the corresponding lenses provided for backwards compatibility:
$sel:qualityFilter:CompareFaces', compareFaces_qualityFilter - A filter that specifies a quality bar for how much filtering is done to identify faces. Filtered faces aren't compared. If you specify AUTO, Amazon Rekognition chooses the quality bar. If you specify LOW, MEDIUM, or HIGH, filtering removes all faces that don't meet the chosen quality bar. The quality bar is based on a variety of common use cases. Low-quality detections can occur for a number of reasons. Some examples are an object that's misidentified as a face, a face that's too blurry, or a face with a pose that's too extreme to use. If you specify NONE, no filtering is performed. The default value is NONE.
To use quality filtering, the collection you are using must be associated with version 3 of the face model or higher.
$sel:similarityThreshold:CompareFaces', compareFaces_similarityThreshold - The minimum level of confidence in the face matches that a match must meet to be included in the FaceMatches array.
$sel:sourceImage:CompareFaces', compareFaces_sourceImage - The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.

If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.
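A sketch of passing raw bytes from an SDK (module layout and Base64 wrapper as in amazonka 2.x; the file path is hypothetical):

```haskell
import Amazonka.Data (Base64 (..))
import Amazonka.Rekognition
import Control.Lens ((&), (?~))
import qualified Data.ByteString as BS

-- Load a local JPEG and pass it via the Bytes field; amazonka's
-- Base64 newtype handles the encoding on the wire.
imageFromFile :: FilePath -> IO Image
imageFromFile path = do
  bytes <- BS.readFile path
  pure (newImage & image_bytes ?~ Base64 bytes)
```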
$sel:targetImage:CompareFaces', compareFaces_targetImage - The target image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.

If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.
Request Lenses
compareFaces_qualityFilter :: Lens' CompareFaces (Maybe QualityFilter) Source #
A filter that specifies a quality bar for how much filtering is done to identify faces. Filtered faces aren't compared. If you specify AUTO, Amazon Rekognition chooses the quality bar. If you specify LOW, MEDIUM, or HIGH, filtering removes all faces that don't meet the chosen quality bar. The quality bar is based on a variety of common use cases. Low-quality detections can occur for a number of reasons. Some examples are an object that's misidentified as a face, a face that's too blurry, or a face with a pose that's too extreme to use. If you specify NONE, no filtering is performed. The default value is NONE.
To use quality filtering, the collection you are using must be associated with version 3 of the face model or higher.
compareFaces_similarityThreshold :: Lens' CompareFaces (Maybe Double) Source #
The minimum level of confidence in the face matches that a match must meet to be included in the FaceMatches array.
compareFaces_sourceImage :: Lens' CompareFaces Image Source #
The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.
If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.
compareFaces_targetImage :: Lens' CompareFaces Image Source #
The target image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.
If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.
Destructuring the Response
data CompareFacesResponse Source #
See: newCompareFacesResponse smart constructor.
Constructors: CompareFacesResponse'
newCompareFacesResponse Source #
Create a value of CompareFacesResponse with all optional fields omitted.
Use generic-lens or optics to modify other optional fields.
The following record fields are available, with the corresponding lenses provided for backwards compatibility:
$sel:faceMatches:CompareFacesResponse', compareFacesResponse_faceMatches - An array of faces in the target image that match the source image face. Each CompareFacesMatch object provides the bounding box, the confidence level that the bounding box contains a face, and the similarity score for the face in the bounding box and the face in the source image.
$sel:unmatchedFaces:CompareFacesResponse', compareFacesResponse_unmatchedFaces - An array of faces in the target image that did not match the source image face.
$sel:targetImageOrientationCorrection:CompareFacesResponse', compareFacesResponse_targetImageOrientationCorrection - The value of TargetImageOrientationCorrection is always null.
If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image's orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don't contain Exif metadata.
Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren't translated and represent the object locations before the image is rotated.
$sel:sourceImageOrientationCorrection:CompareFacesResponse', compareFacesResponse_sourceImageOrientationCorrection - The value of SourceImageOrientationCorrection is always null.
If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image's orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don't contain Exif metadata.
Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren't translated and represent the object locations before the image is rotated.
$sel:sourceImageFace:CompareFacesResponse', compareFacesResponse_sourceImageFace - The face in the source image that was used for comparison.
$sel:httpStatus:CompareFacesResponse', compareFacesResponse_httpStatus - The response's HTTP status code.
Response Lenses
compareFacesResponse_faceMatches :: Lens' CompareFacesResponse (Maybe [CompareFacesMatch]) Source #
An array of faces in the target image that match the source image face. Each CompareFacesMatch object provides the bounding box, the confidence level that the bounding box contains a face, and the similarity score for the face in the bounding box and the face in the source image.
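For example, pulling every similarity score out of a response (a sketch; CompareFacesMatch lens names assumed from the amazonka 2.x scheme):

```haskell
import Amazonka.Rekognition
import Control.Lens ((^.))
import Data.Maybe (fromMaybe)

-- Collect the similarity score of each match; absent scores drop out.
similarities :: CompareFacesResponse -> [Double]
similarities resp =
  [ s
  | m <- fromMaybe [] (resp ^. compareFacesResponse_faceMatches)
  , Just s <- [m ^. compareFacesMatch_similarity]
  ]
```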
compareFacesResponse_unmatchedFaces :: Lens' CompareFacesResponse (Maybe [ComparedFace]) Source #
An array of faces in the target image that did not match the source image face.
compareFacesResponse_targetImageOrientationCorrection :: Lens' CompareFacesResponse (Maybe OrientationCorrection) Source #
The value of TargetImageOrientationCorrection is always null.
If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image's orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don't contain Exif metadata.
Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren't translated and represent the object locations before the image is rotated.
compareFacesResponse_sourceImageOrientationCorrection :: Lens' CompareFacesResponse (Maybe OrientationCorrection) Source #
The value of SourceImageOrientationCorrection is always null.
If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image's orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don't contain Exif metadata.
Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren't translated and represent the object locations before the image is rotated.
compareFacesResponse_sourceImageFace :: Lens' CompareFacesResponse (Maybe ComparedSourceImageFace) Source #
The face in the source image that was used for comparison.
compareFacesResponse_httpStatus :: Lens' CompareFacesResponse Int Source #
The response's HTTP status code.