Copyright | (c) 2013-2021 Brendan Hay |
---|---|
License | Mozilla Public License, v. 2.0. |
Maintainer | Brendan Hay <brendan.g.hay+amazonka@gmail.com> |
Stability | auto-generated |
Portability | non-portable (GHC extensions) |
Safe Haskell | None |
Gets the face search results for Amazon Rekognition Video face search started by StartFaceSearch. The search returns faces in a collection that match the faces of persons detected in a video. It also includes the time(s) that faces are matched in the video.
Face search in a video is an asynchronous operation. You start face search by calling StartFaceSearch, which returns a job identifier (JobId). When the search operation finishes, Amazon Rekognition Video publishes a completion status to the Amazon Simple Notification Service topic registered in the initial call to StartFaceSearch. To get the search results, first check that the status value published to the Amazon SNS topic is SUCCEEDED. If so, call GetFaceSearch and pass the job identifier (JobId) from the initial call to StartFaceSearch.
For more information, see Searching Faces in a Collection in the Amazon Rekognition Developer Guide.
The search results are returned in an array, Persons, of PersonMatch objects. Each PersonMatch element contains details about the matching faces in the input collection, person information (facial attributes, bounding boxes, and person identifier) for the matched person, and the time the person was matched in the video.
GetFaceSearch only returns the default facial attributes (BoundingBox, Confidence, Landmarks, Pose, and Quality). The other facial attributes listed in the Face object of the following response syntax are not returned. For more information, see FaceDetail in the Amazon Rekognition Developer Guide.
By default, the Persons array is sorted by the time, in milliseconds from the start of the video, that persons are matched. You can also sort by person by specifying INDEX for the SortBy input parameter.
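As a concrete illustration of that flow, here is a minimal sketch that builds a GetFaceSearch request with the lenses documented on this page and sends it once the job has completed. The Env/send plumbing (Amazonka.newEnv, Amazonka.discover, Amazonka.send) and the Amazonka.Rekognition module path are assumptions taken from the wider amazonka-2.0 layout, not details defined on this page.

```haskell
import qualified Amazonka
import Amazonka.Rekognition
import Control.Lens ((&), (?~), (^.))
import Control.Monad.IO.Class (liftIO)
import Control.Monad.Trans.Resource (runResourceT)
import Data.Text (Text)

-- Fetch the first page of results for a face search job that the SNS
-- notification has already reported as SUCCEEDED.  The JobId comes from
-- the earlier StartFaceSearch call.
fetchFaceSearch :: Text -> IO ()
fetchFaceSearch jobId = do
  env <- Amazonka.newEnv Amazonka.discover      -- assumed amazonka-core setup
  runResourceT $ do
    let request =
          newGetFaceSearch jobId
            & getFaceSearch_maxResults ?~ 100   -- optional: page size
    response <- Amazonka.send env request
    liftIO $ do
      print (response ^. getFaceSearchResponse_jobStatus)
      print (length <$> response ^. getFaceSearchResponse_persons)
```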
Synopsis
- data GetFaceSearch = GetFaceSearch' {}
- newGetFaceSearch :: Text -> GetFaceSearch
- getFaceSearch_nextToken :: Lens' GetFaceSearch (Maybe Text)
- getFaceSearch_maxResults :: Lens' GetFaceSearch (Maybe Natural)
- getFaceSearch_sortBy :: Lens' GetFaceSearch (Maybe FaceSearchSortBy)
- getFaceSearch_jobId :: Lens' GetFaceSearch Text
- data GetFaceSearchResponse = GetFaceSearchResponse' {}
- newGetFaceSearchResponse :: Int -> GetFaceSearchResponse
- getFaceSearchResponse_nextToken :: Lens' GetFaceSearchResponse (Maybe Text)
- getFaceSearchResponse_videoMetadata :: Lens' GetFaceSearchResponse (Maybe VideoMetadata)
- getFaceSearchResponse_statusMessage :: Lens' GetFaceSearchResponse (Maybe Text)
- getFaceSearchResponse_jobStatus :: Lens' GetFaceSearchResponse (Maybe VideoJobStatus)
- getFaceSearchResponse_persons :: Lens' GetFaceSearchResponse (Maybe [PersonMatch])
- getFaceSearchResponse_httpStatus :: Lens' GetFaceSearchResponse Int
Creating a Request
data GetFaceSearch Source #
See: newGetFaceSearch smart constructor.
GetFaceSearch'
Instances
newGetFaceSearch Source #
Create a value of GetFaceSearch with all optional fields omitted.
Use generic-lens or optics to modify other optional fields.
The following record fields are available, with the corresponding lenses provided for backwards compatibility:
$sel:nextToken:GetFaceSearch', getFaceSearch_nextToken - If the previous response was incomplete (because there are more search results to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of search results.
$sel:maxResults:GetFaceSearch', getFaceSearch_maxResults - Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.
$sel:sortBy:GetFaceSearch', getFaceSearch_sortBy - Sort to use for grouping faces in the response. Use TIMESTAMP to group faces by the time that they are recognized. Use INDEX to sort by recognized faces.
$sel:jobId:GetFaceSearch', getFaceSearch_jobId - The job identifier for the search request. You get the job identifier from an initial call to StartFaceSearch.
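A minimal sketch of that construction pattern: the required jobId goes through the smart constructor, and the optional fields are filled in afterwards with the lenses above. The FaceSearchSortBy_INDEX pattern name and the Amazonka.Rekognition import are assumptions about the surrounding library, not something documented on this page.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Rekognition
  ( FaceSearchSortBy (..)
  , GetFaceSearch
  , getFaceSearch_maxResults
  , getFaceSearch_sortBy
  , newGetFaceSearch
  )
import Control.Lens ((&), (?~))

-- Request for a hypothetical job, grouped by person index and limited
-- to 500 results per page.
sortedRequest :: GetFaceSearch
sortedRequest =
  newGetFaceSearch "example-job-id"            -- hypothetical JobId
    & getFaceSearch_maxResults ?~ 500
    & getFaceSearch_sortBy ?~ FaceSearchSortBy_INDEX
```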
Request Lenses
getFaceSearch_nextToken :: Lens' GetFaceSearch (Maybe Text) Source #
If the previous response was incomplete (because there are more search results to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of search results.
getFaceSearch_maxResults :: Lens' GetFaceSearch (Maybe Natural) Source #
Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.
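Because GetFaceSearch is paginated, a common pattern is to follow nextToken until it comes back empty, as in the hypothetical helper below (amazonka-core also ships a generic paginate helper that can stream pages for you). The Env/send plumbing is the same assumption as in the earlier sketch.

```haskell
import qualified Amazonka
import Amazonka.Rekognition
import Control.Lens ((&), (?~), (^.))
import Control.Monad.Trans.Resource (ResourceT)
import Data.Maybe (fromMaybe)

-- Hypothetical helper: collect every PersonMatch across all pages by
-- re-issuing the request with the nextToken from the previous response.
collectAllMatches :: Amazonka.Env -> GetFaceSearch -> ResourceT IO [PersonMatch]
collectAllMatches env request = do
  response <- Amazonka.send env request
  let matches = fromMaybe [] (response ^. getFaceSearchResponse_persons)
  case response ^. getFaceSearchResponse_nextToken of
    Nothing -> pure matches
    Just token ->
      (matches ++)
        <$> collectAllMatches env (request & getFaceSearch_nextToken ?~ token)
```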
getFaceSearch_sortBy :: Lens' GetFaceSearch (Maybe FaceSearchSortBy) Source #
Sort to use for grouping faces in the response. Use TIMESTAMP to group faces by the time that they are recognized. Use INDEX to sort by recognized faces.
getFaceSearch_jobId :: Lens' GetFaceSearch Text Source #
The job identifier for the search request. You get the job identifier from an initial call to StartFaceSearch.
Destructuring the Response
data GetFaceSearchResponse Source #
See: newGetFaceSearchResponse smart constructor.
GetFaceSearchResponse'
Instances
newGetFaceSearchResponse Source #
Create a value of GetFaceSearchResponse with all optional fields omitted.
Use generic-lens or optics to modify other optional fields.
The following record fields are available, with the corresponding lenses provided for backwards compatibility:
$sel:nextToken:GetFaceSearchResponse', getFaceSearchResponse_nextToken - If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of search results.
$sel:videoMetadata:GetFaceSearchResponse', getFaceSearchResponse_videoMetadata - Information about a video that Amazon Rekognition analyzed. VideoMetadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.
$sel:statusMessage:GetFaceSearchResponse', getFaceSearchResponse_statusMessage - If the job fails, StatusMessage provides a descriptive error message.
$sel:jobStatus:GetFaceSearchResponse', getFaceSearchResponse_jobStatus - The current status of the face search job.
$sel:persons:GetFaceSearchResponse', getFaceSearchResponse_persons - An array of persons, PersonMatch, in the video whose face(s) match the face(s) in an Amazon Rekognition collection. It also includes time information for when persons are matched in the video. You specify the input collection in an initial call to StartFaceSearch. Each Persons element includes a time the person was matched, face match details (FaceMatches) for matching faces in the collection, and person information (Person) for the matched person.
$sel:httpStatus:GetFaceSearchResponse', getFaceSearchResponse_httpStatus - The response's http status code.
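Taken together, a response is typically inspected by checking jobStatus first and then reading persons or statusMessage, as in the sketch below. The VideoJobStatus_SUCCEEDED and VideoJobStatus_FAILED pattern names are assumptions about the Types module, not something defined on this page.

```haskell
import Amazonka.Rekognition
import Control.Lens ((^.))
import qualified Data.Text as Text

-- Summarise a GetFaceSearchResponse using the lenses documented below.
summarize :: GetFaceSearchResponse -> String
summarize response =
  case response ^. getFaceSearchResponse_jobStatus of
    Just VideoJobStatus_SUCCEEDED ->
      "matched persons on this page: "
        <> show (maybe 0 length (response ^. getFaceSearchResponse_persons))
    Just VideoJobStatus_FAILED ->
      "face search failed: "
        <> maybe "(no status message)" Text.unpack
                 (response ^. getFaceSearchResponse_statusMessage)
    _ ->
      "face search still in progress or status not reported"
```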
Response Lenses
getFaceSearchResponse_nextToken :: Lens' GetFaceSearchResponse (Maybe Text) Source #
If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of search results.
getFaceSearchResponse_videoMetadata :: Lens' GetFaceSearchResponse (Maybe VideoMetadata) Source #
Information about a video that Amazon Rekognition analyzed. VideoMetadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.
getFaceSearchResponse_statusMessage :: Lens' GetFaceSearchResponse (Maybe Text) Source #
If the job fails, StatusMessage provides a descriptive error message.
getFaceSearchResponse_jobStatus :: Lens' GetFaceSearchResponse (Maybe VideoJobStatus) Source #
The current status of the face search job.
getFaceSearchResponse_persons :: Lens' GetFaceSearchResponse (Maybe [PersonMatch]) Source #
An array of persons, PersonMatch, in the video whose face(s) match the face(s) in an Amazon Rekognition collection. It also includes time information for when persons are matched in the video. You specify the input collection in an initial call to StartFaceSearch. Each Persons element includes a time the person was matched, face match details (FaceMatches) for matching faces in the collection, and person information (Person) for the matched person.
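Each element can be unpacked further with the PersonMatch lenses from the Types module; the personMatch_timestamp, personMatch_person, and personDetail_index names in the sketch below are assumptions about those types, not lenses documented on this page.

```haskell
import Amazonka.Rekognition
import Control.Lens ((^.), (^?), _Just)

-- For each match, pull out the millisecond timestamp and, when present,
-- the index of the matched person within the video.
matchSummary :: PersonMatch -> String
matchSummary match =
  "timestamp(ms)=" <> show (match ^. personMatch_timestamp)
    <> " personIndex="
    <> show (match ^? personMatch_person . _Just . personDetail_index . _Just)
```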
getFaceSearchResponse_httpStatus :: Lens' GetFaceSearchResponse Int Source #
The response's http status code.