libZSservicesZSamazonka-rekognitionZSamazonka-rekognition
Copyright    (c) 2013-2021 Brendan Hay
License      Mozilla Public License, v. 2.0.
Maintainer   Brendan Hay <brendan.g.hay+amazonka@gmail.com>
Stability    auto-generated
Portability  non-portable (GHC extensions)
Safe Haskell None

Amazonka.Rekognition.GetFaceSearch

Description

Gets the face search results for Amazon Rekognition Video face search started by StartFaceSearch. The search returns faces in a collection that match the faces of persons detected in a video. It also includes the time(s) that faces are matched in the video.

Face search in a video is an asynchronous operation. You start face search by calling StartFaceSearch, which returns a job identifier (JobId). When the search operation finishes, Amazon Rekognition Video publishes a completion status to the Amazon Simple Notification Service topic registered in the initial call to StartFaceSearch. To get the search results, first check that the status value published to the Amazon SNS topic is SUCCEEDED. If so, call GetFaceSearch and pass the job identifier (JobId) from the initial call to StartFaceSearch.

For more information, see Searching Faces in a Collection in the Amazon Rekognition Developer Guide.

The search results are returned in an array, Persons, of PersonMatch objects. Each PersonMatch element contains details about the matching faces in the input collection, person information (facial attributes, bounding boxes, and person identifier) for the matched person, and the time the person was matched in the video.

GetFaceSearch only returns the default facial attributes (BoundingBox, Confidence, Landmarks, Pose, and Quality). The other facial attributes listed in the Face object of the following response syntax are not returned. For more information, see FaceDetail in the Amazon Rekognition Developer Guide.

By default, the Persons array is sorted by the time, in milliseconds from the start of the video, that persons are matched. You can also sort by person by specifying INDEX for the SORTBY input parameter.
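The pieces above fit together roughly as follows. This is a minimal sketch, not part of the generated API: it assumes the amazonka 2.x entry points (newEnv, discover, send) and the (^.) operator from the lens package (earlier amazonka snapshots spell the environment setup differently), and the job identifier is a placeholder standing in for the JobId returned by StartFaceSearch.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka (discover, newEnv, send)
import Amazonka.Rekognition.GetFaceSearch
import Control.Lens ((^.))
import Control.Monad.Trans.Resource (runResourceT)

fetchFaceSearch :: IO ()
fetchFaceSearch = do
  env <- newEnv discover
  -- jobId is the only required field of the smart constructor;
  -- "example-job-id" is a placeholder.
  let request = newGetFaceSearch "example-job-id"
  response <- runResourceT (send env request)
  -- Results are only meaningful once the SNS-published status is SUCCEEDED;
  -- jobStatus echoes that state in the response itself.
  print (response ^. getFaceSearchResponse_jobStatus)
  print (length <$> response ^. getFaceSearchResponse_persons)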

Synopsis

Creating a Request

data GetFaceSearch Source #

See: newGetFaceSearch smart constructor.

Constructors

GetFaceSearch' 

Fields

  • nextToken :: Maybe Text

    If the previous response was incomplete (because there are more search results to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of search results.

  • maxResults :: Maybe Natural

    Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.

  • sortBy :: Maybe FaceSearchSortBy

    Sort to use for grouping faces in the response. Use TIMESTAMP to group faces by the time that they are recognized. Use INDEX to sort by recognized faces.

  • jobId :: Text

    The job identifier for the search request. You get the job identifier from an initial call to StartFaceSearch. (A request-construction sketch follows this field list.)
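A minimal construction sketch for this request, assuming the (&) and (?~) operators from the lens package and that the generated pattern synonym for the INDEX sort order is named FaceSearchSortBy_INDEX (a naming convention of the generated code, not confirmed by this page); the job identifier is a placeholder.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Rekognition
import Control.Lens ((&), (?~))

-- Start from the smart constructor (jobId is the only required field) and
-- fill in the optional fields through the generated lenses.
searchRequest :: GetFaceSearch
searchRequest =
  newGetFaceSearch "example-job-id"                  -- placeholder JobId
    & getFaceSearch_maxResults ?~ 100                -- cap each page at 100 results
    & getFaceSearch_sortBy ?~ FaceSearchSortBy_INDEX -- group by recognized face (assumed name)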

Instances

Instances details
Eq GetFaceSearch Source # 
Instance details

Defined in Amazonka.Rekognition.GetFaceSearch

Read GetFaceSearch Source # 
Instance details

Defined in Amazonka.Rekognition.GetFaceSearch

Show GetFaceSearch Source # 
Instance details

Defined in Amazonka.Rekognition.GetFaceSearch

Generic GetFaceSearch Source # 
Instance details

Defined in Amazonka.Rekognition.GetFaceSearch

Associated Types

type Rep GetFaceSearch :: Type -> Type #

NFData GetFaceSearch Source # 
Instance details

Defined in Amazonka.Rekognition.GetFaceSearch

Methods

rnf :: GetFaceSearch -> () #

Hashable GetFaceSearch Source # 
Instance details

Defined in Amazonka.Rekognition.GetFaceSearch

ToJSON GetFaceSearch Source # 
Instance details

Defined in Amazonka.Rekognition.GetFaceSearch

AWSRequest GetFaceSearch Source # 
Instance details

Defined in Amazonka.Rekognition.GetFaceSearch

Associated Types

type AWSResponse GetFaceSearch #

ToHeaders GetFaceSearch Source # 
Instance details

Defined in Amazonka.Rekognition.GetFaceSearch

ToPath GetFaceSearch Source # 
Instance details

Defined in Amazonka.Rekognition.GetFaceSearch

ToQuery GetFaceSearch Source # 
Instance details

Defined in Amazonka.Rekognition.GetFaceSearch

type Rep GetFaceSearch Source # 
Instance details

Defined in Amazonka.Rekognition.GetFaceSearch

type Rep GetFaceSearch = D1 ('MetaData "GetFaceSearch" "Amazonka.Rekognition.GetFaceSearch" "libZSservicesZSamazonka-rekognitionZSamazonka-rekognition" 'False) (C1 ('MetaCons "GetFaceSearch'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "nextToken") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "maxResults") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "sortBy") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe FaceSearchSortBy)) :*: S1 ('MetaSel ('Just "jobId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Text))))
type AWSResponse GetFaceSearch Source # 
Instance details

Defined in Amazonka.Rekognition.GetFaceSearch

newGetFaceSearch Source #

Create a value of GetFaceSearch with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:nextToken:GetFaceSearch', getFaceSearch_nextToken - If the previous response was incomplete (because there are more search results to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of search results.

$sel:maxResults:GetFaceSearch', getFaceSearch_maxResults - Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.

$sel:sortBy:GetFaceSearch', getFaceSearch_sortBy - Sort to use for grouping faces in the response. Use TIMESTAMP to group faces by the time that they are recognized. Use INDEX to sort by recognized faces.

$sel:jobId:GetFaceSearch', getFaceSearch_jobId - The job identifier for the search request. You get the job identifier from an initial call to StartFaceSearch.

Request Lenses

getFaceSearch_nextToken :: Lens' GetFaceSearch (Maybe Text) Source #

If the previous response was incomplete (because there are more search results to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of search results.

getFaceSearch_maxResults :: Lens' GetFaceSearch (Maybe Natural) Source #

Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.

getFaceSearch_sortBy :: Lens' GetFaceSearch (Maybe FaceSearchSortBy) Source #

Sort to use for grouping faces in the response. Use TIMESTAMP to group faces by the time that they are recognized. Use INDEX to sort by recognized faces.

getFaceSearch_jobId :: Lens' GetFaceSearch Text Source #

The job identifier for the search request. You get the job identifier from an initial call to StartFaceSearch.

Destructuring the Response

data GetFaceSearchResponse Source #

See: newGetFaceSearchResponse smart constructor.

Constructors

GetFaceSearchResponse' 

Fields

  • nextToken :: Maybe Text

    If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of search results.

  • videoMetadata :: Maybe VideoMetadata

    Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.

  • statusMessage :: Maybe Text

    If the job fails, StatusMessage provides a descriptive error message.

  • jobStatus :: Maybe VideoJobStatus

    The current status of the face search job.

  • persons :: Maybe [PersonMatch]

    An array of persons, PersonMatch, in the video whose face(s) match the face(s) in an Amazon Rekognition collection. It also includes time information for when persons are matched in the video. You specify the input collection in an initial call to StartFaceSearch. Each Persons element includes a time the person was matched, face match details (FaceMatches) for matching faces in the collection, and person information (Person) for the matched person.

  • httpStatus :: Int

    The response's HTTP status code.

Instances

Instances details
Eq GetFaceSearchResponse Source # 
Instance details

Defined in Amazonka.Rekognition.GetFaceSearch

Read GetFaceSearchResponse Source # 
Instance details

Defined in Amazonka.Rekognition.GetFaceSearch

Show GetFaceSearchResponse Source # 
Instance details

Defined in Amazonka.Rekognition.GetFaceSearch

Generic GetFaceSearchResponse Source # 
Instance details

Defined in Amazonka.Rekognition.GetFaceSearch

Associated Types

type Rep GetFaceSearchResponse :: Type -> Type #

NFData GetFaceSearchResponse Source # 
Instance details

Defined in Amazonka.Rekognition.GetFaceSearch

Methods

rnf :: GetFaceSearchResponse -> () #

type Rep GetFaceSearchResponse Source # 
Instance details

Defined in Amazonka.Rekognition.GetFaceSearch

type Rep GetFaceSearchResponse = D1 ('MetaData "GetFaceSearchResponse" "Amazonka.Rekognition.GetFaceSearch" "libZSservicesZSamazonka-rekognitionZSamazonka-rekognition" 'False) (C1 ('MetaCons "GetFaceSearchResponse'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "nextToken") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "videoMetadata") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe VideoMetadata)) :*: S1 ('MetaSel ('Just "statusMessage") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))) :*: (S1 ('MetaSel ('Just "jobStatus") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe VideoJobStatus)) :*: (S1 ('MetaSel ('Just "persons") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [PersonMatch])) :*: S1 ('MetaSel ('Just "httpStatus") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Int)))))

newGetFaceSearchResponse Source #

Create a value of GetFaceSearchResponse with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:nextToken:GetFaceSearchResponse', getFaceSearchResponse_nextToken - If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of search results.

$sel:videoMetadata:GetFaceSearchResponse', getFaceSearchResponse_videoMetadata - Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.

$sel:statusMessage:GetFaceSearchResponse', getFaceSearchResponse_statusMessage - If the job fails, StatusMessage provides a descriptive error message.

$sel:jobStatus:GetFaceSearchResponse', getFaceSearchResponse_jobStatus - The current status of the face search job.

$sel:persons:GetFaceSearchResponse', getFaceSearchResponse_persons - An array of persons, PersonMatch, in the video whose face(s) match the face(s) in an Amazon Rekognition collection. It also includes time information for when persons are matched in the video. You specify the input collection in an initial call to StartFaceSearch. Each Persons element includes a time the person was matched, face match details (FaceMatches) for matching faces in the collection, and person information (Person) for the matched person.

$sel:httpStatus:GetFaceSearchResponse', getFaceSearchResponse_httpStatus - The response's HTTP status code. (A stub-construction sketch follows this list.)
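For unit tests it can be convenient to build a response value directly. A minimal sketch, assuming newGetFaceSearchResponse takes the required httpStatus as its argument and that the generated pattern synonym for a finished job is named VideoJobStatus_SUCCEEDED (both are conventions of the generated code, not confirmed by this page).

import Amazonka.Rekognition
import Control.Lens ((&), (?~))

-- A stubbed response: only the required httpStatus is supplied to the smart
-- constructor, then two optional fields are filled in; the rest stay Nothing.
stubResponse :: GetFaceSearchResponse
stubResponse =
  newGetFaceSearchResponse 200
    & getFaceSearchResponse_jobStatus ?~ VideoJobStatus_SUCCEEDED -- assumed name
    & getFaceSearchResponse_persons ?~ []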

Response Lenses

getFaceSearchResponse_nextToken :: Lens' GetFaceSearchResponse (Maybe Text) Source #

If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of search results.

getFaceSearchResponse_videoMetadata :: Lens' GetFaceSearchResponse (Maybe VideoMetadata) Source #

Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.

getFaceSearchResponse_statusMessage :: Lens' GetFaceSearchResponse (Maybe Text) Source #

If the job fails, StatusMessage provides a descriptive error message.

getFaceSearchResponse_persons :: Lens' GetFaceSearchResponse (Maybe [PersonMatch]) Source #

An array of persons, PersonMatch, in the video whose face(s) match the face(s) in an Amazon Rekognition collection. It also includes time information for when persons are matched in the video. You specify the input collection in an initial call to StartFaceSearch. Each Persons element includes a time the person was matched, face match details (FaceMatches) for matching faces in the collection, and person information (Person) for the matched person.
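Search results arrive in pages linked by NextToken, so callers typically loop until no token is returned. A minimal pagination sketch using the request and response lenses above; it assumes an Env obtained as in the earlier sketch (newEnv discover), the lens operators, and a placeholder job identifier.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka (Env, send)
import Amazonka.Rekognition
import Control.Lens ((&), (.~), (^.))
import Control.Monad.Trans.Resource (runResourceT)
import Data.Maybe (fromMaybe)

-- Follow NextToken until Amazon Rekognition Video stops returning one,
-- accumulating every PersonMatch along the way.
allPersonMatches :: Env -> IO [PersonMatch]
allPersonMatches env = go Nothing
  where
    go token = do
      response <-
        runResourceT . send env $
          newGetFaceSearch "example-job-id"      -- placeholder JobId
            & getFaceSearch_nextToken .~ token
      let matches = fromMaybe [] (response ^. getFaceSearchResponse_persons)
      case response ^. getFaceSearchResponse_nextToken of
        Nothing   -> pure matches
        Just next -> (matches ++) <$> go (Just next)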