libZSservicesZSamazonka-rekognitionZSamazonka-rekognition

Copyright    : (c) 2013-2021 Brendan Hay
License      : Mozilla Public License, v. 2.0.
Maintainer   : Brendan Hay <brendan.g.hay+amazonka@gmail.com>
Stability    : auto-generated
Portability  : non-portable (GHC extensions)
Safe Haskell : None

Amazonka.Rekognition.GetSegmentDetection

Description

Gets the segment detection results of an Amazon Rekognition Video analysis started by StartSegmentDetection.

Segment detection with Amazon Rekognition Video is an asynchronous operation. You start segment detection by calling StartSegmentDetection, which returns a job identifier (JobId). When the segment detection operation finishes, Amazon Rekognition publishes a completion status to the Amazon Simple Notification Service topic registered in the initial call to StartSegmentDetection. To get the results of the segment detection operation, first check that the status value published to the Amazon SNS topic is SUCCEEDED. If so, call GetSegmentDetection and pass the job identifier (JobId) from the initial call of StartSegmentDetection.

GetSegmentDetection returns detected segments in an array (Segments) of SegmentDetection objects. Segments is sorted by the segment types specified in the SegmentTypes input parameter of StartSegmentDetection. Each element of the array includes the detected segment, the percentage confidence in the accuracy of the detected segment, the type of the segment, and the frame in which the segment was detected.

Use SelectedSegmentTypes to find out the type of segment detection requested in the call to StartSegmentDetection.

Use the MaxResults parameter to limit the number of segment detections returned. If there are more results than specified in MaxResults, the value of NextToken in the operation response contains a pagination token for getting the next set of results. To get the next page of results, call GetSegmentDetection and populate the NextToken request parameter with the token value returned from the previous call to GetSegmentDetection.

For more information, see Detecting Video Segments in Stored Video in the Amazon Rekognition Developer Guide.
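As a minimal sketch of how the pieces fit together, the snippet below fetches one page of results for a job whose SNS notification reported SUCCEEDED. It assumes the general amazonka 2.x runtime style (newEnv with discover, runResourceT, send), which may need adjusting for your amazonka version, and "my-job-id" is a placeholder for the JobId returned by StartSegmentDetection.

{-# LANGUAGE OverloadedStrings #-}

import qualified Amazonka
import Control.Monad.Trans.Resource (runResourceT)
import Amazonka.Rekognition.GetSegmentDetection

-- Fetch one page of segment detection results for a finished job.
-- "my-job-id" is a placeholder for the JobId from StartSegmentDetection.
fetchSegments :: IO GetSegmentDetectionResponse
fetchSegments = do
  env <- Amazonka.newEnv Amazonka.discover   -- credentials from the standard discovery chain
  runResourceT (Amazonka.send env (newGetSegmentDetection "my-job-id"))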

Synopsis

Creating a Request

data GetSegmentDetection Source #

See: newGetSegmentDetection smart constructor.

Constructors

GetSegmentDetection' 

Fields

  • nextToken :: Maybe Text

    If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of results.

  • maxResults :: Maybe Natural

    Maximum number of results to return per paginated call. The largest value you can specify is 1000.

  • jobId :: Text

    Job identifier for the segment detection operation for which you want results returned. You get the job identifier from the initial call to StartSegmentDetection.

Instances

Instances details
Eq GetSegmentDetection Source # 
Instance details

Defined in Amazonka.Rekognition.GetSegmentDetection

Read GetSegmentDetection Source # 
Instance details

Defined in Amazonka.Rekognition.GetSegmentDetection

Show GetSegmentDetection Source # 
Instance details

Defined in Amazonka.Rekognition.GetSegmentDetection

Generic GetSegmentDetection Source # 
Instance details

Defined in Amazonka.Rekognition.GetSegmentDetection

Associated Types

type Rep GetSegmentDetection :: Type -> Type #

NFData GetSegmentDetection Source # 
Instance details

Defined in Amazonka.Rekognition.GetSegmentDetection

Methods

rnf :: GetSegmentDetection -> () #

Hashable GetSegmentDetection Source # 
Instance details

Defined in Amazonka.Rekognition.GetSegmentDetection

ToJSON GetSegmentDetection Source # 
Instance details

Defined in Amazonka.Rekognition.GetSegmentDetection

AWSRequest GetSegmentDetection Source # 
Instance details

Defined in Amazonka.Rekognition.GetSegmentDetection

Associated Types

type AWSResponse GetSegmentDetection #

ToHeaders GetSegmentDetection Source # 
Instance details

Defined in Amazonka.Rekognition.GetSegmentDetection

ToPath GetSegmentDetection Source # 
Instance details

Defined in Amazonka.Rekognition.GetSegmentDetection

ToQuery GetSegmentDetection Source # 
Instance details

Defined in Amazonka.Rekognition.GetSegmentDetection

type Rep GetSegmentDetection Source # 
Instance details

Defined in Amazonka.Rekognition.GetSegmentDetection

type Rep GetSegmentDetection = D1 ('MetaData "GetSegmentDetection" "Amazonka.Rekognition.GetSegmentDetection" "libZSservicesZSamazonka-rekognitionZSamazonka-rekognition" 'False) (C1 ('MetaCons "GetSegmentDetection'" 'PrefixI 'True) (S1 ('MetaSel ('Just "nextToken") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "maxResults") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "jobId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Text))))
type AWSResponse GetSegmentDetection Source # 
Instance details

Defined in Amazonka.Rekognition.GetSegmentDetection

newGetSegmentDetection Source #

Create a value of GetSegmentDetection with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:nextToken:GetSegmentDetection', getSegmentDetection_nextToken - If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of results.

$sel:maxResults:GetSegmentDetection', getSegmentDetection_maxResults - Maximum number of results to return per paginated call. The largest value you can specify is 1000.

$sel:jobId:GetSegmentDetection', getSegmentDetection_jobId - Job identifier for the segment detection operation for which you want results returned. You get the job identifier from the initial call to StartSegmentDetection.
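As a hedged sketch, the optional fields can be set with the generated lenses after calling the smart constructor. The (&) and (?~) operators below are assumed to come from lens (generic-lens or optics equivalents work similarly), and "my-job-id" is a placeholder JobId.

{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))
import Amazonka.Rekognition.GetSegmentDetection

-- Request the first page, capped at 500 results per page.
request :: GetSegmentDetection
request =
  newGetSegmentDetection "my-job-id"
    & getSegmentDetection_maxResults ?~ 500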

Request Lenses

getSegmentDetection_nextToken :: Lens' GetSegmentDetection (Maybe Text) Source #

If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of results.

getSegmentDetection_maxResults :: Lens' GetSegmentDetection (Maybe Natural) Source #

Maximum number of results to return per paginated call. The largest value you can specify is 1000.

getSegmentDetection_jobId :: Lens' GetSegmentDetection Text Source #

Job identifier for the segment detection operation for which you want results returned. You get the job identifier from the initial call to StartSegmentDetection.

Destructuring the Response

data GetSegmentDetectionResponse Source #

See: newGetSegmentDetectionResponse smart constructor.

Constructors

GetSegmentDetectionResponse' 

Fields

  • selectedSegmentTypes :: Maybe [SegmentTypeInfo]

    An array containing the segment types requested in the call to StartSegmentDetection.

  • nextToken :: Maybe Text

    If the previous response was incomplete (because there are more segments to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.

  • videoMetadata :: Maybe [VideoMetadata]

    Currently, Amazon Rekognition Video returns a single object in the VideoMetadata array. The object contains information about the video stream in the input file that Amazon Rekognition Video chose to analyze. The VideoMetadata object includes the video codec, video format and other information. Video metadata is returned in each page of information returned by GetSegmentDetection.

  • statusMessage :: Maybe Text

    If the job fails, StatusMessage provides a descriptive error message.

  • segments :: Maybe [SegmentDetection]

    An array of segments detected in a video. The array is sorted by the segment types (TECHNICAL_CUE or SHOT) specified in the SegmentTypes input parameter of StartSegmentDetection. Within each segment type the array is sorted by timestamp values.

  • jobStatus :: Maybe VideoJobStatus

    Current status of the segment detection job.

  • audioMetadata :: Maybe [AudioMetadata]

    An array of objects. There can be multiple audio streams. Each AudioMetadata object contains metadata for a single audio stream. Audio information in an AudioMetadata object includes the audio codec, the number of audio channels, the duration of the audio stream, and the sample rate. Audio metadata is returned in each page of information returned by GetSegmentDetection.

  • httpStatus :: Int

    The response's HTTP status code.
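A short sketch of inspecting one page of the response with the generated lenses; the (^.) operator is from lens, and optics equivalents work the same way.

import Control.Lens ((^.))
import Amazonka.Rekognition.GetSegmentDetection

-- Summarise one page of the response: job status and segment count.
summarise :: GetSegmentDetectionResponse -> String
summarise resp =
  let status = resp ^. getSegmentDetectionResponse_jobStatus
      count  = maybe 0 length (resp ^. getSegmentDetectionResponse_segments)
  in "Job status: " ++ show status ++ ", segments on this page: " ++ show count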

Instances

Instances details
Eq GetSegmentDetectionResponse Source # 
Instance details

Defined in Amazonka.Rekognition.GetSegmentDetection

Read GetSegmentDetectionResponse Source # 
Instance details

Defined in Amazonka.Rekognition.GetSegmentDetection

Show GetSegmentDetectionResponse Source # 
Instance details

Defined in Amazonka.Rekognition.GetSegmentDetection

Generic GetSegmentDetectionResponse Source # 
Instance details

Defined in Amazonka.Rekognition.GetSegmentDetection

Associated Types

type Rep GetSegmentDetectionResponse :: Type -> Type #

NFData GetSegmentDetectionResponse Source # 
Instance details

Defined in Amazonka.Rekognition.GetSegmentDetection

type Rep GetSegmentDetectionResponse Source # 
Instance details

Defined in Amazonka.Rekognition.GetSegmentDetection

type Rep GetSegmentDetectionResponse = D1 ('MetaData "GetSegmentDetectionResponse" "Amazonka.Rekognition.GetSegmentDetection" "libZSservicesZSamazonka-rekognitionZSamazonka-rekognition" 'False) (C1 ('MetaCons "GetSegmentDetectionResponse'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "selectedSegmentTypes") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [SegmentTypeInfo])) :*: S1 ('MetaSel ('Just "nextToken") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "videoMetadata") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [VideoMetadata])) :*: S1 ('MetaSel ('Just "statusMessage") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))) :*: ((S1 ('MetaSel ('Just "segments") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [SegmentDetection])) :*: S1 ('MetaSel ('Just "jobStatus") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe VideoJobStatus))) :*: (S1 ('MetaSel ('Just "audioMetadata") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [AudioMetadata])) :*: S1 ('MetaSel ('Just "httpStatus") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Int)))))

newGetSegmentDetectionResponse Source #

Create a value of GetSegmentDetectionResponse with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:selectedSegmentTypes:GetSegmentDetectionResponse', getSegmentDetectionResponse_selectedSegmentTypes - An array containing the segment types requested in the call to StartSegmentDetection.

$sel:nextToken:GetSegmentDetection', getSegmentDetectionResponse_nextToken - If the previous response was incomplete (because there are more segments to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.

$sel:videoMetadata:GetSegmentDetectionResponse', getSegmentDetectionResponse_videoMetadata - Currently, Amazon Rekognition Video returns a single object in the VideoMetadata array. The object contains information about the video stream in the input file that Amazon Rekognition Video chose to analyze. The VideoMetadata object includes the video codec, video format and other information. Video metadata is returned in each page of information returned by GetSegmentDetection.

$sel:statusMessage:GetSegmentDetectionResponse', getSegmentDetectionResponse_statusMessage - If the job fails, StatusMessage provides a descriptive error message.

$sel:segments:GetSegmentDetectionResponse', getSegmentDetectionResponse_segments - An array of segments detected in a video. The array is sorted by the segment types (TECHNICAL_CUE or SHOT) specified in the SegmentTypes input parameter of StartSegmentDetection. Within each segment type the array is sorted by timestamp values.

$sel:jobStatus:GetSegmentDetectionResponse', getSegmentDetectionResponse_jobStatus - Current status of the segment detection job.

$sel:audioMetadata:GetSegmentDetectionResponse', getSegmentDetectionResponse_audioMetadata - An array of objects. There can be multiple audio streams. Each AudioMetadata object contains metadata for a single audio stream. Audio information in an AudioMetadata object includes the audio codec, the number of audio channels, the duration of the audio stream, and the sample rate. Audio metadata is returned in each page of information returned by GetSegmentDetection.

$sel:httpStatus:GetSegmentDetectionResponse', getSegmentDetectionResponse_httpStatus - The response's HTTP status code.

Response Lenses

getSegmentDetectionResponse_selectedSegmentTypes :: Lens' GetSegmentDetectionResponse (Maybe [SegmentTypeInfo]) Source #

An array containing the segment types requested in the call to StartSegmentDetection.

getSegmentDetectionResponse_nextToken :: Lens' GetSegmentDetectionResponse (Maybe Text) Source #

If the previous response was incomplete (because there are more segments to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.

getSegmentDetectionResponse_videoMetadata :: Lens' GetSegmentDetectionResponse (Maybe [VideoMetadata]) Source #

Currently, Amazon Rekognition Video returns a single object in the VideoMetadata array. The object contains information about the video stream in the input file that Amazon Rekognition Video chose to analyze. The VideoMetadata object includes the video codec, video format and other information. Video metadata is returned in each page of information returned by GetSegmentDetection.

getSegmentDetectionResponse_statusMessage :: Lens' GetSegmentDetectionResponse (Maybe Text) Source #

If the job fails, StatusMessage provides a descriptive error message.

getSegmentDetectionResponse_segments :: Lens' GetSegmentDetectionResponse (Maybe [SegmentDetection]) Source #

An array of segments detected in a video. The array is sorted by the segment types (TECHNICAL_CUE or SHOT) specified in the SegmentTypes input parameter of StartSegmentDetection. Within each segment type the array is sorted by timestamp values.

getSegmentDetectionResponse_audioMetadata :: Lens' GetSegmentDetectionResponse (Maybe [AudioMetadata]) Source #

An array of objects. There can be multiple audio streams. Each AudioMetadata object contains metadata for a single audio stream. Audio information in an AudioMetadata object includes the audio codec, the number of audio channels, the duration of the audio stream, and the sample rate. Audio metadata is returned in each page of information returned by GetSegmentDetection.
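Putting the pagination pieces together, here is a hedged sketch of draining all pages by threading NextToken back into each request. The environment setup and the re-export of SegmentDetection from Amazonka.Rekognition.Types follow the general amazonka 2.x layout and may need adjusting for your version.

import Control.Lens ((&), (.~), (^.))
import Control.Monad.Trans.Resource (runResourceT)
import Data.Maybe (fromMaybe)
import Data.Text (Text)
import qualified Amazonka
import Amazonka.Rekognition.GetSegmentDetection
import Amazonka.Rekognition.Types (SegmentDetection)

-- Collect the segments from every page of a finished segment detection job.
allSegments :: Amazonka.Env -> Text -> IO [SegmentDetection]
allSegments env jobId = go Nothing
  where
    go token = do
      resp <- runResourceT $
        Amazonka.send env
          (newGetSegmentDetection jobId
             & getSegmentDetection_nextToken .~ token)
      let page = fromMaybe [] (resp ^. getSegmentDetectionResponse_segments)
      case resp ^. getSegmentDetectionResponse_nextToken of
        Nothing   -> pure page
        Just next -> (page ++) <$> go (Just next)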