Copyright | (c) 2013-2021 Brendan Hay |
---|---|
License | Mozilla Public License, v. 2.0. |
Maintainer | Brendan Hay <brendan.g.hay+amazonka@gmail.com> |
Stability | auto-generated |
Portability | non-portable (GHC extensions) |
Safe Haskell | None |
Gets the segment detection results of an Amazon Rekognition Video analysis started by StartSegmentDetection.
Segment detection with Amazon Rekognition Video is an asynchronous operation. You start segment detection by calling StartSegmentDetection, which returns a job identifier (JobId). When the segment detection operation finishes, Amazon Rekognition publishes a completion status to the Amazon Simple Notification Service topic registered in the initial call to StartSegmentDetection. To get the results of the segment detection operation, first check that the status value published to the Amazon SNS topic is SUCCEEDED. If so, call GetSegmentDetection and pass the job identifier (JobId) from the initial call to StartSegmentDetection.
GetSegmentDetection returns detected segments in an array (Segments) of SegmentDetection objects. Segments is sorted by the segment types specified in the SegmentTypes input parameter of StartSegmentDetection. Each element of the array includes the detected segment, the percentage confidence in the accuracy of the detected segment, the type of the segment, and the frame in which the segment was detected.
Use SelectedSegmentTypes to find out the type of segment detection requested in the call to StartSegmentDetection.
Use the MaxResults parameter to limit the number of segment detections returned. If there are more results than specified in MaxResults, the value of NextToken in the operation response contains a pagination token for getting the next set of results. To get the next page of results, call GetSegmentDetection and populate the NextToken request parameter with the token value returned from the previous call to GetSegmentDetection.
For more information, see Detecting Video Segments in Stored Video in the Amazon Rekognition Developer Guide.
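The polling and pagination flow above can be sketched in Haskell. This is a minimal, hypothetical sketch: `newEnv`, `discover`, `runResourceT`, and `send` are assumed amazonka 2.x names (they vary between amazonka versions), while the constructor and lenses come from the synopsis below.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Amazonka (discover, newEnv, runResourceT, send) -- assumed amazonka 2.x API
import Amazonka.Rekognition.GetSegmentDetection
import Control.Lens ((&), (.~), (?~), (^.))
import Data.Text (Text)

-- Fetch and print every page of segment results for a finished job.
-- Call this only after the Amazon SNS topic has published SUCCEEDED.
printAllSegments :: Text -> IO ()
printAllSegments jobId = do
  env <- newEnv discover
  let go token = do
        let req =
              newGetSegmentDetection jobId
                & getSegmentDetection_maxResults ?~ 1000
                & getSegmentDetection_nextToken .~ token
        resp <- runResourceT (send env req)
        print (resp ^. getSegmentDetectionResponse_segments)
        -- Keep paginating while the response carries a NextToken.
        case resp ^. getSegmentDetectionResponse_nextToken of
          Just next -> go (Just next)
          Nothing   -> pure ()
  go Nothing
```

MaxResults is set to 1000 here because that is the documented maximum per page; a smaller value simply produces more pages.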
Synopsis
- data GetSegmentDetection = GetSegmentDetection' {}
- newGetSegmentDetection :: Text -> GetSegmentDetection
- getSegmentDetection_nextToken :: Lens' GetSegmentDetection (Maybe Text)
- getSegmentDetection_maxResults :: Lens' GetSegmentDetection (Maybe Natural)
- getSegmentDetection_jobId :: Lens' GetSegmentDetection Text
- data GetSegmentDetectionResponse = GetSegmentDetectionResponse' {}
- newGetSegmentDetectionResponse :: Int -> GetSegmentDetectionResponse
- getSegmentDetectionResponse_selectedSegmentTypes :: Lens' GetSegmentDetectionResponse (Maybe [SegmentTypeInfo])
- getSegmentDetectionResponse_nextToken :: Lens' GetSegmentDetectionResponse (Maybe Text)
- getSegmentDetectionResponse_videoMetadata :: Lens' GetSegmentDetectionResponse (Maybe [VideoMetadata])
- getSegmentDetectionResponse_statusMessage :: Lens' GetSegmentDetectionResponse (Maybe Text)
- getSegmentDetectionResponse_segments :: Lens' GetSegmentDetectionResponse (Maybe [SegmentDetection])
- getSegmentDetectionResponse_jobStatus :: Lens' GetSegmentDetectionResponse (Maybe VideoJobStatus)
- getSegmentDetectionResponse_audioMetadata :: Lens' GetSegmentDetectionResponse (Maybe [AudioMetadata])
- getSegmentDetectionResponse_httpStatus :: Lens' GetSegmentDetectionResponse Int
Creating a Request
data GetSegmentDetection Source #
See: newGetSegmentDetection smart constructor.
Instances
newGetSegmentDetection Source #
Create a value of GetSegmentDetection
with all optional fields omitted.
Use generic-lens or optics to modify other optional fields.
The following record fields are available, with the corresponding lenses provided for backwards compatibility:
$sel:nextToken:GetSegmentDetection', getSegmentDetection_nextToken - If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of results.
$sel:maxResults:GetSegmentDetection', getSegmentDetection_maxResults - Maximum number of results to return per paginated call. The largest value you can specify is 1000.
$sel:jobId:GetSegmentDetection', getSegmentDetection_jobId - Job identifier for the segment detection operation for which you want results returned. You get the job identifier from an initial call to StartSegmentDetection.
Request Lenses
getSegmentDetection_nextToken :: Lens' GetSegmentDetection (Maybe Text) Source #
If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of results.
getSegmentDetection_maxResults :: Lens' GetSegmentDetection (Maybe Natural) Source #
Maximum number of results to return per paginated call. The largest value you can specify is 1000.
getSegmentDetection_jobId :: Lens' GetSegmentDetection Text Source #
Job identifier for the segment detection operation for which you want results returned. You get the job identifier from an initial call to StartSegmentDetection.
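The request lenses above compose with the smart constructor: only the JobId is required, and optional fields are layered on afterwards. A sketch (the job identifier shown is a hypothetical placeholder, not a real value):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Rekognition.GetSegmentDetection
import Control.Lens ((&), (?~))

-- Build a request for a hypothetical JobId, asking for at most
-- 500 segment detections per page (the documented maximum is 1000).
request :: GetSegmentDetection
request =
  newGetSegmentDetection "example-job-id" -- hypothetical placeholder
    & getSegmentDetection_maxResults ?~ 500
```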
Destructuring the Response
data GetSegmentDetectionResponse Source #
See: newGetSegmentDetectionResponse smart constructor.
Instances
newGetSegmentDetectionResponse Source #
Create a value of GetSegmentDetectionResponse
with all optional fields omitted.
Use generic-lens or optics to modify other optional fields.
The following record fields are available, with the corresponding lenses provided for backwards compatibility:
$sel:selectedSegmentTypes:GetSegmentDetectionResponse', getSegmentDetectionResponse_selectedSegmentTypes - An array containing the segment types requested in the call to StartSegmentDetection.
$sel:nextToken:GetSegmentDetectionResponse', getSegmentDetectionResponse_nextToken - If the previous response was incomplete (because there are more segments to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.
$sel:videoMetadata:GetSegmentDetectionResponse', getSegmentDetectionResponse_videoMetadata - Currently, Amazon Rekognition Video returns a single object in the VideoMetadata array. The object contains information about the video stream in the input file that Amazon Rekognition Video chose to analyze. The VideoMetadata object includes the video codec, video format, and other information. Video metadata is returned in each page of information returned by GetSegmentDetection.
$sel:statusMessage:GetSegmentDetectionResponse', getSegmentDetectionResponse_statusMessage - If the job fails, StatusMessage provides a descriptive error message.
$sel:segments:GetSegmentDetectionResponse', getSegmentDetectionResponse_segments - An array of segments detected in a video. The array is sorted by the segment types (TECHNICAL_CUE or SHOT) specified in the SegmentTypes input parameter of StartSegmentDetection. Within each segment type the array is sorted by timestamp values.
$sel:jobStatus:GetSegmentDetectionResponse', getSegmentDetectionResponse_jobStatus - Current status of the segment detection job.
$sel:audioMetadata:GetSegmentDetectionResponse', getSegmentDetectionResponse_audioMetadata - An array of objects. There can be multiple audio streams. Each AudioMetadata object contains metadata for a single audio stream. Audio information in an AudioMetadata object includes the audio codec, the number of audio channels, the duration of the audio stream, and the sample rate. Audio metadata is returned in each page of information returned by GetSegmentDetection.
$sel:httpStatus:GetSegmentDetectionResponse', getSegmentDetectionResponse_httpStatus - The response's HTTP status code.
Response Lenses
getSegmentDetectionResponse_selectedSegmentTypes :: Lens' GetSegmentDetectionResponse (Maybe [SegmentTypeInfo]) Source #
An array containing the segment types requested in the call to StartSegmentDetection.
getSegmentDetectionResponse_nextToken :: Lens' GetSegmentDetectionResponse (Maybe Text) Source #
If the previous response was incomplete (because there are more segments to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.
getSegmentDetectionResponse_videoMetadata :: Lens' GetSegmentDetectionResponse (Maybe [VideoMetadata]) Source #
Currently, Amazon Rekognition Video returns a single object in the VideoMetadata array. The object contains information about the video stream in the input file that Amazon Rekognition Video chose to analyze. The VideoMetadata object includes the video codec, video format, and other information. Video metadata is returned in each page of information returned by GetSegmentDetection.
getSegmentDetectionResponse_statusMessage :: Lens' GetSegmentDetectionResponse (Maybe Text) Source #
If the job fails, StatusMessage provides a descriptive error message.
getSegmentDetectionResponse_segments :: Lens' GetSegmentDetectionResponse (Maybe [SegmentDetection]) Source #
An array of segments detected in a video. The array is sorted by the segment types (TECHNICAL_CUE or SHOT) specified in the SegmentTypes input parameter of StartSegmentDetection. Within each segment type the array is sorted by timestamp values.
getSegmentDetectionResponse_jobStatus :: Lens' GetSegmentDetectionResponse (Maybe VideoJobStatus) Source #
Current status of the segment detection job.
getSegmentDetectionResponse_audioMetadata :: Lens' GetSegmentDetectionResponse (Maybe [AudioMetadata]) Source #
An array of objects. There can be multiple audio streams. Each AudioMetadata object contains metadata for a single audio stream. Audio information in an AudioMetadata object includes the audio codec, the number of audio channels, the duration of the audio stream, and the sample rate. Audio metadata is returned in each page of information returned by GetSegmentDetection.
getSegmentDetectionResponse_httpStatus :: Lens' GetSegmentDetectionResponse Int Source #
The response's HTTP status code.
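Putting the response lenses together, a caller would typically check JobStatus (and StatusMessage on failure) before reading Segments. A hedged sketch over a response value already obtained from the service; `VideoJobStatus_Succeeded` and its module location are assumed names for the SUCCEEDED status:

```haskell
import Amazonka.Rekognition.GetSegmentDetection
import Amazonka.Rekognition.Types (VideoJobStatus (..)) -- assumed location
import Control.Lens ((^.))
import Data.Maybe (fromMaybe)

-- Summarize a response: segment count on success, the status message otherwise.
summarize :: GetSegmentDetectionResponse -> String
summarize resp =
  case resp ^. getSegmentDetectionResponse_jobStatus of
    Just VideoJobStatus_Succeeded ->
      let segs = fromMaybe [] (resp ^. getSegmentDetectionResponse_segments)
       in "Detected " ++ show (length segs) ++ " segments"
    _ ->
      "Job not succeeded: "
        ++ maybe "no status message" show
             (resp ^. getSegmentDetectionResponse_statusMessage)
```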