libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert
Copyright     (c) 2013-2021 Brendan Hay
License       Mozilla Public License, v. 2.0.
Maintainer    Brendan Hay <brendan.g.hay+amazonka@gmail.com>
Stability     auto-generated
Portability   non-portable (GHC extensions)
Safe Haskell  None

Amazonka.MediaConvert.Types

Service Configuration

defaultService :: Service Source #

API version 2017-08-29 of the Amazon Elemental MediaConvert SDK configuration.

Errors

_ConflictException :: AsError a => Getting (First ServiceError) a ServiceError Source #

The service couldn't complete your request because there is a conflict with the current state of the resource.

_ForbiddenException :: AsError a => Getting (First ServiceError) a ServiceError Source #

You don't have permissions for this action with the credentials you sent.

_NotFoundException :: AsError a => Getting (First ServiceError) a ServiceError Source #

The resource you requested doesn't exist.

_TooManyRequestsException :: AsError a => Getting (First ServiceError) a ServiceError Source #

Too many requests have been sent in too short a time. The service limits the rate at which it will accept requests.

_InternalServerErrorException :: AsError a => Getting (First ServiceError) a ServiceError Source #

The service encountered an unexpected condition and can't fulfill your request.

_BadRequestException :: AsError a => Getting (First ServiceError) a ServiceError Source #

The service can't process your request because of a problem in the request. Please check your request form and syntax.
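
These matchers are lens folds over anything with an AsError instance. A minimal sketch of using them against a caught error, assuming the Error type, its AsError instance, and the request machinery come from the core amazonka package and (^?) comes from lens; only the matchers themselves are defined in this module.

import Control.Lens ((^?))
import Data.Maybe (isJust)
import qualified Amazonka
import Amazonka.MediaConvert.Types
  ( _NotFoundException
  , _TooManyRequestsException
  )

-- Turn a caught MediaConvert error into a coarse decision.
classify :: Amazonka.Error -> String
classify err
  | isJust (err ^? _NotFoundException)        = "resource does not exist"
  | isJust (err ^? _TooManyRequestsException) = "throttled; retry with backoff"
  | otherwise                                 = "unhandled MediaConvert error"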

AacAudioDescriptionBroadcasterMix

newtype AacAudioDescriptionBroadcasterMix Source #

Choose BROADCASTER_MIXED_AD when the input contains pre-mixed main audio + audio description (AD) as a stereo pair. The value for AudioType will be set to 3, which signals to downstream systems that this stream contains "broadcaster mixed AD". Note that the input received by the encoder must contain pre-mixed audio; the encoder does not perform the mixing. When you choose BROADCASTER_MIXED_AD, the encoder ignores any values you provide in AudioType and FollowInputAudioType. Choose NORMAL when the input does not contain pre-mixed audio + audio description (AD). In this case, the encoder will use any values you provide for AudioType and FollowInputAudioType.
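
Like the other enum-style settings in this module, this type is a newtype over the raw wire Text (see the type Rep at the end of this entry). A minimal sketch constructing the value described above; the BROADCASTER_MIXED_AD wire string comes from this description, and because the bundled pattern synonyms are not listed in this excerpt, the raw newtype constructor is used directly.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
  (AacAudioDescriptionBroadcasterMix (..))

-- Signal pre-mixed broadcaster AD audio in an AAC settings block.
broadcasterMixedAd :: AacAudioDescriptionBroadcasterMix
broadcasterMixedAd = AacAudioDescriptionBroadcasterMix' "BROADCASTER_MIXED_AD"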

Instances

Instances details
Eq AacAudioDescriptionBroadcasterMix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacAudioDescriptionBroadcasterMix

Ord AacAudioDescriptionBroadcasterMix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacAudioDescriptionBroadcasterMix

Read AacAudioDescriptionBroadcasterMix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacAudioDescriptionBroadcasterMix

Show AacAudioDescriptionBroadcasterMix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacAudioDescriptionBroadcasterMix

Generic AacAudioDescriptionBroadcasterMix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacAudioDescriptionBroadcasterMix

Associated Types

type Rep AacAudioDescriptionBroadcasterMix :: Type -> Type #

NFData AacAudioDescriptionBroadcasterMix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacAudioDescriptionBroadcasterMix

Hashable AacAudioDescriptionBroadcasterMix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacAudioDescriptionBroadcasterMix

ToJSON AacAudioDescriptionBroadcasterMix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacAudioDescriptionBroadcasterMix

ToJSONKey AacAudioDescriptionBroadcasterMix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacAudioDescriptionBroadcasterMix

FromJSON AacAudioDescriptionBroadcasterMix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacAudioDescriptionBroadcasterMix

FromJSONKey AacAudioDescriptionBroadcasterMix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacAudioDescriptionBroadcasterMix

ToLog AacAudioDescriptionBroadcasterMix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacAudioDescriptionBroadcasterMix

ToHeader AacAudioDescriptionBroadcasterMix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacAudioDescriptionBroadcasterMix

ToQuery AacAudioDescriptionBroadcasterMix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacAudioDescriptionBroadcasterMix

FromXML AacAudioDescriptionBroadcasterMix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacAudioDescriptionBroadcasterMix

ToXML AacAudioDescriptionBroadcasterMix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacAudioDescriptionBroadcasterMix

ToByteString AacAudioDescriptionBroadcasterMix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacAudioDescriptionBroadcasterMix

FromText AacAudioDescriptionBroadcasterMix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacAudioDescriptionBroadcasterMix

ToText AacAudioDescriptionBroadcasterMix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacAudioDescriptionBroadcasterMix

type Rep AacAudioDescriptionBroadcasterMix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacAudioDescriptionBroadcasterMix

type Rep AacAudioDescriptionBroadcasterMix = D1 ('MetaData "AacAudioDescriptionBroadcasterMix" "Amazonka.MediaConvert.Types.AacAudioDescriptionBroadcasterMix" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AacAudioDescriptionBroadcasterMix'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAacAudioDescriptionBroadcasterMix") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AacCodecProfile

newtype AacCodecProfile Source #

AAC Profile.

Constructors

AacCodecProfile' 

Instances

Instances details
Eq AacCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodecProfile

Ord AacCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodecProfile

Read AacCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodecProfile

Show AacCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodecProfile

Generic AacCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodecProfile

Associated Types

type Rep AacCodecProfile :: Type -> Type #

NFData AacCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodecProfile

Methods

rnf :: AacCodecProfile -> () #

Hashable AacCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodecProfile

ToJSON AacCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodecProfile

ToJSONKey AacCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodecProfile

FromJSON AacCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodecProfile

FromJSONKey AacCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodecProfile

ToLog AacCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodecProfile

ToHeader AacCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodecProfile

ToQuery AacCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodecProfile

FromXML AacCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodecProfile

ToXML AacCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodecProfile

Methods

toXML :: AacCodecProfile -> XML #

ToByteString AacCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodecProfile

FromText AacCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodecProfile

ToText AacCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodecProfile

type Rep AacCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodecProfile

type Rep AacCodecProfile = D1 ('MetaData "AacCodecProfile" "Amazonka.MediaConvert.Types.AacCodecProfile" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AacCodecProfile'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAacCodecProfile") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AacCodingMode

newtype AacCodingMode Source #

Mono (Audio Description), Mono, Stereo, or 5.1 channel layout. Valid values depend on rate control mode and profile. "1.0 - Audio Description (Receiver Mix)" setting receives a stereo description plus control track and emits a mono AAC encode of the description track, with control data emitted in the PES header as per ETSI TS 101 154 Annex E.

Constructors

AacCodingMode' 
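
Because these enums wrap the wire Text, the ToJSON and FromJSON instances below serialize them as plain JSON strings. A minimal sketch with aeson; CODING_MODE_2_0 is an assumed example value and does not appear on this page.

{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (encode)
import Amazonka.MediaConvert.Types (AacCodingMode (..))

-- Prints "\"CODING_MODE_2_0\"" (assumed example wire value).
printCodingMode :: IO ()
printCodingMode = print (encode (AacCodingMode' "CODING_MODE_2_0"))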

Instances

Instances details
Eq AacCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodingMode

Ord AacCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodingMode

Read AacCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodingMode

Show AacCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodingMode

Generic AacCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodingMode

Associated Types

type Rep AacCodingMode :: Type -> Type #

NFData AacCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodingMode

Methods

rnf :: AacCodingMode -> () #

Hashable AacCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodingMode

ToJSON AacCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodingMode

ToJSONKey AacCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodingMode

FromJSON AacCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodingMode

FromJSONKey AacCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodingMode

ToLog AacCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodingMode

ToHeader AacCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodingMode

ToQuery AacCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodingMode

FromXML AacCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodingMode

ToXML AacCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodingMode

Methods

toXML :: AacCodingMode -> XML #

ToByteString AacCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodingMode

FromText AacCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodingMode

ToText AacCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodingMode

Methods

toText :: AacCodingMode -> Text #

type Rep AacCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacCodingMode

type Rep AacCodingMode = D1 ('MetaData "AacCodingMode" "Amazonka.MediaConvert.Types.AacCodingMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AacCodingMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAacCodingMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AacRateControlMode

newtype AacRateControlMode Source #

Rate Control Mode.

Instances

Instances details
Eq AacRateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRateControlMode

Ord AacRateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRateControlMode

Read AacRateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRateControlMode

Show AacRateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRateControlMode

Generic AacRateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRateControlMode

Associated Types

type Rep AacRateControlMode :: Type -> Type #

NFData AacRateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRateControlMode

Methods

rnf :: AacRateControlMode -> () #

Hashable AacRateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRateControlMode

ToJSON AacRateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRateControlMode

ToJSONKey AacRateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRateControlMode

FromJSON AacRateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRateControlMode

FromJSONKey AacRateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRateControlMode

ToLog AacRateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRateControlMode

ToHeader AacRateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRateControlMode

ToQuery AacRateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRateControlMode

FromXML AacRateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRateControlMode

ToXML AacRateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRateControlMode

ToByteString AacRateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRateControlMode

FromText AacRateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRateControlMode

ToText AacRateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRateControlMode

type Rep AacRateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRateControlMode

type Rep AacRateControlMode = D1 ('MetaData "AacRateControlMode" "Amazonka.MediaConvert.Types.AacRateControlMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AacRateControlMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAacRateControlMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AacRawFormat

newtype AacRawFormat Source #

Enables LATM/LOAS AAC output. Note that if you use LATM/LOAS AAC in an output, you must choose "No container" for the output container.

Constructors

AacRawFormat' 

Bundled Patterns

pattern AacRawFormat_LATM_LOAS :: AacRawFormat 
pattern AacRawFormat_NONE :: AacRawFormat 
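
A minimal sketch using the bundled patterns listed above: because LATM/LOAS AAC requires the "No container" output container, branch on the format when validating output settings.

import Amazonka.MediaConvert.Types (AacRawFormat (..))

-- True when the output must use "No container", per the note above.
needsRawContainer :: AacRawFormat -> Bool
needsRawContainer AacRawFormat_LATM_LOAS = True
needsRawContainer _                      = False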

Instances

Instances details
Eq AacRawFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRawFormat

Ord AacRawFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRawFormat

Read AacRawFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRawFormat

Show AacRawFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRawFormat

Generic AacRawFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRawFormat

Associated Types

type Rep AacRawFormat :: Type -> Type #

NFData AacRawFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRawFormat

Methods

rnf :: AacRawFormat -> () #

Hashable AacRawFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRawFormat

ToJSON AacRawFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRawFormat

ToJSONKey AacRawFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRawFormat

FromJSON AacRawFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRawFormat

FromJSONKey AacRawFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRawFormat

ToLog AacRawFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRawFormat

ToHeader AacRawFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRawFormat

ToQuery AacRawFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRawFormat

FromXML AacRawFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRawFormat

ToXML AacRawFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRawFormat

Methods

toXML :: AacRawFormat -> XML #

ToByteString AacRawFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRawFormat

FromText AacRawFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRawFormat

ToText AacRawFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRawFormat

Methods

toText :: AacRawFormat -> Text #

type Rep AacRawFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacRawFormat

type Rep AacRawFormat = D1 ('MetaData "AacRawFormat" "Amazonka.MediaConvert.Types.AacRawFormat" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AacRawFormat'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAacRawFormat") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AacSpecification

newtype AacSpecification Source #

Use MPEG-2 AAC instead of MPEG-4 AAC audio for raw or MPEG-2 Transport Stream containers.

Instances

Instances details
Eq AacSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSpecification

Ord AacSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSpecification

Read AacSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSpecification

Show AacSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSpecification

Generic AacSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSpecification

Associated Types

type Rep AacSpecification :: Type -> Type #

NFData AacSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSpecification

Methods

rnf :: AacSpecification -> () #

Hashable AacSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSpecification

ToJSON AacSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSpecification

ToJSONKey AacSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSpecification

FromJSON AacSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSpecification

FromJSONKey AacSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSpecification

ToLog AacSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSpecification

ToHeader AacSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSpecification

ToQuery AacSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSpecification

FromXML AacSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSpecification

ToXML AacSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSpecification

ToByteString AacSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSpecification

FromText AacSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSpecification

ToText AacSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSpecification

type Rep AacSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSpecification

type Rep AacSpecification = D1 ('MetaData "AacSpecification" "Amazonka.MediaConvert.Types.AacSpecification" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AacSpecification'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAacSpecification") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AacVbrQuality

newtype AacVbrQuality Source #

VBR Quality Level - Only used if rate_control_mode is VBR.

Constructors

AacVbrQuality' 

Instances

Instances details
Eq AacVbrQuality Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacVbrQuality

Ord AacVbrQuality Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacVbrQuality

Read AacVbrQuality Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacVbrQuality

Show AacVbrQuality Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacVbrQuality

Generic AacVbrQuality Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacVbrQuality

Associated Types

type Rep AacVbrQuality :: Type -> Type #

NFData AacVbrQuality Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacVbrQuality

Methods

rnf :: AacVbrQuality -> () #

Hashable AacVbrQuality Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacVbrQuality

ToJSON AacVbrQuality Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacVbrQuality

ToJSONKey AacVbrQuality Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacVbrQuality

FromJSON AacVbrQuality Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacVbrQuality

FromJSONKey AacVbrQuality Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacVbrQuality

ToLog AacVbrQuality Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacVbrQuality

ToHeader AacVbrQuality Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacVbrQuality

ToQuery AacVbrQuality Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacVbrQuality

FromXML AacVbrQuality Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacVbrQuality

ToXML AacVbrQuality Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacVbrQuality

Methods

toXML :: AacVbrQuality -> XML #

ToByteString AacVbrQuality Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacVbrQuality

FromText AacVbrQuality Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacVbrQuality

ToText AacVbrQuality Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacVbrQuality

Methods

toText :: AacVbrQuality -> Text #

type Rep AacVbrQuality Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacVbrQuality

type Rep AacVbrQuality = D1 ('MetaData "AacVbrQuality" "Amazonka.MediaConvert.Types.AacVbrQuality" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AacVbrQuality'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAacVbrQuality") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Ac3BitstreamMode

newtype Ac3BitstreamMode Source #

Specify the bitstream mode for the AC-3 stream that the encoder emits. For more information about the AC3 bitstream mode, see ATSC A/52-2012 (Annex E).

Instances

Instances details
Eq Ac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3BitstreamMode

Ord Ac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3BitstreamMode

Read Ac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3BitstreamMode

Show Ac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3BitstreamMode

Generic Ac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3BitstreamMode

Associated Types

type Rep Ac3BitstreamMode :: Type -> Type #

NFData Ac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3BitstreamMode

Methods

rnf :: Ac3BitstreamMode -> () #

Hashable Ac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3BitstreamMode

ToJSON Ac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3BitstreamMode

ToJSONKey Ac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3BitstreamMode

FromJSON Ac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3BitstreamMode

FromJSONKey Ac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3BitstreamMode

ToLog Ac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3BitstreamMode

ToHeader Ac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3BitstreamMode

ToQuery Ac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3BitstreamMode

FromXML Ac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3BitstreamMode

ToXML Ac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3BitstreamMode

ToByteString Ac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3BitstreamMode

FromText Ac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3BitstreamMode

ToText Ac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3BitstreamMode

type Rep Ac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3BitstreamMode

type Rep Ac3BitstreamMode = D1 ('MetaData "Ac3BitstreamMode" "Amazonka.MediaConvert.Types.Ac3BitstreamMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Ac3BitstreamMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAc3BitstreamMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Ac3CodingMode

newtype Ac3CodingMode Source #

Dolby Digital coding mode. Determines number of channels.

Constructors

Ac3CodingMode' 

Instances

Instances details
Eq Ac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3CodingMode

Ord Ac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3CodingMode

Read Ac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3CodingMode

Show Ac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3CodingMode

Generic Ac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3CodingMode

Associated Types

type Rep Ac3CodingMode :: Type -> Type #

NFData Ac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3CodingMode

Methods

rnf :: Ac3CodingMode -> () #

Hashable Ac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3CodingMode

ToJSON Ac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3CodingMode

ToJSONKey Ac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3CodingMode

FromJSON Ac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3CodingMode

FromJSONKey Ac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3CodingMode

ToLog Ac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3CodingMode

ToHeader Ac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3CodingMode

ToQuery Ac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3CodingMode

FromXML Ac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3CodingMode

ToXML Ac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3CodingMode

Methods

toXML :: Ac3CodingMode -> XML #

ToByteString Ac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3CodingMode

FromText Ac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3CodingMode

ToText Ac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3CodingMode

Methods

toText :: Ac3CodingMode -> Text #

type Rep Ac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3CodingMode

type Rep Ac3CodingMode = D1 ('MetaData "Ac3CodingMode" "Amazonka.MediaConvert.Types.Ac3CodingMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Ac3CodingMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAc3CodingMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Ac3DynamicRangeCompressionLine

newtype Ac3DynamicRangeCompressionLine Source #

Choose the Dolby Digital dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby Digital stream for the line operating mode. Related setting: When you use this setting, MediaConvert ignores any value you provide for Dynamic range compression profile (DynamicRangeCompressionProfile). For information about the Dolby Digital DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.

Instances

Instances details
Eq Ac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionLine

Ord Ac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionLine

Read Ac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionLine

Show Ac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionLine

Generic Ac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionLine

Associated Types

type Rep Ac3DynamicRangeCompressionLine :: Type -> Type #

NFData Ac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionLine

Hashable Ac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionLine

ToJSON Ac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionLine

ToJSONKey Ac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionLine

FromJSON Ac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionLine

FromJSONKey Ac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionLine

ToLog Ac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionLine

ToHeader Ac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionLine

ToQuery Ac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionLine

FromXML Ac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionLine

ToXML Ac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionLine

ToByteString Ac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionLine

FromText Ac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionLine

ToText Ac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionLine

type Rep Ac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionLine

type Rep Ac3DynamicRangeCompressionLine = D1 ('MetaData "Ac3DynamicRangeCompressionLine" "Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionLine" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Ac3DynamicRangeCompressionLine'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAc3DynamicRangeCompressionLine") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Ac3DynamicRangeCompressionProfile

newtype Ac3DynamicRangeCompressionProfile Source #

When you want to add Dolby dynamic range compression (DRC) signaling to your output stream, we recommend that you use the mode-specific settings instead of Dynamic range compression profile (DynamicRangeCompressionProfile). The mode-specific settings are Dynamic range compression profile, line mode (dynamicRangeCompressionLine) and Dynamic range compression profile, RF mode (dynamicRangeCompressionRf). Note that when you specify values for all three settings, MediaConvert ignores the value of this setting in favor of the mode-specific settings. If you do use this setting instead of the mode-specific settings, choose None (NONE) to leave out DRC signaling. Keep the default Film standard (FILM_STANDARD) to set the profile to Dolby's film standard profile for all operating modes.

Instances

Instances details
Eq Ac3DynamicRangeCompressionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionProfile

Ord Ac3DynamicRangeCompressionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionProfile

Read Ac3DynamicRangeCompressionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionProfile

Show Ac3DynamicRangeCompressionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionProfile

Generic Ac3DynamicRangeCompressionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionProfile

Associated Types

type Rep Ac3DynamicRangeCompressionProfile :: Type -> Type #

NFData Ac3DynamicRangeCompressionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionProfile

Hashable Ac3DynamicRangeCompressionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionProfile

ToJSON Ac3DynamicRangeCompressionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionProfile

ToJSONKey Ac3DynamicRangeCompressionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionProfile

FromJSON Ac3DynamicRangeCompressionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionProfile

FromJSONKey Ac3DynamicRangeCompressionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionProfile

ToLog Ac3DynamicRangeCompressionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionProfile

ToHeader Ac3DynamicRangeCompressionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionProfile

ToQuery Ac3DynamicRangeCompressionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionProfile

FromXML Ac3DynamicRangeCompressionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionProfile

ToXML Ac3DynamicRangeCompressionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionProfile

ToByteString Ac3DynamicRangeCompressionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionProfile

FromText Ac3DynamicRangeCompressionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionProfile

ToText Ac3DynamicRangeCompressionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionProfile

type Rep Ac3DynamicRangeCompressionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionProfile

type Rep Ac3DynamicRangeCompressionProfile = D1 ('MetaData "Ac3DynamicRangeCompressionProfile" "Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionProfile" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Ac3DynamicRangeCompressionProfile'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAc3DynamicRangeCompressionProfile") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Ac3DynamicRangeCompressionRf

newtype Ac3DynamicRangeCompressionRf Source #

Choose the Dolby Digital dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby Digital stream for the RF operating mode. Related setting: When you use this setting, MediaConvert ignores any value you provide for Dynamic range compression profile (DynamicRangeCompressionProfile). For information about the Dolby Digital DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.

Instances

Instances details
Eq Ac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionRf

Ord Ac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionRf

Read Ac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionRf

Show Ac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionRf

Generic Ac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionRf

Associated Types

type Rep Ac3DynamicRangeCompressionRf :: Type -> Type #

NFData Ac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionRf

Hashable Ac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionRf

ToJSON Ac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionRf

ToJSONKey Ac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionRf

FromJSON Ac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionRf

FromJSONKey Ac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionRf

ToLog Ac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionRf

ToHeader Ac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionRf

ToQuery Ac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionRf

FromXML Ac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionRf

ToXML Ac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionRf

ToByteString Ac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionRf

FromText Ac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionRf

ToText Ac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionRf

type Rep Ac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionRf

type Rep Ac3DynamicRangeCompressionRf = D1 ('MetaData "Ac3DynamicRangeCompressionRf" "Amazonka.MediaConvert.Types.Ac3DynamicRangeCompressionRf" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Ac3DynamicRangeCompressionRf'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAc3DynamicRangeCompressionRf") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Ac3LfeFilter

newtype Ac3LfeFilter Source #

Applies a 120Hz lowpass filter to the LFE channel prior to encoding. Only valid with 3_2_LFE coding mode.

Constructors

Ac3LfeFilter' 

Bundled Patterns

pattern Ac3LfeFilter_DISABLED :: Ac3LfeFilter 
pattern Ac3LfeFilter_ENABLED :: Ac3LfeFilter 
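
A minimal sketch round-tripping the DISABLED/ENABLED wire values through the FromText and ToText instances listed below; fromText and toText are assumed to be re-exported by the core package's Amazonka.Data module.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Data (fromText, toText)  -- assumed import path (core amazonka package)
import Data.Text (Text)
import Amazonka.MediaConvert.Types (Ac3LfeFilter (..))

parsedFilter :: Either String Ac3LfeFilter
parsedFilter = fromText "ENABLED"              -- Right Ac3LfeFilter_ENABLED

renderedFilter :: Text
renderedFilter = toText Ac3LfeFilter_DISABLED  -- "DISABLED"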

Instances

Instances details
Eq Ac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3LfeFilter

Ord Ac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3LfeFilter

Read Ac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3LfeFilter

Show Ac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3LfeFilter

Generic Ac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3LfeFilter

Associated Types

type Rep Ac3LfeFilter :: Type -> Type #

NFData Ac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3LfeFilter

Methods

rnf :: Ac3LfeFilter -> () #

Hashable Ac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3LfeFilter

ToJSON Ac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3LfeFilter

ToJSONKey Ac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3LfeFilter

FromJSON Ac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3LfeFilter

FromJSONKey Ac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3LfeFilter

ToLog Ac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3LfeFilter

ToHeader Ac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3LfeFilter

ToQuery Ac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3LfeFilter

FromXML Ac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3LfeFilter

ToXML Ac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3LfeFilter

Methods

toXML :: Ac3LfeFilter -> XML #

ToByteString Ac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3LfeFilter

FromText Ac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3LfeFilter

ToText Ac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3LfeFilter

Methods

toText :: Ac3LfeFilter -> Text #

type Rep Ac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3LfeFilter

type Rep Ac3LfeFilter = D1 ('MetaData "Ac3LfeFilter" "Amazonka.MediaConvert.Types.Ac3LfeFilter" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Ac3LfeFilter'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAc3LfeFilter") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Ac3MetadataControl

newtype Ac3MetadataControl Source #

When set to FOLLOW_INPUT, encoder metadata will be sourced from the DD, DD+, or DolbyE decoder that supplied this audio data. If audio was not supplied from one of these streams, then the static metadata settings will be used.

Instances

Instances details
Eq Ac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3MetadataControl

Ord Ac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3MetadataControl

Read Ac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3MetadataControl

Show Ac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3MetadataControl

Generic Ac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3MetadataControl

Associated Types

type Rep Ac3MetadataControl :: Type -> Type #

NFData Ac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3MetadataControl

Methods

rnf :: Ac3MetadataControl -> () #

Hashable Ac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3MetadataControl

ToJSON Ac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3MetadataControl

ToJSONKey Ac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3MetadataControl

FromJSON Ac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3MetadataControl

FromJSONKey Ac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3MetadataControl

ToLog Ac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3MetadataControl

ToHeader Ac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3MetadataControl

ToQuery Ac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3MetadataControl

FromXML Ac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3MetadataControl

ToXML Ac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3MetadataControl

ToByteString Ac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3MetadataControl

FromText Ac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3MetadataControl

ToText Ac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3MetadataControl

type Rep Ac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3MetadataControl

type Rep Ac3MetadataControl = D1 ('MetaData "Ac3MetadataControl" "Amazonka.MediaConvert.Types.Ac3MetadataControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Ac3MetadataControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAc3MetadataControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AccelerationMode

newtype AccelerationMode Source #

Specify whether the service runs your job with accelerated transcoding. Choose DISABLED if you don't want accelerated transcoding. Choose ENABLED if you want your job to run with accelerated transcoding and to fail if your input files or your job settings aren't compatible with accelerated transcoding. Choose PREFERRED if you want your job to run with accelerated transcoding if the job is compatible with the feature and to run at standard speed if it's not.

Instances

Instances details
Eq AccelerationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationMode

Ord AccelerationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationMode

Read AccelerationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationMode

Show AccelerationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationMode

Generic AccelerationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationMode

Associated Types

type Rep AccelerationMode :: Type -> Type #

NFData AccelerationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationMode

Methods

rnf :: AccelerationMode -> () #

Hashable AccelerationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationMode

ToJSON AccelerationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationMode

ToJSONKey AccelerationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationMode

FromJSON AccelerationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationMode

FromJSONKey AccelerationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationMode

ToLog AccelerationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationMode

ToHeader AccelerationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationMode

ToQuery AccelerationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationMode

FromXML AccelerationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationMode

ToXML AccelerationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationMode

ToByteString AccelerationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationMode

FromText AccelerationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationMode

ToText AccelerationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationMode

type Rep AccelerationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationMode

type Rep AccelerationMode = D1 ('MetaData "AccelerationMode" "Amazonka.MediaConvert.Types.AccelerationMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AccelerationMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAccelerationMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AccelerationStatus

newtype AccelerationStatus Source #

Describes whether the current job is running with accelerated transcoding. For jobs that have Acceleration (AccelerationMode) set to DISABLED, AccelerationStatus is always NOT_APPLICABLE. For jobs that have Acceleration (AccelerationMode) set to ENABLED or PREFERRED, AccelerationStatus is one of the other states. AccelerationStatus is IN_PROGRESS initially, while the service determines whether the input files and job settings are compatible with accelerated transcoding. If they are, AccelerationStatus is ACCELERATED. If your input files and job settings aren't compatible with accelerated transcoding, the service either fails your job or runs it without accelerated transcoding, depending on how you set Acceleration (AccelerationMode). When the service runs your job without accelerated transcoding, AccelerationStatus is NOT_ACCELERATED.
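
A minimal sketch mapping the status values named above to short descriptions; the bundled pattern synonyms are not listed in this excerpt, so the raw newtype constructor is matched instead.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (AccelerationStatus (..))

-- Human-readable summary of a job's acceleration status.
describeAcceleration :: AccelerationStatus -> String
describeAcceleration (AccelerationStatus' s)
  | s == "NOT_APPLICABLE"  = "acceleration was disabled for this job"
  | s == "IN_PROGRESS"     = "compatibility check is still in progress"
  | s == "ACCELERATED"     = "running with accelerated transcoding"
  | s == "NOT_ACCELERATED" = "running at standard speed"
  | otherwise              = "unrecognized acceleration status"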

Instances

Instances details
Eq AccelerationStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationStatus

Ord AccelerationStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationStatus

Read AccelerationStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationStatus

Show AccelerationStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationStatus

Generic AccelerationStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationStatus

Associated Types

type Rep AccelerationStatus :: Type -> Type #

NFData AccelerationStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationStatus

Methods

rnf :: AccelerationStatus -> () #

Hashable AccelerationStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationStatus

ToJSON AccelerationStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationStatus

ToJSONKey AccelerationStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationStatus

FromJSON AccelerationStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationStatus

FromJSONKey AccelerationStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationStatus

ToLog AccelerationStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationStatus

ToHeader AccelerationStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationStatus

ToQuery AccelerationStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationStatus

FromXML AccelerationStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationStatus

ToXML AccelerationStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationStatus

ToByteString AccelerationStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationStatus

FromText AccelerationStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationStatus

ToText AccelerationStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationStatus

type Rep AccelerationStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationStatus

type Rep AccelerationStatus = D1 ('MetaData "AccelerationStatus" "Amazonka.MediaConvert.Types.AccelerationStatus" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AccelerationStatus'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAccelerationStatus") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AfdSignaling

newtype AfdSignaling Source #

This setting only applies to H.264, H.265, and MPEG2 outputs. Use Insert AFD signaling (AfdSignaling) to specify whether the service includes AFD values in the output video data and what those values are. * Choose None to remove all AFD values from this output. * Choose Fixed to ignore input AFD values and instead encode the value specified in the job. * Choose Auto to calculate output AFD values based on the input AFD scaler data.

Constructors

AfdSignaling' 

Instances

Instances details
Eq AfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AfdSignaling

Ord AfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AfdSignaling

Read AfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AfdSignaling

Show AfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AfdSignaling

Generic AfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AfdSignaling

Associated Types

type Rep AfdSignaling :: Type -> Type #

NFData AfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AfdSignaling

Methods

rnf :: AfdSignaling -> () #

Hashable AfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AfdSignaling

ToJSON AfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AfdSignaling

ToJSONKey AfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AfdSignaling

FromJSON AfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AfdSignaling

FromJSONKey AfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AfdSignaling

ToLog AfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AfdSignaling

ToHeader AfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AfdSignaling

ToQuery AfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AfdSignaling

FromXML AfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AfdSignaling

ToXML AfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AfdSignaling

Methods

toXML :: AfdSignaling -> XML #

ToByteString AfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AfdSignaling

FromText AfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AfdSignaling

ToText AfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AfdSignaling

Methods

toText :: AfdSignaling -> Text #

type Rep AfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AfdSignaling

type Rep AfdSignaling = D1 ('MetaData "AfdSignaling" "Amazonka.MediaConvert.Types.AfdSignaling" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AfdSignaling'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAfdSignaling") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AlphaBehavior

newtype AlphaBehavior Source #

Ignore this setting unless this input is a QuickTime animation with an alpha channel. Use this setting to create separate Key and Fill outputs. In each output, specify which part of the input MediaConvert uses. Leave this setting at the default value DISCARD to delete the alpha channel and preserve the video. Set it to REMAP_TO_LUMA to delete the video and map the alpha channel to the luma channel of your outputs.

Constructors

AlphaBehavior' 

Instances

Instances details
Eq AlphaBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AlphaBehavior

Ord AlphaBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AlphaBehavior

Read AlphaBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AlphaBehavior

Show AlphaBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AlphaBehavior

Generic AlphaBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AlphaBehavior

Associated Types

type Rep AlphaBehavior :: Type -> Type #

NFData AlphaBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AlphaBehavior

Methods

rnf :: AlphaBehavior -> () #

Hashable AlphaBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AlphaBehavior

ToJSON AlphaBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AlphaBehavior

ToJSONKey AlphaBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AlphaBehavior

FromJSON AlphaBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AlphaBehavior

FromJSONKey AlphaBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AlphaBehavior

ToLog AlphaBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AlphaBehavior

ToHeader AlphaBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AlphaBehavior

ToQuery AlphaBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AlphaBehavior

FromXML AlphaBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AlphaBehavior

ToXML AlphaBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AlphaBehavior

Methods

toXML :: AlphaBehavior -> XML #

ToByteString AlphaBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AlphaBehavior

FromText AlphaBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AlphaBehavior

ToText AlphaBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AlphaBehavior

Methods

toText :: AlphaBehavior -> Text #

type Rep AlphaBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AlphaBehavior

type Rep AlphaBehavior = D1 ('MetaData "AlphaBehavior" "Amazonka.MediaConvert.Types.AlphaBehavior" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AlphaBehavior'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAlphaBehavior") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AncillaryConvert608To708

newtype AncillaryConvert608To708 Source #

Specify whether this set of input captions appears in your outputs in both 608 and 708 format. If you choose Upconvert (UPCONVERT), MediaConvert includes the captions data in two ways: it passes the 608 data through using the 608 compatibility bytes fields of the 708 wrapper, and it also translates the 608 data into 708.

Instances

Instances details
Eq AncillaryConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryConvert608To708

Ord AncillaryConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryConvert608To708

Read AncillaryConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryConvert608To708

Show AncillaryConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryConvert608To708

Generic AncillaryConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryConvert608To708

Associated Types

type Rep AncillaryConvert608To708 :: Type -> Type #

NFData AncillaryConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryConvert608To708

Hashable AncillaryConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryConvert608To708

ToJSON AncillaryConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryConvert608To708

ToJSONKey AncillaryConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryConvert608To708

FromJSON AncillaryConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryConvert608To708

FromJSONKey AncillaryConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryConvert608To708

ToLog AncillaryConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryConvert608To708

ToHeader AncillaryConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryConvert608To708

ToQuery AncillaryConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryConvert608To708

FromXML AncillaryConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryConvert608To708

ToXML AncillaryConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryConvert608To708

ToByteString AncillaryConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryConvert608To708

FromText AncillaryConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryConvert608To708

ToText AncillaryConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryConvert608To708

type Rep AncillaryConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryConvert608To708

type Rep AncillaryConvert608To708 = D1 ('MetaData "AncillaryConvert608To708" "Amazonka.MediaConvert.Types.AncillaryConvert608To708" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AncillaryConvert608To708'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAncillaryConvert608To708") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AncillaryTerminateCaptions

newtype AncillaryTerminateCaptions Source #

By default, the service terminates any unterminated captions at the end of each input. If you want the caption to continue onto your next input, disable this setting.

Instances

Instances details
Eq AncillaryTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryTerminateCaptions

Ord AncillaryTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryTerminateCaptions

Read AncillaryTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryTerminateCaptions

Show AncillaryTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryTerminateCaptions

Generic AncillaryTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryTerminateCaptions

Associated Types

type Rep AncillaryTerminateCaptions :: Type -> Type #

NFData AncillaryTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryTerminateCaptions

Hashable AncillaryTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryTerminateCaptions

ToJSON AncillaryTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryTerminateCaptions

ToJSONKey AncillaryTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryTerminateCaptions

FromJSON AncillaryTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryTerminateCaptions

FromJSONKey AncillaryTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryTerminateCaptions

ToLog AncillaryTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryTerminateCaptions

ToHeader AncillaryTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryTerminateCaptions

ToQuery AncillaryTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryTerminateCaptions

FromXML AncillaryTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryTerminateCaptions

ToXML AncillaryTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryTerminateCaptions

ToByteString AncillaryTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryTerminateCaptions

FromText AncillaryTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryTerminateCaptions

ToText AncillaryTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryTerminateCaptions

type Rep AncillaryTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillaryTerminateCaptions

type Rep AncillaryTerminateCaptions = D1 ('MetaData "AncillaryTerminateCaptions" "Amazonka.MediaConvert.Types.AncillaryTerminateCaptions" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AncillaryTerminateCaptions'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAncillaryTerminateCaptions") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AntiAlias

newtype AntiAlias Source #

The anti-alias filter is automatically applied to all outputs. The service no longer accepts the value DISABLED for AntiAlias. If you specify that in your job, the service will ignore the setting.
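
Both accepted strings are covered by the bundled patterns listed below, so you can match on them directly, for example to warn about jobs that still set the ignored DISABLED value. A minimal sketch:

{-# LANGUAGE PatternSynonyms #-}

import Amazonka.MediaConvert.Types (AntiAlias, pattern AntiAlias_DISABLED)

-- True when a job explicitly asks to disable anti-aliasing; per the note
-- above, the service ignores this and applies the filter anyway.
requestsDisabledAntiAlias :: AntiAlias -> Bool
requestsDisabledAntiAlias AntiAlias_DISABLED = True
requestsDisabledAntiAlias _ = False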

Constructors

AntiAlias' 

Fields

Bundled Patterns

pattern AntiAlias_DISABLED :: AntiAlias 
pattern AntiAlias_ENABLED :: AntiAlias 

Instances

Instances details
Eq AntiAlias Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AntiAlias

Ord AntiAlias Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AntiAlias

Read AntiAlias Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AntiAlias

Show AntiAlias Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AntiAlias

Generic AntiAlias Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AntiAlias

Associated Types

type Rep AntiAlias :: Type -> Type #

NFData AntiAlias Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AntiAlias

Methods

rnf :: AntiAlias -> () #

Hashable AntiAlias Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AntiAlias

ToJSON AntiAlias Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AntiAlias

ToJSONKey AntiAlias Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AntiAlias

FromJSON AntiAlias Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AntiAlias

FromJSONKey AntiAlias Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AntiAlias

ToLog AntiAlias Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AntiAlias

ToHeader AntiAlias Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AntiAlias

ToQuery AntiAlias Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AntiAlias

FromXML AntiAlias Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AntiAlias

ToXML AntiAlias Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AntiAlias

Methods

toXML :: AntiAlias -> XML #

ToByteString AntiAlias Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AntiAlias

Methods

toBS :: AntiAlias -> ByteString #

FromText AntiAlias Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AntiAlias

ToText AntiAlias Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AntiAlias

Methods

toText :: AntiAlias -> Text #

type Rep AntiAlias Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AntiAlias

type Rep AntiAlias = D1 ('MetaData "AntiAlias" "Amazonka.MediaConvert.Types.AntiAlias" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AntiAlias'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAntiAlias") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AudioChannelTag

newtype AudioChannelTag Source #

You can add a tag for this mono-channel audio track to mimic its placement in a multi-channel layout. For example, if this track is the left surround channel, choose Left surround (LS).
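
For example, when you split a 5.1 mix into six mono outputs you can tag each track by its position. Apart from LS, which the description names, the channel strings below are assumptions about the wire values, and tagFor51 is a hypothetical helper:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (AudioChannelTag (..))

-- Tag mono tracks 0..5 of a 5.1 split as L, R, C, LFE, LS, RS.
tagFor51 :: Int -> Maybe AudioChannelTag
tagFor51 0 = Just (AudioChannelTag' "L")
tagFor51 1 = Just (AudioChannelTag' "R")
tagFor51 2 = Just (AudioChannelTag' "C")
tagFor51 3 = Just (AudioChannelTag' "LFE")
tagFor51 4 = Just (AudioChannelTag' "LS")
tagFor51 5 = Just (AudioChannelTag' "RS")
tagFor51 _ = Nothing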

Constructors

AudioChannelTag' 

Instances

Instances details
Eq AudioChannelTag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTag

Ord AudioChannelTag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTag

Read AudioChannelTag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTag

Show AudioChannelTag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTag

Generic AudioChannelTag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTag

Associated Types

type Rep AudioChannelTag :: Type -> Type #

NFData AudioChannelTag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTag

Methods

rnf :: AudioChannelTag -> () #

Hashable AudioChannelTag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTag

ToJSON AudioChannelTag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTag

ToJSONKey AudioChannelTag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTag

FromJSON AudioChannelTag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTag

FromJSONKey AudioChannelTag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTag

ToLog AudioChannelTag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTag

ToHeader AudioChannelTag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTag

ToQuery AudioChannelTag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTag

FromXML AudioChannelTag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTag

ToXML AudioChannelTag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTag

Methods

toXML :: AudioChannelTag -> XML #

ToByteString AudioChannelTag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTag

FromText AudioChannelTag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTag

ToText AudioChannelTag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTag

type Rep AudioChannelTag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTag

type Rep AudioChannelTag = D1 ('MetaData "AudioChannelTag" "Amazonka.MediaConvert.Types.AudioChannelTag" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AudioChannelTag'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAudioChannelTag") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AudioCodec

newtype AudioCodec Source #

Choose the audio codec for this output. Note that the option Dolby Digital passthrough (PASSTHROUGH) applies only to Dolby Digital and Dolby Digital Plus audio inputs. Make sure that you choose a codec that's supported with your output container: https://docs.aws.amazon.com/mediaconvert/latest/ug/reference-codecs-containers.html#reference-codecs-containers-output-audio For audio-only outputs, make sure that both your input audio codec and your output audio codec are supported for audio-only workflows. For more information, see: https://docs.aws.amazon.com/mediaconvert/latest/ug/reference-codecs-containers-input.html#reference-codecs-containers-input-audio-only and https://docs.aws.amazon.com/mediaconvert/latest/ug/reference-codecs-containers.html#audio-only-output
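
Because the constructor wraps the wire string (see the fromAudioCodec field in the Rep below), you can inspect it directly, for example to flag the Dolby Digital passthrough case called out above. A sketch, assuming "PASSTHROUGH" is the exact wire value:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (AudioCodec (..))

-- Flag outputs that pass Dolby Digital audio through untouched; this is
-- only valid for Dolby Digital and Dolby Digital Plus inputs.
isPassthrough :: AudioCodec -> Bool
isPassthrough codec = fromAudioCodec codec == "PASSTHROUGH"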

Constructors

AudioCodec' 

Fields

Instances

Instances details
Eq AudioCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodec

Ord AudioCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodec

Read AudioCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodec

Show AudioCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodec

Generic AudioCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodec

Associated Types

type Rep AudioCodec :: Type -> Type #

NFData AudioCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodec

Methods

rnf :: AudioCodec -> () #

Hashable AudioCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodec

ToJSON AudioCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodec

ToJSONKey AudioCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodec

FromJSON AudioCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodec

FromJSONKey AudioCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodec

ToLog AudioCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodec

ToHeader AudioCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodec

ToQuery AudioCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodec

FromXML AudioCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodec

ToXML AudioCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodec

Methods

toXML :: AudioCodec -> XML #

ToByteString AudioCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodec

FromText AudioCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodec

ToText AudioCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodec

Methods

toText :: AudioCodec -> Text #

type Rep AudioCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodec

type Rep AudioCodec = D1 ('MetaData "AudioCodec" "Amazonka.MediaConvert.Types.AudioCodec" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AudioCodec'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAudioCodec") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AudioDefaultSelection

newtype AudioDefaultSelection Source #

Enable this setting on one audio selector to set it as the default for the job. The service uses this default for outputs where it can't find the specified input audio. If you don't set a default, those outputs have no audio.

Instances

Instances details
Eq AudioDefaultSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDefaultSelection

Ord AudioDefaultSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDefaultSelection

Read AudioDefaultSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDefaultSelection

Show AudioDefaultSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDefaultSelection

Generic AudioDefaultSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDefaultSelection

Associated Types

type Rep AudioDefaultSelection :: Type -> Type #

NFData AudioDefaultSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDefaultSelection

Methods

rnf :: AudioDefaultSelection -> () #

Hashable AudioDefaultSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDefaultSelection

ToJSON AudioDefaultSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDefaultSelection

ToJSONKey AudioDefaultSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDefaultSelection

FromJSON AudioDefaultSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDefaultSelection

FromJSONKey AudioDefaultSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDefaultSelection

ToLog AudioDefaultSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDefaultSelection

ToHeader AudioDefaultSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDefaultSelection

ToQuery AudioDefaultSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDefaultSelection

FromXML AudioDefaultSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDefaultSelection

ToXML AudioDefaultSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDefaultSelection

ToByteString AudioDefaultSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDefaultSelection

FromText AudioDefaultSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDefaultSelection

ToText AudioDefaultSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDefaultSelection

type Rep AudioDefaultSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDefaultSelection

type Rep AudioDefaultSelection = D1 ('MetaData "AudioDefaultSelection" "Amazonka.MediaConvert.Types.AudioDefaultSelection" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AudioDefaultSelection'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAudioDefaultSelection") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AudioLanguageCodeControl

newtype AudioLanguageCodeControl Source #

Specify which source for language code takes precedence for this audio track. When you choose Follow input (FOLLOW_INPUT), the service uses the language code from the input track if it's present. If there's no language code on the input track, the service uses the code that you specify in the setting Language code (languageCode or customLanguageCode). When you choose Use configured (USE_CONFIGURED), the service uses the language code that you specify.
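
A small helper can make the precedence rule explicit: only switch to USE_CONFIGURED when you actually have an override to configure. This is a sketch; the wire strings "FOLLOW_INPUT" and "USE_CONFIGURED" are taken from the description, and chooseLanguageControl is a hypothetical helper, not part of the library:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (AudioLanguageCodeControl (..))
import Data.Text (Text)

-- Prefer the input track's language code; use the configured code only
-- when the caller supplies an explicit override.
chooseLanguageControl :: Maybe Text -> AudioLanguageCodeControl
chooseLanguageControl (Just _override) = AudioLanguageCodeControl' "USE_CONFIGURED"
chooseLanguageControl Nothing = AudioLanguageCodeControl' "FOLLOW_INPUT"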

Instances

Instances details
Eq AudioLanguageCodeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioLanguageCodeControl

Ord AudioLanguageCodeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioLanguageCodeControl

Read AudioLanguageCodeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioLanguageCodeControl

Show AudioLanguageCodeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioLanguageCodeControl

Generic AudioLanguageCodeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioLanguageCodeControl

Associated Types

type Rep AudioLanguageCodeControl :: Type -> Type #

NFData AudioLanguageCodeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioLanguageCodeControl

Hashable AudioLanguageCodeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioLanguageCodeControl

ToJSON AudioLanguageCodeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioLanguageCodeControl

ToJSONKey AudioLanguageCodeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioLanguageCodeControl

FromJSON AudioLanguageCodeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioLanguageCodeControl

FromJSONKey AudioLanguageCodeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioLanguageCodeControl

ToLog AudioLanguageCodeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioLanguageCodeControl

ToHeader AudioLanguageCodeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioLanguageCodeControl

ToQuery AudioLanguageCodeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioLanguageCodeControl

FromXML AudioLanguageCodeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioLanguageCodeControl

ToXML AudioLanguageCodeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioLanguageCodeControl

ToByteString AudioLanguageCodeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioLanguageCodeControl

FromText AudioLanguageCodeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioLanguageCodeControl

ToText AudioLanguageCodeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioLanguageCodeControl

type Rep AudioLanguageCodeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioLanguageCodeControl

type Rep AudioLanguageCodeControl = D1 ('MetaData "AudioLanguageCodeControl" "Amazonka.MediaConvert.Types.AudioLanguageCodeControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AudioLanguageCodeControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAudioLanguageCodeControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AudioNormalizationAlgorithm

newtype AudioNormalizationAlgorithm Source #

Choose one of the following audio normalization algorithms: ITU-R BS.1770-1: Ungated loudness. A measurement of ungated average loudness for an entire piece of content, suitable for measurement of short-form content under ATSC recommendation A/85. Supports up to 5.1 audio channels. ITU-R BS.1770-2: Gated loudness. A measurement of gated average loudness compliant with the requirements of EBU-R128. Supports up to 5.1 audio channels. ITU-R BS.1770-3: Modified peak. The same loudness measurement algorithm as 1770-2, with an updated true peak measurement. ITU-R BS.1770-4: Higher channel count. Allows for more audio channels than the other algorithms, including configurations such as 7.1.
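
One way to apply the guidance above is to choose the algorithm from the output's channel count: BS.1770-1 through BS.1770-3 are documented for up to 5.1, while BS.1770-4 allows higher channel counts such as 7.1. The wire strings below (ITU_BS_1770_2, ITU_BS_1770_4) are assumptions inferred from the names in the description, and pickLoudnessAlgorithm is a hypothetical helper:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (AudioNormalizationAlgorithm (..))

-- Default to the gated, EBU R128-compliant measurement; switch to
-- BS.1770-4 when the layout exceeds the 6 channels of 5.1.
pickLoudnessAlgorithm :: Int -> AudioNormalizationAlgorithm
pickLoudnessAlgorithm channels
  | channels > 6 = AudioNormalizationAlgorithm' "ITU_BS_1770_4"
  | otherwise = AudioNormalizationAlgorithm' "ITU_BS_1770_2"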

Instances

Instances details
Eq AudioNormalizationAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithm

Ord AudioNormalizationAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithm

Read AudioNormalizationAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithm

Show AudioNormalizationAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithm

Generic AudioNormalizationAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithm

Associated Types

type Rep AudioNormalizationAlgorithm :: Type -> Type #

NFData AudioNormalizationAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithm

Hashable AudioNormalizationAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithm

ToJSON AudioNormalizationAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithm

ToJSONKey AudioNormalizationAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithm

FromJSON AudioNormalizationAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithm

FromJSONKey AudioNormalizationAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithm

ToLog AudioNormalizationAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithm

ToHeader AudioNormalizationAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithm

ToQuery AudioNormalizationAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithm

FromXML AudioNormalizationAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithm

ToXML AudioNormalizationAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithm

ToByteString AudioNormalizationAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithm

FromText AudioNormalizationAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithm

ToText AudioNormalizationAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithm

type Rep AudioNormalizationAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithm

type Rep AudioNormalizationAlgorithm = D1 ('MetaData "AudioNormalizationAlgorithm" "Amazonka.MediaConvert.Types.AudioNormalizationAlgorithm" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AudioNormalizationAlgorithm'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAudioNormalizationAlgorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AudioNormalizationAlgorithmControl

newtype AudioNormalizationAlgorithmControl Source #

When enabled, the output audio is corrected using the chosen algorithm. If disabled, the audio will be measured but not adjusted.

Instances

Instances details
Eq AudioNormalizationAlgorithmControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithmControl

Ord AudioNormalizationAlgorithmControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithmControl

Read AudioNormalizationAlgorithmControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithmControl

Show AudioNormalizationAlgorithmControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithmControl

Generic AudioNormalizationAlgorithmControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithmControl

Associated Types

type Rep AudioNormalizationAlgorithmControl :: Type -> Type #

NFData AudioNormalizationAlgorithmControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithmControl

Hashable AudioNormalizationAlgorithmControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithmControl

ToJSON AudioNormalizationAlgorithmControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithmControl

ToJSONKey AudioNormalizationAlgorithmControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithmControl

FromJSON AudioNormalizationAlgorithmControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithmControl

FromJSONKey AudioNormalizationAlgorithmControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithmControl

ToLog AudioNormalizationAlgorithmControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithmControl

ToHeader AudioNormalizationAlgorithmControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithmControl

ToQuery AudioNormalizationAlgorithmControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithmControl

FromXML AudioNormalizationAlgorithmControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithmControl

ToXML AudioNormalizationAlgorithmControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithmControl

ToByteString AudioNormalizationAlgorithmControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithmControl

FromText AudioNormalizationAlgorithmControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithmControl

ToText AudioNormalizationAlgorithmControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithmControl

type Rep AudioNormalizationAlgorithmControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationAlgorithmControl

type Rep AudioNormalizationAlgorithmControl = D1 ('MetaData "AudioNormalizationAlgorithmControl" "Amazonka.MediaConvert.Types.AudioNormalizationAlgorithmControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AudioNormalizationAlgorithmControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAudioNormalizationAlgorithmControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AudioNormalizationLoudnessLogging

newtype AudioNormalizationLoudnessLogging Source #

If set to LOG, log each output's audio track loudness to a CSV file.

Instances

Instances details
Eq AudioNormalizationLoudnessLogging Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationLoudnessLogging

Ord AudioNormalizationLoudnessLogging Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationLoudnessLogging

Read AudioNormalizationLoudnessLogging Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationLoudnessLogging

Show AudioNormalizationLoudnessLogging Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationLoudnessLogging

Generic AudioNormalizationLoudnessLogging Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationLoudnessLogging

Associated Types

type Rep AudioNormalizationLoudnessLogging :: Type -> Type #

NFData AudioNormalizationLoudnessLogging Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationLoudnessLogging

Hashable AudioNormalizationLoudnessLogging Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationLoudnessLogging

ToJSON AudioNormalizationLoudnessLogging Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationLoudnessLogging

ToJSONKey AudioNormalizationLoudnessLogging Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationLoudnessLogging

FromJSON AudioNormalizationLoudnessLogging Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationLoudnessLogging

FromJSONKey AudioNormalizationLoudnessLogging Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationLoudnessLogging

ToLog AudioNormalizationLoudnessLogging Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationLoudnessLogging

ToHeader AudioNormalizationLoudnessLogging Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationLoudnessLogging

ToQuery AudioNormalizationLoudnessLogging Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationLoudnessLogging

FromXML AudioNormalizationLoudnessLogging Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationLoudnessLogging

ToXML AudioNormalizationLoudnessLogging Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationLoudnessLogging

ToByteString AudioNormalizationLoudnessLogging Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationLoudnessLogging

FromText AudioNormalizationLoudnessLogging Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationLoudnessLogging

ToText AudioNormalizationLoudnessLogging Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationLoudnessLogging

type Rep AudioNormalizationLoudnessLogging Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationLoudnessLogging

type Rep AudioNormalizationLoudnessLogging = D1 ('MetaData "AudioNormalizationLoudnessLogging" "Amazonka.MediaConvert.Types.AudioNormalizationLoudnessLogging" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AudioNormalizationLoudnessLogging'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAudioNormalizationLoudnessLogging") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AudioNormalizationPeakCalculation

newtype AudioNormalizationPeakCalculation Source #

If set to TRUE_PEAK, calculate and log the TruePeak for each output's audio track loudness.

Instances

Instances details
Eq AudioNormalizationPeakCalculation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationPeakCalculation

Ord AudioNormalizationPeakCalculation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationPeakCalculation

Read AudioNormalizationPeakCalculation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationPeakCalculation

Show AudioNormalizationPeakCalculation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationPeakCalculation

Generic AudioNormalizationPeakCalculation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationPeakCalculation

Associated Types

type Rep AudioNormalizationPeakCalculation :: Type -> Type #

NFData AudioNormalizationPeakCalculation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationPeakCalculation

Hashable AudioNormalizationPeakCalculation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationPeakCalculation

ToJSON AudioNormalizationPeakCalculation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationPeakCalculation

ToJSONKey AudioNormalizationPeakCalculation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationPeakCalculation

FromJSON AudioNormalizationPeakCalculation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationPeakCalculation

FromJSONKey AudioNormalizationPeakCalculation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationPeakCalculation

ToLog AudioNormalizationPeakCalculation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationPeakCalculation

ToHeader AudioNormalizationPeakCalculation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationPeakCalculation

ToQuery AudioNormalizationPeakCalculation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationPeakCalculation

FromXML AudioNormalizationPeakCalculation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationPeakCalculation

ToXML AudioNormalizationPeakCalculation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationPeakCalculation

ToByteString AudioNormalizationPeakCalculation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationPeakCalculation

FromText AudioNormalizationPeakCalculation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationPeakCalculation

ToText AudioNormalizationPeakCalculation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationPeakCalculation

type Rep AudioNormalizationPeakCalculation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationPeakCalculation

type Rep AudioNormalizationPeakCalculation = D1 ('MetaData "AudioNormalizationPeakCalculation" "Amazonka.MediaConvert.Types.AudioNormalizationPeakCalculation" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AudioNormalizationPeakCalculation'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAudioNormalizationPeakCalculation") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AudioSelectorType

newtype AudioSelectorType Source #

Specifies the type of the audio selector.

Instances

Instances details
Eq AudioSelectorType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorType

Ord AudioSelectorType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorType

Read AudioSelectorType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorType

Show AudioSelectorType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorType

Generic AudioSelectorType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorType

Associated Types

type Rep AudioSelectorType :: Type -> Type #

NFData AudioSelectorType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorType

Methods

rnf :: AudioSelectorType -> () #

Hashable AudioSelectorType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorType

ToJSON AudioSelectorType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorType

ToJSONKey AudioSelectorType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorType

FromJSON AudioSelectorType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorType

FromJSONKey AudioSelectorType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorType

ToLog AudioSelectorType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorType

ToHeader AudioSelectorType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorType

ToQuery AudioSelectorType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorType

FromXML AudioSelectorType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorType

ToXML AudioSelectorType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorType

ToByteString AudioSelectorType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorType

FromText AudioSelectorType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorType

ToText AudioSelectorType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorType

type Rep AudioSelectorType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorType

type Rep AudioSelectorType = D1 ('MetaData "AudioSelectorType" "Amazonka.MediaConvert.Types.AudioSelectorType" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AudioSelectorType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAudioSelectorType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AudioTypeControl

newtype AudioTypeControl Source #

When set to FOLLOW_INPUT, if the input contains an ISO 639 audio_type, that value is passed through to the output. Otherwise, the value that you set in Audio Type is included in the output. Note that this field and audioType are both ignored if audioDescriptionBroadcasterMix is set to BROADCASTER_MIXED_AD.

Instances

Instances details
Eq AudioTypeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioTypeControl

Ord AudioTypeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioTypeControl

Read AudioTypeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioTypeControl

Show AudioTypeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioTypeControl

Generic AudioTypeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioTypeControl

Associated Types

type Rep AudioTypeControl :: Type -> Type #

NFData AudioTypeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioTypeControl

Methods

rnf :: AudioTypeControl -> () #

Hashable AudioTypeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioTypeControl

ToJSON AudioTypeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioTypeControl

ToJSONKey AudioTypeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioTypeControl

FromJSON AudioTypeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioTypeControl

FromJSONKey AudioTypeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioTypeControl

ToLog AudioTypeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioTypeControl

ToHeader AudioTypeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioTypeControl

ToQuery AudioTypeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioTypeControl

FromXML AudioTypeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioTypeControl

ToXML AudioTypeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioTypeControl

ToByteString AudioTypeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioTypeControl

FromText AudioTypeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioTypeControl

ToText AudioTypeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioTypeControl

type Rep AudioTypeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioTypeControl

type Rep AudioTypeControl = D1 ('MetaData "AudioTypeControl" "Amazonka.MediaConvert.Types.AudioTypeControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AudioTypeControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAudioTypeControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Av1AdaptiveQuantization

newtype Av1AdaptiveQuantization Source #

Specify the strength of any adaptive quantization filters that you enable. The value that you choose here applies to Spatial adaptive quantization (spatialAdaptiveQuantization).

Instances

Instances details
Eq Av1AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1AdaptiveQuantization

Ord Av1AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1AdaptiveQuantization

Read Av1AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1AdaptiveQuantization

Show Av1AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1AdaptiveQuantization

Generic Av1AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1AdaptiveQuantization

Associated Types

type Rep Av1AdaptiveQuantization :: Type -> Type #

NFData Av1AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1AdaptiveQuantization

Methods

rnf :: Av1AdaptiveQuantization -> () #

Hashable Av1AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1AdaptiveQuantization

ToJSON Av1AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1AdaptiveQuantization

ToJSONKey Av1AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1AdaptiveQuantization

FromJSON Av1AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1AdaptiveQuantization

FromJSONKey Av1AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1AdaptiveQuantization

ToLog Av1AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1AdaptiveQuantization

ToHeader Av1AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1AdaptiveQuantization

ToQuery Av1AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1AdaptiveQuantization

FromXML Av1AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1AdaptiveQuantization

ToXML Av1AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1AdaptiveQuantization

ToByteString Av1AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1AdaptiveQuantization

FromText Av1AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1AdaptiveQuantization

ToText Av1AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1AdaptiveQuantization

type Rep Av1AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1AdaptiveQuantization

type Rep Av1AdaptiveQuantization = D1 ('MetaData "Av1AdaptiveQuantization" "Amazonka.MediaConvert.Types.Av1AdaptiveQuantization" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Av1AdaptiveQuantization'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAv1AdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Av1FramerateControl

newtype Av1FramerateControl Source #

If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.
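
When you build the job specification yourself, the choice reduces to whether you are also supplying FramerateNumerator and FramerateDenominator. A sketch using the INITIALIZE_FROM_SOURCE and SPECIFIED strings from the description; framerateControlFor is a hypothetical helper:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (Av1FramerateControl (..))

-- SPECIFIED only makes sense alongside an explicit numerator/denominator
-- pair; otherwise follow the source frame rate.
framerateControlFor :: Maybe (Int, Int) -> Av1FramerateControl
framerateControlFor (Just (_num, _den)) = Av1FramerateControl' "SPECIFIED"
framerateControlFor Nothing = Av1FramerateControl' "INITIALIZE_FROM_SOURCE"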

Instances

Instances details
Eq Av1FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateControl

Ord Av1FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateControl

Read Av1FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateControl

Show Av1FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateControl

Generic Av1FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateControl

Associated Types

type Rep Av1FramerateControl :: Type -> Type #

NFData Av1FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateControl

Methods

rnf :: Av1FramerateControl -> () #

Hashable Av1FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateControl

ToJSON Av1FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateControl

ToJSONKey Av1FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateControl

FromJSON Av1FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateControl

FromJSONKey Av1FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateControl

ToLog Av1FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateControl

ToHeader Av1FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateControl

ToQuery Av1FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateControl

FromXML Av1FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateControl

ToXML Av1FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateControl

ToByteString Av1FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateControl

FromText Av1FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateControl

ToText Av1FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateControl

type Rep Av1FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateControl

type Rep Av1FramerateControl = D1 ('MetaData "Av1FramerateControl" "Amazonka.MediaConvert.Types.Av1FramerateControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Av1FramerateControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAv1FramerateControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Av1FramerateConversionAlgorithm

newtype Av1FramerateConversionAlgorithm Source #

Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.
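
The recommendation above maps onto a simple ratio check: drop or duplicate frames for numerically simple conversions (one rate an integer multiple of the other), interpolate otherwise, and reserve FrameFormer as an explicit opt-in because of its extra cost. The wire strings and the helper below are illustrative assumptions:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (Av1FramerateConversionAlgorithm (..))

-- Numerically simple (e.g. 60 fps to 30 fps): dropping duplicates is
-- enough; anything else: interpolate to avoid stutter. Both rates are
-- assumed to be positive.
conversionAlgorithmFor :: Int -> Int -> Av1FramerateConversionAlgorithm
conversionAlgorithmFor sourceFps outputFps
  | sourceFps `mod` outputFps == 0 || outputFps `mod` sourceFps == 0 =
      Av1FramerateConversionAlgorithm' "DUPLICATE_DROP"
  | otherwise =
      Av1FramerateConversionAlgorithm' "INTERPOLATE"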

Instances

Instances details
Eq Av1FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateConversionAlgorithm

Ord Av1FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateConversionAlgorithm

Read Av1FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateConversionAlgorithm

Show Av1FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateConversionAlgorithm

Generic Av1FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateConversionAlgorithm

Associated Types

type Rep Av1FramerateConversionAlgorithm :: Type -> Type #

NFData Av1FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateConversionAlgorithm

Hashable Av1FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateConversionAlgorithm

ToJSON Av1FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateConversionAlgorithm

ToJSONKey Av1FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateConversionAlgorithm

FromJSON Av1FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateConversionAlgorithm

FromJSONKey Av1FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateConversionAlgorithm

ToLog Av1FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateConversionAlgorithm

ToHeader Av1FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateConversionAlgorithm

ToQuery Av1FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateConversionAlgorithm

FromXML Av1FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateConversionAlgorithm

ToXML Av1FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateConversionAlgorithm

ToByteString Av1FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateConversionAlgorithm

FromText Av1FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateConversionAlgorithm

ToText Av1FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateConversionAlgorithm

type Rep Av1FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1FramerateConversionAlgorithm

type Rep Av1FramerateConversionAlgorithm = D1 ('MetaData "Av1FramerateConversionAlgorithm" "Amazonka.MediaConvert.Types.Av1FramerateConversionAlgorithm" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Av1FramerateConversionAlgorithm'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAv1FramerateConversionAlgorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Av1RateControlMode

newtype Av1RateControlMode Source #

With AV1 outputs, for rate control mode, MediaConvert supports only quality-defined variable bitrate (QVBR). You can't use CBR or VBR.

Bundled Patterns

pattern Av1RateControlMode_QVBR :: Av1RateControlMode 
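
As a quick sanity check, the bundled pattern above combined with the ToJSON instance listed below serializes the value as the plain JSON string the API expects. A minimal sketch (aeson is assumed to be available, since the ToJSON instance comes from it):

import qualified Data.Aeson as Aeson
import Amazonka.MediaConvert.Types (Av1RateControlMode (..))

-- Prints "\"QVBR\"": the value encodes as a bare JSON string.
main :: IO ()
main = print (Aeson.encode Av1RateControlMode_QVBR)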

Instances

Instances details
Eq Av1RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1RateControlMode

Ord Av1RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1RateControlMode

Read Av1RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1RateControlMode

Show Av1RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1RateControlMode

Generic Av1RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1RateControlMode

Associated Types

type Rep Av1RateControlMode :: Type -> Type #

NFData Av1RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1RateControlMode

Methods

rnf :: Av1RateControlMode -> () #

Hashable Av1RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1RateControlMode

ToJSON Av1RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1RateControlMode

ToJSONKey Av1RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1RateControlMode

FromJSON Av1RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1RateControlMode

FromJSONKey Av1RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1RateControlMode

ToLog Av1RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1RateControlMode

ToHeader Av1RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1RateControlMode

ToQuery Av1RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1RateControlMode

FromXML Av1RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1RateControlMode

ToXML Av1RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1RateControlMode

ToByteString Av1RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1RateControlMode

FromText Av1RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1RateControlMode

ToText Av1RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1RateControlMode

type Rep Av1RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1RateControlMode

type Rep Av1RateControlMode = D1 ('MetaData "Av1RateControlMode" "Amazonka.MediaConvert.Types.Av1RateControlMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Av1RateControlMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAv1RateControlMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Av1SpatialAdaptiveQuantization

newtype Av1SpatialAdaptiveQuantization Source #

Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (adaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.

Instances

Instances details
Eq Av1SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1SpatialAdaptiveQuantization

Ord Av1SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1SpatialAdaptiveQuantization

Read Av1SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1SpatialAdaptiveQuantization

Show Av1SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1SpatialAdaptiveQuantization

Generic Av1SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1SpatialAdaptiveQuantization

Associated Types

type Rep Av1SpatialAdaptiveQuantization :: Type -> Type #

NFData Av1SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1SpatialAdaptiveQuantization

Hashable Av1SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1SpatialAdaptiveQuantization

ToJSON Av1SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1SpatialAdaptiveQuantization

ToJSONKey Av1SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1SpatialAdaptiveQuantization

FromJSON Av1SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1SpatialAdaptiveQuantization

FromJSONKey Av1SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1SpatialAdaptiveQuantization

ToLog Av1SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1SpatialAdaptiveQuantization

ToHeader Av1SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1SpatialAdaptiveQuantization

ToQuery Av1SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1SpatialAdaptiveQuantization

FromXML Av1SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1SpatialAdaptiveQuantization

ToXML Av1SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1SpatialAdaptiveQuantization

ToByteString Av1SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1SpatialAdaptiveQuantization

FromText Av1SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1SpatialAdaptiveQuantization

ToText Av1SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1SpatialAdaptiveQuantization

type Rep Av1SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1SpatialAdaptiveQuantization

type Rep Av1SpatialAdaptiveQuantization = D1 ('MetaData "Av1SpatialAdaptiveQuantization" "Amazonka.MediaConvert.Types.Av1SpatialAdaptiveQuantization" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Av1SpatialAdaptiveQuantization'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAv1SpatialAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AvcIntraClass

newtype AvcIntraClass Source #

Specify the AVC-Intra class of your output. The AVC-Intra class selection determines the output video bit rate depending on the frame rate of the output. Outputs with higher class values have higher bitrates and improved image quality. Note that for Class 4K/2K, MediaConvert supports only 4:2:2 chroma subsampling.

Constructors

AvcIntraClass' 

Instances

Instances details
Eq AvcIntraClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraClass

Ord AvcIntraClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraClass

Read AvcIntraClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraClass

Show AvcIntraClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraClass

Generic AvcIntraClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraClass

Associated Types

type Rep AvcIntraClass :: Type -> Type #

NFData AvcIntraClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraClass

Methods

rnf :: AvcIntraClass -> () #

Hashable AvcIntraClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraClass

ToJSON AvcIntraClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraClass

ToJSONKey AvcIntraClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraClass

FromJSON AvcIntraClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraClass

FromJSONKey AvcIntraClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraClass

ToLog AvcIntraClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraClass

ToHeader AvcIntraClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraClass

ToQuery AvcIntraClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraClass

FromXML AvcIntraClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraClass

ToXML AvcIntraClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraClass

Methods

toXML :: AvcIntraClass -> XML #

ToByteString AvcIntraClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraClass

FromText AvcIntraClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraClass

ToText AvcIntraClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraClass

Methods

toText :: AvcIntraClass -> Text #

type Rep AvcIntraClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraClass

type Rep AvcIntraClass = D1 ('MetaData "AvcIntraClass" "Amazonka.MediaConvert.Types.AvcIntraClass" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AvcIntraClass'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAvcIntraClass") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AvcIntraFramerateControl

newtype AvcIntraFramerateControl Source #

If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.
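
Because the type has FromText and ToText instances (listed below), job-building code can round-trip the documented values through Text. A minimal sketch; the import location of fromText/toText is an assumption (they may instead live in Amazonka.Data.Text):

{-# LANGUAGE OverloadedStrings #-}
import Data.Text (Text)
import Amazonka.Core (fromText, toText)  -- assumed re-export location
import Amazonka.MediaConvert.Types (AvcIntraFramerateControl)

-- Parse one of the documented values via the FromText instance below.
parsed :: Either String AvcIntraFramerateControl
parsed = fromText "INITIALIZE_FROM_SOURCE"

-- Render it back to Text (via ToText) for logging or job templating.
rendered :: Either String Text
rendered = toText <$> parsed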

Instances

Instances details
Eq AvcIntraFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateControl

Ord AvcIntraFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateControl

Read AvcIntraFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateControl

Show AvcIntraFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateControl

Generic AvcIntraFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateControl

Associated Types

type Rep AvcIntraFramerateControl :: Type -> Type #

NFData AvcIntraFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateControl

Hashable AvcIntraFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateControl

ToJSON AvcIntraFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateControl

ToJSONKey AvcIntraFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateControl

FromJSON AvcIntraFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateControl

FromJSONKey AvcIntraFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateControl

ToLog AvcIntraFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateControl

ToHeader AvcIntraFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateControl

ToQuery AvcIntraFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateControl

FromXML AvcIntraFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateControl

ToXML AvcIntraFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateControl

ToByteString AvcIntraFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateControl

FromText AvcIntraFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateControl

ToText AvcIntraFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateControl

type Rep AvcIntraFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateControl

type Rep AvcIntraFramerateControl = D1 ('MetaData "AvcIntraFramerateControl" "Amazonka.MediaConvert.Types.AvcIntraFramerateControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AvcIntraFramerateControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAvcIntraFramerateControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AvcIntraFramerateConversionAlgorithm

newtype AvcIntraFramerateConversionAlgorithm Source #

Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

Instances

Instances details
Eq AvcIntraFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateConversionAlgorithm

Ord AvcIntraFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateConversionAlgorithm

Read AvcIntraFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateConversionAlgorithm

Show AvcIntraFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateConversionAlgorithm

Generic AvcIntraFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateConversionAlgorithm

NFData AvcIntraFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateConversionAlgorithm

Hashable AvcIntraFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateConversionAlgorithm

ToJSON AvcIntraFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateConversionAlgorithm

ToJSONKey AvcIntraFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateConversionAlgorithm

FromJSON AvcIntraFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateConversionAlgorithm

FromJSONKey AvcIntraFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateConversionAlgorithm

ToLog AvcIntraFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateConversionAlgorithm

ToHeader AvcIntraFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateConversionAlgorithm

ToQuery AvcIntraFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateConversionAlgorithm

FromXML AvcIntraFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateConversionAlgorithm

ToXML AvcIntraFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateConversionAlgorithm

ToByteString AvcIntraFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateConversionAlgorithm

FromText AvcIntraFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateConversionAlgorithm

ToText AvcIntraFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateConversionAlgorithm

type Rep AvcIntraFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraFramerateConversionAlgorithm

type Rep AvcIntraFramerateConversionAlgorithm = D1 ('MetaData "AvcIntraFramerateConversionAlgorithm" "Amazonka.MediaConvert.Types.AvcIntraFramerateConversionAlgorithm" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AvcIntraFramerateConversionAlgorithm'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAvcIntraFramerateConversionAlgorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AvcIntraInterlaceMode

newtype AvcIntraInterlaceMode Source #

Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE), to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first polarity, depending on which of the Follow options you choose.

Instances

Instances details
Eq AvcIntraInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraInterlaceMode

Ord AvcIntraInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraInterlaceMode

Read AvcIntraInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraInterlaceMode

Show AvcIntraInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraInterlaceMode

Generic AvcIntraInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraInterlaceMode

Associated Types

type Rep AvcIntraInterlaceMode :: Type -> Type #

NFData AvcIntraInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraInterlaceMode

Methods

rnf :: AvcIntraInterlaceMode -> () #

Hashable AvcIntraInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraInterlaceMode

ToJSON AvcIntraInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraInterlaceMode

ToJSONKey AvcIntraInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraInterlaceMode

FromJSON AvcIntraInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraInterlaceMode

FromJSONKey AvcIntraInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraInterlaceMode

ToLog AvcIntraInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraInterlaceMode

ToHeader AvcIntraInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraInterlaceMode

ToQuery AvcIntraInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraInterlaceMode

FromXML AvcIntraInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraInterlaceMode

ToXML AvcIntraInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraInterlaceMode

ToByteString AvcIntraInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraInterlaceMode

FromText AvcIntraInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraInterlaceMode

ToText AvcIntraInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraInterlaceMode

type Rep AvcIntraInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraInterlaceMode

type Rep AvcIntraInterlaceMode = D1 ('MetaData "AvcIntraInterlaceMode" "Amazonka.MediaConvert.Types.AvcIntraInterlaceMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AvcIntraInterlaceMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAvcIntraInterlaceMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AvcIntraScanTypeConversionMode

newtype AvcIntraScanTypeConversionMode Source #

Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).

Instances

Instances details
Eq AvcIntraScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraScanTypeConversionMode

Ord AvcIntraScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraScanTypeConversionMode

Read AvcIntraScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraScanTypeConversionMode

Show AvcIntraScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraScanTypeConversionMode

Generic AvcIntraScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraScanTypeConversionMode

Associated Types

type Rep AvcIntraScanTypeConversionMode :: Type -> Type #

NFData AvcIntraScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraScanTypeConversionMode

Hashable AvcIntraScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraScanTypeConversionMode

ToJSON AvcIntraScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraScanTypeConversionMode

ToJSONKey AvcIntraScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraScanTypeConversionMode

FromJSON AvcIntraScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraScanTypeConversionMode

FromJSONKey AvcIntraScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraScanTypeConversionMode

ToLog AvcIntraScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraScanTypeConversionMode

ToHeader AvcIntraScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraScanTypeConversionMode

ToQuery AvcIntraScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraScanTypeConversionMode

FromXML AvcIntraScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraScanTypeConversionMode

ToXML AvcIntraScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraScanTypeConversionMode

ToByteString AvcIntraScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraScanTypeConversionMode

FromText AvcIntraScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraScanTypeConversionMode

ToText AvcIntraScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraScanTypeConversionMode

type Rep AvcIntraScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraScanTypeConversionMode

type Rep AvcIntraScanTypeConversionMode = D1 ('MetaData "AvcIntraScanTypeConversionMode" "Amazonka.MediaConvert.Types.AvcIntraScanTypeConversionMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AvcIntraScanTypeConversionMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAvcIntraScanTypeConversionMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AvcIntraSlowPal

newtype AvcIntraSlowPal Source #

Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.
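
The required companion settings called out above can be pictured as the following JSON job-spec fragment, built here with aeson. The field names mirror the parameter names quoted in the description; "slowPal" itself is an assumed key name for this setting, so treat the fragment as a sketch rather than a verbatim MediaConvert job template:

{-# LANGUAGE OverloadedStrings #-}
import Data.Aeson (Value, object, (.=))

-- Slow PAL plus the required explicit 25/1 frame rate, per the description.
slowPalFragment :: Value
slowPalFragment = object
  [ "slowPal" .= ("ENABLED" :: String)            -- assumed key for this setting
  , "framerateControl" .= ("SPECIFIED" :: String)
  , "framerateNumerator" .= (25 :: Int)
  , "framerateDenominator" .= (1 :: Int)
  ]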

Constructors

AvcIntraSlowPal' 

Instances

Instances details
Eq AvcIntraSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSlowPal

Ord AvcIntraSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSlowPal

Read AvcIntraSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSlowPal

Show AvcIntraSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSlowPal

Generic AvcIntraSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSlowPal

Associated Types

type Rep AvcIntraSlowPal :: Type -> Type #

NFData AvcIntraSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSlowPal

Methods

rnf :: AvcIntraSlowPal -> () #

Hashable AvcIntraSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSlowPal

ToJSON AvcIntraSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSlowPal

ToJSONKey AvcIntraSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSlowPal

FromJSON AvcIntraSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSlowPal

FromJSONKey AvcIntraSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSlowPal

ToLog AvcIntraSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSlowPal

ToHeader AvcIntraSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSlowPal

ToQuery AvcIntraSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSlowPal

FromXML AvcIntraSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSlowPal

ToXML AvcIntraSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSlowPal

Methods

toXML :: AvcIntraSlowPal -> XML #

ToByteString AvcIntraSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSlowPal

FromText AvcIntraSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSlowPal

ToText AvcIntraSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSlowPal

type Rep AvcIntraSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSlowPal

type Rep AvcIntraSlowPal = D1 ('MetaData "AvcIntraSlowPal" "Amazonka.MediaConvert.Types.AvcIntraSlowPal" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AvcIntraSlowPal'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAvcIntraSlowPal") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AvcIntraTelecine

newtype AvcIntraTelecine Source #

When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard telecine (HARD) to create a smoother picture. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 without doing anything with the field polarity to create a smoother picture.

Instances

Instances details
Eq AvcIntraTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraTelecine

Ord AvcIntraTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraTelecine

Read AvcIntraTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraTelecine

Show AvcIntraTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraTelecine

Generic AvcIntraTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraTelecine

Associated Types

type Rep AvcIntraTelecine :: Type -> Type #

NFData AvcIntraTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraTelecine

Methods

rnf :: AvcIntraTelecine -> () #

Hashable AvcIntraTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraTelecine

ToJSON AvcIntraTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraTelecine

ToJSONKey AvcIntraTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraTelecine

FromJSON AvcIntraTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraTelecine

FromJSONKey AvcIntraTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraTelecine

ToLog AvcIntraTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraTelecine

ToHeader AvcIntraTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraTelecine

ToQuery AvcIntraTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraTelecine

FromXML AvcIntraTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraTelecine

ToXML AvcIntraTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraTelecine

ToByteString AvcIntraTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraTelecine

FromText AvcIntraTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraTelecine

ToText AvcIntraTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraTelecine

type Rep AvcIntraTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraTelecine

type Rep AvcIntraTelecine = D1 ('MetaData "AvcIntraTelecine" "Amazonka.MediaConvert.Types.AvcIntraTelecine" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AvcIntraTelecine'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAvcIntraTelecine") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AvcIntraUhdQualityTuningLevel

newtype AvcIntraUhdQualityTuningLevel Source #

Optional. Use Quality tuning level (qualityTuningLevel) to choose how many transcoding passes MediaConvert does with your video. When you choose Multi-pass (MULTI_PASS), your video quality is better and your output bitrate is more accurate. That is, the actual bitrate of your output is closer to the target bitrate defined in the specification. When you choose Single-pass (SINGLE_PASS), your encoding time is faster. The default behavior is Single-pass (SINGLE_PASS).
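
As a small illustration of how downstream code might branch on this setting, the sketch below uses the Eq instance listed under Instances. It assumes the newtype constructor is exported and that MULTI_PASS and SINGLE_PASS (the values named above) are the wrapped Text values:

{-# LANGUAGE OverloadedStrings #-}
import Amazonka.MediaConvert.Types (AvcIntraUhdQualityTuningLevel (..))

multiPass, singlePass :: AvcIntraUhdQualityTuningLevel
multiPass  = AvcIntraUhdQualityTuningLevel' "MULTI_PASS"
singlePass = AvcIntraUhdQualityTuningLevel' "SINGLE_PASS"

-- A human-readable summary of the trade-off described above.
describeTuning :: AvcIntraUhdQualityTuningLevel -> String
describeTuning level
  | level == multiPass  = "multi-pass: better quality, bitrate closer to target"
  | level == singlePass = "single-pass: faster encode (the default)"
  | otherwise           = "unrecognized tuning level"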

Instances

Instances details
Eq AvcIntraUhdQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdQualityTuningLevel

Ord AvcIntraUhdQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdQualityTuningLevel

Read AvcIntraUhdQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdQualityTuningLevel

Show AvcIntraUhdQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdQualityTuningLevel

Generic AvcIntraUhdQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdQualityTuningLevel

Associated Types

type Rep AvcIntraUhdQualityTuningLevel :: Type -> Type #

NFData AvcIntraUhdQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdQualityTuningLevel

Hashable AvcIntraUhdQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdQualityTuningLevel

ToJSON AvcIntraUhdQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdQualityTuningLevel

ToJSONKey AvcIntraUhdQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdQualityTuningLevel

FromJSON AvcIntraUhdQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdQualityTuningLevel

FromJSONKey AvcIntraUhdQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdQualityTuningLevel

ToLog AvcIntraUhdQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdQualityTuningLevel

ToHeader AvcIntraUhdQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdQualityTuningLevel

ToQuery AvcIntraUhdQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdQualityTuningLevel

FromXML AvcIntraUhdQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdQualityTuningLevel

ToXML AvcIntraUhdQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdQualityTuningLevel

ToByteString AvcIntraUhdQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdQualityTuningLevel

FromText AvcIntraUhdQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdQualityTuningLevel

ToText AvcIntraUhdQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdQualityTuningLevel

type Rep AvcIntraUhdQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdQualityTuningLevel

type Rep AvcIntraUhdQualityTuningLevel = D1 ('MetaData "AvcIntraUhdQualityTuningLevel" "Amazonka.MediaConvert.Types.AvcIntraUhdQualityTuningLevel" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "AvcIntraUhdQualityTuningLevel'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAvcIntraUhdQualityTuningLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

BillingTagsSource

newtype BillingTagsSource Source #

The tag type that AWS Billing and Cost Management will use to sort your AWS Elemental MediaConvert costs on any billing report that you set up.

Instances

Instances details
Eq BillingTagsSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BillingTagsSource

Ord BillingTagsSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BillingTagsSource

Read BillingTagsSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BillingTagsSource

Show BillingTagsSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BillingTagsSource

Generic BillingTagsSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BillingTagsSource

Associated Types

type Rep BillingTagsSource :: Type -> Type #

NFData BillingTagsSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BillingTagsSource

Methods

rnf :: BillingTagsSource -> () #

Hashable BillingTagsSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BillingTagsSource

ToJSON BillingTagsSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BillingTagsSource

ToJSONKey BillingTagsSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BillingTagsSource

FromJSON BillingTagsSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BillingTagsSource

FromJSONKey BillingTagsSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BillingTagsSource

ToLog BillingTagsSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BillingTagsSource

ToHeader BillingTagsSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BillingTagsSource

ToQuery BillingTagsSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BillingTagsSource

FromXML BillingTagsSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BillingTagsSource

ToXML BillingTagsSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BillingTagsSource

ToByteString BillingTagsSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BillingTagsSource

FromText BillingTagsSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BillingTagsSource

ToText BillingTagsSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BillingTagsSource

type Rep BillingTagsSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BillingTagsSource

type Rep BillingTagsSource = D1 ('MetaData "BillingTagsSource" "Amazonka.MediaConvert.Types.BillingTagsSource" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "BillingTagsSource'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromBillingTagsSource") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

BurnInSubtitleStylePassthrough

newtype BurnInSubtitleStylePassthrough Source #

Set Style passthrough (StylePassthrough) to ENABLED to use the available style, color, and position information from your input captions. MediaConvert uses default settings for any missing style and position information in your input captions. Set Style passthrough to DISABLED, or leave blank, to ignore the style and position information from your input captions and use default settings: white text with black outlining, bottom-center positioning, and automatic sizing. Whether you set Style passthrough to enabled or not, you can also choose to manually override any of the individual style and position settings.

Instances

Instances details
Eq BurnInSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurnInSubtitleStylePassthrough

Ord BurnInSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurnInSubtitleStylePassthrough

Read BurnInSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurnInSubtitleStylePassthrough

Show BurnInSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurnInSubtitleStylePassthrough

Generic BurnInSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurnInSubtitleStylePassthrough

Associated Types

type Rep BurnInSubtitleStylePassthrough :: Type -> Type #

NFData BurnInSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurnInSubtitleStylePassthrough

Hashable BurnInSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurnInSubtitleStylePassthrough

ToJSON BurnInSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurnInSubtitleStylePassthrough

ToJSONKey BurnInSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurnInSubtitleStylePassthrough

FromJSON BurnInSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurnInSubtitleStylePassthrough

FromJSONKey BurnInSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurnInSubtitleStylePassthrough

ToLog BurnInSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurnInSubtitleStylePassthrough

ToHeader BurnInSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurnInSubtitleStylePassthrough

ToQuery BurnInSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurnInSubtitleStylePassthrough

FromXML BurnInSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurnInSubtitleStylePassthrough

ToXML BurnInSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurnInSubtitleStylePassthrough

ToByteString BurnInSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurnInSubtitleStylePassthrough

FromText BurnInSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurnInSubtitleStylePassthrough

ToText BurnInSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurnInSubtitleStylePassthrough

type Rep BurnInSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurnInSubtitleStylePassthrough

type Rep BurnInSubtitleStylePassthrough = D1 ('MetaData "BurnInSubtitleStylePassthrough" "Amazonka.MediaConvert.Types.BurnInSubtitleStylePassthrough" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "BurnInSubtitleStylePassthrough'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromBurnInSubtitleStylePassthrough") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

BurninSubtitleAlignment

newtype BurninSubtitleAlignment Source #

Specify the alignment of your captions. If no explicit x_position is provided, setting alignment to centered will place the captions at the bottom center of the output. Similarly, setting a left alignment will align captions to the bottom left of the output. If x and y positions are given in conjunction with the alignment parameter, the font will be justified (either left or centered) relative to those coordinates.

Instances

Instances details
Eq BurninSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleAlignment

Ord BurninSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleAlignment

Read BurninSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleAlignment

Show BurninSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleAlignment

Generic BurninSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleAlignment

Associated Types

type Rep BurninSubtitleAlignment :: Type -> Type #

NFData BurninSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleAlignment

Methods

rnf :: BurninSubtitleAlignment -> () #

Hashable BurninSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleAlignment

ToJSON BurninSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleAlignment

ToJSONKey BurninSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleAlignment

FromJSON BurninSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleAlignment

FromJSONKey BurninSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleAlignment

ToLog BurninSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleAlignment

ToHeader BurninSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleAlignment

ToQuery BurninSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleAlignment

FromXML BurninSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleAlignment

ToXML BurninSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleAlignment

ToByteString BurninSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleAlignment

FromText BurninSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleAlignment

ToText BurninSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleAlignment

type Rep BurninSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleAlignment

type Rep BurninSubtitleAlignment = D1 ('MetaData "BurninSubtitleAlignment" "Amazonka.MediaConvert.Types.BurninSubtitleAlignment" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "BurninSubtitleAlignment'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromBurninSubtitleAlignment") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

BurninSubtitleApplyFontColor

newtype BurninSubtitleApplyFontColor Source #

Ignore this setting unless Style passthrough (StylePassthrough) is set to Enabled and Font color (FontColor) set to Black, Yellow, Red, Green, Blue, or Hex. Use Apply font color (ApplyFontColor) for additional font color controls. When you choose White text only (WHITE_TEXT_ONLY), or leave blank, your font color setting only applies to white text in your input captions. For example, if your font color setting is Yellow, and your input captions have red and white text, your output captions will have red and yellow text. When you choose ALL_TEXT, your font color setting applies to all of your output captions text.

Instances

Instances details
Eq BurninSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleApplyFontColor

Ord BurninSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleApplyFontColor

Read BurninSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleApplyFontColor

Show BurninSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleApplyFontColor

Generic BurninSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleApplyFontColor

Associated Types

type Rep BurninSubtitleApplyFontColor :: Type -> Type #

NFData BurninSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleApplyFontColor

Hashable BurninSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleApplyFontColor

ToJSON BurninSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleApplyFontColor

ToJSONKey BurninSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleApplyFontColor

FromJSON BurninSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleApplyFontColor

FromJSONKey BurninSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleApplyFontColor

ToLog BurninSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleApplyFontColor

ToHeader BurninSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleApplyFontColor

ToQuery BurninSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleApplyFontColor

FromXML BurninSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleApplyFontColor

ToXML BurninSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleApplyFontColor

ToByteString BurninSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleApplyFontColor

FromText BurninSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleApplyFontColor

ToText BurninSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleApplyFontColor

type Rep BurninSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleApplyFontColor

type Rep BurninSubtitleApplyFontColor = D1 ('MetaData "BurninSubtitleApplyFontColor" "Amazonka.MediaConvert.Types.BurninSubtitleApplyFontColor" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "BurninSubtitleApplyFontColor'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromBurninSubtitleApplyFontColor") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

BurninSubtitleBackgroundColor

newtype BurninSubtitleBackgroundColor Source #

Specify the color of the rectangle behind the captions. Leave background color (BackgroundColor) blank and set Style passthrough (StylePassthrough) to enabled to use the background color data from your input captions, if present. Within your job settings, all of your DVB-Sub settings must be identical.

Instances

Instances details
Eq BurninSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleBackgroundColor

Ord BurninSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleBackgroundColor

Read BurninSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleBackgroundColor

Show BurninSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleBackgroundColor

Generic BurninSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleBackgroundColor

Associated Types

type Rep BurninSubtitleBackgroundColor :: Type -> Type #

NFData BurninSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleBackgroundColor

Hashable BurninSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleBackgroundColor

ToJSON BurninSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleBackgroundColor

ToJSONKey BurninSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleBackgroundColor

FromJSON BurninSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleBackgroundColor

FromJSONKey BurninSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleBackgroundColor

ToLog BurninSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleBackgroundColor

ToHeader BurninSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleBackgroundColor

ToQuery BurninSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleBackgroundColor

FromXML BurninSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleBackgroundColor

ToXML BurninSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleBackgroundColor

ToByteString BurninSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleBackgroundColor

FromText BurninSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleBackgroundColor

ToText BurninSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleBackgroundColor

type Rep BurninSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleBackgroundColor

type Rep BurninSubtitleBackgroundColor = D1 ('MetaData "BurninSubtitleBackgroundColor" "Amazonka.MediaConvert.Types.BurninSubtitleBackgroundColor" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "BurninSubtitleBackgroundColor'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromBurninSubtitleBackgroundColor") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

BurninSubtitleFallbackFont

newtype BurninSubtitleFallbackFont Source #

Specify the font that you want the service to use for your burn-in captions when your input captions specify a font that MediaConvert doesn't support. When you set Fallback font (FallbackFont) to best match (BEST_MATCH), or leave blank, MediaConvert uses a supported font that most closely matches the font that your input captions specify. When there are multiple unsupported fonts in your input captions, MediaConvert matches each font with the supported font that matches best. When you explicitly choose a replacement font, MediaConvert uses that font to replace all unsupported fonts from your input.

Instances

Instances details
Eq BurninSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFallbackFont

Ord BurninSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFallbackFont

Read BurninSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFallbackFont

Show BurninSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFallbackFont

Generic BurninSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFallbackFont

Associated Types

type Rep BurninSubtitleFallbackFont :: Type -> Type #

NFData BurninSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFallbackFont

Hashable BurninSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFallbackFont

ToJSON BurninSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFallbackFont

ToJSONKey BurninSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFallbackFont

FromJSON BurninSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFallbackFont

FromJSONKey BurninSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFallbackFont

ToLog BurninSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFallbackFont

ToHeader BurninSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFallbackFont

ToQuery BurninSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFallbackFont

FromXML BurninSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFallbackFont

ToXML BurninSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFallbackFont

ToByteString BurninSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFallbackFont

FromText BurninSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFallbackFont

ToText BurninSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFallbackFont

type Rep BurninSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFallbackFont

type Rep BurninSubtitleFallbackFont = D1 ('MetaData "BurninSubtitleFallbackFont" "Amazonka.MediaConvert.Types.BurninSubtitleFallbackFont" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "BurninSubtitleFallbackFont'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromBurninSubtitleFallbackFont") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

BurninSubtitleFontColor

newtype BurninSubtitleFontColor Source #

Specify the color of the burned-in captions text. Leave Font color (FontColor) blank and set Style passthrough (StylePassthrough) to enabled to use the font color data from your input captions, if present.

Instances

Instances details
Eq BurninSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFontColor

Ord BurninSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFontColor

Read BurninSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFontColor

Show BurninSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFontColor

Generic BurninSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFontColor

Associated Types

type Rep BurninSubtitleFontColor :: Type -> Type #

NFData BurninSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFontColor

Methods

rnf :: BurninSubtitleFontColor -> () #

Hashable BurninSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFontColor

ToJSON BurninSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFontColor

ToJSONKey BurninSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFontColor

FromJSON BurninSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFontColor

FromJSONKey BurninSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFontColor

ToLog BurninSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFontColor

ToHeader BurninSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFontColor

ToQuery BurninSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFontColor

FromXML BurninSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFontColor

ToXML BurninSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFontColor

ToByteString BurninSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFontColor

FromText BurninSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFontColor

ToText BurninSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFontColor

type Rep BurninSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleFontColor

type Rep BurninSubtitleFontColor = D1 ('MetaData "BurninSubtitleFontColor" "Amazonka.MediaConvert.Types.BurninSubtitleFontColor" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "BurninSubtitleFontColor'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromBurninSubtitleFontColor") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

BurninSubtitleOutlineColor

newtype BurninSubtitleOutlineColor Source #

Specify the font outline color. Leave Outline color (OutlineColor) blank and set Style passthrough (StylePassthrough) to enabled to use the font outline color data from your input captions, if present.

Instances

Instances details
Eq BurninSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleOutlineColor

Ord BurninSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleOutlineColor

Read BurninSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleOutlineColor

Show BurninSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleOutlineColor

Generic BurninSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleOutlineColor

Associated Types

type Rep BurninSubtitleOutlineColor :: Type -> Type #

NFData BurninSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleOutlineColor

Hashable BurninSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleOutlineColor

ToJSON BurninSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleOutlineColor

ToJSONKey BurninSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleOutlineColor

FromJSON BurninSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleOutlineColor

FromJSONKey BurninSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleOutlineColor

ToLog BurninSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleOutlineColor

ToHeader BurninSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleOutlineColor

ToQuery BurninSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleOutlineColor

FromXML BurninSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleOutlineColor

ToXML BurninSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleOutlineColor

ToByteString BurninSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleOutlineColor

FromText BurninSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleOutlineColor

ToText BurninSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleOutlineColor

type Rep BurninSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleOutlineColor

type Rep BurninSubtitleOutlineColor = D1 ('MetaData "BurninSubtitleOutlineColor" "Amazonka.MediaConvert.Types.BurninSubtitleOutlineColor" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "BurninSubtitleOutlineColor'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromBurninSubtitleOutlineColor") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

BurninSubtitleShadowColor

newtype BurninSubtitleShadowColor Source #

Specify the color of the shadow cast by the captions. Leave Shadow color (ShadowColor) blank and set Style passthrough (StylePassthrough) to enabled to use the shadow color data from your input captions, if present.

Instances

Instances details
Eq BurninSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleShadowColor

Ord BurninSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleShadowColor

Read BurninSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleShadowColor

Show BurninSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleShadowColor

Generic BurninSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleShadowColor

Associated Types

type Rep BurninSubtitleShadowColor :: Type -> Type #

NFData BurninSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleShadowColor

Hashable BurninSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleShadowColor

ToJSON BurninSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleShadowColor

ToJSONKey BurninSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleShadowColor

FromJSON BurninSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleShadowColor

FromJSONKey BurninSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleShadowColor

ToLog BurninSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleShadowColor

ToHeader BurninSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleShadowColor

ToQuery BurninSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleShadowColor

FromXML BurninSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleShadowColor

ToXML BurninSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleShadowColor

ToByteString BurninSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleShadowColor

FromText BurninSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleShadowColor

ToText BurninSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleShadowColor

type Rep BurninSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleShadowColor

type Rep BurninSubtitleShadowColor = D1 ('MetaData "BurninSubtitleShadowColor" "Amazonka.MediaConvert.Types.BurninSubtitleShadowColor" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "BurninSubtitleShadowColor'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromBurninSubtitleShadowColor") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

BurninSubtitleTeletextSpacing

newtype BurninSubtitleTeletextSpacing Source #

Specify whether the text spacing (TeletextSpacing) in your captions is set by the captions grid, or varies depending on letter width. Choose fixed grid (FIXED_GRID) to conform to the spacing specified in the captions file more accurately. Choose proportional (PROPORTIONAL) to make the text easier to read for closed captions.
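
As a hedged sketch of choosing between the two documented spacings, the hypothetical helper below picks PROPORTIONAL for readability and FIXED_GRID for fidelity to the captions grid; the wire strings are taken from the names in the description and the helper name is an illustration only:

{-# LANGUAGE OverloadedStrings #-}
import Amazonka.MediaConvert.Types

-- Hypothetical helper: favour readability for closed captions,
-- otherwise stay faithful to the captions grid.
teletextSpacingFor :: Bool -> BurninSubtitleTeletextSpacing
teletextSpacingFor closedCaptions
  | closedCaptions = BurninSubtitleTeletextSpacing' "PROPORTIONAL"
  | otherwise      = BurninSubtitleTeletextSpacing' "FIXED_GRID"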

Instances

Instances details
Eq BurninSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleTeletextSpacing

Ord BurninSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleTeletextSpacing

Read BurninSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleTeletextSpacing

Show BurninSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleTeletextSpacing

Generic BurninSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleTeletextSpacing

Associated Types

type Rep BurninSubtitleTeletextSpacing :: Type -> Type #

NFData BurninSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleTeletextSpacing

Hashable BurninSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleTeletextSpacing

ToJSON BurninSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleTeletextSpacing

ToJSONKey BurninSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleTeletextSpacing

FromJSON BurninSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleTeletextSpacing

FromJSONKey BurninSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleTeletextSpacing

ToLog BurninSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleTeletextSpacing

ToHeader BurninSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleTeletextSpacing

ToQuery BurninSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleTeletextSpacing

FromXML BurninSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleTeletextSpacing

ToXML BurninSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleTeletextSpacing

ToByteString BurninSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleTeletextSpacing

FromText BurninSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleTeletextSpacing

ToText BurninSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleTeletextSpacing

type Rep BurninSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninSubtitleTeletextSpacing

type Rep BurninSubtitleTeletextSpacing = D1 ('MetaData "BurninSubtitleTeletextSpacing" "Amazonka.MediaConvert.Types.BurninSubtitleTeletextSpacing" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "BurninSubtitleTeletextSpacing'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromBurninSubtitleTeletextSpacing") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CaptionDestinationType

newtype CaptionDestinationType Source #

Specify the format for this set of captions on this output. The default format is embedded without SCTE-20. Note that your choice of video output container constrains your choice of output captions format. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/captions-support-tables.html. If you are using SCTE-20 and you want to create an output that complies with the SCTE-43 spec, choose SCTE-20 plus embedded (SCTE20_PLUS_EMBEDDED). To create a non-compliant output where the embedded captions come first, choose Embedded plus SCTE-20 (EMBEDDED_PLUS_SCTE20).
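
A small sketch of constructing the SCTE-20-compliant variant named above and inspecting its wire form through the ToText instance listed below; the Amazonka.Data.Text import path for toText is an assumption about this version of amazonka-core:

{-# LANGUAGE OverloadedStrings #-}
import Amazonka.Data.Text (toText)
import Amazonka.MediaConvert.Types

scte43Compliant :: CaptionDestinationType
scte43Compliant = CaptionDestinationType' "SCTE20_PLUS_EMBEDDED"

-- toText scte43Compliant evaluates to "SCTE20_PLUS_EMBEDDED",
-- the string that ends up in the job's JSON settings.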

Instances

Instances details
Eq CaptionDestinationType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationType

Ord CaptionDestinationType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationType

Read CaptionDestinationType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationType

Show CaptionDestinationType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationType

Generic CaptionDestinationType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationType

Associated Types

type Rep CaptionDestinationType :: Type -> Type #

NFData CaptionDestinationType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationType

Methods

rnf :: CaptionDestinationType -> () #

Hashable CaptionDestinationType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationType

ToJSON CaptionDestinationType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationType

ToJSONKey CaptionDestinationType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationType

FromJSON CaptionDestinationType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationType

FromJSONKey CaptionDestinationType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationType

ToLog CaptionDestinationType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationType

ToHeader CaptionDestinationType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationType

ToQuery CaptionDestinationType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationType

FromXML CaptionDestinationType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationType

ToXML CaptionDestinationType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationType

ToByteString CaptionDestinationType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationType

FromText CaptionDestinationType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationType

ToText CaptionDestinationType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationType

type Rep CaptionDestinationType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationType

type Rep CaptionDestinationType = D1 ('MetaData "CaptionDestinationType" "Amazonka.MediaConvert.Types.CaptionDestinationType" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CaptionDestinationType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCaptionDestinationType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CaptionSourceType

newtype CaptionSourceType Source #

Use Source (SourceType) to identify the format of your input captions. The service cannot auto-detect caption format.

Instances

Instances details
Eq CaptionSourceType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceType

Ord CaptionSourceType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceType

Read CaptionSourceType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceType

Show CaptionSourceType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceType

Generic CaptionSourceType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceType

Associated Types

type Rep CaptionSourceType :: Type -> Type #

NFData CaptionSourceType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceType

Methods

rnf :: CaptionSourceType -> () #

Hashable CaptionSourceType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceType

ToJSON CaptionSourceType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceType

ToJSONKey CaptionSourceType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceType

FromJSON CaptionSourceType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceType

FromJSONKey CaptionSourceType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceType

ToLog CaptionSourceType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceType

ToHeader CaptionSourceType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceType

ToQuery CaptionSourceType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceType

FromXML CaptionSourceType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceType

ToXML CaptionSourceType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceType

ToByteString CaptionSourceType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceType

FromText CaptionSourceType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceType

ToText CaptionSourceType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceType

type Rep CaptionSourceType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceType

type Rep CaptionSourceType = D1 ('MetaData "CaptionSourceType" "Amazonka.MediaConvert.Types.CaptionSourceType" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CaptionSourceType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCaptionSourceType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmafClientCache

newtype CmafClientCache Source #

Disable this setting only when your workflow requires the #EXT-X-ALLOW-CACHE:no tag. Otherwise, keep the default value Enabled (ENABLED) and control caching in your video distribution setup. For example, use the Cache-Control HTTP header.
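
A hedged sketch of the two states: "ENABLED" is the default named in the description, while "DISABLED" is an assumed wire value for the opt-out case, and the helper is purely illustrative:

{-# LANGUAGE OverloadedStrings #-}
import Amazonka.MediaConvert.Types

-- Keep the default unless the workflow needs #EXT-X-ALLOW-CACHE:no.
clientCache :: Bool -> CmafClientCache
clientCache needsAllowCacheNo
  | needsAllowCacheNo = CmafClientCache' "DISABLED"  -- assumed wire value
  | otherwise         = CmafClientCache' "ENABLED"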

Constructors

CmafClientCache' 

Instances

Instances details
Eq CmafClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafClientCache

Ord CmafClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafClientCache

Read CmafClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafClientCache

Show CmafClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafClientCache

Generic CmafClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafClientCache

Associated Types

type Rep CmafClientCache :: Type -> Type #

NFData CmafClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafClientCache

Methods

rnf :: CmafClientCache -> () #

Hashable CmafClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafClientCache

ToJSON CmafClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafClientCache

ToJSONKey CmafClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafClientCache

FromJSON CmafClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafClientCache

FromJSONKey CmafClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafClientCache

ToLog CmafClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafClientCache

ToHeader CmafClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafClientCache

ToQuery CmafClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafClientCache

FromXML CmafClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafClientCache

ToXML CmafClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafClientCache

Methods

toXML :: CmafClientCache -> XML #

ToByteString CmafClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafClientCache

FromText CmafClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafClientCache

ToText CmafClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafClientCache

type Rep CmafClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafClientCache

type Rep CmafClientCache = D1 ('MetaData "CmafClientCache" "Amazonka.MediaConvert.Types.CmafClientCache" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmafClientCache'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmafClientCache") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmafCodecSpecification

newtype CmafCodecSpecification Source #

Specification to use (RFC-6381 or the default RFC-4281) during m3u8 playlist generation.

Instances

Instances details
Eq CmafCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafCodecSpecification

Ord CmafCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafCodecSpecification

Read CmafCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafCodecSpecification

Show CmafCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafCodecSpecification

Generic CmafCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafCodecSpecification

Associated Types

type Rep CmafCodecSpecification :: Type -> Type #

NFData CmafCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafCodecSpecification

Methods

rnf :: CmafCodecSpecification -> () #

Hashable CmafCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafCodecSpecification

ToJSON CmafCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafCodecSpecification

ToJSONKey CmafCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafCodecSpecification

FromJSON CmafCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafCodecSpecification

FromJSONKey CmafCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafCodecSpecification

ToLog CmafCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafCodecSpecification

ToHeader CmafCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafCodecSpecification

ToQuery CmafCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafCodecSpecification

FromXML CmafCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafCodecSpecification

ToXML CmafCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafCodecSpecification

ToByteString CmafCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafCodecSpecification

FromText CmafCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafCodecSpecification

ToText CmafCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafCodecSpecification

type Rep CmafCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafCodecSpecification

type Rep CmafCodecSpecification = D1 ('MetaData "CmafCodecSpecification" "Amazonka.MediaConvert.Types.CmafCodecSpecification" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmafCodecSpecification'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmafCodecSpecification") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmafEncryptionType

newtype CmafEncryptionType Source #

Specify the encryption scheme that you want the service to use when encrypting your CMAF segments. Choose AES-CBC subsample (SAMPLE-AES) or AES-CTR (AES_CTR).

Instances

Instances details
Eq CmafEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionType

Ord CmafEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionType

Read CmafEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionType

Show CmafEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionType

Generic CmafEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionType

Associated Types

type Rep CmafEncryptionType :: Type -> Type #

NFData CmafEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionType

Methods

rnf :: CmafEncryptionType -> () #

Hashable CmafEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionType

ToJSON CmafEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionType

ToJSONKey CmafEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionType

FromJSON CmafEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionType

FromJSONKey CmafEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionType

ToLog CmafEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionType

ToHeader CmafEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionType

ToQuery CmafEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionType

FromXML CmafEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionType

ToXML CmafEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionType

ToByteString CmafEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionType

FromText CmafEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionType

ToText CmafEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionType

type Rep CmafEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionType

type Rep CmafEncryptionType = D1 ('MetaData "CmafEncryptionType" "Amazonka.MediaConvert.Types.CmafEncryptionType" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmafEncryptionType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmafEncryptionType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmafImageBasedTrickPlay

newtype CmafImageBasedTrickPlay Source #

Specify whether MediaConvert generates images for trick play. Keep the default value, None (NONE), to not generate any images. Choose Thumbnail (THUMBNAIL) to generate tiled thumbnails. Choose Thumbnail and full frame (THUMBNAIL_AND_FULLFRAME) to generate tiled thumbnails and full-resolution images of single frames. When you enable Write HLS manifest (WriteHlsManifest), MediaConvert creates a child manifest for each set of images that you generate and adds corresponding entries to the parent manifest. When you enable Write DASH manifest (WriteDashManifest), MediaConvert adds an entry in the .mpd manifest for each set of images that you generate. A common application for these images is Roku trick mode. The thumbnails and full-frame images that MediaConvert creates with this feature are compatible with this Roku specification: https://developer.roku.com/docs/developer-program/media-playback/trick-mode/hls-and-dash.md
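
Since these values are ultimately serialized into a job's JSON settings, the sketch below shows the ToJSON instance listed below in action via aeson's encode; the "THUMBNAIL_AND_FULLFRAME" wire string is taken from the description, and the rest is a hedged assumption about how the generated newtype encodes:

{-# LANGUAGE OverloadedStrings #-}
import Data.Aeson (encode)
import Amazonka.MediaConvert.Types

trickPlay :: CmafImageBasedTrickPlay
trickPlay = CmafImageBasedTrickPlay' "THUMBNAIL_AND_FULLFRAME"

trickPlayJson = encode trickPlay  -- "\"THUMBNAIL_AND_FULLFRAME\"" (a bare JSON string)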

Instances

Instances details
Eq CmafImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlay

Ord CmafImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlay

Read CmafImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlay

Show CmafImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlay

Generic CmafImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlay

Associated Types

type Rep CmafImageBasedTrickPlay :: Type -> Type #

NFData CmafImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlay

Methods

rnf :: CmafImageBasedTrickPlay -> () #

Hashable CmafImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlay

ToJSON CmafImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlay

ToJSONKey CmafImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlay

FromJSON CmafImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlay

FromJSONKey CmafImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlay

ToLog CmafImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlay

ToHeader CmafImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlay

ToQuery CmafImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlay

FromXML CmafImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlay

ToXML CmafImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlay

ToByteString CmafImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlay

FromText CmafImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlay

ToText CmafImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlay

type Rep CmafImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlay

type Rep CmafImageBasedTrickPlay = D1 ('MetaData "CmafImageBasedTrickPlay" "Amazonka.MediaConvert.Types.CmafImageBasedTrickPlay" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmafImageBasedTrickPlay'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmafImageBasedTrickPlay") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmafInitializationVectorInManifest

newtype CmafInitializationVectorInManifest Source #

When you use DRM with CMAF outputs, choose whether the service writes the 128-bit encryption initialization vector in the HLS and DASH manifests.

Instances

Instances details
Eq CmafInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafInitializationVectorInManifest

Ord CmafInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafInitializationVectorInManifest

Read CmafInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafInitializationVectorInManifest

Show CmafInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafInitializationVectorInManifest

Generic CmafInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafInitializationVectorInManifest

Associated Types

type Rep CmafInitializationVectorInManifest :: Type -> Type #

NFData CmafInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafInitializationVectorInManifest

Hashable CmafInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafInitializationVectorInManifest

ToJSON CmafInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafInitializationVectorInManifest

ToJSONKey CmafInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafInitializationVectorInManifest

FromJSON CmafInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafInitializationVectorInManifest

FromJSONKey CmafInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafInitializationVectorInManifest

ToLog CmafInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafInitializationVectorInManifest

ToHeader CmafInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafInitializationVectorInManifest

ToQuery CmafInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafInitializationVectorInManifest

FromXML CmafInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafInitializationVectorInManifest

ToXML CmafInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafInitializationVectorInManifest

ToByteString CmafInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafInitializationVectorInManifest

FromText CmafInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafInitializationVectorInManifest

ToText CmafInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafInitializationVectorInManifest

type Rep CmafInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafInitializationVectorInManifest

type Rep CmafInitializationVectorInManifest = D1 ('MetaData "CmafInitializationVectorInManifest" "Amazonka.MediaConvert.Types.CmafInitializationVectorInManifest" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmafInitializationVectorInManifest'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmafInitializationVectorInManifest") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmafIntervalCadence

newtype CmafIntervalCadence Source #

The cadence MediaConvert follows for generating thumbnails. If set to FOLLOW_IFRAME, MediaConvert generates thumbnails for each IDR frame in the output (matching the GOP cadence). If set to FOLLOW_CUSTOM, MediaConvert generates thumbnails according to the interval you specify in thumbnailInterval.
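
A hypothetical pairing that mirrors the relationship described above between the cadence and thumbnailInterval: a custom cadence only makes sense together with an interval. The helper, its Maybe Double interval, and the exact wire strings beyond the names given here are assumptions:

{-# LANGUAGE OverloadedStrings #-}
import Amazonka.MediaConvert.Types

-- Hypothetical pairing of cadence and thumbnail interval (seconds).
cadenceFor :: Maybe Double -> (CmafIntervalCadence, Maybe Double)
cadenceFor (Just seconds) = (CmafIntervalCadence' "FOLLOW_CUSTOM", Just seconds)
cadenceFor Nothing        = (CmafIntervalCadence' "FOLLOW_IFRAME", Nothing)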

Instances

Instances details
Eq CmafIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafIntervalCadence

Ord CmafIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafIntervalCadence

Read CmafIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafIntervalCadence

Show CmafIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafIntervalCadence

Generic CmafIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafIntervalCadence

Associated Types

type Rep CmafIntervalCadence :: Type -> Type #

NFData CmafIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafIntervalCadence

Methods

rnf :: CmafIntervalCadence -> () #

Hashable CmafIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafIntervalCadence

ToJSON CmafIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafIntervalCadence

ToJSONKey CmafIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafIntervalCadence

FromJSON CmafIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafIntervalCadence

FromJSONKey CmafIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafIntervalCadence

ToLog CmafIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafIntervalCadence

ToHeader CmafIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafIntervalCadence

ToQuery CmafIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafIntervalCadence

FromXML CmafIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafIntervalCadence

ToXML CmafIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafIntervalCadence

ToByteString CmafIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafIntervalCadence

FromText CmafIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafIntervalCadence

ToText CmafIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafIntervalCadence

type Rep CmafIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafIntervalCadence

type Rep CmafIntervalCadence = D1 ('MetaData "CmafIntervalCadence" "Amazonka.MediaConvert.Types.CmafIntervalCadence" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmafIntervalCadence'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmafIntervalCadence") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmafKeyProviderType

newtype CmafKeyProviderType Source #

Specify whether your DRM encryption key is static or from a key provider that follows the SPEKE standard. For more information about SPEKE, see https://docs.aws.amazon.com/speke/latest/documentation/what-is-speke.html.
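type Rep CmafKeyProviderType :: Type -> Type #

A hedged sketch of parsing a wire value through the FromText instance listed below. "SPEKE" matches the standard named in the description; "STATIC_KEY" is an assumed spelling for the static option, and the Amazonka.Data.Text import path is likewise assumed:

{-# LANGUAGE OverloadedStrings #-}
import Amazonka.Data.Text (fromText)
import Amazonka.MediaConvert.Types

parsedProvider :: Either String CmafKeyProviderType
parsedProvider = fromText "SPEKE"
-- Right (CmafKeyProviderType' "SPEKE"); a static key would use the
-- assumed "STATIC_KEY" wire value instead.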

Instances

Instances details
Eq CmafKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafKeyProviderType

Ord CmafKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafKeyProviderType

Read CmafKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafKeyProviderType

Show CmafKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafKeyProviderType

Generic CmafKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafKeyProviderType

Associated Types

type Rep CmafKeyProviderType :: Type -> Type #

NFData CmafKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafKeyProviderType

Methods

rnf :: CmafKeyProviderType -> () #

Hashable CmafKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafKeyProviderType

ToJSON CmafKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafKeyProviderType

ToJSONKey CmafKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafKeyProviderType

FromJSON CmafKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafKeyProviderType

FromJSONKey CmafKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafKeyProviderType

ToLog CmafKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafKeyProviderType

ToHeader CmafKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafKeyProviderType

ToQuery CmafKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafKeyProviderType

FromXML CmafKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafKeyProviderType

ToXML CmafKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafKeyProviderType

ToByteString CmafKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafKeyProviderType

FromText CmafKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafKeyProviderType

ToText CmafKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafKeyProviderType

type Rep CmafKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafKeyProviderType

type Rep CmafKeyProviderType = D1 ('MetaData "CmafKeyProviderType" "Amazonka.MediaConvert.Types.CmafKeyProviderType" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmafKeyProviderType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmafKeyProviderType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmafManifestCompression

newtype CmafManifestCompression Source #

When set to GZIP, compresses the HLS playlist.

Instances

Instances details
Eq CmafManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestCompression

Ord CmafManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestCompression

Read CmafManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestCompression

Show CmafManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestCompression

Generic CmafManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestCompression

Associated Types

type Rep CmafManifestCompression :: Type -> Type #

NFData CmafManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestCompression

Methods

rnf :: CmafManifestCompression -> () #

Hashable CmafManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestCompression

ToJSON CmafManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestCompression

ToJSONKey CmafManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestCompression

FromJSON CmafManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestCompression

FromJSONKey CmafManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestCompression

ToLog CmafManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestCompression

ToHeader CmafManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestCompression

ToQuery CmafManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestCompression

FromXML CmafManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestCompression

ToXML CmafManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestCompression

ToByteString CmafManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestCompression

FromText CmafManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestCompression

ToText CmafManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestCompression

type Rep CmafManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestCompression

type Rep CmafManifestCompression = D1 ('MetaData "CmafManifestCompression" "Amazonka.MediaConvert.Types.CmafManifestCompression" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmafManifestCompression'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmafManifestCompression") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmafManifestDurationFormat

newtype CmafManifestDurationFormat Source #

Indicates whether the output manifest should use floating-point values for segment duration.

Instances

Instances details
Eq CmafManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestDurationFormat

Ord CmafManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestDurationFormat

Read CmafManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestDurationFormat

Show CmafManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestDurationFormat

Generic CmafManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestDurationFormat

Associated Types

type Rep CmafManifestDurationFormat :: Type -> Type #

NFData CmafManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestDurationFormat

Hashable CmafManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestDurationFormat

ToJSON CmafManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestDurationFormat

ToJSONKey CmafManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestDurationFormat

FromJSON CmafManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestDurationFormat

FromJSONKey CmafManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestDurationFormat

ToLog CmafManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestDurationFormat

ToHeader CmafManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestDurationFormat

ToQuery CmafManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestDurationFormat

FromXML CmafManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestDurationFormat

ToXML CmafManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestDurationFormat

ToByteString CmafManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestDurationFormat

FromText CmafManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestDurationFormat

ToText CmafManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestDurationFormat

type Rep CmafManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafManifestDurationFormat

type Rep CmafManifestDurationFormat = D1 ('MetaData "CmafManifestDurationFormat" "Amazonka.MediaConvert.Types.CmafManifestDurationFormat" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmafManifestDurationFormat'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmafManifestDurationFormat") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmafMpdProfile

newtype CmafMpdProfile Source #

Specify whether your DASH profile is on-demand or main. When you choose Main profile (MAIN_PROFILE), the service signals urn:mpeg:dash:profile:isoff-main:2011 in your .mpd DASH manifest. When you choose On-demand (ON_DEMAND_PROFILE), the service signals urn:mpeg:dash:profile:isoff-on-demand:2011 in your .mpd. When you choose On-demand, you must also set the output group setting Segment control (SegmentControl) to Single file (SINGLE_FILE).
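
A sketch of the On-demand configuration, pairing the profile with the segment control that the description says it requires; both wire strings come from the names given above, and the pairing itself is only an illustrative convention, not an API of this library:

{-# LANGUAGE OverloadedStrings #-}
import Amazonka.MediaConvert.Types

-- On-demand must be paired with single-file segment control
-- in the same output group.
onDemandSettings :: (CmafMpdProfile, CmafSegmentControl)
onDemandSettings =
  (CmafMpdProfile' "ON_DEMAND_PROFILE", CmafSegmentControl' "SINGLE_FILE")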

Constructors

CmafMpdProfile' 

Instances

Instances details
Eq CmafMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafMpdProfile

Ord CmafMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafMpdProfile

Read CmafMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafMpdProfile

Show CmafMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafMpdProfile

Generic CmafMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafMpdProfile

Associated Types

type Rep CmafMpdProfile :: Type -> Type #

NFData CmafMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafMpdProfile

Methods

rnf :: CmafMpdProfile -> () #

Hashable CmafMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafMpdProfile

ToJSON CmafMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafMpdProfile

ToJSONKey CmafMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafMpdProfile

FromJSON CmafMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafMpdProfile

FromJSONKey CmafMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafMpdProfile

ToLog CmafMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafMpdProfile

ToHeader CmafMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafMpdProfile

ToQuery CmafMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafMpdProfile

FromXML CmafMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafMpdProfile

ToXML CmafMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafMpdProfile

Methods

toXML :: CmafMpdProfile -> XML #

ToByteString CmafMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafMpdProfile

FromText CmafMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafMpdProfile

ToText CmafMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafMpdProfile

type Rep CmafMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafMpdProfile

type Rep CmafMpdProfile = D1 ('MetaData "CmafMpdProfile" "Amazonka.MediaConvert.Types.CmafMpdProfile" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmafMpdProfile'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmafMpdProfile") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmafPtsOffsetHandlingForBFrames

newtype CmafPtsOffsetHandlingForBFrames Source #

Use this setting only when your output video stream has B-frames, which causes the initial presentation time stamp (PTS) to be offset from the initial decode time stamp (DTS). Specify how MediaConvert handles PTS when writing time stamps in output DASH manifests. Choose Match initial PTS (MATCH_INITIAL_PTS) when you want MediaConvert to use the initial PTS as the first time stamp in the manifest. Choose Zero-based (ZERO_BASED) to have MediaConvert ignore the initial PTS in the video stream and instead write the initial time stamp as zero in the manifest. For outputs that don't have B-frames, the time stamps in your DASH manifests start at zero regardless of your choice here.
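
The two documented choices, built directly from the wire names given in the description; a hedged sketch that assumes OverloadedStrings and the re-exported constructor:

{-# LANGUAGE OverloadedStrings #-}
import Amazonka.MediaConvert.Types

zeroBased, matchInitialPts :: CmafPtsOffsetHandlingForBFrames
zeroBased       = CmafPtsOffsetHandlingForBFrames' "ZERO_BASED"
matchInitialPts = CmafPtsOffsetHandlingForBFrames' "MATCH_INITIAL_PTS"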

Instances

Instances details
Eq CmafPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafPtsOffsetHandlingForBFrames

Ord CmafPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafPtsOffsetHandlingForBFrames

Read CmafPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafPtsOffsetHandlingForBFrames

Show CmafPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafPtsOffsetHandlingForBFrames

Generic CmafPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafPtsOffsetHandlingForBFrames

Associated Types

type Rep CmafPtsOffsetHandlingForBFrames :: Type -> Type #

NFData CmafPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafPtsOffsetHandlingForBFrames

Hashable CmafPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafPtsOffsetHandlingForBFrames

ToJSON CmafPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafPtsOffsetHandlingForBFrames

ToJSONKey CmafPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafPtsOffsetHandlingForBFrames

FromJSON CmafPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafPtsOffsetHandlingForBFrames

FromJSONKey CmafPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafPtsOffsetHandlingForBFrames

ToLog CmafPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafPtsOffsetHandlingForBFrames

ToHeader CmafPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafPtsOffsetHandlingForBFrames

ToQuery CmafPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafPtsOffsetHandlingForBFrames

FromXML CmafPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafPtsOffsetHandlingForBFrames

ToXML CmafPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafPtsOffsetHandlingForBFrames

ToByteString CmafPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafPtsOffsetHandlingForBFrames

FromText CmafPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafPtsOffsetHandlingForBFrames

ToText CmafPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafPtsOffsetHandlingForBFrames

type Rep CmafPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafPtsOffsetHandlingForBFrames

type Rep CmafPtsOffsetHandlingForBFrames = D1 ('MetaData "CmafPtsOffsetHandlingForBFrames" "Amazonka.MediaConvert.Types.CmafPtsOffsetHandlingForBFrames" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmafPtsOffsetHandlingForBFrames'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmafPtsOffsetHandlingForBFrames") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmafSegmentControl

newtype CmafSegmentControl Source #

When set to SINGLE_FILE, a single output file is generated, which is internally segmented using the Fragment Length and Segment Length. When set to SEGMENTED_FILES, separate segment files will be created.
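For orientation only, a small sketch that branches on the underlying Text of a CmafSegmentControl via the fromCmafSegmentControl field shown in the Rep below; the SINGLE_FILE and SEGMENTED_FILES strings come from the description above, and their exact spelling is assumed.

{-# LANGUAGE OverloadedStrings #-}
module Main where

import Amazonka.MediaConvert.Types

-- Describe what a given segment control setting does, per the prose above.
describeSegmentControl :: CmafSegmentControl -> String
describeSegmentControl sc =
  case fromCmafSegmentControl sc of          -- record field from the newtype
    "SINGLE_FILE"     -> "one internally segmented output file"
    "SEGMENTED_FILES" -> "separate segment files"
    other             -> "unrecognised value: " <> show other

main :: IO ()
main = putStrLn (describeSegmentControl (CmafSegmentControl' "SEGMENTED_FILES"))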

Instances

Instances details
Eq CmafSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentControl

Ord CmafSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentControl

Read CmafSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentControl

Show CmafSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentControl

Generic CmafSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentControl

Associated Types

type Rep CmafSegmentControl :: Type -> Type #

NFData CmafSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentControl

Methods

rnf :: CmafSegmentControl -> () #

Hashable CmafSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentControl

ToJSON CmafSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentControl

ToJSONKey CmafSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentControl

FromJSON CmafSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentControl

FromJSONKey CmafSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentControl

ToLog CmafSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentControl

ToHeader CmafSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentControl

ToQuery CmafSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentControl

FromXML CmafSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentControl

ToXML CmafSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentControl

ToByteString CmafSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentControl

FromText CmafSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentControl

ToText CmafSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentControl

type Rep CmafSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentControl

type Rep CmafSegmentControl = D1 ('MetaData "CmafSegmentControl" "Amazonka.MediaConvert.Types.CmafSegmentControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmafSegmentControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmafSegmentControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmafSegmentLengthControl

newtype CmafSegmentLengthControl Source #

Specify how you want MediaConvert to determine the segment length. Choose Exact (EXACT) to have the encoder use the exact length that you specify with the setting Segment length (SegmentLength). This might result in extra I-frames. Choose Multiple of GOP (GOP_MULTIPLE) to have the encoder round up the segment lengths to match the next GOP boundary.

Instances

Instances details
Eq CmafSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentLengthControl

Ord CmafSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentLengthControl

Read CmafSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentLengthControl

Show CmafSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentLengthControl

Generic CmafSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentLengthControl

Associated Types

type Rep CmafSegmentLengthControl :: Type -> Type #

NFData CmafSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentLengthControl

Hashable CmafSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentLengthControl

ToJSON CmafSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentLengthControl

ToJSONKey CmafSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentLengthControl

FromJSON CmafSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentLengthControl

FromJSONKey CmafSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentLengthControl

ToLog CmafSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentLengthControl

ToHeader CmafSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentLengthControl

ToQuery CmafSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentLengthControl

FromXML CmafSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentLengthControl

ToXML CmafSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentLengthControl

ToByteString CmafSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentLengthControl

FromText CmafSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentLengthControl

ToText CmafSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentLengthControl

type Rep CmafSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafSegmentLengthControl

type Rep CmafSegmentLengthControl = D1 ('MetaData "CmafSegmentLengthControl" "Amazonka.MediaConvert.Types.CmafSegmentLengthControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmafSegmentLengthControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmafSegmentLengthControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmafStreamInfResolution

newtype CmafStreamInfResolution Source #

Include or exclude the RESOLUTION attribute for video in the EXT-X-STREAM-INF tag of the variant manifest.

Instances

Instances details
Eq CmafStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafStreamInfResolution

Ord CmafStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafStreamInfResolution

Read CmafStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafStreamInfResolution

Show CmafStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafStreamInfResolution

Generic CmafStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafStreamInfResolution

Associated Types

type Rep CmafStreamInfResolution :: Type -> Type #

NFData CmafStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafStreamInfResolution

Methods

rnf :: CmafStreamInfResolution -> () #

Hashable CmafStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafStreamInfResolution

ToJSON CmafStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafStreamInfResolution

ToJSONKey CmafStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafStreamInfResolution

FromJSON CmafStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafStreamInfResolution

FromJSONKey CmafStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafStreamInfResolution

ToLog CmafStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafStreamInfResolution

ToHeader CmafStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafStreamInfResolution

ToQuery CmafStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafStreamInfResolution

FromXML CmafStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafStreamInfResolution

ToXML CmafStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafStreamInfResolution

ToByteString CmafStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafStreamInfResolution

FromText CmafStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafStreamInfResolution

ToText CmafStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafStreamInfResolution

type Rep CmafStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafStreamInfResolution

type Rep CmafStreamInfResolution = D1 ('MetaData "CmafStreamInfResolution" "Amazonka.MediaConvert.Types.CmafStreamInfResolution" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmafStreamInfResolution'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmafStreamInfResolution") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmafTargetDurationCompatibilityMode

newtype CmafTargetDurationCompatibilityMode Source #

When set to LEGACY, the segment target duration is always rounded up to the nearest integer value above its current value in seconds. When set to SPEC_COMPLIANT, the segment target duration is rounded up to the nearest integer value if fraction seconds are greater than or equal to 0.5 (>= 0.5) and rounded down if less than 0.5 (< 0.5). You may need to use LEGACY if your client needs to ensure that the target duration is always longer than the actual duration of the segment. Some older players may experience interrupted playback when the actual duration of a track in a segment is longer than the target duration.
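To make the rounding rule concrete, here is a plain-Haskell illustration of the two behaviours described above (ceiling for LEGACY, round-half-up for SPEC_COMPLIANT). The helper names are invented for this example and do not exist in the library.

module Main where

-- Hypothetical helpers: derive the advertised target duration (whole seconds)
-- from the actual segment duration, one per compatibility mode.
legacyTargetDuration :: Double -> Int
legacyTargetDuration = ceiling               -- always round up

specCompliantTargetDuration :: Double -> Int
specCompliantTargetDuration d
  | frac >= 0.5 = whole + 1                  -- 0.5 or more: round up
  | otherwise   = whole                      -- less than 0.5: round down
  where
    whole = floor d
    frac  = d - fromIntegral whole

main :: IO ()
main = do
  print (legacyTargetDuration 6.1, specCompliantTargetDuration 6.1)  -- (7,6)
  print (legacyTargetDuration 6.5, specCompliantTargetDuration 6.5)  -- (7,7)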

Instances

Instances details
Eq CmafTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafTargetDurationCompatibilityMode

Ord CmafTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafTargetDurationCompatibilityMode

Read CmafTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafTargetDurationCompatibilityMode

Show CmafTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafTargetDurationCompatibilityMode

Generic CmafTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafTargetDurationCompatibilityMode

NFData CmafTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafTargetDurationCompatibilityMode

Hashable CmafTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafTargetDurationCompatibilityMode

ToJSON CmafTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafTargetDurationCompatibilityMode

ToJSONKey CmafTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafTargetDurationCompatibilityMode

FromJSON CmafTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafTargetDurationCompatibilityMode

FromJSONKey CmafTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafTargetDurationCompatibilityMode

ToLog CmafTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafTargetDurationCompatibilityMode

ToHeader CmafTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafTargetDurationCompatibilityMode

ToQuery CmafTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafTargetDurationCompatibilityMode

FromXML CmafTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafTargetDurationCompatibilityMode

ToXML CmafTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafTargetDurationCompatibilityMode

ToByteString CmafTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafTargetDurationCompatibilityMode

FromText CmafTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafTargetDurationCompatibilityMode

ToText CmafTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafTargetDurationCompatibilityMode

type Rep CmafTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafTargetDurationCompatibilityMode

type Rep CmafTargetDurationCompatibilityMode = D1 ('MetaData "CmafTargetDurationCompatibilityMode" "Amazonka.MediaConvert.Types.CmafTargetDurationCompatibilityMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmafTargetDurationCompatibilityMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmafTargetDurationCompatibilityMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmafWriteDASHManifest

newtype CmafWriteDASHManifest Source #

When set to ENABLED, a DASH MPD manifest will be generated for this output.

Instances

Instances details
Eq CmafWriteDASHManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteDASHManifest

Ord CmafWriteDASHManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteDASHManifest

Read CmafWriteDASHManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteDASHManifest

Show CmafWriteDASHManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteDASHManifest

Generic CmafWriteDASHManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteDASHManifest

Associated Types

type Rep CmafWriteDASHManifest :: Type -> Type #

NFData CmafWriteDASHManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteDASHManifest

Methods

rnf :: CmafWriteDASHManifest -> () #

Hashable CmafWriteDASHManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteDASHManifest

ToJSON CmafWriteDASHManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteDASHManifest

ToJSONKey CmafWriteDASHManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteDASHManifest

FromJSON CmafWriteDASHManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteDASHManifest

FromJSONKey CmafWriteDASHManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteDASHManifest

ToLog CmafWriteDASHManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteDASHManifest

ToHeader CmafWriteDASHManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteDASHManifest

ToQuery CmafWriteDASHManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteDASHManifest

FromXML CmafWriteDASHManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteDASHManifest

ToXML CmafWriteDASHManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteDASHManifest

ToByteString CmafWriteDASHManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteDASHManifest

FromText CmafWriteDASHManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteDASHManifest

ToText CmafWriteDASHManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteDASHManifest

type Rep CmafWriteDASHManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteDASHManifest

type Rep CmafWriteDASHManifest = D1 ('MetaData "CmafWriteDASHManifest" "Amazonka.MediaConvert.Types.CmafWriteDASHManifest" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmafWriteDASHManifest'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmafWriteDASHManifest") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmafWriteHLSManifest

newtype CmafWriteHLSManifest Source #

When set to ENABLED, an Apple HLS manifest will be generated for this output.

Instances

Instances details
Eq CmafWriteHLSManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteHLSManifest

Ord CmafWriteHLSManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteHLSManifest

Read CmafWriteHLSManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteHLSManifest

Show CmafWriteHLSManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteHLSManifest

Generic CmafWriteHLSManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteHLSManifest

Associated Types

type Rep CmafWriteHLSManifest :: Type -> Type #

NFData CmafWriteHLSManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteHLSManifest

Methods

rnf :: CmafWriteHLSManifest -> () #

Hashable CmafWriteHLSManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteHLSManifest

ToJSON CmafWriteHLSManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteHLSManifest

ToJSONKey CmafWriteHLSManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteHLSManifest

FromJSON CmafWriteHLSManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteHLSManifest

FromJSONKey CmafWriteHLSManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteHLSManifest

ToLog CmafWriteHLSManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteHLSManifest

ToHeader CmafWriteHLSManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteHLSManifest

ToQuery CmafWriteHLSManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteHLSManifest

FromXML CmafWriteHLSManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteHLSManifest

ToXML CmafWriteHLSManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteHLSManifest

ToByteString CmafWriteHLSManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteHLSManifest

FromText CmafWriteHLSManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteHLSManifest

ToText CmafWriteHLSManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteHLSManifest

type Rep CmafWriteHLSManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteHLSManifest

type Rep CmafWriteHLSManifest = D1 ('MetaData "CmafWriteHLSManifest" "Amazonka.MediaConvert.Types.CmafWriteHLSManifest" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmafWriteHLSManifest'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmafWriteHLSManifest") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmafWriteSegmentTimelineInRepresentation

newtype CmafWriteSegmentTimelineInRepresentation Source #

When you enable Precise segment duration in DASH manifests (writeSegmentTimelineInRepresentation), your DASH manifest shows precise segment durations. The segment duration information appears inside the SegmentTimeline element, inside SegmentTemplate at the Representation level. When this feature isn't enabled, the segment durations in your DASH manifest are approximate. The segment duration information appears in the duration attribute of the SegmentTemplate element.

Instances

Instances details
Eq CmafWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteSegmentTimelineInRepresentation

Ord CmafWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteSegmentTimelineInRepresentation

Read CmafWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteSegmentTimelineInRepresentation

Show CmafWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteSegmentTimelineInRepresentation

Generic CmafWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteSegmentTimelineInRepresentation

NFData CmafWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteSegmentTimelineInRepresentation

Hashable CmafWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteSegmentTimelineInRepresentation

ToJSON CmafWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteSegmentTimelineInRepresentation

ToJSONKey CmafWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteSegmentTimelineInRepresentation

FromJSON CmafWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteSegmentTimelineInRepresentation

FromJSONKey CmafWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteSegmentTimelineInRepresentation

ToLog CmafWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteSegmentTimelineInRepresentation

ToHeader CmafWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteSegmentTimelineInRepresentation

ToQuery CmafWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteSegmentTimelineInRepresentation

FromXML CmafWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteSegmentTimelineInRepresentation

ToXML CmafWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteSegmentTimelineInRepresentation

ToByteString CmafWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteSegmentTimelineInRepresentation

FromText CmafWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteSegmentTimelineInRepresentation

ToText CmafWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteSegmentTimelineInRepresentation

type Rep CmafWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafWriteSegmentTimelineInRepresentation

type Rep CmafWriteSegmentTimelineInRepresentation = D1 ('MetaData "CmafWriteSegmentTimelineInRepresentation" "Amazonka.MediaConvert.Types.CmafWriteSegmentTimelineInRepresentation" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmafWriteSegmentTimelineInRepresentation'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmafWriteSegmentTimelineInRepresentation") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmfcAudioDuration

newtype CmfcAudioDuration Source #

Specify this setting only when your output will be consumed by a downstream repackaging workflow that is sensitive to very small duration differences between video and audio. For this situation, choose Match video duration (MATCH_VIDEO_DURATION). In all other cases, keep the default value, Default codec duration (DEFAULT_CODEC_DURATION). When you choose Match video duration, MediaConvert pads the output audio streams with silence or trims them to ensure that the total duration of each audio stream is at least as long as the total duration of the video stream. After padding or trimming, the audio stream duration is no more than one frame longer than the video stream. MediaConvert applies audio padding or trimming only to the end of the last segment of the output. For unsegmented outputs, MediaConvert adds padding only to the end of the file. When you keep the default value, any minor discrepancies between audio and video duration will depend on your output audio codec.
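A quick decoding sketch using the FromJSON instance listed below together with aeson's eitherDecode; the MATCH_VIDEO_DURATION wire string is taken from the description above and assumed to match this library's encoding.

{-# LANGUAGE OverloadedStrings #-}
module Main where

import Amazonka.MediaConvert.Types (CmfcAudioDuration)
import Data.Aeson (eitherDecode)

main :: IO ()
main = do
  -- The enum is carried as a bare JSON string, so decode one directly.
  let result = eitherDecode "\"MATCH_VIDEO_DURATION\"" :: Either String CmfcAudioDuration
  case result of
    Left err  -> putStrLn ("decode failed: " <> err)
    Right dur -> print dur   -- Show instance from the generated code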

Instances

Instances details
Eq CmfcAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioDuration

Ord CmfcAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioDuration

Read CmfcAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioDuration

Show CmfcAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioDuration

Generic CmfcAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioDuration

Associated Types

type Rep CmfcAudioDuration :: Type -> Type #

NFData CmfcAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioDuration

Methods

rnf :: CmfcAudioDuration -> () #

Hashable CmfcAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioDuration

ToJSON CmfcAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioDuration

ToJSONKey CmfcAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioDuration

FromJSON CmfcAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioDuration

FromJSONKey CmfcAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioDuration

ToLog CmfcAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioDuration

ToHeader CmfcAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioDuration

ToQuery CmfcAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioDuration

FromXML CmfcAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioDuration

ToXML CmfcAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioDuration

ToByteString CmfcAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioDuration

FromText CmfcAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioDuration

ToText CmfcAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioDuration

type Rep CmfcAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioDuration

type Rep CmfcAudioDuration = D1 ('MetaData "CmfcAudioDuration" "Amazonka.MediaConvert.Types.CmfcAudioDuration" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmfcAudioDuration'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmfcAudioDuration") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmfcAudioTrackType

newtype CmfcAudioTrackType Source #

Use this setting to control the values that MediaConvert puts in your HLS parent playlist to control how the client player selects which audio track to play. The options for this setting determine the values that MediaConvert writes for the DEFAULT and AUTOSELECT attributes of the EXT-X-MEDIA entry for the audio variant. For more information about these attributes, see the Apple documentation article https://developer.apple.com/documentation/http_live_streaming/example_playlists_for_http_live_streaming/adding_alternate_media_to_a_playlist. Choose Alternate audio, auto select, default (ALTERNATE_AUDIO_AUTO_SELECT_DEFAULT) to set DEFAULT=YES and AUTOSELECT=YES. Choose this value for only one variant in your output group. Choose Alternate audio, auto select, not default (ALTERNATE_AUDIO_AUTO_SELECT) to set DEFAULT=NO and AUTOSELECT=YES. Choose Alternate audio, not auto select (ALTERNATE_AUDIO_NOT_AUTO_SELECT) to set DEFAULT=NO and AUTOSELECT=NO. When you don't specify a value for this setting, MediaConvert defaults to Alternate audio, auto select, default. When there is more than one variant in your output group, you must explicitly choose a value for this setting.
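To make the DEFAULT/AUTOSELECT mapping concrete, the sketch below pairs each track-type wire value with the attributes described above. It is a plain lookup written for this page, keyed on the newtype's underlying Text; the exact enum strings are assumed from the description rather than read from the library.

{-# LANGUAGE OverloadedStrings #-}
module Main where

import Amazonka.MediaConvert.Types
import Data.Text (Text)

-- (DEFAULT, AUTOSELECT) values for the EXT-X-MEDIA entry, per the prose above.
-- Returns Nothing for any value not covered by this illustration.
hlsAttributes :: CmfcAudioTrackType -> Maybe (Text, Text)
hlsAttributes tt =
  case fromCmfcAudioTrackType tt of
    "ALTERNATE_AUDIO_AUTO_SELECT_DEFAULT" -> Just ("YES", "YES")
    "ALTERNATE_AUDIO_AUTO_SELECT"         -> Just ("NO",  "YES")
    "ALTERNATE_AUDIO_NOT_AUTO_SELECT"     -> Just ("NO",  "NO")
    _                                     -> Nothing

main :: IO ()
main = print (hlsAttributes (CmfcAudioTrackType' "ALTERNATE_AUDIO_AUTO_SELECT"))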

Instances

Instances details
Eq CmfcAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioTrackType

Ord CmfcAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioTrackType

Read CmfcAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioTrackType

Show CmfcAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioTrackType

Generic CmfcAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioTrackType

Associated Types

type Rep CmfcAudioTrackType :: Type -> Type #

NFData CmfcAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioTrackType

Methods

rnf :: CmfcAudioTrackType -> () #

Hashable CmfcAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioTrackType

ToJSON CmfcAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioTrackType

ToJSONKey CmfcAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioTrackType

FromJSON CmfcAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioTrackType

FromJSONKey CmfcAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioTrackType

ToLog CmfcAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioTrackType

ToHeader CmfcAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioTrackType

ToQuery CmfcAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioTrackType

FromXML CmfcAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioTrackType

ToXML CmfcAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioTrackType

ToByteString CmfcAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioTrackType

FromText CmfcAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioTrackType

ToText CmfcAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioTrackType

type Rep CmfcAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcAudioTrackType

type Rep CmfcAudioTrackType = D1 ('MetaData "CmfcAudioTrackType" "Amazonka.MediaConvert.Types.CmfcAudioTrackType" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmfcAudioTrackType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmfcAudioTrackType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmfcDescriptiveVideoServiceFlag

newtype CmfcDescriptiveVideoServiceFlag Source #

Specify whether to flag this audio track as descriptive video service (DVS) in your HLS parent manifest. When you choose Flag (FLAG), MediaConvert includes the parameter CHARACTERISTICS="public.accessibility.describes-video" in the EXT-X-MEDIA entry for this track. When you keep the default choice, Don't flag (DONT_FLAG), MediaConvert leaves this parameter out. The DVS flag can help with accessibility on Apple devices. For more information, see the Apple documentation.
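Purely as an illustration of the behaviour described above, this sketch emits the CHARACTERISTICS parameter only when the flag's underlying value is FLAG. The FLAG and DONT_FLAG strings are assumed from the description, and the helper is not part of the library.

{-# LANGUAGE OverloadedStrings #-}
module Main where

import Amazonka.MediaConvert.Types
import Data.Text (Text)

-- Extra EXT-X-MEDIA attributes implied by the DVS flag, per the prose above.
dvsCharacteristics :: CmfcDescriptiveVideoServiceFlag -> [Text]
dvsCharacteristics f
  | fromCmfcDescriptiveVideoServiceFlag f == "FLAG" =
      ["CHARACTERISTICS=\"public.accessibility.describes-video\""]
  | otherwise = []   -- DONT_FLAG (the default): leave the parameter out

main :: IO ()
main = print (dvsCharacteristics (CmfcDescriptiveVideoServiceFlag' "FLAG"))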

Instances

Instances details
Eq CmfcDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcDescriptiveVideoServiceFlag

Ord CmfcDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcDescriptiveVideoServiceFlag

Read CmfcDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcDescriptiveVideoServiceFlag

Show CmfcDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcDescriptiveVideoServiceFlag

Generic CmfcDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcDescriptiveVideoServiceFlag

Associated Types

type Rep CmfcDescriptiveVideoServiceFlag :: Type -> Type #

NFData CmfcDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcDescriptiveVideoServiceFlag

Hashable CmfcDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcDescriptiveVideoServiceFlag

ToJSON CmfcDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcDescriptiveVideoServiceFlag

ToJSONKey CmfcDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcDescriptiveVideoServiceFlag

FromJSON CmfcDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcDescriptiveVideoServiceFlag

FromJSONKey CmfcDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcDescriptiveVideoServiceFlag

ToLog CmfcDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcDescriptiveVideoServiceFlag

ToHeader CmfcDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcDescriptiveVideoServiceFlag

ToQuery CmfcDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcDescriptiveVideoServiceFlag

FromXML CmfcDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcDescriptiveVideoServiceFlag

ToXML CmfcDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcDescriptiveVideoServiceFlag

ToByteString CmfcDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcDescriptiveVideoServiceFlag

FromText CmfcDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcDescriptiveVideoServiceFlag

ToText CmfcDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcDescriptiveVideoServiceFlag

type Rep CmfcDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcDescriptiveVideoServiceFlag

type Rep CmfcDescriptiveVideoServiceFlag = D1 ('MetaData "CmfcDescriptiveVideoServiceFlag" "Amazonka.MediaConvert.Types.CmfcDescriptiveVideoServiceFlag" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmfcDescriptiveVideoServiceFlag'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmfcDescriptiveVideoServiceFlag") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmfcIFrameOnlyManifest

newtype CmfcIFrameOnlyManifest Source #

Choose Include (INCLUDE) to have MediaConvert generate an HLS child manifest that lists only the I-frames for this rendition, in addition to your regular manifest for this rendition. You might use this manifest as part of a workflow that creates preview functions for your video. MediaConvert adds both the I-frame only child manifest and the regular child manifest to the parent manifest. When you don't need the I-frame only child manifest, keep the default value Exclude (EXCLUDE).

Instances

Instances details
Eq CmfcIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcIFrameOnlyManifest

Ord CmfcIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcIFrameOnlyManifest

Read CmfcIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcIFrameOnlyManifest

Show CmfcIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcIFrameOnlyManifest

Generic CmfcIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcIFrameOnlyManifest

Associated Types

type Rep CmfcIFrameOnlyManifest :: Type -> Type #

NFData CmfcIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcIFrameOnlyManifest

Methods

rnf :: CmfcIFrameOnlyManifest -> () #

Hashable CmfcIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcIFrameOnlyManifest

ToJSON CmfcIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcIFrameOnlyManifest

ToJSONKey CmfcIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcIFrameOnlyManifest

FromJSON CmfcIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcIFrameOnlyManifest

FromJSONKey CmfcIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcIFrameOnlyManifest

ToLog CmfcIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcIFrameOnlyManifest

ToHeader CmfcIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcIFrameOnlyManifest

ToQuery CmfcIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcIFrameOnlyManifest

FromXML CmfcIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcIFrameOnlyManifest

ToXML CmfcIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcIFrameOnlyManifest

ToByteString CmfcIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcIFrameOnlyManifest

FromText CmfcIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcIFrameOnlyManifest

ToText CmfcIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcIFrameOnlyManifest

type Rep CmfcIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcIFrameOnlyManifest

type Rep CmfcIFrameOnlyManifest = D1 ('MetaData "CmfcIFrameOnlyManifest" "Amazonka.MediaConvert.Types.CmfcIFrameOnlyManifest" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmfcIFrameOnlyManifest'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmfcIFrameOnlyManifest") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmfcScte35Esam

newtype CmfcScte35Esam Source #

Use this setting only when you specify SCTE-35 markers from ESAM. Choose INSERT to put SCTE-35 markers in this output at the insertion points that you specify in an ESAM XML document. Provide the document in the setting SCC XML (sccXml).

Constructors

CmfcScte35Esam' 

Instances

Instances details
Eq CmfcScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Esam

Ord CmfcScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Esam

Read CmfcScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Esam

Show CmfcScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Esam

Generic CmfcScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Esam

Associated Types

type Rep CmfcScte35Esam :: Type -> Type #

NFData CmfcScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Esam

Methods

rnf :: CmfcScte35Esam -> () #

Hashable CmfcScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Esam

ToJSON CmfcScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Esam

ToJSONKey CmfcScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Esam

FromJSON CmfcScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Esam

FromJSONKey CmfcScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Esam

ToLog CmfcScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Esam

ToHeader CmfcScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Esam

ToQuery CmfcScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Esam

FromXML CmfcScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Esam

ToXML CmfcScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Esam

Methods

toXML :: CmfcScte35Esam -> XML #

ToByteString CmfcScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Esam

FromText CmfcScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Esam

ToText CmfcScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Esam

type Rep CmfcScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Esam

type Rep CmfcScte35Esam = D1 ('MetaData "CmfcScte35Esam" "Amazonka.MediaConvert.Types.CmfcScte35Esam" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmfcScte35Esam'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmfcScte35Esam") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CmfcScte35Source

newtype CmfcScte35Source Source #

Ignore this setting unless you have SCTE-35 markers in your input video file. Choose Passthrough (PASSTHROUGH) if you want SCTE-35 markers that appear in your input to also appear in this output. Choose None (NONE) if you don't want those SCTE-35 markers in this output.

Instances

Instances details
Eq CmfcScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Source

Ord CmfcScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Source

Read CmfcScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Source

Show CmfcScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Source

Generic CmfcScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Source

Associated Types

type Rep CmfcScte35Source :: Type -> Type #

NFData CmfcScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Source

Methods

rnf :: CmfcScte35Source -> () #

Hashable CmfcScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Source

ToJSON CmfcScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Source

ToJSONKey CmfcScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Source

FromJSON CmfcScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Source

FromJSONKey CmfcScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Source

ToLog CmfcScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Source

ToHeader CmfcScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Source

ToQuery CmfcScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Source

FromXML CmfcScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Source

ToXML CmfcScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Source

ToByteString CmfcScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Source

FromText CmfcScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Source

ToText CmfcScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Source

type Rep CmfcScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcScte35Source

type Rep CmfcScte35Source = D1 ('MetaData "CmfcScte35Source" "Amazonka.MediaConvert.Types.CmfcScte35Source" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CmfcScte35Source'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCmfcScte35Source") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

ColorMetadata

newtype ColorMetadata Source #

Choose Insert (INSERT) for this setting to include color metadata in this output. Choose Ignore (IGNORE) to exclude color metadata from this output. If you don't specify a value, the service sets this to Insert by default.
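Because the bundled patterns for this type are documented just below, they can be used directly. This sketch relies only on those patterns, the Show instance, and the fromColorMetadata field shown in the Rep.

module Main where

import Amazonka.MediaConvert.Types
import qualified Data.Text.IO as TIO

-- Match on the documented bundled patterns, falling through for any value
-- the API might add later (the patterns are not a COMPLETE set).
describeColorMetadata :: ColorMetadata -> String
describeColorMetadata cm =
  case cm of
    ColorMetadata_INSERT -> "include color metadata in this output"
    ColorMetadata_IGNORE -> "exclude color metadata from this output"
    _                    -> "unrecognised ColorMetadata value"

main :: IO ()
main = do
  putStrLn (describeColorMetadata ColorMetadata_INSERT)
  TIO.putStrLn (fromColorMetadata ColorMetadata_IGNORE)   -- prints the raw API string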

Constructors

ColorMetadata' 

Bundled Patterns

pattern ColorMetadata_IGNORE :: ColorMetadata 
pattern ColorMetadata_INSERT :: ColorMetadata 

Instances

Instances details
Eq ColorMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorMetadata

Ord ColorMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorMetadata

Read ColorMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorMetadata

Show ColorMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorMetadata

Generic ColorMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorMetadata

Associated Types

type Rep ColorMetadata :: Type -> Type #

NFData ColorMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorMetadata

Methods

rnf :: ColorMetadata -> () #

Hashable ColorMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorMetadata

ToJSON ColorMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorMetadata

ToJSONKey ColorMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorMetadata

FromJSON ColorMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorMetadata

FromJSONKey ColorMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorMetadata

ToLog ColorMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorMetadata

ToHeader ColorMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorMetadata

ToQuery ColorMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorMetadata

FromXML ColorMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorMetadata

ToXML ColorMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorMetadata

Methods

toXML :: ColorMetadata -> XML #

ToByteString ColorMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorMetadata

FromText ColorMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorMetadata

ToText ColorMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorMetadata

Methods

toText :: ColorMetadata -> Text #

type Rep ColorMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorMetadata

type Rep ColorMetadata = D1 ('MetaData "ColorMetadata" "Amazonka.MediaConvert.Types.ColorMetadata" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "ColorMetadata'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromColorMetadata") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

ColorSpace

newtype ColorSpace Source #

If your input video has accurate color space metadata, or if you don't know your input video's color space, leave this set to the default value Follow (FOLLOW). The service will automatically detect your input color space. If your input video has metadata indicating the wrong color space, specify the accurate color space here. If your input video is HDR 10 and the SMPTE ST 2086 Mastering Display Color Volume static metadata isn't present in your video stream, or if that metadata is present but not accurate, choose Force HDR 10 (FORCE_HDR10) here and specify correct values in the input HDR 10 metadata (Hdr10Metadata) settings. For more information about MediaConvert HDR jobs, see https://docs.aws.amazon.com/console/mediaconvert/hdr.
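A brief JSON round-trip sketch using the ToJSON, FromJSON, and Eq instances listed below; the FORCE_HDR10 wire string is assumed from the description above rather than taken from a documented pattern.

{-# LANGUAGE OverloadedStrings #-}
module Main where

import Amazonka.MediaConvert.Types
import Data.Aeson (decode, encode)

main :: IO ()
main = do
  let cs = ColorSpace' "FORCE_HDR10"   -- assumed wire value, see description
      roundTripped = decode (encode cs) :: Maybe ColorSpace
  -- The Eq instance lets us confirm the round trip is lossless.
  print (roundTripped == Just cs)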

Constructors

ColorSpace' 

Fields

Instances

Instances details
Eq ColorSpace Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpace

Ord ColorSpace Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpace

Read ColorSpace Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpace

Show ColorSpace Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpace

Generic ColorSpace Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpace

Associated Types

type Rep ColorSpace :: Type -> Type #

NFData ColorSpace Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpace

Methods

rnf :: ColorSpace -> () #

Hashable ColorSpace Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpace

ToJSON ColorSpace Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpace

ToJSONKey ColorSpace Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpace

FromJSON ColorSpace Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpace

FromJSONKey ColorSpace Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpace

ToLog ColorSpace Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpace

ToHeader ColorSpace Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpace

ToQuery ColorSpace Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpace

FromXML ColorSpace Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpace

ToXML ColorSpace Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpace

Methods

toXML :: ColorSpace -> XML #

ToByteString ColorSpace Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpace

FromText ColorSpace Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpace

ToText ColorSpace Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpace

Methods

toText :: ColorSpace -> Text #

type Rep ColorSpace Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpace

type Rep ColorSpace = D1 ('MetaData "ColorSpace" "Amazonka.MediaConvert.Types.ColorSpace" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "ColorSpace'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromColorSpace") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

ColorSpaceConversion

newtype ColorSpaceConversion Source #

Specify the color space you want for this output. The service supports conversion between HDR formats, between SDR formats, from SDR to HDR, and from HDR to SDR. SDR to HDR conversion doesn't upgrade the dynamic range. The converted video has an HDR format, but visually appears the same as an unconverted output. HDR to SDR conversion uses Elemental tone mapping technology to approximate the outcome of manually regrading from HDR to SDR.

Instances

Instances details
Eq ColorSpaceConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceConversion

Ord ColorSpaceConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceConversion

Read ColorSpaceConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceConversion

Show ColorSpaceConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceConversion

Generic ColorSpaceConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceConversion

Associated Types

type Rep ColorSpaceConversion :: Type -> Type #

NFData ColorSpaceConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceConversion

Methods

rnf :: ColorSpaceConversion -> () #

Hashable ColorSpaceConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceConversion

ToJSON ColorSpaceConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceConversion

ToJSONKey ColorSpaceConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceConversion

FromJSON ColorSpaceConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceConversion

FromJSONKey ColorSpaceConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceConversion

ToLog ColorSpaceConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceConversion

ToHeader ColorSpaceConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceConversion

ToQuery ColorSpaceConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceConversion

FromXML ColorSpaceConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceConversion

ToXML ColorSpaceConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceConversion

ToByteString ColorSpaceConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceConversion

FromText ColorSpaceConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceConversion

ToText ColorSpaceConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceConversion

type Rep ColorSpaceConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceConversion

type Rep ColorSpaceConversion = D1 ('MetaData "ColorSpaceConversion" "Amazonka.MediaConvert.Types.ColorSpaceConversion" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "ColorSpaceConversion'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromColorSpaceConversion") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

ColorSpaceUsage

newtype ColorSpaceUsage Source #

There are two sources for color metadata: the input file and the job input settings Color space (ColorSpace) and HDR master display information settings (Hdr10Metadata). The Color space usage setting determines which takes precedence. Choose Force (FORCE) to use color metadata from the input job settings. If you don't specify values for those settings, the service defaults to using metadata from your input. Choose Fallback (FALLBACK) to use color metadata from the source when it is present. If there's no color metadata in your input file, the service defaults to using values you specify in the input settings.

Constructors

ColorSpaceUsage' 

Instances

Instances details
Eq ColorSpaceUsage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceUsage

Ord ColorSpaceUsage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceUsage

Read ColorSpaceUsage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceUsage

Show ColorSpaceUsage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceUsage

Generic ColorSpaceUsage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceUsage

Associated Types

type Rep ColorSpaceUsage :: Type -> Type #

NFData ColorSpaceUsage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceUsage

Methods

rnf :: ColorSpaceUsage -> () #

Hashable ColorSpaceUsage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceUsage

ToJSON ColorSpaceUsage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceUsage

ToJSONKey ColorSpaceUsage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceUsage

FromJSON ColorSpaceUsage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceUsage

FromJSONKey ColorSpaceUsage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceUsage

ToLog ColorSpaceUsage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceUsage

ToHeader ColorSpaceUsage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceUsage

ToQuery ColorSpaceUsage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceUsage

FromXML ColorSpaceUsage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceUsage

ToXML ColorSpaceUsage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceUsage

Methods

toXML :: ColorSpaceUsage -> XML #

ToByteString ColorSpaceUsage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceUsage

FromText ColorSpaceUsage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceUsage

ToText ColorSpaceUsage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceUsage

type Rep ColorSpaceUsage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorSpaceUsage

type Rep ColorSpaceUsage = D1 ('MetaData "ColorSpaceUsage" "Amazonka.MediaConvert.Types.ColorSpaceUsage" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "ColorSpaceUsage'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromColorSpaceUsage") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Commitment

newtype Commitment Source #

The length of the term of your reserved queue pricing plan commitment.
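
A small usage sketch based on the bundled pattern and record field documented below: match on Commitment_ONE_YEAR and fall back to the wrapped Text (via fromCommitment) for wire values that this SDK version doesn't name.

    {-# LANGUAGE OverloadedStrings #-}

    import Amazonka.MediaConvert.Types (Commitment (..))
    import Data.Text (Text)
    import qualified Data.Text.IO as Text

    -- Commitment_ONE_YEAR is the bundled pattern listed below; the fallback
    -- case keeps wire values that this SDK version doesn't name readable.
    describeCommitment :: Commitment -> Text
    describeCommitment Commitment_ONE_YEAR = "one-year reserved queue commitment"
    describeCommitment other               = "unrecognised commitment: " <> fromCommitment other

    main :: IO ()
    main = Text.putStrLn (describeCommitment Commitment_ONE_YEAR)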

Constructors

Commitment' 

Fields

Bundled Patterns

pattern Commitment_ONE_YEAR :: Commitment 

Instances

Instances details
Eq Commitment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Commitment

Ord Commitment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Commitment

Read Commitment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Commitment

Show Commitment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Commitment

Generic Commitment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Commitment

Associated Types

type Rep Commitment :: Type -> Type #

NFData Commitment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Commitment

Methods

rnf :: Commitment -> () #

Hashable Commitment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Commitment

ToJSON Commitment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Commitment

ToJSONKey Commitment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Commitment

FromJSON Commitment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Commitment

FromJSONKey Commitment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Commitment

ToLog Commitment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Commitment

ToHeader Commitment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Commitment

ToQuery Commitment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Commitment

FromXML Commitment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Commitment

ToXML Commitment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Commitment

Methods

toXML :: Commitment -> XML #

ToByteString Commitment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Commitment

FromText Commitment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Commitment

ToText Commitment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Commitment

Methods

toText :: Commitment -> Text #

type Rep Commitment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Commitment

type Rep Commitment = D1 ('MetaData "Commitment" "Amazonka.MediaConvert.Types.Commitment" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Commitment'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCommitment") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

ContainerType

newtype ContainerType Source #

Container for this output. Some containers require a container settings object. If not specified, the default object will be created.
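
A hedged sketch of serialising this setting with the ToJSON instance listed below. The wire value "MP4" is an assumed example, not taken from this page; substitute whichever container your output needs.

    {-# LANGUAGE OverloadedStrings #-}

    import Amazonka.MediaConvert.Types (ContainerType (..))
    import Data.Aeson (encode)

    main :: IO ()
    main = do
      -- "MP4" is an assumed example wire value, not taken from this page.
      let container = ContainerType' "MP4"
      -- The generated ToJSON instance should encode the wrapped Text as a JSON string.
      print (encode container)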

Constructors

ContainerType' 

Instances

Instances details
Eq ContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerType

Ord ContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerType

Read ContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerType

Show ContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerType

Generic ContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerType

Associated Types

type Rep ContainerType :: Type -> Type #

NFData ContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerType

Methods

rnf :: ContainerType -> () #

Hashable ContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerType

ToJSON ContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerType

ToJSONKey ContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerType

FromJSON ContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerType

FromJSONKey ContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerType

ToLog ContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerType

ToHeader ContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerType

ToQuery ContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerType

FromXML ContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerType

ToXML ContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerType

Methods

toXML :: ContainerType -> XML #

ToByteString ContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerType

FromText ContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerType

ToText ContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerType

Methods

toText :: ContainerType -> Text #

type Rep ContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerType

type Rep ContainerType = D1 ('MetaData "ContainerType" "Amazonka.MediaConvert.Types.ContainerType" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "ContainerType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromContainerType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CopyProtectionAction

newtype CopyProtectionAction Source #

The action to take on copy and redistribution control XDS packets. If you select PASSTHROUGH, packets will not be changed. If you select STRIP, any such packets will be removed from the output captions.
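
A sketch of parsing this setting with the FromJSON instance listed below. It assumes the CopyProtectionAction' constructor is exported in the same way as the constructors documented for Commitment and ColorSpaceUsage above, and that the JSON representation is the plain wire string; STRIP and PASSTHROUGH are the values named in the description.

    {-# LANGUAGE OverloadedStrings #-}

    import Amazonka.MediaConvert.Types (CopyProtectionAction (..))
    import Data.Aeson (decode)

    main :: IO ()
    main = do
      -- Parse the documented wire value STRIP from a JSON string.
      let parsed = decode "\"STRIP\"" :: Maybe CopyProtectionAction
      print (parsed == Just (CopyProtectionAction' "STRIP"))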

Instances

Instances details
Eq CopyProtectionAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CopyProtectionAction

Ord CopyProtectionAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CopyProtectionAction

Read CopyProtectionAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CopyProtectionAction

Show CopyProtectionAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CopyProtectionAction

Generic CopyProtectionAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CopyProtectionAction

Associated Types

type Rep CopyProtectionAction :: Type -> Type #

NFData CopyProtectionAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CopyProtectionAction

Methods

rnf :: CopyProtectionAction -> () #

Hashable CopyProtectionAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CopyProtectionAction

ToJSON CopyProtectionAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CopyProtectionAction

ToJSONKey CopyProtectionAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CopyProtectionAction

FromJSON CopyProtectionAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CopyProtectionAction

FromJSONKey CopyProtectionAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CopyProtectionAction

ToLog CopyProtectionAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CopyProtectionAction

ToHeader CopyProtectionAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CopyProtectionAction

ToQuery CopyProtectionAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CopyProtectionAction

FromXML CopyProtectionAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CopyProtectionAction

ToXML CopyProtectionAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CopyProtectionAction

ToByteString CopyProtectionAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CopyProtectionAction

FromText CopyProtectionAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CopyProtectionAction

ToText CopyProtectionAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CopyProtectionAction

type Rep CopyProtectionAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CopyProtectionAction

type Rep CopyProtectionAction = D1 ('MetaData "CopyProtectionAction" "Amazonka.MediaConvert.Types.CopyProtectionAction" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "CopyProtectionAction'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCopyProtectionAction") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DashIsoGroupAudioChannelConfigSchemeIdUri

newtype DashIsoGroupAudioChannelConfigSchemeIdUri Source #

Use this setting only when your audio codec is a Dolby one (AC3, EAC3, or Atmos) and your downstream workflow requires that your DASH manifest use the Dolby channel configuration tag, rather than the MPEG one. For example, you might need to use this to make dynamic ad insertion work. Specify which audio channel configuration scheme ID URI MediaConvert writes in your DASH manifest. Keep the default value, MPEG channel configuration (MPEG_CHANNEL_CONFIGURATION), to have MediaConvert write this: urn:mpeg:mpegB:cicp:ChannelConfiguration. Choose Dolby channel configuration (DOLBY_CHANNEL_CONFIGURATION) to have MediaConvert write this instead: tag:dolby.com,2014:dash:audio_channel_configuration:2011.
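
A sketch of choosing between the two documented scheme ID URIs, assuming the newtype constructor is exported like the documented ones above. The Bool parameter (whether the codec is Dolby and the packager needs the Dolby tag) is a local illustration, not a MediaConvert setting.

    {-# LANGUAGE OverloadedStrings #-}

    import Amazonka.MediaConvert.Types (DashIsoGroupAudioChannelConfigSchemeIdUri (..))

    -- Pick the Dolby tag only when the audio codec is a Dolby one and the
    -- downstream packager requires it, as described above; otherwise keep the
    -- MPEG default.
    schemeFor :: Bool -> DashIsoGroupAudioChannelConfigSchemeIdUri
    schemeFor needsDolbyTag
      | needsDolbyTag = DashIsoGroupAudioChannelConfigSchemeIdUri' "DOLBY_CHANNEL_CONFIGURATION"
      | otherwise     = DashIsoGroupAudioChannelConfigSchemeIdUri' "MPEG_CHANNEL_CONFIGURATION"

    main :: IO ()
    main = print (schemeFor True)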

Instances

Instances details
Eq DashIsoGroupAudioChannelConfigSchemeIdUri Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupAudioChannelConfigSchemeIdUri

Ord DashIsoGroupAudioChannelConfigSchemeIdUri Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupAudioChannelConfigSchemeIdUri

Read DashIsoGroupAudioChannelConfigSchemeIdUri Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupAudioChannelConfigSchemeIdUri

Show DashIsoGroupAudioChannelConfigSchemeIdUri Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupAudioChannelConfigSchemeIdUri

Generic DashIsoGroupAudioChannelConfigSchemeIdUri Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupAudioChannelConfigSchemeIdUri

NFData DashIsoGroupAudioChannelConfigSchemeIdUri Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupAudioChannelConfigSchemeIdUri

Hashable DashIsoGroupAudioChannelConfigSchemeIdUri Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupAudioChannelConfigSchemeIdUri

ToJSON DashIsoGroupAudioChannelConfigSchemeIdUri Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupAudioChannelConfigSchemeIdUri

ToJSONKey DashIsoGroupAudioChannelConfigSchemeIdUri Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupAudioChannelConfigSchemeIdUri

FromJSON DashIsoGroupAudioChannelConfigSchemeIdUri Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupAudioChannelConfigSchemeIdUri

FromJSONKey DashIsoGroupAudioChannelConfigSchemeIdUri Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupAudioChannelConfigSchemeIdUri

ToLog DashIsoGroupAudioChannelConfigSchemeIdUri Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupAudioChannelConfigSchemeIdUri

ToHeader DashIsoGroupAudioChannelConfigSchemeIdUri Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupAudioChannelConfigSchemeIdUri

ToQuery DashIsoGroupAudioChannelConfigSchemeIdUri Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupAudioChannelConfigSchemeIdUri

FromXML DashIsoGroupAudioChannelConfigSchemeIdUri Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupAudioChannelConfigSchemeIdUri

ToXML DashIsoGroupAudioChannelConfigSchemeIdUri Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupAudioChannelConfigSchemeIdUri

ToByteString DashIsoGroupAudioChannelConfigSchemeIdUri Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupAudioChannelConfigSchemeIdUri

FromText DashIsoGroupAudioChannelConfigSchemeIdUri Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupAudioChannelConfigSchemeIdUri

ToText DashIsoGroupAudioChannelConfigSchemeIdUri Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupAudioChannelConfigSchemeIdUri

type Rep DashIsoGroupAudioChannelConfigSchemeIdUri Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupAudioChannelConfigSchemeIdUri

type Rep DashIsoGroupAudioChannelConfigSchemeIdUri = D1 ('MetaData "DashIsoGroupAudioChannelConfigSchemeIdUri" "Amazonka.MediaConvert.Types.DashIsoGroupAudioChannelConfigSchemeIdUri" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DashIsoGroupAudioChannelConfigSchemeIdUri'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDashIsoGroupAudioChannelConfigSchemeIdUri") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DashIsoHbbtvCompliance

newtype DashIsoHbbtvCompliance Source #

Supports HbbTV specification as indicated

Instances

Instances details
Eq DashIsoHbbtvCompliance Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoHbbtvCompliance

Ord DashIsoHbbtvCompliance Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoHbbtvCompliance

Read DashIsoHbbtvCompliance Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoHbbtvCompliance

Show DashIsoHbbtvCompliance Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoHbbtvCompliance

Generic DashIsoHbbtvCompliance Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoHbbtvCompliance

Associated Types

type Rep DashIsoHbbtvCompliance :: Type -> Type #

NFData DashIsoHbbtvCompliance Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoHbbtvCompliance

Methods

rnf :: DashIsoHbbtvCompliance -> () #

Hashable DashIsoHbbtvCompliance Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoHbbtvCompliance

ToJSON DashIsoHbbtvCompliance Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoHbbtvCompliance

ToJSONKey DashIsoHbbtvCompliance Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoHbbtvCompliance

FromJSON DashIsoHbbtvCompliance Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoHbbtvCompliance

FromJSONKey DashIsoHbbtvCompliance Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoHbbtvCompliance

ToLog DashIsoHbbtvCompliance Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoHbbtvCompliance

ToHeader DashIsoHbbtvCompliance Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoHbbtvCompliance

ToQuery DashIsoHbbtvCompliance Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoHbbtvCompliance

FromXML DashIsoHbbtvCompliance Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoHbbtvCompliance

ToXML DashIsoHbbtvCompliance Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoHbbtvCompliance

ToByteString DashIsoHbbtvCompliance Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoHbbtvCompliance

FromText DashIsoHbbtvCompliance Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoHbbtvCompliance

ToText DashIsoHbbtvCompliance Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoHbbtvCompliance

type Rep DashIsoHbbtvCompliance Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoHbbtvCompliance

type Rep DashIsoHbbtvCompliance = D1 ('MetaData "DashIsoHbbtvCompliance" "Amazonka.MediaConvert.Types.DashIsoHbbtvCompliance" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DashIsoHbbtvCompliance'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDashIsoHbbtvCompliance") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DashIsoImageBasedTrickPlay

newtype DashIsoImageBasedTrickPlay Source #

Specify whether MediaConvert generates images for trick play. Keep the default value, None (NONE), to not generate any images. Choose Thumbnail (THUMBNAIL) to generate tiled thumbnails. Choose Thumbnail and full frame (THUMBNAIL_AND_FULLFRAME) to generate tiled thumbnails and full-resolution images of single frames. MediaConvert adds an entry in the .mpd manifest for each set of images that you generate. A common application for these images is Roku trick mode. The thumbnails and full-frame images that MediaConvert creates with this feature are compatible with this Roku specification: https://developer.roku.com/docs/developer-program/media-playback/trick-mode/hls-and-dash.md
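
A sketch that enumerates the three documented wire values and derives a simple predicate from them, assuming the newtype constructor is exported like the constructors documented elsewhere on this page.

    {-# LANGUAGE OverloadedStrings #-}

    import Amazonka.MediaConvert.Types (DashIsoImageBasedTrickPlay (..))

    -- The three wire values documented above.
    trickPlayModes :: [DashIsoImageBasedTrickPlay]
    trickPlayModes =
      map DashIsoImageBasedTrickPlay' ["NONE", "THUMBNAIL", "THUMBNAIL_AND_FULLFRAME"]

    -- True whenever MediaConvert would add image entries to the .mpd manifest.
    generatesImages :: DashIsoImageBasedTrickPlay -> Bool
    generatesImages mode = mode /= DashIsoImageBasedTrickPlay' "NONE"

    main :: IO ()
    main = mapM_ (print . generatesImages) trickPlayModes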

Instances

Instances details
Eq DashIsoImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlay

Ord DashIsoImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlay

Read DashIsoImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlay

Show DashIsoImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlay

Generic DashIsoImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlay

Associated Types

type Rep DashIsoImageBasedTrickPlay :: Type -> Type #

NFData DashIsoImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlay

Hashable DashIsoImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlay

ToJSON DashIsoImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlay

ToJSONKey DashIsoImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlay

FromJSON DashIsoImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlay

FromJSONKey DashIsoImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlay

ToLog DashIsoImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlay

ToHeader DashIsoImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlay

ToQuery DashIsoImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlay

FromXML DashIsoImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlay

ToXML DashIsoImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlay

ToByteString DashIsoImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlay

FromText DashIsoImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlay

ToText DashIsoImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlay

type Rep DashIsoImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlay

type Rep DashIsoImageBasedTrickPlay = D1 ('MetaData "DashIsoImageBasedTrickPlay" "Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlay" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DashIsoImageBasedTrickPlay'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDashIsoImageBasedTrickPlay") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DashIsoIntervalCadence

newtype DashIsoIntervalCadence Source #

The cadence MediaConvert follows for generating thumbnails. If set to FOLLOW_IFRAME, MediaConvert generates thumbnails for each IDR frame in the output (matching the GOP cadence). If set to FOLLOW_CUSTOM, MediaConvert generates thumbnails according to the interval you specify in thumbnailInterval.
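
A sketch that derives the cadence from whether a custom thumbnail interval is supplied, using the two wire values named above. The Maybe Int parameter is a local stand-in for the thumbnailInterval job setting, not something defined in this module.

    {-# LANGUAGE OverloadedStrings #-}

    import Amazonka.MediaConvert.Types (DashIsoIntervalCadence (..))

    -- A supplied interval (standing in for the thumbnailInterval job setting)
    -- selects FOLLOW_CUSTOM; otherwise thumbnails follow the IDR/GOP cadence.
    cadenceFor :: Maybe Int -> DashIsoIntervalCadence
    cadenceFor (Just _seconds) = DashIsoIntervalCadence' "FOLLOW_CUSTOM"
    cadenceFor Nothing         = DashIsoIntervalCadence' "FOLLOW_IFRAME"

    main :: IO ()
    main = print (cadenceFor (Just 10), cadenceFor Nothing)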

Instances

Instances details
Eq DashIsoIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoIntervalCadence

Ord DashIsoIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoIntervalCadence

Read DashIsoIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoIntervalCadence

Show DashIsoIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoIntervalCadence

Generic DashIsoIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoIntervalCadence

Associated Types

type Rep DashIsoIntervalCadence :: Type -> Type #

NFData DashIsoIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoIntervalCadence

Methods

rnf :: DashIsoIntervalCadence -> () #

Hashable DashIsoIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoIntervalCadence

ToJSON DashIsoIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoIntervalCadence

ToJSONKey DashIsoIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoIntervalCadence

FromJSON DashIsoIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoIntervalCadence

FromJSONKey DashIsoIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoIntervalCadence

ToLog DashIsoIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoIntervalCadence

ToHeader DashIsoIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoIntervalCadence

ToQuery DashIsoIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoIntervalCadence

FromXML DashIsoIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoIntervalCadence

ToXML DashIsoIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoIntervalCadence

ToByteString DashIsoIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoIntervalCadence

FromText DashIsoIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoIntervalCadence

ToText DashIsoIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoIntervalCadence

type Rep DashIsoIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoIntervalCadence

type Rep DashIsoIntervalCadence = D1 ('MetaData "DashIsoIntervalCadence" "Amazonka.MediaConvert.Types.DashIsoIntervalCadence" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DashIsoIntervalCadence'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDashIsoIntervalCadence") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DashIsoMpdProfile

newtype DashIsoMpdProfile Source #

Specify whether your DASH profile is on-demand or main. When you choose Main profile (MAIN_PROFILE), the service signals urn:mpeg:dash:profile:isoff-main:2011 in your .mpd DASH manifest. When you choose On-demand (ON_DEMAND_PROFILE), the service signals urn:mpeg:dash:profile:isoff-on-demand:2011 in your .mpd. When you choose On-demand, you must also set the output group setting Segment control (SegmentControl) to Single file (SINGLE_FILE).
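
A sketch of checking the constraint stated above (On-demand requires Segment control set to Single file) before submitting a job. It uses only the wire values named in this description and in the DashIsoSegmentControl description further down, and assumes both newtype constructors are exported like the documented ones.

    {-# LANGUAGE OverloadedStrings #-}

    import Amazonka.MediaConvert.Types
      ( DashIsoMpdProfile (..)
      , DashIsoSegmentControl (..)
      )

    -- Mirrors the constraint stated above: an On-demand profile requires the
    -- output group's Segment control to be Single file.
    profileIsConsistent :: DashIsoMpdProfile -> DashIsoSegmentControl -> Bool
    profileIsConsistent profile segmentControl
      | profile == DashIsoMpdProfile' "ON_DEMAND_PROFILE" =
          segmentControl == DashIsoSegmentControl' "SINGLE_FILE"
      | otherwise = True

    main :: IO ()
    main =
      print (profileIsConsistent (DashIsoMpdProfile' "ON_DEMAND_PROFILE")
                                 (DashIsoSegmentControl' "SEGMENTED_FILES"))  -- False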

Instances

Instances details
Eq DashIsoMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoMpdProfile

Ord DashIsoMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoMpdProfile

Read DashIsoMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoMpdProfile

Show DashIsoMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoMpdProfile

Generic DashIsoMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoMpdProfile

Associated Types

type Rep DashIsoMpdProfile :: Type -> Type #

NFData DashIsoMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoMpdProfile

Methods

rnf :: DashIsoMpdProfile -> () #

Hashable DashIsoMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoMpdProfile

ToJSON DashIsoMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoMpdProfile

ToJSONKey DashIsoMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoMpdProfile

FromJSON DashIsoMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoMpdProfile

FromJSONKey DashIsoMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoMpdProfile

ToLog DashIsoMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoMpdProfile

ToHeader DashIsoMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoMpdProfile

ToQuery DashIsoMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoMpdProfile

FromXML DashIsoMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoMpdProfile

ToXML DashIsoMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoMpdProfile

ToByteString DashIsoMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoMpdProfile

FromText DashIsoMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoMpdProfile

ToText DashIsoMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoMpdProfile

type Rep DashIsoMpdProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoMpdProfile

type Rep DashIsoMpdProfile = D1 ('MetaData "DashIsoMpdProfile" "Amazonka.MediaConvert.Types.DashIsoMpdProfile" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DashIsoMpdProfile'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDashIsoMpdProfile") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DashIsoPlaybackDeviceCompatibility

newtype DashIsoPlaybackDeviceCompatibility Source #

This setting can improve the compatibility of your output with video players on obsolete devices. It applies only to DASH H.264 outputs with DRM encryption. Choose Unencrypted SEI (UNENCRYPTED_SEI) only to correct problems with playback on older devices. Otherwise, keep the default setting CENC v1 (CENC_V1). If you choose Unencrypted SEI, for that output, the service will exclude the access unit delimiter and will leave the SEI NAL units unencrypted.
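
A sketch using the Show and Read instances listed below to round-trip a value through its textual form, which can be handy when debugging job settings. It assumes the stock derived instances, so read . show returns the original value; UNENCRYPTED_SEI is one of the wire values named above.

    {-# LANGUAGE OverloadedStrings #-}

    import Amazonka.MediaConvert.Types (DashIsoPlaybackDeviceCompatibility (..))

    main :: IO ()
    main = do
      let original = DashIsoPlaybackDeviceCompatibility' "UNENCRYPTED_SEI"
          -- With the derived Show/Read instances listed below, a value
          -- round-trips through its textual representation.
          reparsed = read (show original) :: DashIsoPlaybackDeviceCompatibility
      print (reparsed == original)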

Instances

Instances details
Eq DashIsoPlaybackDeviceCompatibility Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPlaybackDeviceCompatibility

Ord DashIsoPlaybackDeviceCompatibility Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPlaybackDeviceCompatibility

Read DashIsoPlaybackDeviceCompatibility Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPlaybackDeviceCompatibility

Show DashIsoPlaybackDeviceCompatibility Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPlaybackDeviceCompatibility

Generic DashIsoPlaybackDeviceCompatibility Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPlaybackDeviceCompatibility

Associated Types

type Rep DashIsoPlaybackDeviceCompatibility :: Type -> Type #

NFData DashIsoPlaybackDeviceCompatibility Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPlaybackDeviceCompatibility

Hashable DashIsoPlaybackDeviceCompatibility Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPlaybackDeviceCompatibility

ToJSON DashIsoPlaybackDeviceCompatibility Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPlaybackDeviceCompatibility

ToJSONKey DashIsoPlaybackDeviceCompatibility Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPlaybackDeviceCompatibility

FromJSON DashIsoPlaybackDeviceCompatibility Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPlaybackDeviceCompatibility

FromJSONKey DashIsoPlaybackDeviceCompatibility Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPlaybackDeviceCompatibility

ToLog DashIsoPlaybackDeviceCompatibility Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPlaybackDeviceCompatibility

ToHeader DashIsoPlaybackDeviceCompatibility Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPlaybackDeviceCompatibility

ToQuery DashIsoPlaybackDeviceCompatibility Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPlaybackDeviceCompatibility

FromXML DashIsoPlaybackDeviceCompatibility Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPlaybackDeviceCompatibility

ToXML DashIsoPlaybackDeviceCompatibility Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPlaybackDeviceCompatibility

ToByteString DashIsoPlaybackDeviceCompatibility Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPlaybackDeviceCompatibility

FromText DashIsoPlaybackDeviceCompatibility Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPlaybackDeviceCompatibility

ToText DashIsoPlaybackDeviceCompatibility Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPlaybackDeviceCompatibility

type Rep DashIsoPlaybackDeviceCompatibility Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPlaybackDeviceCompatibility

type Rep DashIsoPlaybackDeviceCompatibility = D1 ('MetaData "DashIsoPlaybackDeviceCompatibility" "Amazonka.MediaConvert.Types.DashIsoPlaybackDeviceCompatibility" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DashIsoPlaybackDeviceCompatibility'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDashIsoPlaybackDeviceCompatibility") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DashIsoPtsOffsetHandlingForBFrames

newtype DashIsoPtsOffsetHandlingForBFrames Source #

Use this setting only when your output video stream has B-frames, which causes the initial presentation time stamp (PTS) to be offset from the initial decode time stamp (DTS). Specify how MediaConvert handles PTS when writing time stamps in output DASH manifests. Choose Match initial PTS (MATCH_INITIAL_PTS) when you want MediaConvert to use the initial PTS as the first time stamp in the manifest. Choose Zero-based (ZERO_BASED) to have MediaConvert ignore the initial PTS in the video stream and instead write the initial time stamp as zero in the manifest. For outputs that don't have B-frames, the time stamps in your DASH manifests start at zero regardless of your choice here.

Instances

Instances details
Eq DashIsoPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPtsOffsetHandlingForBFrames

Ord DashIsoPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPtsOffsetHandlingForBFrames

Read DashIsoPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPtsOffsetHandlingForBFrames

Show DashIsoPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPtsOffsetHandlingForBFrames

Generic DashIsoPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPtsOffsetHandlingForBFrames

Associated Types

type Rep DashIsoPtsOffsetHandlingForBFrames :: Type -> Type #

NFData DashIsoPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPtsOffsetHandlingForBFrames

Hashable DashIsoPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPtsOffsetHandlingForBFrames

ToJSON DashIsoPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPtsOffsetHandlingForBFrames

ToJSONKey DashIsoPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPtsOffsetHandlingForBFrames

FromJSON DashIsoPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPtsOffsetHandlingForBFrames

FromJSONKey DashIsoPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPtsOffsetHandlingForBFrames

ToLog DashIsoPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPtsOffsetHandlingForBFrames

ToHeader DashIsoPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPtsOffsetHandlingForBFrames

ToQuery DashIsoPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPtsOffsetHandlingForBFrames

FromXML DashIsoPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPtsOffsetHandlingForBFrames

ToXML DashIsoPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPtsOffsetHandlingForBFrames

ToByteString DashIsoPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPtsOffsetHandlingForBFrames

FromText DashIsoPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPtsOffsetHandlingForBFrames

ToText DashIsoPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPtsOffsetHandlingForBFrames

type Rep DashIsoPtsOffsetHandlingForBFrames Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoPtsOffsetHandlingForBFrames

type Rep DashIsoPtsOffsetHandlingForBFrames = D1 ('MetaData "DashIsoPtsOffsetHandlingForBFrames" "Amazonka.MediaConvert.Types.DashIsoPtsOffsetHandlingForBFrames" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DashIsoPtsOffsetHandlingForBFrames'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDashIsoPtsOffsetHandlingForBFrames") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DashIsoSegmentControl

newtype DashIsoSegmentControl Source #

When set to SINGLE_FILE, a single output file is generated, which is internally segmented using the Fragment Length and Segment Length. When set to SEGMENTED_FILES, separate segment files will be created.

Instances

Instances details
Eq DashIsoSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentControl

Ord DashIsoSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentControl

Read DashIsoSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentControl

Show DashIsoSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentControl

Generic DashIsoSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentControl

Associated Types

type Rep DashIsoSegmentControl :: Type -> Type #

NFData DashIsoSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentControl

Methods

rnf :: DashIsoSegmentControl -> () #

Hashable DashIsoSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentControl

ToJSON DashIsoSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentControl

ToJSONKey DashIsoSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentControl

FromJSON DashIsoSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentControl

FromJSONKey DashIsoSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentControl

ToLog DashIsoSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentControl

ToHeader DashIsoSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentControl

ToQuery DashIsoSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentControl

FromXML DashIsoSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentControl

ToXML DashIsoSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentControl

ToByteString DashIsoSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentControl

FromText DashIsoSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentControl

ToText DashIsoSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentControl

type Rep DashIsoSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentControl

type Rep DashIsoSegmentControl = D1 ('MetaData "DashIsoSegmentControl" "Amazonka.MediaConvert.Types.DashIsoSegmentControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DashIsoSegmentControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDashIsoSegmentControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DashIsoSegmentLengthControl

newtype DashIsoSegmentLengthControl Source #

Specify how you want MediaConvert to determine the segment length. Choose Exact (EXACT) to have the encoder use the exact length that you specify with the setting Segment length (SegmentLength). This might result in extra I-frames. Choose Multiple of GOP (GOP_MULTIPLE) to have the encoder round up the segment lengths to match the next GOP boundary.

Instances

Instances details
Eq DashIsoSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentLengthControl

Ord DashIsoSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentLengthControl

Read DashIsoSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentLengthControl

Show DashIsoSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentLengthControl

Generic DashIsoSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentLengthControl

Associated Types

type Rep DashIsoSegmentLengthControl :: Type -> Type #

NFData DashIsoSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentLengthControl

Hashable DashIsoSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentLengthControl

ToJSON DashIsoSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentLengthControl

ToJSONKey DashIsoSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentLengthControl

FromJSON DashIsoSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentLengthControl

FromJSONKey DashIsoSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentLengthControl

ToLog DashIsoSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentLengthControl

ToHeader DashIsoSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentLengthControl

ToQuery DashIsoSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentLengthControl

FromXML DashIsoSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentLengthControl

ToXML DashIsoSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentLengthControl

ToByteString DashIsoSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentLengthControl

FromText DashIsoSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentLengthControl

ToText DashIsoSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentLengthControl

type Rep DashIsoSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoSegmentLengthControl

type Rep DashIsoSegmentLengthControl = D1 ('MetaData "DashIsoSegmentLengthControl" "Amazonka.MediaConvert.Types.DashIsoSegmentLengthControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DashIsoSegmentLengthControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDashIsoSegmentLengthControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DashIsoWriteSegmentTimelineInRepresentation

newtype DashIsoWriteSegmentTimelineInRepresentation Source #

When you enable Precise segment duration in manifests (writeSegmentTimelineInRepresentation), your DASH manifest shows precise segment durations. The segment duration information appears inside the SegmentTimeline element, inside SegmentTemplate at the Representation level. When this feature isn't enabled, the segment durations in your DASH manifest are approximate. The segment duration information appears in the duration attribute of the SegmentTemplate element.

Instances

Instances details
Eq DashIsoWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoWriteSegmentTimelineInRepresentation

Ord DashIsoWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoWriteSegmentTimelineInRepresentation

Read DashIsoWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoWriteSegmentTimelineInRepresentation

Show DashIsoWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoWriteSegmentTimelineInRepresentation

Generic DashIsoWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoWriteSegmentTimelineInRepresentation

NFData DashIsoWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoWriteSegmentTimelineInRepresentation

Hashable DashIsoWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoWriteSegmentTimelineInRepresentation

ToJSON DashIsoWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoWriteSegmentTimelineInRepresentation

ToJSONKey DashIsoWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoWriteSegmentTimelineInRepresentation

FromJSON DashIsoWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoWriteSegmentTimelineInRepresentation

FromJSONKey DashIsoWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoWriteSegmentTimelineInRepresentation

ToLog DashIsoWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoWriteSegmentTimelineInRepresentation

ToHeader DashIsoWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoWriteSegmentTimelineInRepresentation

ToQuery DashIsoWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoWriteSegmentTimelineInRepresentation

FromXML DashIsoWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoWriteSegmentTimelineInRepresentation

ToXML DashIsoWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoWriteSegmentTimelineInRepresentation

ToByteString DashIsoWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoWriteSegmentTimelineInRepresentation

FromText DashIsoWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoWriteSegmentTimelineInRepresentation

ToText DashIsoWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoWriteSegmentTimelineInRepresentation

type Rep DashIsoWriteSegmentTimelineInRepresentation Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoWriteSegmentTimelineInRepresentation

type Rep DashIsoWriteSegmentTimelineInRepresentation = D1 ('MetaData "DashIsoWriteSegmentTimelineInRepresentation" "Amazonka.MediaConvert.Types.DashIsoWriteSegmentTimelineInRepresentation" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DashIsoWriteSegmentTimelineInRepresentation'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDashIsoWriteSegmentTimelineInRepresentation") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DecryptionMode

newtype DecryptionMode Source #

Specify the encryption mode that you used to encrypt your input files.

Constructors

DecryptionMode' 

Instances

Instances details
Eq DecryptionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DecryptionMode

Ord DecryptionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DecryptionMode

Read DecryptionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DecryptionMode

Show DecryptionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DecryptionMode

Generic DecryptionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DecryptionMode

Associated Types

type Rep DecryptionMode :: Type -> Type #

NFData DecryptionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DecryptionMode

Methods

rnf :: DecryptionMode -> () #

Hashable DecryptionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DecryptionMode

ToJSON DecryptionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DecryptionMode

ToJSONKey DecryptionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DecryptionMode

FromJSON DecryptionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DecryptionMode

FromJSONKey DecryptionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DecryptionMode

ToLog DecryptionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DecryptionMode

ToHeader DecryptionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DecryptionMode

ToQuery DecryptionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DecryptionMode

FromXML DecryptionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DecryptionMode

ToXML DecryptionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DecryptionMode

Methods

toXML :: DecryptionMode -> XML #

ToByteString DecryptionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DecryptionMode

FromText DecryptionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DecryptionMode

ToText DecryptionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DecryptionMode

type Rep DecryptionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DecryptionMode

type Rep DecryptionMode = D1 ('MetaData "DecryptionMode" "Amazonka.MediaConvert.Types.DecryptionMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DecryptionMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDecryptionMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DeinterlaceAlgorithm

newtype DeinterlaceAlgorithm Source #

Only applies when you set Deinterlacer (DeinterlaceMode) to Deinterlace (DEINTERLACE) or Adaptive (ADAPTIVE). Motion adaptive interpolate (INTERPOLATE) produces sharper pictures, while blend (BLEND) produces smoother motion. Use Interpolate ticker (INTERPOLATE_TICKER) or Blend ticker (BLEND_TICKER) if your source file includes a ticker, such as a scrolling headline at the bottom of the frame.
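
A sketch mapping a picture-quality preference and a has-ticker flag onto the four wire values named above. The Preference type is a local illustration only, and the newtype constructor is assumed to be exported like the documented ones.

    {-# LANGUAGE OverloadedStrings #-}

    import Amazonka.MediaConvert.Types (DeinterlaceAlgorithm (..))

    -- Local illustration of the trade-off described above: interpolate for
    -- sharper pictures, blend for smoother motion, with the *_TICKER variants
    -- when the source includes a scrolling ticker.
    data Preference = Sharper | Smoother

    algorithmFor :: Preference -> Bool -> DeinterlaceAlgorithm
    algorithmFor Sharper  hasTicker =
      DeinterlaceAlgorithm' (if hasTicker then "INTERPOLATE_TICKER" else "INTERPOLATE")
    algorithmFor Smoother hasTicker =
      DeinterlaceAlgorithm' (if hasTicker then "BLEND_TICKER" else "BLEND")

    main :: IO ()
    main = print (algorithmFor Sharper True)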

Instances

Instances details
Eq DeinterlaceAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlaceAlgorithm

Ord DeinterlaceAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlaceAlgorithm

Read DeinterlaceAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlaceAlgorithm

Show DeinterlaceAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlaceAlgorithm

Generic DeinterlaceAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlaceAlgorithm

Associated Types

type Rep DeinterlaceAlgorithm :: Type -> Type #

NFData DeinterlaceAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlaceAlgorithm

Methods

rnf :: DeinterlaceAlgorithm -> () #

Hashable DeinterlaceAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlaceAlgorithm

ToJSON DeinterlaceAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlaceAlgorithm

ToJSONKey DeinterlaceAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlaceAlgorithm

FromJSON DeinterlaceAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlaceAlgorithm

FromJSONKey DeinterlaceAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlaceAlgorithm

ToLog DeinterlaceAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlaceAlgorithm

ToHeader DeinterlaceAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlaceAlgorithm

ToQuery DeinterlaceAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlaceAlgorithm

FromXML DeinterlaceAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlaceAlgorithm

ToXML DeinterlaceAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlaceAlgorithm

ToByteString DeinterlaceAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlaceAlgorithm

FromText DeinterlaceAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlaceAlgorithm

ToText DeinterlaceAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlaceAlgorithm

type Rep DeinterlaceAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlaceAlgorithm

type Rep DeinterlaceAlgorithm = D1 ('MetaData "DeinterlaceAlgorithm" "Amazonka.MediaConvert.Types.DeinterlaceAlgorithm" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DeinterlaceAlgorithm'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDeinterlaceAlgorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DeinterlacerControl

newtype DeinterlacerControl Source #

  • When set to NORMAL (default), the deinterlacer does not convert frames that are tagged in metadata as progressive. It will only convert those that are tagged as some other type.
  • When set to FORCE_ALL_FRAMES, the deinterlacer converts every frame to progressive, even those that are already tagged as progressive. Turn Force mode on only if there is a good chance that the metadata has tagged frames as progressive when they are not. Do not turn it on otherwise; reprocessing frames that are already progressive will probably result in lower-quality video.

Instances

Instances details
Eq DeinterlacerControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerControl

Ord DeinterlacerControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerControl

Read DeinterlacerControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerControl

Show DeinterlacerControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerControl

Generic DeinterlacerControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerControl

Associated Types

type Rep DeinterlacerControl :: Type -> Type #

NFData DeinterlacerControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerControl

Methods

rnf :: DeinterlacerControl -> () #

Hashable DeinterlacerControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerControl

ToJSON DeinterlacerControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerControl

ToJSONKey DeinterlacerControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerControl

FromJSON DeinterlacerControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerControl

FromJSONKey DeinterlacerControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerControl

ToLog DeinterlacerControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerControl

ToHeader DeinterlacerControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerControl

ToQuery DeinterlacerControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerControl

FromXML DeinterlacerControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerControl

ToXML DeinterlacerControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerControl

ToByteString DeinterlacerControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerControl

FromText DeinterlacerControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerControl

ToText DeinterlacerControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerControl

type Rep DeinterlacerControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerControl

type Rep DeinterlacerControl = D1 ('MetaData "DeinterlacerControl" "Amazonka.MediaConvert.Types.DeinterlacerControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DeinterlacerControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDeinterlacerControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DeinterlacerMode

newtype DeinterlacerMode Source #

Use Deinterlacer (DeinterlaceMode) to choose how the service will do deinterlacing. The default is Deinterlace.
• Deinterlace converts interlaced video to progressive.
• Inverse telecine converts Hard Telecine 29.97i to progressive 23.976p.
• Adaptive auto-detects and converts to progressive.
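
An illustrative sketch of the three modes listed above. The exact wire strings (DEINTERLACE, INVERSE_TELECINE, ADAPTIVE) are assumptions derived from the option names in the description, not values confirmed by this documentation.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (DeinterlacerMode (..))

-- NOTE: the wire strings below are assumed spellings of the modes
-- named in the description.
deinterlace, inverseTelecine, adaptive :: DeinterlacerMode
deinterlace     = DeinterlacerMode' "DEINTERLACE"
inverseTelecine = DeinterlacerMode' "INVERSE_TELECINE"
adaptive        = DeinterlacerMode' "ADAPTIVE"

main :: IO ()
main = mapM_ (print . fromDeinterlacerMode) [deinterlace, inverseTelecine, adaptive]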

Instances

Instances details
Eq DeinterlacerMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerMode

Ord DeinterlacerMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerMode

Read DeinterlacerMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerMode

Show DeinterlacerMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerMode

Generic DeinterlacerMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerMode

Associated Types

type Rep DeinterlacerMode :: Type -> Type #

NFData DeinterlacerMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerMode

Methods

rnf :: DeinterlacerMode -> () #

Hashable DeinterlacerMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerMode

ToJSON DeinterlacerMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerMode

ToJSONKey DeinterlacerMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerMode

FromJSON DeinterlacerMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerMode

FromJSONKey DeinterlacerMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerMode

ToLog DeinterlacerMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerMode

ToHeader DeinterlacerMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerMode

ToQuery DeinterlacerMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerMode

FromXML DeinterlacerMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerMode

ToXML DeinterlacerMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerMode

ToByteString DeinterlacerMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerMode

FromText DeinterlacerMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerMode

ToText DeinterlacerMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerMode

type Rep DeinterlacerMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DeinterlacerMode

type Rep DeinterlacerMode = D1 ('MetaData "DeinterlacerMode" "Amazonka.MediaConvert.Types.DeinterlacerMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DeinterlacerMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDeinterlacerMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DescribeEndpointsMode

newtype DescribeEndpointsMode Source #

Optional field, defaults to DEFAULT. Specify DEFAULT for this operation to return your endpoints if any exist, or to create an endpoint for you and return it if one doesn't already exist. Specify GET_ONLY to return your endpoints if any exist, or an empty list if none exist.
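
A small illustrative helper (the name endpointsMode is ours) that picks between the two values named above, using the exported constructor and record field.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (DescribeEndpointsMode (..))

-- Pick GET_ONLY when you only want to list existing endpoints and never
-- create one as a side effect; DEFAULT otherwise.
endpointsMode :: Bool -> DescribeEndpointsMode
endpointsMode readOnly
  | readOnly  = DescribeEndpointsMode' "GET_ONLY"
  | otherwise = DescribeEndpointsMode' "DEFAULT"

main :: IO ()
main = print (fromDescribeEndpointsMode (endpointsMode True))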

Instances

Instances details
Eq DescribeEndpointsMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DescribeEndpointsMode

Ord DescribeEndpointsMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DescribeEndpointsMode

Read DescribeEndpointsMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DescribeEndpointsMode

Show DescribeEndpointsMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DescribeEndpointsMode

Generic DescribeEndpointsMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DescribeEndpointsMode

Associated Types

type Rep DescribeEndpointsMode :: Type -> Type #

NFData DescribeEndpointsMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DescribeEndpointsMode

Methods

rnf :: DescribeEndpointsMode -> () #

Hashable DescribeEndpointsMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DescribeEndpointsMode

ToJSON DescribeEndpointsMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DescribeEndpointsMode

ToJSONKey DescribeEndpointsMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DescribeEndpointsMode

FromJSON DescribeEndpointsMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DescribeEndpointsMode

FromJSONKey DescribeEndpointsMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DescribeEndpointsMode

ToLog DescribeEndpointsMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DescribeEndpointsMode

ToHeader DescribeEndpointsMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DescribeEndpointsMode

ToQuery DescribeEndpointsMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DescribeEndpointsMode

FromXML DescribeEndpointsMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DescribeEndpointsMode

ToXML DescribeEndpointsMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DescribeEndpointsMode

ToByteString DescribeEndpointsMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DescribeEndpointsMode

FromText DescribeEndpointsMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DescribeEndpointsMode

ToText DescribeEndpointsMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DescribeEndpointsMode

type Rep DescribeEndpointsMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DescribeEndpointsMode

type Rep DescribeEndpointsMode = D1 ('MetaData "DescribeEndpointsMode" "Amazonka.MediaConvert.Types.DescribeEndpointsMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DescribeEndpointsMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDescribeEndpointsMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DolbyVisionLevel6Mode

newtype DolbyVisionLevel6Mode Source #

Use Dolby Vision Mode to choose how the service will handle Dolby Vision MaxCLL and MaxFALL properties.

Instances

Instances details
Eq DolbyVisionLevel6Mode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Mode

Ord DolbyVisionLevel6Mode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Mode

Read DolbyVisionLevel6Mode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Mode

Show DolbyVisionLevel6Mode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Mode

Generic DolbyVisionLevel6Mode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Mode

Associated Types

type Rep DolbyVisionLevel6Mode :: Type -> Type #

NFData DolbyVisionLevel6Mode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Mode

Methods

rnf :: DolbyVisionLevel6Mode -> () #

Hashable DolbyVisionLevel6Mode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Mode

ToJSON DolbyVisionLevel6Mode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Mode

ToJSONKey DolbyVisionLevel6Mode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Mode

FromJSON DolbyVisionLevel6Mode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Mode

FromJSONKey DolbyVisionLevel6Mode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Mode

ToLog DolbyVisionLevel6Mode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Mode

ToHeader DolbyVisionLevel6Mode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Mode

ToQuery DolbyVisionLevel6Mode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Mode

FromXML DolbyVisionLevel6Mode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Mode

ToXML DolbyVisionLevel6Mode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Mode

ToByteString DolbyVisionLevel6Mode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Mode

FromText DolbyVisionLevel6Mode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Mode

ToText DolbyVisionLevel6Mode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Mode

type Rep DolbyVisionLevel6Mode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Mode

type Rep DolbyVisionLevel6Mode = D1 ('MetaData "DolbyVisionLevel6Mode" "Amazonka.MediaConvert.Types.DolbyVisionLevel6Mode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DolbyVisionLevel6Mode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDolbyVisionLevel6Mode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DolbyVisionProfile

newtype DolbyVisionProfile Source #

In the current MediaConvert implementation, the Dolby Vision profile is always 5 (PROFILE_5). Therefore, all of your inputs must contain Dolby Vision frame interleaved data.
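
A short sketch using the Eq instance listed below to test for the only profile the description says is currently supported; binding names are illustrative.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (DolbyVisionProfile (..))

-- PROFILE_5 is the only profile named in the description.
profile5 :: DolbyVisionProfile
profile5 = DolbyVisionProfile' "PROFILE_5"

-- The Eq instance compares the wrapped wire values.
isProfile5 :: DolbyVisionProfile -> Bool
isProfile5 = (== profile5)

main :: IO ()
main = print (isProfile5 (DolbyVisionProfile' "PROFILE_5"))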

Instances

Instances details
Eq DolbyVisionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionProfile

Ord DolbyVisionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionProfile

Read DolbyVisionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionProfile

Show DolbyVisionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionProfile

Generic DolbyVisionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionProfile

Associated Types

type Rep DolbyVisionProfile :: Type -> Type #

NFData DolbyVisionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionProfile

Methods

rnf :: DolbyVisionProfile -> () #

Hashable DolbyVisionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionProfile

ToJSON DolbyVisionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionProfile

ToJSONKey DolbyVisionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionProfile

FromJSON DolbyVisionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionProfile

FromJSONKey DolbyVisionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionProfile

ToLog DolbyVisionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionProfile

ToHeader DolbyVisionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionProfile

ToQuery DolbyVisionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionProfile

FromXML DolbyVisionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionProfile

ToXML DolbyVisionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionProfile

ToByteString DolbyVisionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionProfile

FromText DolbyVisionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionProfile

ToText DolbyVisionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionProfile

type Rep DolbyVisionProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionProfile

type Rep DolbyVisionProfile = D1 ('MetaData "DolbyVisionProfile" "Amazonka.MediaConvert.Types.DolbyVisionProfile" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DolbyVisionProfile'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDolbyVisionProfile") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DropFrameTimecode

newtype DropFrameTimecode Source #

Applies only to 29.97 fps outputs. When this feature is enabled, the service will use drop-frame timecode on outputs. If it is not possible to use drop-frame timecode, the system will fall back to non-drop-frame. This setting is enabled by default when Timecode insertion (TimecodeInsertion) is enabled.
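
An illustrative sketch. ENABLED and DISABLED are assumed wire values, since the description only says the feature is enabled by default; the helper name is ours.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (DropFrameTimecode (..))

-- NOTE: "ENABLED" / "DISABLED" are assumed spellings of the on/off
-- values for this setting.
dropFrame :: Bool -> DropFrameTimecode
dropFrame on = DropFrameTimecode' (if on then "ENABLED" else "DISABLED")

main :: IO ()
main = print (fromDropFrameTimecode (dropFrame True))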

Instances

Instances details
Eq DropFrameTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DropFrameTimecode

Ord DropFrameTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DropFrameTimecode

Read DropFrameTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DropFrameTimecode

Show DropFrameTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DropFrameTimecode

Generic DropFrameTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DropFrameTimecode

Associated Types

type Rep DropFrameTimecode :: Type -> Type #

NFData DropFrameTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DropFrameTimecode

Methods

rnf :: DropFrameTimecode -> () #

Hashable DropFrameTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DropFrameTimecode

ToJSON DropFrameTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DropFrameTimecode

ToJSONKey DropFrameTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DropFrameTimecode

FromJSON DropFrameTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DropFrameTimecode

FromJSONKey DropFrameTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DropFrameTimecode

ToLog DropFrameTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DropFrameTimecode

ToHeader DropFrameTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DropFrameTimecode

ToQuery DropFrameTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DropFrameTimecode

FromXML DropFrameTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DropFrameTimecode

ToXML DropFrameTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DropFrameTimecode

ToByteString DropFrameTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DropFrameTimecode

FromText DropFrameTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DropFrameTimecode

ToText DropFrameTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DropFrameTimecode

type Rep DropFrameTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DropFrameTimecode

type Rep DropFrameTimecode = D1 ('MetaData "DropFrameTimecode" "Amazonka.MediaConvert.Types.DropFrameTimecode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DropFrameTimecode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDropFrameTimecode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DvbSubSubtitleFallbackFont

newtype DvbSubSubtitleFallbackFont Source #

Specify the font that you want the service to use for your burn-in captions when your input captions specify a font that MediaConvert doesn't support. When you set Fallback font (FallbackFont) to best match (BEST_MATCH), or leave it blank, MediaConvert uses a supported font that most closely matches the font that your input captions specify. When there are multiple unsupported fonts in your input captions, MediaConvert matches each one with the closest supported font. When you explicitly choose a replacement font, MediaConvert uses that font to replace all unsupported fonts from your input.
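
A sketch of JSON serialisation, assuming the ToJSON instance listed below is the aeson class re-exported by amazonka-core; BEST_MATCH is the value named in the description.

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Aeson as Aeson
import Amazonka.MediaConvert.Types (DvbSubSubtitleFallbackFont (..))

-- BEST_MATCH is the value named in the description.  Assuming the
-- ToJSON instance is aeson's, the newtype serialises as a plain JSON
-- string.
bestMatch :: DvbSubSubtitleFallbackFont
bestMatch = DvbSubSubtitleFallbackFont' "BEST_MATCH"

main :: IO ()
main = print (Aeson.encode bestMatch)  -- expected: "\"BEST_MATCH\""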

Instances

Instances details
Eq DvbSubSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSubtitleFallbackFont

Ord DvbSubSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSubtitleFallbackFont

Read DvbSubSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSubtitleFallbackFont

Show DvbSubSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSubtitleFallbackFont

Generic DvbSubSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSubtitleFallbackFont

Associated Types

type Rep DvbSubSubtitleFallbackFont :: Type -> Type #

NFData DvbSubSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSubtitleFallbackFont

Hashable DvbSubSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSubtitleFallbackFont

ToJSON DvbSubSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSubtitleFallbackFont

ToJSONKey DvbSubSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSubtitleFallbackFont

FromJSON DvbSubSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSubtitleFallbackFont

FromJSONKey DvbSubSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSubtitleFallbackFont

ToLog DvbSubSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSubtitleFallbackFont

ToHeader DvbSubSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSubtitleFallbackFont

ToQuery DvbSubSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSubtitleFallbackFont

FromXML DvbSubSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSubtitleFallbackFont

ToXML DvbSubSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSubtitleFallbackFont

ToByteString DvbSubSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSubtitleFallbackFont

FromText DvbSubSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSubtitleFallbackFont

ToText DvbSubSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSubtitleFallbackFont

type Rep DvbSubSubtitleFallbackFont Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSubtitleFallbackFont

type Rep DvbSubSubtitleFallbackFont = D1 ('MetaData "DvbSubSubtitleFallbackFont" "Amazonka.MediaConvert.Types.DvbSubSubtitleFallbackFont" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DvbSubSubtitleFallbackFont'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDvbSubSubtitleFallbackFont") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DvbSubtitleAlignment

newtype DvbSubtitleAlignment Source #

Specify the alignment of your captions. If no explicit x_position is provided, setting alignment to centered will place the captions at the bottom center of the output. Similarly, setting a left alignment will align captions to the bottom left of the output. If x and y positions are given in conjunction with the alignment parameter, the font will be justified (either left or centered) relative to those coordinates. Within your job settings, all of your DVB-Sub settings must be identical.
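
An illustrative sketch; CENTERED and LEFT are assumed spellings of the wire values for the alignments discussed above.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (DvbSubtitleAlignment (..))

-- NOTE: "CENTERED" and "LEFT" are assumed wire spellings of the
-- alignments described above.
centered, leftAligned :: DvbSubtitleAlignment
centered    = DvbSubtitleAlignment' "CENTERED"
leftAligned = DvbSubtitleAlignment' "LEFT"

main :: IO ()
main = mapM_ (print . fromDvbSubtitleAlignment) [centered, leftAligned]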

Instances

Instances details
Eq DvbSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleAlignment

Ord DvbSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleAlignment

Read DvbSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleAlignment

Show DvbSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleAlignment

Generic DvbSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleAlignment

Associated Types

type Rep DvbSubtitleAlignment :: Type -> Type #

NFData DvbSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleAlignment

Methods

rnf :: DvbSubtitleAlignment -> () #

Hashable DvbSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleAlignment

ToJSON DvbSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleAlignment

ToJSONKey DvbSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleAlignment

FromJSON DvbSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleAlignment

FromJSONKey DvbSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleAlignment

ToLog DvbSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleAlignment

ToHeader DvbSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleAlignment

ToQuery DvbSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleAlignment

FromXML DvbSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleAlignment

ToXML DvbSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleAlignment

ToByteString DvbSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleAlignment

FromText DvbSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleAlignment

ToText DvbSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleAlignment

type Rep DvbSubtitleAlignment Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleAlignment

type Rep DvbSubtitleAlignment = D1 ('MetaData "DvbSubtitleAlignment" "Amazonka.MediaConvert.Types.DvbSubtitleAlignment" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DvbSubtitleAlignment'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDvbSubtitleAlignment") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DvbSubtitleApplyFontColor

newtype DvbSubtitleApplyFontColor Source #

Ignore this setting unless Style passthrough (StylePassthrough) is set to Enabled and Font color (FontColor) is set to Black, Yellow, Red, Green, Blue, or Hex. Use Apply font color (ApplyFontColor) for additional font color controls. When you choose White text only (WHITE_TEXT_ONLY), or leave it blank, your font color setting applies only to white text in your input captions. For example, if your font color setting is Yellow and your input captions have red and white text, your output captions will have red and yellow text. When you choose ALL_TEXT, your font color setting applies to all of the text in your output captions.
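
A small illustrative helper that summarises the two behaviours named above by inspecting the wrapped Text; the helper name and summaries are ours.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (DvbSubtitleApplyFontColor (..))

-- Summarise the two values named in the description.
describeApplyFontColor :: DvbSubtitleApplyFontColor -> String
describeApplyFontColor c = case fromDvbSubtitleApplyFontColor c of
  "WHITE_TEXT_ONLY" -> "font colour applies to white input text only"
  "ALL_TEXT"        -> "font colour applies to all output captions text"
  other             -> "unrecognised value: " <> show other

main :: IO ()
main = putStrLn (describeApplyFontColor (DvbSubtitleApplyFontColor' "ALL_TEXT"))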

Instances

Instances details
Eq DvbSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleApplyFontColor

Ord DvbSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleApplyFontColor

Read DvbSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleApplyFontColor

Show DvbSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleApplyFontColor

Generic DvbSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleApplyFontColor

Associated Types

type Rep DvbSubtitleApplyFontColor :: Type -> Type #

NFData DvbSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleApplyFontColor

Hashable DvbSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleApplyFontColor

ToJSON DvbSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleApplyFontColor

ToJSONKey DvbSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleApplyFontColor

FromJSON DvbSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleApplyFontColor

FromJSONKey DvbSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleApplyFontColor

ToLog DvbSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleApplyFontColor

ToHeader DvbSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleApplyFontColor

ToQuery DvbSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleApplyFontColor

FromXML DvbSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleApplyFontColor

ToXML DvbSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleApplyFontColor

ToByteString DvbSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleApplyFontColor

FromText DvbSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleApplyFontColor

ToText DvbSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleApplyFontColor

type Rep DvbSubtitleApplyFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleApplyFontColor

type Rep DvbSubtitleApplyFontColor = D1 ('MetaData "DvbSubtitleApplyFontColor" "Amazonka.MediaConvert.Types.DvbSubtitleApplyFontColor" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DvbSubtitleApplyFontColor'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDvbSubtitleApplyFontColor") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DvbSubtitleBackgroundColor

newtype DvbSubtitleBackgroundColor Source #

Specify the color of the rectangle behind the captions. Leave background color (BackgroundColor) blank and set Style passthrough (StylePassthrough) to enabled to use the background color data from your input captions, if present.

Instances

Instances details
Eq DvbSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleBackgroundColor

Ord DvbSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleBackgroundColor

Read DvbSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleBackgroundColor

Show DvbSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleBackgroundColor

Generic DvbSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleBackgroundColor

Associated Types

type Rep DvbSubtitleBackgroundColor :: Type -> Type #

NFData DvbSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleBackgroundColor

Hashable DvbSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleBackgroundColor

ToJSON DvbSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleBackgroundColor

ToJSONKey DvbSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleBackgroundColor

FromJSON DvbSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleBackgroundColor

FromJSONKey DvbSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleBackgroundColor

ToLog DvbSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleBackgroundColor

ToHeader DvbSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleBackgroundColor

ToQuery DvbSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleBackgroundColor

FromXML DvbSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleBackgroundColor

ToXML DvbSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleBackgroundColor

ToByteString DvbSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleBackgroundColor

FromText DvbSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleBackgroundColor

ToText DvbSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleBackgroundColor

type Rep DvbSubtitleBackgroundColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleBackgroundColor

type Rep DvbSubtitleBackgroundColor = D1 ('MetaData "DvbSubtitleBackgroundColor" "Amazonka.MediaConvert.Types.DvbSubtitleBackgroundColor" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DvbSubtitleBackgroundColor'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDvbSubtitleBackgroundColor") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DvbSubtitleFontColor

newtype DvbSubtitleFontColor Source #

Specify the color of the captions text. Leave Font color (FontColor) blank and set Style passthrough (StylePassthrough) to enabled to use the font color data from your input captions, if present. Within your job settings, all of your DVB-Sub settings must be identical.

Instances

Instances details
Eq DvbSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleFontColor

Ord DvbSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleFontColor

Read DvbSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleFontColor

Show DvbSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleFontColor

Generic DvbSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleFontColor

Associated Types

type Rep DvbSubtitleFontColor :: Type -> Type #

NFData DvbSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleFontColor

Methods

rnf :: DvbSubtitleFontColor -> () #

Hashable DvbSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleFontColor

ToJSON DvbSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleFontColor

ToJSONKey DvbSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleFontColor

FromJSON DvbSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleFontColor

FromJSONKey DvbSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleFontColor

ToLog DvbSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleFontColor

ToHeader DvbSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleFontColor

ToQuery DvbSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleFontColor

FromXML DvbSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleFontColor

ToXML DvbSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleFontColor

ToByteString DvbSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleFontColor

FromText DvbSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleFontColor

ToText DvbSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleFontColor

type Rep DvbSubtitleFontColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleFontColor

type Rep DvbSubtitleFontColor = D1 ('MetaData "DvbSubtitleFontColor" "Amazonka.MediaConvert.Types.DvbSubtitleFontColor" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DvbSubtitleFontColor'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDvbSubtitleFontColor") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DvbSubtitleOutlineColor

newtype DvbSubtitleOutlineColor Source #

Specify font outline color. Leave Outline color (OutlineColor) blank and set Style passthrough (StylePassthrough) to enabled to use the font outline color data from your input captions, if present. Within your job settings, all of your DVB-Sub settings must be identical.

Instances

Instances details
Eq DvbSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleOutlineColor

Ord DvbSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleOutlineColor

Read DvbSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleOutlineColor

Show DvbSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleOutlineColor

Generic DvbSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleOutlineColor

Associated Types

type Rep DvbSubtitleOutlineColor :: Type -> Type #

NFData DvbSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleOutlineColor

Methods

rnf :: DvbSubtitleOutlineColor -> () #

Hashable DvbSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleOutlineColor

ToJSON DvbSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleOutlineColor

ToJSONKey DvbSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleOutlineColor

FromJSON DvbSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleOutlineColor

FromJSONKey DvbSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleOutlineColor

ToLog DvbSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleOutlineColor

ToHeader DvbSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleOutlineColor

ToQuery DvbSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleOutlineColor

FromXML DvbSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleOutlineColor

ToXML DvbSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleOutlineColor

ToByteString DvbSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleOutlineColor

FromText DvbSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleOutlineColor

ToText DvbSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleOutlineColor

type Rep DvbSubtitleOutlineColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleOutlineColor

type Rep DvbSubtitleOutlineColor = D1 ('MetaData "DvbSubtitleOutlineColor" "Amazonka.MediaConvert.Types.DvbSubtitleOutlineColor" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DvbSubtitleOutlineColor'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDvbSubtitleOutlineColor") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DvbSubtitleShadowColor

newtype DvbSubtitleShadowColor Source #

Specify the color of the shadow cast by the captions. Leave Shadow color (ShadowColor) blank and set Style passthrough (StylePassthrough) to enabled to use the shadow color data from your input captions, if present. Within your job settings, all of your DVB-Sub settings must be identical.

Instances

Instances details
Eq DvbSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleShadowColor

Ord DvbSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleShadowColor

Read DvbSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleShadowColor

Show DvbSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleShadowColor

Generic DvbSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleShadowColor

Associated Types

type Rep DvbSubtitleShadowColor :: Type -> Type #

NFData DvbSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleShadowColor

Methods

rnf :: DvbSubtitleShadowColor -> () #

Hashable DvbSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleShadowColor

ToJSON DvbSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleShadowColor

ToJSONKey DvbSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleShadowColor

FromJSON DvbSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleShadowColor

FromJSONKey DvbSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleShadowColor

ToLog DvbSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleShadowColor

ToHeader DvbSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleShadowColor

ToQuery DvbSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleShadowColor

FromXML DvbSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleShadowColor

ToXML DvbSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleShadowColor

ToByteString DvbSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleShadowColor

FromText DvbSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleShadowColor

ToText DvbSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleShadowColor

type Rep DvbSubtitleShadowColor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleShadowColor

type Rep DvbSubtitleShadowColor = D1 ('MetaData "DvbSubtitleShadowColor" "Amazonka.MediaConvert.Types.DvbSubtitleShadowColor" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DvbSubtitleShadowColor'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDvbSubtitleShadowColor") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DvbSubtitleStylePassthrough

newtype DvbSubtitleStylePassthrough Source #

Set Style passthrough (StylePassthrough) to ENABLED to use the available style, color, and position information from your input captions. MediaConvert uses default settings for any missing style and position information in your input captions. Set Style passthrough to DISABLED, or leave blank, to ignore the style and position information from your input captions and use default settings: white text with black outlining, bottom-center positioning, and automatic sizing. Whether you set Style passthrough to enabled or not, you can also choose to manually override any of the individual style and position settings.
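
A minimal illustrative helper (the name is ours) that maps a Bool onto the ENABLED and DISABLED values named above.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (DvbSubtitleStylePassthrough (..))

-- ENABLED / DISABLED are the values named in the description.
stylePassthrough :: Bool -> DvbSubtitleStylePassthrough
stylePassthrough keepInputStyles =
  DvbSubtitleStylePassthrough' (if keepInputStyles then "ENABLED" else "DISABLED")

main :: IO ()
main = print (fromDvbSubtitleStylePassthrough (stylePassthrough True))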

Instances

Instances details
Eq DvbSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleStylePassthrough

Ord DvbSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleStylePassthrough

Read DvbSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleStylePassthrough

Show DvbSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleStylePassthrough

Generic DvbSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleStylePassthrough

Associated Types

type Rep DvbSubtitleStylePassthrough :: Type -> Type #

NFData DvbSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleStylePassthrough

Hashable DvbSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleStylePassthrough

ToJSON DvbSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleStylePassthrough

ToJSONKey DvbSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleStylePassthrough

FromJSON DvbSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleStylePassthrough

FromJSONKey DvbSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleStylePassthrough

ToLog DvbSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleStylePassthrough

ToHeader DvbSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleStylePassthrough

ToQuery DvbSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleStylePassthrough

FromXML DvbSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleStylePassthrough

ToXML DvbSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleStylePassthrough

ToByteString DvbSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleStylePassthrough

FromText DvbSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleStylePassthrough

ToText DvbSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleStylePassthrough

type Rep DvbSubtitleStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleStylePassthrough

type Rep DvbSubtitleStylePassthrough = D1 ('MetaData "DvbSubtitleStylePassthrough" "Amazonka.MediaConvert.Types.DvbSubtitleStylePassthrough" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DvbSubtitleStylePassthrough'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDvbSubtitleStylePassthrough") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DvbSubtitleTeletextSpacing

newtype DvbSubtitleTeletextSpacing Source #

Specify whether the Text spacing (TextSpacing) in your captions is set by the captions grid, or varies depending on letter width. Choose fixed grid (FIXED_GRID) to conform to the spacing specified in the captions file more accurately. Choose proportional (PROPORTIONAL) to make the text easier to read for closed captions. Within your job settings, all of your DVB-Sub settings must be identical.
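
A small illustrative helper that maps the two values named above back to a short summary of their effect; the helper name and summaries are ours.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (DvbSubtitleTeletextSpacing (..))

-- FIXED_GRID and PROPORTIONAL are the values named in the description.
describeSpacing :: DvbSubtitleTeletextSpacing -> String
describeSpacing s = case fromDvbSubtitleTeletextSpacing s of
  "FIXED_GRID"   -> "conform to the captions grid"
  "PROPORTIONAL" -> "vary spacing with letter width"
  other          -> "unrecognised value: " <> show other

main :: IO ()
main = putStrLn (describeSpacing (DvbSubtitleTeletextSpacing' "PROPORTIONAL"))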

Instances

Instances details
Eq DvbSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleTeletextSpacing

Ord DvbSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleTeletextSpacing

Read DvbSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleTeletextSpacing

Show DvbSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleTeletextSpacing

Generic DvbSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleTeletextSpacing

Associated Types

type Rep DvbSubtitleTeletextSpacing :: Type -> Type #

NFData DvbSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleTeletextSpacing

Hashable DvbSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleTeletextSpacing

ToJSON DvbSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleTeletextSpacing

ToJSONKey DvbSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleTeletextSpacing

FromJSON DvbSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleTeletextSpacing

FromJSONKey DvbSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleTeletextSpacing

ToLog DvbSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleTeletextSpacing

ToHeader DvbSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleTeletextSpacing

ToQuery DvbSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleTeletextSpacing

FromXML DvbSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleTeletextSpacing

ToXML DvbSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleTeletextSpacing

ToByteString DvbSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleTeletextSpacing

FromText DvbSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleTeletextSpacing

ToText DvbSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleTeletextSpacing

type Rep DvbSubtitleTeletextSpacing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitleTeletextSpacing

type Rep DvbSubtitleTeletextSpacing = D1 ('MetaData "DvbSubtitleTeletextSpacing" "Amazonka.MediaConvert.Types.DvbSubtitleTeletextSpacing" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DvbSubtitleTeletextSpacing'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDvbSubtitleTeletextSpacing") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DvbSubtitlingType

newtype DvbSubtitlingType Source #

Specify whether your DVB subtitles are standard or for hearing impaired. Choose hearing impaired if your subtitles include audio descriptions and dialogue. Choose standard if your subtitles include only dialogue.

Instances

Instances details
Eq DvbSubtitlingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitlingType

Ord DvbSubtitlingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitlingType

Read DvbSubtitlingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitlingType

Show DvbSubtitlingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitlingType

Generic DvbSubtitlingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitlingType

Associated Types

type Rep DvbSubtitlingType :: Type -> Type #

NFData DvbSubtitlingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitlingType

Methods

rnf :: DvbSubtitlingType -> () #

Hashable DvbSubtitlingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitlingType

ToJSON DvbSubtitlingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitlingType

ToJSONKey DvbSubtitlingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitlingType

FromJSON DvbSubtitlingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitlingType

FromJSONKey DvbSubtitlingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitlingType

ToLog DvbSubtitlingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitlingType

ToHeader DvbSubtitlingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitlingType

ToQuery DvbSubtitlingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitlingType

FromXML DvbSubtitlingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitlingType

ToXML DvbSubtitlingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitlingType

ToByteString DvbSubtitlingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitlingType

FromText DvbSubtitlingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitlingType

ToText DvbSubtitlingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitlingType

type Rep DvbSubtitlingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubtitlingType

type Rep DvbSubtitlingType = D1 ('MetaData "DvbSubtitlingType" "Amazonka.MediaConvert.Types.DvbSubtitlingType" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DvbSubtitlingType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDvbSubtitlingType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DvbddsHandling

newtype DvbddsHandling Source #

Specify how MediaConvert handles the display definition segment (DDS). Keep the default, None (NONE), to exclude the DDS from this set of captions. Choose No display window (NO_DISPLAY_WINDOW) to have MediaConvert include the DDS but not include display window data. In this case, MediaConvert writes that information to the page composition segment (PCS) instead. Choose Specify (SPECIFIED) to have MediaConvert set up the display window based on the values that you specify in related job settings. For video resolutions that are 576 pixels or smaller in height, MediaConvert doesn't include the DDS, regardless of the value you choose for DDS handling (ddsHandling). In this case, it doesn't write the display window data to the PCS either. Related settings: Use the settings DDS x-coordinate (ddsXCoordinate) and DDS y-coordinate (ddsYCoordinate) to specify the offset between the top left corner of the display window and the top left corner of the video frame. All burn-in and DVB-Sub font settings must match.

Constructors

DvbddsHandling' 
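
A minimal sketch constructing the three values named in the description with the constructor listed above; binding names are illustrative.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (DvbddsHandling (..))

-- The three behaviours named in the description, as wire values wrapped
-- in the constructor listed above.
ddsNone, ddsNoDisplayWindow, ddsSpecified :: DvbddsHandling
ddsNone            = DvbddsHandling' "NONE"
ddsNoDisplayWindow = DvbddsHandling' "NO_DISPLAY_WINDOW"
ddsSpecified       = DvbddsHandling' "SPECIFIED"

main :: IO ()
main = mapM_ (print . fromDvbddsHandling) [ddsNone, ddsNoDisplayWindow, ddsSpecified]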

Instances

Instances details
Eq DvbddsHandling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbddsHandling

Ord DvbddsHandling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbddsHandling

Read DvbddsHandling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbddsHandling

Show DvbddsHandling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbddsHandling

Generic DvbddsHandling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbddsHandling

Associated Types

type Rep DvbddsHandling :: Type -> Type #

NFData DvbddsHandling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbddsHandling

Methods

rnf :: DvbddsHandling -> () #

Hashable DvbddsHandling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbddsHandling

ToJSON DvbddsHandling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbddsHandling

ToJSONKey DvbddsHandling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbddsHandling

FromJSON DvbddsHandling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbddsHandling

FromJSONKey DvbddsHandling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbddsHandling

ToLog DvbddsHandling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbddsHandling

ToHeader DvbddsHandling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbddsHandling

ToQuery DvbddsHandling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbddsHandling

FromXML DvbddsHandling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbddsHandling

ToXML DvbddsHandling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbddsHandling

Methods

toXML :: DvbddsHandling -> XML #

ToByteString DvbddsHandling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbddsHandling

FromText DvbddsHandling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbddsHandling

ToText DvbddsHandling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbddsHandling

type Rep DvbddsHandling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbddsHandling

type Rep DvbddsHandling = D1 ('MetaData "DvbddsHandling" "Amazonka.MediaConvert.Types.DvbddsHandling" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "DvbddsHandling'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDvbddsHandling") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3AtmosBitstreamMode

newtype Eac3AtmosBitstreamMode Source #

Specify the bitstream mode for the E-AC-3 stream that the encoder emits. For more information about the EAC3 bitstream mode, see ATSC A/52-2012 (Annex E).

Instances

Instances details
Eq Eac3AtmosBitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosBitstreamMode

Ord Eac3AtmosBitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosBitstreamMode

Read Eac3AtmosBitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosBitstreamMode

Show Eac3AtmosBitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosBitstreamMode

Generic Eac3AtmosBitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosBitstreamMode

Associated Types

type Rep Eac3AtmosBitstreamMode :: Type -> Type #

NFData Eac3AtmosBitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosBitstreamMode

Methods

rnf :: Eac3AtmosBitstreamMode -> () #

Hashable Eac3AtmosBitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosBitstreamMode

ToJSON Eac3AtmosBitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosBitstreamMode

ToJSONKey Eac3AtmosBitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosBitstreamMode

FromJSON Eac3AtmosBitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosBitstreamMode

FromJSONKey Eac3AtmosBitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosBitstreamMode

ToLog Eac3AtmosBitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosBitstreamMode

ToHeader Eac3AtmosBitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosBitstreamMode

ToQuery Eac3AtmosBitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosBitstreamMode

FromXML Eac3AtmosBitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosBitstreamMode

ToXML Eac3AtmosBitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosBitstreamMode

ToByteString Eac3AtmosBitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosBitstreamMode

FromText Eac3AtmosBitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosBitstreamMode

ToText Eac3AtmosBitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosBitstreamMode

type Rep Eac3AtmosBitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosBitstreamMode

type Rep Eac3AtmosBitstreamMode = D1 ('MetaData "Eac3AtmosBitstreamMode" "Amazonka.MediaConvert.Types.Eac3AtmosBitstreamMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3AtmosBitstreamMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3AtmosBitstreamMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3AtmosCodingMode

newtype Eac3AtmosCodingMode Source #

The coding mode for Dolby Digital Plus JOC (Atmos).

Instances

Instances details
Eq Eac3AtmosCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosCodingMode

Ord Eac3AtmosCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosCodingMode

Read Eac3AtmosCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosCodingMode

Show Eac3AtmosCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosCodingMode

Generic Eac3AtmosCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosCodingMode

Associated Types

type Rep Eac3AtmosCodingMode :: Type -> Type #

NFData Eac3AtmosCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosCodingMode

Methods

rnf :: Eac3AtmosCodingMode -> () #

Hashable Eac3AtmosCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosCodingMode

ToJSON Eac3AtmosCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosCodingMode

ToJSONKey Eac3AtmosCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosCodingMode

FromJSON Eac3AtmosCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosCodingMode

FromJSONKey Eac3AtmosCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosCodingMode

ToLog Eac3AtmosCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosCodingMode

ToHeader Eac3AtmosCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosCodingMode

ToQuery Eac3AtmosCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosCodingMode

FromXML Eac3AtmosCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosCodingMode

ToXML Eac3AtmosCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosCodingMode

ToByteString Eac3AtmosCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosCodingMode

FromText Eac3AtmosCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosCodingMode

ToText Eac3AtmosCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosCodingMode

type Rep Eac3AtmosCodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosCodingMode

type Rep Eac3AtmosCodingMode = D1 ('MetaData "Eac3AtmosCodingMode" "Amazonka.MediaConvert.Types.Eac3AtmosCodingMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3AtmosCodingMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3AtmosCodingMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3AtmosDialogueIntelligence

newtype Eac3AtmosDialogueIntelligence Source #

Enable Dolby Dialogue Intelligence to adjust loudness based on dialogue analysis.

Instances

Instances details
Eq Eac3AtmosDialogueIntelligence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDialogueIntelligence

Ord Eac3AtmosDialogueIntelligence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDialogueIntelligence

Read Eac3AtmosDialogueIntelligence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDialogueIntelligence

Show Eac3AtmosDialogueIntelligence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDialogueIntelligence

Generic Eac3AtmosDialogueIntelligence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDialogueIntelligence

Associated Types

type Rep Eac3AtmosDialogueIntelligence :: Type -> Type #

NFData Eac3AtmosDialogueIntelligence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDialogueIntelligence

Hashable Eac3AtmosDialogueIntelligence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDialogueIntelligence

ToJSON Eac3AtmosDialogueIntelligence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDialogueIntelligence

ToJSONKey Eac3AtmosDialogueIntelligence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDialogueIntelligence

FromJSON Eac3AtmosDialogueIntelligence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDialogueIntelligence

FromJSONKey Eac3AtmosDialogueIntelligence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDialogueIntelligence

ToLog Eac3AtmosDialogueIntelligence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDialogueIntelligence

ToHeader Eac3AtmosDialogueIntelligence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDialogueIntelligence

ToQuery Eac3AtmosDialogueIntelligence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDialogueIntelligence

FromXML Eac3AtmosDialogueIntelligence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDialogueIntelligence

ToXML Eac3AtmosDialogueIntelligence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDialogueIntelligence

ToByteString Eac3AtmosDialogueIntelligence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDialogueIntelligence

FromText Eac3AtmosDialogueIntelligence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDialogueIntelligence

ToText Eac3AtmosDialogueIntelligence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDialogueIntelligence

type Rep Eac3AtmosDialogueIntelligence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDialogueIntelligence

type Rep Eac3AtmosDialogueIntelligence = D1 ('MetaData "Eac3AtmosDialogueIntelligence" "Amazonka.MediaConvert.Types.Eac3AtmosDialogueIntelligence" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3AtmosDialogueIntelligence'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3AtmosDialogueIntelligence") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3AtmosDownmixControl

newtype Eac3AtmosDownmixControl Source #

Specify whether MediaConvert should use any downmix metadata from your input file. Keep the default value, Custom (SPECIFIED), to provide downmix values in your job settings. Choose Follow source (INITIALIZE_FROM_SOURCE) to use the metadata from your input. Related settings: Use these settings to specify your downmix values: Left only/Right only surround (LoRoSurroundMixLevel), Left total/Right total surround (LtRtSurroundMixLevel), Left total/Right total center (LtRtCenterMixLevel), Left only/Right only center (LoRoCenterMixLevel), and Stereo downmix (StereoDownmix). When you keep Custom (SPECIFIED) for Downmix control (DownmixControl) and you don't specify values for the related settings, MediaConvert uses default values for those settings.
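
As a minimal sketch, the two modes described above can be spelled with the raw newtype constructor shown in the Rep at the end of this entry; the SPECIFIED and INITIALIZE_FROM_SOURCE wire strings come from the description, and the exported bundled patterns (not shown in this excerpt) name the same values.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (Eac3AtmosDownmixControl (..))

-- Keep the default: supply your own downmix values in the job settings.
custom :: Eac3AtmosDownmixControl
custom = Eac3AtmosDownmixControl' "SPECIFIED"

-- Follow source: reuse the downmix metadata carried by the input.
followSource :: Eac3AtmosDownmixControl
followSource = Eac3AtmosDownmixControl' "INITIALIZE_FROM_SOURCE"

-- Values compare by their wire string, so these are distinct.
distinct :: Bool
distinct = custom /= followSource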

Instances

Instances details
Eq Eac3AtmosDownmixControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDownmixControl

Ord Eac3AtmosDownmixControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDownmixControl

Read Eac3AtmosDownmixControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDownmixControl

Show Eac3AtmosDownmixControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDownmixControl

Generic Eac3AtmosDownmixControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDownmixControl

Associated Types

type Rep Eac3AtmosDownmixControl :: Type -> Type #

NFData Eac3AtmosDownmixControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDownmixControl

Methods

rnf :: Eac3AtmosDownmixControl -> () #

Hashable Eac3AtmosDownmixControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDownmixControl

ToJSON Eac3AtmosDownmixControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDownmixControl

ToJSONKey Eac3AtmosDownmixControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDownmixControl

FromJSON Eac3AtmosDownmixControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDownmixControl

FromJSONKey Eac3AtmosDownmixControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDownmixControl

ToLog Eac3AtmosDownmixControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDownmixControl

ToHeader Eac3AtmosDownmixControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDownmixControl

ToQuery Eac3AtmosDownmixControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDownmixControl

FromXML Eac3AtmosDownmixControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDownmixControl

ToXML Eac3AtmosDownmixControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDownmixControl

ToByteString Eac3AtmosDownmixControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDownmixControl

FromText Eac3AtmosDownmixControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDownmixControl

ToText Eac3AtmosDownmixControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDownmixControl

type Rep Eac3AtmosDownmixControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDownmixControl

type Rep Eac3AtmosDownmixControl = D1 ('MetaData "Eac3AtmosDownmixControl" "Amazonka.MediaConvert.Types.Eac3AtmosDownmixControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3AtmosDownmixControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3AtmosDownmixControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3AtmosDynamicRangeCompressionLine

newtype Eac3AtmosDynamicRangeCompressionLine Source #

Choose the Dolby dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby stream for the line operating mode. Default value: Film light (ATMOS_STORAGE_DDP_COMPR_FILM_LIGHT). Related setting: To have MediaConvert use the value you specify here, keep the default value, Custom (SPECIFIED), for the setting Dynamic range control (DynamicRangeControl). Otherwise, MediaConvert ignores Dynamic range compression line (DynamicRangeCompressionLine). For information about the Dolby DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.
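
For illustration only: the Film light default named above round-trips through the ToJSON instance listed below as a plain JSON string (the newtype constructor comes from the Rep at the end of this entry).

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Aeson as Aeson
import Amazonka.MediaConvert.Types (Eac3AtmosDynamicRangeCompressionLine (..))

-- The default DRC profile for line mode, written via its wire string.
filmLight :: Eac3AtmosDynamicRangeCompressionLine
filmLight = Eac3AtmosDynamicRangeCompressionLine' "ATMOS_STORAGE_DDP_COMPR_FILM_LIGHT"

main :: IO ()
main =
  -- Should print the quoted JSON string, since the instance serialises the wrapped Text.
  print (Aeson.encode filmLight)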

Instances

Instances details
Eq Eac3AtmosDynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionLine

Ord Eac3AtmosDynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionLine

Read Eac3AtmosDynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionLine

Show Eac3AtmosDynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionLine

Generic Eac3AtmosDynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionLine

NFData Eac3AtmosDynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionLine

Hashable Eac3AtmosDynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionLine

ToJSON Eac3AtmosDynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionLine

ToJSONKey Eac3AtmosDynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionLine

FromJSON Eac3AtmosDynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionLine

FromJSONKey Eac3AtmosDynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionLine

ToLog Eac3AtmosDynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionLine

ToHeader Eac3AtmosDynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionLine

ToQuery Eac3AtmosDynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionLine

FromXML Eac3AtmosDynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionLine

ToXML Eac3AtmosDynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionLine

ToByteString Eac3AtmosDynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionLine

FromText Eac3AtmosDynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionLine

ToText Eac3AtmosDynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionLine

type Rep Eac3AtmosDynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionLine

type Rep Eac3AtmosDynamicRangeCompressionLine = D1 ('MetaData "Eac3AtmosDynamicRangeCompressionLine" "Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionLine" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3AtmosDynamicRangeCompressionLine'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3AtmosDynamicRangeCompressionLine") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3AtmosDynamicRangeCompressionRf

newtype Eac3AtmosDynamicRangeCompressionRf Source #

Choose the Dolby dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby stream for the RF operating mode. Default value: Film light (ATMOS_STORAGE_DDP_COMPR_FILM_LIGHT). Related setting: To have MediaConvert use the value you specify here, keep the default value, Custom (SPECIFIED), for the setting Dynamic range control (DynamicRangeControl). Otherwise, MediaConvert ignores Dynamic range compression RF (DynamicRangeCompressionRf). For information about the Dolby DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.

Instances

Instances details
Eq Eac3AtmosDynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionRf

Ord Eac3AtmosDynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionRf

Read Eac3AtmosDynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionRf

Show Eac3AtmosDynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionRf

Generic Eac3AtmosDynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionRf

Associated Types

type Rep Eac3AtmosDynamicRangeCompressionRf :: Type -> Type #

NFData Eac3AtmosDynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionRf

Hashable Eac3AtmosDynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionRf

ToJSON Eac3AtmosDynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionRf

ToJSONKey Eac3AtmosDynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionRf

FromJSON Eac3AtmosDynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionRf

FromJSONKey Eac3AtmosDynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionRf

ToLog Eac3AtmosDynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionRf

ToHeader Eac3AtmosDynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionRf

ToQuery Eac3AtmosDynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionRf

FromXML Eac3AtmosDynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionRf

ToXML Eac3AtmosDynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionRf

ToByteString Eac3AtmosDynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionRf

FromText Eac3AtmosDynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionRf

ToText Eac3AtmosDynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionRf

type Rep Eac3AtmosDynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionRf

type Rep Eac3AtmosDynamicRangeCompressionRf = D1 ('MetaData "Eac3AtmosDynamicRangeCompressionRf" "Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeCompressionRf" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3AtmosDynamicRangeCompressionRf'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3AtmosDynamicRangeCompressionRf") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3AtmosDynamicRangeControl

newtype Eac3AtmosDynamicRangeControl Source #

Specify whether MediaConvert should use any dynamic range control metadata from your input file. Keep the default value, Custom (SPECIFIED), to provide dynamic range control values in your job settings. Choose Follow source (INITIALIZE_FROM_SOURCE) to use the metadata from your input. Related settings: Use these settings to specify your dynamic range control values: Dynamic range compression line (DynamicRangeCompressionLine) and Dynamic range compression RF (DynamicRangeCompressionRf). When you keep the value Custom (SPECIFIED) for Dynamic range control (DynamicRangeControl) and you don't specify values for the related settings, MediaConvert uses default values for those settings.
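
A hypothetical helper (not part of the library) that mirrors the rule above: explicit DRC profiles only matter while Dynamic range control stays at Custom (SPECIFIED). The wire strings come from the description and the constructor spelling from the Rep below.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
  ( Eac3AtmosDynamicRangeCompressionLine
  , Eac3AtmosDynamicRangeCompressionRf
  , Eac3AtmosDynamicRangeControl (..)
  )

-- Forward explicit line/RF profiles only when the control mode is Custom (SPECIFIED);
-- with Follow source (INITIALIZE_FROM_SOURCE) MediaConvert takes DRC metadata from the input.
drcProfiles
  :: Eac3AtmosDynamicRangeControl
  -> Maybe (Eac3AtmosDynamicRangeCompressionLine, Eac3AtmosDynamicRangeCompressionRf)
  -> Maybe (Eac3AtmosDynamicRangeCompressionLine, Eac3AtmosDynamicRangeCompressionRf)
drcProfiles (Eac3AtmosDynamicRangeControl' "SPECIFIED") profiles = profiles
drcProfiles _ _ = Nothing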

Instances

Instances details
Eq Eac3AtmosDynamicRangeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeControl

Ord Eac3AtmosDynamicRangeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeControl

Read Eac3AtmosDynamicRangeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeControl

Show Eac3AtmosDynamicRangeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeControl

Generic Eac3AtmosDynamicRangeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeControl

Associated Types

type Rep Eac3AtmosDynamicRangeControl :: Type -> Type #

NFData Eac3AtmosDynamicRangeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeControl

Hashable Eac3AtmosDynamicRangeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeControl

ToJSON Eac3AtmosDynamicRangeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeControl

ToJSONKey Eac3AtmosDynamicRangeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeControl

FromJSON Eac3AtmosDynamicRangeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeControl

FromJSONKey Eac3AtmosDynamicRangeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeControl

ToLog Eac3AtmosDynamicRangeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeControl

ToHeader Eac3AtmosDynamicRangeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeControl

ToQuery Eac3AtmosDynamicRangeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeControl

FromXML Eac3AtmosDynamicRangeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeControl

ToXML Eac3AtmosDynamicRangeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeControl

ToByteString Eac3AtmosDynamicRangeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeControl

FromText Eac3AtmosDynamicRangeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeControl

ToText Eac3AtmosDynamicRangeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeControl

type Rep Eac3AtmosDynamicRangeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeControl

type Rep Eac3AtmosDynamicRangeControl = D1 ('MetaData "Eac3AtmosDynamicRangeControl" "Amazonka.MediaConvert.Types.Eac3AtmosDynamicRangeControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3AtmosDynamicRangeControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3AtmosDynamicRangeControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3AtmosMeteringMode

newtype Eac3AtmosMeteringMode Source #

Choose how the service meters the loudness of your audio.

Instances

Instances details
Eq Eac3AtmosMeteringMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosMeteringMode

Ord Eac3AtmosMeteringMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosMeteringMode

Read Eac3AtmosMeteringMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosMeteringMode

Show Eac3AtmosMeteringMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosMeteringMode

Generic Eac3AtmosMeteringMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosMeteringMode

Associated Types

type Rep Eac3AtmosMeteringMode :: Type -> Type #

NFData Eac3AtmosMeteringMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosMeteringMode

Methods

rnf :: Eac3AtmosMeteringMode -> () #

Hashable Eac3AtmosMeteringMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosMeteringMode

ToJSON Eac3AtmosMeteringMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosMeteringMode

ToJSONKey Eac3AtmosMeteringMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosMeteringMode

FromJSON Eac3AtmosMeteringMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosMeteringMode

FromJSONKey Eac3AtmosMeteringMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosMeteringMode

ToLog Eac3AtmosMeteringMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosMeteringMode

ToHeader Eac3AtmosMeteringMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosMeteringMode

ToQuery Eac3AtmosMeteringMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosMeteringMode

FromXML Eac3AtmosMeteringMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosMeteringMode

ToXML Eac3AtmosMeteringMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosMeteringMode

ToByteString Eac3AtmosMeteringMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosMeteringMode

FromText Eac3AtmosMeteringMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosMeteringMode

ToText Eac3AtmosMeteringMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosMeteringMode

type Rep Eac3AtmosMeteringMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosMeteringMode

type Rep Eac3AtmosMeteringMode = D1 ('MetaData "Eac3AtmosMeteringMode" "Amazonka.MediaConvert.Types.Eac3AtmosMeteringMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3AtmosMeteringMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3AtmosMeteringMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3AtmosStereoDownmix

newtype Eac3AtmosStereoDownmix Source #

Choose how the service does stereo downmixing. Default value: Not indicated (ATMOS_STORAGE_DDP_DMIXMOD_NOT_INDICATED). Related setting: To have MediaConvert use this value, keep the default value, Custom (SPECIFIED), for the setting Downmix control (DownmixControl). Otherwise, MediaConvert ignores Stereo downmix (StereoDownmix).
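
As a small check, the Not indicated default shown above can be recovered through the FromJSON instance listed below; decode and the string literal are ordinary Data.Aeson / OverloadedStrings usage, nothing specific to this library.

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Aeson as Aeson
import Amazonka.MediaConvert.Types (Eac3AtmosStereoDownmix)

-- Any wire string decodes, because the type is an open newtype over Text.
defaultStereoDownmix :: Maybe Eac3AtmosStereoDownmix
defaultStereoDownmix = Aeson.decode "\"ATMOS_STORAGE_DDP_DMIXMOD_NOT_INDICATED\""

main :: IO ()
main = print defaultStereoDownmix   -- Just the Not indicated value, printed via the derived Show instance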

Instances

Instances details
Eq Eac3AtmosStereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosStereoDownmix

Ord Eac3AtmosStereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosStereoDownmix

Read Eac3AtmosStereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosStereoDownmix

Show Eac3AtmosStereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosStereoDownmix

Generic Eac3AtmosStereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosStereoDownmix

Associated Types

type Rep Eac3AtmosStereoDownmix :: Type -> Type #

NFData Eac3AtmosStereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosStereoDownmix

Methods

rnf :: Eac3AtmosStereoDownmix -> () #

Hashable Eac3AtmosStereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosStereoDownmix

ToJSON Eac3AtmosStereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosStereoDownmix

ToJSONKey Eac3AtmosStereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosStereoDownmix

FromJSON Eac3AtmosStereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosStereoDownmix

FromJSONKey Eac3AtmosStereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosStereoDownmix

ToLog Eac3AtmosStereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosStereoDownmix

ToHeader Eac3AtmosStereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosStereoDownmix

ToQuery Eac3AtmosStereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosStereoDownmix

FromXML Eac3AtmosStereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosStereoDownmix

ToXML Eac3AtmosStereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosStereoDownmix

ToByteString Eac3AtmosStereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosStereoDownmix

FromText Eac3AtmosStereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosStereoDownmix

ToText Eac3AtmosStereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosStereoDownmix

type Rep Eac3AtmosStereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosStereoDownmix

type Rep Eac3AtmosStereoDownmix = D1 ('MetaData "Eac3AtmosStereoDownmix" "Amazonka.MediaConvert.Types.Eac3AtmosStereoDownmix" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3AtmosStereoDownmix'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3AtmosStereoDownmix") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3AtmosSurroundExMode

newtype Eac3AtmosSurroundExMode Source #

Specify whether your input audio has an additional center rear surround channel matrix encoded into your left and right surround channels.

Instances

Instances details
Eq Eac3AtmosSurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSurroundExMode

Ord Eac3AtmosSurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSurroundExMode

Read Eac3AtmosSurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSurroundExMode

Show Eac3AtmosSurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSurroundExMode

Generic Eac3AtmosSurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSurroundExMode

Associated Types

type Rep Eac3AtmosSurroundExMode :: Type -> Type #

NFData Eac3AtmosSurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSurroundExMode

Methods

rnf :: Eac3AtmosSurroundExMode -> () #

Hashable Eac3AtmosSurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSurroundExMode

ToJSON Eac3AtmosSurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSurroundExMode

ToJSONKey Eac3AtmosSurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSurroundExMode

FromJSON Eac3AtmosSurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSurroundExMode

FromJSONKey Eac3AtmosSurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSurroundExMode

ToLog Eac3AtmosSurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSurroundExMode

ToHeader Eac3AtmosSurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSurroundExMode

ToQuery Eac3AtmosSurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSurroundExMode

FromXML Eac3AtmosSurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSurroundExMode

ToXML Eac3AtmosSurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSurroundExMode

ToByteString Eac3AtmosSurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSurroundExMode

FromText Eac3AtmosSurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSurroundExMode

ToText Eac3AtmosSurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSurroundExMode

type Rep Eac3AtmosSurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSurroundExMode

type Rep Eac3AtmosSurroundExMode = D1 ('MetaData "Eac3AtmosSurroundExMode" "Amazonka.MediaConvert.Types.Eac3AtmosSurroundExMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3AtmosSurroundExMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3AtmosSurroundExMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3AttenuationControl

newtype Eac3AttenuationControl Source #

If set to ATTENUATE_3_DB, applies a 3 dB attenuation to the surround channels. Only used for 3/2 coding mode.
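
A minimal sketch using the ATTENUATE_3_DB wire value named above; the constructor spelling comes from the Rep at the end of this entry, and any other value simply leaves the surround channels untouched.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (Eac3AttenuationControl (..))

attenuate3Db :: Eac3AttenuationControl
attenuate3Db = Eac3AttenuationControl' "ATTENUATE_3_DB"

-- True only for the value that applies the 3 dB surround attenuation.
applies3DbCut :: Eac3AttenuationControl -> Bool
applies3DbCut = (== attenuate3Db)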

Instances

Instances details
Eq Eac3AttenuationControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AttenuationControl

Ord Eac3AttenuationControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AttenuationControl

Read Eac3AttenuationControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AttenuationControl

Show Eac3AttenuationControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AttenuationControl

Generic Eac3AttenuationControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AttenuationControl

Associated Types

type Rep Eac3AttenuationControl :: Type -> Type #

NFData Eac3AttenuationControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AttenuationControl

Methods

rnf :: Eac3AttenuationControl -> () #

Hashable Eac3AttenuationControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AttenuationControl

ToJSON Eac3AttenuationControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AttenuationControl

ToJSONKey Eac3AttenuationControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AttenuationControl

FromJSON Eac3AttenuationControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AttenuationControl

FromJSONKey Eac3AttenuationControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AttenuationControl

ToLog Eac3AttenuationControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AttenuationControl

ToHeader Eac3AttenuationControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AttenuationControl

ToQuery Eac3AttenuationControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AttenuationControl

FromXML Eac3AttenuationControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AttenuationControl

ToXML Eac3AttenuationControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AttenuationControl

ToByteString Eac3AttenuationControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AttenuationControl

FromText Eac3AttenuationControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AttenuationControl

ToText Eac3AttenuationControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AttenuationControl

type Rep Eac3AttenuationControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AttenuationControl

type Rep Eac3AttenuationControl = D1 ('MetaData "Eac3AttenuationControl" "Amazonka.MediaConvert.Types.Eac3AttenuationControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3AttenuationControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3AttenuationControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3BitstreamMode

newtype Eac3BitstreamMode Source #

Specify the bitstream mode for the E-AC-3 stream that the encoder emits. For more information about the E-AC-3 bitstream mode, see ATSC A/52-2012 (Annex E).

Instances

Instances details
Eq Eac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3BitstreamMode

Ord Eac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3BitstreamMode

Read Eac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3BitstreamMode

Show Eac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3BitstreamMode

Generic Eac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3BitstreamMode

Associated Types

type Rep Eac3BitstreamMode :: Type -> Type #

NFData Eac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3BitstreamMode

Methods

rnf :: Eac3BitstreamMode -> () #

Hashable Eac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3BitstreamMode

ToJSON Eac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3BitstreamMode

ToJSONKey Eac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3BitstreamMode

FromJSON Eac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3BitstreamMode

FromJSONKey Eac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3BitstreamMode

ToLog Eac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3BitstreamMode

ToHeader Eac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3BitstreamMode

ToQuery Eac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3BitstreamMode

FromXML Eac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3BitstreamMode

ToXML Eac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3BitstreamMode

ToByteString Eac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3BitstreamMode

FromText Eac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3BitstreamMode

ToText Eac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3BitstreamMode

type Rep Eac3BitstreamMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3BitstreamMode

type Rep Eac3BitstreamMode = D1 ('MetaData "Eac3BitstreamMode" "Amazonka.MediaConvert.Types.Eac3BitstreamMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3BitstreamMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3BitstreamMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3CodingMode

newtype Eac3CodingMode Source #

Dolby Digital Plus coding mode. Determines the number of channels.
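
A hypothetical helper mapping coding modes to channel counts. The CODING_MODE_* wire values are assumptions about the MediaConvert API and are not listed in this excerpt, so treat them as illustrative only.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (Eac3CodingMode (..))

-- Channel count implied by each coding mode (wire values assumed, see note above).
channelCount :: Eac3CodingMode -> Maybe Int
channelCount (Eac3CodingMode' mode) = case mode of
  "CODING_MODE_1_0" -> Just 1   -- mono
  "CODING_MODE_2_0" -> Just 2   -- stereo
  "CODING_MODE_3_2" -> Just 6   -- 5.1: three front, two surround, one LFE
  _                 -> Nothing  -- unrecognised or newer value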

Constructors

Eac3CodingMode' 

Instances

Instances details
Eq Eac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3CodingMode

Ord Eac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3CodingMode

Read Eac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3CodingMode

Show Eac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3CodingMode

Generic Eac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3CodingMode

Associated Types

type Rep Eac3CodingMode :: Type -> Type #

NFData Eac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3CodingMode

Methods

rnf :: Eac3CodingMode -> () #

Hashable Eac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3CodingMode

ToJSON Eac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3CodingMode

ToJSONKey Eac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3CodingMode

FromJSON Eac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3CodingMode

FromJSONKey Eac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3CodingMode

ToLog Eac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3CodingMode

ToHeader Eac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3CodingMode

ToQuery Eac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3CodingMode

FromXML Eac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3CodingMode

ToXML Eac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3CodingMode

Methods

toXML :: Eac3CodingMode -> XML #

ToByteString Eac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3CodingMode

FromText Eac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3CodingMode

ToText Eac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3CodingMode

type Rep Eac3CodingMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3CodingMode

type Rep Eac3CodingMode = D1 ('MetaData "Eac3CodingMode" "Amazonka.MediaConvert.Types.Eac3CodingMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3CodingMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3CodingMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3DcFilter

newtype Eac3DcFilter Source #

Activates a DC highpass filter for all input channels.

Constructors

Eac3DcFilter' 

Bundled Patterns

pattern Eac3DcFilter_DISABLED :: Eac3DcFilter 
pattern Eac3DcFilter_ENABLED :: Eac3DcFilter 
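
A short, total match over the two bundled patterns listed above; the catch-all keeps the function defined for any other wire value the API might return.

import Amazonka.MediaConvert.Types (Eac3DcFilter (..))

describeDcFilter :: Eac3DcFilter -> String
describeDcFilter Eac3DcFilter_ENABLED  = "DC highpass filter applied to every input channel"
describeDcFilter Eac3DcFilter_DISABLED = "no DC highpass filtering"
describeDcFilter other                 = "unrecognised value: " <> show other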

Instances

Instances details
Eq Eac3DcFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DcFilter

Ord Eac3DcFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DcFilter

Read Eac3DcFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DcFilter

Show Eac3DcFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DcFilter

Generic Eac3DcFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DcFilter

Associated Types

type Rep Eac3DcFilter :: Type -> Type #

NFData Eac3DcFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DcFilter

Methods

rnf :: Eac3DcFilter -> () #

Hashable Eac3DcFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DcFilter

ToJSON Eac3DcFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DcFilter

ToJSONKey Eac3DcFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DcFilter

FromJSON Eac3DcFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DcFilter

FromJSONKey Eac3DcFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DcFilter

ToLog Eac3DcFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DcFilter

ToHeader Eac3DcFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DcFilter

ToQuery Eac3DcFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DcFilter

FromXML Eac3DcFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DcFilter

ToXML Eac3DcFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DcFilter

Methods

toXML :: Eac3DcFilter -> XML #

ToByteString Eac3DcFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DcFilter

FromText Eac3DcFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DcFilter

ToText Eac3DcFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DcFilter

Methods

toText :: Eac3DcFilter -> Text #

type Rep Eac3DcFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DcFilter

type Rep Eac3DcFilter = D1 ('MetaData "Eac3DcFilter" "Amazonka.MediaConvert.Types.Eac3DcFilter" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3DcFilter'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3DcFilter") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3DynamicRangeCompressionLine

newtype Eac3DynamicRangeCompressionLine Source #

Choose the Dolby Digital dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby Digital stream for the line operating mode. Related setting: When you use this setting, MediaConvert ignores any value you provide for Dynamic range compression profile (DynamicRangeCompressionProfile). For information about the Dolby Digital DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.

Instances

Instances details
Eq Eac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionLine

Ord Eac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionLine

Read Eac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionLine

Show Eac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionLine

Generic Eac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionLine

Associated Types

type Rep Eac3DynamicRangeCompressionLine :: Type -> Type #

NFData Eac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionLine

Hashable Eac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionLine

ToJSON Eac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionLine

ToJSONKey Eac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionLine

FromJSON Eac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionLine

FromJSONKey Eac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionLine

ToLog Eac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionLine

ToHeader Eac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionLine

ToQuery Eac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionLine

FromXML Eac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionLine

ToXML Eac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionLine

ToByteString Eac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionLine

FromText Eac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionLine

ToText Eac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionLine

type Rep Eac3DynamicRangeCompressionLine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionLine

type Rep Eac3DynamicRangeCompressionLine = D1 ('MetaData "Eac3DynamicRangeCompressionLine" "Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionLine" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3DynamicRangeCompressionLine'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3DynamicRangeCompressionLine") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3DynamicRangeCompressionRf

newtype Eac3DynamicRangeCompressionRf Source #

Choose the Dolby Digital dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby Digital stream for the RF operating mode. Related setting: When you use this setting, MediaConvert ignores any value you provide for Dynamic range compression profile (DynamicRangeCompressionProfile). For information about the Dolby Digital DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.

Instances

Instances details
Eq Eac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionRf

Ord Eac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionRf

Read Eac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionRf

Show Eac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionRf

Generic Eac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionRf

Associated Types

type Rep Eac3DynamicRangeCompressionRf :: Type -> Type #

NFData Eac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionRf

Hashable Eac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionRf

ToJSON Eac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionRf

ToJSONKey Eac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionRf

FromJSON Eac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionRf

FromJSONKey Eac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionRf

ToLog Eac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionRf

ToHeader Eac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionRf

ToQuery Eac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionRf

FromXML Eac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionRf

ToXML Eac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionRf

ToByteString Eac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionRf

FromText Eac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionRf

ToText Eac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionRf

type Rep Eac3DynamicRangeCompressionRf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionRf

type Rep Eac3DynamicRangeCompressionRf = D1 ('MetaData "Eac3DynamicRangeCompressionRf" "Amazonka.MediaConvert.Types.Eac3DynamicRangeCompressionRf" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3DynamicRangeCompressionRf'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3DynamicRangeCompressionRf") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3LfeControl

newtype Eac3LfeControl Source #

When encoding 3/2 audio, controls whether the LFE channel is enabled.

Constructors

Eac3LfeControl' 

Instances

Instances details
Eq Eac3LfeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeControl

Ord Eac3LfeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeControl

Read Eac3LfeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeControl

Show Eac3LfeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeControl

Generic Eac3LfeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeControl

Associated Types

type Rep Eac3LfeControl :: Type -> Type #

NFData Eac3LfeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeControl

Methods

rnf :: Eac3LfeControl -> () #

Hashable Eac3LfeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeControl

ToJSON Eac3LfeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeControl

ToJSONKey Eac3LfeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeControl

FromJSON Eac3LfeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeControl

FromJSONKey Eac3LfeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeControl

ToLog Eac3LfeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeControl

ToHeader Eac3LfeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeControl

ToQuery Eac3LfeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeControl

FromXML Eac3LfeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeControl

ToXML Eac3LfeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeControl

Methods

toXML :: Eac3LfeControl -> XML #

ToByteString Eac3LfeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeControl

FromText Eac3LfeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeControl

ToText Eac3LfeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeControl

type Rep Eac3LfeControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeControl

type Rep Eac3LfeControl = D1 ('MetaData "Eac3LfeControl" "Amazonka.MediaConvert.Types.Eac3LfeControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3LfeControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3LfeControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3LfeFilter

newtype Eac3LfeFilter Source #

Applies a 120 Hz lowpass filter to the LFE channel prior to encoding. Only valid with 3_2_LFE coding mode.

Constructors

Eac3LfeFilter' 

Instances

Instances details
Eq Eac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeFilter

Ord Eac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeFilter

Read Eac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeFilter

Show Eac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeFilter

Generic Eac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeFilter

Associated Types

type Rep Eac3LfeFilter :: Type -> Type #

NFData Eac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeFilter

Methods

rnf :: Eac3LfeFilter -> () #

Hashable Eac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeFilter

ToJSON Eac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeFilter

ToJSONKey Eac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeFilter

FromJSON Eac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeFilter

FromJSONKey Eac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeFilter

ToLog Eac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeFilter

ToHeader Eac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeFilter

ToQuery Eac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeFilter

FromXML Eac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeFilter

ToXML Eac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeFilter

Methods

toXML :: Eac3LfeFilter -> XML #

ToByteString Eac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeFilter

FromText Eac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeFilter

ToText Eac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeFilter

Methods

toText :: Eac3LfeFilter -> Text #

type Rep Eac3LfeFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3LfeFilter

type Rep Eac3LfeFilter = D1 ('MetaData "Eac3LfeFilter" "Amazonka.MediaConvert.Types.Eac3LfeFilter" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3LfeFilter'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3LfeFilter") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3MetadataControl

newtype Eac3MetadataControl Source #

When set to FOLLOW_INPUT, encoder metadata will be sourced from the DD, DD+, or Dolby E decoder that supplied this audio data. If audio was not supplied from one of these streams, then the static metadata settings will be used.

Instances

Instances details
Eq Eac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3MetadataControl

Ord Eac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3MetadataControl

Read Eac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3MetadataControl

Show Eac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3MetadataControl

Generic Eac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3MetadataControl

Associated Types

type Rep Eac3MetadataControl :: Type -> Type #

NFData Eac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3MetadataControl

Methods

rnf :: Eac3MetadataControl -> () #

Hashable Eac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3MetadataControl

ToJSON Eac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3MetadataControl

ToJSONKey Eac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3MetadataControl

FromJSON Eac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3MetadataControl

FromJSONKey Eac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3MetadataControl

ToLog Eac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3MetadataControl

ToHeader Eac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3MetadataControl

ToQuery Eac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3MetadataControl

FromXML Eac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3MetadataControl

ToXML Eac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3MetadataControl

ToByteString Eac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3MetadataControl

FromText Eac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3MetadataControl

ToText Eac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3MetadataControl

type Rep Eac3MetadataControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3MetadataControl

type Rep Eac3MetadataControl = D1 ('MetaData "Eac3MetadataControl" "Amazonka.MediaConvert.Types.Eac3MetadataControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3MetadataControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3MetadataControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3PassthroughControl

newtype Eac3PassthroughControl Source #

When set to WHEN_POSSIBLE, input DD+ audio will be passed through if it is present on the input. This detection is dynamic over the life of the transcode. Inputs that alternate between DD+ and non-DD+ content will have a consistent DD+ output as the system alternates between passthrough and encoding.
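
A hand-written usage sketch (not part of the generated reference) exercising the FromText and ToJSON instances listed below. The Amazonka.Data.Text import path and the exact "WHEN_POSSIBLE" wire string are assumptions based on the description.

-- Sketch only; import paths and the wire value are assumptions.
import Amazonka.Data.Text (fromText, toText)
import Amazonka.MediaConvert.Types (Eac3PassthroughControl)
import Data.Aeson (encode)
import qualified Data.Text as Text

parsedPassthrough :: Either String Eac3PassthroughControl
parsedPassthrough = fromText (Text.pack "WHEN_POSSIBLE")

-- ToJSON renders the same wire string as a JSON string; ToText recovers it.
renderPassthrough :: Eac3PassthroughControl -> IO ()
renderPassthrough pc = do
  print (toText pc)
  print (encode pc)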

Instances

Instances details
Eq Eac3PassthroughControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PassthroughControl

Ord Eac3PassthroughControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PassthroughControl

Read Eac3PassthroughControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PassthroughControl

Show Eac3PassthroughControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PassthroughControl

Generic Eac3PassthroughControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PassthroughControl

Associated Types

type Rep Eac3PassthroughControl :: Type -> Type #

NFData Eac3PassthroughControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PassthroughControl

Methods

rnf :: Eac3PassthroughControl -> () #

Hashable Eac3PassthroughControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PassthroughControl

ToJSON Eac3PassthroughControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PassthroughControl

ToJSONKey Eac3PassthroughControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PassthroughControl

FromJSON Eac3PassthroughControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PassthroughControl

FromJSONKey Eac3PassthroughControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PassthroughControl

ToLog Eac3PassthroughControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PassthroughControl

ToHeader Eac3PassthroughControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PassthroughControl

ToQuery Eac3PassthroughControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PassthroughControl

FromXML Eac3PassthroughControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PassthroughControl

ToXML Eac3PassthroughControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PassthroughControl

ToByteString Eac3PassthroughControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PassthroughControl

FromText Eac3PassthroughControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PassthroughControl

ToText Eac3PassthroughControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PassthroughControl

type Rep Eac3PassthroughControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PassthroughControl

type Rep Eac3PassthroughControl = D1 ('MetaData "Eac3PassthroughControl" "Amazonka.MediaConvert.Types.Eac3PassthroughControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3PassthroughControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3PassthroughControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3PhaseControl

newtype Eac3PhaseControl Source #

Controls the amount of phase-shift applied to the surround channels. Only used for 3/2 coding mode.

Instances

Instances details
Eq Eac3PhaseControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PhaseControl

Ord Eac3PhaseControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PhaseControl

Read Eac3PhaseControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PhaseControl

Show Eac3PhaseControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PhaseControl

Generic Eac3PhaseControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PhaseControl

Associated Types

type Rep Eac3PhaseControl :: Type -> Type #

NFData Eac3PhaseControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PhaseControl

Methods

rnf :: Eac3PhaseControl -> () #

Hashable Eac3PhaseControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PhaseControl

ToJSON Eac3PhaseControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PhaseControl

ToJSONKey Eac3PhaseControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PhaseControl

FromJSON Eac3PhaseControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PhaseControl

FromJSONKey Eac3PhaseControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PhaseControl

ToLog Eac3PhaseControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PhaseControl

ToHeader Eac3PhaseControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PhaseControl

ToQuery Eac3PhaseControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PhaseControl

FromXML Eac3PhaseControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PhaseControl

ToXML Eac3PhaseControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PhaseControl

ToByteString Eac3PhaseControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PhaseControl

FromText Eac3PhaseControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PhaseControl

ToText Eac3PhaseControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PhaseControl

type Rep Eac3PhaseControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3PhaseControl

type Rep Eac3PhaseControl = D1 ('MetaData "Eac3PhaseControl" "Amazonka.MediaConvert.Types.Eac3PhaseControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3PhaseControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3PhaseControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3StereoDownmix

newtype Eac3StereoDownmix Source #

Choose how the service does stereo downmixing. This setting only applies if you keep the default value of 3/2 - L, R, C, Ls, Rs (CODING_MODE_3_2) for the setting Coding mode (Eac3CodingMode). If you choose a different value for Coding mode, the service ignores Stereo downmix (Eac3StereoDownmix).
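
A hand-written sketch (not part of the generated reference) of the dependency described above: a stereo downmix choice is only meaningful when the coding mode is CODING_MODE_3_2. The Eac3CodingMode type name and the CODING_MODE_3_2 string are quoted from the description; comparing via toText is an assumption about how the wire values are represented.

-- Sketch only; assumes toText yields the wire string quoted in the description.
import Amazonka.Data.Text (toText)
import Amazonka.MediaConvert.Types (Eac3CodingMode, Eac3StereoDownmix)
import qualified Data.Text as Text

effectiveStereoDownmix
  :: Eac3CodingMode -> Maybe Eac3StereoDownmix -> Maybe Eac3StereoDownmix
effectiveStereoDownmix codingMode downmix
  | toText codingMode == Text.pack "CODING_MODE_3_2" = downmix
  | otherwise = Nothing  -- the service ignores the setting for other coding modes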

Instances

Instances details
Eq Eac3StereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3StereoDownmix

Ord Eac3StereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3StereoDownmix

Read Eac3StereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3StereoDownmix

Show Eac3StereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3StereoDownmix

Generic Eac3StereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3StereoDownmix

Associated Types

type Rep Eac3StereoDownmix :: Type -> Type #

NFData Eac3StereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3StereoDownmix

Methods

rnf :: Eac3StereoDownmix -> () #

Hashable Eac3StereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3StereoDownmix

ToJSON Eac3StereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3StereoDownmix

ToJSONKey Eac3StereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3StereoDownmix

FromJSON Eac3StereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3StereoDownmix

FromJSONKey Eac3StereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3StereoDownmix

ToLog Eac3StereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3StereoDownmix

ToHeader Eac3StereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3StereoDownmix

ToQuery Eac3StereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3StereoDownmix

FromXML Eac3StereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3StereoDownmix

ToXML Eac3StereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3StereoDownmix

ToByteString Eac3StereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3StereoDownmix

FromText Eac3StereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3StereoDownmix

ToText Eac3StereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3StereoDownmix

type Rep Eac3StereoDownmix Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3StereoDownmix

type Rep Eac3StereoDownmix = D1 ('MetaData "Eac3StereoDownmix" "Amazonka.MediaConvert.Types.Eac3StereoDownmix" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3StereoDownmix'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3StereoDownmix") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3SurroundExMode

newtype Eac3SurroundExMode Source #

When encoding 3/2 audio, sets whether an extra center back surround channel is matrix encoded into the left and right surround channels.

Instances

Instances details
Eq Eac3SurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundExMode

Ord Eac3SurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundExMode

Read Eac3SurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundExMode

Show Eac3SurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundExMode

Generic Eac3SurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundExMode

Associated Types

type Rep Eac3SurroundExMode :: Type -> Type #

NFData Eac3SurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundExMode

Methods

rnf :: Eac3SurroundExMode -> () #

Hashable Eac3SurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundExMode

ToJSON Eac3SurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundExMode

ToJSONKey Eac3SurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundExMode

FromJSON Eac3SurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundExMode

FromJSONKey Eac3SurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundExMode

ToLog Eac3SurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundExMode

ToHeader Eac3SurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundExMode

ToQuery Eac3SurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundExMode

FromXML Eac3SurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundExMode

ToXML Eac3SurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundExMode

ToByteString Eac3SurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundExMode

FromText Eac3SurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundExMode

ToText Eac3SurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundExMode

type Rep Eac3SurroundExMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundExMode

type Rep Eac3SurroundExMode = D1 ('MetaData "Eac3SurroundExMode" "Amazonka.MediaConvert.Types.Eac3SurroundExMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3SurroundExMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3SurroundExMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Eac3SurroundMode

newtype Eac3SurroundMode Source #

When encoding 2/0 audio, sets whether Dolby Surround is matrix encoded into the two channels.

Instances

Instances details
Eq Eac3SurroundMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundMode

Ord Eac3SurroundMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundMode

Read Eac3SurroundMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundMode

Show Eac3SurroundMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundMode

Generic Eac3SurroundMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundMode

Associated Types

type Rep Eac3SurroundMode :: Type -> Type #

NFData Eac3SurroundMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundMode

Methods

rnf :: Eac3SurroundMode -> () #

Hashable Eac3SurroundMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundMode

ToJSON Eac3SurroundMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundMode

ToJSONKey Eac3SurroundMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundMode

FromJSON Eac3SurroundMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundMode

FromJSONKey Eac3SurroundMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundMode

ToLog Eac3SurroundMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundMode

ToHeader Eac3SurroundMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundMode

ToQuery Eac3SurroundMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundMode

FromXML Eac3SurroundMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundMode

ToXML Eac3SurroundMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundMode

ToByteString Eac3SurroundMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundMode

FromText Eac3SurroundMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundMode

ToText Eac3SurroundMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundMode

type Rep Eac3SurroundMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3SurroundMode

type Rep Eac3SurroundMode = D1 ('MetaData "Eac3SurroundMode" "Amazonka.MediaConvert.Types.Eac3SurroundMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Eac3SurroundMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEac3SurroundMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

EmbeddedConvert608To708

newtype EmbeddedConvert608To708 Source #

Specify whether this set of input captions appears in your outputs in both 608 and 708 format. If you choose Upconvert (UPCONVERT), MediaConvert includes the captions data in two ways: it passes the 608 data through using the 608 compatibility bytes fields of the 708 wrapper, and it also translates the 608 data into 708.

Instances

Instances details
Eq EmbeddedConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedConvert608To708

Ord EmbeddedConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedConvert608To708

Read EmbeddedConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedConvert608To708

Show EmbeddedConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedConvert608To708

Generic EmbeddedConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedConvert608To708

Associated Types

type Rep EmbeddedConvert608To708 :: Type -> Type #

NFData EmbeddedConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedConvert608To708

Methods

rnf :: EmbeddedConvert608To708 -> () #

Hashable EmbeddedConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedConvert608To708

ToJSON EmbeddedConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedConvert608To708

ToJSONKey EmbeddedConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedConvert608To708

FromJSON EmbeddedConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedConvert608To708

FromJSONKey EmbeddedConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedConvert608To708

ToLog EmbeddedConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedConvert608To708

ToHeader EmbeddedConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedConvert608To708

ToQuery EmbeddedConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedConvert608To708

FromXML EmbeddedConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedConvert608To708

ToXML EmbeddedConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedConvert608To708

ToByteString EmbeddedConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedConvert608To708

FromText EmbeddedConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedConvert608To708

ToText EmbeddedConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedConvert608To708

type Rep EmbeddedConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedConvert608To708

type Rep EmbeddedConvert608To708 = D1 ('MetaData "EmbeddedConvert608To708" "Amazonka.MediaConvert.Types.EmbeddedConvert608To708" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "EmbeddedConvert608To708'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEmbeddedConvert608To708") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

EmbeddedTerminateCaptions

newtype EmbeddedTerminateCaptions Source #

By default, the service terminates any unterminated captions at the end of each input. If you want captions to continue onto your next input, disable this setting.

Instances

Instances details
Eq EmbeddedTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedTerminateCaptions

Ord EmbeddedTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedTerminateCaptions

Read EmbeddedTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedTerminateCaptions

Show EmbeddedTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedTerminateCaptions

Generic EmbeddedTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedTerminateCaptions

Associated Types

type Rep EmbeddedTerminateCaptions :: Type -> Type #

NFData EmbeddedTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedTerminateCaptions

Hashable EmbeddedTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedTerminateCaptions

ToJSON EmbeddedTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedTerminateCaptions

ToJSONKey EmbeddedTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedTerminateCaptions

FromJSON EmbeddedTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedTerminateCaptions

FromJSONKey EmbeddedTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedTerminateCaptions

ToLog EmbeddedTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedTerminateCaptions

ToHeader EmbeddedTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedTerminateCaptions

ToQuery EmbeddedTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedTerminateCaptions

FromXML EmbeddedTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedTerminateCaptions

ToXML EmbeddedTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedTerminateCaptions

ToByteString EmbeddedTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedTerminateCaptions

FromText EmbeddedTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedTerminateCaptions

ToText EmbeddedTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedTerminateCaptions

type Rep EmbeddedTerminateCaptions Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedTerminateCaptions

type Rep EmbeddedTerminateCaptions = D1 ('MetaData "EmbeddedTerminateCaptions" "Amazonka.MediaConvert.Types.EmbeddedTerminateCaptions" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "EmbeddedTerminateCaptions'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEmbeddedTerminateCaptions") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

F4vMoovPlacement

newtype F4vMoovPlacement Source #

If set to PROGRESSIVE_DOWNLOAD, the MOOV atom is relocated to the beginning of the archive as required for progressive downloading. Otherwise it is placed normally at the end.
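
A hand-written sketch (not part of the generated reference) of the FromJSON instance listed below: the setting decodes from a bare JSON string. "PROGRESSIVE_DOWNLOAD" is quoted from the description; treating it as the exact wire value is an assumption.

-- Sketch only; decodes the assumed wire value from a JSON string literal.
import Amazonka.MediaConvert.Types (F4vMoovPlacement)
import Data.Aeson (decode)
import qualified Data.ByteString.Lazy.Char8 as BL

decodedMoovPlacement :: Maybe F4vMoovPlacement
decodedMoovPlacement = decode (BL.pack "\"PROGRESSIVE_DOWNLOAD\"")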

Instances

Instances details
Eq F4vMoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vMoovPlacement

Ord F4vMoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vMoovPlacement

Read F4vMoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vMoovPlacement

Show F4vMoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vMoovPlacement

Generic F4vMoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vMoovPlacement

Associated Types

type Rep F4vMoovPlacement :: Type -> Type #

NFData F4vMoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vMoovPlacement

Methods

rnf :: F4vMoovPlacement -> () #

Hashable F4vMoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vMoovPlacement

ToJSON F4vMoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vMoovPlacement

ToJSONKey F4vMoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vMoovPlacement

FromJSON F4vMoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vMoovPlacement

FromJSONKey F4vMoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vMoovPlacement

ToLog F4vMoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vMoovPlacement

ToHeader F4vMoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vMoovPlacement

ToQuery F4vMoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vMoovPlacement

FromXML F4vMoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vMoovPlacement

ToXML F4vMoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vMoovPlacement

ToByteString F4vMoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vMoovPlacement

FromText F4vMoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vMoovPlacement

ToText F4vMoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vMoovPlacement

type Rep F4vMoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vMoovPlacement

type Rep F4vMoovPlacement = D1 ('MetaData "F4vMoovPlacement" "Amazonka.MediaConvert.Types.F4vMoovPlacement" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "F4vMoovPlacement'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromF4vMoovPlacement") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

FileSourceConvert608To708

newtype FileSourceConvert608To708 Source #

Specify whether this set of input captions appears in your outputs in both 608 and 708 format. If you choose Upconvert (UPCONVERT), MediaConvert includes the captions data in two ways: it passes the 608 data through using the 608 compatibility bytes fields of the 708 wrapper, and it also translates the 608 data into 708.

Instances

Instances details
Eq FileSourceConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceConvert608To708

Ord FileSourceConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceConvert608To708

Read FileSourceConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceConvert608To708

Show FileSourceConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceConvert608To708

Generic FileSourceConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceConvert608To708

Associated Types

type Rep FileSourceConvert608To708 :: Type -> Type #

NFData FileSourceConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceConvert608To708

Hashable FileSourceConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceConvert608To708

ToJSON FileSourceConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceConvert608To708

ToJSONKey FileSourceConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceConvert608To708

FromJSON FileSourceConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceConvert608To708

FromJSONKey FileSourceConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceConvert608To708

ToLog FileSourceConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceConvert608To708

ToHeader FileSourceConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceConvert608To708

ToQuery FileSourceConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceConvert608To708

FromXML FileSourceConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceConvert608To708

ToXML FileSourceConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceConvert608To708

ToByteString FileSourceConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceConvert608To708

FromText FileSourceConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceConvert608To708

ToText FileSourceConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceConvert608To708

type Rep FileSourceConvert608To708 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceConvert608To708

type Rep FileSourceConvert608To708 = D1 ('MetaData "FileSourceConvert608To708" "Amazonka.MediaConvert.Types.FileSourceConvert608To708" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "FileSourceConvert608To708'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromFileSourceConvert608To708") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

FileSourceTimeDeltaUnits

newtype FileSourceTimeDeltaUnits Source #

When you use the setting Time delta (TimeDelta) to adjust the sync between your sidecar captions and your video, use this setting to specify the units for the delta that you specify. When you don't specify a value for Time delta units (TimeDeltaUnits), MediaConvert uses seconds by default.
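
A hand-written sketch (not part of the generated reference) of the defaulting rule above: when no units are given, fall back to seconds. The "SECONDS" wire value is an assumption about the API enum, as is the Amazonka.Data.Text import path.

-- Sketch only; "SECONDS" is an assumed wire value.
import Amazonka.Data.Text (fromText)
import Amazonka.MediaConvert.Types (FileSourceTimeDeltaUnits)
import qualified Data.Text as Text

resolveTimeDeltaUnits
  :: Maybe FileSourceTimeDeltaUnits -> Either String FileSourceTimeDeltaUnits
resolveTimeDeltaUnits (Just units) = Right units
resolveTimeDeltaUnits Nothing      = fromText (Text.pack "SECONDS")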

Instances

Instances details
Eq FileSourceTimeDeltaUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceTimeDeltaUnits

Ord FileSourceTimeDeltaUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceTimeDeltaUnits

Read FileSourceTimeDeltaUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceTimeDeltaUnits

Show FileSourceTimeDeltaUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceTimeDeltaUnits

Generic FileSourceTimeDeltaUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceTimeDeltaUnits

Associated Types

type Rep FileSourceTimeDeltaUnits :: Type -> Type #

NFData FileSourceTimeDeltaUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceTimeDeltaUnits

Hashable FileSourceTimeDeltaUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceTimeDeltaUnits

ToJSON FileSourceTimeDeltaUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceTimeDeltaUnits

ToJSONKey FileSourceTimeDeltaUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceTimeDeltaUnits

FromJSON FileSourceTimeDeltaUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceTimeDeltaUnits

FromJSONKey FileSourceTimeDeltaUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceTimeDeltaUnits

ToLog FileSourceTimeDeltaUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceTimeDeltaUnits

ToHeader FileSourceTimeDeltaUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceTimeDeltaUnits

ToQuery FileSourceTimeDeltaUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceTimeDeltaUnits

FromXML FileSourceTimeDeltaUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceTimeDeltaUnits

ToXML FileSourceTimeDeltaUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceTimeDeltaUnits

ToByteString FileSourceTimeDeltaUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceTimeDeltaUnits

FromText FileSourceTimeDeltaUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceTimeDeltaUnits

ToText FileSourceTimeDeltaUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceTimeDeltaUnits

type Rep FileSourceTimeDeltaUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceTimeDeltaUnits

type Rep FileSourceTimeDeltaUnits = D1 ('MetaData "FileSourceTimeDeltaUnits" "Amazonka.MediaConvert.Types.FileSourceTimeDeltaUnits" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "FileSourceTimeDeltaUnits'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromFileSourceTimeDeltaUnits") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

FontScript

newtype FontScript Source #

Provide the font script, using an ISO 15924 script code, if the LanguageCode is not sufficient for determining the script type. Where LanguageCode or CustomLanguageCode is sufficient, use "AUTOMATIC" or leave unset.

Constructors

FontScript' 

Fields

fromFontScript :: Text

Bundled Patterns

pattern FontScript_AUTOMATIC :: FontScript 
pattern FontScript_HANS :: FontScript 
pattern FontScript_HANT :: FontScript 
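
A hand-written usage sketch (not part of the generated reference) using the bundled patterns listed above; the choice logic is illustrative only.

-- Pick Simplified Chinese script explicitly, or fall back to automatic detection.
import Amazonka.MediaConvert.Types (FontScript (..))

chooseFontScript :: Bool -> FontScript
chooseFontScript preferSimplifiedChinese
  | preferSimplifiedChinese = FontScript_HANS
  | otherwise               = FontScript_AUTOMATIC

-- The ToText instance listed below yields the wire string MediaConvert expects.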

Instances

Instances details
Eq FontScript Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FontScript

Ord FontScript Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FontScript

Read FontScript Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FontScript

Show FontScript Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FontScript

Generic FontScript Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FontScript

Associated Types

type Rep FontScript :: Type -> Type #

NFData FontScript Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FontScript

Methods

rnf :: FontScript -> () #

Hashable FontScript Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FontScript

ToJSON FontScript Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FontScript

ToJSONKey FontScript Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FontScript

FromJSON FontScript Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FontScript

FromJSONKey FontScript Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FontScript

ToLog FontScript Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FontScript

ToHeader FontScript Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FontScript

ToQuery FontScript Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FontScript

FromXML FontScript Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FontScript

ToXML FontScript Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FontScript

Methods

toXML :: FontScript -> XML #

ToByteString FontScript Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FontScript

FromText FontScript Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FontScript

ToText FontScript Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FontScript

Methods

toText :: FontScript -> Text #

type Rep FontScript Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FontScript

type Rep FontScript = D1 ('MetaData "FontScript" "Amazonka.MediaConvert.Types.FontScript" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "FontScript'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromFontScript") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264AdaptiveQuantization

newtype H264AdaptiveQuantization Source #

Keep the default value, Auto (AUTO), for this setting to have MediaConvert automatically apply the best types of quantization for your video content. When you want to apply your quantization settings manually, you must set H264AdaptiveQuantization to a value other than Auto (AUTO). Use this setting to specify the strength of any adaptive quantization filters that you enable. If you don't want MediaConvert to do any adaptive quantization in this transcode, set Adaptive quantization (H264AdaptiveQuantization) to Off (OFF). Related settings: The value that you choose here applies to the following settings: H264FlickerAdaptiveQuantization, H264SpatialAdaptiveQuantization, and H264TemporalAdaptiveQuantization.
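
A hand-written sketch (not part of the generated reference) of the relationship described above: a flicker adaptive-quantization choice only takes effect when H264AdaptiveQuantization is something other than AUTO. "AUTO" is quoted from the description; comparing via toText and the Amazonka.Data.Text import path are assumptions.

-- Sketch only; assumes toText yields the wire value "AUTO" for the Auto setting.
import Amazonka.Data.Text (toText)
import Amazonka.MediaConvert.Types
  ( H264AdaptiveQuantization
  , H264FlickerAdaptiveQuantization
  )
import qualified Data.Text as Text

effectiveFlickerAq
  :: H264AdaptiveQuantization
  -> Maybe H264FlickerAdaptiveQuantization
  -> Maybe H264FlickerAdaptiveQuantization
effectiveFlickerAq aq flickerAq
  | toText aq == Text.pack "AUTO" = Nothing  -- MediaConvert chooses automatically
  | otherwise                     = flickerAq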

Instances

Instances details
Eq H264AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264AdaptiveQuantization

Ord H264AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264AdaptiveQuantization

Read H264AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264AdaptiveQuantization

Show H264AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264AdaptiveQuantization

Generic H264AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264AdaptiveQuantization

Associated Types

type Rep H264AdaptiveQuantization :: Type -> Type #

NFData H264AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264AdaptiveQuantization

Hashable H264AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264AdaptiveQuantization

ToJSON H264AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264AdaptiveQuantization

ToJSONKey H264AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264AdaptiveQuantization

FromJSON H264AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264AdaptiveQuantization

FromJSONKey H264AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264AdaptiveQuantization

ToLog H264AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264AdaptiveQuantization

ToHeader H264AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264AdaptiveQuantization

ToQuery H264AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264AdaptiveQuantization

FromXML H264AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264AdaptiveQuantization

ToXML H264AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264AdaptiveQuantization

ToByteString H264AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264AdaptiveQuantization

FromText H264AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264AdaptiveQuantization

ToText H264AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264AdaptiveQuantization

type Rep H264AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264AdaptiveQuantization

type Rep H264AdaptiveQuantization = D1 ('MetaData "H264AdaptiveQuantization" "Amazonka.MediaConvert.Types.H264AdaptiveQuantization" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264AdaptiveQuantization'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264AdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264CodecLevel

newtype H264CodecLevel Source #

Specify an H.264 level that is consistent with your output video settings. If you aren't sure what level to specify, choose Auto (AUTO).

Constructors

H264CodecLevel' 

Instances

Instances details
Eq H264CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecLevel

Ord H264CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecLevel

Read H264CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecLevel

Show H264CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecLevel

Generic H264CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecLevel

Associated Types

type Rep H264CodecLevel :: Type -> Type #

NFData H264CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecLevel

Methods

rnf :: H264CodecLevel -> () #

Hashable H264CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecLevel

ToJSON H264CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecLevel

ToJSONKey H264CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecLevel

FromJSON H264CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecLevel

FromJSONKey H264CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecLevel

ToLog H264CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecLevel

ToHeader H264CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecLevel

ToQuery H264CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecLevel

FromXML H264CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecLevel

ToXML H264CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecLevel

Methods

toXML :: H264CodecLevel -> XML #

ToByteString H264CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecLevel

FromText H264CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecLevel

ToText H264CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecLevel

type Rep H264CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecLevel

type Rep H264CodecLevel = D1 ('MetaData "H264CodecLevel" "Amazonka.MediaConvert.Types.H264CodecLevel" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264CodecLevel'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264CodecLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264CodecProfile

newtype H264CodecProfile Source #

H.264 Profile. High 4:2:2 and 10-bit profiles are only available with the AVC-I License.

Instances

Instances details
Eq H264CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecProfile

Ord H264CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecProfile

Read H264CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecProfile

Show H264CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecProfile

Generic H264CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecProfile

Associated Types

type Rep H264CodecProfile :: Type -> Type #

NFData H264CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecProfile

Methods

rnf :: H264CodecProfile -> () #

Hashable H264CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecProfile

ToJSON H264CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecProfile

ToJSONKey H264CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecProfile

FromJSON H264CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecProfile

FromJSONKey H264CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecProfile

ToLog H264CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecProfile

ToHeader H264CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecProfile

ToQuery H264CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecProfile

FromXML H264CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecProfile

ToXML H264CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecProfile

ToByteString H264CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecProfile

FromText H264CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecProfile

ToText H264CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecProfile

type Rep H264CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264CodecProfile

type Rep H264CodecProfile = D1 ('MetaData "H264CodecProfile" "Amazonka.MediaConvert.Types.H264CodecProfile" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264CodecProfile'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264CodecProfile") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264DynamicSubGop

newtype H264DynamicSubGop Source #

Choose Adaptive to improve subjective video quality for high-motion content. This will cause the service to use fewer B-frames (which infer information based on other frames) for high-motion portions of the video and more B-frames for low-motion portions. The maximum number of B-frames is limited by the value you provide for the setting B frames between reference frames (numberBFramesBetweenReferenceFrames).

Instances

Instances details
Eq H264DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264DynamicSubGop

Ord H264DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264DynamicSubGop

Read H264DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264DynamicSubGop

Show H264DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264DynamicSubGop

Generic H264DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264DynamicSubGop

Associated Types

type Rep H264DynamicSubGop :: Type -> Type #

NFData H264DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264DynamicSubGop

Methods

rnf :: H264DynamicSubGop -> () #

Hashable H264DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264DynamicSubGop

ToJSON H264DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264DynamicSubGop

ToJSONKey H264DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264DynamicSubGop

FromJSON H264DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264DynamicSubGop

FromJSONKey H264DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264DynamicSubGop

ToLog H264DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264DynamicSubGop

ToHeader H264DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264DynamicSubGop

ToQuery H264DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264DynamicSubGop

FromXML H264DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264DynamicSubGop

ToXML H264DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264DynamicSubGop

ToByteString H264DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264DynamicSubGop

FromText H264DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264DynamicSubGop

ToText H264DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264DynamicSubGop

type Rep H264DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264DynamicSubGop

type Rep H264DynamicSubGop = D1 ('MetaData "H264DynamicSubGop" "Amazonka.MediaConvert.Types.H264DynamicSubGop" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264DynamicSubGop'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264DynamicSubGop") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264EntropyEncoding

newtype H264EntropyEncoding Source #

Entropy encoding mode. Use CABAC (must be in Main or High profile) or CAVLC.

Instances

Instances details
Eq H264EntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264EntropyEncoding

Ord H264EntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264EntropyEncoding

Read H264EntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264EntropyEncoding

Show H264EntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264EntropyEncoding

Generic H264EntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264EntropyEncoding

Associated Types

type Rep H264EntropyEncoding :: Type -> Type #

NFData H264EntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264EntropyEncoding

Methods

rnf :: H264EntropyEncoding -> () #

Hashable H264EntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264EntropyEncoding

ToJSON H264EntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264EntropyEncoding

ToJSONKey H264EntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264EntropyEncoding

FromJSON H264EntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264EntropyEncoding

FromJSONKey H264EntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264EntropyEncoding

ToLog H264EntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264EntropyEncoding

ToHeader H264EntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264EntropyEncoding

ToQuery H264EntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264EntropyEncoding

FromXML H264EntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264EntropyEncoding

ToXML H264EntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264EntropyEncoding

ToByteString H264EntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264EntropyEncoding

FromText H264EntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264EntropyEncoding

ToText H264EntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264EntropyEncoding

type Rep H264EntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264EntropyEncoding

type Rep H264EntropyEncoding = D1 ('MetaData "H264EntropyEncoding" "Amazonka.MediaConvert.Types.H264EntropyEncoding" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264EntropyEncoding'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264EntropyEncoding") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264FieldEncoding

newtype H264FieldEncoding Source #

The video encoding method for your MPEG-4 AVC output. Keep the default value, PAFF, to have MediaConvert use PAFF encoding for interlaced outputs. Choose Force field (FORCE_FIELD) to disable PAFF encoding and create separate interlaced fields. Choose MBAFF to disable PAFF and have MediaConvert use MBAFF encoding for interlaced outputs.

Instances

Instances details
Eq H264FieldEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FieldEncoding

Ord H264FieldEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FieldEncoding

Read H264FieldEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FieldEncoding

Show H264FieldEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FieldEncoding

Generic H264FieldEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FieldEncoding

Associated Types

type Rep H264FieldEncoding :: Type -> Type #

NFData H264FieldEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FieldEncoding

Methods

rnf :: H264FieldEncoding -> () #

Hashable H264FieldEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FieldEncoding

ToJSON H264FieldEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FieldEncoding

ToJSONKey H264FieldEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FieldEncoding

FromJSON H264FieldEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FieldEncoding

FromJSONKey H264FieldEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FieldEncoding

ToLog H264FieldEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FieldEncoding

ToHeader H264FieldEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FieldEncoding

ToQuery H264FieldEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FieldEncoding

FromXML H264FieldEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FieldEncoding

ToXML H264FieldEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FieldEncoding

ToByteString H264FieldEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FieldEncoding

FromText H264FieldEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FieldEncoding

ToText H264FieldEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FieldEncoding

type Rep H264FieldEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FieldEncoding

type Rep H264FieldEncoding = D1 ('MetaData "H264FieldEncoding" "Amazonka.MediaConvert.Types.H264FieldEncoding" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264FieldEncoding'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264FieldEncoding") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264FlickerAdaptiveQuantization

newtype H264FlickerAdaptiveQuantization Source #

Only use this setting when you change the default value, AUTO, for the setting H264AdaptiveQuantization. When you keep all defaults, excluding H264AdaptiveQuantization and all other adaptive quantization from your JSON job specification, MediaConvert automatically applies the best types of quantization for your video content. When you set H264AdaptiveQuantization to a value other than AUTO, the default value for H264FlickerAdaptiveQuantization is Disabled (DISABLED). Change this value to Enabled (ENABLED) to reduce I-frame pop. I-frame pop appears as a visual flicker that can arise when the encoder saves bits by copying some macroblocks many times from frame to frame, and then refreshes them at the I-frame. When you enable this setting, the encoder updates these macroblocks slightly more often to smooth out the flicker. To manually enable or disable H264FlickerAdaptiveQuantization, you must set Adaptive quantization (H264AdaptiveQuantization) to a value other than AUTO.
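The relationship described above (flicker adaptive quantization is only adjustable when adaptive quantization is not AUTO) is easiest to see as a job-specification fragment. This is an illustrative aeson sketch only; the camelCase key names are assumed spellings for the JSON job specification, not taken from this page.

{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (Value, object, (.=))

-- Flicker AQ can be switched on manually here only because
-- adaptiveQuantization is set to something other than AUTO (HIGH in this sketch).
flickerAqFragment :: Value
flickerAqFragment = object
  [ "adaptiveQuantization"        .= ("HIGH" :: String)
  , "flickerAdaptiveQuantization" .= ("ENABLED" :: String)
  ]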

Instances

Instances details
Eq H264FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FlickerAdaptiveQuantization

Ord H264FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FlickerAdaptiveQuantization

Read H264FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FlickerAdaptiveQuantization

Show H264FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FlickerAdaptiveQuantization

Generic H264FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FlickerAdaptiveQuantization

Associated Types

type Rep H264FlickerAdaptiveQuantization :: Type -> Type #

NFData H264FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FlickerAdaptiveQuantization

Hashable H264FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FlickerAdaptiveQuantization

ToJSON H264FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FlickerAdaptiveQuantization

ToJSONKey H264FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FlickerAdaptiveQuantization

FromJSON H264FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FlickerAdaptiveQuantization

FromJSONKey H264FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FlickerAdaptiveQuantization

ToLog H264FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FlickerAdaptiveQuantization

ToHeader H264FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FlickerAdaptiveQuantization

ToQuery H264FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FlickerAdaptiveQuantization

FromXML H264FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FlickerAdaptiveQuantization

ToXML H264FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FlickerAdaptiveQuantization

ToByteString H264FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FlickerAdaptiveQuantization

FromText H264FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FlickerAdaptiveQuantization

ToText H264FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FlickerAdaptiveQuantization

type Rep H264FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FlickerAdaptiveQuantization

type Rep H264FlickerAdaptiveQuantization = D1 ('MetaData "H264FlickerAdaptiveQuantization" "Amazonka.MediaConvert.Types.H264FlickerAdaptiveQuantization" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264FlickerAdaptiveQuantization'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264FlickerAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264FramerateControl

newtype H264FramerateControl Source #

If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The frame rates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.
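For the JSON route described above, the three settings travel together. A sketch with aeson; the camelCase key spellings are assumed, and 30000/1001 is simply the fractional form of 29.97 fps.

{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (Value, object, (.=))

-- SPECIFIED tells the service to use the numerator/denominator pair below
-- instead of following the input frame rate (INITIALIZE_FROM_SOURCE).
specifiedFramerate :: Value
specifiedFramerate = object
  [ "framerateControl"     .= ("SPECIFIED" :: String)
  , "framerateNumerator"   .= (30000 :: Int)
  , "framerateDenominator" .= (1001 :: Int)
  ]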

Instances

Instances details
Eq H264FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateControl

Ord H264FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateControl

Read H264FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateControl

Show H264FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateControl

Generic H264FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateControl

Associated Types

type Rep H264FramerateControl :: Type -> Type #

NFData H264FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateControl

Methods

rnf :: H264FramerateControl -> () #

Hashable H264FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateControl

ToJSON H264FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateControl

ToJSONKey H264FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateControl

FromJSON H264FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateControl

FromJSONKey H264FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateControl

ToLog H264FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateControl

ToHeader H264FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateControl

ToQuery H264FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateControl

FromXML H264FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateControl

ToXML H264FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateControl

ToByteString H264FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateControl

FromText H264FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateControl

ToText H264FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateControl

type Rep H264FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateControl

type Rep H264FramerateControl = D1 ('MetaData "H264FramerateControl" "Amazonka.MediaConvert.Types.H264FramerateControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264FramerateControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264FramerateControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264FramerateConversionAlgorithm

newtype H264FramerateConversionAlgorithm Source #

Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

Instances

Instances details
Eq H264FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateConversionAlgorithm

Ord H264FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateConversionAlgorithm

Read H264FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateConversionAlgorithm

Show H264FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateConversionAlgorithm

Generic H264FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateConversionAlgorithm

Associated Types

type Rep H264FramerateConversionAlgorithm :: Type -> Type #

NFData H264FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateConversionAlgorithm

Hashable H264FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateConversionAlgorithm

ToJSON H264FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateConversionAlgorithm

ToJSONKey H264FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateConversionAlgorithm

FromJSON H264FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateConversionAlgorithm

FromJSONKey H264FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateConversionAlgorithm

ToLog H264FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateConversionAlgorithm

ToHeader H264FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateConversionAlgorithm

ToQuery H264FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateConversionAlgorithm

FromXML H264FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateConversionAlgorithm

ToXML H264FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateConversionAlgorithm

ToByteString H264FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateConversionAlgorithm

FromText H264FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateConversionAlgorithm

ToText H264FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateConversionAlgorithm

type Rep H264FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264FramerateConversionAlgorithm

type Rep H264FramerateConversionAlgorithm = D1 ('MetaData "H264FramerateConversionAlgorithm" "Amazonka.MediaConvert.Types.H264FramerateConversionAlgorithm" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264FramerateConversionAlgorithm'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264FramerateConversionAlgorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264GopBReference

newtype H264GopBReference Source #

If enabled, use reference B frames for GOP structures that have B frames > 1.

Instances

Instances details
Eq H264GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopBReference

Ord H264GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopBReference

Read H264GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopBReference

Show H264GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopBReference

Generic H264GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopBReference

Associated Types

type Rep H264GopBReference :: Type -> Type #

NFData H264GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopBReference

Methods

rnf :: H264GopBReference -> () #

Hashable H264GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopBReference

ToJSON H264GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopBReference

ToJSONKey H264GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopBReference

FromJSON H264GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopBReference

FromJSONKey H264GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopBReference

ToLog H264GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopBReference

ToHeader H264GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopBReference

ToQuery H264GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopBReference

FromXML H264GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopBReference

ToXML H264GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopBReference

ToByteString H264GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopBReference

FromText H264GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopBReference

ToText H264GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopBReference

type Rep H264GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopBReference

type Rep H264GopBReference = D1 ('MetaData "H264GopBReference" "Amazonka.MediaConvert.Types.H264GopBReference" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264GopBReference'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264GopBReference") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264GopSizeUnits

newtype H264GopSizeUnits Source #

Indicates if the GOP Size in H264 is specified in frames or seconds. If seconds, the system will convert the GOP Size into a frame count at run time.
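The frames-versus-seconds choice implies a simple run-time conversion. Below is a sketch of that arithmetic only, not the service's code; the helper name and the rounding behaviour are assumptions.

-- Hypothetical helper illustrating the SECONDS -> frames conversion the
-- description refers to; rounding behaviour is an assumption.
gopSizeInFrames
  :: Double  -- ^ GOP size in seconds
  -> Double  -- ^ output frame rate, in frames per second
  -> Int
gopSizeInFrames seconds fps = round (seconds * fps)

-- For example, a 2-second GOP at 29.97 fps (30000/1001):
--   gopSizeInFrames 2 (30000 / 1001) == 60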

Instances

Instances details
Eq H264GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopSizeUnits

Ord H264GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopSizeUnits

Read H264GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopSizeUnits

Show H264GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopSizeUnits

Generic H264GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopSizeUnits

Associated Types

type Rep H264GopSizeUnits :: Type -> Type #

NFData H264GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopSizeUnits

Methods

rnf :: H264GopSizeUnits -> () #

Hashable H264GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopSizeUnits

ToJSON H264GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopSizeUnits

ToJSONKey H264GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopSizeUnits

FromJSON H264GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopSizeUnits

FromJSONKey H264GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopSizeUnits

ToLog H264GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopSizeUnits

ToHeader H264GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopSizeUnits

ToQuery H264GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopSizeUnits

FromXML H264GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopSizeUnits

ToXML H264GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopSizeUnits

ToByteString H264GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopSizeUnits

FromText H264GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopSizeUnits

ToText H264GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopSizeUnits

type Rep H264GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264GopSizeUnits

type Rep H264GopSizeUnits = D1 ('MetaData "H264GopSizeUnits" "Amazonka.MediaConvert.Types.H264GopSizeUnits" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264GopSizeUnits'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264GopSizeUnits") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264InterlaceMode

newtype H264InterlaceMode Source #

Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE), to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field or bottom field first, depending on which of the Follow options you choose.

Instances

Instances details
Eq H264InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264InterlaceMode

Ord H264InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264InterlaceMode

Read H264InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264InterlaceMode

Show H264InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264InterlaceMode

Generic H264InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264InterlaceMode

Associated Types

type Rep H264InterlaceMode :: Type -> Type #

NFData H264InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264InterlaceMode

Methods

rnf :: H264InterlaceMode -> () #

Hashable H264InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264InterlaceMode

ToJSON H264InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264InterlaceMode

ToJSONKey H264InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264InterlaceMode

FromJSON H264InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264InterlaceMode

FromJSONKey H264InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264InterlaceMode

ToLog H264InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264InterlaceMode

ToHeader H264InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264InterlaceMode

ToQuery H264InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264InterlaceMode

FromXML H264InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264InterlaceMode

ToXML H264InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264InterlaceMode

ToByteString H264InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264InterlaceMode

FromText H264InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264InterlaceMode

ToText H264InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264InterlaceMode

type Rep H264InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264InterlaceMode

type Rep H264InterlaceMode = D1 ('MetaData "H264InterlaceMode" "Amazonka.MediaConvert.Types.H264InterlaceMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264InterlaceMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264InterlaceMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264ParControl

newtype H264ParControl Source #

Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.

Constructors

H264ParControl' 

Instances

Instances details
Eq H264ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ParControl

Ord H264ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ParControl

Read H264ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ParControl

Show H264ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ParControl

Generic H264ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ParControl

Associated Types

type Rep H264ParControl :: Type -> Type #

NFData H264ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ParControl

Methods

rnf :: H264ParControl -> () #

Hashable H264ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ParControl

ToJSON H264ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ParControl

ToJSONKey H264ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ParControl

FromJSON H264ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ParControl

FromJSONKey H264ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ParControl

ToLog H264ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ParControl

ToHeader H264ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ParControl

ToQuery H264ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ParControl

FromXML H264ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ParControl

ToXML H264ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ParControl

Methods

toXML :: H264ParControl -> XML #

ToByteString H264ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ParControl

FromText H264ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ParControl

ToText H264ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ParControl

type Rep H264ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ParControl

type Rep H264ParControl = D1 ('MetaData "H264ParControl" "Amazonka.MediaConvert.Types.H264ParControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264ParControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264ParControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264QualityTuningLevel

newtype H264QualityTuningLevel Source #

Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.

Instances

Instances details
Eq H264QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QualityTuningLevel

Ord H264QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QualityTuningLevel

Read H264QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QualityTuningLevel

Show H264QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QualityTuningLevel

Generic H264QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QualityTuningLevel

Associated Types

type Rep H264QualityTuningLevel :: Type -> Type #

NFData H264QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QualityTuningLevel

Methods

rnf :: H264QualityTuningLevel -> () #

Hashable H264QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QualityTuningLevel

ToJSON H264QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QualityTuningLevel

ToJSONKey H264QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QualityTuningLevel

FromJSON H264QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QualityTuningLevel

FromJSONKey H264QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QualityTuningLevel

ToLog H264QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QualityTuningLevel

ToHeader H264QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QualityTuningLevel

ToQuery H264QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QualityTuningLevel

FromXML H264QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QualityTuningLevel

ToXML H264QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QualityTuningLevel

ToByteString H264QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QualityTuningLevel

FromText H264QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QualityTuningLevel

ToText H264QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QualityTuningLevel

type Rep H264QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QualityTuningLevel

type Rep H264QualityTuningLevel = D1 ('MetaData "H264QualityTuningLevel" "Amazonka.MediaConvert.Types.H264QualityTuningLevel" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264QualityTuningLevel'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264QualityTuningLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264RateControlMode

newtype H264RateControlMode Source #

Use this setting to specify whether this output has a variable bitrate (VBR), constant bitrate (CBR) or quality-defined variable bitrate (QVBR).

Instances

Instances details
Eq H264RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RateControlMode

Ord H264RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RateControlMode

Read H264RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RateControlMode

Show H264RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RateControlMode

Generic H264RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RateControlMode

Associated Types

type Rep H264RateControlMode :: Type -> Type #

NFData H264RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RateControlMode

Methods

rnf :: H264RateControlMode -> () #

Hashable H264RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RateControlMode

ToJSON H264RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RateControlMode

ToJSONKey H264RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RateControlMode

FromJSON H264RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RateControlMode

FromJSONKey H264RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RateControlMode

ToLog H264RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RateControlMode

ToHeader H264RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RateControlMode

ToQuery H264RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RateControlMode

FromXML H264RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RateControlMode

ToXML H264RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RateControlMode

ToByteString H264RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RateControlMode

FromText H264RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RateControlMode

ToText H264RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RateControlMode

type Rep H264RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RateControlMode

type Rep H264RateControlMode = D1 ('MetaData "H264RateControlMode" "Amazonka.MediaConvert.Types.H264RateControlMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264RateControlMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264RateControlMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264RepeatPps

newtype H264RepeatPps Source #

Places a PPS header on each encoded picture, even if repeated.

Constructors

H264RepeatPps' 

Instances

Instances details
Eq H264RepeatPps Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RepeatPps

Ord H264RepeatPps Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RepeatPps

Read H264RepeatPps Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RepeatPps

Show H264RepeatPps Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RepeatPps

Generic H264RepeatPps Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RepeatPps

Associated Types

type Rep H264RepeatPps :: Type -> Type #

NFData H264RepeatPps Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RepeatPps

Methods

rnf :: H264RepeatPps -> () #

Hashable H264RepeatPps Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RepeatPps

ToJSON H264RepeatPps Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RepeatPps

ToJSONKey H264RepeatPps Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RepeatPps

FromJSON H264RepeatPps Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RepeatPps

FromJSONKey H264RepeatPps Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RepeatPps

ToLog H264RepeatPps Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RepeatPps

ToHeader H264RepeatPps Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RepeatPps

ToQuery H264RepeatPps Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RepeatPps

FromXML H264RepeatPps Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RepeatPps

ToXML H264RepeatPps Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RepeatPps

Methods

toXML :: H264RepeatPps -> XML #

ToByteString H264RepeatPps Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RepeatPps

FromText H264RepeatPps Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RepeatPps

ToText H264RepeatPps Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RepeatPps

Methods

toText :: H264RepeatPps -> Text #

type Rep H264RepeatPps Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264RepeatPps

type Rep H264RepeatPps = D1 ('MetaData "H264RepeatPps" "Amazonka.MediaConvert.Types.H264RepeatPps" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264RepeatPps'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264RepeatPps") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264ScanTypeConversionMode

newtype H264ScanTypeConversionMode Source #

Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).

Instances

Instances details
Eq H264ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ScanTypeConversionMode

Ord H264ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ScanTypeConversionMode

Read H264ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ScanTypeConversionMode

Show H264ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ScanTypeConversionMode

Generic H264ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ScanTypeConversionMode

Associated Types

type Rep H264ScanTypeConversionMode :: Type -> Type #

NFData H264ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ScanTypeConversionMode

Hashable H264ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ScanTypeConversionMode

ToJSON H264ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ScanTypeConversionMode

ToJSONKey H264ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ScanTypeConversionMode

FromJSON H264ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ScanTypeConversionMode

FromJSONKey H264ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ScanTypeConversionMode

ToLog H264ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ScanTypeConversionMode

ToHeader H264ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ScanTypeConversionMode

ToQuery H264ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ScanTypeConversionMode

FromXML H264ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ScanTypeConversionMode

ToXML H264ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ScanTypeConversionMode

ToByteString H264ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ScanTypeConversionMode

FromText H264ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ScanTypeConversionMode

ToText H264ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ScanTypeConversionMode

type Rep H264ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264ScanTypeConversionMode

type Rep H264ScanTypeConversionMode = D1 ('MetaData "H264ScanTypeConversionMode" "Amazonka.MediaConvert.Types.H264ScanTypeConversionMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264ScanTypeConversionMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264ScanTypeConversionMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264SceneChangeDetect

newtype H264SceneChangeDetect Source #

Enable this setting to insert I-frames at scene changes that the service automatically detects. This improves video quality and is enabled by default. If this output uses QVBR, choose Transition detection (TRANSITION_DETECTION) for further video quality improvement. For more information about QVBR, see https://docs.aws.amazon.com/console/mediaconvert/cbr-vbr-qvbr.

Instances

Instances details
Eq H264SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SceneChangeDetect

Ord H264SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SceneChangeDetect

Read H264SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SceneChangeDetect

Show H264SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SceneChangeDetect

Generic H264SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SceneChangeDetect

Associated Types

type Rep H264SceneChangeDetect :: Type -> Type #

NFData H264SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SceneChangeDetect

Methods

rnf :: H264SceneChangeDetect -> () #

Hashable H264SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SceneChangeDetect

ToJSON H264SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SceneChangeDetect

ToJSONKey H264SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SceneChangeDetect

FromJSON H264SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SceneChangeDetect

FromJSONKey H264SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SceneChangeDetect

ToLog H264SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SceneChangeDetect

ToHeader H264SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SceneChangeDetect

ToQuery H264SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SceneChangeDetect

FromXML H264SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SceneChangeDetect

ToXML H264SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SceneChangeDetect

ToByteString H264SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SceneChangeDetect

FromText H264SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SceneChangeDetect

ToText H264SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SceneChangeDetect

type Rep H264SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SceneChangeDetect

type Rep H264SceneChangeDetect = D1 ('MetaData "H264SceneChangeDetect" "Amazonka.MediaConvert.Types.H264SceneChangeDetect" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264SceneChangeDetect'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264SceneChangeDetect") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264SlowPal

newtype H264SlowPal Source #

Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.

Constructors

H264SlowPal' 

Bundled Patterns

pattern H264SlowPal_DISABLED :: H264SlowPal 
pattern H264SlowPal_ENABLED :: H264SlowPal 
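A minimal sketch using the bundled patterns above. The required companion frame-rate settings from the description are noted as comments; the surrounding H264 settings record is not shown here.

import Amazonka.MediaConvert.Types (H264SlowPal (..))

-- Relabel 23.976/24 fps content to a 25 fps PAL output.
slowPalForPal :: H264SlowPal
slowPalForPal = H264SlowPal_ENABLED

-- Required alongside it in the JSON job specification (per the description):
--   framerateControl     -> SPECIFIED
--   framerateNumerator   -> 25
--   framerateDenominator -> 1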

Instances

Instances details
Eq H264SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SlowPal

Ord H264SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SlowPal

Read H264SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SlowPal

Show H264SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SlowPal

Generic H264SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SlowPal

Associated Types

type Rep H264SlowPal :: Type -> Type #

NFData H264SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SlowPal

Methods

rnf :: H264SlowPal -> () #

Hashable H264SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SlowPal

ToJSON H264SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SlowPal

ToJSONKey H264SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SlowPal

FromJSON H264SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SlowPal

FromJSONKey H264SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SlowPal

ToLog H264SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SlowPal

ToHeader H264SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SlowPal

ToQuery H264SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SlowPal

FromXML H264SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SlowPal

ToXML H264SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SlowPal

Methods

toXML :: H264SlowPal -> XML #

ToByteString H264SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SlowPal

FromText H264SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SlowPal

ToText H264SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SlowPal

Methods

toText :: H264SlowPal -> Text #

type Rep H264SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SlowPal

type Rep H264SlowPal = D1 ('MetaData "H264SlowPal" "Amazonka.MediaConvert.Types.H264SlowPal" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264SlowPal'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264SlowPal") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264SpatialAdaptiveQuantization

newtype H264SpatialAdaptiveQuantization Source #

Only use this setting when you change the default value, Auto (AUTO), for the setting H264AdaptiveQuantization. When you keep all defaults, excluding H264AdaptiveQuantization and all other adaptive quantization from your JSON job specification, MediaConvert automatically applies the best types of quantization for your video content. When you set H264AdaptiveQuantization to a value other than AUTO, the default value for H264SpatialAdaptiveQuantization is Enabled (ENABLED). Keep this default value to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to set H264SpatialAdaptiveQuantization to Disabled (DISABLED). Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (H264AdaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher. To manually enable or disable H264SpatialAdaptiveQuantization, you must set Adaptive quantization (H264AdaptiveQuantization) to a value other than AUTO.

Instances

Instances details
Eq H264SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SpatialAdaptiveQuantization

Ord H264SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SpatialAdaptiveQuantization

Read H264SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SpatialAdaptiveQuantization

Show H264SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SpatialAdaptiveQuantization

Generic H264SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SpatialAdaptiveQuantization

Associated Types

type Rep H264SpatialAdaptiveQuantization :: Type -> Type #

NFData H264SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SpatialAdaptiveQuantization

Hashable H264SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SpatialAdaptiveQuantization

ToJSON H264SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SpatialAdaptiveQuantization

ToJSONKey H264SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SpatialAdaptiveQuantization

FromJSON H264SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SpatialAdaptiveQuantization

FromJSONKey H264SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SpatialAdaptiveQuantization

ToLog H264SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SpatialAdaptiveQuantization

ToHeader H264SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SpatialAdaptiveQuantization

ToQuery H264SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SpatialAdaptiveQuantization

FromXML H264SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SpatialAdaptiveQuantization

ToXML H264SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SpatialAdaptiveQuantization

ToByteString H264SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SpatialAdaptiveQuantization

FromText H264SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SpatialAdaptiveQuantization

ToText H264SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SpatialAdaptiveQuantization

type Rep H264SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264SpatialAdaptiveQuantization

type Rep H264SpatialAdaptiveQuantization = D1 ('MetaData "H264SpatialAdaptiveQuantization" "Amazonka.MediaConvert.Types.H264SpatialAdaptiveQuantization" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264SpatialAdaptiveQuantization'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264SpatialAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264Syntax

newtype H264Syntax Source #

Produces a bitstream compliant with SMPTE RP-2027.

Constructors

H264Syntax' 

Fields

Bundled Patterns

pattern H264Syntax_DEFAULT :: H264Syntax 
pattern H264Syntax_RP2027 :: H264Syntax 
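A round-trip sketch with the bundled patterns above, using the ToText and FromText instances listed below; the Amazonka.Data.Text import path is an assumption.

import Amazonka.Data.Text (fromText, toText)     -- assumed module path
import Amazonka.MediaConvert.Types (H264Syntax (..))
import Data.Text (Text)

-- Render the RP-2027 variant to its wire string (expected to be "RP2027")...
rp2027Wire :: Text
rp2027Wire = toText H264Syntax_RP2027

-- ...and parse it back.
reparsed :: Either String H264Syntax
reparsed = fromText rp2027Wire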

Instances

Instances details
Eq H264Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Syntax

Ord H264Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Syntax

Read H264Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Syntax

Show H264Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Syntax

Generic H264Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Syntax

Associated Types

type Rep H264Syntax :: Type -> Type #

NFData H264Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Syntax

Methods

rnf :: H264Syntax -> () #

Hashable H264Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Syntax

ToJSON H264Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Syntax

ToJSONKey H264Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Syntax

FromJSON H264Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Syntax

FromJSONKey H264Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Syntax

ToLog H264Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Syntax

ToHeader H264Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Syntax

ToQuery H264Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Syntax

FromXML H264Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Syntax

ToXML H264Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Syntax

Methods

toXML :: H264Syntax -> XML #

ToByteString H264Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Syntax

FromText H264Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Syntax

ToText H264Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Syntax

Methods

toText :: H264Syntax -> Text #

type Rep H264Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Syntax

type Rep H264Syntax = D1 ('MetaData "H264Syntax" "Amazonka.MediaConvert.Types.H264Syntax" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264Syntax'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264Syntax") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264Telecine

newtype H264Telecine Source #

When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard or soft telecine to create a smoother picture. Hard telecine (HARD) produces a 29.97i output. Soft telecine (SOFT) produces a 23.976 output that signals to the video player device to do the conversion during playback. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 without doing anything with the field polarity to create a smoother picture.

Constructors

H264Telecine' 

Bundled Patterns

pattern H264Telecine_HARD :: H264Telecine 
pattern H264Telecine_NONE :: H264Telecine 
pattern H264Telecine_SOFT :: H264Telecine 
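A small sketch over the bundled patterns above; chooseTelecine is a hypothetical helper that encodes the guidance in the description (hard or soft telecine only for the 23.976-to-29.97 interlaced case, otherwise the NONE default).

import Amazonka.MediaConvert.Types (H264Telecine (..))

-- Hypothetical helper: pick a telecine mode following the description above.
chooseTelecine
  :: Bool  -- ^ converting 23.976 fps to 29.97 fps interlaced output?
  -> Bool  -- ^ prefer player-side (soft) pulldown over a hard 29.97i output?
  -> H264Telecine
chooseTelecine False _     = H264Telecine_NONE
chooseTelecine True  True  = H264Telecine_SOFT
chooseTelecine True  False = H264Telecine_HARD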

Instances

Instances details
Eq H264Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Telecine

Ord H264Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Telecine

Read H264Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Telecine

Show H264Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Telecine

Generic H264Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Telecine

Associated Types

type Rep H264Telecine :: Type -> Type #

NFData H264Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Telecine

Methods

rnf :: H264Telecine -> () #

Hashable H264Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Telecine

ToJSON H264Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Telecine

ToJSONKey H264Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Telecine

FromJSON H264Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Telecine

FromJSONKey H264Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Telecine

ToLog H264Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Telecine

ToHeader H264Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Telecine

ToQuery H264Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Telecine

FromXML H264Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Telecine

ToXML H264Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Telecine

Methods

toXML :: H264Telecine -> XML #

ToByteString H264Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Telecine

FromText H264Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Telecine

ToText H264Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Telecine

Methods

toText :: H264Telecine -> Text #

type Rep H264Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Telecine

type Rep H264Telecine = D1 ('MetaData "H264Telecine" "Amazonka.MediaConvert.Types.H264Telecine" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264Telecine'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264Telecine") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264TemporalAdaptiveQuantization

newtype H264TemporalAdaptiveQuantization Source #

Only use this setting when you change the default value, AUTO, for the setting H264AdaptiveQuantization. When you keep all defaults, excluding H264AdaptiveQuantization and all other adaptive quantization from your JSON job specification, MediaConvert automatically applies the best types of quantization for your video content. When you set H264AdaptiveQuantization to a value other than AUTO, the default value for H264TemporalAdaptiveQuantization is Enabled (ENABLED). Keep this default value to adjust quantization within each frame based on temporal variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas of the frame that aren't moving and uses more bits on complex objects with sharp edges that move a lot. For example, this feature improves the readability of text tickers on newscasts and scoreboards on sports matches. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen that doesn't have moving objects with sharp edges, such as sports athletes' faces, you might choose to set H264TemporalAdaptiveQuantization to Disabled (DISABLED). Related setting: When you enable temporal quantization, adjust the strength of the filter with the setting Adaptive quantization (adaptiveQuantization). To manually enable or disable H264TemporalAdaptiveQuantization, you must set Adaptive quantization (H264AdaptiveQuantization) to a value other than AUTO.

Instances

Instances details
Eq H264TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264TemporalAdaptiveQuantization

Ord H264TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264TemporalAdaptiveQuantization

Read H264TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264TemporalAdaptiveQuantization

Show H264TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264TemporalAdaptiveQuantization

Generic H264TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264TemporalAdaptiveQuantization

Associated Types

type Rep H264TemporalAdaptiveQuantization :: Type -> Type #

NFData H264TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264TemporalAdaptiveQuantization

Hashable H264TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264TemporalAdaptiveQuantization

ToJSON H264TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264TemporalAdaptiveQuantization

ToJSONKey H264TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264TemporalAdaptiveQuantization

FromJSON H264TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264TemporalAdaptiveQuantization

FromJSONKey H264TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264TemporalAdaptiveQuantization

ToLog H264TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264TemporalAdaptiveQuantization

ToHeader H264TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264TemporalAdaptiveQuantization

ToQuery H264TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264TemporalAdaptiveQuantization

FromXML H264TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264TemporalAdaptiveQuantization

ToXML H264TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264TemporalAdaptiveQuantization

ToByteString H264TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264TemporalAdaptiveQuantization

FromText H264TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264TemporalAdaptiveQuantization

ToText H264TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264TemporalAdaptiveQuantization

type Rep H264TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264TemporalAdaptiveQuantization

type Rep H264TemporalAdaptiveQuantization = D1 ('MetaData "H264TemporalAdaptiveQuantization" "Amazonka.MediaConvert.Types.H264TemporalAdaptiveQuantization" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264TemporalAdaptiveQuantization'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264TemporalAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H264UnregisteredSeiTimecode

newtype H264UnregisteredSeiTimecode Source #

Inserts timecode for each frame as 4 bytes of an unregistered SEI message.

Instances

Instances details
Eq H264UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264UnregisteredSeiTimecode

Ord H264UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264UnregisteredSeiTimecode

Read H264UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264UnregisteredSeiTimecode

Show H264UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264UnregisteredSeiTimecode

Generic H264UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264UnregisteredSeiTimecode

Associated Types

type Rep H264UnregisteredSeiTimecode :: Type -> Type #

NFData H264UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264UnregisteredSeiTimecode

Hashable H264UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264UnregisteredSeiTimecode

ToJSON H264UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264UnregisteredSeiTimecode

ToJSONKey H264UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264UnregisteredSeiTimecode

FromJSON H264UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264UnregisteredSeiTimecode

FromJSONKey H264UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264UnregisteredSeiTimecode

ToLog H264UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264UnregisteredSeiTimecode

ToHeader H264UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264UnregisteredSeiTimecode

ToQuery H264UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264UnregisteredSeiTimecode

FromXML H264UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264UnregisteredSeiTimecode

ToXML H264UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264UnregisteredSeiTimecode

ToByteString H264UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264UnregisteredSeiTimecode

FromText H264UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264UnregisteredSeiTimecode

ToText H264UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264UnregisteredSeiTimecode

type Rep H264UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264UnregisteredSeiTimecode

type Rep H264UnregisteredSeiTimecode = D1 ('MetaData "H264UnregisteredSeiTimecode" "Amazonka.MediaConvert.Types.H264UnregisteredSeiTimecode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H264UnregisteredSeiTimecode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH264UnregisteredSeiTimecode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265AdaptiveQuantization

newtype H265AdaptiveQuantization Source #

Specify the strength of any adaptive quantization filters that you enable. The value that you choose here applies to the following settings: Flicker adaptive quantization (flickerAdaptiveQuantization), Spatial adaptive quantization (spatialAdaptiveQuantization), and Temporal adaptive quantization (temporalAdaptiveQuantization).
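
A hypothetical helper showing one way to pick a strength with the raw constructor; the strings "OFF", "MEDIUM", and "MAX" are assumed wire values from the MediaConvert API and are not listed on this page, so check the module's export list for the definitive set:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (H265AdaptiveQuantization (..))

-- Map a coarse 0-2 quality knob onto an adaptive-quantization strength.
-- The string values are assumptions, not taken from this page.
aqStrength :: Int -> H265AdaptiveQuantization
aqStrength 0 = H265AdaptiveQuantization' "OFF"
aqStrength 1 = H265AdaptiveQuantization' "MEDIUM"
aqStrength _ = H265AdaptiveQuantization' "MAX"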

Instances

Instances details
Eq H265AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AdaptiveQuantization

Ord H265AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AdaptiveQuantization

Read H265AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AdaptiveQuantization

Show H265AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AdaptiveQuantization

Generic H265AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AdaptiveQuantization

Associated Types

type Rep H265AdaptiveQuantization :: Type -> Type #

NFData H265AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AdaptiveQuantization

Hashable H265AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AdaptiveQuantization

ToJSON H265AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AdaptiveQuantization

ToJSONKey H265AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AdaptiveQuantization

FromJSON H265AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AdaptiveQuantization

FromJSONKey H265AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AdaptiveQuantization

ToLog H265AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AdaptiveQuantization

ToHeader H265AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AdaptiveQuantization

ToQuery H265AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AdaptiveQuantization

FromXML H265AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AdaptiveQuantization

ToXML H265AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AdaptiveQuantization

ToByteString H265AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AdaptiveQuantization

FromText H265AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AdaptiveQuantization

ToText H265AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AdaptiveQuantization

type Rep H265AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AdaptiveQuantization

type Rep H265AdaptiveQuantization = D1 ('MetaData "H265AdaptiveQuantization" "Amazonka.MediaConvert.Types.H265AdaptiveQuantization" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265AdaptiveQuantization'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265AdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265AlternateTransferFunctionSei

newtype H265AlternateTransferFunctionSei Source #

Enables Alternate Transfer Function SEI message for outputs using Hybrid Log Gamma (HLG) Electro-Optical Transfer Function (EOTF).

Instances

Instances details
Eq H265AlternateTransferFunctionSei Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AlternateTransferFunctionSei

Ord H265AlternateTransferFunctionSei Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AlternateTransferFunctionSei

Read H265AlternateTransferFunctionSei Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AlternateTransferFunctionSei

Show H265AlternateTransferFunctionSei Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AlternateTransferFunctionSei

Generic H265AlternateTransferFunctionSei Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AlternateTransferFunctionSei

Associated Types

type Rep H265AlternateTransferFunctionSei :: Type -> Type #

NFData H265AlternateTransferFunctionSei Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AlternateTransferFunctionSei

Hashable H265AlternateTransferFunctionSei Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AlternateTransferFunctionSei

ToJSON H265AlternateTransferFunctionSei Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AlternateTransferFunctionSei

ToJSONKey H265AlternateTransferFunctionSei Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AlternateTransferFunctionSei

FromJSON H265AlternateTransferFunctionSei Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AlternateTransferFunctionSei

FromJSONKey H265AlternateTransferFunctionSei Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AlternateTransferFunctionSei

ToLog H265AlternateTransferFunctionSei Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AlternateTransferFunctionSei

ToHeader H265AlternateTransferFunctionSei Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AlternateTransferFunctionSei

ToQuery H265AlternateTransferFunctionSei Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AlternateTransferFunctionSei

FromXML H265AlternateTransferFunctionSei Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AlternateTransferFunctionSei

ToXML H265AlternateTransferFunctionSei Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AlternateTransferFunctionSei

ToByteString H265AlternateTransferFunctionSei Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AlternateTransferFunctionSei

FromText H265AlternateTransferFunctionSei Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AlternateTransferFunctionSei

ToText H265AlternateTransferFunctionSei Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AlternateTransferFunctionSei

type Rep H265AlternateTransferFunctionSei Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265AlternateTransferFunctionSei

type Rep H265AlternateTransferFunctionSei = D1 ('MetaData "H265AlternateTransferFunctionSei" "Amazonka.MediaConvert.Types.H265AlternateTransferFunctionSei" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265AlternateTransferFunctionSei'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265AlternateTransferFunctionSei") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265CodecLevel

newtype H265CodecLevel Source #

H.265 Level.

Constructors

H265CodecLevel' 

Instances

Instances details
Eq H265CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecLevel

Ord H265CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecLevel

Read H265CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecLevel

Show H265CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecLevel

Generic H265CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecLevel

Associated Types

type Rep H265CodecLevel :: Type -> Type #

NFData H265CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecLevel

Methods

rnf :: H265CodecLevel -> () #

Hashable H265CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecLevel

ToJSON H265CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecLevel

ToJSONKey H265CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecLevel

FromJSON H265CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecLevel

FromJSONKey H265CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecLevel

ToLog H265CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecLevel

ToHeader H265CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecLevel

ToQuery H265CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecLevel

FromXML H265CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecLevel

ToXML H265CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecLevel

Methods

toXML :: H265CodecLevel -> XML #

ToByteString H265CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecLevel

FromText H265CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecLevel

ToText H265CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecLevel

type Rep H265CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecLevel

type Rep H265CodecLevel = D1 ('MetaData "H265CodecLevel" "Amazonka.MediaConvert.Types.H265CodecLevel" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265CodecLevel'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265CodecLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265CodecProfile

newtype H265CodecProfile Source #

Represents the Profile and Tier, per the HEVC (H.265) specification. Selections are grouped as [Profile] / [Tier], so "Main/High" represents Main Profile with High Tier. 4:2:2 profiles are only available with the HEVC 4:2:2 License.
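
A minimal sketch of serialising a profile/tier choice to JSON with the ToJSON instance listed below; the wire value "MAIN_MAIN" is an assumption based on the [Profile] / [Tier] grouping described above, and the instance is assumed to encode the wrapped text as a bare JSON string:

>>> :set -XOverloadedStrings
>>> import qualified Data.Aeson as Aeson
>>> Aeson.encode (H265CodecProfile' "MAIN_MAIN")
"\"MAIN_MAIN\""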

Instances

Instances details
Eq H265CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecProfile

Ord H265CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecProfile

Read H265CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecProfile

Show H265CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecProfile

Generic H265CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecProfile

Associated Types

type Rep H265CodecProfile :: Type -> Type #

NFData H265CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecProfile

Methods

rnf :: H265CodecProfile -> () #

Hashable H265CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecProfile

ToJSON H265CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecProfile

ToJSONKey H265CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecProfile

FromJSON H265CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecProfile

FromJSONKey H265CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecProfile

ToLog H265CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecProfile

ToHeader H265CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecProfile

ToQuery H265CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecProfile

FromXML H265CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecProfile

ToXML H265CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecProfile

ToByteString H265CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecProfile

FromText H265CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecProfile

ToText H265CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecProfile

type Rep H265CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265CodecProfile

type Rep H265CodecProfile = D1 ('MetaData "H265CodecProfile" "Amazonka.MediaConvert.Types.H265CodecProfile" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265CodecProfile'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265CodecProfile") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265DynamicSubGop

newtype H265DynamicSubGop Source #

Choose Adaptive to improve subjective video quality for high-motion content. This will cause the service to use fewer B-frames (which infer information based on other frames) for high-motion portions of the video and more B-frames for low-motion portions. The maximum number of B-frames is limited by the value you provide for the setting B frames between reference frames (numberBFramesBetweenReferenceFrames).
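
For illustration, constructing the adaptive value described above with the raw constructor (the exact wire string "ADAPTIVE" is assumed from the MediaConvert API; the description only names the option as Adaptive):

>>> :set -XOverloadedStrings
>>> fromH265DynamicSubGop (H265DynamicSubGop' "ADAPTIVE")
"ADAPTIVE"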

Instances

Instances details
Eq H265DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265DynamicSubGop

Ord H265DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265DynamicSubGop

Read H265DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265DynamicSubGop

Show H265DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265DynamicSubGop

Generic H265DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265DynamicSubGop

Associated Types

type Rep H265DynamicSubGop :: Type -> Type #

NFData H265DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265DynamicSubGop

Methods

rnf :: H265DynamicSubGop -> () #

Hashable H265DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265DynamicSubGop

ToJSON H265DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265DynamicSubGop

ToJSONKey H265DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265DynamicSubGop

FromJSON H265DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265DynamicSubGop

FromJSONKey H265DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265DynamicSubGop

ToLog H265DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265DynamicSubGop

ToHeader H265DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265DynamicSubGop

ToQuery H265DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265DynamicSubGop

FromXML H265DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265DynamicSubGop

ToXML H265DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265DynamicSubGop

ToByteString H265DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265DynamicSubGop

FromText H265DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265DynamicSubGop

ToText H265DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265DynamicSubGop

type Rep H265DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265DynamicSubGop

type Rep H265DynamicSubGop = D1 ('MetaData "H265DynamicSubGop" "Amazonka.MediaConvert.Types.H265DynamicSubGop" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265DynamicSubGop'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265DynamicSubGop") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265FlickerAdaptiveQuantization

newtype H265FlickerAdaptiveQuantization Source #

Enable this setting to have the encoder reduce I-frame pop. I-frame pop appears as a visual flicker that can arise when the encoder saves bits by copying some macroblocks many times from frame to frame, and then refreshes them at the I-frame. When you enable this setting, the encoder updates these macroblocks slightly more often to smooth out the flicker. This setting is disabled by default. Related setting: In addition to enabling this setting, you must also set adaptiveQuantization to a value other than Off (OFF).
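
A GHCi-style sketch of enabling the filter; "ENABLED" is an assumed wire value, and remember from the description that adaptiveQuantization must also be set to something other than Off (OFF) for it to have any effect:

>>> :set -XOverloadedStrings
>>> fromH265FlickerAdaptiveQuantization (H265FlickerAdaptiveQuantization' "ENABLED")
"ENABLED"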

Instances

Instances details
Eq H265FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FlickerAdaptiveQuantization

Ord H265FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FlickerAdaptiveQuantization

Read H265FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FlickerAdaptiveQuantization

Show H265FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FlickerAdaptiveQuantization

Generic H265FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FlickerAdaptiveQuantization

Associated Types

type Rep H265FlickerAdaptiveQuantization :: Type -> Type #

NFData H265FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FlickerAdaptiveQuantization

Hashable H265FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FlickerAdaptiveQuantization

ToJSON H265FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FlickerAdaptiveQuantization

ToJSONKey H265FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FlickerAdaptiveQuantization

FromJSON H265FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FlickerAdaptiveQuantization

FromJSONKey H265FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FlickerAdaptiveQuantization

ToLog H265FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FlickerAdaptiveQuantization

ToHeader H265FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FlickerAdaptiveQuantization

ToQuery H265FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FlickerAdaptiveQuantization

FromXML H265FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FlickerAdaptiveQuantization

ToXML H265FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FlickerAdaptiveQuantization

ToByteString H265FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FlickerAdaptiveQuantization

FromText H265FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FlickerAdaptiveQuantization

ToText H265FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FlickerAdaptiveQuantization

type Rep H265FlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FlickerAdaptiveQuantization

type Rep H265FlickerAdaptiveQuantization = D1 ('MetaData "H265FlickerAdaptiveQuantization" "Amazonka.MediaConvert.Types.H265FlickerAdaptiveQuantization" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265FlickerAdaptiveQuantization'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265FlickerAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265FramerateControl

newtype H265FramerateControl Source #

If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.
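
A hypothetical helper that captures the rule above when building a job specification programmatically: follow the source frame rate unless the caller supplies an explicit fraction, in which case the numerator and denominator would also go into FramerateNumerator and FramerateDenominator. This is a sketch; only the two wire values named in the description are used:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (H265FramerateControl (..))

-- Nothing: keep the input frame rate. Just (num, den): use an explicit rate.
framerateControlFor :: Maybe (Int, Int) -> H265FramerateControl
framerateControlFor Nothing  = H265FramerateControl' "INITIALIZE_FROM_SOURCE"
framerateControlFor (Just _) = H265FramerateControl' "SPECIFIED"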

Instances

Instances details
Eq H265FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateControl

Ord H265FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateControl

Read H265FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateControl

Show H265FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateControl

Generic H265FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateControl

Associated Types

type Rep H265FramerateControl :: Type -> Type #

NFData H265FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateControl

Methods

rnf :: H265FramerateControl -> () #

Hashable H265FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateControl

ToJSON H265FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateControl

ToJSONKey H265FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateControl

FromJSON H265FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateControl

FromJSONKey H265FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateControl

ToLog H265FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateControl

ToHeader H265FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateControl

ToQuery H265FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateControl

FromXML H265FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateControl

ToXML H265FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateControl

ToByteString H265FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateControl

FromText H265FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateControl

ToText H265FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateControl

type Rep H265FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateControl

type Rep H265FramerateControl = D1 ('MetaData "H265FramerateControl" "Amazonka.MediaConvert.Types.H265FramerateControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265FramerateControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265FramerateControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265FramerateConversionAlgorithm

newtype H265FramerateConversionAlgorithm Source #

Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.
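
A small sketch that maps the guidance above onto the three documented values (the ConversionComplexity type is hypothetical; the wire strings come straight from the description):

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (H265FramerateConversionAlgorithm (..))

-- How demanding the frame rate conversion is, per the description above.
data ConversionComplexity = Simple | Complex | AlreadyConverted

conversionAlgorithm :: ConversionComplexity -> H265FramerateConversionAlgorithm
conversionAlgorithm Simple           = H265FramerateConversionAlgorithm' "DUPLICATE_DROP"
conversionAlgorithm Complex          = H265FramerateConversionAlgorithm' "INTERPOLATE"
conversionAlgorithm AlreadyConverted = H265FramerateConversionAlgorithm' "FRAMEFORMER"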

Instances

Instances details
Eq H265FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateConversionAlgorithm

Ord H265FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateConversionAlgorithm

Read H265FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateConversionAlgorithm

Show H265FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateConversionAlgorithm

Generic H265FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateConversionAlgorithm

Associated Types

type Rep H265FramerateConversionAlgorithm :: Type -> Type #

NFData H265FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateConversionAlgorithm

Hashable H265FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateConversionAlgorithm

ToJSON H265FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateConversionAlgorithm

ToJSONKey H265FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateConversionAlgorithm

FromJSON H265FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateConversionAlgorithm

FromJSONKey H265FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateConversionAlgorithm

ToLog H265FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateConversionAlgorithm

ToHeader H265FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateConversionAlgorithm

ToQuery H265FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateConversionAlgorithm

FromXML H265FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateConversionAlgorithm

ToXML H265FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateConversionAlgorithm

ToByteString H265FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateConversionAlgorithm

FromText H265FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateConversionAlgorithm

ToText H265FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateConversionAlgorithm

type Rep H265FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265FramerateConversionAlgorithm

type Rep H265FramerateConversionAlgorithm = D1 ('MetaData "H265FramerateConversionAlgorithm" "Amazonka.MediaConvert.Types.H265FramerateConversionAlgorithm" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265FramerateConversionAlgorithm'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265FramerateConversionAlgorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265GopBReference

newtype H265GopBReference Source #

If enabled, use reference B frames for GOP structures that have B frames > 1.

Instances

Instances details
Eq H265GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopBReference

Ord H265GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopBReference

Read H265GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopBReference

Show H265GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopBReference

Generic H265GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopBReference

Associated Types

type Rep H265GopBReference :: Type -> Type #

NFData H265GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopBReference

Methods

rnf :: H265GopBReference -> () #

Hashable H265GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopBReference

ToJSON H265GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopBReference

ToJSONKey H265GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopBReference

FromJSON H265GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopBReference

FromJSONKey H265GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopBReference

ToLog H265GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopBReference

ToHeader H265GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopBReference

ToQuery H265GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopBReference

FromXML H265GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopBReference

ToXML H265GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopBReference

ToByteString H265GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopBReference

FromText H265GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopBReference

ToText H265GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopBReference

type Rep H265GopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopBReference

type Rep H265GopBReference = D1 ('MetaData "H265GopBReference" "Amazonka.MediaConvert.Types.H265GopBReference" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265GopBReference'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265GopBReference") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265GopSizeUnits

newtype H265GopSizeUnits Source #

Indicates whether the GOP size in H.265 is specified in frames or seconds. If seconds, the system will convert the GOP size into a frame count at run time.
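
For illustration, the two unit choices built with the raw constructor ("FRAMES" and "SECONDS" are assumed wire spellings of the frames/seconds options described above):

>>> :set -XOverloadedStrings
>>> map fromH265GopSizeUnits [H265GopSizeUnits' "FRAMES", H265GopSizeUnits' "SECONDS"]
["FRAMES","SECONDS"]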

Instances

Instances details
Eq H265GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopSizeUnits

Ord H265GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopSizeUnits

Read H265GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopSizeUnits

Show H265GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopSizeUnits

Generic H265GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopSizeUnits

Associated Types

type Rep H265GopSizeUnits :: Type -> Type #

NFData H265GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopSizeUnits

Methods

rnf :: H265GopSizeUnits -> () #

Hashable H265GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopSizeUnits

ToJSON H265GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopSizeUnits

ToJSONKey H265GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopSizeUnits

FromJSON H265GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopSizeUnits

FromJSONKey H265GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopSizeUnits

ToLog H265GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopSizeUnits

ToHeader H265GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopSizeUnits

ToQuery H265GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopSizeUnits

FromXML H265GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopSizeUnits

ToXML H265GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopSizeUnits

ToByteString H265GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopSizeUnits

FromText H265GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopSizeUnits

ToText H265GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopSizeUnits

type Rep H265GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265GopSizeUnits

type Rep H265GopSizeUnits = D1 ('MetaData "H265GopSizeUnits" "Amazonka.MediaConvert.Types.H265GopSizeUnits" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265GopSizeUnits'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265GopSizeUnits") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265InterlaceMode

newtype H265InterlaceMode Source #

Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE), to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first, depending on which of the Follow options you choose.
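
The five values named above, written out once as a sketch for job-building code (the wire strings are the parenthesised names from the description):

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (H265InterlaceMode (..))

progressive, topField, bottomField, followTop, followBottom :: H265InterlaceMode
progressive  = H265InterlaceMode' "PROGRESSIVE"
topField     = H265InterlaceMode' "TOP_FIELD"
bottomField  = H265InterlaceMode' "BOTTOM_FIELD"
followTop    = H265InterlaceMode' "FOLLOW_TOP_FIELD"
followBottom = H265InterlaceMode' "FOLLOW_BOTTOM_FIELD"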

Instances

Instances details
Eq H265InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265InterlaceMode

Ord H265InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265InterlaceMode

Read H265InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265InterlaceMode

Show H265InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265InterlaceMode

Generic H265InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265InterlaceMode

Associated Types

type Rep H265InterlaceMode :: Type -> Type #

NFData H265InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265InterlaceMode

Methods

rnf :: H265InterlaceMode -> () #

Hashable H265InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265InterlaceMode

ToJSON H265InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265InterlaceMode

ToJSONKey H265InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265InterlaceMode

FromJSON H265InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265InterlaceMode

FromJSONKey H265InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265InterlaceMode

ToLog H265InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265InterlaceMode

ToHeader H265InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265InterlaceMode

ToQuery H265InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265InterlaceMode

FromXML H265InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265InterlaceMode

ToXML H265InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265InterlaceMode

ToByteString H265InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265InterlaceMode

FromText H265InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265InterlaceMode

ToText H265InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265InterlaceMode

type Rep H265InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265InterlaceMode

type Rep H265InterlaceMode = D1 ('MetaData "H265InterlaceMode" "Amazonka.MediaConvert.Types.H265InterlaceMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265InterlaceMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265InterlaceMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265ParControl

newtype H265ParControl Source #

Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.
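
A GHCi-style sketch of decoding the SPECIFIED value from JSON, assuming the FromJSON instance listed below parses a bare JSON string into the newtype; remember from the description that choosing SPECIFIED also requires values for parNumerator and parDenominator:

>>> :set -XOverloadedStrings
>>> import qualified Data.Aeson as Aeson
>>> fmap fromH265ParControl (Aeson.decode "\"SPECIFIED\"")
Just "SPECIFIED"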

Constructors

H265ParControl' 

Instances

Instances details
Eq H265ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ParControl

Ord H265ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ParControl

Read H265ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ParControl

Show H265ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ParControl

Generic H265ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ParControl

Associated Types

type Rep H265ParControl :: Type -> Type #

NFData H265ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ParControl

Methods

rnf :: H265ParControl -> () #

Hashable H265ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ParControl

ToJSON H265ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ParControl

ToJSONKey H265ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ParControl

FromJSON H265ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ParControl

FromJSONKey H265ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ParControl

ToLog H265ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ParControl

ToHeader H265ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ParControl

ToQuery H265ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ParControl

FromXML H265ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ParControl

ToXML H265ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ParControl

Methods

toXML :: H265ParControl -> XML #

ToByteString H265ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ParControl

FromText H265ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ParControl

ToText H265ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ParControl

type Rep H265ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ParControl

type Rep H265ParControl = D1 ('MetaData "H265ParControl" "Amazonka.MediaConvert.Types.H265ParControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265ParControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265ParControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265QualityTuningLevel

newtype H265QualityTuningLevel Source #

Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.
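
For illustration, selecting a multi-pass, higher-quality encode; "MULTI_PASS_HQ" is an assumed wire value from the MediaConvert API and does not appear on this page:

>>> :set -XOverloadedStrings
>>> fromH265QualityTuningLevel (H265QualityTuningLevel' "MULTI_PASS_HQ")
"MULTI_PASS_HQ"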

Instances

Instances details
Eq H265QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QualityTuningLevel

Ord H265QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QualityTuningLevel

Read H265QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QualityTuningLevel

Show H265QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QualityTuningLevel

Generic H265QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QualityTuningLevel

Associated Types

type Rep H265QualityTuningLevel :: Type -> Type #

NFData H265QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QualityTuningLevel

Methods

rnf :: H265QualityTuningLevel -> () #

Hashable H265QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QualityTuningLevel

ToJSON H265QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QualityTuningLevel

ToJSONKey H265QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QualityTuningLevel

FromJSON H265QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QualityTuningLevel

FromJSONKey H265QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QualityTuningLevel

ToLog H265QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QualityTuningLevel

ToHeader H265QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QualityTuningLevel

ToQuery H265QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QualityTuningLevel

FromXML H265QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QualityTuningLevel

ToXML H265QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QualityTuningLevel

ToByteString H265QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QualityTuningLevel

FromText H265QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QualityTuningLevel

ToText H265QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QualityTuningLevel

type Rep H265QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QualityTuningLevel

type Rep H265QualityTuningLevel = D1 ('MetaData "H265QualityTuningLevel" "Amazonka.MediaConvert.Types.H265QualityTuningLevel" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265QualityTuningLevel'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265QualityTuningLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265RateControlMode

newtype H265RateControlMode Source #

Use this setting to specify whether this output has a variable bitrate (VBR), constant bitrate (CBR) or quality-defined variable bitrate (QVBR).
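
A small sketch translating an application-level choice into the three modes named above (the RateMode sum type is hypothetical; the wire strings are the abbreviations given in the description):

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (H265RateControlMode (..))

data RateMode = Vbr | Cbr | Qvbr

rateControlMode :: RateMode -> H265RateControlMode
rateControlMode Vbr  = H265RateControlMode' "VBR"
rateControlMode Cbr  = H265RateControlMode' "CBR"
rateControlMode Qvbr = H265RateControlMode' "QVBR"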

Instances

Instances details
Eq H265RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265RateControlMode

Ord H265RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265RateControlMode

Read H265RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265RateControlMode

Show H265RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265RateControlMode

Generic H265RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265RateControlMode

Associated Types

type Rep H265RateControlMode :: Type -> Type #

NFData H265RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265RateControlMode

Methods

rnf :: H265RateControlMode -> () #

Hashable H265RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265RateControlMode

ToJSON H265RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265RateControlMode

ToJSONKey H265RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265RateControlMode

FromJSON H265RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265RateControlMode

FromJSONKey H265RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265RateControlMode

ToLog H265RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265RateControlMode

ToHeader H265RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265RateControlMode

ToQuery H265RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265RateControlMode

FromXML H265RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265RateControlMode

ToXML H265RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265RateControlMode

ToByteString H265RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265RateControlMode

FromText H265RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265RateControlMode

ToText H265RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265RateControlMode

type Rep H265RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265RateControlMode

type Rep H265RateControlMode = D1 ('MetaData "H265RateControlMode" "Amazonka.MediaConvert.Types.H265RateControlMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265RateControlMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265RateControlMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265SampleAdaptiveOffsetFilterMode

newtype H265SampleAdaptiveOffsetFilterMode Source #

Specify the Sample Adaptive Offset (SAO) filter strength. Adaptive mode dynamically selects the best strength based on the content.
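
A minimal sketch of serialising the adaptive mode to JSON; "ADAPTIVE" is an assumed wire spelling of the Adaptive mode mentioned above, and the ToJSON instance is assumed to encode the wrapped text as a bare JSON string:

>>> :set -XOverloadedStrings
>>> import qualified Data.Aeson as Aeson
>>> Aeson.encode (H265SampleAdaptiveOffsetFilterMode' "ADAPTIVE")
"\"ADAPTIVE\""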

Instances

Instances details
Eq H265SampleAdaptiveOffsetFilterMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SampleAdaptiveOffsetFilterMode

Ord H265SampleAdaptiveOffsetFilterMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SampleAdaptiveOffsetFilterMode

Read H265SampleAdaptiveOffsetFilterMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SampleAdaptiveOffsetFilterMode

Show H265SampleAdaptiveOffsetFilterMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SampleAdaptiveOffsetFilterMode

Generic H265SampleAdaptiveOffsetFilterMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SampleAdaptiveOffsetFilterMode

Associated Types

type Rep H265SampleAdaptiveOffsetFilterMode :: Type -> Type #

NFData H265SampleAdaptiveOffsetFilterMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SampleAdaptiveOffsetFilterMode

Hashable H265SampleAdaptiveOffsetFilterMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SampleAdaptiveOffsetFilterMode

ToJSON H265SampleAdaptiveOffsetFilterMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SampleAdaptiveOffsetFilterMode

ToJSONKey H265SampleAdaptiveOffsetFilterMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SampleAdaptiveOffsetFilterMode

FromJSON H265SampleAdaptiveOffsetFilterMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SampleAdaptiveOffsetFilterMode

FromJSONKey H265SampleAdaptiveOffsetFilterMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SampleAdaptiveOffsetFilterMode

ToLog H265SampleAdaptiveOffsetFilterMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SampleAdaptiveOffsetFilterMode

ToHeader H265SampleAdaptiveOffsetFilterMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SampleAdaptiveOffsetFilterMode

ToQuery H265SampleAdaptiveOffsetFilterMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SampleAdaptiveOffsetFilterMode

FromXML H265SampleAdaptiveOffsetFilterMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SampleAdaptiveOffsetFilterMode

ToXML H265SampleAdaptiveOffsetFilterMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SampleAdaptiveOffsetFilterMode

ToByteString H265SampleAdaptiveOffsetFilterMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SampleAdaptiveOffsetFilterMode

FromText H265SampleAdaptiveOffsetFilterMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SampleAdaptiveOffsetFilterMode

ToText H265SampleAdaptiveOffsetFilterMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SampleAdaptiveOffsetFilterMode

type Rep H265SampleAdaptiveOffsetFilterMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SampleAdaptiveOffsetFilterMode

type Rep H265SampleAdaptiveOffsetFilterMode = D1 ('MetaData "H265SampleAdaptiveOffsetFilterMode" "Amazonka.MediaConvert.Types.H265SampleAdaptiveOffsetFilterMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265SampleAdaptiveOffsetFilterMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265SampleAdaptiveOffsetFilterMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265ScanTypeConversionMode

newtype H265ScanTypeConversionMode Source #

Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).
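
A small sketch of how this enum serializes, using the ToJSON instance listed below; the aeson dependency and the INTERLACED_OPTIMIZE wire string from the description are the only assumptions:

{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (encode)
import Amazonka.MediaConvert.Types (H265ScanTypeConversionMode (..))

-- The enum encodes as its bare wire string, which is what appears for this
-- setting inside a JSON job specification.
main :: IO ()
main = print (encode (H265ScanTypeConversionMode' "INTERLACED_OPTIMIZE")) -- "\"INTERLACED_OPTIMIZE\""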

Instances

Instances details
Eq H265ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ScanTypeConversionMode

Ord H265ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ScanTypeConversionMode

Read H265ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ScanTypeConversionMode

Show H265ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ScanTypeConversionMode

Generic H265ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ScanTypeConversionMode

Associated Types

type Rep H265ScanTypeConversionMode :: Type -> Type #

NFData H265ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ScanTypeConversionMode

Hashable H265ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ScanTypeConversionMode

ToJSON H265ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ScanTypeConversionMode

ToJSONKey H265ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ScanTypeConversionMode

FromJSON H265ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ScanTypeConversionMode

FromJSONKey H265ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ScanTypeConversionMode

ToLog H265ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ScanTypeConversionMode

ToHeader H265ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ScanTypeConversionMode

ToQuery H265ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ScanTypeConversionMode

FromXML H265ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ScanTypeConversionMode

ToXML H265ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ScanTypeConversionMode

ToByteString H265ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ScanTypeConversionMode

FromText H265ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ScanTypeConversionMode

ToText H265ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ScanTypeConversionMode

type Rep H265ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265ScanTypeConversionMode

type Rep H265ScanTypeConversionMode = D1 ('MetaData "H265ScanTypeConversionMode" "Amazonka.MediaConvert.Types.H265ScanTypeConversionMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265ScanTypeConversionMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265ScanTypeConversionMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265SceneChangeDetect

newtype H265SceneChangeDetect Source #

Enable this setting to insert I-frames at scene changes that the service automatically detects. This improves video quality and is enabled by default. If this output uses QVBR, choose Transition detection (TRANSITION_DETECTION) for further video quality improvement. For more information about QVBR, see https://docs.aws.amazon.com/console/mediaconvert/cbr-vbr-qvbr.

Instances

Instances details
Eq H265SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SceneChangeDetect

Ord H265SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SceneChangeDetect

Read H265SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SceneChangeDetect

Show H265SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SceneChangeDetect

Generic H265SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SceneChangeDetect

Associated Types

type Rep H265SceneChangeDetect :: Type -> Type #

NFData H265SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SceneChangeDetect

Methods

rnf :: H265SceneChangeDetect -> () #

Hashable H265SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SceneChangeDetect

ToJSON H265SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SceneChangeDetect

ToJSONKey H265SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SceneChangeDetect

FromJSON H265SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SceneChangeDetect

FromJSONKey H265SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SceneChangeDetect

ToLog H265SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SceneChangeDetect

ToHeader H265SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SceneChangeDetect

ToQuery H265SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SceneChangeDetect

FromXML H265SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SceneChangeDetect

ToXML H265SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SceneChangeDetect

ToByteString H265SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SceneChangeDetect

FromText H265SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SceneChangeDetect

ToText H265SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SceneChangeDetect

type Rep H265SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SceneChangeDetect

type Rep H265SceneChangeDetect = D1 ('MetaData "H265SceneChangeDetect" "Amazonka.MediaConvert.Types.H265SceneChangeDetect" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265SceneChangeDetect'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265SceneChangeDetect") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265SlowPal

newtype H265SlowPal Source #

Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.
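
The required settings above translate to a small JSON fragment in the H.265 settings of a job specification. A hedged sketch built with aeson: the slowPal key name is an assumption, while the framerate keys and values are the ones quoted in the description.

{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (Value, object, (.=))
import Data.Text (Text)

-- Slow PAL enabled together with the required framerate settings.
slowPalH265Fragment :: Value
slowPalH265Fragment = object
  [ "slowPal"              .= ("ENABLED" :: Text)   -- assumed key name
  , "framerateControl"     .= ("SPECIFIED" :: Text)
  , "framerateNumerator"   .= (25 :: Int)
  , "framerateDenominator" .= (1 :: Int)
  ]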

Constructors

H265SlowPal' 

Bundled Patterns

pattern H265SlowPal_DISABLED :: H265SlowPal 
pattern H265SlowPal_ENABLED :: H265SlowPal 

Instances

Instances details
Eq H265SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SlowPal

Ord H265SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SlowPal

Read H265SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SlowPal

Show H265SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SlowPal

Generic H265SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SlowPal

Associated Types

type Rep H265SlowPal :: Type -> Type #

NFData H265SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SlowPal

Methods

rnf :: H265SlowPal -> () #

Hashable H265SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SlowPal

ToJSON H265SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SlowPal

ToJSONKey H265SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SlowPal

FromJSON H265SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SlowPal

FromJSONKey H265SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SlowPal

ToLog H265SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SlowPal

ToHeader H265SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SlowPal

ToQuery H265SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SlowPal

FromXML H265SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SlowPal

ToXML H265SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SlowPal

Methods

toXML :: H265SlowPal -> XML #

ToByteString H265SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SlowPal

FromText H265SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SlowPal

ToText H265SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SlowPal

Methods

toText :: H265SlowPal -> Text #

type Rep H265SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SlowPal

type Rep H265SlowPal = D1 ('MetaData "H265SlowPal" "Amazonka.MediaConvert.Types.H265SlowPal" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265SlowPal'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265SlowPal") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265SpatialAdaptiveQuantization

newtype H265SpatialAdaptiveQuantization Source #

Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (adaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.
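
The related-setting guidance at the end of this description can be summarized as a tiny lookup. This is purely illustrative: ContentKind and the returned strings are hypothetical, and only the Low/High/Higher advice comes from the description.

-- Pick an adaptiveQuantization level to pair with spatial AQ.
data ContentKind = Homogeneous | VariedTexture

recommendedAdaptiveQuantization :: ContentKind -> String
recommendedAdaptiveQuantization Homogeneous   = "LOW"  -- cartoons, video games
recommendedAdaptiveQuantization VariedTexture = "HIGH" -- or "HIGHER" for very varied textures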

Instances

Instances details
Eq H265SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SpatialAdaptiveQuantization

Ord H265SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SpatialAdaptiveQuantization

Read H265SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SpatialAdaptiveQuantization

Show H265SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SpatialAdaptiveQuantization

Generic H265SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SpatialAdaptiveQuantization

Associated Types

type Rep H265SpatialAdaptiveQuantization :: Type -> Type #

NFData H265SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SpatialAdaptiveQuantization

Hashable H265SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SpatialAdaptiveQuantization

ToJSON H265SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SpatialAdaptiveQuantization

ToJSONKey H265SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SpatialAdaptiveQuantization

FromJSON H265SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SpatialAdaptiveQuantization

FromJSONKey H265SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SpatialAdaptiveQuantization

ToLog H265SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SpatialAdaptiveQuantization

ToHeader H265SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SpatialAdaptiveQuantization

ToQuery H265SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SpatialAdaptiveQuantization

FromXML H265SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SpatialAdaptiveQuantization

ToXML H265SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SpatialAdaptiveQuantization

ToByteString H265SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SpatialAdaptiveQuantization

FromText H265SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SpatialAdaptiveQuantization

ToText H265SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SpatialAdaptiveQuantization

type Rep H265SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265SpatialAdaptiveQuantization

type Rep H265SpatialAdaptiveQuantization = D1 ('MetaData "H265SpatialAdaptiveQuantization" "Amazonka.MediaConvert.Types.H265SpatialAdaptiveQuantization" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265SpatialAdaptiveQuantization'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265SpatialAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265Telecine

newtype H265Telecine Source #

This field applies only if the Streams > Advanced > Framerate (framerate) field is set to 29.970. This field works with the Streams > Advanced > Preprocessors > Deinterlacer field (deinterlace_mode) and the Streams > Advanced > Interlaced Mode field (interlace_mode) to identify the scan type for the output: Progressive, Interlaced, Hard Telecine or Soft Telecine. - Hard: produces 29.97i output from 23.976 input. - Soft: produces 23.976; the player converts this output to 29.97i.
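
A sketch that restates the scan-type behaviour above by matching on the bundled patterns listed below; the only assumption is that the patterns are re-exported from Amazonka.MediaConvert.Types:

import Amazonka.MediaConvert.Types (H265Telecine (..))

-- Summarize each telecine choice from the description.
describeTelecine :: H265Telecine -> String
describeTelecine H265Telecine_HARD = "produces 29.97i output from 23.976 input"
describeTelecine H265Telecine_SOFT = "produces 23.976; the player converts it to 29.97i"
describeTelecine H265Telecine_NONE = "no telecine applied"
describeTelecine _                 = "unrecognised value" -- the type is a newtype over Text, so stay total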

Constructors

H265Telecine' 

Bundled Patterns

pattern H265Telecine_HARD :: H265Telecine 
pattern H265Telecine_NONE :: H265Telecine 
pattern H265Telecine_SOFT :: H265Telecine 

Instances

Instances details
Eq H265Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Telecine

Ord H265Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Telecine

Read H265Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Telecine

Show H265Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Telecine

Generic H265Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Telecine

Associated Types

type Rep H265Telecine :: Type -> Type #

NFData H265Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Telecine

Methods

rnf :: H265Telecine -> () #

Hashable H265Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Telecine

ToJSON H265Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Telecine

ToJSONKey H265Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Telecine

FromJSON H265Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Telecine

FromJSONKey H265Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Telecine

ToLog H265Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Telecine

ToHeader H265Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Telecine

ToQuery H265Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Telecine

FromXML H265Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Telecine

ToXML H265Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Telecine

Methods

toXML :: H265Telecine -> XML #

ToByteString H265Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Telecine

FromText H265Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Telecine

ToText H265Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Telecine

Methods

toText :: H265Telecine -> Text #

type Rep H265Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Telecine

type Rep H265Telecine = D1 ('MetaData "H265Telecine" "Amazonka.MediaConvert.Types.H265Telecine" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265Telecine'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265Telecine") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265TemporalAdaptiveQuantization

newtype H265TemporalAdaptiveQuantization Source #

Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on temporal variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas of the frame that aren't moving and uses more bits on complex objects with sharp edges that move a lot. For example, this feature improves the readability of text tickers on newscasts and scoreboards on sports matches. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen that doesn't have moving objects with sharp edges, such as sports athletes' faces, you might choose to disable this feature. Related setting: When you enable temporal quantization, adjust the strength of the filter with the setting Adaptive quantization (adaptiveQuantization).

Instances

Instances details
Eq H265TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalAdaptiveQuantization

Ord H265TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalAdaptiveQuantization

Read H265TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalAdaptiveQuantization

Show H265TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalAdaptiveQuantization

Generic H265TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalAdaptiveQuantization

Associated Types

type Rep H265TemporalAdaptiveQuantization :: Type -> Type #

NFData H265TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalAdaptiveQuantization

Hashable H265TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalAdaptiveQuantization

ToJSON H265TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalAdaptiveQuantization

ToJSONKey H265TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalAdaptiveQuantization

FromJSON H265TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalAdaptiveQuantization

FromJSONKey H265TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalAdaptiveQuantization

ToLog H265TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalAdaptiveQuantization

ToHeader H265TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalAdaptiveQuantization

ToQuery H265TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalAdaptiveQuantization

FromXML H265TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalAdaptiveQuantization

ToXML H265TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalAdaptiveQuantization

ToByteString H265TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalAdaptiveQuantization

FromText H265TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalAdaptiveQuantization

ToText H265TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalAdaptiveQuantization

type Rep H265TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalAdaptiveQuantization

type Rep H265TemporalAdaptiveQuantization = D1 ('MetaData "H265TemporalAdaptiveQuantization" "Amazonka.MediaConvert.Types.H265TemporalAdaptiveQuantization" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265TemporalAdaptiveQuantization'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265TemporalAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265TemporalIds

newtype H265TemporalIds Source #

Enables temporal layer identifiers in the encoded bitstream. Up to 3 layers are supported depending on GOP structure: I- and P-frames form one layer, reference B-frames can form a second layer and non-reference b-frames can form a third layer. Decoders can optionally decode only the lower temporal layers to generate a lower frame rate output. For example, given a bitstream with temporal IDs and with b-frames = 1 (i.e. IbPbPb display order), a decoder could decode all the frames for full frame rate output or only the I and P frames (lowest temporal layer) for a half frame rate output.

Constructors

H265TemporalIds' 

Instances

Instances details
Eq H265TemporalIds Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalIds

Ord H265TemporalIds Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalIds

Read H265TemporalIds Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalIds

Show H265TemporalIds Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalIds

Generic H265TemporalIds Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalIds

Associated Types

type Rep H265TemporalIds :: Type -> Type #

NFData H265TemporalIds Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalIds

Methods

rnf :: H265TemporalIds -> () #

Hashable H265TemporalIds Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalIds

ToJSON H265TemporalIds Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalIds

ToJSONKey H265TemporalIds Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalIds

FromJSON H265TemporalIds Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalIds

FromJSONKey H265TemporalIds Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalIds

ToLog H265TemporalIds Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalIds

ToHeader H265TemporalIds Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalIds

ToQuery H265TemporalIds Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalIds

FromXML H265TemporalIds Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalIds

ToXML H265TemporalIds Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalIds

Methods

toXML :: H265TemporalIds -> XML #

ToByteString H265TemporalIds Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalIds

FromText H265TemporalIds Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalIds

ToText H265TemporalIds Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalIds

type Rep H265TemporalIds Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265TemporalIds

type Rep H265TemporalIds = D1 ('MetaData "H265TemporalIds" "Amazonka.MediaConvert.Types.H265TemporalIds" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265TemporalIds'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265TemporalIds") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265Tiles

newtype H265Tiles Source #

Enable use of tiles, allowing horizontal as well as vertical subdivision of the encoded pictures.

Constructors

H265Tiles' 

Bundled Patterns

pattern H265Tiles_DISABLED :: H265Tiles 
pattern H265Tiles_ENABLED :: H265Tiles 

Instances

Instances details
Eq H265Tiles Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Tiles

Ord H265Tiles Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Tiles

Read H265Tiles Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Tiles

Show H265Tiles Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Tiles

Generic H265Tiles Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Tiles

Associated Types

type Rep H265Tiles :: Type -> Type #

NFData H265Tiles Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Tiles

Methods

rnf :: H265Tiles -> () #

Hashable H265Tiles Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Tiles

ToJSON H265Tiles Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Tiles

ToJSONKey H265Tiles Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Tiles

FromJSON H265Tiles Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Tiles

FromJSONKey H265Tiles Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Tiles

ToLog H265Tiles Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Tiles

ToHeader H265Tiles Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Tiles

ToQuery H265Tiles Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Tiles

FromXML H265Tiles Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Tiles

ToXML H265Tiles Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Tiles

Methods

toXML :: H265Tiles -> XML #

ToByteString H265Tiles Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Tiles

Methods

toBS :: H265Tiles -> ByteString #

FromText H265Tiles Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Tiles

ToText H265Tiles Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Tiles

Methods

toText :: H265Tiles -> Text #

type Rep H265Tiles Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Tiles

type Rep H265Tiles = D1 ('MetaData "H265Tiles" "Amazonka.MediaConvert.Types.H265Tiles" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265Tiles'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265Tiles") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265UnregisteredSeiTimecode

newtype H265UnregisteredSeiTimecode Source #

Inserts timecode for each frame as 4 bytes of an unregistered SEI message.

Instances

Instances details
Eq H265UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265UnregisteredSeiTimecode

Ord H265UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265UnregisteredSeiTimecode

Read H265UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265UnregisteredSeiTimecode

Show H265UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265UnregisteredSeiTimecode

Generic H265UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265UnregisteredSeiTimecode

Associated Types

type Rep H265UnregisteredSeiTimecode :: Type -> Type #

NFData H265UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265UnregisteredSeiTimecode

Hashable H265UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265UnregisteredSeiTimecode

ToJSON H265UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265UnregisteredSeiTimecode

ToJSONKey H265UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265UnregisteredSeiTimecode

FromJSON H265UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265UnregisteredSeiTimecode

FromJSONKey H265UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265UnregisteredSeiTimecode

ToLog H265UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265UnregisteredSeiTimecode

ToHeader H265UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265UnregisteredSeiTimecode

ToQuery H265UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265UnregisteredSeiTimecode

FromXML H265UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265UnregisteredSeiTimecode

ToXML H265UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265UnregisteredSeiTimecode

ToByteString H265UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265UnregisteredSeiTimecode

FromText H265UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265UnregisteredSeiTimecode

ToText H265UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265UnregisteredSeiTimecode

type Rep H265UnregisteredSeiTimecode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265UnregisteredSeiTimecode

type Rep H265UnregisteredSeiTimecode = D1 ('MetaData "H265UnregisteredSeiTimecode" "Amazonka.MediaConvert.Types.H265UnregisteredSeiTimecode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265UnregisteredSeiTimecode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265UnregisteredSeiTimecode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

H265WriteMp4PackagingType

newtype H265WriteMp4PackagingType Source #

If the location of parameter set NAL units doesn't matter in your workflow, ignore this setting. Use this setting only with CMAF or DASH outputs, or with standalone file outputs in an MPEG-4 container (MP4 outputs). Choose HVC1 to mark your output as HVC1. This makes your output compliant with the following specification: ISO/IEC JTC1 SC29 N13798 Text ISO/IEC FDIS 14496-15 3rd Edition. For these outputs, the service stores parameter set NAL units in the sample headers but not in the samples directly. For MP4 outputs, when you choose HVC1, your output video might not work properly with some downstream systems and video players. The service defaults to marking your output as HEV1. For these outputs, the service writes parameter set NAL units directly into the samples.
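
A round-trip sketch using the FromJSON instance listed below; the HVC1 and HEV1 wire strings are the two values named in the description, and aeson is the only added dependency:

{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (eitherDecode)
import Amazonka.MediaConvert.Types (H265WriteMp4PackagingType)

-- The wire value arrives as a bare JSON string.
main :: IO ()
main = do
  print (eitherDecode "\"HVC1\"" :: Either String H265WriteMp4PackagingType)
  print (eitherDecode "\"HEV1\"" :: Either String H265WriteMp4PackagingType)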

Instances

Instances details
Eq H265WriteMp4PackagingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265WriteMp4PackagingType

Ord H265WriteMp4PackagingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265WriteMp4PackagingType

Read H265WriteMp4PackagingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265WriteMp4PackagingType

Show H265WriteMp4PackagingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265WriteMp4PackagingType

Generic H265WriteMp4PackagingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265WriteMp4PackagingType

Associated Types

type Rep H265WriteMp4PackagingType :: Type -> Type #

NFData H265WriteMp4PackagingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265WriteMp4PackagingType

Hashable H265WriteMp4PackagingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265WriteMp4PackagingType

ToJSON H265WriteMp4PackagingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265WriteMp4PackagingType

ToJSONKey H265WriteMp4PackagingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265WriteMp4PackagingType

FromJSON H265WriteMp4PackagingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265WriteMp4PackagingType

FromJSONKey H265WriteMp4PackagingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265WriteMp4PackagingType

ToLog H265WriteMp4PackagingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265WriteMp4PackagingType

ToHeader H265WriteMp4PackagingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265WriteMp4PackagingType

ToQuery H265WriteMp4PackagingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265WriteMp4PackagingType

FromXML H265WriteMp4PackagingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265WriteMp4PackagingType

ToXML H265WriteMp4PackagingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265WriteMp4PackagingType

ToByteString H265WriteMp4PackagingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265WriteMp4PackagingType

FromText H265WriteMp4PackagingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265WriteMp4PackagingType

ToText H265WriteMp4PackagingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265WriteMp4PackagingType

type Rep H265WriteMp4PackagingType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265WriteMp4PackagingType

type Rep H265WriteMp4PackagingType = D1 ('MetaData "H265WriteMp4PackagingType" "Amazonka.MediaConvert.Types.H265WriteMp4PackagingType" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "H265WriteMp4PackagingType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromH265WriteMp4PackagingType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsAdMarkers

newtype HlsAdMarkers Source #

Constructors

HlsAdMarkers' 

Instances

Instances details
Eq HlsAdMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdMarkers

Ord HlsAdMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdMarkers

Read HlsAdMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdMarkers

Show HlsAdMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdMarkers

Generic HlsAdMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdMarkers

Associated Types

type Rep HlsAdMarkers :: Type -> Type #

NFData HlsAdMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdMarkers

Methods

rnf :: HlsAdMarkers -> () #

Hashable HlsAdMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdMarkers

ToJSON HlsAdMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdMarkers

ToJSONKey HlsAdMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdMarkers

FromJSON HlsAdMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdMarkers

FromJSONKey HlsAdMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdMarkers

ToLog HlsAdMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdMarkers

ToHeader HlsAdMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdMarkers

ToQuery HlsAdMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdMarkers

FromXML HlsAdMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdMarkers

ToXML HlsAdMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdMarkers

Methods

toXML :: HlsAdMarkers -> XML #

ToByteString HlsAdMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdMarkers

FromText HlsAdMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdMarkers

ToText HlsAdMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdMarkers

Methods

toText :: HlsAdMarkers -> Text #

type Rep HlsAdMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdMarkers

type Rep HlsAdMarkers = D1 ('MetaData "HlsAdMarkers" "Amazonka.MediaConvert.Types.HlsAdMarkers" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsAdMarkers'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsAdMarkers") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsAudioOnlyContainer

newtype HlsAudioOnlyContainer Source #

Use this setting only in audio-only outputs. Choose MPEG-2 Transport Stream (M2TS) to create a file in an MPEG2-TS container. Keep the default value Automatic (AUTOMATIC) to create a raw audio-only file with no container. Regardless of the value that you specify here, if this output has video, the service will place outputs into an MPEG2-TS container.

Instances

Instances details
Eq HlsAudioOnlyContainer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyContainer

Ord HlsAudioOnlyContainer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyContainer

Read HlsAudioOnlyContainer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyContainer

Show HlsAudioOnlyContainer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyContainer

Generic HlsAudioOnlyContainer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyContainer

Associated Types

type Rep HlsAudioOnlyContainer :: Type -> Type #

NFData HlsAudioOnlyContainer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyContainer

Methods

rnf :: HlsAudioOnlyContainer -> () #

Hashable HlsAudioOnlyContainer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyContainer

ToJSON HlsAudioOnlyContainer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyContainer

ToJSONKey HlsAudioOnlyContainer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyContainer

FromJSON HlsAudioOnlyContainer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyContainer

FromJSONKey HlsAudioOnlyContainer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyContainer

ToLog HlsAudioOnlyContainer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyContainer

ToHeader HlsAudioOnlyContainer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyContainer

ToQuery HlsAudioOnlyContainer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyContainer

FromXML HlsAudioOnlyContainer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyContainer

ToXML HlsAudioOnlyContainer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyContainer

ToByteString HlsAudioOnlyContainer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyContainer

FromText HlsAudioOnlyContainer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyContainer

ToText HlsAudioOnlyContainer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyContainer

type Rep HlsAudioOnlyContainer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyContainer

type Rep HlsAudioOnlyContainer = D1 ('MetaData "HlsAudioOnlyContainer" "Amazonka.MediaConvert.Types.HlsAudioOnlyContainer" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsAudioOnlyContainer'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsAudioOnlyContainer") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsAudioOnlyHeader

newtype HlsAudioOnlyHeader Source #

Ignore this setting unless you are using FairPlay DRM with Verimatrix and you encounter playback issues. Keep the default value, Include (INCLUDE), to output audio-only headers. Choose Exclude (EXCLUDE) to remove the audio-only headers from your audio segments.

Instances

Instances details
Eq HlsAudioOnlyHeader Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyHeader

Ord HlsAudioOnlyHeader Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyHeader

Read HlsAudioOnlyHeader Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyHeader

Show HlsAudioOnlyHeader Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyHeader

Generic HlsAudioOnlyHeader Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyHeader

Associated Types

type Rep HlsAudioOnlyHeader :: Type -> Type #

NFData HlsAudioOnlyHeader Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyHeader

Methods

rnf :: HlsAudioOnlyHeader -> () #

Hashable HlsAudioOnlyHeader Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyHeader

ToJSON HlsAudioOnlyHeader Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyHeader

ToJSONKey HlsAudioOnlyHeader Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyHeader

FromJSON HlsAudioOnlyHeader Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyHeader

FromJSONKey HlsAudioOnlyHeader Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyHeader

ToLog HlsAudioOnlyHeader Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyHeader

ToHeader HlsAudioOnlyHeader Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyHeader

ToQuery HlsAudioOnlyHeader Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyHeader

FromXML HlsAudioOnlyHeader Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyHeader

ToXML HlsAudioOnlyHeader Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyHeader

ToByteString HlsAudioOnlyHeader Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyHeader

FromText HlsAudioOnlyHeader Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyHeader

ToText HlsAudioOnlyHeader Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyHeader

type Rep HlsAudioOnlyHeader Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioOnlyHeader

type Rep HlsAudioOnlyHeader = D1 ('MetaData "HlsAudioOnlyHeader" "Amazonka.MediaConvert.Types.HlsAudioOnlyHeader" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsAudioOnlyHeader'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsAudioOnlyHeader") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsAudioTrackType

newtype HlsAudioTrackType Source #

Four types of audio-only tracks are supported:
- Audio-Only Variant Stream: The client can play back this audio-only stream instead of video in low-bandwidth scenarios. Represented as an EXT-X-STREAM-INF in the HLS manifest.
- Alternate Audio, Auto Select, Default: Alternate rendition that the client should try to play back by default. Represented as an EXT-X-MEDIA in the HLS manifest with DEFAULT=YES, AUTOSELECT=YES.
- Alternate Audio, Auto Select, Not Default: Alternate rendition that the client may try to play back by default. Represented as an EXT-X-MEDIA in the HLS manifest with DEFAULT=NO, AUTOSELECT=YES.
- Alternate Audio, not Auto Select: Alternate rendition that the client will not try to play back by default. Represented as an EXT-X-MEDIA in the HLS manifest with DEFAULT=NO, AUTOSELECT=NO.
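
An illustrative mapping from each track type to the manifest representation described above. The wire strings matched here are assumptions based on the AWS enum names; only the EXT-X-STREAM-INF / EXT-X-MEDIA and DEFAULT/AUTOSELECT behaviour comes from the description.

{-# LANGUAGE OverloadedStrings #-}

import Data.Text (Text)
import Amazonka.MediaConvert.Types (HlsAudioTrackType (..))

-- Inspect the wrapped wire string via the field selector shown in the Rep below.
manifestRepresentation :: HlsAudioTrackType -> Text
manifestRepresentation t = case fromHlsAudioTrackType t of
  "AUDIO_ONLY_VARIANT_STREAM"           -> "EXT-X-STREAM-INF (audio-only variant stream)"
  "ALTERNATE_AUDIO_AUTO_SELECT_DEFAULT" -> "EXT-X-MEDIA with DEFAULT=YES, AUTOSELECT=YES"
  "ALTERNATE_AUDIO_AUTO_SELECT"         -> "EXT-X-MEDIA with DEFAULT=NO, AUTOSELECT=YES"
  "ALTERNATE_AUDIO_NOT_AUTO_SELECT"     -> "EXT-X-MEDIA with DEFAULT=NO, AUTOSELECT=NO"
  other                                 -> other -- pass unrecognised values through unchanged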

Instances

Instances details
Eq HlsAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioTrackType

Ord HlsAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioTrackType

Read HlsAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioTrackType

Show HlsAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioTrackType

Generic HlsAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioTrackType

Associated Types

type Rep HlsAudioTrackType :: Type -> Type #

NFData HlsAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioTrackType

Methods

rnf :: HlsAudioTrackType -> () #

Hashable HlsAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioTrackType

ToJSON HlsAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioTrackType

ToJSONKey HlsAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioTrackType

FromJSON HlsAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioTrackType

FromJSONKey HlsAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioTrackType

ToLog HlsAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioTrackType

ToHeader HlsAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioTrackType

ToQuery HlsAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioTrackType

FromXML HlsAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioTrackType

ToXML HlsAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioTrackType

ToByteString HlsAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioTrackType

FromText HlsAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioTrackType

ToText HlsAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioTrackType

type Rep HlsAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAudioTrackType

type Rep HlsAudioTrackType = D1 ('MetaData "HlsAudioTrackType" "Amazonka.MediaConvert.Types.HlsAudioTrackType" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsAudioTrackType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsAudioTrackType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsCaptionLanguageSetting

newtype HlsCaptionLanguageSetting Source #

Applies only to 608 Embedded output captions. Insert: Include CLOSED-CAPTIONS lines in the manifest. Specify at least one language in the CC1 Language Code field. One CLOSED-CAPTION line is added for each Language Code you specify. Make sure to specify the languages in the order in which they appear in the original source (if the source is embedded format) or the order of the caption selectors (if the source is other than embedded). Otherwise, languages in the manifest will not match up properly with the output captions. None: Include CLOSED-CAPTIONS=NONE line in the manifest. Omit: Omit any CLOSED-CAPTIONS line from the manifest.

Instances

Instances details
Eq HlsCaptionLanguageSetting Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageSetting

Ord HlsCaptionLanguageSetting Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageSetting

Read HlsCaptionLanguageSetting Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageSetting

Show HlsCaptionLanguageSetting Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageSetting

Generic HlsCaptionLanguageSetting Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageSetting

Associated Types

type Rep HlsCaptionLanguageSetting :: Type -> Type #

NFData HlsCaptionLanguageSetting Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageSetting

Hashable HlsCaptionLanguageSetting Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageSetting

ToJSON HlsCaptionLanguageSetting Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageSetting

ToJSONKey HlsCaptionLanguageSetting Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageSetting

FromJSON HlsCaptionLanguageSetting Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageSetting

FromJSONKey HlsCaptionLanguageSetting Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageSetting

ToLog HlsCaptionLanguageSetting Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageSetting

ToHeader HlsCaptionLanguageSetting Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageSetting

ToQuery HlsCaptionLanguageSetting Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageSetting

FromXML HlsCaptionLanguageSetting Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageSetting

ToXML HlsCaptionLanguageSetting Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageSetting

ToByteString HlsCaptionLanguageSetting Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageSetting

FromText HlsCaptionLanguageSetting Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageSetting

ToText HlsCaptionLanguageSetting Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageSetting

type Rep HlsCaptionLanguageSetting Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageSetting

type Rep HlsCaptionLanguageSetting = D1 ('MetaData "HlsCaptionLanguageSetting" "Amazonka.MediaConvert.Types.HlsCaptionLanguageSetting" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsCaptionLanguageSetting'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsCaptionLanguageSetting") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsClientCache

newtype HlsClientCache Source #

Disable this setting only when your workflow requires the #EXT-X-ALLOW-CACHE:no tag. Otherwise, keep the default value Enabled (ENABLED) and control caching in your video distribution set up. For example, use the Cache-Control http header.

Constructors

HlsClientCache' 

Instances

Instances details
Eq HlsClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsClientCache

Ord HlsClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsClientCache

Read HlsClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsClientCache

Show HlsClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsClientCache

Generic HlsClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsClientCache

Associated Types

type Rep HlsClientCache :: Type -> Type #

NFData HlsClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsClientCache

Methods

rnf :: HlsClientCache -> () #

Hashable HlsClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsClientCache

ToJSON HlsClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsClientCache

ToJSONKey HlsClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsClientCache

FromJSON HlsClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsClientCache

FromJSONKey HlsClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsClientCache

ToLog HlsClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsClientCache

ToHeader HlsClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsClientCache

ToQuery HlsClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsClientCache

FromXML HlsClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsClientCache

ToXML HlsClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsClientCache

Methods

toXML :: HlsClientCache -> XML #

ToByteString HlsClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsClientCache

FromText HlsClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsClientCache

ToText HlsClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsClientCache

type Rep HlsClientCache Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsClientCache

type Rep HlsClientCache = D1 ('MetaData "HlsClientCache" "Amazonka.MediaConvert.Types.HlsClientCache" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsClientCache'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsClientCache") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsCodecSpecification

newtype HlsCodecSpecification Source #

Specification to use (RFC-6381 or the default RFC-4281) during m3u8 playlist generation.

Instances

Instances details
Eq HlsCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCodecSpecification

Ord HlsCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCodecSpecification

Read HlsCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCodecSpecification

Show HlsCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCodecSpecification

Generic HlsCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCodecSpecification

Associated Types

type Rep HlsCodecSpecification :: Type -> Type #

NFData HlsCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCodecSpecification

Methods

rnf :: HlsCodecSpecification -> () #

Hashable HlsCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCodecSpecification

ToJSON HlsCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCodecSpecification

ToJSONKey HlsCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCodecSpecification

FromJSON HlsCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCodecSpecification

FromJSONKey HlsCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCodecSpecification

ToLog HlsCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCodecSpecification

ToHeader HlsCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCodecSpecification

ToQuery HlsCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCodecSpecification

FromXML HlsCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCodecSpecification

ToXML HlsCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCodecSpecification

ToByteString HlsCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCodecSpecification

FromText HlsCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCodecSpecification

ToText HlsCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCodecSpecification

type Rep HlsCodecSpecification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCodecSpecification

type Rep HlsCodecSpecification = D1 ('MetaData "HlsCodecSpecification" "Amazonka.MediaConvert.Types.HlsCodecSpecification" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsCodecSpecification'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsCodecSpecification") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsDescriptiveVideoServiceFlag

newtype HlsDescriptiveVideoServiceFlag Source #

Specify whether to flag this audio track as descriptive video service (DVS) in your HLS parent manifest. When you choose Flag (FLAG), MediaConvert includes the parameter CHARACTERISTICS="public.accessibility.describes-video" in the EXT-X-MEDIA entry for this track. When you keep the default choice, Don't flag (DONT_FLAG), MediaConvert leaves this parameter out. The DVS flag can help with accessibility on Apple devices. For more information, see the Apple documentation.
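
A sketch of how the ToJSON and FromJSON instances listed below behave: the value is expected to serialise as a plain JSON string and to round-trip through aeson. The FLAG string comes from the description above, the constructor name from the generic Rep, and Data.Aeson is assumed to be available.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (HlsDescriptiveVideoServiceFlag (..))
import qualified Data.Aeson as Aeson

flagged :: HlsDescriptiveVideoServiceFlag
flagged = HlsDescriptiveVideoServiceFlag' "FLAG"

-- Encoding then decoding should recover the original value.
roundTrip :: Maybe HlsDescriptiveVideoServiceFlag
roundTrip = Aeson.decode (Aeson.encode flagged)

main :: IO ()
main = print (roundTrip == Just flagged) -- True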

Instances

Instances details
Eq HlsDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDescriptiveVideoServiceFlag

Ord HlsDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDescriptiveVideoServiceFlag

Read HlsDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDescriptiveVideoServiceFlag

Show HlsDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDescriptiveVideoServiceFlag

Generic HlsDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDescriptiveVideoServiceFlag

Associated Types

type Rep HlsDescriptiveVideoServiceFlag :: Type -> Type #

NFData HlsDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDescriptiveVideoServiceFlag

Hashable HlsDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDescriptiveVideoServiceFlag

ToJSON HlsDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDescriptiveVideoServiceFlag

ToJSONKey HlsDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDescriptiveVideoServiceFlag

FromJSON HlsDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDescriptiveVideoServiceFlag

FromJSONKey HlsDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDescriptiveVideoServiceFlag

ToLog HlsDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDescriptiveVideoServiceFlag

ToHeader HlsDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDescriptiveVideoServiceFlag

ToQuery HlsDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDescriptiveVideoServiceFlag

FromXML HlsDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDescriptiveVideoServiceFlag

ToXML HlsDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDescriptiveVideoServiceFlag

ToByteString HlsDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDescriptiveVideoServiceFlag

FromText HlsDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDescriptiveVideoServiceFlag

ToText HlsDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDescriptiveVideoServiceFlag

type Rep HlsDescriptiveVideoServiceFlag Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDescriptiveVideoServiceFlag

type Rep HlsDescriptiveVideoServiceFlag = D1 ('MetaData "HlsDescriptiveVideoServiceFlag" "Amazonka.MediaConvert.Types.HlsDescriptiveVideoServiceFlag" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsDescriptiveVideoServiceFlag'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsDescriptiveVideoServiceFlag") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsDirectoryStructure

newtype HlsDirectoryStructure Source #

Indicates whether segments should be placed in subdirectories.

Instances

Instances details
Eq HlsDirectoryStructure Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDirectoryStructure

Ord HlsDirectoryStructure Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDirectoryStructure

Read HlsDirectoryStructure Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDirectoryStructure

Show HlsDirectoryStructure Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDirectoryStructure

Generic HlsDirectoryStructure Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDirectoryStructure

Associated Types

type Rep HlsDirectoryStructure :: Type -> Type #

NFData HlsDirectoryStructure Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDirectoryStructure

Methods

rnf :: HlsDirectoryStructure -> () #

Hashable HlsDirectoryStructure Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDirectoryStructure

ToJSON HlsDirectoryStructure Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDirectoryStructure

ToJSONKey HlsDirectoryStructure Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDirectoryStructure

FromJSON HlsDirectoryStructure Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDirectoryStructure

FromJSONKey HlsDirectoryStructure Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDirectoryStructure

ToLog HlsDirectoryStructure Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDirectoryStructure

ToHeader HlsDirectoryStructure Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDirectoryStructure

ToQuery HlsDirectoryStructure Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDirectoryStructure

FromXML HlsDirectoryStructure Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDirectoryStructure

ToXML HlsDirectoryStructure Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDirectoryStructure

ToByteString HlsDirectoryStructure Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDirectoryStructure

FromText HlsDirectoryStructure Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDirectoryStructure

ToText HlsDirectoryStructure Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDirectoryStructure

type Rep HlsDirectoryStructure Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsDirectoryStructure

type Rep HlsDirectoryStructure = D1 ('MetaData "HlsDirectoryStructure" "Amazonka.MediaConvert.Types.HlsDirectoryStructure" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsDirectoryStructure'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsDirectoryStructure") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsEncryptionType

newtype HlsEncryptionType Source #

Encrypts the segments with the given encryption scheme. Leave blank to disable. Selecting 'Disabled' in the web interface also disables encryption.

Instances

Instances details
Eq HlsEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionType

Ord HlsEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionType

Read HlsEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionType

Show HlsEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionType

Generic HlsEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionType

Associated Types

type Rep HlsEncryptionType :: Type -> Type #

NFData HlsEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionType

Methods

rnf :: HlsEncryptionType -> () #

Hashable HlsEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionType

ToJSON HlsEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionType

ToJSONKey HlsEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionType

FromJSON HlsEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionType

FromJSONKey HlsEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionType

ToLog HlsEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionType

ToHeader HlsEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionType

ToQuery HlsEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionType

FromXML HlsEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionType

ToXML HlsEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionType

ToByteString HlsEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionType

FromText HlsEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionType

ToText HlsEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionType

type Rep HlsEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionType

type Rep HlsEncryptionType = D1 ('MetaData "HlsEncryptionType" "Amazonka.MediaConvert.Types.HlsEncryptionType" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsEncryptionType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsEncryptionType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsIFrameOnlyManifest

newtype HlsIFrameOnlyManifest Source #

Choose Include (INCLUDE) to have MediaConvert generate a child manifest that lists only the I-frames for this rendition, in addition to your regular manifest for this rendition. You might use this manifest as part of a workflow that creates preview functions for your video. MediaConvert adds both the I-frame only child manifest and the regular child manifest to the parent manifest. When you don't need the I-frame only child manifest, keep the default value Exclude (EXCLUDE).

Instances

Instances details
Eq HlsIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIFrameOnlyManifest

Ord HlsIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIFrameOnlyManifest

Read HlsIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIFrameOnlyManifest

Show HlsIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIFrameOnlyManifest

Generic HlsIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIFrameOnlyManifest

Associated Types

type Rep HlsIFrameOnlyManifest :: Type -> Type #

NFData HlsIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIFrameOnlyManifest

Methods

rnf :: HlsIFrameOnlyManifest -> () #

Hashable HlsIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIFrameOnlyManifest

ToJSON HlsIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIFrameOnlyManifest

ToJSONKey HlsIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIFrameOnlyManifest

FromJSON HlsIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIFrameOnlyManifest

FromJSONKey HlsIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIFrameOnlyManifest

ToLog HlsIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIFrameOnlyManifest

ToHeader HlsIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIFrameOnlyManifest

ToQuery HlsIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIFrameOnlyManifest

FromXML HlsIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIFrameOnlyManifest

ToXML HlsIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIFrameOnlyManifest

ToByteString HlsIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIFrameOnlyManifest

FromText HlsIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIFrameOnlyManifest

ToText HlsIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIFrameOnlyManifest

type Rep HlsIFrameOnlyManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIFrameOnlyManifest

type Rep HlsIFrameOnlyManifest = D1 ('MetaData "HlsIFrameOnlyManifest" "Amazonka.MediaConvert.Types.HlsIFrameOnlyManifest" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsIFrameOnlyManifest'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsIFrameOnlyManifest") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsImageBasedTrickPlay

newtype HlsImageBasedTrickPlay Source #

Specify whether MediaConvert generates images for trick play. Keep the default value, None (NONE), to not generate any images. Choose Thumbnail (THUMBNAIL) to generate tiled thumbnails. Choose Thumbnail and full frame (THUMBNAIL_AND_FULLFRAME) to generate tiled thumbnails and full-resolution images of single frames. MediaConvert creates a child manifest for each set of images that you generate and adds corresponding entries to the parent manifest. A common application for these images is Roku trick mode. The thumbnails and full-frame images that MediaConvert creates with this feature are compatible with this Roku specification: https://developer.roku.com/docs/developer-program/media-playback/trick-mode/hls-and-dash.md

Instances

Instances details
Eq HlsImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlay

Ord HlsImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlay

Read HlsImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlay

Show HlsImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlay

Generic HlsImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlay

Associated Types

type Rep HlsImageBasedTrickPlay :: Type -> Type #

NFData HlsImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlay

Methods

rnf :: HlsImageBasedTrickPlay -> () #

Hashable HlsImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlay

ToJSON HlsImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlay

ToJSONKey HlsImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlay

FromJSON HlsImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlay

FromJSONKey HlsImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlay

ToLog HlsImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlay

ToHeader HlsImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlay

ToQuery HlsImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlay

FromXML HlsImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlay

ToXML HlsImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlay

ToByteString HlsImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlay

FromText HlsImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlay

ToText HlsImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlay

type Rep HlsImageBasedTrickPlay Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlay

type Rep HlsImageBasedTrickPlay = D1 ('MetaData "HlsImageBasedTrickPlay" "Amazonka.MediaConvert.Types.HlsImageBasedTrickPlay" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsImageBasedTrickPlay'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsImageBasedTrickPlay") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsInitializationVectorInManifest

newtype HlsInitializationVectorInManifest Source #

The Initialization Vector is a 128-bit number used in conjunction with the key for encrypting blocks. If set to INCLUDE, the Initialization Vector is listed in the manifest; otherwise it is omitted from the manifest.
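
A small sketch of the text round-trip that the ToText and FromText instances below provide. It assumes toText and fromText are in scope from amazonka-core (for example via Amazonka.Data); INCLUDE is the value named in the description.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Data (fromText, toText) -- assumed re-export from amazonka-core
import Amazonka.MediaConvert.Types (HlsInitializationVectorInManifest (..))

include :: HlsInitializationVectorInManifest
include = HlsInitializationVectorInManifest' "INCLUDE"

-- Render the wire representation and parse it back.
main :: IO ()
main = print (fromText (toText include) == Right include) -- True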

Instances

Instances details
Eq HlsInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsInitializationVectorInManifest

Ord HlsInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsInitializationVectorInManifest

Read HlsInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsInitializationVectorInManifest

Show HlsInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsInitializationVectorInManifest

Generic HlsInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsInitializationVectorInManifest

Associated Types

type Rep HlsInitializationVectorInManifest :: Type -> Type #

NFData HlsInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsInitializationVectorInManifest

Hashable HlsInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsInitializationVectorInManifest

ToJSON HlsInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsInitializationVectorInManifest

ToJSONKey HlsInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsInitializationVectorInManifest

FromJSON HlsInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsInitializationVectorInManifest

FromJSONKey HlsInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsInitializationVectorInManifest

ToLog HlsInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsInitializationVectorInManifest

ToHeader HlsInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsInitializationVectorInManifest

ToQuery HlsInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsInitializationVectorInManifest

FromXML HlsInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsInitializationVectorInManifest

ToXML HlsInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsInitializationVectorInManifest

ToByteString HlsInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsInitializationVectorInManifest

FromText HlsInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsInitializationVectorInManifest

ToText HlsInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsInitializationVectorInManifest

type Rep HlsInitializationVectorInManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsInitializationVectorInManifest

type Rep HlsInitializationVectorInManifest = D1 ('MetaData "HlsInitializationVectorInManifest" "Amazonka.MediaConvert.Types.HlsInitializationVectorInManifest" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsInitializationVectorInManifest'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsInitializationVectorInManifest") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsIntervalCadence

newtype HlsIntervalCadence Source #

The cadence MediaConvert follows for generating thumbnails. If set to FOLLOW_IFRAME, MediaConvert generates thumbnails for each IDR frame in the output (matching the GOP cadence). If set to FOLLOW_CUSTOM, MediaConvert generates thumbnails according to the interval you specify in thumbnailInterval.
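
A sketch of branching on the cadence by unwrapping the Text field; the accessor name fromHlsIntervalCadence comes from the generic Rep below, and the FOLLOW_IFRAME / FOLLOW_CUSTOM strings come from the description. The helper itself is hypothetical.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (HlsIntervalCadence (..))

-- Hypothetical helper: describe the thumbnail cadence in words.
describeCadence :: HlsIntervalCadence -> String
describeCadence cadence =
  case fromHlsIntervalCadence cadence of
    "FOLLOW_IFRAME" -> "one thumbnail per IDR frame, matching the GOP cadence"
    "FOLLOW_CUSTOM" -> "thumbnails at the interval given by thumbnailInterval"
    other -> "unrecognised cadence: " <> show other

main :: IO ()
main = putStrLn (describeCadence (HlsIntervalCadence' "FOLLOW_IFRAME"))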

Instances

Instances details
Eq HlsIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIntervalCadence

Ord HlsIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIntervalCadence

Read HlsIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIntervalCadence

Show HlsIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIntervalCadence

Generic HlsIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIntervalCadence

Associated Types

type Rep HlsIntervalCadence :: Type -> Type #

NFData HlsIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIntervalCadence

Methods

rnf :: HlsIntervalCadence -> () #

Hashable HlsIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIntervalCadence

ToJSON HlsIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIntervalCadence

ToJSONKey HlsIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIntervalCadence

FromJSON HlsIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIntervalCadence

FromJSONKey HlsIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIntervalCadence

ToLog HlsIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIntervalCadence

ToHeader HlsIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIntervalCadence

ToQuery HlsIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIntervalCadence

FromXML HlsIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIntervalCadence

ToXML HlsIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIntervalCadence

ToByteString HlsIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIntervalCadence

FromText HlsIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIntervalCadence

ToText HlsIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIntervalCadence

type Rep HlsIntervalCadence Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsIntervalCadence

type Rep HlsIntervalCadence = D1 ('MetaData "HlsIntervalCadence" "Amazonka.MediaConvert.Types.HlsIntervalCadence" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsIntervalCadence'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsIntervalCadence") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsKeyProviderType

newtype HlsKeyProviderType Source #

Specify whether your DRM encryption key is static or from a key provider that follows the SPEKE standard. For more information about SPEKE, see https://docs.aws.amazon.com/speke/latest/documentation/what-is-speke.html.

Instances

Instances details
Eq HlsKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsKeyProviderType

Ord HlsKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsKeyProviderType

Read HlsKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsKeyProviderType

Show HlsKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsKeyProviderType

Generic HlsKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsKeyProviderType

Associated Types

type Rep HlsKeyProviderType :: Type -> Type #

NFData HlsKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsKeyProviderType

Methods

rnf :: HlsKeyProviderType -> () #

Hashable HlsKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsKeyProviderType

ToJSON HlsKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsKeyProviderType

ToJSONKey HlsKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsKeyProviderType

FromJSON HlsKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsKeyProviderType

FromJSONKey HlsKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsKeyProviderType

ToLog HlsKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsKeyProviderType

ToHeader HlsKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsKeyProviderType

ToQuery HlsKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsKeyProviderType

FromXML HlsKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsKeyProviderType

ToXML HlsKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsKeyProviderType

ToByteString HlsKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsKeyProviderType

FromText HlsKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsKeyProviderType

ToText HlsKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsKeyProviderType

type Rep HlsKeyProviderType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsKeyProviderType

type Rep HlsKeyProviderType = D1 ('MetaData "HlsKeyProviderType" "Amazonka.MediaConvert.Types.HlsKeyProviderType" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsKeyProviderType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsKeyProviderType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsManifestCompression

newtype HlsManifestCompression Source #

When set to GZIP, compresses the HLS playlist.
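
Because the instance list below includes Ord and ToJSONKey, values can serve as map keys that aeson encodes as JSON object keys. A sketch, assuming Data.Aeson and Data.Map.Strict are available; GZIP is named in the description, while NONE is an assumed counterpart value.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (HlsManifestCompression (..))
import qualified Data.Aeson as Aeson
import qualified Data.Map.Strict as Map

-- Keyed by compression mode; the Ord instance orders the keys and the
-- ToJSONKey instance lets aeson render them as JSON object keys.
settingsByCompression :: Map.Map HlsManifestCompression Bool
settingsByCompression =
  Map.fromList
    [ (HlsManifestCompression' "GZIP", True)
    , (HlsManifestCompression' "NONE", False) -- assumed counterpart value
    ]

main :: IO ()
main = print (Aeson.encode settingsByCompression)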

Instances

Instances details
Eq HlsManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestCompression

Ord HlsManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestCompression

Read HlsManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestCompression

Show HlsManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestCompression

Generic HlsManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestCompression

Associated Types

type Rep HlsManifestCompression :: Type -> Type #

NFData HlsManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestCompression

Methods

rnf :: HlsManifestCompression -> () #

Hashable HlsManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestCompression

ToJSON HlsManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestCompression

ToJSONKey HlsManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestCompression

FromJSON HlsManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestCompression

FromJSONKey HlsManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestCompression

ToLog HlsManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestCompression

ToHeader HlsManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestCompression

ToQuery HlsManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestCompression

FromXML HlsManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestCompression

ToXML HlsManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestCompression

ToByteString HlsManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestCompression

FromText HlsManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestCompression

ToText HlsManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestCompression

type Rep HlsManifestCompression Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestCompression

type Rep HlsManifestCompression = D1 ('MetaData "HlsManifestCompression" "Amazonka.MediaConvert.Types.HlsManifestCompression" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsManifestCompression'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsManifestCompression") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsManifestDurationFormat

newtype HlsManifestDurationFormat Source #

Indicates whether the output manifest should use floating point values for segment duration.

Instances

Instances details
Eq HlsManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestDurationFormat

Ord HlsManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestDurationFormat

Read HlsManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestDurationFormat

Show HlsManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestDurationFormat

Generic HlsManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestDurationFormat

Associated Types

type Rep HlsManifestDurationFormat :: Type -> Type #

NFData HlsManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestDurationFormat

Hashable HlsManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestDurationFormat

ToJSON HlsManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestDurationFormat

ToJSONKey HlsManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestDurationFormat

FromJSON HlsManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestDurationFormat

FromJSONKey HlsManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestDurationFormat

ToLog HlsManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestDurationFormat

ToHeader HlsManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestDurationFormat

ToQuery HlsManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestDurationFormat

FromXML HlsManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestDurationFormat

ToXML HlsManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestDurationFormat

ToByteString HlsManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestDurationFormat

FromText HlsManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestDurationFormat

ToText HlsManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestDurationFormat

type Rep HlsManifestDurationFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsManifestDurationFormat

type Rep HlsManifestDurationFormat = D1 ('MetaData "HlsManifestDurationFormat" "Amazonka.MediaConvert.Types.HlsManifestDurationFormat" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsManifestDurationFormat'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsManifestDurationFormat") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsOfflineEncrypted

newtype HlsOfflineEncrypted Source #

Enable this setting to insert the EXT-X-SESSION-KEY element into the master playlist. This allows for offline Apple HLS FairPlay content protection.

Instances

Instances details
Eq HlsOfflineEncrypted Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOfflineEncrypted

Ord HlsOfflineEncrypted Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOfflineEncrypted

Read HlsOfflineEncrypted Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOfflineEncrypted

Show HlsOfflineEncrypted Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOfflineEncrypted

Generic HlsOfflineEncrypted Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOfflineEncrypted

Associated Types

type Rep HlsOfflineEncrypted :: Type -> Type #

NFData HlsOfflineEncrypted Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOfflineEncrypted

Methods

rnf :: HlsOfflineEncrypted -> () #

Hashable HlsOfflineEncrypted Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOfflineEncrypted

ToJSON HlsOfflineEncrypted Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOfflineEncrypted

ToJSONKey HlsOfflineEncrypted Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOfflineEncrypted

FromJSON HlsOfflineEncrypted Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOfflineEncrypted

FromJSONKey HlsOfflineEncrypted Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOfflineEncrypted

ToLog HlsOfflineEncrypted Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOfflineEncrypted

ToHeader HlsOfflineEncrypted Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOfflineEncrypted

ToQuery HlsOfflineEncrypted Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOfflineEncrypted

FromXML HlsOfflineEncrypted Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOfflineEncrypted

ToXML HlsOfflineEncrypted Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOfflineEncrypted

ToByteString HlsOfflineEncrypted Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOfflineEncrypted

FromText HlsOfflineEncrypted Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOfflineEncrypted

ToText HlsOfflineEncrypted Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOfflineEncrypted

type Rep HlsOfflineEncrypted Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOfflineEncrypted

type Rep HlsOfflineEncrypted = D1 ('MetaData "HlsOfflineEncrypted" "Amazonka.MediaConvert.Types.HlsOfflineEncrypted" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsOfflineEncrypted'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsOfflineEncrypted") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsOutputSelection

newtype HlsOutputSelection Source #

Indicates whether the .m3u8 manifest file should be generated for this HLS output group.

Instances

Instances details
Eq HlsOutputSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOutputSelection

Ord HlsOutputSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOutputSelection

Read HlsOutputSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOutputSelection

Show HlsOutputSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOutputSelection

Generic HlsOutputSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOutputSelection

Associated Types

type Rep HlsOutputSelection :: Type -> Type #

NFData HlsOutputSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOutputSelection

Methods

rnf :: HlsOutputSelection -> () #

Hashable HlsOutputSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOutputSelection

ToJSON HlsOutputSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOutputSelection

ToJSONKey HlsOutputSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOutputSelection

FromJSON HlsOutputSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOutputSelection

FromJSONKey HlsOutputSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOutputSelection

ToLog HlsOutputSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOutputSelection

ToHeader HlsOutputSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOutputSelection

ToQuery HlsOutputSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOutputSelection

FromXML HlsOutputSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOutputSelection

ToXML HlsOutputSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOutputSelection

ToByteString HlsOutputSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOutputSelection

FromText HlsOutputSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOutputSelection

ToText HlsOutputSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOutputSelection

type Rep HlsOutputSelection Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsOutputSelection

type Rep HlsOutputSelection = D1 ('MetaData "HlsOutputSelection" "Amazonka.MediaConvert.Types.HlsOutputSelection" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsOutputSelection'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsOutputSelection") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsProgramDateTime

newtype HlsProgramDateTime Source #

Includes or excludes the EXT-X-PROGRAM-DATE-TIME tag in .m3u8 manifest files. The value is calculated in one of two ways: either the program date and time are initialized using the input timecode source, or the time is initialized using the input timecode source and the date is initialized using the timestamp_offset.

Instances

Instances details
Eq HlsProgramDateTime Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsProgramDateTime

Ord HlsProgramDateTime Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsProgramDateTime

Read HlsProgramDateTime Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsProgramDateTime

Show HlsProgramDateTime Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsProgramDateTime

Generic HlsProgramDateTime Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsProgramDateTime

Associated Types

type Rep HlsProgramDateTime :: Type -> Type #

NFData HlsProgramDateTime Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsProgramDateTime

Methods

rnf :: HlsProgramDateTime -> () #

Hashable HlsProgramDateTime Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsProgramDateTime

ToJSON HlsProgramDateTime Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsProgramDateTime

ToJSONKey HlsProgramDateTime Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsProgramDateTime

FromJSON HlsProgramDateTime Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsProgramDateTime

FromJSONKey HlsProgramDateTime Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsProgramDateTime

ToLog HlsProgramDateTime Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsProgramDateTime

ToHeader HlsProgramDateTime Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsProgramDateTime

ToQuery HlsProgramDateTime Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsProgramDateTime

FromXML HlsProgramDateTime Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsProgramDateTime

ToXML HlsProgramDateTime Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsProgramDateTime

ToByteString HlsProgramDateTime Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsProgramDateTime

FromText HlsProgramDateTime Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsProgramDateTime

ToText HlsProgramDateTime Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsProgramDateTime

type Rep HlsProgramDateTime Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsProgramDateTime

type Rep HlsProgramDateTime = D1 ('MetaData "HlsProgramDateTime" "Amazonka.MediaConvert.Types.HlsProgramDateTime" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsProgramDateTime'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsProgramDateTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsSegmentControl

newtype HlsSegmentControl Source #

When set to SINGLE_FILE, emits the program as a single media resource (.ts) file and uses #EXT-X-BYTERANGE tags to index segments for playback.
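
The Hashable and Eq instances below make the type usable with unordered containers. A sketch, assuming Data.HashSet from unordered-containers is available; SINGLE_FILE is the value named in the description.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (HlsSegmentControl (..))
import qualified Data.HashSet as HashSet

singleFileModes :: HashSet.HashSet HlsSegmentControl
singleFileModes = HashSet.fromList [HlsSegmentControl' "SINGLE_FILE"]

-- Membership checks use the Eq and Hashable instances.
main :: IO ()
main = print (HashSet.member (HlsSegmentControl' "SINGLE_FILE") singleFileModes) -- True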

Instances

Instances details
Eq HlsSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentControl

Ord HlsSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentControl

Read HlsSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentControl

Show HlsSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentControl

Generic HlsSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentControl

Associated Types

type Rep HlsSegmentControl :: Type -> Type #

NFData HlsSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentControl

Methods

rnf :: HlsSegmentControl -> () #

Hashable HlsSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentControl

ToJSON HlsSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentControl

ToJSONKey HlsSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentControl

FromJSON HlsSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentControl

FromJSONKey HlsSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentControl

ToLog HlsSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentControl

ToHeader HlsSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentControl

ToQuery HlsSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentControl

FromXML HlsSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentControl

ToXML HlsSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentControl

ToByteString HlsSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentControl

FromText HlsSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentControl

ToText HlsSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentControl

type Rep HlsSegmentControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentControl

type Rep HlsSegmentControl = D1 ('MetaData "HlsSegmentControl" "Amazonka.MediaConvert.Types.HlsSegmentControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsSegmentControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsSegmentControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsSegmentLengthControl

newtype HlsSegmentLengthControl Source #

Specify how you want MediaConvert to determine the segment length. Choose Exact (EXACT) to have the encoder use the exact length that you specify with the setting Segment length (SegmentLength). This might result in extra I-frames. Choose Multiple of GOP (GOP_MULTIPLE) to have the encoder round up the segment lengths to match the next GOP boundary.

Instances

Instances details
Eq HlsSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentLengthControl

Ord HlsSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentLengthControl

Read HlsSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentLengthControl

Show HlsSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentLengthControl

Generic HlsSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentLengthControl

Associated Types

type Rep HlsSegmentLengthControl :: Type -> Type #

NFData HlsSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentLengthControl

Methods

rnf :: HlsSegmentLengthControl -> () #

Hashable HlsSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentLengthControl

ToJSON HlsSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentLengthControl

ToJSONKey HlsSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentLengthControl

FromJSON HlsSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentLengthControl

FromJSONKey HlsSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentLengthControl

ToLog HlsSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentLengthControl

ToHeader HlsSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentLengthControl

ToQuery HlsSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentLengthControl

FromXML HlsSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentLengthControl

ToXML HlsSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentLengthControl

ToByteString HlsSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentLengthControl

FromText HlsSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentLengthControl

ToText HlsSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentLengthControl

type Rep HlsSegmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSegmentLengthControl

type Rep HlsSegmentLengthControl = D1 ('MetaData "HlsSegmentLengthControl" "Amazonka.MediaConvert.Types.HlsSegmentLengthControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsSegmentLengthControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsSegmentLengthControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsStreamInfResolution

newtype HlsStreamInfResolution Source #

Include or exclude the RESOLUTION attribute for video in the EXT-X-STREAM-INF tag of the variant manifest.

Instances

Instances details
Eq HlsStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsStreamInfResolution

Ord HlsStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsStreamInfResolution

Read HlsStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsStreamInfResolution

Show HlsStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsStreamInfResolution

Generic HlsStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsStreamInfResolution

Associated Types

type Rep HlsStreamInfResolution :: Type -> Type #

NFData HlsStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsStreamInfResolution

Methods

rnf :: HlsStreamInfResolution -> () #

Hashable HlsStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsStreamInfResolution

ToJSON HlsStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsStreamInfResolution

ToJSONKey HlsStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsStreamInfResolution

FromJSON HlsStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsStreamInfResolution

FromJSONKey HlsStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsStreamInfResolution

ToLog HlsStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsStreamInfResolution

ToHeader HlsStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsStreamInfResolution

ToQuery HlsStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsStreamInfResolution

FromXML HlsStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsStreamInfResolution

ToXML HlsStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsStreamInfResolution

ToByteString HlsStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsStreamInfResolution

FromText HlsStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsStreamInfResolution

ToText HlsStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsStreamInfResolution

type Rep HlsStreamInfResolution Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsStreamInfResolution

type Rep HlsStreamInfResolution = D1 ('MetaData "HlsStreamInfResolution" "Amazonka.MediaConvert.Types.HlsStreamInfResolution" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsStreamInfResolution'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsStreamInfResolution") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsTargetDurationCompatibilityMode

newtype HlsTargetDurationCompatibilityMode Source #

When set to LEGACY, the segment target duration is always rounded up to the nearest integer value above its current value in seconds. When set to SPEC_COMPLIANT, the segment target duration is rounded up to the nearest integer value if the fractional seconds are greater than or equal to 0.5 (>= 0.5) and rounded down if they are less than 0.5 (< 0.5). You may need to use LEGACY if your client needs to ensure that the target duration is always longer than the actual duration of the segment. Some older players may experience interrupted playback when the actual duration of a track in a segment is longer than the target duration.
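
A hypothetical helper sketching the choice the description above calls for: prefer LEGACY when downstream players require the target duration to exceed every actual segment duration, otherwise SPEC_COMPLIANT. The two strings come from the description; the function itself is illustrative only.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (HlsTargetDurationCompatibilityMode (..))

-- Hypothetical decision helper based on the guidance above.
targetDurationMode :: Bool -> HlsTargetDurationCompatibilityMode
targetDurationMode needsLegacyPlayers
  | needsLegacyPlayers = HlsTargetDurationCompatibilityMode' "LEGACY"
  | otherwise = HlsTargetDurationCompatibilityMode' "SPEC_COMPLIANT"

main :: IO ()
main = print (targetDurationMode True)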

Instances

Instances details
Eq HlsTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTargetDurationCompatibilityMode

Ord HlsTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTargetDurationCompatibilityMode

Read HlsTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTargetDurationCompatibilityMode

Show HlsTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTargetDurationCompatibilityMode

Generic HlsTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTargetDurationCompatibilityMode

Associated Types

type Rep HlsTargetDurationCompatibilityMode :: Type -> Type #

NFData HlsTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTargetDurationCompatibilityMode

Hashable HlsTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTargetDurationCompatibilityMode

ToJSON HlsTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTargetDurationCompatibilityMode

ToJSONKey HlsTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTargetDurationCompatibilityMode

FromJSON HlsTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTargetDurationCompatibilityMode

FromJSONKey HlsTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTargetDurationCompatibilityMode

ToLog HlsTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTargetDurationCompatibilityMode

ToHeader HlsTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTargetDurationCompatibilityMode

ToQuery HlsTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTargetDurationCompatibilityMode

FromXML HlsTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTargetDurationCompatibilityMode

ToXML HlsTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTargetDurationCompatibilityMode

ToByteString HlsTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTargetDurationCompatibilityMode

FromText HlsTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTargetDurationCompatibilityMode

ToText HlsTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTargetDurationCompatibilityMode

type Rep HlsTargetDurationCompatibilityMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTargetDurationCompatibilityMode

type Rep HlsTargetDurationCompatibilityMode = D1 ('MetaData "HlsTargetDurationCompatibilityMode" "Amazonka.MediaConvert.Types.HlsTargetDurationCompatibilityMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsTargetDurationCompatibilityMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsTargetDurationCompatibilityMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

HlsTimedMetadataId3Frame

newtype HlsTimedMetadataId3Frame Source #

Indicates the ID3 frame that carries the timecode.

Instances

Instances details
Eq HlsTimedMetadataId3Frame Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTimedMetadataId3Frame

Ord HlsTimedMetadataId3Frame Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTimedMetadataId3Frame

Read HlsTimedMetadataId3Frame Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTimedMetadataId3Frame

Show HlsTimedMetadataId3Frame Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTimedMetadataId3Frame

Generic HlsTimedMetadataId3Frame Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTimedMetadataId3Frame

Associated Types

type Rep HlsTimedMetadataId3Frame :: Type -> Type #

NFData HlsTimedMetadataId3Frame Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTimedMetadataId3Frame

Hashable HlsTimedMetadataId3Frame Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTimedMetadataId3Frame

ToJSON HlsTimedMetadataId3Frame Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTimedMetadataId3Frame

ToJSONKey HlsTimedMetadataId3Frame Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTimedMetadataId3Frame

FromJSON HlsTimedMetadataId3Frame Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTimedMetadataId3Frame

FromJSONKey HlsTimedMetadataId3Frame Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTimedMetadataId3Frame

ToLog HlsTimedMetadataId3Frame Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTimedMetadataId3Frame

ToHeader HlsTimedMetadataId3Frame Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTimedMetadataId3Frame

ToQuery HlsTimedMetadataId3Frame Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTimedMetadataId3Frame

FromXML HlsTimedMetadataId3Frame Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTimedMetadataId3Frame

ToXML HlsTimedMetadataId3Frame Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTimedMetadataId3Frame

ToByteString HlsTimedMetadataId3Frame Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTimedMetadataId3Frame

FromText HlsTimedMetadataId3Frame Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTimedMetadataId3Frame

ToText HlsTimedMetadataId3Frame Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTimedMetadataId3Frame

type Rep HlsTimedMetadataId3Frame Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsTimedMetadataId3Frame

type Rep HlsTimedMetadataId3Frame = D1 ('MetaData "HlsTimedMetadataId3Frame" "Amazonka.MediaConvert.Types.HlsTimedMetadataId3Frame" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "HlsTimedMetadataId3Frame'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromHlsTimedMetadataId3Frame") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

ImscStylePassthrough

newtype ImscStylePassthrough Source #

Keep this setting enabled to have MediaConvert use the font style and position information from the captions source in the output. This option is available only when your input captions are IMSC, SMPTE-TT, or TTML. Disable this setting for simplified output captions.

Instances

Instances details
Eq ImscStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscStylePassthrough

Ord ImscStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscStylePassthrough

Read ImscStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscStylePassthrough

Show ImscStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscStylePassthrough

Generic ImscStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscStylePassthrough

Associated Types

type Rep ImscStylePassthrough :: Type -> Type #

NFData ImscStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscStylePassthrough

Methods

rnf :: ImscStylePassthrough -> () #

Hashable ImscStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscStylePassthrough

ToJSON ImscStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscStylePassthrough

ToJSONKey ImscStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscStylePassthrough

FromJSON ImscStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscStylePassthrough

FromJSONKey ImscStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscStylePassthrough

ToLog ImscStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscStylePassthrough

ToHeader ImscStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscStylePassthrough

ToQuery ImscStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscStylePassthrough

FromXML ImscStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscStylePassthrough

ToXML ImscStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscStylePassthrough

ToByteString ImscStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscStylePassthrough

FromText ImscStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscStylePassthrough

ToText ImscStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscStylePassthrough

type Rep ImscStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscStylePassthrough

type Rep ImscStylePassthrough = D1 ('MetaData "ImscStylePassthrough" "Amazonka.MediaConvert.Types.ImscStylePassthrough" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "ImscStylePassthrough'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromImscStylePassthrough") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

InputDeblockFilter

newtype InputDeblockFilter Source #

Enable Deblock (InputDeblockFilter) to produce smoother motion in the output. Default is disabled. Only manually controllable for MPEG2 and uncompressed video inputs.

Instances

Instances details
Eq InputDeblockFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDeblockFilter

Ord InputDeblockFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDeblockFilter

Read InputDeblockFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDeblockFilter

Show InputDeblockFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDeblockFilter

Generic InputDeblockFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDeblockFilter

Associated Types

type Rep InputDeblockFilter :: Type -> Type #

NFData InputDeblockFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDeblockFilter

Methods

rnf :: InputDeblockFilter -> () #

Hashable InputDeblockFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDeblockFilter

ToJSON InputDeblockFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDeblockFilter

ToJSONKey InputDeblockFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDeblockFilter

FromJSON InputDeblockFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDeblockFilter

FromJSONKey InputDeblockFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDeblockFilter

ToLog InputDeblockFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDeblockFilter

ToHeader InputDeblockFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDeblockFilter

ToQuery InputDeblockFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDeblockFilter

FromXML InputDeblockFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDeblockFilter

ToXML InputDeblockFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDeblockFilter

ToByteString InputDeblockFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDeblockFilter

FromText InputDeblockFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDeblockFilter

ToText InputDeblockFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDeblockFilter

type Rep InputDeblockFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDeblockFilter

type Rep InputDeblockFilter = D1 ('MetaData "InputDeblockFilter" "Amazonka.MediaConvert.Types.InputDeblockFilter" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "InputDeblockFilter'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromInputDeblockFilter") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

InputDenoiseFilter

newtype InputDenoiseFilter Source #

Enable Denoise (InputDenoiseFilter) to filter noise from the input. Default is disabled. Only applicable to MPEG2, H.264, H.265, and uncompressed video inputs.

Instances

Instances details
Eq InputDenoiseFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDenoiseFilter

Ord InputDenoiseFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDenoiseFilter

Read InputDenoiseFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDenoiseFilter

Show InputDenoiseFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDenoiseFilter

Generic InputDenoiseFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDenoiseFilter

Associated Types

type Rep InputDenoiseFilter :: Type -> Type #

NFData InputDenoiseFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDenoiseFilter

Methods

rnf :: InputDenoiseFilter -> () #

Hashable InputDenoiseFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDenoiseFilter

ToJSON InputDenoiseFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDenoiseFilter

ToJSONKey InputDenoiseFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDenoiseFilter

FromJSON InputDenoiseFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDenoiseFilter

FromJSONKey InputDenoiseFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDenoiseFilter

ToLog InputDenoiseFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDenoiseFilter

ToHeader InputDenoiseFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDenoiseFilter

ToQuery InputDenoiseFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDenoiseFilter

FromXML InputDenoiseFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDenoiseFilter

ToXML InputDenoiseFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDenoiseFilter

ToByteString InputDenoiseFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDenoiseFilter

FromText InputDenoiseFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDenoiseFilter

ToText InputDenoiseFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDenoiseFilter

type Rep InputDenoiseFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDenoiseFilter

type Rep InputDenoiseFilter = D1 ('MetaData "InputDenoiseFilter" "Amazonka.MediaConvert.Types.InputDenoiseFilter" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "InputDenoiseFilter'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromInputDenoiseFilter") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

InputFilterEnable

newtype InputFilterEnable Source #

Specify how the transcoding service applies the denoise and deblock filters. You must also enable the filters separately, with Denoise (InputDenoiseFilter) and Deblock (InputDeblockFilter). * Auto - The transcoding service determines whether to apply filtering, depending on input type and quality. * Disable - The input is not filtered. This is true even if you use the API to enable them in (InputDeblockFilter) and (InputDenoiseFilter). * Force - The input is filtered regardless of input type.
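
This setting round-trips through its text form via the FromText and ToText instances listed below. A hedged sketch of parsing the Force option, assuming the usual amazonka-core module layout and that the on-the-wire spelling is the upper-case string "FORCE" (check the bundled patterns in your version):

{-# LANGUAGE OverloadedStrings #-}
import Amazonka.Data.Text (fromText)
import Amazonka.MediaConvert.Types (InputFilterEnable)

-- Assumption: "FORCE" is the wire value of the Force option.
forceFiltering :: Either String InputFilterEnable
forceFiltering = fromText "FORCE"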

Instances

Instances details
Eq InputFilterEnable Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputFilterEnable

Ord InputFilterEnable Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputFilterEnable

Read InputFilterEnable Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputFilterEnable

Show InputFilterEnable Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputFilterEnable

Generic InputFilterEnable Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputFilterEnable

Associated Types

type Rep InputFilterEnable :: Type -> Type #

NFData InputFilterEnable Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputFilterEnable

Methods

rnf :: InputFilterEnable -> () #

Hashable InputFilterEnable Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputFilterEnable

ToJSON InputFilterEnable Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputFilterEnable

ToJSONKey InputFilterEnable Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputFilterEnable

FromJSON InputFilterEnable Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputFilterEnable

FromJSONKey InputFilterEnable Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputFilterEnable

ToLog InputFilterEnable Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputFilterEnable

ToHeader InputFilterEnable Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputFilterEnable

ToQuery InputFilterEnable Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputFilterEnable

FromXML InputFilterEnable Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputFilterEnable

ToXML InputFilterEnable Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputFilterEnable

ToByteString InputFilterEnable Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputFilterEnable

FromText InputFilterEnable Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputFilterEnable

ToText InputFilterEnable Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputFilterEnable

type Rep InputFilterEnable Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputFilterEnable

type Rep InputFilterEnable = D1 ('MetaData "InputFilterEnable" "Amazonka.MediaConvert.Types.InputFilterEnable" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "InputFilterEnable'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromInputFilterEnable") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

InputPolicy

newtype InputPolicy Source #

An input policy allows or disallows a job you submit to run based on the conditions that you specify.

Constructors

InputPolicy' 

Bundled Patterns

pattern InputPolicy_ALLOWED :: InputPolicy 
pattern InputPolicy_DISALLOWED :: InputPolicy 
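
The bundled patterns above make it straightforward to branch on the policy. A small illustrative helper (the function name is ours, not part of the library):

{-# LANGUAGE PatternSynonyms #-}
import Amazonka.MediaConvert.Types
  (InputPolicy, pattern InputPolicy_ALLOWED, pattern InputPolicy_DISALLOWED)

-- Whether a submitted job is allowed to run under this policy; the
-- wildcard covers any values the service may add later.
jobMayRun :: InputPolicy -> Bool
jobMayRun InputPolicy_ALLOWED = True
jobMayRun _                   = False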

Instances

Instances details
Eq InputPolicy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPolicy

Ord InputPolicy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPolicy

Read InputPolicy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPolicy

Show InputPolicy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPolicy

Generic InputPolicy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPolicy

Associated Types

type Rep InputPolicy :: Type -> Type #

NFData InputPolicy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPolicy

Methods

rnf :: InputPolicy -> () #

Hashable InputPolicy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPolicy

ToJSON InputPolicy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPolicy

ToJSONKey InputPolicy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPolicy

FromJSON InputPolicy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPolicy

FromJSONKey InputPolicy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPolicy

ToLog InputPolicy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPolicy

ToHeader InputPolicy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPolicy

ToQuery InputPolicy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPolicy

FromXML InputPolicy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPolicy

ToXML InputPolicy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPolicy

Methods

toXML :: InputPolicy -> XML #

ToByteString InputPolicy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPolicy

FromText InputPolicy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPolicy

ToText InputPolicy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPolicy

Methods

toText :: InputPolicy -> Text #

type Rep InputPolicy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPolicy

type Rep InputPolicy = D1 ('MetaData "InputPolicy" "Amazonka.MediaConvert.Types.InputPolicy" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "InputPolicy'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromInputPolicy") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

InputPsiControl

newtype InputPsiControl Source #

Set PSI control (InputPsiControl) for transport stream inputs to specify which data the demux process scans. * Ignore PSI - Scan all PIDs for audio and video. * Use PSI - Scan only PSI data.

Constructors

InputPsiControl' 

Instances

Instances details
Eq InputPsiControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPsiControl

Ord InputPsiControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPsiControl

Read InputPsiControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPsiControl

Show InputPsiControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPsiControl

Generic InputPsiControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPsiControl

Associated Types

type Rep InputPsiControl :: Type -> Type #

NFData InputPsiControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPsiControl

Methods

rnf :: InputPsiControl -> () #

Hashable InputPsiControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPsiControl

ToJSON InputPsiControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPsiControl

ToJSONKey InputPsiControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPsiControl

FromJSON InputPsiControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPsiControl

FromJSONKey InputPsiControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPsiControl

ToLog InputPsiControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPsiControl

ToHeader InputPsiControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPsiControl

ToQuery InputPsiControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPsiControl

FromXML InputPsiControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPsiControl

ToXML InputPsiControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPsiControl

Methods

toXML :: InputPsiControl -> XML #

ToByteString InputPsiControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPsiControl

FromText InputPsiControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPsiControl

ToText InputPsiControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPsiControl

type Rep InputPsiControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputPsiControl

type Rep InputPsiControl = D1 ('MetaData "InputPsiControl" "Amazonka.MediaConvert.Types.InputPsiControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "InputPsiControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromInputPsiControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

InputRotate

newtype InputRotate Source #

Use Rotate (InputRotate) to specify how the service rotates your video. You can choose automatic rotation or specify a rotation. You can specify a clockwise rotation of 0, 90, 180, or 270 degrees. If your input video container is .mov or .mp4 and your input has rotation metadata, you can choose Automatic to have the service rotate your video according to the rotation specified in the metadata. The rotation must be within one degree of 90, 180, or 270 degrees. If the rotation metadata specifies any other rotation, the service will default to no rotation. By default, the service does no rotation, even if your input video has rotation metadata. The service doesn't pass through rotation metadata.

Constructors

InputRotate' 

Instances

Instances details
Eq InputRotate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputRotate

Ord InputRotate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputRotate

Read InputRotate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputRotate

Show InputRotate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputRotate

Generic InputRotate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputRotate

Associated Types

type Rep InputRotate :: Type -> Type #

NFData InputRotate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputRotate

Methods

rnf :: InputRotate -> () #

Hashable InputRotate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputRotate

ToJSON InputRotate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputRotate

ToJSONKey InputRotate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputRotate

FromJSON InputRotate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputRotate

FromJSONKey InputRotate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputRotate

ToLog InputRotate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputRotate

ToHeader InputRotate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputRotate

ToQuery InputRotate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputRotate

FromXML InputRotate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputRotate

ToXML InputRotate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputRotate

Methods

toXML :: InputRotate -> XML #

ToByteString InputRotate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputRotate

FromText InputRotate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputRotate

ToText InputRotate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputRotate

Methods

toText :: InputRotate -> Text #

type Rep InputRotate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputRotate

type Rep InputRotate = D1 ('MetaData "InputRotate" "Amazonka.MediaConvert.Types.InputRotate" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "InputRotate'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromInputRotate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

InputSampleRange

newtype InputSampleRange Source #

If the sample range metadata in your input video is accurate, or if you don't know about sample range, keep the default value, Follow (FOLLOW), for this setting. When you do, the service automatically detects your input sample range. If your input video has metadata indicating the wrong sample range, specify the accurate sample range here. When you do, MediaConvert ignores any sample range information in the input metadata. Regardless of whether MediaConvert uses the input sample range or the sample range that you specify, MediaConvert uses the sample range for transcoding and also writes it to the output metadata.

Instances

Instances details
Eq InputSampleRange Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputSampleRange

Ord InputSampleRange Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputSampleRange

Read InputSampleRange Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputSampleRange

Show InputSampleRange Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputSampleRange

Generic InputSampleRange Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputSampleRange

Associated Types

type Rep InputSampleRange :: Type -> Type #

NFData InputSampleRange Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputSampleRange

Methods

rnf :: InputSampleRange -> () #

Hashable InputSampleRange Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputSampleRange

ToJSON InputSampleRange Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputSampleRange

ToJSONKey InputSampleRange Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputSampleRange

FromJSON InputSampleRange Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputSampleRange

FromJSONKey InputSampleRange Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputSampleRange

ToLog InputSampleRange Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputSampleRange

ToHeader InputSampleRange Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputSampleRange

ToQuery InputSampleRange Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputSampleRange

FromXML InputSampleRange Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputSampleRange

ToXML InputSampleRange Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputSampleRange

ToByteString InputSampleRange Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputSampleRange

FromText InputSampleRange Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputSampleRange

ToText InputSampleRange Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputSampleRange

type Rep InputSampleRange Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputSampleRange

type Rep InputSampleRange = D1 ('MetaData "InputSampleRange" "Amazonka.MediaConvert.Types.InputSampleRange" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "InputSampleRange'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromInputSampleRange") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

InputScanType

newtype InputScanType Source #

When you have a progressive segmented frame (PsF) input, use this setting to flag the input as PsF. MediaConvert doesn't automatically detect PsF. Therefore, flagging your input as PsF results in better preservation of video quality when you do deinterlacing and frame rate conversion. If you don't specify, the default value is Auto (AUTO). Auto is the correct setting for all inputs that are not PsF. Don't set this value to PsF when your input is interlaced. Doing so creates horizontal interlacing artifacts.

Constructors

InputScanType' 

Bundled Patterns

pattern InputScanType_AUTO :: InputScanType 
pattern InputScanType_PSF :: InputScanType 
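
Using the AUTO and PSF patterns above, flagging PsF sources might look like this (an illustrative helper, not library code):

{-# LANGUAGE PatternSynonyms #-}
import Amazonka.MediaConvert.Types
  (InputScanType, pattern InputScanType_AUTO, pattern InputScanType_PSF)

-- Flag known PsF sources explicitly; everything else stays on AUTO,
-- which is the correct setting for all non-PsF inputs per the note above.
scanTypeFor :: Bool -> InputScanType
scanTypeFor isPsf = if isPsf then InputScanType_PSF else InputScanType_AUTO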

Instances

Instances details
Eq InputScanType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputScanType

Ord InputScanType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputScanType

Read InputScanType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputScanType

Show InputScanType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputScanType

Generic InputScanType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputScanType

Associated Types

type Rep InputScanType :: Type -> Type #

NFData InputScanType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputScanType

Methods

rnf :: InputScanType -> () #

Hashable InputScanType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputScanType

ToJSON InputScanType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputScanType

ToJSONKey InputScanType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputScanType

FromJSON InputScanType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputScanType

FromJSONKey InputScanType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputScanType

ToLog InputScanType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputScanType

ToHeader InputScanType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputScanType

ToQuery InputScanType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputScanType

FromXML InputScanType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputScanType

ToXML InputScanType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputScanType

Methods

toXML :: InputScanType -> XML #

ToByteString InputScanType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputScanType

FromText InputScanType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputScanType

ToText InputScanType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputScanType

Methods

toText :: InputScanType -> Text #

type Rep InputScanType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputScanType

type Rep InputScanType = D1 ('MetaData "InputScanType" "Amazonka.MediaConvert.Types.InputScanType" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "InputScanType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromInputScanType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

InputTimecodeSource

newtype InputTimecodeSource Source #

Use this Timecode source setting, located under the input settings (InputTimecodeSource), to specify how the service counts input video frames. This input frame count affects only the behavior of features that apply to a single input at a time, such as input clipping and synchronizing some captions formats. Choose Embedded (EMBEDDED) to use the timecodes in your input video. Choose Start at zero (ZEROBASED) to start the first frame at zero. Choose Specified start (SPECIFIEDSTART) to start the first frame at the timecode that you specify in the setting Start timecode (timecodeStart). If you don't specify a value for Timecode source, the service will use Embedded by default. For more information about timecodes, see https://docs.aws.amazon.com/console/mediaconvert/timecode.
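
The values named above (EMBEDDED, ZEROBASED, SPECIFIEDSTART) are the strings the FromText and ToText instances below work with. A hedged sketch of parsing one of them, assuming the usual amazonka-core module layout:

{-# LANGUAGE OverloadedStrings #-}
import Amazonka.Data.Text (fromText)
import Amazonka.MediaConvert.Types (InputTimecodeSource)

-- "ZEROBASED" is one of the values named in the description above.
zeroBased :: Either String InputTimecodeSource
zeroBased = fromText "ZEROBASED"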

Instances

Instances details
Eq InputTimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTimecodeSource

Ord InputTimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTimecodeSource

Read InputTimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTimecodeSource

Show InputTimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTimecodeSource

Generic InputTimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTimecodeSource

Associated Types

type Rep InputTimecodeSource :: Type -> Type #

NFData InputTimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTimecodeSource

Methods

rnf :: InputTimecodeSource -> () #

Hashable InputTimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTimecodeSource

ToJSON InputTimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTimecodeSource

ToJSONKey InputTimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTimecodeSource

FromJSON InputTimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTimecodeSource

FromJSONKey InputTimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTimecodeSource

ToLog InputTimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTimecodeSource

ToHeader InputTimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTimecodeSource

ToQuery InputTimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTimecodeSource

FromXML InputTimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTimecodeSource

ToXML InputTimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTimecodeSource

ToByteString InputTimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTimecodeSource

FromText InputTimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTimecodeSource

ToText InputTimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTimecodeSource

type Rep InputTimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTimecodeSource

type Rep InputTimecodeSource = D1 ('MetaData "InputTimecodeSource" "Amazonka.MediaConvert.Types.InputTimecodeSource" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "InputTimecodeSource'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromInputTimecodeSource") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

JobPhase

newtype JobPhase Source #

A job's phase can be PROBING, TRANSCODING, or UPLOADING.

Constructors

JobPhase' 

Fields

Bundled Patterns

pattern JobPhase_PROBING :: JobPhase 
pattern JobPhase_TRANSCODING :: JobPhase 
pattern JobPhase_UPLOADING :: JobPhase 
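
Matching on the bundled patterns above gives a simple way to turn the phase into a progress label; the label text here is purely illustrative:

{-# LANGUAGE PatternSynonyms #-}
import Amazonka.MediaConvert.Types
  ( JobPhase
  , pattern JobPhase_PROBING
  , pattern JobPhase_TRANSCODING
  , pattern JobPhase_UPLOADING
  )

-- Human-readable label per documented phase; the wildcard handles any
-- phases added to the API later.
phaseLabel :: JobPhase -> String
phaseLabel JobPhase_PROBING     = "probing input"
phaseLabel JobPhase_TRANSCODING = "transcoding"
phaseLabel JobPhase_UPLOADING   = "uploading outputs"
phaseLabel _                    = "unknown phase"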

Instances

Instances details
Eq JobPhase Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobPhase

Ord JobPhase Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobPhase

Read JobPhase Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobPhase

Show JobPhase Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobPhase

Generic JobPhase Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobPhase

Associated Types

type Rep JobPhase :: Type -> Type #

Methods

from :: JobPhase -> Rep JobPhase x #

to :: Rep JobPhase x -> JobPhase #

NFData JobPhase Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobPhase

Methods

rnf :: JobPhase -> () #

Hashable JobPhase Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobPhase

Methods

hashWithSalt :: Int -> JobPhase -> Int #

hash :: JobPhase -> Int #

ToJSON JobPhase Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobPhase

ToJSONKey JobPhase Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobPhase

FromJSON JobPhase Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobPhase

FromJSONKey JobPhase Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobPhase

ToLog JobPhase Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobPhase

ToHeader JobPhase Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobPhase

Methods

toHeader :: HeaderName -> JobPhase -> [Header] #

ToQuery JobPhase Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobPhase

FromXML JobPhase Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobPhase

ToXML JobPhase Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobPhase

Methods

toXML :: JobPhase -> XML #

ToByteString JobPhase Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobPhase

Methods

toBS :: JobPhase -> ByteString #

FromText JobPhase Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobPhase

ToText JobPhase Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobPhase

Methods

toText :: JobPhase -> Text #

type Rep JobPhase Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobPhase

type Rep JobPhase = D1 ('MetaData "JobPhase" "Amazonka.MediaConvert.Types.JobPhase" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "JobPhase'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromJobPhase") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

JobStatus

newtype JobStatus Source #

A job's status can be SUBMITTED, PROGRESSING, COMPLETE, CANCELED, or ERROR.
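
A common use is checking for a terminal status. A hedged sketch that compares against the status names listed above via the ToText instance, assuming the usual amazonka-core module layout (the helper itself is not part of the library):

{-# LANGUAGE OverloadedStrings #-}
import Amazonka.Data.Text (toText)
import Amazonka.MediaConvert.Types (JobStatus)

-- COMPLETE, CANCELED and ERROR are the terminal statuses named above.
isTerminal :: JobStatus -> Bool
isTerminal status = toText status `elem` ["COMPLETE", "CANCELED", "ERROR"]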

Constructors

JobStatus' 

Fields

Instances

Instances details
Eq JobStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobStatus

Ord JobStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobStatus

Read JobStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobStatus

Show JobStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobStatus

Generic JobStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobStatus

Associated Types

type Rep JobStatus :: Type -> Type #

NFData JobStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobStatus

Methods

rnf :: JobStatus -> () #

Hashable JobStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobStatus

ToJSON JobStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobStatus

ToJSONKey JobStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobStatus

FromJSON JobStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobStatus

FromJSONKey JobStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobStatus

ToLog JobStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobStatus

ToHeader JobStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobStatus

ToQuery JobStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobStatus

FromXML JobStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobStatus

ToXML JobStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobStatus

Methods

toXML :: JobStatus -> XML #

ToByteString JobStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobStatus

Methods

toBS :: JobStatus -> ByteString #

FromText JobStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobStatus

ToText JobStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobStatus

Methods

toText :: JobStatus -> Text #

type Rep JobStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobStatus

type Rep JobStatus = D1 ('MetaData "JobStatus" "Amazonka.MediaConvert.Types.JobStatus" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "JobStatus'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromJobStatus") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

JobTemplateListBy

newtype JobTemplateListBy Source #

Optional. When you request a list of job templates, you can choose to list them alphabetically by NAME or chronologically by CREATION_DATE. If you don't specify, the service will list them by name.

Instances

Instances details
Eq JobTemplateListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateListBy

Ord JobTemplateListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateListBy

Read JobTemplateListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateListBy

Show JobTemplateListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateListBy

Generic JobTemplateListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateListBy

Associated Types

type Rep JobTemplateListBy :: Type -> Type #

NFData JobTemplateListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateListBy

Methods

rnf :: JobTemplateListBy -> () #

Hashable JobTemplateListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateListBy

ToJSON JobTemplateListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateListBy

ToJSONKey JobTemplateListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateListBy

FromJSON JobTemplateListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateListBy

FromJSONKey JobTemplateListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateListBy

ToLog JobTemplateListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateListBy

ToHeader JobTemplateListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateListBy

ToQuery JobTemplateListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateListBy

FromXML JobTemplateListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateListBy

ToXML JobTemplateListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateListBy

ToByteString JobTemplateListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateListBy

FromText JobTemplateListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateListBy

ToText JobTemplateListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateListBy

type Rep JobTemplateListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateListBy

type Rep JobTemplateListBy = D1 ('MetaData "JobTemplateListBy" "Amazonka.MediaConvert.Types.JobTemplateListBy" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "JobTemplateListBy'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromJobTemplateListBy") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

LanguageCode

newtype LanguageCode Source #

Specify the language, using the ISO 639-2 three-letter code listed at https://www.loc.gov/standards/iso639-2/php/code_list.php.

Constructors

LanguageCode' 

Bundled Patterns

pattern LanguageCode_AAR :: LanguageCode 
pattern LanguageCode_ABK :: LanguageCode 
pattern LanguageCode_AFR :: LanguageCode 
pattern LanguageCode_AKA :: LanguageCode 
pattern LanguageCode_AMH :: LanguageCode 
pattern LanguageCode_ARA :: LanguageCode 
pattern LanguageCode_ARG :: LanguageCode 
pattern LanguageCode_ASM :: LanguageCode 
pattern LanguageCode_AVA :: LanguageCode 
pattern LanguageCode_AVE :: LanguageCode 
pattern LanguageCode_AYM :: LanguageCode 
pattern LanguageCode_AZE :: LanguageCode 
pattern LanguageCode_BAK :: LanguageCode 
pattern LanguageCode_BAM :: LanguageCode 
pattern LanguageCode_BEL :: LanguageCode 
pattern LanguageCode_BEN :: LanguageCode 
pattern LanguageCode_BIH :: LanguageCode 
pattern LanguageCode_BIS :: LanguageCode 
pattern LanguageCode_BOD :: LanguageCode 
pattern LanguageCode_BOS :: LanguageCode 
pattern LanguageCode_BRE :: LanguageCode 
pattern LanguageCode_BUL :: LanguageCode 
pattern LanguageCode_CAT :: LanguageCode 
pattern LanguageCode_CES :: LanguageCode 
pattern LanguageCode_CHA :: LanguageCode 
pattern LanguageCode_CHE :: LanguageCode 
pattern LanguageCode_CHU :: LanguageCode 
pattern LanguageCode_CHV :: LanguageCode 
pattern LanguageCode_COR :: LanguageCode 
pattern LanguageCode_COS :: LanguageCode 
pattern LanguageCode_CRE :: LanguageCode 
pattern LanguageCode_CYM :: LanguageCode 
pattern LanguageCode_DAN :: LanguageCode 
pattern LanguageCode_DEU :: LanguageCode 
pattern LanguageCode_DIV :: LanguageCode 
pattern LanguageCode_DZO :: LanguageCode 
pattern LanguageCode_ELL :: LanguageCode 
pattern LanguageCode_ENG :: LanguageCode 
pattern LanguageCode_ENM :: LanguageCode 
pattern LanguageCode_EPO :: LanguageCode 
pattern LanguageCode_EST :: LanguageCode 
pattern LanguageCode_EUS :: LanguageCode 
pattern LanguageCode_EWE :: LanguageCode 
pattern LanguageCode_FAO :: LanguageCode 
pattern LanguageCode_FAS :: LanguageCode 
pattern LanguageCode_FIJ :: LanguageCode 
pattern LanguageCode_FIN :: LanguageCode 
pattern LanguageCode_FRA :: LanguageCode 
pattern LanguageCode_FRM :: LanguageCode 
pattern LanguageCode_FRY :: LanguageCode 
pattern LanguageCode_FUL :: LanguageCode 
pattern LanguageCode_GER :: LanguageCode 
pattern LanguageCode_GLA :: LanguageCode 
pattern LanguageCode_GLE :: LanguageCode 
pattern LanguageCode_GLG :: LanguageCode 
pattern LanguageCode_GLV :: LanguageCode 
pattern LanguageCode_GRN :: LanguageCode 
pattern LanguageCode_GUJ :: LanguageCode 
pattern LanguageCode_HAT :: LanguageCode 
pattern LanguageCode_HAU :: LanguageCode 
pattern LanguageCode_HEB :: LanguageCode 
pattern LanguageCode_HER :: LanguageCode 
pattern LanguageCode_HIN :: LanguageCode 
pattern LanguageCode_HMO :: LanguageCode 
pattern LanguageCode_HRV :: LanguageCode 
pattern LanguageCode_HUN :: LanguageCode 
pattern LanguageCode_HYE :: LanguageCode 
pattern LanguageCode_IBO :: LanguageCode 
pattern LanguageCode_IDO :: LanguageCode 
pattern LanguageCode_III :: LanguageCode 
pattern LanguageCode_IKU :: LanguageCode 
pattern LanguageCode_ILE :: LanguageCode 
pattern LanguageCode_INA :: LanguageCode 
pattern LanguageCode_IND :: LanguageCode 
pattern LanguageCode_IPK :: LanguageCode 
pattern LanguageCode_ISL :: LanguageCode 
pattern LanguageCode_ITA :: LanguageCode 
pattern LanguageCode_JAV :: LanguageCode 
pattern LanguageCode_JPN :: LanguageCode 
pattern LanguageCode_KAL :: LanguageCode 
pattern LanguageCode_KAN :: LanguageCode 
pattern LanguageCode_KAS :: LanguageCode 
pattern LanguageCode_KAT :: LanguageCode 
pattern LanguageCode_KAU :: LanguageCode 
pattern LanguageCode_KAZ :: LanguageCode 
pattern LanguageCode_KHM :: LanguageCode 
pattern LanguageCode_KIK :: LanguageCode 
pattern LanguageCode_KIN :: LanguageCode 
pattern LanguageCode_KIR :: LanguageCode 
pattern LanguageCode_KOM :: LanguageCode 
pattern LanguageCode_KON :: LanguageCode 
pattern LanguageCode_KOR :: LanguageCode 
pattern LanguageCode_KUA :: LanguageCode 
pattern LanguageCode_KUR :: LanguageCode 
pattern LanguageCode_LAO :: LanguageCode 
pattern LanguageCode_LAT :: LanguageCode 
pattern LanguageCode_LAV :: LanguageCode 
pattern LanguageCode_LIM :: LanguageCode 
pattern LanguageCode_LIN :: LanguageCode 
pattern LanguageCode_LIT :: LanguageCode 
pattern LanguageCode_LTZ :: LanguageCode 
pattern LanguageCode_LUB :: LanguageCode 
pattern LanguageCode_LUG :: LanguageCode 
pattern LanguageCode_MAH :: LanguageCode 
pattern LanguageCode_MAL :: LanguageCode 
pattern LanguageCode_MAR :: LanguageCode 
pattern LanguageCode_MKD :: LanguageCode 
pattern LanguageCode_MLG :: LanguageCode 
pattern LanguageCode_MLT :: LanguageCode 
pattern LanguageCode_MON :: LanguageCode 
pattern LanguageCode_MRI :: LanguageCode 
pattern LanguageCode_MSA :: LanguageCode 
pattern LanguageCode_MYA :: LanguageCode 
pattern LanguageCode_NAU :: LanguageCode 
pattern LanguageCode_NAV :: LanguageCode 
pattern LanguageCode_NBL :: LanguageCode 
pattern LanguageCode_NDE :: LanguageCode 
pattern LanguageCode_NDO :: LanguageCode 
pattern LanguageCode_NEP :: LanguageCode 
pattern LanguageCode_NLD :: LanguageCode 
pattern LanguageCode_NNO :: LanguageCode 
pattern LanguageCode_NOB :: LanguageCode 
pattern LanguageCode_NOR :: LanguageCode 
pattern LanguageCode_NYA :: LanguageCode 
pattern LanguageCode_OCI :: LanguageCode 
pattern LanguageCode_OJI :: LanguageCode 
pattern LanguageCode_ORI :: LanguageCode 
pattern LanguageCode_ORJ :: LanguageCode 
pattern LanguageCode_ORM :: LanguageCode 
pattern LanguageCode_OSS :: LanguageCode 
pattern LanguageCode_PAN :: LanguageCode 
pattern LanguageCode_PLI :: LanguageCode 
pattern LanguageCode_POL :: LanguageCode 
pattern LanguageCode_POR :: LanguageCode 
pattern LanguageCode_PUS :: LanguageCode 
pattern LanguageCode_QAA :: LanguageCode 
pattern LanguageCode_QPC :: LanguageCode 
pattern LanguageCode_QUE :: LanguageCode 
pattern LanguageCode_ROH :: LanguageCode 
pattern LanguageCode_RON :: LanguageCode 
pattern LanguageCode_RUN :: LanguageCode 
pattern LanguageCode_RUS :: LanguageCode 
pattern LanguageCode_SAG :: LanguageCode 
pattern LanguageCode_SAN :: LanguageCode 
pattern LanguageCode_SIN :: LanguageCode 
pattern LanguageCode_SLK :: LanguageCode 
pattern LanguageCode_SLV :: LanguageCode 
pattern LanguageCode_SME :: LanguageCode 
pattern LanguageCode_SMO :: LanguageCode 
pattern LanguageCode_SNA :: LanguageCode 
pattern LanguageCode_SND :: LanguageCode 
pattern LanguageCode_SOM :: LanguageCode 
pattern LanguageCode_SOT :: LanguageCode 
pattern LanguageCode_SPA :: LanguageCode 
pattern LanguageCode_SQI :: LanguageCode 
pattern LanguageCode_SRB :: LanguageCode 
pattern LanguageCode_SRD :: LanguageCode 
pattern LanguageCode_SSW :: LanguageCode 
pattern LanguageCode_SUN :: LanguageCode 
pattern LanguageCode_SWA :: LanguageCode 
pattern LanguageCode_SWE :: LanguageCode 
pattern LanguageCode_TAH :: LanguageCode 
pattern LanguageCode_TAM :: LanguageCode 
pattern LanguageCode_TAT :: LanguageCode 
pattern LanguageCode_TEL :: LanguageCode 
pattern LanguageCode_TGK :: LanguageCode 
pattern LanguageCode_TGL :: LanguageCode 
pattern LanguageCode_THA :: LanguageCode 
pattern LanguageCode_TIR :: LanguageCode 
pattern LanguageCode_TNG :: LanguageCode 
pattern LanguageCode_TON :: LanguageCode 
pattern LanguageCode_TSN :: LanguageCode 
pattern LanguageCode_TSO :: LanguageCode 
pattern LanguageCode_TUK :: LanguageCode 
pattern LanguageCode_TUR :: LanguageCode 
pattern LanguageCode_TWI :: LanguageCode 
pattern LanguageCode_UIG :: LanguageCode 
pattern LanguageCode_UKR :: LanguageCode 
pattern LanguageCode_URD :: LanguageCode 
pattern LanguageCode_UZB :: LanguageCode 
pattern LanguageCode_VEN :: LanguageCode 
pattern LanguageCode_VIE :: LanguageCode 
pattern LanguageCode_VOL :: LanguageCode 
pattern LanguageCode_WLN :: LanguageCode 
pattern LanguageCode_WOL :: LanguageCode 
pattern LanguageCode_XHO :: LanguageCode 
pattern LanguageCode_YID :: LanguageCode 
pattern LanguageCode_YOR :: LanguageCode 
pattern LanguageCode_ZHA :: LanguageCode 
pattern LanguageCode_ZHO :: LanguageCode 
pattern LanguageCode_ZUL :: LanguageCode 
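
Each bundled pattern above wraps the corresponding ISO 639-2 code as its text value. An illustrative snippet, assuming the usual amazonka-core module layout:

{-# LANGUAGE PatternSynonyms #-}
import Data.Text (Text)
import Amazonka.Data.Text (toText)
import Amazonka.MediaConvert.Types (LanguageCode, pattern LanguageCode_ENG)

-- The wire value of LanguageCode_ENG is the ISO 639-2 code "ENG".
englishCode :: Text
englishCode = toText LanguageCode_ENG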

Instances

Instances details
Eq LanguageCode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.LanguageCode

Ord LanguageCode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.LanguageCode

Read LanguageCode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.LanguageCode

Show LanguageCode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.LanguageCode

Generic LanguageCode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.LanguageCode

Associated Types

type Rep LanguageCode :: Type -> Type #

NFData LanguageCode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.LanguageCode

Methods

rnf :: LanguageCode -> () #

Hashable LanguageCode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.LanguageCode

ToJSON LanguageCode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.LanguageCode

ToJSONKey LanguageCode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.LanguageCode

FromJSON LanguageCode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.LanguageCode

FromJSONKey LanguageCode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.LanguageCode

ToLog LanguageCode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.LanguageCode

ToHeader LanguageCode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.LanguageCode

ToQuery LanguageCode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.LanguageCode

FromXML LanguageCode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.LanguageCode

ToXML LanguageCode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.LanguageCode

Methods

toXML :: LanguageCode -> XML #

ToByteString LanguageCode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.LanguageCode

FromText LanguageCode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.LanguageCode

ToText LanguageCode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.LanguageCode

Methods

toText :: LanguageCode -> Text #

type Rep LanguageCode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.LanguageCode

type Rep LanguageCode = D1 ('MetaData "LanguageCode" "Amazonka.MediaConvert.Types.LanguageCode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "LanguageCode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromLanguageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

M2tsAudioBufferModel

newtype M2tsAudioBufferModel Source #

Selects between the DVB and ATSC buffer models for Dolby Digital audio.

Instances

Instances details
Eq M2tsAudioBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioBufferModel

Ord M2tsAudioBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioBufferModel

Read M2tsAudioBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioBufferModel

Show M2tsAudioBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioBufferModel

Generic M2tsAudioBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioBufferModel

Associated Types

type Rep M2tsAudioBufferModel :: Type -> Type #

NFData M2tsAudioBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioBufferModel

Methods

rnf :: M2tsAudioBufferModel -> () #

Hashable M2tsAudioBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioBufferModel

ToJSON M2tsAudioBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioBufferModel

ToJSONKey M2tsAudioBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioBufferModel

FromJSON M2tsAudioBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioBufferModel

FromJSONKey M2tsAudioBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioBufferModel

ToLog M2tsAudioBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioBufferModel

ToHeader M2tsAudioBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioBufferModel

ToQuery M2tsAudioBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioBufferModel

FromXML M2tsAudioBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioBufferModel

ToXML M2tsAudioBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioBufferModel

ToByteString M2tsAudioBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioBufferModel

FromText M2tsAudioBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioBufferModel

ToText M2tsAudioBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioBufferModel

type Rep M2tsAudioBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioBufferModel

type Rep M2tsAudioBufferModel = D1 ('MetaData "M2tsAudioBufferModel" "Amazonka.MediaConvert.Types.M2tsAudioBufferModel" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "M2tsAudioBufferModel'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromM2tsAudioBufferModel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

M2tsAudioDuration

newtype M2tsAudioDuration Source #

Specify this setting only when your output will be consumed by a downstream repackaging workflow that is sensitive to very small duration differences between video and audio. For this situation, choose Match video duration (MATCH_VIDEO_DURATION). In all other cases, keep the default value, Default codec duration (DEFAULT_CODEC_DURATION). When you choose Match video duration, MediaConvert pads the output audio streams with silence or trims them to ensure that the total duration of each audio stream is at least as long as the total duration of the video stream. After padding or trimming, the audio stream duration is no more than one frame longer than the video stream. MediaConvert applies audio padding or trimming only to the end of the last segment of the output. For unsegmented outputs, MediaConvert adds padding only to the end of the file. When you keep the default value, any minor discrepancies between audio and video duration will depend on your output audio codec.

Instances

Instances details
Eq M2tsAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioDuration

Ord M2tsAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioDuration

Read M2tsAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioDuration

Show M2tsAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioDuration

Generic M2tsAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioDuration

Associated Types

type Rep M2tsAudioDuration :: Type -> Type #

NFData M2tsAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioDuration

Methods

rnf :: M2tsAudioDuration -> () #

Hashable M2tsAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioDuration

ToJSON M2tsAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioDuration

ToJSONKey M2tsAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioDuration

FromJSON M2tsAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioDuration

FromJSONKey M2tsAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioDuration

ToLog M2tsAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioDuration

ToHeader M2tsAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioDuration

ToQuery M2tsAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioDuration

FromXML M2tsAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioDuration

ToXML M2tsAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioDuration

ToByteString M2tsAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioDuration

FromText M2tsAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioDuration

ToText M2tsAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioDuration

type Rep M2tsAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsAudioDuration

type Rep M2tsAudioDuration = D1 ('MetaData "M2tsAudioDuration" "Amazonka.MediaConvert.Types.M2tsAudioDuration" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "M2tsAudioDuration'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromM2tsAudioDuration") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

M2tsBufferModel

newtype M2tsBufferModel Source #

Controls what buffer model to use for accurate interleaving. If set to MULTIPLEX, the multiplex buffer model is used. If set to NONE, latency can be lower, but low-memory devices may not be able to play back the stream without interruptions.

Constructors

M2tsBufferModel' 

Instances

Instances details
Eq M2tsBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsBufferModel

Ord M2tsBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsBufferModel

Read M2tsBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsBufferModel

Show M2tsBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsBufferModel

Generic M2tsBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsBufferModel

Associated Types

type Rep M2tsBufferModel :: Type -> Type #

NFData M2tsBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsBufferModel

Methods

rnf :: M2tsBufferModel -> () #

Hashable M2tsBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsBufferModel

ToJSON M2tsBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsBufferModel

ToJSONKey M2tsBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsBufferModel

FromJSON M2tsBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsBufferModel

FromJSONKey M2tsBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsBufferModel

ToLog M2tsBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsBufferModel

ToHeader M2tsBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsBufferModel

ToQuery M2tsBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsBufferModel

FromXML M2tsBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsBufferModel

ToXML M2tsBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsBufferModel

Methods

toXML :: M2tsBufferModel -> XML #

ToByteString M2tsBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsBufferModel

FromText M2tsBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsBufferModel

ToText M2tsBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsBufferModel

type Rep M2tsBufferModel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsBufferModel

type Rep M2tsBufferModel = D1 ('MetaData "M2tsBufferModel" "Amazonka.MediaConvert.Types.M2tsBufferModel" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "M2tsBufferModel'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromM2tsBufferModel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

M2tsDataPtsControl

newtype M2tsDataPtsControl Source #

If you select ALIGN_TO_VIDEO, MediaConvert writes captions and data packets with Presentation Timestamp (PTS) values greater than or equal to the first video packet PTS (MediaConvert drops captions and data packets with lesser PTS values). Keep the default value (AUTO) to allow all PTS values.

Instances

Instances details
Eq M2tsDataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsDataPtsControl

Ord M2tsDataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsDataPtsControl

Read M2tsDataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsDataPtsControl

Show M2tsDataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsDataPtsControl

Generic M2tsDataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsDataPtsControl

Associated Types

type Rep M2tsDataPtsControl :: Type -> Type #

NFData M2tsDataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsDataPtsControl

Methods

rnf :: M2tsDataPtsControl -> () #

Hashable M2tsDataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsDataPtsControl

ToJSON M2tsDataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsDataPtsControl

ToJSONKey M2tsDataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsDataPtsControl

FromJSON M2tsDataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsDataPtsControl

FromJSONKey M2tsDataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsDataPtsControl

ToLog M2tsDataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsDataPtsControl

ToHeader M2tsDataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsDataPtsControl

ToQuery M2tsDataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsDataPtsControl

FromXML M2tsDataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsDataPtsControl

ToXML M2tsDataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsDataPtsControl

ToByteString M2tsDataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsDataPtsControl

FromText M2tsDataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsDataPtsControl

ToText M2tsDataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsDataPtsControl

type Rep M2tsDataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsDataPtsControl

type Rep M2tsDataPtsControl = D1 ('MetaData "M2tsDataPtsControl" "Amazonka.MediaConvert.Types.M2tsDataPtsControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "M2tsDataPtsControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromM2tsDataPtsControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

M2tsEbpAudioInterval

newtype M2tsEbpAudioInterval Source #

When set to VIDEO_AND_FIXED_INTERVALS, audio EBP markers will be added to partitions 3 and 4. The interval between these additional markers will be fixed, and will be slightly shorter than the video EBP marker interval. When set to VIDEO_INTERVAL, these additional markers will not be inserted. Only applicable when EBP segmentation markers are selected (segmentationMarkers is EBP or EBP_LEGACY).

Instances

Instances details
Eq M2tsEbpAudioInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpAudioInterval

Ord M2tsEbpAudioInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpAudioInterval

Read M2tsEbpAudioInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpAudioInterval

Show M2tsEbpAudioInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpAudioInterval

Generic M2tsEbpAudioInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpAudioInterval

Associated Types

type Rep M2tsEbpAudioInterval :: Type -> Type #

NFData M2tsEbpAudioInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpAudioInterval

Methods

rnf :: M2tsEbpAudioInterval -> () #

Hashable M2tsEbpAudioInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpAudioInterval

ToJSON M2tsEbpAudioInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpAudioInterval

ToJSONKey M2tsEbpAudioInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpAudioInterval

FromJSON M2tsEbpAudioInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpAudioInterval

FromJSONKey M2tsEbpAudioInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpAudioInterval

ToLog M2tsEbpAudioInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpAudioInterval

ToHeader M2tsEbpAudioInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpAudioInterval

ToQuery M2tsEbpAudioInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpAudioInterval

FromXML M2tsEbpAudioInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpAudioInterval

ToXML M2tsEbpAudioInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpAudioInterval

ToByteString M2tsEbpAudioInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpAudioInterval

FromText M2tsEbpAudioInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpAudioInterval

ToText M2tsEbpAudioInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpAudioInterval

type Rep M2tsEbpAudioInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpAudioInterval

type Rep M2tsEbpAudioInterval = D1 ('MetaData "M2tsEbpAudioInterval" "Amazonka.MediaConvert.Types.M2tsEbpAudioInterval" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "M2tsEbpAudioInterval'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromM2tsEbpAudioInterval") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

M2tsEbpPlacement

newtype M2tsEbpPlacement Source #

Selects which PIDs to place EBP markers on. They can either be placed only on the video PID, or on both the video PID and all audio PIDs. Only applicable when EBP segmentation markers are selected (segmentationMarkers is EBP or EBP_LEGACY).

Instances

Instances details
Eq M2tsEbpPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpPlacement

Ord M2tsEbpPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpPlacement

Read M2tsEbpPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpPlacement

Show M2tsEbpPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpPlacement

Generic M2tsEbpPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpPlacement

Associated Types

type Rep M2tsEbpPlacement :: Type -> Type #

NFData M2tsEbpPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpPlacement

Methods

rnf :: M2tsEbpPlacement -> () #

Hashable M2tsEbpPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpPlacement

ToJSON M2tsEbpPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpPlacement

ToJSONKey M2tsEbpPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpPlacement

FromJSON M2tsEbpPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpPlacement

FromJSONKey M2tsEbpPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpPlacement

ToLog M2tsEbpPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpPlacement

ToHeader M2tsEbpPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpPlacement

ToQuery M2tsEbpPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpPlacement

FromXML M2tsEbpPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpPlacement

ToXML M2tsEbpPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpPlacement

ToByteString M2tsEbpPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpPlacement

FromText M2tsEbpPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpPlacement

ToText M2tsEbpPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpPlacement

type Rep M2tsEbpPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEbpPlacement

type Rep M2tsEbpPlacement = D1 ('MetaData "M2tsEbpPlacement" "Amazonka.MediaConvert.Types.M2tsEbpPlacement" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "M2tsEbpPlacement'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromM2tsEbpPlacement") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

M2tsEsRateInPes

newtype M2tsEsRateInPes Source #

Controls whether to include the ES Rate field in the PES header.

Constructors

M2tsEsRateInPes' 

Instances

Instances details
Eq M2tsEsRateInPes Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEsRateInPes

Ord M2tsEsRateInPes Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEsRateInPes

Read M2tsEsRateInPes Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEsRateInPes

Show M2tsEsRateInPes Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEsRateInPes

Generic M2tsEsRateInPes Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEsRateInPes

Associated Types

type Rep M2tsEsRateInPes :: Type -> Type #

NFData M2tsEsRateInPes Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEsRateInPes

Methods

rnf :: M2tsEsRateInPes -> () #

Hashable M2tsEsRateInPes Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEsRateInPes

ToJSON M2tsEsRateInPes Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEsRateInPes

ToJSONKey M2tsEsRateInPes Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEsRateInPes

FromJSON M2tsEsRateInPes Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEsRateInPes

FromJSONKey M2tsEsRateInPes Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEsRateInPes

ToLog M2tsEsRateInPes Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEsRateInPes

ToHeader M2tsEsRateInPes Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEsRateInPes

ToQuery M2tsEsRateInPes Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEsRateInPes

FromXML M2tsEsRateInPes Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEsRateInPes

ToXML M2tsEsRateInPes Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEsRateInPes

Methods

toXML :: M2tsEsRateInPes -> XML #

ToByteString M2tsEsRateInPes Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEsRateInPes

FromText M2tsEsRateInPes Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEsRateInPes

ToText M2tsEsRateInPes Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEsRateInPes

type Rep M2tsEsRateInPes Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsEsRateInPes

type Rep M2tsEsRateInPes = D1 ('MetaData "M2tsEsRateInPes" "Amazonka.MediaConvert.Types.M2tsEsRateInPes" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "M2tsEsRateInPes'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromM2tsEsRateInPes") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

M2tsForceTsVideoEbpOrder

newtype M2tsForceTsVideoEbpOrder Source #

Keep the default value (DEFAULT) unless you know that your audio EBP markers are incorrectly appearing before your video EBP markers. To correct this problem, set this value to Force (FORCE).

Instances

Instances details
Eq M2tsForceTsVideoEbpOrder Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsForceTsVideoEbpOrder

Ord M2tsForceTsVideoEbpOrder Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsForceTsVideoEbpOrder

Read M2tsForceTsVideoEbpOrder Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsForceTsVideoEbpOrder

Show M2tsForceTsVideoEbpOrder Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsForceTsVideoEbpOrder

Generic M2tsForceTsVideoEbpOrder Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsForceTsVideoEbpOrder

Associated Types

type Rep M2tsForceTsVideoEbpOrder :: Type -> Type #

NFData M2tsForceTsVideoEbpOrder Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsForceTsVideoEbpOrder

Hashable M2tsForceTsVideoEbpOrder Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsForceTsVideoEbpOrder

ToJSON M2tsForceTsVideoEbpOrder Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsForceTsVideoEbpOrder

ToJSONKey M2tsForceTsVideoEbpOrder Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsForceTsVideoEbpOrder

FromJSON M2tsForceTsVideoEbpOrder Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsForceTsVideoEbpOrder

FromJSONKey M2tsForceTsVideoEbpOrder Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsForceTsVideoEbpOrder

ToLog M2tsForceTsVideoEbpOrder Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsForceTsVideoEbpOrder

ToHeader M2tsForceTsVideoEbpOrder Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsForceTsVideoEbpOrder

ToQuery M2tsForceTsVideoEbpOrder Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsForceTsVideoEbpOrder

FromXML M2tsForceTsVideoEbpOrder Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsForceTsVideoEbpOrder

ToXML M2tsForceTsVideoEbpOrder Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsForceTsVideoEbpOrder

ToByteString M2tsForceTsVideoEbpOrder Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsForceTsVideoEbpOrder

FromText M2tsForceTsVideoEbpOrder Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsForceTsVideoEbpOrder

ToText M2tsForceTsVideoEbpOrder Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsForceTsVideoEbpOrder

type Rep M2tsForceTsVideoEbpOrder Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsForceTsVideoEbpOrder

type Rep M2tsForceTsVideoEbpOrder = D1 ('MetaData "M2tsForceTsVideoEbpOrder" "Amazonka.MediaConvert.Types.M2tsForceTsVideoEbpOrder" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "M2tsForceTsVideoEbpOrder'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromM2tsForceTsVideoEbpOrder") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

M2tsNielsenId3

newtype M2tsNielsenId3 Source #

If INSERT, Nielsen inaudible tones for media tracking will be detected in the input audio and an equivalent ID3 tag will be inserted in the output.

Constructors

M2tsNielsenId3' 

Instances

Instances details
Eq M2tsNielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsNielsenId3

Ord M2tsNielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsNielsenId3

Read M2tsNielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsNielsenId3

Show M2tsNielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsNielsenId3

Generic M2tsNielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsNielsenId3

Associated Types

type Rep M2tsNielsenId3 :: Type -> Type #

NFData M2tsNielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsNielsenId3

Methods

rnf :: M2tsNielsenId3 -> () #

Hashable M2tsNielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsNielsenId3

ToJSON M2tsNielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsNielsenId3

ToJSONKey M2tsNielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsNielsenId3

FromJSON M2tsNielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsNielsenId3

FromJSONKey M2tsNielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsNielsenId3

ToLog M2tsNielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsNielsenId3

ToHeader M2tsNielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsNielsenId3

ToQuery M2tsNielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsNielsenId3

FromXML M2tsNielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsNielsenId3

ToXML M2tsNielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsNielsenId3

Methods

toXML :: M2tsNielsenId3 -> XML #

ToByteString M2tsNielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsNielsenId3

FromText M2tsNielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsNielsenId3

ToText M2tsNielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsNielsenId3

type Rep M2tsNielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsNielsenId3

type Rep M2tsNielsenId3 = D1 ('MetaData "M2tsNielsenId3" "Amazonka.MediaConvert.Types.M2tsNielsenId3" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "M2tsNielsenId3'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromM2tsNielsenId3") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))
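
A minimal JSON sketch, not taken from this documentation: the ToJSON and FromJSON instances listed above suggest that this wrapper serializes to its underlying API string in job-settings JSON. The "INSERT" literal and the exact JSON representation are assumptions for illustration.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (M2tsNielsenId3 (..))
import qualified Data.Aeson as Aeson
import qualified Data.ByteString.Lazy.Char8 as BL

main :: IO ()
main = do
  let nielsen = M2tsNielsenId3' "INSERT"   -- constructor and field shown in the Rep above
  -- Assumption: the generated ToJSON instance encodes the wrapper as its Text value.
  BL.putStrLn (Aeson.encode nielsen)       -- expected output: "INSERT"
  print (Aeson.decode "\"INSERT\"" :: Maybe M2tsNielsenId3)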

M2tsPcrControl

newtype M2tsPcrControl Source #

When set to PCR_EVERY_PES_PACKET, a Program Clock Reference value is inserted for every Packetized Elementary Stream (PES) header. This is effective only when the PCR PID is the same as the video or audio elementary stream.

Constructors

M2tsPcrControl' 

Instances

Instances details
Eq M2tsPcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsPcrControl

Ord M2tsPcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsPcrControl

Read M2tsPcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsPcrControl

Show M2tsPcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsPcrControl

Generic M2tsPcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsPcrControl

Associated Types

type Rep M2tsPcrControl :: Type -> Type #

NFData M2tsPcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsPcrControl

Methods

rnf :: M2tsPcrControl -> () #

Hashable M2tsPcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsPcrControl

ToJSON M2tsPcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsPcrControl

ToJSONKey M2tsPcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsPcrControl

FromJSON M2tsPcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsPcrControl

FromJSONKey M2tsPcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsPcrControl

ToLog M2tsPcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsPcrControl

ToHeader M2tsPcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsPcrControl

ToQuery M2tsPcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsPcrControl

FromXML M2tsPcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsPcrControl

ToXML M2tsPcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsPcrControl

Methods

toXML :: M2tsPcrControl -> XML #

ToByteString M2tsPcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsPcrControl

FromText M2tsPcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsPcrControl

ToText M2tsPcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsPcrControl

type Rep M2tsPcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsPcrControl

type Rep M2tsPcrControl = D1 ('MetaData "M2tsPcrControl" "Amazonka.MediaConvert.Types.M2tsPcrControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "M2tsPcrControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromM2tsPcrControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

M2tsRateMode

newtype M2tsRateMode Source #

When set to CBR, inserts null packets into the transport stream to fill the specified bitrate. When set to VBR, the bitrate setting acts as the maximum bitrate, but the output will not be padded up to that bitrate.

Constructors

M2tsRateMode' 

Bundled Patterns

pattern M2tsRateMode_CBR :: M2tsRateMode 
pattern M2tsRateMode_VBR :: M2tsRateMode 

Instances

Instances details
Eq M2tsRateMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsRateMode

Ord M2tsRateMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsRateMode

Read M2tsRateMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsRateMode

Show M2tsRateMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsRateMode

Generic M2tsRateMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsRateMode

Associated Types

type Rep M2tsRateMode :: Type -> Type #

NFData M2tsRateMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsRateMode

Methods

rnf :: M2tsRateMode -> () #

Hashable M2tsRateMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsRateMode

ToJSON M2tsRateMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsRateMode

ToJSONKey M2tsRateMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsRateMode

FromJSON M2tsRateMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsRateMode

FromJSONKey M2tsRateMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsRateMode

ToLog M2tsRateMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsRateMode

ToHeader M2tsRateMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsRateMode

ToQuery M2tsRateMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsRateMode

FromXML M2tsRateMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsRateMode

ToXML M2tsRateMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsRateMode

Methods

toXML :: M2tsRateMode -> XML #

ToByteString M2tsRateMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsRateMode

FromText M2tsRateMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsRateMode

ToText M2tsRateMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsRateMode

Methods

toText :: M2tsRateMode -> Text #

type Rep M2tsRateMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsRateMode

type Rep M2tsRateMode = D1 ('MetaData "M2tsRateMode" "Amazonka.MediaConvert.Types.M2tsRateMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "M2tsRateMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromM2tsRateMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))
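
A minimal sketch of matching on the bundled patterns M2tsRateMode_CBR and M2tsRateMode_VBR listed above; the describe helper is hypothetical, and the Amazonka.Data.Text import location for toText is an assumption.

{-# LANGUAGE PatternSynonyms #-}

import Amazonka.MediaConvert.Types
  ( M2tsRateMode
  , pattern M2tsRateMode_CBR
  , pattern M2tsRateMode_VBR
  )
import Amazonka.Data.Text (toText)   -- assumed module for the ToText class
import qualified Data.Text as T

-- Hypothetical helper summarizing the behaviour described above.
describe :: M2tsRateMode -> String
describe M2tsRateMode_CBR = "pad the transport stream with null packets to the target bitrate"
describe M2tsRateMode_VBR = "treat the bitrate setting as a maximum; no padding"
describe other            = "unrecognized mode: " <> T.unpack (toText other)

main :: IO ()
main = mapM_ (putStrLn . describe) [M2tsRateMode_CBR, M2tsRateMode_VBR]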

M2tsScte35Source

newtype M2tsScte35Source Source #

For SCTE-35 markers from your input-- Choose Passthrough (PASSTHROUGH) if you want SCTE-35 markers that appear in your input to also appear in this output. Choose None (NONE) if you don't want SCTE-35 markers in this output. For SCTE-35 markers from an ESAM XML document-- Choose None (NONE). Also provide the ESAM XML as a string in the setting Signal processing notification XML (sccXml). Also enable ESAM SCTE-35 (include the property scte35Esam).

Instances

Instances details
Eq M2tsScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Source

Ord M2tsScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Source

Read M2tsScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Source

Show M2tsScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Source

Generic M2tsScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Source

Associated Types

type Rep M2tsScte35Source :: Type -> Type #

NFData M2tsScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Source

Methods

rnf :: M2tsScte35Source -> () #

Hashable M2tsScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Source

ToJSON M2tsScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Source

ToJSONKey M2tsScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Source

FromJSON M2tsScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Source

FromJSONKey M2tsScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Source

ToLog M2tsScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Source

ToHeader M2tsScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Source

ToQuery M2tsScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Source

FromXML M2tsScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Source

ToXML M2tsScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Source

ToByteString M2tsScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Source

FromText M2tsScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Source

ToText M2tsScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Source

type Rep M2tsScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Source

type Rep M2tsScte35Source = D1 ('MetaData "M2tsScte35Source" "Amazonka.MediaConvert.Types.M2tsScte35Source" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "M2tsScte35Source'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromM2tsScte35Source") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

M2tsSegmentationMarkers

newtype M2tsSegmentationMarkers Source #

Inserts segmentation markers at each segmentation_time period. rai_segstart sets the Random Access Indicator bit in the adaptation field. rai_adapt sets the RAI bit and adds the current timecode in the private data bytes. psi_segstart inserts PAT and PMT tables at the start of segments. ebp adds Encoder Boundary Point information to the adaptation field as per OpenCable specification OC-SP-EBP-I01-130118. ebp_legacy adds Encoder Boundary Point information to the adaptation field using a legacy proprietary format.

Instances

Instances details
Eq M2tsSegmentationMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationMarkers

Ord M2tsSegmentationMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationMarkers

Read M2tsSegmentationMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationMarkers

Show M2tsSegmentationMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationMarkers

Generic M2tsSegmentationMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationMarkers

Associated Types

type Rep M2tsSegmentationMarkers :: Type -> Type #

NFData M2tsSegmentationMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationMarkers

Methods

rnf :: M2tsSegmentationMarkers -> () #

Hashable M2tsSegmentationMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationMarkers

ToJSON M2tsSegmentationMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationMarkers

ToJSONKey M2tsSegmentationMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationMarkers

FromJSON M2tsSegmentationMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationMarkers

FromJSONKey M2tsSegmentationMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationMarkers

ToLog M2tsSegmentationMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationMarkers

ToHeader M2tsSegmentationMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationMarkers

ToQuery M2tsSegmentationMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationMarkers

FromXML M2tsSegmentationMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationMarkers

ToXML M2tsSegmentationMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationMarkers

ToByteString M2tsSegmentationMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationMarkers

FromText M2tsSegmentationMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationMarkers

ToText M2tsSegmentationMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationMarkers

type Rep M2tsSegmentationMarkers Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationMarkers

type Rep M2tsSegmentationMarkers = D1 ('MetaData "M2tsSegmentationMarkers" "Amazonka.MediaConvert.Types.M2tsSegmentationMarkers" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "M2tsSegmentationMarkers'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromM2tsSegmentationMarkers") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))
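
A small sketch of a guard implied by the EBP setting descriptions earlier in this listing, which apply only when segmentationMarkers is EBP or EBP_LEGACY. The ebpSettingsApply helper is hypothetical; it uses the newtype constructor shown in the Rep above and the raw "EBP"/"EBP_LEGACY" strings from those descriptions.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (M2tsSegmentationMarkers (..))

-- Hypothetical guard: EBP audio-interval and EBP-placement options apply
-- only when the segmentation markers are EBP or EBP_LEGACY.
ebpSettingsApply :: M2tsSegmentationMarkers -> Bool
ebpSettingsApply markers =
  markers `elem` map M2tsSegmentationMarkers' ["EBP", "EBP_LEGACY"]

main :: IO ()
main = print (ebpSettingsApply (M2tsSegmentationMarkers' "EBP"))  -- True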

M2tsSegmentationStyle

newtype M2tsSegmentationStyle Source #

The segmentation style parameter controls how segmentation markers are inserted into the transport stream. With avails, it is possible that segments may be truncated, which can influence where future segmentation markers are inserted. When a segmentation style of "reset_cadence" is selected and a segment is truncated due to an avail, we will reset the segmentation cadence. This means the subsequent segment will have a duration of $segmentation_time seconds. When a segmentation style of "maintain_cadence" is selected and a segment is truncated due to an avail, we will not reset the segmentation cadence. This means the subsequent segment will likely be truncated as well. However, all segments after that will have a duration of $segmentation_time seconds. Note that EBP lookahead is a slight exception to this rule.

Instances

Instances details
Eq M2tsSegmentationStyle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationStyle

Ord M2tsSegmentationStyle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationStyle

Read M2tsSegmentationStyle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationStyle

Show M2tsSegmentationStyle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationStyle

Generic M2tsSegmentationStyle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationStyle

Associated Types

type Rep M2tsSegmentationStyle :: Type -> Type #

NFData M2tsSegmentationStyle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationStyle

Methods

rnf :: M2tsSegmentationStyle -> () #

Hashable M2tsSegmentationStyle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationStyle

ToJSON M2tsSegmentationStyle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationStyle

ToJSONKey M2tsSegmentationStyle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationStyle

FromJSON M2tsSegmentationStyle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationStyle

FromJSONKey M2tsSegmentationStyle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationStyle

ToLog M2tsSegmentationStyle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationStyle

ToHeader M2tsSegmentationStyle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationStyle

ToQuery M2tsSegmentationStyle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationStyle

FromXML M2tsSegmentationStyle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationStyle

ToXML M2tsSegmentationStyle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationStyle

ToByteString M2tsSegmentationStyle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationStyle

FromText M2tsSegmentationStyle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationStyle

ToText M2tsSegmentationStyle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationStyle

type Rep M2tsSegmentationStyle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSegmentationStyle

type Rep M2tsSegmentationStyle = D1 ('MetaData "M2tsSegmentationStyle" "Amazonka.MediaConvert.Types.M2tsSegmentationStyle" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "M2tsSegmentationStyle'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromM2tsSegmentationStyle") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

M3u8AudioDuration

newtype M3u8AudioDuration Source #

Specify this setting only when your output will be consumed by a downstream repackaging workflow that is sensitive to very small duration differences between video and audio. For this situation, choose Match video duration (MATCH_VIDEO_DURATION). In all other cases, keep the default value, Default codec duration (DEFAULT_CODEC_DURATION). When you choose Match video duration, MediaConvert pads the output audio streams with silence or trims them to ensure that the total duration of each audio stream is at least as long as the total duration of the video stream. After padding or trimming, the audio stream duration is no more than one frame longer than the video stream. MediaConvert applies audio padding or trimming only to the end of the last segment of the output. For unsegmented outputs, MediaConvert adds padding only to the end of the file. When you keep the default value, any minor discrepancies between audio and video duration will depend on your output audio codec.

Instances

Instances details
Eq M3u8AudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8AudioDuration

Ord M3u8AudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8AudioDuration

Read M3u8AudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8AudioDuration

Show M3u8AudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8AudioDuration

Generic M3u8AudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8AudioDuration

Associated Types

type Rep M3u8AudioDuration :: Type -> Type #

NFData M3u8AudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8AudioDuration

Methods

rnf :: M3u8AudioDuration -> () #

Hashable M3u8AudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8AudioDuration

ToJSON M3u8AudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8AudioDuration

ToJSONKey M3u8AudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8AudioDuration

FromJSON M3u8AudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8AudioDuration

FromJSONKey M3u8AudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8AudioDuration

ToLog M3u8AudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8AudioDuration

ToHeader M3u8AudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8AudioDuration

ToQuery M3u8AudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8AudioDuration

FromXML M3u8AudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8AudioDuration

ToXML M3u8AudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8AudioDuration

ToByteString M3u8AudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8AudioDuration

FromText M3u8AudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8AudioDuration

ToText M3u8AudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8AudioDuration

type Rep M3u8AudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8AudioDuration

type Rep M3u8AudioDuration = D1 ('MetaData "M3u8AudioDuration" "Amazonka.MediaConvert.Types.M3u8AudioDuration" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "M3u8AudioDuration'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromM3u8AudioDuration") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

M3u8DataPtsControl

newtype M3u8DataPtsControl Source #

If you select ALIGN_TO_VIDEO, MediaConvert writes captions and data packets with Presentation Timestamp (PTS) values greater than or equal to the first video packet PTS (MediaConvert drops captions and data packets with lesser PTS values). Keep the default value (AUTO) to allow all PTS values.

Instances

Instances details
Eq M3u8DataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8DataPtsControl

Ord M3u8DataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8DataPtsControl

Read M3u8DataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8DataPtsControl

Show M3u8DataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8DataPtsControl

Generic M3u8DataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8DataPtsControl

Associated Types

type Rep M3u8DataPtsControl :: Type -> Type #

NFData M3u8DataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8DataPtsControl

Methods

rnf :: M3u8DataPtsControl -> () #

Hashable M3u8DataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8DataPtsControl

ToJSON M3u8DataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8DataPtsControl

ToJSONKey M3u8DataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8DataPtsControl

FromJSON M3u8DataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8DataPtsControl

FromJSONKey M3u8DataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8DataPtsControl

ToLog M3u8DataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8DataPtsControl

ToHeader M3u8DataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8DataPtsControl

ToQuery M3u8DataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8DataPtsControl

FromXML M3u8DataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8DataPtsControl

ToXML M3u8DataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8DataPtsControl

ToByteString M3u8DataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8DataPtsControl

FromText M3u8DataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8DataPtsControl

ToText M3u8DataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8DataPtsControl

type Rep M3u8DataPtsControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8DataPtsControl

type Rep M3u8DataPtsControl = D1 ('MetaData "M3u8DataPtsControl" "Amazonka.MediaConvert.Types.M3u8DataPtsControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "M3u8DataPtsControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromM3u8DataPtsControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

M3u8NielsenId3

newtype M3u8NielsenId3 Source #

If INSERT, Nielsen inaudible tones for media tracking will be detected in the input audio and an equivalent ID3 tag will be inserted in the output.

Constructors

M3u8NielsenId3' 

Instances

Instances details
Eq M3u8NielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8NielsenId3

Ord M3u8NielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8NielsenId3

Read M3u8NielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8NielsenId3

Show M3u8NielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8NielsenId3

Generic M3u8NielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8NielsenId3

Associated Types

type Rep M3u8NielsenId3 :: Type -> Type #

NFData M3u8NielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8NielsenId3

Methods

rnf :: M3u8NielsenId3 -> () #

Hashable M3u8NielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8NielsenId3

ToJSON M3u8NielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8NielsenId3

ToJSONKey M3u8NielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8NielsenId3

FromJSON M3u8NielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8NielsenId3

FromJSONKey M3u8NielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8NielsenId3

ToLog M3u8NielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8NielsenId3

ToHeader M3u8NielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8NielsenId3

ToQuery M3u8NielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8NielsenId3

FromXML M3u8NielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8NielsenId3

ToXML M3u8NielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8NielsenId3

Methods

toXML :: M3u8NielsenId3 -> XML #

ToByteString M3u8NielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8NielsenId3

FromText M3u8NielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8NielsenId3

ToText M3u8NielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8NielsenId3

type Rep M3u8NielsenId3 Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8NielsenId3

type Rep M3u8NielsenId3 = D1 ('MetaData "M3u8NielsenId3" "Amazonka.MediaConvert.Types.M3u8NielsenId3" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "M3u8NielsenId3'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromM3u8NielsenId3") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

M3u8PcrControl

newtype M3u8PcrControl Source #

When set to PCR_EVERY_PES_PACKET, a Program Clock Reference value is inserted for every Packetized Elementary Stream (PES) header. This parameter is effective only when the PCR PID is the same as the video or audio elementary stream.

Constructors

M3u8PcrControl' 

Instances

Instances details
Eq M3u8PcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8PcrControl

Ord M3u8PcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8PcrControl

Read M3u8PcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8PcrControl

Show M3u8PcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8PcrControl

Generic M3u8PcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8PcrControl

Associated Types

type Rep M3u8PcrControl :: Type -> Type #

NFData M3u8PcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8PcrControl

Methods

rnf :: M3u8PcrControl -> () #

Hashable M3u8PcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8PcrControl

ToJSON M3u8PcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8PcrControl

ToJSONKey M3u8PcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8PcrControl

FromJSON M3u8PcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8PcrControl

FromJSONKey M3u8PcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8PcrControl

ToLog M3u8PcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8PcrControl

ToHeader M3u8PcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8PcrControl

ToQuery M3u8PcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8PcrControl

FromXML M3u8PcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8PcrControl

ToXML M3u8PcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8PcrControl

Methods

toXML :: M3u8PcrControl -> XML #

ToByteString M3u8PcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8PcrControl

FromText M3u8PcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8PcrControl

ToText M3u8PcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8PcrControl

type Rep M3u8PcrControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8PcrControl

type Rep M3u8PcrControl = D1 ('MetaData "M3u8PcrControl" "Amazonka.MediaConvert.Types.M3u8PcrControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "M3u8PcrControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromM3u8PcrControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

M3u8Scte35Source

newtype M3u8Scte35Source Source #

For SCTE-35 markers from your input-- Choose Passthrough (PASSTHROUGH) if you want SCTE-35 markers that appear in your input to also appear in this output. Choose None (NONE) if you don't want SCTE-35 markers in this output. For SCTE-35 markers from an ESAM XML document-- Choose None (NONE) if you don't want manifest conditioning. Choose Passthrough (PASSTHROUGH) and choose Ad markers (adMarkers) if you do want manifest conditioning. In both cases, also provide the ESAM XML as a string in the setting Signal processing notification XML (sccXml).

Instances

Instances details
Eq M3u8Scte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Scte35Source

Ord M3u8Scte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Scte35Source

Read M3u8Scte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Scte35Source

Show M3u8Scte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Scte35Source

Generic M3u8Scte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Scte35Source

Associated Types

type Rep M3u8Scte35Source :: Type -> Type #

NFData M3u8Scte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Scte35Source

Methods

rnf :: M3u8Scte35Source -> () #

Hashable M3u8Scte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Scte35Source

ToJSON M3u8Scte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Scte35Source

ToJSONKey M3u8Scte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Scte35Source

FromJSON M3u8Scte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Scte35Source

FromJSONKey M3u8Scte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Scte35Source

ToLog M3u8Scte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Scte35Source

ToHeader M3u8Scte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Scte35Source

ToQuery M3u8Scte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Scte35Source

FromXML M3u8Scte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Scte35Source

ToXML M3u8Scte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Scte35Source

ToByteString M3u8Scte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Scte35Source

FromText M3u8Scte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Scte35Source

ToText M3u8Scte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Scte35Source

type Rep M3u8Scte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Scte35Source

type Rep M3u8Scte35Source = D1 ('MetaData "M3u8Scte35Source" "Amazonka.MediaConvert.Types.M3u8Scte35Source" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "M3u8Scte35Source'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromM3u8Scte35Source") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

MotionImageInsertionMode

newtype MotionImageInsertionMode Source #

Choose the type of motion graphic asset that you are providing for your overlay. You can choose either a .mov file or a series of .png files.

Instances

Instances details
Eq MotionImageInsertionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionMode

Ord MotionImageInsertionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionMode

Read MotionImageInsertionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionMode

Show MotionImageInsertionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionMode

Generic MotionImageInsertionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionMode

Associated Types

type Rep MotionImageInsertionMode :: Type -> Type #

NFData MotionImageInsertionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionMode

Hashable MotionImageInsertionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionMode

ToJSON MotionImageInsertionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionMode

ToJSONKey MotionImageInsertionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionMode

FromJSON MotionImageInsertionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionMode

FromJSONKey MotionImageInsertionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionMode

ToLog MotionImageInsertionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionMode

ToHeader MotionImageInsertionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionMode

ToQuery MotionImageInsertionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionMode

FromXML MotionImageInsertionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionMode

ToXML MotionImageInsertionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionMode

ToByteString MotionImageInsertionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionMode

FromText MotionImageInsertionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionMode

ToText MotionImageInsertionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionMode

type Rep MotionImageInsertionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionMode

type Rep MotionImageInsertionMode = D1 ('MetaData "MotionImageInsertionMode" "Amazonka.MediaConvert.Types.MotionImageInsertionMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "MotionImageInsertionMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMotionImageInsertionMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

MotionImagePlayback

newtype MotionImagePlayback Source #

Specify whether your motion graphic overlay repeats on a loop or plays only once.

Instances

Instances details
Eq MotionImagePlayback Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImagePlayback

Ord MotionImagePlayback Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImagePlayback

Read MotionImagePlayback Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImagePlayback

Show MotionImagePlayback Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImagePlayback

Generic MotionImagePlayback Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImagePlayback

Associated Types

type Rep MotionImagePlayback :: Type -> Type #

NFData MotionImagePlayback Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImagePlayback

Methods

rnf :: MotionImagePlayback -> () #

Hashable MotionImagePlayback Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImagePlayback

ToJSON MotionImagePlayback Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImagePlayback

ToJSONKey MotionImagePlayback Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImagePlayback

FromJSON MotionImagePlayback Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImagePlayback

FromJSONKey MotionImagePlayback Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImagePlayback

ToLog MotionImagePlayback Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImagePlayback

ToHeader MotionImagePlayback Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImagePlayback

ToQuery MotionImagePlayback Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImagePlayback

FromXML MotionImagePlayback Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImagePlayback

ToXML MotionImagePlayback Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImagePlayback

ToByteString MotionImagePlayback Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImagePlayback

FromText MotionImagePlayback Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImagePlayback

ToText MotionImagePlayback Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImagePlayback

type Rep MotionImagePlayback Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImagePlayback

type Rep MotionImagePlayback = D1 ('MetaData "MotionImagePlayback" "Amazonka.MediaConvert.Types.MotionImagePlayback" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "MotionImagePlayback'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMotionImagePlayback") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

MovClapAtom

newtype MovClapAtom Source #

When enabled, include 'clap' atom if appropriate for the video output settings.

Constructors

MovClapAtom' 

Bundled Patterns

pattern MovClapAtom_EXCLUDE :: MovClapAtom 
pattern MovClapAtom_INCLUDE :: MovClapAtom 
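
A small sketch built only from the bundled patterns documented above; clapSetting is a hypothetical helper name:

import Amazonka.MediaConvert.Types (MovClapAtom (..))

-- Importing MovClapAtom (..) also brings the bundled patterns into scope.
clapSetting :: Bool -> MovClapAtom
clapSetting wantClap
  | wantClap  = MovClapAtom_INCLUDE
  | otherwise = MovClapAtom_EXCLUDE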

Instances

Instances details
Eq MovClapAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovClapAtom

Ord MovClapAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovClapAtom

Read MovClapAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovClapAtom

Show MovClapAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovClapAtom

Generic MovClapAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovClapAtom

Associated Types

type Rep MovClapAtom :: Type -> Type #

NFData MovClapAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovClapAtom

Methods

rnf :: MovClapAtom -> () #

Hashable MovClapAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovClapAtom

ToJSON MovClapAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovClapAtom

ToJSONKey MovClapAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovClapAtom

FromJSON MovClapAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovClapAtom

FromJSONKey MovClapAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovClapAtom

ToLog MovClapAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovClapAtom

ToHeader MovClapAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovClapAtom

ToQuery MovClapAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovClapAtom

FromXML MovClapAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovClapAtom

ToXML MovClapAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovClapAtom

Methods

toXML :: MovClapAtom -> XML #

ToByteString MovClapAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovClapAtom

FromText MovClapAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovClapAtom

ToText MovClapAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovClapAtom

Methods

toText :: MovClapAtom -> Text #

type Rep MovClapAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovClapAtom

type Rep MovClapAtom = D1 ('MetaData "MovClapAtom" "Amazonka.MediaConvert.Types.MovClapAtom" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "MovClapAtom'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMovClapAtom") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

MovCslgAtom

newtype MovCslgAtom Source #

When enabled, file composition times will start at zero, composition times in the 'ctts' (composition time to sample) box for B-frames will be negative, and a 'cslg' (composition shift least greatest) box will be included per 14496-1 amendment 1. This improves compatibility with Apple players and tools.

Constructors

MovCslgAtom' 

Bundled Patterns

pattern MovCslgAtom_EXCLUDE :: MovCslgAtom 
pattern MovCslgAtom_INCLUDE :: MovCslgAtom 
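
Because the ToJSON and FromJSON instances listed below wrap the underlying Text, an aeson round-trip renders the value as its bare wire string. A rough sketch; the rendered output shown in the comments is an assumption based on the pattern names above:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (MovCslgAtom (..))
import Data.Aeson (eitherDecode, encode)

main :: IO ()
main = do
  print (encode MovCslgAtom_INCLUDE)  -- expected: "\"INCLUDE\""
  print (eitherDecode "\"EXCLUDE\"" :: Either String MovCslgAtom)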

Instances

Instances details
Eq MovCslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovCslgAtom

Ord MovCslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovCslgAtom

Read MovCslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovCslgAtom

Show MovCslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovCslgAtom

Generic MovCslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovCslgAtom

Associated Types

type Rep MovCslgAtom :: Type -> Type #

NFData MovCslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovCslgAtom

Methods

rnf :: MovCslgAtom -> () #

Hashable MovCslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovCslgAtom

ToJSON MovCslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovCslgAtom

ToJSONKey MovCslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovCslgAtom

FromJSON MovCslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovCslgAtom

FromJSONKey MovCslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovCslgAtom

ToLog MovCslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovCslgAtom

ToHeader MovCslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovCslgAtom

ToQuery MovCslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovCslgAtom

FromXML MovCslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovCslgAtom

ToXML MovCslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovCslgAtom

Methods

toXML :: MovCslgAtom -> XML #

ToByteString MovCslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovCslgAtom

FromText MovCslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovCslgAtom

ToText MovCslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovCslgAtom

Methods

toText :: MovCslgAtom -> Text #

type Rep MovCslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovCslgAtom

type Rep MovCslgAtom = D1 ('MetaData "MovCslgAtom" "Amazonka.MediaConvert.Types.MovCslgAtom" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "MovCslgAtom'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMovCslgAtom") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

MovMpeg2FourCCControl

newtype MovMpeg2FourCCControl Source #

When set to XDCAM, writes MPEG2 video streams into the QuickTime file using XDCAM fourcc codes. This increases compatibility with Apple editors and players, but may decrease compatibility with other players. Only applicable when the video codec is MPEG2.

Instances

Instances details
Eq MovMpeg2FourCCControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovMpeg2FourCCControl

Ord MovMpeg2FourCCControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovMpeg2FourCCControl

Read MovMpeg2FourCCControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovMpeg2FourCCControl

Show MovMpeg2FourCCControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovMpeg2FourCCControl

Generic MovMpeg2FourCCControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovMpeg2FourCCControl

Associated Types

type Rep MovMpeg2FourCCControl :: Type -> Type #

NFData MovMpeg2FourCCControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovMpeg2FourCCControl

Methods

rnf :: MovMpeg2FourCCControl -> () #

Hashable MovMpeg2FourCCControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovMpeg2FourCCControl

ToJSON MovMpeg2FourCCControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovMpeg2FourCCControl

ToJSONKey MovMpeg2FourCCControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovMpeg2FourCCControl

FromJSON MovMpeg2FourCCControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovMpeg2FourCCControl

FromJSONKey MovMpeg2FourCCControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovMpeg2FourCCControl

ToLog MovMpeg2FourCCControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovMpeg2FourCCControl

ToHeader MovMpeg2FourCCControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovMpeg2FourCCControl

ToQuery MovMpeg2FourCCControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovMpeg2FourCCControl

FromXML MovMpeg2FourCCControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovMpeg2FourCCControl

ToXML MovMpeg2FourCCControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovMpeg2FourCCControl

ToByteString MovMpeg2FourCCControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovMpeg2FourCCControl

FromText MovMpeg2FourCCControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovMpeg2FourCCControl

ToText MovMpeg2FourCCControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovMpeg2FourCCControl

type Rep MovMpeg2FourCCControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovMpeg2FourCCControl

type Rep MovMpeg2FourCCControl = D1 ('MetaData "MovMpeg2FourCCControl" "Amazonka.MediaConvert.Types.MovMpeg2FourCCControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "MovMpeg2FourCCControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMovMpeg2FourCCControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

MovPaddingControl

newtype MovPaddingControl Source #

To make this output compatible with Omneon, keep the default value, OMNEON. Unless you need Omneon compatibility, set this value to NONE. When you keep the default value, OMNEON, MediaConvert increases the length of the edit list atom. This might cause file rejections when a recipient of the output file doesn't expect this extra padding.

Instances

Instances details
Eq MovPaddingControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovPaddingControl

Ord MovPaddingControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovPaddingControl

Read MovPaddingControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovPaddingControl

Show MovPaddingControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovPaddingControl

Generic MovPaddingControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovPaddingControl

Associated Types

type Rep MovPaddingControl :: Type -> Type #

NFData MovPaddingControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovPaddingControl

Methods

rnf :: MovPaddingControl -> () #

Hashable MovPaddingControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovPaddingControl

ToJSON MovPaddingControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovPaddingControl

ToJSONKey MovPaddingControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovPaddingControl

FromJSON MovPaddingControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovPaddingControl

FromJSONKey MovPaddingControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovPaddingControl

ToLog MovPaddingControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovPaddingControl

ToHeader MovPaddingControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovPaddingControl

ToQuery MovPaddingControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovPaddingControl

FromXML MovPaddingControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovPaddingControl

ToXML MovPaddingControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovPaddingControl

ToByteString MovPaddingControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovPaddingControl

FromText MovPaddingControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovPaddingControl

ToText MovPaddingControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovPaddingControl

type Rep MovPaddingControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovPaddingControl

type Rep MovPaddingControl = D1 ('MetaData "MovPaddingControl" "Amazonka.MediaConvert.Types.MovPaddingControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "MovPaddingControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMovPaddingControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

MovReference

newtype MovReference Source #

Always keep the default value (SELF_CONTAINED) for this setting.

Constructors

MovReference' 

Instances

Instances details
Eq MovReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovReference

Ord MovReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovReference

Read MovReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovReference

Show MovReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovReference

Generic MovReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovReference

Associated Types

type Rep MovReference :: Type -> Type #

NFData MovReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovReference

Methods

rnf :: MovReference -> () #

Hashable MovReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovReference

ToJSON MovReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovReference

ToJSONKey MovReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovReference

FromJSON MovReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovReference

FromJSONKey MovReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovReference

ToLog MovReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovReference

ToHeader MovReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovReference

ToQuery MovReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovReference

FromXML MovReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovReference

ToXML MovReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovReference

Methods

toXML :: MovReference -> XML #

ToByteString MovReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovReference

FromText MovReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovReference

ToText MovReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovReference

Methods

toText :: MovReference -> Text #

type Rep MovReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovReference

type Rep MovReference = D1 ('MetaData "MovReference" "Amazonka.MediaConvert.Types.MovReference" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "MovReference'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMovReference") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Mp3RateControlMode

newtype Mp3RateControlMode Source #

Specify whether the service encodes this MP3 audio output with a constant bitrate (CBR) or a variable bitrate (VBR).

Instances

Instances details
Eq Mp3RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3RateControlMode

Ord Mp3RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3RateControlMode

Read Mp3RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3RateControlMode

Show Mp3RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3RateControlMode

Generic Mp3RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3RateControlMode

Associated Types

type Rep Mp3RateControlMode :: Type -> Type #

NFData Mp3RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3RateControlMode

Methods

rnf :: Mp3RateControlMode -> () #

Hashable Mp3RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3RateControlMode

ToJSON Mp3RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3RateControlMode

ToJSONKey Mp3RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3RateControlMode

FromJSON Mp3RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3RateControlMode

FromJSONKey Mp3RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3RateControlMode

ToLog Mp3RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3RateControlMode

ToHeader Mp3RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3RateControlMode

ToQuery Mp3RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3RateControlMode

FromXML Mp3RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3RateControlMode

ToXML Mp3RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3RateControlMode

ToByteString Mp3RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3RateControlMode

FromText Mp3RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3RateControlMode

ToText Mp3RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3RateControlMode

type Rep Mp3RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3RateControlMode

type Rep Mp3RateControlMode = D1 ('MetaData "Mp3RateControlMode" "Amazonka.MediaConvert.Types.Mp3RateControlMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Mp3RateControlMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMp3RateControlMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Mp4CslgAtom

newtype Mp4CslgAtom Source #

When enabled, file composition times will start at zero, composition times in the 'ctts' (composition time to sample) box for B-frames will be negative, and a 'cslg' (composition shift least greatest) box will be included per 14496-1 amendment 1. This improves compatibility with Apple players and tools.

Constructors

Mp4CslgAtom' 

Bundled Patterns

pattern Mp4CslgAtom_EXCLUDE :: Mp4CslgAtom 
pattern Mp4CslgAtom_INCLUDE :: Mp4CslgAtom 
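
To recover the raw API string for logging, the record selector shown in the Rep further down can be used. A sketch, assuming the selector is exported as is usual for this module:

import Amazonka.MediaConvert.Types (Mp4CslgAtom (..))
import Data.Text (Text)

-- Unwrap the newtype to its underlying wire string, e.g. "INCLUDE".
rawMp4Cslg :: Mp4CslgAtom -> Text
rawMp4Cslg = fromMp4CslgAtom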

Instances

Instances details
Eq Mp4CslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4CslgAtom

Ord Mp4CslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4CslgAtom

Read Mp4CslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4CslgAtom

Show Mp4CslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4CslgAtom

Generic Mp4CslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4CslgAtom

Associated Types

type Rep Mp4CslgAtom :: Type -> Type #

NFData Mp4CslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4CslgAtom

Methods

rnf :: Mp4CslgAtom -> () #

Hashable Mp4CslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4CslgAtom

ToJSON Mp4CslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4CslgAtom

ToJSONKey Mp4CslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4CslgAtom

FromJSON Mp4CslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4CslgAtom

FromJSONKey Mp4CslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4CslgAtom

ToLog Mp4CslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4CslgAtom

ToHeader Mp4CslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4CslgAtom

ToQuery Mp4CslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4CslgAtom

FromXML Mp4CslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4CslgAtom

ToXML Mp4CslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4CslgAtom

Methods

toXML :: Mp4CslgAtom -> XML #

ToByteString Mp4CslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4CslgAtom

FromText Mp4CslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4CslgAtom

ToText Mp4CslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4CslgAtom

Methods

toText :: Mp4CslgAtom -> Text #

type Rep Mp4CslgAtom Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4CslgAtom

type Rep Mp4CslgAtom = D1 ('MetaData "Mp4CslgAtom" "Amazonka.MediaConvert.Types.Mp4CslgAtom" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Mp4CslgAtom'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMp4CslgAtom") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Mp4FreeSpaceBox

newtype Mp4FreeSpaceBox Source #

Inserts a free-space box immediately after the moov box.

Constructors

Mp4FreeSpaceBox' 

Instances

Instances details
Eq Mp4FreeSpaceBox Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4FreeSpaceBox

Ord Mp4FreeSpaceBox Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4FreeSpaceBox

Read Mp4FreeSpaceBox Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4FreeSpaceBox

Show Mp4FreeSpaceBox Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4FreeSpaceBox

Generic Mp4FreeSpaceBox Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4FreeSpaceBox

Associated Types

type Rep Mp4FreeSpaceBox :: Type -> Type #

NFData Mp4FreeSpaceBox Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4FreeSpaceBox

Methods

rnf :: Mp4FreeSpaceBox -> () #

Hashable Mp4FreeSpaceBox Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4FreeSpaceBox

ToJSON Mp4FreeSpaceBox Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4FreeSpaceBox

ToJSONKey Mp4FreeSpaceBox Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4FreeSpaceBox

FromJSON Mp4FreeSpaceBox Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4FreeSpaceBox

FromJSONKey Mp4FreeSpaceBox Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4FreeSpaceBox

ToLog Mp4FreeSpaceBox Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4FreeSpaceBox

ToHeader Mp4FreeSpaceBox Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4FreeSpaceBox

ToQuery Mp4FreeSpaceBox Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4FreeSpaceBox

FromXML Mp4FreeSpaceBox Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4FreeSpaceBox

ToXML Mp4FreeSpaceBox Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4FreeSpaceBox

Methods

toXML :: Mp4FreeSpaceBox -> XML #

ToByteString Mp4FreeSpaceBox Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4FreeSpaceBox

FromText Mp4FreeSpaceBox Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4FreeSpaceBox

ToText Mp4FreeSpaceBox Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4FreeSpaceBox

type Rep Mp4FreeSpaceBox Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4FreeSpaceBox

type Rep Mp4FreeSpaceBox = D1 ('MetaData "Mp4FreeSpaceBox" "Amazonka.MediaConvert.Types.Mp4FreeSpaceBox" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Mp4FreeSpaceBox'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMp4FreeSpaceBox") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Mp4MoovPlacement

newtype Mp4MoovPlacement Source #

If set to PROGRESSIVE_DOWNLOAD, the MOOV atom is relocated to the beginning of the archive as required for progressive downloading. Otherwise it is placed normally at the end.
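
A sketch of choosing the MOOV placement for progressive download, assuming the Mp4MoovPlacement' constructor is exported and that the wire strings are "PROGRESSIVE_DOWNLOAD" and "NORMAL" (assumptions inferred from the description above):

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (Mp4MoovPlacement (..))

-- Hypothetical helper: pick MOOV placement based on whether the output
-- is intended for progressive download.
moovPlacementFor :: Bool -> Mp4MoovPlacement
moovPlacementFor progressive
  | progressive = Mp4MoovPlacement' "PROGRESSIVE_DOWNLOAD"
  | otherwise   = Mp4MoovPlacement' "NORMAL"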

Instances

Instances details
Eq Mp4MoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4MoovPlacement

Ord Mp4MoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4MoovPlacement

Read Mp4MoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4MoovPlacement

Show Mp4MoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4MoovPlacement

Generic Mp4MoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4MoovPlacement

Associated Types

type Rep Mp4MoovPlacement :: Type -> Type #

NFData Mp4MoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4MoovPlacement

Methods

rnf :: Mp4MoovPlacement -> () #

Hashable Mp4MoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4MoovPlacement

ToJSON Mp4MoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4MoovPlacement

ToJSONKey Mp4MoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4MoovPlacement

FromJSON Mp4MoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4MoovPlacement

FromJSONKey Mp4MoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4MoovPlacement

ToLog Mp4MoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4MoovPlacement

ToHeader Mp4MoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4MoovPlacement

ToQuery Mp4MoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4MoovPlacement

FromXML Mp4MoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4MoovPlacement

ToXML Mp4MoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4MoovPlacement

ToByteString Mp4MoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4MoovPlacement

FromText Mp4MoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4MoovPlacement

ToText Mp4MoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4MoovPlacement

type Rep Mp4MoovPlacement Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4MoovPlacement

type Rep Mp4MoovPlacement = D1 ('MetaData "Mp4MoovPlacement" "Amazonka.MediaConvert.Types.Mp4MoovPlacement" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Mp4MoovPlacement'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMp4MoovPlacement") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

MpdAccessibilityCaptionHints

newtype MpdAccessibilityCaptionHints Source #

Optional. Choose Include (INCLUDE) to have MediaConvert mark up your DASH manifest with elements for embedded 608 captions. This markup isn't generally required, but some video players require it to discover and play embedded 608 captions. Keep the default value, Exclude (EXCLUDE), to leave these elements out. When you enable this setting, MediaConvert includes the corresponding accessibility markup in your manifest.

Instances

Instances details
Eq MpdAccessibilityCaptionHints Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAccessibilityCaptionHints

Ord MpdAccessibilityCaptionHints Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAccessibilityCaptionHints

Read MpdAccessibilityCaptionHints Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAccessibilityCaptionHints

Show MpdAccessibilityCaptionHints Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAccessibilityCaptionHints

Generic MpdAccessibilityCaptionHints Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAccessibilityCaptionHints

Associated Types

type Rep MpdAccessibilityCaptionHints :: Type -> Type #

NFData MpdAccessibilityCaptionHints Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAccessibilityCaptionHints

Hashable MpdAccessibilityCaptionHints Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAccessibilityCaptionHints

ToJSON MpdAccessibilityCaptionHints Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAccessibilityCaptionHints

ToJSONKey MpdAccessibilityCaptionHints Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAccessibilityCaptionHints

FromJSON MpdAccessibilityCaptionHints Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAccessibilityCaptionHints

FromJSONKey MpdAccessibilityCaptionHints Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAccessibilityCaptionHints

ToLog MpdAccessibilityCaptionHints Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAccessibilityCaptionHints

ToHeader MpdAccessibilityCaptionHints Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAccessibilityCaptionHints

ToQuery MpdAccessibilityCaptionHints Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAccessibilityCaptionHints

FromXML MpdAccessibilityCaptionHints Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAccessibilityCaptionHints

ToXML MpdAccessibilityCaptionHints Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAccessibilityCaptionHints

ToByteString MpdAccessibilityCaptionHints Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAccessibilityCaptionHints

FromText MpdAccessibilityCaptionHints Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAccessibilityCaptionHints

ToText MpdAccessibilityCaptionHints Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAccessibilityCaptionHints

type Rep MpdAccessibilityCaptionHints Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAccessibilityCaptionHints

type Rep MpdAccessibilityCaptionHints = D1 ('MetaData "MpdAccessibilityCaptionHints" "Amazonka.MediaConvert.Types.MpdAccessibilityCaptionHints" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "MpdAccessibilityCaptionHints'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpdAccessibilityCaptionHints") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

MpdAudioDuration

newtype MpdAudioDuration Source #

Specify this setting only when your output will be consumed by a downstream repackaging workflow that is sensitive to very small duration differences between video and audio. For this situation, choose Match video duration (MATCH_VIDEO_DURATION). In all other cases, keep the default value, Default codec duration (DEFAULT_CODEC_DURATION). When you choose Match video duration, MediaConvert pads the output audio streams with silence or trims them to ensure that the total duration of each audio stream is at least as long as the total duration of the video stream. After padding or trimming, the audio stream duration is no more than one frame longer than the video stream. MediaConvert applies audio padding or trimming only to the end of the last segment of the output. For unsegmented outputs, MediaConvert adds padding only to the end of the file. When you keep the default value, any minor discrepancies between audio and video duration will depend on your output audio codec.

Instances

Instances details
Eq MpdAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAudioDuration

Ord MpdAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAudioDuration

Read MpdAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAudioDuration

Show MpdAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAudioDuration

Generic MpdAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAudioDuration

Associated Types

type Rep MpdAudioDuration :: Type -> Type #

NFData MpdAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAudioDuration

Methods

rnf :: MpdAudioDuration -> () #

Hashable MpdAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAudioDuration

ToJSON MpdAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAudioDuration

ToJSONKey MpdAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAudioDuration

FromJSON MpdAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAudioDuration

FromJSONKey MpdAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAudioDuration

ToLog MpdAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAudioDuration

ToHeader MpdAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAudioDuration

ToQuery MpdAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAudioDuration

FromXML MpdAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAudioDuration

ToXML MpdAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAudioDuration

ToByteString MpdAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAudioDuration

FromText MpdAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAudioDuration

ToText MpdAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAudioDuration

type Rep MpdAudioDuration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdAudioDuration

type Rep MpdAudioDuration = D1 ('MetaData "MpdAudioDuration" "Amazonka.MediaConvert.Types.MpdAudioDuration" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "MpdAudioDuration'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpdAudioDuration") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

MpdCaptionContainerType

newtype MpdCaptionContainerType Source #

Use this setting only in DASH output groups that include sidecar TTML or IMSC captions. You specify sidecar captions in a separate output from your audio and video. Choose Raw (RAW) for captions in a single XML file in a raw container. Choose Fragmented MPEG-4 (FRAGMENTED_MP4) for captions in XML format contained within fragmented MP4 files. This set of fragmented MP4 files is separate from your video and audio fragmented MP4 files.

Instances

Instances details
Eq MpdCaptionContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdCaptionContainerType

Ord MpdCaptionContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdCaptionContainerType

Read MpdCaptionContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdCaptionContainerType

Show MpdCaptionContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdCaptionContainerType

Generic MpdCaptionContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdCaptionContainerType

Associated Types

type Rep MpdCaptionContainerType :: Type -> Type #

NFData MpdCaptionContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdCaptionContainerType

Methods

rnf :: MpdCaptionContainerType -> () #

Hashable MpdCaptionContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdCaptionContainerType

ToJSON MpdCaptionContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdCaptionContainerType

ToJSONKey MpdCaptionContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdCaptionContainerType

FromJSON MpdCaptionContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdCaptionContainerType

FromJSONKey MpdCaptionContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdCaptionContainerType

ToLog MpdCaptionContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdCaptionContainerType

ToHeader MpdCaptionContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdCaptionContainerType

ToQuery MpdCaptionContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdCaptionContainerType

FromXML MpdCaptionContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdCaptionContainerType

ToXML MpdCaptionContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdCaptionContainerType

ToByteString MpdCaptionContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdCaptionContainerType

FromText MpdCaptionContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdCaptionContainerType

ToText MpdCaptionContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdCaptionContainerType

type Rep MpdCaptionContainerType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdCaptionContainerType

type Rep MpdCaptionContainerType = D1 ('MetaData "MpdCaptionContainerType" "Amazonka.MediaConvert.Types.MpdCaptionContainerType" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "MpdCaptionContainerType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpdCaptionContainerType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

MpdScte35Esam

newtype MpdScte35Esam Source #

Use this setting only when you specify SCTE-35 markers from ESAM. Choose INSERT to put SCTE-35 markers in this output at the insertion points that you specify in an ESAM XML document. Provide the document in the setting SCC XML (sccXml).

Constructors

MpdScte35Esam' 

Bundled Patterns

pattern MpdScte35Esam_INSERT :: MpdScte35Esam 
pattern MpdScte35Esam_NONE :: MpdScte35Esam 
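
The bundled patterns above can also be used on the matching side. A minimal sketch; describeScte35Esam is a hypothetical helper, and the wildcard covers any other wire value the newtype may carry:

import Amazonka.MediaConvert.Types (MpdScte35Esam (..))

describeScte35Esam :: MpdScte35Esam -> String
describeScte35Esam esam =
  case esam of
    MpdScte35Esam_INSERT -> "insert SCTE-35 markers from the ESAM XML document"
    MpdScte35Esam_NONE   -> "do not insert SCTE-35 markers"
    _                    -> "unrecognized value"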

Instances

Instances details
Eq MpdScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Esam

Ord MpdScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Esam

Read MpdScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Esam

Show MpdScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Esam

Generic MpdScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Esam

Associated Types

type Rep MpdScte35Esam :: Type -> Type #

NFData MpdScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Esam

Methods

rnf :: MpdScte35Esam -> () #

Hashable MpdScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Esam

ToJSON MpdScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Esam

ToJSONKey MpdScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Esam

FromJSON MpdScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Esam

FromJSONKey MpdScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Esam

ToLog MpdScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Esam

ToHeader MpdScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Esam

ToQuery MpdScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Esam

FromXML MpdScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Esam

ToXML MpdScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Esam

Methods

toXML :: MpdScte35Esam -> XML #

ToByteString MpdScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Esam

FromText MpdScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Esam

ToText MpdScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Esam

Methods

toText :: MpdScte35Esam -> Text #

type Rep MpdScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Esam

type Rep MpdScte35Esam = D1 ('MetaData "MpdScte35Esam" "Amazonka.MediaConvert.Types.MpdScte35Esam" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "MpdScte35Esam'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpdScte35Esam") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

MpdScte35Source

newtype MpdScte35Source Source #

Ignore this setting unless you have SCTE-35 markers in your input video file. Choose Passthrough (PASSTHROUGH) if you want SCTE-35 markers that appear in your input to also appear in this output. Choose None (NONE) if you don't want those SCTE-35 markers in this output.

Constructors

MpdScte35Source' 

Instances

Instances details
Eq MpdScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Source

Ord MpdScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Source

Read MpdScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Source

Show MpdScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Source

Generic MpdScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Source

Associated Types

type Rep MpdScte35Source :: Type -> Type #

NFData MpdScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Source

Methods

rnf :: MpdScte35Source -> () #

Hashable MpdScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Source

ToJSON MpdScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Source

ToJSONKey MpdScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Source

FromJSON MpdScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Source

FromJSONKey MpdScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Source

ToLog MpdScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Source

ToHeader MpdScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Source

ToQuery MpdScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Source

FromXML MpdScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Source

ToXML MpdScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Source

Methods

toXML :: MpdScte35Source -> XML #

ToByteString MpdScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Source

FromText MpdScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Source

ToText MpdScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Source

type Rep MpdScte35Source Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdScte35Source

type Rep MpdScte35Source = D1 ('MetaData "MpdScte35Source" "Amazonka.MediaConvert.Types.MpdScte35Source" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "MpdScte35Source'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpdScte35Source") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Mpeg2AdaptiveQuantization

newtype Mpeg2AdaptiveQuantization Source #

Specify the strength of any adaptive quantization filters that you enable. The value that you choose here applies to the following settings: Spatial adaptive quantization (spatialAdaptiveQuantization), and Temporal adaptive quantization (temporalAdaptiveQuantization).
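
Since the type has Eq and Hashable instances (listed below), it can be used directly as a HashMap key. A sketch where the strength values ("OFF", "LOW", "MEDIUM", "HIGH") are assumed wire strings, not taken from this page:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (Mpeg2AdaptiveQuantization (..))
import qualified Data.HashMap.Strict as HashMap

-- Human-readable labels keyed by the (assumed) adaptive quantization values.
aqLabels :: HashMap.HashMap Mpeg2AdaptiveQuantization String
aqLabels =
  HashMap.fromList
    [ (Mpeg2AdaptiveQuantization' "OFF",    "no adaptive quantization")
    , (Mpeg2AdaptiveQuantization' "LOW",    "light filtering")
    , (Mpeg2AdaptiveQuantization' "MEDIUM", "default strength")
    , (Mpeg2AdaptiveQuantization' "HIGH",   "strongest filtering")
    ]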

Instances

Instances details
Eq Mpeg2AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2AdaptiveQuantization

Ord Mpeg2AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2AdaptiveQuantization

Read Mpeg2AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2AdaptiveQuantization

Show Mpeg2AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2AdaptiveQuantization

Generic Mpeg2AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2AdaptiveQuantization

Associated Types

type Rep Mpeg2AdaptiveQuantization :: Type -> Type #

NFData Mpeg2AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2AdaptiveQuantization

Hashable Mpeg2AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2AdaptiveQuantization

ToJSON Mpeg2AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2AdaptiveQuantization

ToJSONKey Mpeg2AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2AdaptiveQuantization

FromJSON Mpeg2AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2AdaptiveQuantization

FromJSONKey Mpeg2AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2AdaptiveQuantization

ToLog Mpeg2AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2AdaptiveQuantization

ToHeader Mpeg2AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2AdaptiveQuantization

ToQuery Mpeg2AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2AdaptiveQuantization

FromXML Mpeg2AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2AdaptiveQuantization

ToXML Mpeg2AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2AdaptiveQuantization

ToByteString Mpeg2AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2AdaptiveQuantization

FromText Mpeg2AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2AdaptiveQuantization

ToText Mpeg2AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2AdaptiveQuantization

type Rep Mpeg2AdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2AdaptiveQuantization

type Rep Mpeg2AdaptiveQuantization = D1 ('MetaData "Mpeg2AdaptiveQuantization" "Amazonka.MediaConvert.Types.Mpeg2AdaptiveQuantization" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Mpeg2AdaptiveQuantization'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpeg2AdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Mpeg2CodecLevel

newtype Mpeg2CodecLevel Source #

Use Level (Mpeg2CodecLevel) to set the MPEG-2 level for the video output.

Constructors

Mpeg2CodecLevel' 

Instances

Instances details
Eq Mpeg2CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecLevel

Ord Mpeg2CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecLevel

Read Mpeg2CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecLevel

Show Mpeg2CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecLevel

Generic Mpeg2CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecLevel

Associated Types

type Rep Mpeg2CodecLevel :: Type -> Type #

NFData Mpeg2CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecLevel

Methods

rnf :: Mpeg2CodecLevel -> () #

Hashable Mpeg2CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecLevel

ToJSON Mpeg2CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecLevel

ToJSONKey Mpeg2CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecLevel

FromJSON Mpeg2CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecLevel

FromJSONKey Mpeg2CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecLevel

ToLog Mpeg2CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecLevel

ToHeader Mpeg2CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecLevel

ToQuery Mpeg2CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecLevel

FromXML Mpeg2CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecLevel

ToXML Mpeg2CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecLevel

Methods

toXML :: Mpeg2CodecLevel -> XML #

ToByteString Mpeg2CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecLevel

FromText Mpeg2CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecLevel

ToText Mpeg2CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecLevel

type Rep Mpeg2CodecLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecLevel

type Rep Mpeg2CodecLevel = D1 ('MetaData "Mpeg2CodecLevel" "Amazonka.MediaConvert.Types.Mpeg2CodecLevel" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Mpeg2CodecLevel'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpeg2CodecLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Mpeg2CodecProfile

newtype Mpeg2CodecProfile Source #

Use Profile (Mpeg2CodecProfile) to set the MPEG-2 profile for the video output.

Instances

Instances details
Eq Mpeg2CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecProfile

Ord Mpeg2CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecProfile

Read Mpeg2CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecProfile

Show Mpeg2CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecProfile

Generic Mpeg2CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecProfile

Associated Types

type Rep Mpeg2CodecProfile :: Type -> Type #

NFData Mpeg2CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecProfile

Methods

rnf :: Mpeg2CodecProfile -> () #

Hashable Mpeg2CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecProfile

ToJSON Mpeg2CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecProfile

ToJSONKey Mpeg2CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecProfile

FromJSON Mpeg2CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecProfile

FromJSONKey Mpeg2CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecProfile

ToLog Mpeg2CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecProfile

ToHeader Mpeg2CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecProfile

ToQuery Mpeg2CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecProfile

FromXML Mpeg2CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecProfile

ToXML Mpeg2CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecProfile

ToByteString Mpeg2CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecProfile

FromText Mpeg2CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecProfile

ToText Mpeg2CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecProfile

type Rep Mpeg2CodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2CodecProfile

type Rep Mpeg2CodecProfile = D1 ('MetaData "Mpeg2CodecProfile" "Amazonka.MediaConvert.Types.Mpeg2CodecProfile" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Mpeg2CodecProfile'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpeg2CodecProfile") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Mpeg2DynamicSubGop

newtype Mpeg2DynamicSubGop Source #

Choose Adaptive to improve subjective video quality for high-motion content. This will cause the service to use fewer B-frames (which infer information based on other frames) for high-motion portions of the video and more B-frames for low-motion portions. The maximum number of B-frames is limited by the value you provide for the setting B frames between reference frames (numberBFramesBetweenReferenceFrames).

Instances

Instances details
Eq Mpeg2DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2DynamicSubGop

Ord Mpeg2DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2DynamicSubGop

Read Mpeg2DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2DynamicSubGop

Show Mpeg2DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2DynamicSubGop

Generic Mpeg2DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2DynamicSubGop

Associated Types

type Rep Mpeg2DynamicSubGop :: Type -> Type #

NFData Mpeg2DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2DynamicSubGop

Methods

rnf :: Mpeg2DynamicSubGop -> () #

Hashable Mpeg2DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2DynamicSubGop

ToJSON Mpeg2DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2DynamicSubGop

ToJSONKey Mpeg2DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2DynamicSubGop

FromJSON Mpeg2DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2DynamicSubGop

FromJSONKey Mpeg2DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2DynamicSubGop

ToLog Mpeg2DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2DynamicSubGop

ToHeader Mpeg2DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2DynamicSubGop

ToQuery Mpeg2DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2DynamicSubGop

FromXML Mpeg2DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2DynamicSubGop

ToXML Mpeg2DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2DynamicSubGop

ToByteString Mpeg2DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2DynamicSubGop

FromText Mpeg2DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2DynamicSubGop

ToText Mpeg2DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2DynamicSubGop

type Rep Mpeg2DynamicSubGop Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2DynamicSubGop

type Rep Mpeg2DynamicSubGop = D1 ('MetaData "Mpeg2DynamicSubGop" "Amazonka.MediaConvert.Types.Mpeg2DynamicSubGop" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Mpeg2DynamicSubGop'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpeg2DynamicSubGop") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Mpeg2FramerateControl

newtype Mpeg2FramerateControl Source #

If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The frame rates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.
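
A minimal sketch of the two choices described above, built with the newtype constructor; the strings INITIALIZE_FROM_SOURCE and SPECIFIED are taken from the description, and SPECIFIED is only meaningful alongside FramerateNumerator and FramerateDenominator:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (Mpeg2FramerateControl (..))

-- Follow the input frame rate, or pin the output to the values you
-- give in FramerateNumerator/FramerateDenominator.
followSource, specified :: Mpeg2FramerateControl
followSource = Mpeg2FramerateControl' "INITIALIZE_FROM_SOURCE"
specified    = Mpeg2FramerateControl' "SPECIFIED"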

Instances

Instances details
Eq Mpeg2FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateControl

Ord Mpeg2FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateControl

Read Mpeg2FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateControl

Show Mpeg2FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateControl

Generic Mpeg2FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateControl

Associated Types

type Rep Mpeg2FramerateControl :: Type -> Type #

NFData Mpeg2FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateControl

Methods

rnf :: Mpeg2FramerateControl -> () #

Hashable Mpeg2FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateControl

ToJSON Mpeg2FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateControl

ToJSONKey Mpeg2FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateControl

FromJSON Mpeg2FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateControl

FromJSONKey Mpeg2FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateControl

ToLog Mpeg2FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateControl

ToHeader Mpeg2FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateControl

ToQuery Mpeg2FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateControl

FromXML Mpeg2FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateControl

ToXML Mpeg2FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateControl

ToByteString Mpeg2FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateControl

FromText Mpeg2FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateControl

ToText Mpeg2FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateControl

type Rep Mpeg2FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateControl

type Rep Mpeg2FramerateControl = D1 ('MetaData "Mpeg2FramerateControl" "Amazonka.MediaConvert.Types.Mpeg2FramerateControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Mpeg2FramerateControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpeg2FramerateControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Mpeg2FramerateConversionAlgorithm

newtype Mpeg2FramerateConversionAlgorithm Source #

Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.
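
A sketch that encodes the guidance above (drop duplicate for numerically simple ratios, interpolate otherwise; FrameFormer is left out because of its added cost). The helper and its flag are hypothetical; only the Mpeg2FramerateConversionAlgorithm values come from this page:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (Mpeg2FramerateConversionAlgorithm (..))

-- 'simpleRatio' is a hypothetical flag you would compute from the
-- source and target frame rates (e.g. 60 fps -> 30 fps is simple).
chooseAlgorithm :: Bool -> Mpeg2FramerateConversionAlgorithm
chooseAlgorithm simpleRatio
  | simpleRatio = Mpeg2FramerateConversionAlgorithm' "DUPLICATE_DROP"
  | otherwise   = Mpeg2FramerateConversionAlgorithm' "INTERPOLATE"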

Instances

Instances details
Eq Mpeg2FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateConversionAlgorithm

Ord Mpeg2FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateConversionAlgorithm

Read Mpeg2FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateConversionAlgorithm

Show Mpeg2FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateConversionAlgorithm

Generic Mpeg2FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateConversionAlgorithm

Associated Types

type Rep Mpeg2FramerateConversionAlgorithm :: Type -> Type #

NFData Mpeg2FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateConversionAlgorithm

Hashable Mpeg2FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateConversionAlgorithm

ToJSON Mpeg2FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateConversionAlgorithm

ToJSONKey Mpeg2FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateConversionAlgorithm

FromJSON Mpeg2FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateConversionAlgorithm

FromJSONKey Mpeg2FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateConversionAlgorithm

ToLog Mpeg2FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateConversionAlgorithm

ToHeader Mpeg2FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateConversionAlgorithm

ToQuery Mpeg2FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateConversionAlgorithm

FromXML Mpeg2FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateConversionAlgorithm

ToXML Mpeg2FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateConversionAlgorithm

ToByteString Mpeg2FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateConversionAlgorithm

FromText Mpeg2FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateConversionAlgorithm

ToText Mpeg2FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateConversionAlgorithm

type Rep Mpeg2FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2FramerateConversionAlgorithm

type Rep Mpeg2FramerateConversionAlgorithm = D1 ('MetaData "Mpeg2FramerateConversionAlgorithm" "Amazonka.MediaConvert.Types.Mpeg2FramerateConversionAlgorithm" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Mpeg2FramerateConversionAlgorithm'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpeg2FramerateConversionAlgorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Mpeg2GopSizeUnits

newtype Mpeg2GopSizeUnits Source #

Specify the units for GOP size (GopSize). If you don't specify a value here, by default the encoder measures GOP size in frames.

Instances

Instances details
Eq Mpeg2GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2GopSizeUnits

Ord Mpeg2GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2GopSizeUnits

Read Mpeg2GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2GopSizeUnits

Show Mpeg2GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2GopSizeUnits

Generic Mpeg2GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2GopSizeUnits

Associated Types

type Rep Mpeg2GopSizeUnits :: Type -> Type #

NFData Mpeg2GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2GopSizeUnits

Methods

rnf :: Mpeg2GopSizeUnits -> () #

Hashable Mpeg2GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2GopSizeUnits

ToJSON Mpeg2GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2GopSizeUnits

ToJSONKey Mpeg2GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2GopSizeUnits

FromJSON Mpeg2GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2GopSizeUnits

FromJSONKey Mpeg2GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2GopSizeUnits

ToLog Mpeg2GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2GopSizeUnits

ToHeader Mpeg2GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2GopSizeUnits

ToQuery Mpeg2GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2GopSizeUnits

FromXML Mpeg2GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2GopSizeUnits

ToXML Mpeg2GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2GopSizeUnits

ToByteString Mpeg2GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2GopSizeUnits

FromText Mpeg2GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2GopSizeUnits

ToText Mpeg2GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2GopSizeUnits

type Rep Mpeg2GopSizeUnits Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2GopSizeUnits

type Rep Mpeg2GopSizeUnits = D1 ('MetaData "Mpeg2GopSizeUnits" "Amazonka.MediaConvert.Types.Mpeg2GopSizeUnits" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Mpeg2GopSizeUnits'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpeg2GopSizeUnits") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Mpeg2InterlaceMode

newtype Mpeg2InterlaceMode Source #

Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE), to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first polarity, depending on which of the Follow options you choose.
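
The five scan-type choices named above, written out as ready-made values via the newtype constructor (a convenience sketch only; the strings come from the description):

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (Mpeg2InterlaceMode (..))

progressive, topField, bottomField, followTop, followBottom :: Mpeg2InterlaceMode
progressive  = Mpeg2InterlaceMode' "PROGRESSIVE"
topField     = Mpeg2InterlaceMode' "TOP_FIELD"
bottomField  = Mpeg2InterlaceMode' "BOTTOM_FIELD"
followTop    = Mpeg2InterlaceMode' "FOLLOW_TOP_FIELD"
followBottom = Mpeg2InterlaceMode' "FOLLOW_BOTTOM_FIELD"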

Instances

Instances details
Eq Mpeg2InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2InterlaceMode

Ord Mpeg2InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2InterlaceMode

Read Mpeg2InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2InterlaceMode

Show Mpeg2InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2InterlaceMode

Generic Mpeg2InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2InterlaceMode

Associated Types

type Rep Mpeg2InterlaceMode :: Type -> Type #

NFData Mpeg2InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2InterlaceMode

Methods

rnf :: Mpeg2InterlaceMode -> () #

Hashable Mpeg2InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2InterlaceMode

ToJSON Mpeg2InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2InterlaceMode

ToJSONKey Mpeg2InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2InterlaceMode

FromJSON Mpeg2InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2InterlaceMode

FromJSONKey Mpeg2InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2InterlaceMode

ToLog Mpeg2InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2InterlaceMode

ToHeader Mpeg2InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2InterlaceMode

ToQuery Mpeg2InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2InterlaceMode

FromXML Mpeg2InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2InterlaceMode

ToXML Mpeg2InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2InterlaceMode

ToByteString Mpeg2InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2InterlaceMode

FromText Mpeg2InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2InterlaceMode

ToText Mpeg2InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2InterlaceMode

type Rep Mpeg2InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2InterlaceMode

type Rep Mpeg2InterlaceMode = D1 ('MetaData "Mpeg2InterlaceMode" "Amazonka.MediaConvert.Types.Mpeg2InterlaceMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Mpeg2InterlaceMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpeg2InterlaceMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Mpeg2IntraDcPrecision

newtype Mpeg2IntraDcPrecision Source #

Use Intra DC precision (Mpeg2IntraDcPrecision) to set quantization precision for intra-block DC coefficients. If you choose the value auto, the service will automatically select the precision based on the per-frame compression ratio.

Instances

Instances details
Eq Mpeg2IntraDcPrecision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2IntraDcPrecision

Ord Mpeg2IntraDcPrecision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2IntraDcPrecision

Read Mpeg2IntraDcPrecision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2IntraDcPrecision

Show Mpeg2IntraDcPrecision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2IntraDcPrecision

Generic Mpeg2IntraDcPrecision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2IntraDcPrecision

Associated Types

type Rep Mpeg2IntraDcPrecision :: Type -> Type #

NFData Mpeg2IntraDcPrecision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2IntraDcPrecision

Methods

rnf :: Mpeg2IntraDcPrecision -> () #

Hashable Mpeg2IntraDcPrecision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2IntraDcPrecision

ToJSON Mpeg2IntraDcPrecision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2IntraDcPrecision

ToJSONKey Mpeg2IntraDcPrecision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2IntraDcPrecision

FromJSON Mpeg2IntraDcPrecision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2IntraDcPrecision

FromJSONKey Mpeg2IntraDcPrecision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2IntraDcPrecision

ToLog Mpeg2IntraDcPrecision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2IntraDcPrecision

ToHeader Mpeg2IntraDcPrecision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2IntraDcPrecision

ToQuery Mpeg2IntraDcPrecision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2IntraDcPrecision

FromXML Mpeg2IntraDcPrecision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2IntraDcPrecision

ToXML Mpeg2IntraDcPrecision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2IntraDcPrecision

ToByteString Mpeg2IntraDcPrecision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2IntraDcPrecision

FromText Mpeg2IntraDcPrecision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2IntraDcPrecision

ToText Mpeg2IntraDcPrecision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2IntraDcPrecision

type Rep Mpeg2IntraDcPrecision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2IntraDcPrecision

type Rep Mpeg2IntraDcPrecision = D1 ('MetaData "Mpeg2IntraDcPrecision" "Amazonka.MediaConvert.Types.Mpeg2IntraDcPrecision" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Mpeg2IntraDcPrecision'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpeg2IntraDcPrecision") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Mpeg2ParControl

newtype Mpeg2ParControl Source #

Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.
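
Because SPECIFIED only makes sense together with parNumerator and parDenominator, a small sketch that keeps the three pieces together; the ParChoice type is my own, and only Mpeg2ParControl and its strings come from this page:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (Mpeg2ParControl (..))
import Numeric.Natural (Natural)

-- A helper of my own that carries the PAR mode alongside the
-- numerator/denominator that SPECIFIED requires.
data ParChoice = FollowSourcePar | SpecifiedPar Natural Natural

toParControl :: ParChoice -> Mpeg2ParControl
toParControl FollowSourcePar    = Mpeg2ParControl' "INITIALIZE_FROM_SOURCE"
toParControl (SpecifiedPar _ _) = Mpeg2ParControl' "SPECIFIED"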

Constructors

Mpeg2ParControl' 

Instances

Instances details
Eq Mpeg2ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ParControl

Ord Mpeg2ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ParControl

Read Mpeg2ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ParControl

Show Mpeg2ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ParControl

Generic Mpeg2ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ParControl

Associated Types

type Rep Mpeg2ParControl :: Type -> Type #

NFData Mpeg2ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ParControl

Methods

rnf :: Mpeg2ParControl -> () #

Hashable Mpeg2ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ParControl

ToJSON Mpeg2ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ParControl

ToJSONKey Mpeg2ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ParControl

FromJSON Mpeg2ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ParControl

FromJSONKey Mpeg2ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ParControl

ToLog Mpeg2ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ParControl

ToHeader Mpeg2ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ParControl

ToQuery Mpeg2ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ParControl

FromXML Mpeg2ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ParControl

ToXML Mpeg2ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ParControl

Methods

toXML :: Mpeg2ParControl -> XML #

ToByteString Mpeg2ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ParControl

FromText Mpeg2ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ParControl

ToText Mpeg2ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ParControl

type Rep Mpeg2ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ParControl

type Rep Mpeg2ParControl = D1 ('MetaData "Mpeg2ParControl" "Amazonka.MediaConvert.Types.Mpeg2ParControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Mpeg2ParControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpeg2ParControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Mpeg2QualityTuningLevel

newtype Mpeg2QualityTuningLevel Source #

Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.

Instances

Instances details
Eq Mpeg2QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2QualityTuningLevel

Ord Mpeg2QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2QualityTuningLevel

Read Mpeg2QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2QualityTuningLevel

Show Mpeg2QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2QualityTuningLevel

Generic Mpeg2QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2QualityTuningLevel

Associated Types

type Rep Mpeg2QualityTuningLevel :: Type -> Type #

NFData Mpeg2QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2QualityTuningLevel

Methods

rnf :: Mpeg2QualityTuningLevel -> () #

Hashable Mpeg2QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2QualityTuningLevel

ToJSON Mpeg2QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2QualityTuningLevel

ToJSONKey Mpeg2QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2QualityTuningLevel

FromJSON Mpeg2QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2QualityTuningLevel

FromJSONKey Mpeg2QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2QualityTuningLevel

ToLog Mpeg2QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2QualityTuningLevel

ToHeader Mpeg2QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2QualityTuningLevel

ToQuery Mpeg2QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2QualityTuningLevel

FromXML Mpeg2QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2QualityTuningLevel

ToXML Mpeg2QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2QualityTuningLevel

ToByteString Mpeg2QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2QualityTuningLevel

FromText Mpeg2QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2QualityTuningLevel

ToText Mpeg2QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2QualityTuningLevel

type Rep Mpeg2QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2QualityTuningLevel

type Rep Mpeg2QualityTuningLevel = D1 ('MetaData "Mpeg2QualityTuningLevel" "Amazonka.MediaConvert.Types.Mpeg2QualityTuningLevel" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Mpeg2QualityTuningLevel'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpeg2QualityTuningLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Mpeg2RateControlMode

newtype Mpeg2RateControlMode Source #

Use Rate control mode (Mpeg2RateControlMode) to specify whether the bitrate is variable (vbr) or constant (cbr).
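
The ToJSON/FromJSON instances listed below serialize the wrapped API string, so a value can be dropped straight into a job specification. A sketch assuming the uppercase CBR string is what the service expects for the "cbr" mode mentioned above:

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Aeson as Aeson
import Amazonka.MediaConvert.Types (Mpeg2RateControlMode (..))

-- Constant-bitrate mode, written with the assumed uppercase API string.
cbr :: Mpeg2RateControlMode
cbr = Mpeg2RateControlMode' "CBR"

-- The ToJSON instance below should render this as the bare JSON
-- string "CBR", which is what a job specification expects.
cbrJson :: Aeson.Value
cbrJson = Aeson.toJSON cbr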

Instances

Instances details
Eq Mpeg2RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2RateControlMode

Ord Mpeg2RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2RateControlMode

Read Mpeg2RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2RateControlMode

Show Mpeg2RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2RateControlMode

Generic Mpeg2RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2RateControlMode

Associated Types

type Rep Mpeg2RateControlMode :: Type -> Type #

NFData Mpeg2RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2RateControlMode

Methods

rnf :: Mpeg2RateControlMode -> () #

Hashable Mpeg2RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2RateControlMode

ToJSON Mpeg2RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2RateControlMode

ToJSONKey Mpeg2RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2RateControlMode

FromJSON Mpeg2RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2RateControlMode

FromJSONKey Mpeg2RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2RateControlMode

ToLog Mpeg2RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2RateControlMode

ToHeader Mpeg2RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2RateControlMode

ToQuery Mpeg2RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2RateControlMode

FromXML Mpeg2RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2RateControlMode

ToXML Mpeg2RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2RateControlMode

ToByteString Mpeg2RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2RateControlMode

FromText Mpeg2RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2RateControlMode

ToText Mpeg2RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2RateControlMode

type Rep Mpeg2RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2RateControlMode

type Rep Mpeg2RateControlMode = D1 ('MetaData "Mpeg2RateControlMode" "Amazonka.MediaConvert.Types.Mpeg2RateControlMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Mpeg2RateControlMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpeg2RateControlMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Mpeg2ScanTypeConversionMode

newtype Mpeg2ScanTypeConversionMode Source #

Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).
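
The "Required settings" note ties this value to Telecine and Interlace mode. A sketch of one combination that satisfies those constraints, grouped in a tuple purely for illustration; the strings are taken from the descriptions on this page:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
  ( Mpeg2InterlaceMode (..)
  , Mpeg2ScanTypeConversionMode (..)
  , Mpeg2Telecine (..)
  )

-- Optimized interlacing with soft telecine and a non-progressive
-- interlace mode, per the required-settings note above.
optimizedInterlacing
  :: (Mpeg2ScanTypeConversionMode, Mpeg2Telecine, Mpeg2InterlaceMode)
optimizedInterlacing =
  ( Mpeg2ScanTypeConversionMode' "INTERLACED_OPTIMIZE"
  , Mpeg2Telecine' "SOFT"
  , Mpeg2InterlaceMode' "TOP_FIELD"
  )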

Instances

Instances details
Eq Mpeg2ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ScanTypeConversionMode

Ord Mpeg2ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ScanTypeConversionMode

Read Mpeg2ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ScanTypeConversionMode

Show Mpeg2ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ScanTypeConversionMode

Generic Mpeg2ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ScanTypeConversionMode

Associated Types

type Rep Mpeg2ScanTypeConversionMode :: Type -> Type #

NFData Mpeg2ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ScanTypeConversionMode

Hashable Mpeg2ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ScanTypeConversionMode

ToJSON Mpeg2ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ScanTypeConversionMode

ToJSONKey Mpeg2ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ScanTypeConversionMode

FromJSON Mpeg2ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ScanTypeConversionMode

FromJSONKey Mpeg2ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ScanTypeConversionMode

ToLog Mpeg2ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ScanTypeConversionMode

ToHeader Mpeg2ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ScanTypeConversionMode

ToQuery Mpeg2ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ScanTypeConversionMode

FromXML Mpeg2ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ScanTypeConversionMode

ToXML Mpeg2ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ScanTypeConversionMode

ToByteString Mpeg2ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ScanTypeConversionMode

FromText Mpeg2ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ScanTypeConversionMode

ToText Mpeg2ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ScanTypeConversionMode

type Rep Mpeg2ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2ScanTypeConversionMode

type Rep Mpeg2ScanTypeConversionMode = D1 ('MetaData "Mpeg2ScanTypeConversionMode" "Amazonka.MediaConvert.Types.Mpeg2ScanTypeConversionMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Mpeg2ScanTypeConversionMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpeg2ScanTypeConversionMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Mpeg2SceneChangeDetect

newtype Mpeg2SceneChangeDetect Source #

Enable this setting to insert I-frames at scene changes that the service automatically detects. This improves video quality and is enabled by default.

Instances

Instances details
Eq Mpeg2SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SceneChangeDetect

Ord Mpeg2SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SceneChangeDetect

Read Mpeg2SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SceneChangeDetect

Show Mpeg2SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SceneChangeDetect

Generic Mpeg2SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SceneChangeDetect

Associated Types

type Rep Mpeg2SceneChangeDetect :: Type -> Type #

NFData Mpeg2SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SceneChangeDetect

Methods

rnf :: Mpeg2SceneChangeDetect -> () #

Hashable Mpeg2SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SceneChangeDetect

ToJSON Mpeg2SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SceneChangeDetect

ToJSONKey Mpeg2SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SceneChangeDetect

FromJSON Mpeg2SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SceneChangeDetect

FromJSONKey Mpeg2SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SceneChangeDetect

ToLog Mpeg2SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SceneChangeDetect

ToHeader Mpeg2SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SceneChangeDetect

ToQuery Mpeg2SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SceneChangeDetect

FromXML Mpeg2SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SceneChangeDetect

ToXML Mpeg2SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SceneChangeDetect

ToByteString Mpeg2SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SceneChangeDetect

FromText Mpeg2SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SceneChangeDetect

ToText Mpeg2SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SceneChangeDetect

type Rep Mpeg2SceneChangeDetect Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SceneChangeDetect

type Rep Mpeg2SceneChangeDetect = D1 ('MetaData "Mpeg2SceneChangeDetect" "Amazonka.MediaConvert.Types.Mpeg2SceneChangeDetect" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Mpeg2SceneChangeDetect'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpeg2SceneChangeDetect") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Mpeg2SlowPal

newtype Mpeg2SlowPal Source #

Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.

Constructors

Mpeg2SlowPal' 

Bundled Patterns

pattern Mpeg2SlowPal_DISABLED :: Mpeg2SlowPal 
pattern Mpeg2SlowPal_ENABLED :: Mpeg2SlowPal 
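
The bundled patterns above can be used for both construction and matching. A minimal sketch; the catch-all case is kept because the newtype also admits strings the patterns don't cover:

{-# LANGUAGE PatternSynonyms #-}

import Amazonka.MediaConvert.Types
  (Mpeg2SlowPal, pattern Mpeg2SlowPal_DISABLED, pattern Mpeg2SlowPal_ENABLED)

describeSlowPal :: Mpeg2SlowPal -> String
describeSlowPal Mpeg2SlowPal_ENABLED  = "relabel to 25 fps and resample audio"
describeSlowPal Mpeg2SlowPal_DISABLED = "keep the source cadence"
describeSlowPal _                     = "unrecognised value"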

Instances

Instances details
Eq Mpeg2SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SlowPal

Ord Mpeg2SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SlowPal

Read Mpeg2SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SlowPal

Show Mpeg2SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SlowPal

Generic Mpeg2SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SlowPal

Associated Types

type Rep Mpeg2SlowPal :: Type -> Type #

NFData Mpeg2SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SlowPal

Methods

rnf :: Mpeg2SlowPal -> () #

Hashable Mpeg2SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SlowPal

ToJSON Mpeg2SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SlowPal

ToJSONKey Mpeg2SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SlowPal

FromJSON Mpeg2SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SlowPal

FromJSONKey Mpeg2SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SlowPal

ToLog Mpeg2SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SlowPal

ToHeader Mpeg2SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SlowPal

ToQuery Mpeg2SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SlowPal

FromXML Mpeg2SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SlowPal

ToXML Mpeg2SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SlowPal

Methods

toXML :: Mpeg2SlowPal -> XML #

ToByteString Mpeg2SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SlowPal

FromText Mpeg2SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SlowPal

ToText Mpeg2SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SlowPal

Methods

toText :: Mpeg2SlowPal -> Text #

type Rep Mpeg2SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SlowPal

type Rep Mpeg2SlowPal = D1 ('MetaData "Mpeg2SlowPal" "Amazonka.MediaConvert.Types.Mpeg2SlowPal" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Mpeg2SlowPal'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpeg2SlowPal") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Mpeg2SpatialAdaptiveQuantization

newtype Mpeg2SpatialAdaptiveQuantization Source #

Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (adaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.
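
Following the related-setting note above, a sketch that keeps spatial AQ enabled and picks an overall Adaptive quantization strength by content type. The flag is hypothetical, and the uppercase LOW/HIGH strings are assumed from the Low/High names in the text:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
  (Mpeg2AdaptiveQuantization (..), Mpeg2SpatialAdaptiveQuantization (..))

-- 'flatContent' is a hypothetical flag for cartoons/video games.
spatialAqSettings
  :: Bool -> (Mpeg2SpatialAdaptiveQuantization, Mpeg2AdaptiveQuantization)
spatialAqSettings flatContent =
  ( Mpeg2SpatialAdaptiveQuantization' "ENABLED"
  , if flatContent
      then Mpeg2AdaptiveQuantization' "LOW"
      else Mpeg2AdaptiveQuantization' "HIGH"
  )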

Instances

Instances details
Eq Mpeg2SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SpatialAdaptiveQuantization

Ord Mpeg2SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SpatialAdaptiveQuantization

Read Mpeg2SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SpatialAdaptiveQuantization

Show Mpeg2SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SpatialAdaptiveQuantization

Generic Mpeg2SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SpatialAdaptiveQuantization

Associated Types

type Rep Mpeg2SpatialAdaptiveQuantization :: Type -> Type #

NFData Mpeg2SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SpatialAdaptiveQuantization

Hashable Mpeg2SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SpatialAdaptiveQuantization

ToJSON Mpeg2SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SpatialAdaptiveQuantization

ToJSONKey Mpeg2SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SpatialAdaptiveQuantization

FromJSON Mpeg2SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SpatialAdaptiveQuantization

FromJSONKey Mpeg2SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SpatialAdaptiveQuantization

ToLog Mpeg2SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SpatialAdaptiveQuantization

ToHeader Mpeg2SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SpatialAdaptiveQuantization

ToQuery Mpeg2SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SpatialAdaptiveQuantization

FromXML Mpeg2SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SpatialAdaptiveQuantization

ToXML Mpeg2SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SpatialAdaptiveQuantization

ToByteString Mpeg2SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SpatialAdaptiveQuantization

FromText Mpeg2SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SpatialAdaptiveQuantization

ToText Mpeg2SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SpatialAdaptiveQuantization

type Rep Mpeg2SpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2SpatialAdaptiveQuantization

type Rep Mpeg2SpatialAdaptiveQuantization = D1 ('MetaData "Mpeg2SpatialAdaptiveQuantization" "Amazonka.MediaConvert.Types.Mpeg2SpatialAdaptiveQuantization" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Mpeg2SpatialAdaptiveQuantization'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpeg2SpatialAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Mpeg2Syntax

newtype Mpeg2Syntax Source #

Specify whether this output's video uses the D10 syntax. Keep the default value to not use the syntax. Related settings: When you choose D10 (D_10) for your MXF profile (profile), you must also set this value to D10 (D_10).

Constructors

Mpeg2Syntax' 

Bundled Patterns

pattern Mpeg2Syntax_DEFAULT :: Mpeg2Syntax 
pattern Mpeg2Syntax_D_10 :: Mpeg2Syntax 
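
A tiny sketch of the rule in the description, using the bundled patterns above; the flag standing in for "my MXF profile is D-10" is hypothetical:

{-# LANGUAGE PatternSynonyms #-}

import Amazonka.MediaConvert.Types
  (Mpeg2Syntax, pattern Mpeg2Syntax_DEFAULT, pattern Mpeg2Syntax_D_10)

-- Pick the syntax to match the MXF profile, per the note above.
syntaxFor :: Bool -> Mpeg2Syntax
syntaxFor usingD10Profile
  | usingD10Profile = Mpeg2Syntax_D_10
  | otherwise       = Mpeg2Syntax_DEFAULT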

Instances

Instances details
Eq Mpeg2Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Syntax

Ord Mpeg2Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Syntax

Read Mpeg2Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Syntax

Show Mpeg2Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Syntax

Generic Mpeg2Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Syntax

Associated Types

type Rep Mpeg2Syntax :: Type -> Type #

NFData Mpeg2Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Syntax

Methods

rnf :: Mpeg2Syntax -> () #

Hashable Mpeg2Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Syntax

ToJSON Mpeg2Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Syntax

ToJSONKey Mpeg2Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Syntax

FromJSON Mpeg2Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Syntax

FromJSONKey Mpeg2Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Syntax

ToLog Mpeg2Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Syntax

ToHeader Mpeg2Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Syntax

ToQuery Mpeg2Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Syntax

FromXML Mpeg2Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Syntax

ToXML Mpeg2Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Syntax

Methods

toXML :: Mpeg2Syntax -> XML #

ToByteString Mpeg2Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Syntax

FromText Mpeg2Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Syntax

ToText Mpeg2Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Syntax

Methods

toText :: Mpeg2Syntax -> Text #

type Rep Mpeg2Syntax Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Syntax

type Rep Mpeg2Syntax = D1 ('MetaData "Mpeg2Syntax" "Amazonka.MediaConvert.Types.Mpeg2Syntax" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Mpeg2Syntax'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpeg2Syntax") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Mpeg2Telecine

newtype Mpeg2Telecine Source #

When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard or soft telecine to create a smoother picture. Hard telecine (HARD) produces a 29.97i output. Soft telecine (SOFT) produces a 23.976 output that signals to the video player device to do the conversion during playback. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 without doing anything with the field polarity to create a smoother picture.
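
A sketch naming the three behaviours described above and mapping them to the service's strings; the TelecineChoice type is my own, the strings come from the description:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (Mpeg2Telecine (..))

data TelecineChoice = NoTelecine | HardTelecine | SoftTelecine

toTelecine :: TelecineChoice -> Mpeg2Telecine
toTelecine NoTelecine   = Mpeg2Telecine' "NONE"
toTelecine HardTelecine = Mpeg2Telecine' "HARD"
toTelecine SoftTelecine = Mpeg2Telecine' "SOFT"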

Constructors

Mpeg2Telecine' 

Instances

Instances details
Eq Mpeg2Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Telecine

Ord Mpeg2Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Telecine

Read Mpeg2Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Telecine

Show Mpeg2Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Telecine

Generic Mpeg2Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Telecine

Associated Types

type Rep Mpeg2Telecine :: Type -> Type #

NFData Mpeg2Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Telecine

Methods

rnf :: Mpeg2Telecine -> () #

Hashable Mpeg2Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Telecine

ToJSON Mpeg2Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Telecine

ToJSONKey Mpeg2Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Telecine

FromJSON Mpeg2Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Telecine

FromJSONKey Mpeg2Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Telecine

ToLog Mpeg2Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Telecine

ToHeader Mpeg2Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Telecine

ToQuery Mpeg2Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Telecine

FromXML Mpeg2Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Telecine

ToXML Mpeg2Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Telecine

Methods

toXML :: Mpeg2Telecine -> XML #

ToByteString Mpeg2Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Telecine

FromText Mpeg2Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Telecine

ToText Mpeg2Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Telecine

Methods

toText :: Mpeg2Telecine -> Text #

type Rep Mpeg2Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Telecine

type Rep Mpeg2Telecine = D1 ('MetaData "Mpeg2Telecine" "Amazonka.MediaConvert.Types.Mpeg2Telecine" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Mpeg2Telecine'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpeg2Telecine") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Mpeg2TemporalAdaptiveQuantization

newtype Mpeg2TemporalAdaptiveQuantization Source #

Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on temporal variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas of the frame that aren't moving and uses more bits on complex objects with sharp edges that move a lot. For example, this feature improves the readability of text tickers on newscasts and scoreboards on sports matches. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen that doesn't have moving objects with sharp edges, such as sports athletes' faces, you might choose to disable this feature. Related setting: When you enable temporal quantization, adjust the strength of the filter with the setting Adaptive quantization (adaptiveQuantization).

Instances

Instances details
Eq Mpeg2TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2TemporalAdaptiveQuantization

Ord Mpeg2TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2TemporalAdaptiveQuantization

Read Mpeg2TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2TemporalAdaptiveQuantization

Show Mpeg2TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2TemporalAdaptiveQuantization

Generic Mpeg2TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2TemporalAdaptiveQuantization

Associated Types

type Rep Mpeg2TemporalAdaptiveQuantization :: Type -> Type #

NFData Mpeg2TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2TemporalAdaptiveQuantization

Hashable Mpeg2TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2TemporalAdaptiveQuantization

ToJSON Mpeg2TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2TemporalAdaptiveQuantization

ToJSONKey Mpeg2TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2TemporalAdaptiveQuantization

FromJSON Mpeg2TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2TemporalAdaptiveQuantization

FromJSONKey Mpeg2TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2TemporalAdaptiveQuantization

ToLog Mpeg2TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2TemporalAdaptiveQuantization

ToHeader Mpeg2TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2TemporalAdaptiveQuantization

ToQuery Mpeg2TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2TemporalAdaptiveQuantization

FromXML Mpeg2TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2TemporalAdaptiveQuantization

ToXML Mpeg2TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2TemporalAdaptiveQuantization

ToByteString Mpeg2TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2TemporalAdaptiveQuantization

FromText Mpeg2TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2TemporalAdaptiveQuantization

ToText Mpeg2TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2TemporalAdaptiveQuantization

type Rep Mpeg2TemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2TemporalAdaptiveQuantization

type Rep Mpeg2TemporalAdaptiveQuantization = D1 ('MetaData "Mpeg2TemporalAdaptiveQuantization" "Amazonka.MediaConvert.Types.Mpeg2TemporalAdaptiveQuantization" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Mpeg2TemporalAdaptiveQuantization'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMpeg2TemporalAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

MsSmoothAudioDeduplication

newtype MsSmoothAudioDeduplication Source #

COMBINE_DUPLICATE_STREAMS combines identical audio encoding settings across a Microsoft Smooth output group into a single audio stream.

Instances

Instances details
Eq MsSmoothAudioDeduplication Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAudioDeduplication

Ord MsSmoothAudioDeduplication Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAudioDeduplication

Read MsSmoothAudioDeduplication Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAudioDeduplication

Show MsSmoothAudioDeduplication Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAudioDeduplication

Generic MsSmoothAudioDeduplication Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAudioDeduplication

Associated Types

type Rep MsSmoothAudioDeduplication :: Type -> Type #

NFData MsSmoothAudioDeduplication Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAudioDeduplication

Hashable MsSmoothAudioDeduplication Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAudioDeduplication

ToJSON MsSmoothAudioDeduplication Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAudioDeduplication

ToJSONKey MsSmoothAudioDeduplication Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAudioDeduplication

FromJSON MsSmoothAudioDeduplication Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAudioDeduplication

FromJSONKey MsSmoothAudioDeduplication Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAudioDeduplication

ToLog MsSmoothAudioDeduplication Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAudioDeduplication

ToHeader MsSmoothAudioDeduplication Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAudioDeduplication

ToQuery MsSmoothAudioDeduplication Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAudioDeduplication

FromXML MsSmoothAudioDeduplication Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAudioDeduplication

ToXML MsSmoothAudioDeduplication Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAudioDeduplication

ToByteString MsSmoothAudioDeduplication Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAudioDeduplication

FromText MsSmoothAudioDeduplication Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAudioDeduplication

ToText MsSmoothAudioDeduplication Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAudioDeduplication

type Rep MsSmoothAudioDeduplication Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAudioDeduplication

type Rep MsSmoothAudioDeduplication = D1 ('MetaData "MsSmoothAudioDeduplication" "Amazonka.MediaConvert.Types.MsSmoothAudioDeduplication" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "MsSmoothAudioDeduplication'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMsSmoothAudioDeduplication") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

MsSmoothFragmentLengthControl

newtype MsSmoothFragmentLengthControl Source #

Specify how you want MediaConvert to determine the fragment length. Choose Exact (EXACT) to have the encoder use the exact length that you specify with the setting Fragment length (FragmentLength). This might result in extra I-frames. Choose Multiple of GOP (GOP_MULTIPLE) to have the encoder round up the segment lengths to match the next GOP boundary.
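
A minimal sketch of the two choices as Haskell values, assuming the exported newtype constructor; EXACT and GOP_MULTIPLE are the wire strings named in parentheses above:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types.MsSmoothFragmentLengthControl

-- Exact fragment lengths may force extra I-frames; GOP_MULTIPLE rounds up to
-- the next GOP boundary instead.
exactFragments, gopAlignedFragments :: MsSmoothFragmentLengthControl
exactFragments      = MsSmoothFragmentLengthControl' "EXACT"
gopAlignedFragments = MsSmoothFragmentLengthControl' "GOP_MULTIPLE"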

Instances

Instances details
Eq MsSmoothFragmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothFragmentLengthControl

Ord MsSmoothFragmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothFragmentLengthControl

Read MsSmoothFragmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothFragmentLengthControl

Show MsSmoothFragmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothFragmentLengthControl

Generic MsSmoothFragmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothFragmentLengthControl

Associated Types

type Rep MsSmoothFragmentLengthControl :: Type -> Type #

NFData MsSmoothFragmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothFragmentLengthControl

Hashable MsSmoothFragmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothFragmentLengthControl

ToJSON MsSmoothFragmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothFragmentLengthControl

ToJSONKey MsSmoothFragmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothFragmentLengthControl

FromJSON MsSmoothFragmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothFragmentLengthControl

FromJSONKey MsSmoothFragmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothFragmentLengthControl

ToLog MsSmoothFragmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothFragmentLengthControl

ToHeader MsSmoothFragmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothFragmentLengthControl

ToQuery MsSmoothFragmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothFragmentLengthControl

FromXML MsSmoothFragmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothFragmentLengthControl

ToXML MsSmoothFragmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothFragmentLengthControl

ToByteString MsSmoothFragmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothFragmentLengthControl

FromText MsSmoothFragmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothFragmentLengthControl

ToText MsSmoothFragmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothFragmentLengthControl

type Rep MsSmoothFragmentLengthControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothFragmentLengthControl

type Rep MsSmoothFragmentLengthControl = D1 ('MetaData "MsSmoothFragmentLengthControl" "Amazonka.MediaConvert.Types.MsSmoothFragmentLengthControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "MsSmoothFragmentLengthControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMsSmoothFragmentLengthControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

MsSmoothManifestEncoding

newtype MsSmoothManifestEncoding Source #

Use Manifest encoding (MsSmoothManifestEncoding) to specify the encoding format for the server and client manifest. Valid options are utf8 and utf16.

Instances

Instances details
Eq MsSmoothManifestEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothManifestEncoding

Ord MsSmoothManifestEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothManifestEncoding

Read MsSmoothManifestEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothManifestEncoding

Show MsSmoothManifestEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothManifestEncoding

Generic MsSmoothManifestEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothManifestEncoding

Associated Types

type Rep MsSmoothManifestEncoding :: Type -> Type #

NFData MsSmoothManifestEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothManifestEncoding

Hashable MsSmoothManifestEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothManifestEncoding

ToJSON MsSmoothManifestEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothManifestEncoding

ToJSONKey MsSmoothManifestEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothManifestEncoding

FromJSON MsSmoothManifestEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothManifestEncoding

FromJSONKey MsSmoothManifestEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothManifestEncoding

ToLog MsSmoothManifestEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothManifestEncoding

ToHeader MsSmoothManifestEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothManifestEncoding

ToQuery MsSmoothManifestEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothManifestEncoding

FromXML MsSmoothManifestEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothManifestEncoding

ToXML MsSmoothManifestEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothManifestEncoding

ToByteString MsSmoothManifestEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothManifestEncoding

FromText MsSmoothManifestEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothManifestEncoding

ToText MsSmoothManifestEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothManifestEncoding

type Rep MsSmoothManifestEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothManifestEncoding

type Rep MsSmoothManifestEncoding = D1 ('MetaData "MsSmoothManifestEncoding" "Amazonka.MediaConvert.Types.MsSmoothManifestEncoding" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "MsSmoothManifestEncoding'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMsSmoothManifestEncoding") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

MxfAfdSignaling

newtype MxfAfdSignaling Source #

Optional. When you have AFD signaling set up in your output video stream, use this setting to choose whether to also include it in the MXF wrapper. Choose Don't copy (NO_COPY) to exclude AFD signaling from the MXF wrapper. Choose Copy from video stream (COPY_FROM_VIDEO) to copy the AFD values from the video stream for this output to the MXF wrapper. Regardless of which option you choose, the AFD values remain in the video stream. Related settings: To set up your output to include or exclude AFD values, see AfdSignaling, under VideoDescription. On the console, find AFD signaling under the output's video encoding settings.

Constructors

MxfAfdSignaling' 
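
For illustration, a value that copies AFD into the MXF wrapper, built with the MxfAfdSignaling' constructor shown above; COPY_FROM_VIDEO is the wire string named in the description:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types.MxfAfdSignaling

-- Keep AFD in the video stream and also copy it into the MXF wrapper.
copyAfdToWrapper :: MxfAfdSignaling
copyAfdToWrapper = MxfAfdSignaling' "COPY_FROM_VIDEO"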

Instances

Instances details
Eq MxfAfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfAfdSignaling

Ord MxfAfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfAfdSignaling

Read MxfAfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfAfdSignaling

Show MxfAfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfAfdSignaling

Generic MxfAfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfAfdSignaling

Associated Types

type Rep MxfAfdSignaling :: Type -> Type #

NFData MxfAfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfAfdSignaling

Methods

rnf :: MxfAfdSignaling -> () #

Hashable MxfAfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfAfdSignaling

ToJSON MxfAfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfAfdSignaling

ToJSONKey MxfAfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfAfdSignaling

FromJSON MxfAfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfAfdSignaling

FromJSONKey MxfAfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfAfdSignaling

ToLog MxfAfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfAfdSignaling

ToHeader MxfAfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfAfdSignaling

ToQuery MxfAfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfAfdSignaling

FromXML MxfAfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfAfdSignaling

ToXML MxfAfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfAfdSignaling

Methods

toXML :: MxfAfdSignaling -> XML #

ToByteString MxfAfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfAfdSignaling

FromText MxfAfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfAfdSignaling

ToText MxfAfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfAfdSignaling

type Rep MxfAfdSignaling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfAfdSignaling

type Rep MxfAfdSignaling = D1 ('MetaData "MxfAfdSignaling" "Amazonka.MediaConvert.Types.MxfAfdSignaling" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "MxfAfdSignaling'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMxfAfdSignaling") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

MxfProfile

newtype MxfProfile Source #

Specify the MXF profile, also called shim, for this output. When you choose Auto, MediaConvert chooses a profile based on the video codec and resolution. For a list of codecs supported with each MXF profile, see https://docs.aws.amazon.com/mediaconvert/latest/ug/codecs-supported-with-each-mxf-profile.html. For more information about the automatic selection behavior, see https://docs.aws.amazon.com/mediaconvert/latest/ug/default-automatic-selection-of-mxf-profiles.html.

Constructors

MxfProfile' 

Fields

Bundled Patterns

pattern MxfProfile_D_10 :: MxfProfile 
pattern MxfProfile_OP1A :: MxfProfile 
pattern MxfProfile_XAVC :: MxfProfile 
pattern MxfProfile_XDCAM :: MxfProfile 
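
A small sketch that dispatches on the bundled patterns above; the catch-all arm covers wire values this library version has no pattern for, and the labels are illustrative only:

{-# LANGUAGE LambdaCase #-}

import Amazonka.MediaConvert.Types.MxfProfile

-- Map each documented shim to a short label; fall back to Show for anything
-- else, since the newtype is open rather than a closed sum.
describeProfile :: MxfProfile -> String
describeProfile = \case
  MxfProfile_D_10  -> "D-10 shim"
  MxfProfile_OP1A  -> "OP1a shim"
  MxfProfile_XAVC  -> "XAVC shim"
  MxfProfile_XDCAM -> "XDCAM shim"
  other            -> "unrecognised profile: " <> show other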

Instances

Instances details
Eq MxfProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfProfile

Ord MxfProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfProfile

Read MxfProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfProfile

Show MxfProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfProfile

Generic MxfProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfProfile

Associated Types

type Rep MxfProfile :: Type -> Type #

NFData MxfProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfProfile

Methods

rnf :: MxfProfile -> () #

Hashable MxfProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfProfile

ToJSON MxfProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfProfile

ToJSONKey MxfProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfProfile

FromJSON MxfProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfProfile

FromJSONKey MxfProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfProfile

ToLog MxfProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfProfile

ToHeader MxfProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfProfile

ToQuery MxfProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfProfile

FromXML MxfProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfProfile

ToXML MxfProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfProfile

Methods

toXML :: MxfProfile -> XML #

ToByteString MxfProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfProfile

FromText MxfProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfProfile

ToText MxfProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfProfile

Methods

toText :: MxfProfile -> Text #

type Rep MxfProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfProfile

type Rep MxfProfile = D1 ('MetaData "MxfProfile" "Amazonka.MediaConvert.Types.MxfProfile" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "MxfProfile'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMxfProfile") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

MxfXavcDurationMode

newtype MxfXavcDurationMode Source #

To create an output that complies with the XAVC file format guidelines for interoperability, keep the default value, Drop frames for compliance (DROP_FRAMES_FOR_COMPLIANCE). To include all frames from your input in this output, choose Allow any duration (ALLOW_ANY_DURATION). The number of frames that MediaConvert excludes when you set this to Drop frames for compliance depends on the output frame rate and duration.
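
A minimal sketch of selecting the non-default behaviour, assuming the exported newtype constructor; ALLOW_ANY_DURATION is the wire string named above:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types.MxfXavcDurationMode

-- Keep every input frame at the cost of strict XAVC interoperability.
keepAllFrames :: MxfXavcDurationMode
keepAllFrames = MxfXavcDurationMode' "ALLOW_ANY_DURATION"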

Instances

Instances details
Eq MxfXavcDurationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcDurationMode

Ord MxfXavcDurationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcDurationMode

Read MxfXavcDurationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcDurationMode

Show MxfXavcDurationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcDurationMode

Generic MxfXavcDurationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcDurationMode

Associated Types

type Rep MxfXavcDurationMode :: Type -> Type #

NFData MxfXavcDurationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcDurationMode

Methods

rnf :: MxfXavcDurationMode -> () #

Hashable MxfXavcDurationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcDurationMode

ToJSON MxfXavcDurationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcDurationMode

ToJSONKey MxfXavcDurationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcDurationMode

FromJSON MxfXavcDurationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcDurationMode

FromJSONKey MxfXavcDurationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcDurationMode

ToLog MxfXavcDurationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcDurationMode

ToHeader MxfXavcDurationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcDurationMode

ToQuery MxfXavcDurationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcDurationMode

FromXML MxfXavcDurationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcDurationMode

ToXML MxfXavcDurationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcDurationMode

ToByteString MxfXavcDurationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcDurationMode

FromText MxfXavcDurationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcDurationMode

ToText MxfXavcDurationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcDurationMode

type Rep MxfXavcDurationMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcDurationMode

type Rep MxfXavcDurationMode = D1 ('MetaData "MxfXavcDurationMode" "Amazonka.MediaConvert.Types.MxfXavcDurationMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "MxfXavcDurationMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMxfXavcDurationMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

NielsenActiveWatermarkProcessType

newtype NielsenActiveWatermarkProcessType Source #

Choose the type of Nielsen watermarks that you want in your outputs. When you choose NAES 2 and NW (NAES2_AND_NW), you must provide a value for the setting SID (sourceId). When you choose CBET (CBET), you must provide a value for the setting CSID (cbetSourceId). When you choose NAES 2, NW, and CBET (NAES2_AND_NW_AND_CBET), you must provide values for both of these settings.
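
The description implies a small validity rule, sketched below against the wrapped wire strings; the field accessor comes from the generated newtype (see the type Rep at the end of this entry), and the setting names sourceId/cbetSourceId are the ones quoted above:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types.NielsenActiveWatermarkProcessType

-- Which companion settings each process type requires, per the description.
requiredNielsenIds :: NielsenActiveWatermarkProcessType -> [String]
requiredNielsenIds t = case fromNielsenActiveWatermarkProcessType t of
  "NAES2_AND_NW"          -> ["sourceId"]
  "CBET"                  -> ["cbetSourceId"]
  "NAES2_AND_NW_AND_CBET" -> ["sourceId", "cbetSourceId"]
  _                       -> []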

Instances

Instances details
Eq NielsenActiveWatermarkProcessType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenActiveWatermarkProcessType

Ord NielsenActiveWatermarkProcessType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenActiveWatermarkProcessType

Read NielsenActiveWatermarkProcessType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenActiveWatermarkProcessType

Show NielsenActiveWatermarkProcessType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenActiveWatermarkProcessType

Generic NielsenActiveWatermarkProcessType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenActiveWatermarkProcessType

Associated Types

type Rep NielsenActiveWatermarkProcessType :: Type -> Type #

NFData NielsenActiveWatermarkProcessType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenActiveWatermarkProcessType

Hashable NielsenActiveWatermarkProcessType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenActiveWatermarkProcessType

ToJSON NielsenActiveWatermarkProcessType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenActiveWatermarkProcessType

ToJSONKey NielsenActiveWatermarkProcessType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenActiveWatermarkProcessType

FromJSON NielsenActiveWatermarkProcessType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenActiveWatermarkProcessType

FromJSONKey NielsenActiveWatermarkProcessType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenActiveWatermarkProcessType

ToLog NielsenActiveWatermarkProcessType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenActiveWatermarkProcessType

ToHeader NielsenActiveWatermarkProcessType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenActiveWatermarkProcessType

ToQuery NielsenActiveWatermarkProcessType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenActiveWatermarkProcessType

FromXML NielsenActiveWatermarkProcessType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenActiveWatermarkProcessType

ToXML NielsenActiveWatermarkProcessType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenActiveWatermarkProcessType

ToByteString NielsenActiveWatermarkProcessType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenActiveWatermarkProcessType

FromText NielsenActiveWatermarkProcessType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenActiveWatermarkProcessType

ToText NielsenActiveWatermarkProcessType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenActiveWatermarkProcessType

type Rep NielsenActiveWatermarkProcessType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenActiveWatermarkProcessType

type Rep NielsenActiveWatermarkProcessType = D1 ('MetaData "NielsenActiveWatermarkProcessType" "Amazonka.MediaConvert.Types.NielsenActiveWatermarkProcessType" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "NielsenActiveWatermarkProcessType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromNielsenActiveWatermarkProcessType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

NielsenSourceWatermarkStatusType

newtype NielsenSourceWatermarkStatusType Source #

Required. Specify whether your source content already contains Nielsen non-linear watermarks. When you set this value to Watermarked (WATERMARKED), the service fails the job. Nielsen requires that you add non-linear watermarking only to clean content that doesn't already have non-linear Nielsen watermarks.

Instances

Instances details
Eq NielsenSourceWatermarkStatusType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenSourceWatermarkStatusType

Ord NielsenSourceWatermarkStatusType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenSourceWatermarkStatusType

Read NielsenSourceWatermarkStatusType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenSourceWatermarkStatusType

Show NielsenSourceWatermarkStatusType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenSourceWatermarkStatusType

Generic NielsenSourceWatermarkStatusType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenSourceWatermarkStatusType

Associated Types

type Rep NielsenSourceWatermarkStatusType :: Type -> Type #

NFData NielsenSourceWatermarkStatusType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenSourceWatermarkStatusType

Hashable NielsenSourceWatermarkStatusType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenSourceWatermarkStatusType

ToJSON NielsenSourceWatermarkStatusType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenSourceWatermarkStatusType

ToJSONKey NielsenSourceWatermarkStatusType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenSourceWatermarkStatusType

FromJSON NielsenSourceWatermarkStatusType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenSourceWatermarkStatusType

FromJSONKey NielsenSourceWatermarkStatusType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenSourceWatermarkStatusType

ToLog NielsenSourceWatermarkStatusType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenSourceWatermarkStatusType

ToHeader NielsenSourceWatermarkStatusType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenSourceWatermarkStatusType

ToQuery NielsenSourceWatermarkStatusType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenSourceWatermarkStatusType

FromXML NielsenSourceWatermarkStatusType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenSourceWatermarkStatusType

ToXML NielsenSourceWatermarkStatusType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenSourceWatermarkStatusType

ToByteString NielsenSourceWatermarkStatusType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenSourceWatermarkStatusType

FromText NielsenSourceWatermarkStatusType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenSourceWatermarkStatusType

ToText NielsenSourceWatermarkStatusType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenSourceWatermarkStatusType

type Rep NielsenSourceWatermarkStatusType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenSourceWatermarkStatusType

type Rep NielsenSourceWatermarkStatusType = D1 ('MetaData "NielsenSourceWatermarkStatusType" "Amazonka.MediaConvert.Types.NielsenSourceWatermarkStatusType" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "NielsenSourceWatermarkStatusType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromNielsenSourceWatermarkStatusType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

NielsenUniqueTicPerAudioTrackType

newtype NielsenUniqueTicPerAudioTrackType Source #

To create assets that have the same TIC values in each audio track, keep the default value Share TICs (SAME_TICS_PER_TRACK). To create assets that have unique TIC values for each audio track, choose Use unique TICs (RESERVE_UNIQUE_TICS_PER_TRACK).
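
A minimal sketch of requesting unique TICs, assuming the exported newtype constructor; RESERVE_UNIQUE_TICS_PER_TRACK is the wire string named above:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types.NielsenUniqueTicPerAudioTrackType

-- Request a distinct TIC value for each audio track.
uniqueTicsPerTrack :: NielsenUniqueTicPerAudioTrackType
uniqueTicsPerTrack = NielsenUniqueTicPerAudioTrackType' "RESERVE_UNIQUE_TICS_PER_TRACK"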

Instances

Instances details
Eq NielsenUniqueTicPerAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenUniqueTicPerAudioTrackType

Ord NielsenUniqueTicPerAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenUniqueTicPerAudioTrackType

Read NielsenUniqueTicPerAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenUniqueTicPerAudioTrackType

Show NielsenUniqueTicPerAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenUniqueTicPerAudioTrackType

Generic NielsenUniqueTicPerAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenUniqueTicPerAudioTrackType

Associated Types

type Rep NielsenUniqueTicPerAudioTrackType :: Type -> Type #

NFData NielsenUniqueTicPerAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenUniqueTicPerAudioTrackType

Hashable NielsenUniqueTicPerAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenUniqueTicPerAudioTrackType

ToJSON NielsenUniqueTicPerAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenUniqueTicPerAudioTrackType

ToJSONKey NielsenUniqueTicPerAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenUniqueTicPerAudioTrackType

FromJSON NielsenUniqueTicPerAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenUniqueTicPerAudioTrackType

FromJSONKey NielsenUniqueTicPerAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenUniqueTicPerAudioTrackType

ToLog NielsenUniqueTicPerAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenUniqueTicPerAudioTrackType

ToHeader NielsenUniqueTicPerAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenUniqueTicPerAudioTrackType

ToQuery NielsenUniqueTicPerAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenUniqueTicPerAudioTrackType

FromXML NielsenUniqueTicPerAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenUniqueTicPerAudioTrackType

ToXML NielsenUniqueTicPerAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenUniqueTicPerAudioTrackType

ToByteString NielsenUniqueTicPerAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenUniqueTicPerAudioTrackType

FromText NielsenUniqueTicPerAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenUniqueTicPerAudioTrackType

ToText NielsenUniqueTicPerAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenUniqueTicPerAudioTrackType

type Rep NielsenUniqueTicPerAudioTrackType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenUniqueTicPerAudioTrackType

type Rep NielsenUniqueTicPerAudioTrackType = D1 ('MetaData "NielsenUniqueTicPerAudioTrackType" "Amazonka.MediaConvert.Types.NielsenUniqueTicPerAudioTrackType" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "NielsenUniqueTicPerAudioTrackType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromNielsenUniqueTicPerAudioTrackType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

NoiseFilterPostTemporalSharpening

newtype NoiseFilterPostTemporalSharpening Source #

Optional. When you set Noise reducer (noiseReducer) to Temporal (TEMPORAL), you can use this setting to apply sharpening. The default behavior, Auto (AUTO), allows the transcoder to determine whether to apply filtering, depending on input type and quality. When you set Noise reducer to Temporal, your output bandwidth is reduced. When Post temporal sharpening is also enabled, that bandwidth reduction is smaller.

Instances

Instances details
Eq NoiseFilterPostTemporalSharpening Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseFilterPostTemporalSharpening

Ord NoiseFilterPostTemporalSharpening Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseFilterPostTemporalSharpening

Read NoiseFilterPostTemporalSharpening Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseFilterPostTemporalSharpening

Show NoiseFilterPostTemporalSharpening Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseFilterPostTemporalSharpening

Generic NoiseFilterPostTemporalSharpening Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseFilterPostTemporalSharpening

Associated Types

type Rep NoiseFilterPostTemporalSharpening :: Type -> Type #

NFData NoiseFilterPostTemporalSharpening Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseFilterPostTemporalSharpening

Hashable NoiseFilterPostTemporalSharpening Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseFilterPostTemporalSharpening

ToJSON NoiseFilterPostTemporalSharpening Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseFilterPostTemporalSharpening

ToJSONKey NoiseFilterPostTemporalSharpening Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseFilterPostTemporalSharpening

FromJSON NoiseFilterPostTemporalSharpening Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseFilterPostTemporalSharpening

FromJSONKey NoiseFilterPostTemporalSharpening Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseFilterPostTemporalSharpening

ToLog NoiseFilterPostTemporalSharpening Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseFilterPostTemporalSharpening

ToHeader NoiseFilterPostTemporalSharpening Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseFilterPostTemporalSharpening

ToQuery NoiseFilterPostTemporalSharpening Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseFilterPostTemporalSharpening

FromXML NoiseFilterPostTemporalSharpening Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseFilterPostTemporalSharpening

ToXML NoiseFilterPostTemporalSharpening Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseFilterPostTemporalSharpening

ToByteString NoiseFilterPostTemporalSharpening Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseFilterPostTemporalSharpening

FromText NoiseFilterPostTemporalSharpening Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseFilterPostTemporalSharpening

ToText NoiseFilterPostTemporalSharpening Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseFilterPostTemporalSharpening

type Rep NoiseFilterPostTemporalSharpening Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseFilterPostTemporalSharpening

type Rep NoiseFilterPostTemporalSharpening = D1 ('MetaData "NoiseFilterPostTemporalSharpening" "Amazonka.MediaConvert.Types.NoiseFilterPostTemporalSharpening" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "NoiseFilterPostTemporalSharpening'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromNoiseFilterPostTemporalSharpening") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

NoiseReducerFilter

newtype NoiseReducerFilter Source #

Use Noise reducer filter (NoiseReducerFilter) to select one of the following spatial image filtering functions. To use this setting, you must also enable Noise reducer (NoiseReducer).
* Bilateral preserves edges while reducing noise.
* Mean (softest), Gaussian, Lanczos, and Sharpen (sharpest) do convolution filtering.
* Conserve does min/max noise reduction.
* Spatial does frequency-domain filtering based on JND principles.
* Temporal optimizes video quality for complex motion.
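
A hedged sketch of inspecting the chosen filter via the generated field accessor; the upper-case wire forms are an assumption based on the naming convention used elsewhere in this API:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types.NoiseReducerFilter

-- True for the convolution filters in the list above (Mean, Gaussian,
-- Lanczos, Sharpen); the wire strings here are assumed, not documented.
isConvolutionFilter :: NoiseReducerFilter -> Bool
isConvolutionFilter f =
  fromNoiseReducerFilter f `elem` ["MEAN", "GAUSSIAN", "LANCZOS", "SHARPEN"]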

Instances

Instances details
Eq NoiseReducerFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilter

Ord NoiseReducerFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilter

Read NoiseReducerFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilter

Show NoiseReducerFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilter

Generic NoiseReducerFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilter

Associated Types

type Rep NoiseReducerFilter :: Type -> Type #

NFData NoiseReducerFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilter

Methods

rnf :: NoiseReducerFilter -> () #

Hashable NoiseReducerFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilter

ToJSON NoiseReducerFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilter

ToJSONKey NoiseReducerFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilter

FromJSON NoiseReducerFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilter

FromJSONKey NoiseReducerFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilter

ToLog NoiseReducerFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilter

ToHeader NoiseReducerFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilter

ToQuery NoiseReducerFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilter

FromXML NoiseReducerFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilter

ToXML NoiseReducerFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilter

ToByteString NoiseReducerFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilter

FromText NoiseReducerFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilter

ToText NoiseReducerFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilter

type Rep NoiseReducerFilter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilter

type Rep NoiseReducerFilter = D1 ('MetaData "NoiseReducerFilter" "Amazonka.MediaConvert.Types.NoiseReducerFilter" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "NoiseReducerFilter'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromNoiseReducerFilter") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Order

newtype Order Source #

Optional. When you request lists of resources, you can specify whether they are sorted in ASCENDING or DESCENDING order. Default varies by resource.

Constructors

Order' 

Fields

Bundled Patterns

pattern Order_ASCENDING :: Order 
pattern Order_DESCENDING :: Order 
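
Because the ToJSON instance below renders the value as a bare string, the bundled patterns drop straight into a hand-built request body; a minimal sketch using aeson:

import qualified Data.Aeson as Aeson

import Amazonka.MediaConvert.Types.Order

-- Serialises to the JSON string "DESCENDING" via the ToJSON instance.
newestFirst :: Aeson.Value
newestFirst = Aeson.toJSON Order_DESCENDING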

Instances

Instances details
Eq Order Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Order

Methods

(==) :: Order -> Order -> Bool #

(/=) :: Order -> Order -> Bool #

Ord Order Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Order

Methods

compare :: Order -> Order -> Ordering #

(<) :: Order -> Order -> Bool #

(<=) :: Order -> Order -> Bool #

(>) :: Order -> Order -> Bool #

(>=) :: Order -> Order -> Bool #

max :: Order -> Order -> Order #

min :: Order -> Order -> Order #

Read Order Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Order

Show Order Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Order

Methods

showsPrec :: Int -> Order -> ShowS #

show :: Order -> String #

showList :: [Order] -> ShowS #

Generic Order Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Order

Associated Types

type Rep Order :: Type -> Type #

Methods

from :: Order -> Rep Order x #

to :: Rep Order x -> Order #

NFData Order Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Order

Methods

rnf :: Order -> () #

Hashable Order Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Order

Methods

hashWithSalt :: Int -> Order -> Int #

hash :: Order -> Int #

ToJSON Order Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Order

ToJSONKey Order Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Order

FromJSON Order Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Order

FromJSONKey Order Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Order

ToLog Order Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Order

ToHeader Order Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Order

Methods

toHeader :: HeaderName -> Order -> [Header] #

ToQuery Order Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Order

Methods

toQuery :: Order -> QueryString #

FromXML Order Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Order

Methods

parseXML :: [Node] -> Either String Order #

ToXML Order Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Order

Methods

toXML :: Order -> XML #

ToByteString Order Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Order

Methods

toBS :: Order -> ByteString #

FromText Order Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Order

ToText Order Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Order

Methods

toText :: Order -> Text #

type Rep Order Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Order

type Rep Order = D1 ('MetaData "Order" "Amazonka.MediaConvert.Types.Order" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Order'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromOrder") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

OutputGroupType

newtype OutputGroupType Source #

Type of output group (File group, Apple HLS, DASH ISO, Microsoft Smooth Streaming, CMAF)

Constructors

OutputGroupType' 

Instances

Instances details
Eq OutputGroupType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupType

Ord OutputGroupType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupType

Read OutputGroupType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupType

Show OutputGroupType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupType

Generic OutputGroupType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupType

Associated Types

type Rep OutputGroupType :: Type -> Type #

NFData OutputGroupType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupType

Methods

rnf :: OutputGroupType -> () #

Hashable OutputGroupType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupType

ToJSON OutputGroupType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupType

ToJSONKey OutputGroupType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupType

FromJSON OutputGroupType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupType

FromJSONKey OutputGroupType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupType

ToLog OutputGroupType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupType

ToHeader OutputGroupType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupType

ToQuery OutputGroupType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupType

FromXML OutputGroupType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupType

ToXML OutputGroupType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupType

Methods

toXML :: OutputGroupType -> XML #

ToByteString OutputGroupType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupType

FromText OutputGroupType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupType

ToText OutputGroupType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupType

type Rep OutputGroupType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupType

type Rep OutputGroupType = D1 ('MetaData "OutputGroupType" "Amazonka.MediaConvert.Types.OutputGroupType" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "OutputGroupType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromOutputGroupType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

OutputSdt

newtype OutputSdt Source #

Selects the method of inserting SDT information into the output stream. "Follow input SDT" copies SDT information from the input stream to the output stream. "Follow input SDT if present" copies SDT information from the input stream to the output stream if it is present in the input; otherwise it falls back on the user-defined values. "SDT manually" means the user enters the SDT information. "No SDT" means the output stream will not contain SDT information.

Constructors

OutputSdt' 

Fields

Instances

Instances details
Eq OutputSdt Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSdt

Ord OutputSdt Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSdt

Read OutputSdt Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSdt

Show OutputSdt Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSdt

Generic OutputSdt Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSdt

Associated Types

type Rep OutputSdt :: Type -> Type #

NFData OutputSdt Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSdt

Methods

rnf :: OutputSdt -> () #

Hashable OutputSdt Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSdt

ToJSON OutputSdt Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSdt

ToJSONKey OutputSdt Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSdt

FromJSON OutputSdt Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSdt

FromJSONKey OutputSdt Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSdt

ToLog OutputSdt Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSdt

ToHeader OutputSdt Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSdt

ToQuery OutputSdt Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSdt

FromXML OutputSdt Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSdt

ToXML OutputSdt Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSdt

Methods

toXML :: OutputSdt -> XML #

ToByteString OutputSdt Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSdt

Methods

toBS :: OutputSdt -> ByteString #

FromText OutputSdt Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSdt

ToText OutputSdt Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSdt

Methods

toText :: OutputSdt -> Text #

type Rep OutputSdt Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSdt

type Rep OutputSdt = D1 ('MetaData "OutputSdt" "Amazonka.MediaConvert.Types.OutputSdt" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "OutputSdt'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromOutputSdt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

PresetListBy

newtype PresetListBy Source #

Optional. When you request a list of presets, you can choose to list them alphabetically by NAME or chronologically by CREATION_DATE. If you don't specify, the service will list them by name.

Constructors

PresetListBy' 
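
A minimal sketch of listing chronologically, using the PresetListBy' constructor shown above; CREATION_DATE is the wire string named in the description:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types.PresetListBy

-- List presets by creation date rather than by the default name ordering.
listByCreationDate :: PresetListBy
listByCreationDate = PresetListBy' "CREATION_DATE"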

Instances

Instances details
Eq PresetListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetListBy

Ord PresetListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetListBy

Read PresetListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetListBy

Show PresetListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetListBy

Generic PresetListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetListBy

Associated Types

type Rep PresetListBy :: Type -> Type #

NFData PresetListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetListBy

Methods

rnf :: PresetListBy -> () #

Hashable PresetListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetListBy

ToJSON PresetListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetListBy

ToJSONKey PresetListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetListBy

FromJSON PresetListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetListBy

FromJSONKey PresetListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetListBy

ToLog PresetListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetListBy

ToHeader PresetListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetListBy

ToQuery PresetListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetListBy

FromXML PresetListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetListBy

ToXML PresetListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetListBy

Methods

toXML :: PresetListBy -> XML #

ToByteString PresetListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetListBy

FromText PresetListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetListBy

ToText PresetListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetListBy

Methods

toText :: PresetListBy -> Text #

type Rep PresetListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetListBy

type Rep PresetListBy = D1 ('MetaData "PresetListBy" "Amazonka.MediaConvert.Types.PresetListBy" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "PresetListBy'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromPresetListBy") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

PricingPlan

newtype PricingPlan Source #

Specifies whether the pricing plan for the queue is on-demand or reserved. For on-demand, you pay per minute, billed in increments of .01 minute. For reserved, you pay for the transcoding capacity of the entire queue, regardless of how much or how little you use it. Reserved pricing requires a 12-month commitment.

Constructors

PricingPlan' 

Bundled Patterns

pattern PricingPlan_ON_DEMAND :: PricingPlan 
pattern PricingPlan_RESERVED :: PricingPlan 
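
A small sketch that picks a plan from the documented bundled patterns; the commitment flag is purely illustrative:

import Amazonka.MediaConvert.Types.PricingPlan

-- Choose reserved pricing only when the queue is expected to be busy enough
-- to justify the 12-month commitment described above; otherwise stay on-demand.
pickPlan :: Bool -> PricingPlan
pickPlan commitToReservedCapacity
  | commitToReservedCapacity = PricingPlan_RESERVED
  | otherwise                = PricingPlan_ON_DEMAND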

Instances

Instances details
Eq PricingPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PricingPlan

Ord PricingPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PricingPlan

Read PricingPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PricingPlan

Show PricingPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PricingPlan

Generic PricingPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PricingPlan

Associated Types

type Rep PricingPlan :: Type -> Type #

NFData PricingPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PricingPlan

Methods

rnf :: PricingPlan -> () #

Hashable PricingPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PricingPlan

ToJSON PricingPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PricingPlan

ToJSONKey PricingPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PricingPlan

FromJSON PricingPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PricingPlan

FromJSONKey PricingPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PricingPlan

ToLog PricingPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PricingPlan

ToHeader PricingPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PricingPlan

ToQuery PricingPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PricingPlan

FromXML PricingPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PricingPlan

ToXML PricingPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PricingPlan

Methods

toXML :: PricingPlan -> XML #

ToByteString PricingPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PricingPlan

FromText PricingPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PricingPlan

ToText PricingPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PricingPlan

Methods

toText :: PricingPlan -> Text #

type Rep PricingPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PricingPlan

type Rep PricingPlan = D1 ('MetaData "PricingPlan" "Amazonka.MediaConvert.Types.PricingPlan" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "PricingPlan'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromPricingPlan") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

ProresChromaSampling

newtype ProresChromaSampling Source #

This setting applies only to ProRes 4444 and ProRes 4444 XQ outputs that you create from inputs that use 4:4:4 chroma sampling. Set Preserve 4:4:4 sampling (PRESERVE_444_SAMPLING) to allow outputs to also use 4:4:4 chroma sampling. You must specify a value for this setting when your output codec profile supports 4:4:4 chroma sampling. Related Settings: When you set Chroma sampling to Preserve 4:4:4 sampling (PRESERVE_444_SAMPLING), you must choose an output codec profile that supports 4:4:4 chroma sampling. These values for Profile (CodecProfile) support 4:4:4 chroma sampling: Apple ProRes 4444 (APPLE_PRORES_4444) or Apple ProRes 4444 XQ (APPLE_PRORES_4444_XQ). When you set Chroma sampling to Preserve 4:4:4 sampling, you must disable all video preprocessors except for Nexguard file marker (PartnerWatermarking). When you set Chroma sampling to Preserve 4:4:4 sampling and use framerate conversion, you must set Frame rate conversion algorithm (FramerateConversionAlgorithm) to Drop duplicate (DUPLICATE_DROP).
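
The compatibility rule above can be expressed as a small predicate; a hedged sketch that compares the wrapped wire strings (fromProresCodecProfile is assumed to follow the same accessor convention as the other generated newtypes):

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types.ProresChromaSampling
import Amazonka.MediaConvert.Types.ProresCodecProfile

-- Preserving 4:4:4 sampling is only valid with one of the two 4444 codec
-- profiles; any other sampling choice places no constraint on the profile.
supports444 :: ProresChromaSampling -> ProresCodecProfile -> Bool
supports444 sampling profile
  | fromProresChromaSampling sampling /= "PRESERVE_444_SAMPLING" = True
  | otherwise =
      fromProresCodecProfile profile
        `elem` ["APPLE_PRORES_4444", "APPLE_PRORES_4444_XQ"]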

Instances

Instances details
Eq ProresChromaSampling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresChromaSampling

Ord ProresChromaSampling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresChromaSampling

Read ProresChromaSampling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresChromaSampling

Show ProresChromaSampling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresChromaSampling

Generic ProresChromaSampling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresChromaSampling

Associated Types

type Rep ProresChromaSampling :: Type -> Type #

NFData ProresChromaSampling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresChromaSampling

Methods

rnf :: ProresChromaSampling -> () #

Hashable ProresChromaSampling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresChromaSampling

ToJSON ProresChromaSampling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresChromaSampling

ToJSONKey ProresChromaSampling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresChromaSampling

FromJSON ProresChromaSampling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresChromaSampling

FromJSONKey ProresChromaSampling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresChromaSampling

ToLog ProresChromaSampling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresChromaSampling

ToHeader ProresChromaSampling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresChromaSampling

ToQuery ProresChromaSampling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresChromaSampling

FromXML ProresChromaSampling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresChromaSampling

ToXML ProresChromaSampling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresChromaSampling

ToByteString ProresChromaSampling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresChromaSampling

FromText ProresChromaSampling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresChromaSampling

ToText ProresChromaSampling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresChromaSampling

type Rep ProresChromaSampling Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresChromaSampling

type Rep ProresChromaSampling = D1 ('MetaData "ProresChromaSampling" "Amazonka.MediaConvert.Types.ProresChromaSampling" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "ProresChromaSampling'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromProresChromaSampling") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

ProresCodecProfile

newtype ProresCodecProfile Source #

Use Profile (ProResCodecProfile) to specify the type of Apple ProRes codec to use for this output.

Instances

Instances details
Eq ProresCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresCodecProfile

Ord ProresCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresCodecProfile

Read ProresCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresCodecProfile

Show ProresCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresCodecProfile

Generic ProresCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresCodecProfile

Associated Types

type Rep ProresCodecProfile :: Type -> Type #

NFData ProresCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresCodecProfile

Methods

rnf :: ProresCodecProfile -> () #

Hashable ProresCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresCodecProfile

ToJSON ProresCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresCodecProfile

ToJSONKey ProresCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresCodecProfile

FromJSON ProresCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresCodecProfile

FromJSONKey ProresCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresCodecProfile

ToLog ProresCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresCodecProfile

ToHeader ProresCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresCodecProfile

ToQuery ProresCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresCodecProfile

FromXML ProresCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresCodecProfile

ToXML ProresCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresCodecProfile

ToByteString ProresCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresCodecProfile

FromText ProresCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresCodecProfile

ToText ProresCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresCodecProfile

type Rep ProresCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresCodecProfile

type Rep ProresCodecProfile = D1 ('MetaData "ProresCodecProfile" "Amazonka.MediaConvert.Types.ProresCodecProfile" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "ProresCodecProfile'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromProresCodecProfile") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

ProresFramerateControl

newtype ProresFramerateControl Source #

If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.
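
A sketch of the choice this paragraph describes for JSON job specifications: follow the source frame rate, or pin an explicit fraction. The helper and its tuple type are illustrative only; the wire strings are the ones named above.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (ProresFramerateControl (..))

-- Nothing  -> keep the input frame rate (INITIALIZE_FROM_SOURCE).
-- Just n/d -> use an explicit fraction via FramerateNumerator/Denominator.
framerateControlFor :: Maybe (Int, Int) -> (ProresFramerateControl, Maybe (Int, Int))
framerateControlFor Nothing     = (ProresFramerateControl' "INITIALIZE_FROM_SOURCE", Nothing)
framerateControlFor (Just frac) = (ProresFramerateControl' "SPECIFIED", Just frac)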

Instances

Instances details
Eq ProresFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateControl

Ord ProresFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateControl

Read ProresFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateControl

Show ProresFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateControl

Generic ProresFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateControl

Associated Types

type Rep ProresFramerateControl :: Type -> Type #

NFData ProresFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateControl

Methods

rnf :: ProresFramerateControl -> () #

Hashable ProresFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateControl

ToJSON ProresFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateControl

ToJSONKey ProresFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateControl

FromJSON ProresFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateControl

FromJSONKey ProresFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateControl

ToLog ProresFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateControl

ToHeader ProresFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateControl

ToQuery ProresFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateControl

FromXML ProresFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateControl

ToXML ProresFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateControl

ToByteString ProresFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateControl

FromText ProresFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateControl

ToText ProresFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateControl

type Rep ProresFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateControl

type Rep ProresFramerateControl = D1 ('MetaData "ProresFramerateControl" "Amazonka.MediaConvert.Types.ProresFramerateControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "ProresFramerateControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromProresFramerateControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

ProresFramerateConversionAlgorithm

newtype ProresFramerateConversionAlgorithm Source #

Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.
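
A hedged sketch of the recommendation above: treat a conversion as numerically simple when one rate divides the other evenly (such as 60 fps to 30 fps) and fall back to interpolation otherwise. The helper is illustrative, not part of the generated API, and leaves FRAMEFORMER to the caller because of its added cost.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (ProresFramerateConversionAlgorithm (..))
import Data.Ratio (denominator)

-- "Numerically simple" here means one rate divides the other evenly
-- (e.g. 60 fps -> 30 fps); anything else is treated as complex.
isSimpleConversion :: Rational -> Rational -> Bool
isSimpleConversion input output =
  denominator (input / output) == 1 || denominator (output / input) == 1

-- Pick an algorithm following the guidance above: DUPLICATE_DROP for simple
-- ratios, INTERPOLATE otherwise.
conversionAlgorithmFor :: Rational -> Rational -> ProresFramerateConversionAlgorithm
conversionAlgorithmFor input output
  | isSimpleConversion input output = ProresFramerateConversionAlgorithm' "DUPLICATE_DROP"
  | otherwise                       = ProresFramerateConversionAlgorithm' "INTERPOLATE"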

Instances

Instances details
Eq ProresFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateConversionAlgorithm

Ord ProresFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateConversionAlgorithm

Read ProresFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateConversionAlgorithm

Show ProresFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateConversionAlgorithm

Generic ProresFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateConversionAlgorithm

Associated Types

type Rep ProresFramerateConversionAlgorithm :: Type -> Type #

NFData ProresFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateConversionAlgorithm

Hashable ProresFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateConversionAlgorithm

ToJSON ProresFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateConversionAlgorithm

ToJSONKey ProresFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateConversionAlgorithm

FromJSON ProresFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateConversionAlgorithm

FromJSONKey ProresFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateConversionAlgorithm

ToLog ProresFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateConversionAlgorithm

ToHeader ProresFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateConversionAlgorithm

ToQuery ProresFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateConversionAlgorithm

FromXML ProresFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateConversionAlgorithm

ToXML ProresFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateConversionAlgorithm

ToByteString ProresFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateConversionAlgorithm

FromText ProresFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateConversionAlgorithm

ToText ProresFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateConversionAlgorithm

type Rep ProresFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresFramerateConversionAlgorithm

type Rep ProresFramerateConversionAlgorithm = D1 ('MetaData "ProresFramerateConversionAlgorithm" "Amazonka.MediaConvert.Types.ProresFramerateConversionAlgorithm" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "ProresFramerateConversionAlgorithm'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromProresFramerateConversionAlgorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

ProresInterlaceMode

newtype ProresInterlaceMode Source #

Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE), to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first, depending on which of the Follow options you choose.

Instances

Instances details
Eq ProresInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresInterlaceMode

Ord ProresInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresInterlaceMode

Read ProresInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresInterlaceMode

Show ProresInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresInterlaceMode

Generic ProresInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresInterlaceMode

Associated Types

type Rep ProresInterlaceMode :: Type -> Type #

NFData ProresInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresInterlaceMode

Methods

rnf :: ProresInterlaceMode -> () #

Hashable ProresInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresInterlaceMode

ToJSON ProresInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresInterlaceMode

ToJSONKey ProresInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresInterlaceMode

FromJSON ProresInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresInterlaceMode

FromJSONKey ProresInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresInterlaceMode

ToLog ProresInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresInterlaceMode

ToHeader ProresInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresInterlaceMode

ToQuery ProresInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresInterlaceMode

FromXML ProresInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresInterlaceMode

ToXML ProresInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresInterlaceMode

ToByteString ProresInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresInterlaceMode

FromText ProresInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresInterlaceMode

ToText ProresInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresInterlaceMode

type Rep ProresInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresInterlaceMode

type Rep ProresInterlaceMode = D1 ('MetaData "ProresInterlaceMode" "Amazonka.MediaConvert.Types.ProresInterlaceMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "ProresInterlaceMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromProresInterlaceMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

ProresParControl

newtype ProresParControl Source #

Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.
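
A sketch of the JSON fragment the paragraph describes: choosing SPECIFIED and supplying parNumerator and parDenominator alongside it. The key names are the setting names given above; the 40:33 ratio is just an example value.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (ProresParControl (..))
import Data.Aeson (Value, object, (.=))

-- Fragment of a ProRes settings object in a JSON job specification.
-- SPECIFIED must be accompanied by parNumerator and parDenominator, as
-- described above; Follow source needs neither.
specifiedPar :: Value
specifiedPar =
  object
    [ "parControl"     .= ProresParControl' "SPECIFIED"
    , "parNumerator"   .= (40 :: Int)
    , "parDenominator" .= (33 :: Int)
    ]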

Instances

Instances details
Eq ProresParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresParControl

Ord ProresParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresParControl

Read ProresParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresParControl

Show ProresParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresParControl

Generic ProresParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresParControl

Associated Types

type Rep ProresParControl :: Type -> Type #

NFData ProresParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresParControl

Methods

rnf :: ProresParControl -> () #

Hashable ProresParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresParControl

ToJSON ProresParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresParControl

ToJSONKey ProresParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresParControl

FromJSON ProresParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresParControl

FromJSONKey ProresParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresParControl

ToLog ProresParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresParControl

ToHeader ProresParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresParControl

ToQuery ProresParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresParControl

FromXML ProresParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresParControl

ToXML ProresParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresParControl

ToByteString ProresParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresParControl

FromText ProresParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresParControl

ToText ProresParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresParControl

type Rep ProresParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresParControl

type Rep ProresParControl = D1 ('MetaData "ProresParControl" "Amazonka.MediaConvert.Types.ProresParControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "ProresParControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromProresParControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

ProresScanTypeConversionMode

newtype ProresScanTypeConversionMode Source #

Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).
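
A sketch of the required-settings check described above, using the ProresTelecine_HARD pattern documented later in this module and the wire strings named in the paragraph; the frame-rate-halving condition is left to the caller for brevity.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
  ( ProresInterlaceMode (..)
  , ProresScanTypeConversionMode (..)
  , ProresTelecine (..)
  )

-- True when the required settings for INTERLACED_OPTIMIZE hold, per the
-- paragraph above: no hard telecine and a non-progressive interlace mode.
canUseOptimizedInterlacing :: ProresTelecine -> ProresInterlaceMode -> Bool
canUseOptimizedInterlacing telecine interlaceMode =
  telecine /= ProresTelecine_HARD
    && interlaceMode /= ProresInterlaceMode' "PROGRESSIVE"

scanTypeConversionFor :: ProresTelecine -> ProresInterlaceMode -> ProresScanTypeConversionMode
scanTypeConversionFor telecine interlaceMode
  | canUseOptimizedInterlacing telecine interlaceMode =
      ProresScanTypeConversionMode' "INTERLACED_OPTIMIZE"
  | otherwise =
      ProresScanTypeConversionMode' "INTERLACED"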

Instances

Instances details
Eq ProresScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresScanTypeConversionMode

Ord ProresScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresScanTypeConversionMode

Read ProresScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresScanTypeConversionMode

Show ProresScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresScanTypeConversionMode

Generic ProresScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresScanTypeConversionMode

Associated Types

type Rep ProresScanTypeConversionMode :: Type -> Type #

NFData ProresScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresScanTypeConversionMode

Hashable ProresScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresScanTypeConversionMode

ToJSON ProresScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresScanTypeConversionMode

ToJSONKey ProresScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresScanTypeConversionMode

FromJSON ProresScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresScanTypeConversionMode

FromJSONKey ProresScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresScanTypeConversionMode

ToLog ProresScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresScanTypeConversionMode

ToHeader ProresScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresScanTypeConversionMode

ToQuery ProresScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresScanTypeConversionMode

FromXML ProresScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresScanTypeConversionMode

ToXML ProresScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresScanTypeConversionMode

ToByteString ProresScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresScanTypeConversionMode

FromText ProresScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresScanTypeConversionMode

ToText ProresScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresScanTypeConversionMode

type Rep ProresScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresScanTypeConversionMode

type Rep ProresScanTypeConversionMode = D1 ('MetaData "ProresScanTypeConversionMode" "Amazonka.MediaConvert.Types.ProresScanTypeConversionMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "ProresScanTypeConversionMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromProresScanTypeConversionMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

ProresSlowPal

newtype ProresSlowPal Source #

Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.
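
A sketch of the required settings listed above as a JSON fragment: framerateControl SPECIFIED with a 25/1 fraction. The slowPal key name and the ENABLED value are assumptions based on the usual MediaConvert naming, not taken from this page.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
  ( ProresFramerateControl (..)
  , ProresSlowPal (..)
  )
import Data.Aeson (Value, object, (.=))

-- ProRes settings fragment for a 23.976/24 fps source relabelled to 25 fps,
-- with the required frame-rate settings named above.
slowPalSettings :: Value
slowPalSettings =
  object
    [ "slowPal"              .= ProresSlowPal' "ENABLED"
    , "framerateControl"     .= ProresFramerateControl' "SPECIFIED"
    , "framerateNumerator"   .= (25 :: Int)
    , "framerateDenominator" .= (1 :: Int)
    ]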

Constructors

ProresSlowPal' 

Instances

Instances details
Eq ProresSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSlowPal

Ord ProresSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSlowPal

Read ProresSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSlowPal

Show ProresSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSlowPal

Generic ProresSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSlowPal

Associated Types

type Rep ProresSlowPal :: Type -> Type #

NFData ProresSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSlowPal

Methods

rnf :: ProresSlowPal -> () #

Hashable ProresSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSlowPal

ToJSON ProresSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSlowPal

ToJSONKey ProresSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSlowPal

FromJSON ProresSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSlowPal

FromJSONKey ProresSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSlowPal

ToLog ProresSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSlowPal

ToHeader ProresSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSlowPal

ToQuery ProresSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSlowPal

FromXML ProresSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSlowPal

ToXML ProresSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSlowPal

Methods

toXML :: ProresSlowPal -> XML #

ToByteString ProresSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSlowPal

FromText ProresSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSlowPal

ToText ProresSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSlowPal

Methods

toText :: ProresSlowPal -> Text #

type Rep ProresSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSlowPal

type Rep ProresSlowPal = D1 ('MetaData "ProresSlowPal" "Amazonka.MediaConvert.Types.ProresSlowPal" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "ProresSlowPal'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromProresSlowPal") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

ProresTelecine

newtype ProresTelecine Source #

When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard telecine (HARD) to create a smoother picture. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 without doing anything with the field polarity to create a smoother picture.
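
A minimal sketch using the bundled patterns documented below: enable hard telecine only for the smoother 23.976-to-29.97 interlaced case the paragraph describes.

import Amazonka.MediaConvert.Types (ProresTelecine (..))

-- Choose hard telecine only for the 23.976 -> 29.97 interlaced case above;
-- otherwise keep the default.
telecineFor :: Bool -> ProresTelecine
telecineFor smoother2997Interlaced
  | smoother2997Interlaced = ProresTelecine_HARD
  | otherwise              = ProresTelecine_NONE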

Constructors

ProresTelecine' 

Bundled Patterns

pattern ProresTelecine_HARD :: ProresTelecine 
pattern ProresTelecine_NONE :: ProresTelecine 

Instances

Instances details
Eq ProresTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresTelecine

Ord ProresTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresTelecine

Read ProresTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresTelecine

Show ProresTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresTelecine

Generic ProresTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresTelecine

Associated Types

type Rep ProresTelecine :: Type -> Type #

NFData ProresTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresTelecine

Methods

rnf :: ProresTelecine -> () #

Hashable ProresTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresTelecine

ToJSON ProresTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresTelecine

ToJSONKey ProresTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresTelecine

FromJSON ProresTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresTelecine

FromJSONKey ProresTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresTelecine

ToLog ProresTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresTelecine

ToHeader ProresTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresTelecine

ToQuery ProresTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresTelecine

FromXML ProresTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresTelecine

ToXML ProresTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresTelecine

Methods

toXML :: ProresTelecine -> XML #

ToByteString ProresTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresTelecine

FromText ProresTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresTelecine

ToText ProresTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresTelecine

type Rep ProresTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresTelecine

type Rep ProresTelecine = D1 ('MetaData "ProresTelecine" "Amazonka.MediaConvert.Types.ProresTelecine" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "ProresTelecine'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromProresTelecine") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

QueueListBy

newtype QueueListBy Source #

Optional. When you request a list of queues, you can choose to list them alphabetically by NAME or chronologically by CREATION_DATE. If you don't specify, the service will list them by creation date.
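
A one-line sketch of the default described above, using the bundled patterns listed below: an unspecified choice falls back to listing by creation date.

import Amazonka.MediaConvert.Types (QueueListBy (..))
import Data.Maybe (fromMaybe)

-- Mirror the service default described above: no explicit choice means the
-- list is ordered by creation date.
effectiveListBy :: Maybe QueueListBy -> QueueListBy
effectiveListBy = fromMaybe QueueListBy_CREATION_DATE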

Constructors

QueueListBy' 

Bundled Patterns

pattern QueueListBy_CREATION_DATE :: QueueListBy 
pattern QueueListBy_NAME :: QueueListBy 

Instances

Instances details
Eq QueueListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueListBy

Ord QueueListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueListBy

Read QueueListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueListBy

Show QueueListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueListBy

Generic QueueListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueListBy

Associated Types

type Rep QueueListBy :: Type -> Type #

NFData QueueListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueListBy

Methods

rnf :: QueueListBy -> () #

Hashable QueueListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueListBy

ToJSON QueueListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueListBy

ToJSONKey QueueListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueListBy

FromJSON QueueListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueListBy

FromJSONKey QueueListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueListBy

ToLog QueueListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueListBy

ToHeader QueueListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueListBy

ToQuery QueueListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueListBy

FromXML QueueListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueListBy

ToXML QueueListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueListBy

Methods

toXML :: QueueListBy -> XML #

ToByteString QueueListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueListBy

FromText QueueListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueListBy

ToText QueueListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueListBy

Methods

toText :: QueueListBy -> Text #

type Rep QueueListBy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueListBy

type Rep QueueListBy = D1 ('MetaData "QueueListBy" "Amazonka.MediaConvert.Types.QueueListBy" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "QueueListBy'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromQueueListBy") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

QueueStatus

newtype QueueStatus Source #

Queues can be ACTIVE or PAUSED. If you pause a queue, jobs in that queue won't begin. Jobs that are running when you pause a queue continue to run until they finish or result in an error.
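
A small sketch of the semantics above, using the bundled patterns listed below: only an ACTIVE queue starts new jobs.

import Amazonka.MediaConvert.Types (QueueStatus (..))

-- Per the description above: jobs submitted to a PAUSED queue will not
-- begin until the queue is ACTIVE again.
willStartNewJobs :: QueueStatus -> Bool
willStartNewJobs status = status == QueueStatus_ACTIVE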

Constructors

QueueStatus' 

Bundled Patterns

pattern QueueStatus_ACTIVE :: QueueStatus 
pattern QueueStatus_PAUSED :: QueueStatus 

Instances

Instances details
Eq QueueStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueStatus

Ord QueueStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueStatus

Read QueueStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueStatus

Show QueueStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueStatus

Generic QueueStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueStatus

Associated Types

type Rep QueueStatus :: Type -> Type #

NFData QueueStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueStatus

Methods

rnf :: QueueStatus -> () #

Hashable QueueStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueStatus

ToJSON QueueStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueStatus

ToJSONKey QueueStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueStatus

FromJSON QueueStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueStatus

FromJSONKey QueueStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueStatus

ToLog QueueStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueStatus

ToHeader QueueStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueStatus

ToQuery QueueStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueStatus

FromXML QueueStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueStatus

ToXML QueueStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueStatus

Methods

toXML :: QueueStatus -> XML #

ToByteString QueueStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueStatus

FromText QueueStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueStatus

ToText QueueStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueStatus

Methods

toText :: QueueStatus -> Text #

type Rep QueueStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueStatus

type Rep QueueStatus = D1 ('MetaData "QueueStatus" "Amazonka.MediaConvert.Types.QueueStatus" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "QueueStatus'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromQueueStatus") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

RenewalType

newtype RenewalType Source #

Specifies whether the term of your reserved queue pricing plan is automatically extended (AUTO_RENEW) or expires (EXPIRE) at the end of the term.

Constructors

RenewalType' 

Bundled Patterns

pattern RenewalType_AUTO_RENEW :: RenewalType 
pattern RenewalType_EXPIRE :: RenewalType 

Instances

Instances details
Eq RenewalType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RenewalType

Ord RenewalType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RenewalType

Read RenewalType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RenewalType

Show RenewalType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RenewalType

Generic RenewalType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RenewalType

Associated Types

type Rep RenewalType :: Type -> Type #

NFData RenewalType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RenewalType

Methods

rnf :: RenewalType -> () #

Hashable RenewalType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RenewalType

ToJSON RenewalType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RenewalType

ToJSONKey RenewalType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RenewalType

FromJSON RenewalType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RenewalType

FromJSONKey RenewalType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RenewalType

ToLog RenewalType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RenewalType

ToHeader RenewalType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RenewalType

ToQuery RenewalType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RenewalType

FromXML RenewalType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RenewalType

ToXML RenewalType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RenewalType

Methods

toXML :: RenewalType -> XML #

ToByteString RenewalType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RenewalType

FromText RenewalType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RenewalType

ToText RenewalType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RenewalType

Methods

toText :: RenewalType -> Text #

type Rep RenewalType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RenewalType

type Rep RenewalType = D1 ('MetaData "RenewalType" "Amazonka.MediaConvert.Types.RenewalType" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "RenewalType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromRenewalType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

ReservationPlanStatus

newtype ReservationPlanStatus Source #

Specifies whether the pricing plan for your reserved queue is ACTIVE or EXPIRED.

Instances

Instances details
Eq ReservationPlanStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanStatus

Ord ReservationPlanStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanStatus

Read ReservationPlanStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanStatus

Show ReservationPlanStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanStatus

Generic ReservationPlanStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanStatus

Associated Types

type Rep ReservationPlanStatus :: Type -> Type #

NFData ReservationPlanStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanStatus

Methods

rnf :: ReservationPlanStatus -> () #

Hashable ReservationPlanStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanStatus

ToJSON ReservationPlanStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanStatus

ToJSONKey ReservationPlanStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanStatus

FromJSON ReservationPlanStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanStatus

FromJSONKey ReservationPlanStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanStatus

ToLog ReservationPlanStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanStatus

ToHeader ReservationPlanStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanStatus

ToQuery ReservationPlanStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanStatus

FromXML ReservationPlanStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanStatus

ToXML ReservationPlanStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanStatus

ToByteString ReservationPlanStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanStatus

FromText ReservationPlanStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanStatus

ToText ReservationPlanStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanStatus

type Rep ReservationPlanStatus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanStatus

type Rep ReservationPlanStatus = D1 ('MetaData "ReservationPlanStatus" "Amazonka.MediaConvert.Types.ReservationPlanStatus" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "ReservationPlanStatus'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromReservationPlanStatus") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

RespondToAfd

newtype RespondToAfd Source #

Use Respond to AFD (RespondToAfd) to specify how the service changes the video itself in response to AFD values in the input. * Choose Respond to clip the input video frame according to the AFD value, input display aspect ratio, and output display aspect ratio. * Choose Passthrough to include the input AFD values. Do not choose this when AfdSignaling is set to (NONE). A preferred implementation of this workflow is to set RespondToAfd to (NONE) and set AfdSignaling to (AUTO). * Choose None to remove all input AFD values from this output.
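
A sketch of the preferred workflow named above (RespondToAfd set to NONE with AfdSignaling set to AUTO) as a JSON fragment. The respondToAfd and afdSignaling key names follow the usual camel-case job-specification style and are assumptions, not taken from this page.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (RespondToAfd (..))
import Data.Aeson (Value, object, (.=))
import Data.Text (Text)

-- The preferred workflow from the paragraph above: leave the video frame
-- untouched and handle AFD through signalling instead.
preferredAfdHandling :: Value
preferredAfdHandling =
  object
    [ "respondToAfd" .= RespondToAfd' "NONE"
    , "afdSignaling" .= ("AUTO" :: Text)
    ]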

Constructors

RespondToAfd' 

Instances

Instances details
Eq RespondToAfd Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RespondToAfd

Ord RespondToAfd Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RespondToAfd

Read RespondToAfd Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RespondToAfd

Show RespondToAfd Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RespondToAfd

Generic RespondToAfd Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RespondToAfd

Associated Types

type Rep RespondToAfd :: Type -> Type #

NFData RespondToAfd Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RespondToAfd

Methods

rnf :: RespondToAfd -> () #

Hashable RespondToAfd Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RespondToAfd

ToJSON RespondToAfd Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RespondToAfd

ToJSONKey RespondToAfd Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RespondToAfd

FromJSON RespondToAfd Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RespondToAfd

FromJSONKey RespondToAfd Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RespondToAfd

ToLog RespondToAfd Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RespondToAfd

ToHeader RespondToAfd Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RespondToAfd

ToQuery RespondToAfd Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RespondToAfd

FromXML RespondToAfd Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RespondToAfd

ToXML RespondToAfd Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RespondToAfd

Methods

toXML :: RespondToAfd -> XML #

ToByteString RespondToAfd Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RespondToAfd

FromText RespondToAfd Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RespondToAfd

ToText RespondToAfd Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RespondToAfd

Methods

toText :: RespondToAfd -> Text #

type Rep RespondToAfd Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RespondToAfd

type Rep RespondToAfd = D1 ('MetaData "RespondToAfd" "Amazonka.MediaConvert.Types.RespondToAfd" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "RespondToAfd'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromRespondToAfd") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

S3ObjectCannedAcl

newtype S3ObjectCannedAcl Source #

Choose an Amazon S3 canned ACL for MediaConvert to apply to this output.

Instances

Instances details
Eq S3ObjectCannedAcl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ObjectCannedAcl

Ord S3ObjectCannedAcl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ObjectCannedAcl

Read S3ObjectCannedAcl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ObjectCannedAcl

Show S3ObjectCannedAcl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ObjectCannedAcl

Generic S3ObjectCannedAcl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ObjectCannedAcl

Associated Types

type Rep S3ObjectCannedAcl :: Type -> Type #

NFData S3ObjectCannedAcl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ObjectCannedAcl

Methods

rnf :: S3ObjectCannedAcl -> () #

Hashable S3ObjectCannedAcl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ObjectCannedAcl

ToJSON S3ObjectCannedAcl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ObjectCannedAcl

ToJSONKey S3ObjectCannedAcl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ObjectCannedAcl

FromJSON S3ObjectCannedAcl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ObjectCannedAcl

FromJSONKey S3ObjectCannedAcl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ObjectCannedAcl

ToLog S3ObjectCannedAcl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ObjectCannedAcl

ToHeader S3ObjectCannedAcl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ObjectCannedAcl

ToQuery S3ObjectCannedAcl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ObjectCannedAcl

FromXML S3ObjectCannedAcl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ObjectCannedAcl

ToXML S3ObjectCannedAcl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ObjectCannedAcl

ToByteString S3ObjectCannedAcl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ObjectCannedAcl

FromText S3ObjectCannedAcl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ObjectCannedAcl

ToText S3ObjectCannedAcl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ObjectCannedAcl

type Rep S3ObjectCannedAcl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ObjectCannedAcl

type Rep S3ObjectCannedAcl = D1 ('MetaData "S3ObjectCannedAcl" "Amazonka.MediaConvert.Types.S3ObjectCannedAcl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "S3ObjectCannedAcl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromS3ObjectCannedAcl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

S3ServerSideEncryptionType

newtype S3ServerSideEncryptionType Source #

Specify how you want your data keys managed. AWS uses data keys to encrypt your content. AWS also encrypts the data keys themselves, using a customer master key (CMK), and then stores the encrypted data keys alongside your encrypted content. Use this setting to specify which AWS service manages the CMK. For the simplest setup, choose Amazon S3 (SERVER_SIDE_ENCRYPTION_S3). If you want your master key to be managed by AWS Key Management Service (KMS), choose AWS KMS (SERVER_SIDE_ENCRYPTION_KMS). By default, when you choose AWS KMS, KMS uses the AWS managed customer master key (CMK) associated with Amazon S3 to encrypt your data keys. You can optionally choose to specify a different, customer managed CMK. Do so by specifying the Amazon Resource Name (ARN) of the key for the setting KMS ARN (kmsKeyArn).
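
A sketch of an encryption-settings fragment for the KMS case, attaching a customer managed CMK through the kmsKeyArn setting named above. The encryptionType key name is an assumption based on the usual MediaConvert job-specification style.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (S3ServerSideEncryptionType (..))
import Data.Aeson (Value, object, (.=))
import Data.Text (Text)

-- SSE-KMS with an optional customer managed CMK identified by its ARN,
-- via the kmsKeyArn setting named above.
kmsEncryption :: Maybe Text -> Value
kmsEncryption customerManagedKeyArn =
  object $
    ("encryptionType" .= S3ServerSideEncryptionType' "SERVER_SIDE_ENCRYPTION_KMS")
      : maybe [] (\arn -> ["kmsKeyArn" .= arn]) customerManagedKeyArn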

Instances

Instances details
Eq S3ServerSideEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ServerSideEncryptionType

Ord S3ServerSideEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ServerSideEncryptionType

Read S3ServerSideEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ServerSideEncryptionType

Show S3ServerSideEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ServerSideEncryptionType

Generic S3ServerSideEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ServerSideEncryptionType

Associated Types

type Rep S3ServerSideEncryptionType :: Type -> Type #

NFData S3ServerSideEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ServerSideEncryptionType

Hashable S3ServerSideEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ServerSideEncryptionType

ToJSON S3ServerSideEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ServerSideEncryptionType

ToJSONKey S3ServerSideEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ServerSideEncryptionType

FromJSON S3ServerSideEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ServerSideEncryptionType

FromJSONKey S3ServerSideEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ServerSideEncryptionType

ToLog S3ServerSideEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ServerSideEncryptionType

ToHeader S3ServerSideEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ServerSideEncryptionType

ToQuery S3ServerSideEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ServerSideEncryptionType

FromXML S3ServerSideEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ServerSideEncryptionType

ToXML S3ServerSideEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ServerSideEncryptionType

ToByteString S3ServerSideEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ServerSideEncryptionType

FromText S3ServerSideEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ServerSideEncryptionType

ToText S3ServerSideEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ServerSideEncryptionType

type Rep S3ServerSideEncryptionType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3ServerSideEncryptionType

type Rep S3ServerSideEncryptionType = D1 ('MetaData "S3ServerSideEncryptionType" "Amazonka.MediaConvert.Types.S3ServerSideEncryptionType" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "S3ServerSideEncryptionType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromS3ServerSideEncryptionType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

SampleRangeConversion

newtype SampleRangeConversion Source #

Specify the video color sample range for this output. To create a full range output, you must start with a full range YUV input and keep the default value, None (NONE). To create a limited range output from a full range input, choose Limited range (LIMITED_RANGE_SQUEEZE). With RGB inputs, your output is always limited range, regardless of your choice here. When you create a limited range output from a full range input, MediaConvert limits the active pixel values in a way that depends on the output's bit depth: 8-bit outputs contain only values from 16 through 235 and 10-bit outputs contain only values from 64 through 940. With this conversion, MediaConvert also changes the output metadata to note the limited range.
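
A small sketch capturing the numbers in the paragraph: the active pixel range that the limited-range squeeze produces at each supported bit depth.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (SampleRangeConversion (..))

squeeze :: SampleRangeConversion
squeeze = SampleRangeConversion' "LIMITED_RANGE_SQUEEZE"

-- Active pixel range that the squeeze produces at each bit depth, per the
-- figures in the description above.
limitedRangeBounds :: Int -> Maybe (Int, Int)
limitedRangeBounds 8  = Just (16, 235)
limitedRangeBounds 10 = Just (64, 940)
limitedRangeBounds _  = Nothing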

Instances

Instances details
Eq SampleRangeConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SampleRangeConversion

Ord SampleRangeConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SampleRangeConversion

Read SampleRangeConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SampleRangeConversion

Show SampleRangeConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SampleRangeConversion

Generic SampleRangeConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SampleRangeConversion

Associated Types

type Rep SampleRangeConversion :: Type -> Type #

NFData SampleRangeConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SampleRangeConversion

Methods

rnf :: SampleRangeConversion -> () #

Hashable SampleRangeConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SampleRangeConversion

ToJSON SampleRangeConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SampleRangeConversion

ToJSONKey SampleRangeConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SampleRangeConversion

FromJSON SampleRangeConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SampleRangeConversion

FromJSONKey SampleRangeConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SampleRangeConversion

ToLog SampleRangeConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SampleRangeConversion

ToHeader SampleRangeConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SampleRangeConversion

ToQuery SampleRangeConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SampleRangeConversion

FromXML SampleRangeConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SampleRangeConversion

ToXML SampleRangeConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SampleRangeConversion

ToByteString SampleRangeConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SampleRangeConversion

FromText SampleRangeConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SampleRangeConversion

ToText SampleRangeConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SampleRangeConversion

type Rep SampleRangeConversion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SampleRangeConversion

type Rep SampleRangeConversion = D1 ('MetaData "SampleRangeConversion" "Amazonka.MediaConvert.Types.SampleRangeConversion" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "SampleRangeConversion'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromSampleRangeConversion") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

ScalingBehavior

newtype ScalingBehavior Source #

Specify how the service handles outputs that have a different aspect ratio from the input aspect ratio. Choose Stretch to output (STRETCH_TO_OUTPUT) to have the service stretch your video image to fit. Keep the setting Default (DEFAULT) to have the service letterbox your video instead. This setting overrides any value that you specify for the setting Selection placement (position) in this output.
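
A small sketch of choosing between the two values named above, written with the raw Text constructor shown in the Rep below; the helper name is illustrative only.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (ScalingBehavior (..))

-- Letterbox by default; stretch the image only when distortion is
-- acceptable for this output.
pickScaling :: Bool -> ScalingBehavior
pickScaling allowStretch
  | allowStretch = ScalingBehavior' "STRETCH_TO_OUTPUT"
  | otherwise = ScalingBehavior' "DEFAULT"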

Constructors

ScalingBehavior' 

Instances

Instances details
Eq ScalingBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ScalingBehavior

Ord ScalingBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ScalingBehavior

Read ScalingBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ScalingBehavior

Show ScalingBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ScalingBehavior

Generic ScalingBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ScalingBehavior

Associated Types

type Rep ScalingBehavior :: Type -> Type #

NFData ScalingBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ScalingBehavior

Methods

rnf :: ScalingBehavior -> () #

Hashable ScalingBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ScalingBehavior

ToJSON ScalingBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ScalingBehavior

ToJSONKey ScalingBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ScalingBehavior

FromJSON ScalingBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ScalingBehavior

FromJSONKey ScalingBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ScalingBehavior

ToLog ScalingBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ScalingBehavior

ToHeader ScalingBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ScalingBehavior

ToQuery ScalingBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ScalingBehavior

FromXML ScalingBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ScalingBehavior

ToXML ScalingBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ScalingBehavior

Methods

toXML :: ScalingBehavior -> XML #

ToByteString ScalingBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ScalingBehavior

FromText ScalingBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ScalingBehavior

ToText ScalingBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ScalingBehavior

type Rep ScalingBehavior Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ScalingBehavior

type Rep ScalingBehavior = D1 ('MetaData "ScalingBehavior" "Amazonka.MediaConvert.Types.ScalingBehavior" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "ScalingBehavior'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromScalingBehavior") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

SccDestinationFramerate

newtype SccDestinationFramerate Source #

Set Framerate (SccDestinationFramerate) to make sure that the captions and the video are synchronized in the output. Specify a frame rate that matches the frame rate of the associated video. If the video frame rate is 29.97, choose 29.97 dropframe (FRAMERATE_29_97_DROPFRAME) only if the video has video_insertion=true and drop_frame_timecode=true; otherwise, choose 29.97 non-dropframe (FRAMERATE_29_97_NON_DROPFRAME).
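
The rule in the paragraph above can be written as a small helper; the FRAMERATE_* strings come from the description, while the helper name and flag arguments are hypothetical.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (SccDestinationFramerate (..))

-- For 29.97 fps video: dropframe only when both source flags are set,
-- otherwise non-dropframe, as described above.
sccFramerateFor2997 :: Bool -> Bool -> SccDestinationFramerate
sccFramerateFor2997 videoInsertion dropFrameTimecode
  | videoInsertion && dropFrameTimecode =
      SccDestinationFramerate' "FRAMERATE_29_97_DROPFRAME"
  | otherwise =
      SccDestinationFramerate' "FRAMERATE_29_97_NON_DROPFRAME"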

Instances

Instances details
Eq SccDestinationFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationFramerate

Ord SccDestinationFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationFramerate

Read SccDestinationFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationFramerate

Show SccDestinationFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationFramerate

Generic SccDestinationFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationFramerate

Associated Types

type Rep SccDestinationFramerate :: Type -> Type #

NFData SccDestinationFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationFramerate

Methods

rnf :: SccDestinationFramerate -> () #

Hashable SccDestinationFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationFramerate

ToJSON SccDestinationFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationFramerate

ToJSONKey SccDestinationFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationFramerate

FromJSON SccDestinationFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationFramerate

FromJSONKey SccDestinationFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationFramerate

ToLog SccDestinationFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationFramerate

ToHeader SccDestinationFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationFramerate

ToQuery SccDestinationFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationFramerate

FromXML SccDestinationFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationFramerate

ToXML SccDestinationFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationFramerate

ToByteString SccDestinationFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationFramerate

FromText SccDestinationFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationFramerate

ToText SccDestinationFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationFramerate

type Rep SccDestinationFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationFramerate

type Rep SccDestinationFramerate = D1 ('MetaData "SccDestinationFramerate" "Amazonka.MediaConvert.Types.SccDestinationFramerate" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "SccDestinationFramerate'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromSccDestinationFramerate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

SimulateReservedQueue

newtype SimulateReservedQueue Source #

Enable this setting when you run a test job to estimate how many reserved transcoding slots (RTS) you need. When this is enabled, MediaConvert runs your job from an on-demand queue with similar performance to what you will see with one RTS in a reserved queue. This setting is disabled by default.
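
A short sketch of turning the simulation on for a test job. The ENABLED and DISABLED strings are an assumption based on the default-disabled behaviour described above; the bundled patterns themselves are not reproduced on this page.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (SimulateReservedQueue (..))

-- Assumed API value; DISABLED is the documented default.
simulateForTestJob :: SimulateReservedQueue
simulateForTestJob = SimulateReservedQueue' "ENABLED"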

Instances

Instances details
Eq SimulateReservedQueue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SimulateReservedQueue

Ord SimulateReservedQueue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SimulateReservedQueue

Read SimulateReservedQueue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SimulateReservedQueue

Show SimulateReservedQueue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SimulateReservedQueue

Generic SimulateReservedQueue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SimulateReservedQueue

Associated Types

type Rep SimulateReservedQueue :: Type -> Type #

NFData SimulateReservedQueue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SimulateReservedQueue

Methods

rnf :: SimulateReservedQueue -> () #

Hashable SimulateReservedQueue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SimulateReservedQueue

ToJSON SimulateReservedQueue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SimulateReservedQueue

ToJSONKey SimulateReservedQueue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SimulateReservedQueue

FromJSON SimulateReservedQueue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SimulateReservedQueue

FromJSONKey SimulateReservedQueue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SimulateReservedQueue

ToLog SimulateReservedQueue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SimulateReservedQueue

ToHeader SimulateReservedQueue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SimulateReservedQueue

ToQuery SimulateReservedQueue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SimulateReservedQueue

FromXML SimulateReservedQueue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SimulateReservedQueue

ToXML SimulateReservedQueue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SimulateReservedQueue

ToByteString SimulateReservedQueue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SimulateReservedQueue

FromText SimulateReservedQueue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SimulateReservedQueue

ToText SimulateReservedQueue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SimulateReservedQueue

type Rep SimulateReservedQueue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SimulateReservedQueue

type Rep SimulateReservedQueue = D1 ('MetaData "SimulateReservedQueue" "Amazonka.MediaConvert.Types.SimulateReservedQueue" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "SimulateReservedQueue'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromSimulateReservedQueue") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

SrtStylePassthrough

newtype SrtStylePassthrough Source #

Set Style passthrough (StylePassthrough) to ENABLED to use the available style, color, and position information from your input captions. MediaConvert uses default settings for any missing style and position information in your input captions. Set Style passthrough to DISABLED, or leave blank, to ignore the style and position information from your input captions and use simplified output captions.
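
Sketch of the two documented settings, using the raw Text constructor from the Rep below; the helper is illustrative.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (SrtStylePassthrough (..))

-- Keep the input caption styling, or fall back to simplified output.
srtStyle :: Bool -> SrtStylePassthrough
srtStyle keepInputStyling
  | keepInputStyling = SrtStylePassthrough' "ENABLED"
  | otherwise = SrtStylePassthrough' "DISABLED"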

Instances

Instances details
Eq SrtStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtStylePassthrough

Ord SrtStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtStylePassthrough

Read SrtStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtStylePassthrough

Show SrtStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtStylePassthrough

Generic SrtStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtStylePassthrough

Associated Types

type Rep SrtStylePassthrough :: Type -> Type #

NFData SrtStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtStylePassthrough

Methods

rnf :: SrtStylePassthrough -> () #

Hashable SrtStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtStylePassthrough

ToJSON SrtStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtStylePassthrough

ToJSONKey SrtStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtStylePassthrough

FromJSON SrtStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtStylePassthrough

FromJSONKey SrtStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtStylePassthrough

ToLog SrtStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtStylePassthrough

ToHeader SrtStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtStylePassthrough

ToQuery SrtStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtStylePassthrough

FromXML SrtStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtStylePassthrough

ToXML SrtStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtStylePassthrough

ToByteString SrtStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtStylePassthrough

FromText SrtStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtStylePassthrough

ToText SrtStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtStylePassthrough

type Rep SrtStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtStylePassthrough

type Rep SrtStylePassthrough = D1 ('MetaData "SrtStylePassthrough" "Amazonka.MediaConvert.Types.SrtStylePassthrough" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "SrtStylePassthrough'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromSrtStylePassthrough") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

StatusUpdateInterval

newtype StatusUpdateInterval Source #

Specify how often MediaConvert sends STATUS_UPDATE events to Amazon CloudWatch Events. Set the interval, in seconds, between status updates. MediaConvert sends an update at this interval from the time the service begins processing your job to the time it completes the transcode or encounters an error.
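
A sketch of selecting an update interval. SECONDS_60 follows the service's SECONDS_<n> naming and is an assumption here, since the bundled patterns are not reproduced above.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (StatusUpdateInterval (..))

-- Assumed API value: ask for a STATUS_UPDATE event once a minute.
everyMinute :: StatusUpdateInterval
everyMinute = StatusUpdateInterval' "SECONDS_60"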

Instances

Instances details
Eq StatusUpdateInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StatusUpdateInterval

Ord StatusUpdateInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StatusUpdateInterval

Read StatusUpdateInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StatusUpdateInterval

Show StatusUpdateInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StatusUpdateInterval

Generic StatusUpdateInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StatusUpdateInterval

Associated Types

type Rep StatusUpdateInterval :: Type -> Type #

NFData StatusUpdateInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StatusUpdateInterval

Methods

rnf :: StatusUpdateInterval -> () #

Hashable StatusUpdateInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StatusUpdateInterval

ToJSON StatusUpdateInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StatusUpdateInterval

ToJSONKey StatusUpdateInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StatusUpdateInterval

FromJSON StatusUpdateInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StatusUpdateInterval

FromJSONKey StatusUpdateInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StatusUpdateInterval

ToLog StatusUpdateInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StatusUpdateInterval

ToHeader StatusUpdateInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StatusUpdateInterval

ToQuery StatusUpdateInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StatusUpdateInterval

FromXML StatusUpdateInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StatusUpdateInterval

ToXML StatusUpdateInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StatusUpdateInterval

ToByteString StatusUpdateInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StatusUpdateInterval

FromText StatusUpdateInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StatusUpdateInterval

ToText StatusUpdateInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StatusUpdateInterval

type Rep StatusUpdateInterval Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StatusUpdateInterval

type Rep StatusUpdateInterval = D1 ('MetaData "StatusUpdateInterval" "Amazonka.MediaConvert.Types.StatusUpdateInterval" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "StatusUpdateInterval'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromStatusUpdateInterval") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

TeletextPageType

newtype TeletextPageType Source #

A page type as defined in the standard ETSI EN 300 468, Table 94

Instances

Instances details
Eq TeletextPageType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextPageType

Ord TeletextPageType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextPageType

Read TeletextPageType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextPageType

Show TeletextPageType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextPageType

Generic TeletextPageType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextPageType

Associated Types

type Rep TeletextPageType :: Type -> Type #

NFData TeletextPageType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextPageType

Methods

rnf :: TeletextPageType -> () #

Hashable TeletextPageType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextPageType

ToJSON TeletextPageType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextPageType

ToJSONKey TeletextPageType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextPageType

FromJSON TeletextPageType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextPageType

FromJSONKey TeletextPageType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextPageType

ToLog TeletextPageType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextPageType

ToHeader TeletextPageType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextPageType

ToQuery TeletextPageType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextPageType

FromXML TeletextPageType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextPageType

ToXML TeletextPageType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextPageType

ToByteString TeletextPageType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextPageType

FromText TeletextPageType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextPageType

ToText TeletextPageType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextPageType

type Rep TeletextPageType Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextPageType

type Rep TeletextPageType = D1 ('MetaData "TeletextPageType" "Amazonka.MediaConvert.Types.TeletextPageType" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "TeletextPageType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromTeletextPageType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

TimecodeBurninPosition

newtype TimecodeBurninPosition Source #

Use Position (Position) under Timecode burn-in (TimecodeBurnIn) to specify the location of the burned-in timecode on the output video.
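
For example, placing the burned-in timecode at the top centre of the frame. TOP_CENTER is an assumed API value (the bundled patterns are not listed above); the raw constructor comes from the Rep below.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (TimecodeBurninPosition (..))

-- Assumed position value; see the bundled patterns for the full list.
topCentre :: TimecodeBurninPosition
topCentre = TimecodeBurninPosition' "TOP_CENTER"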

Instances

Instances details
Eq TimecodeBurninPosition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurninPosition

Ord TimecodeBurninPosition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurninPosition

Read TimecodeBurninPosition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurninPosition

Show TimecodeBurninPosition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurninPosition

Generic TimecodeBurninPosition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurninPosition

Associated Types

type Rep TimecodeBurninPosition :: Type -> Type #

NFData TimecodeBurninPosition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurninPosition

Methods

rnf :: TimecodeBurninPosition -> () #

Hashable TimecodeBurninPosition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurninPosition

ToJSON TimecodeBurninPosition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurninPosition

ToJSONKey TimecodeBurninPosition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurninPosition

FromJSON TimecodeBurninPosition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurninPosition

FromJSONKey TimecodeBurninPosition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurninPosition

ToLog TimecodeBurninPosition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurninPosition

ToHeader TimecodeBurninPosition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurninPosition

ToQuery TimecodeBurninPosition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurninPosition

FromXML TimecodeBurninPosition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurninPosition

ToXML TimecodeBurninPosition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurninPosition

ToByteString TimecodeBurninPosition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurninPosition

FromText TimecodeBurninPosition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurninPosition

ToText TimecodeBurninPosition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurninPosition

type Rep TimecodeBurninPosition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurninPosition

type Rep TimecodeBurninPosition = D1 ('MetaData "TimecodeBurninPosition" "Amazonka.MediaConvert.Types.TimecodeBurninPosition" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "TimecodeBurninPosition'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromTimecodeBurninPosition") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

TimecodeSource

newtype TimecodeSource Source #

Use Source (TimecodeSource) to set how timecodes are handled within this job. To make sure that your video, audio, captions, and markers are synchronized and that time-based features, such as image inserter, work correctly, choose the Timecode source option that matches your assets. All timecodes are in a 24-hour format with frame number (HH:MM:SS:FF).

* Embedded (EMBEDDED) - Use the timecode that is in the input video. If no embedded timecode is in the source, the service will use Start at 0 (ZEROBASED) instead.
* Start at 0 (ZEROBASED) - Set the timecode of the initial frame to 00:00:00:00.
* Specified Start (SPECIFIEDSTART) - Set the timecode of the initial frame to a value other than zero. You use Start timecode (Start) to provide this value.
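
The three options map directly onto the API values, for example in a small selector; the TimecodeOrigin type and helper below are illustrative only.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (TimecodeSource (..))

-- Hypothetical description of where the job should take timecodes from.
data TimecodeOrigin = FromInput | FromZero | FromStart

timecodeSourceFor :: TimecodeOrigin -> TimecodeSource
timecodeSourceFor origin = case origin of
  FromInput -> TimecodeSource' "EMBEDDED"
  FromZero -> TimecodeSource' "ZEROBASED"
  FromStart -> TimecodeSource' "SPECIFIEDSTART"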

Constructors

TimecodeSource' 

Instances

Instances details
Eq TimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeSource

Ord TimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeSource

Read TimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeSource

Show TimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeSource

Generic TimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeSource

Associated Types

type Rep TimecodeSource :: Type -> Type #

NFData TimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeSource

Methods

rnf :: TimecodeSource -> () #

Hashable TimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeSource

ToJSON TimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeSource

ToJSONKey TimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeSource

FromJSON TimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeSource

FromJSONKey TimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeSource

ToLog TimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeSource

ToHeader TimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeSource

ToQuery TimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeSource

FromXML TimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeSource

ToXML TimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeSource

Methods

toXML :: TimecodeSource -> XML #

ToByteString TimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeSource

FromText TimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeSource

ToText TimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeSource

type Rep TimecodeSource Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeSource

type Rep TimecodeSource = D1 ('MetaData "TimecodeSource" "Amazonka.MediaConvert.Types.TimecodeSource" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "TimecodeSource'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromTimecodeSource") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

TimedMetadata

newtype TimedMetadata Source #

Applies only to HLS outputs. Use this setting to specify whether the service inserts the ID3 timed metadata from the input in this output.

Constructors

TimedMetadata' 

Instances

Instances details
Eq TimedMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadata

Ord TimedMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadata

Read TimedMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadata

Show TimedMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadata

Generic TimedMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadata

Associated Types

type Rep TimedMetadata :: Type -> Type #

NFData TimedMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadata

Methods

rnf :: TimedMetadata -> () #

Hashable TimedMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadata

ToJSON TimedMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadata

ToJSONKey TimedMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadata

FromJSON TimedMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadata

FromJSONKey TimedMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadata

ToLog TimedMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadata

ToHeader TimedMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadata

ToQuery TimedMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadata

FromXML TimedMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadata

ToXML TimedMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadata

Methods

toXML :: TimedMetadata -> XML #

ToByteString TimedMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadata

FromText TimedMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadata

ToText TimedMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadata

Methods

toText :: TimedMetadata -> Text #

type Rep TimedMetadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadata

type Rep TimedMetadata = D1 ('MetaData "TimedMetadata" "Amazonka.MediaConvert.Types.TimedMetadata" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "TimedMetadata'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromTimedMetadata") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

TtmlStylePassthrough

newtype TtmlStylePassthrough Source #

Pass through style and position information from a TTML-like input source (TTML, IMSC, SMPTE-TT) to the TTML output.

Instances

Instances details
Eq TtmlStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlStylePassthrough

Ord TtmlStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlStylePassthrough

Read TtmlStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlStylePassthrough

Show TtmlStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlStylePassthrough

Generic TtmlStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlStylePassthrough

Associated Types

type Rep TtmlStylePassthrough :: Type -> Type #

NFData TtmlStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlStylePassthrough

Methods

rnf :: TtmlStylePassthrough -> () #

Hashable TtmlStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlStylePassthrough

ToJSON TtmlStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlStylePassthrough

ToJSONKey TtmlStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlStylePassthrough

FromJSON TtmlStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlStylePassthrough

FromJSONKey TtmlStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlStylePassthrough

ToLog TtmlStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlStylePassthrough

ToHeader TtmlStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlStylePassthrough

ToQuery TtmlStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlStylePassthrough

FromXML TtmlStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlStylePassthrough

ToXML TtmlStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlStylePassthrough

ToByteString TtmlStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlStylePassthrough

FromText TtmlStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlStylePassthrough

ToText TtmlStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlStylePassthrough

type Rep TtmlStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlStylePassthrough

type Rep TtmlStylePassthrough = D1 ('MetaData "TtmlStylePassthrough" "Amazonka.MediaConvert.Types.TtmlStylePassthrough" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "TtmlStylePassthrough'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromTtmlStylePassthrough") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Type

newtype Type Source #

Constructors

Type' 

Fields

Bundled Patterns

pattern Type_CUSTOM :: Type 
pattern Type_SYSTEM :: Type 

Instances

Instances details
Eq Type Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Type

Methods

(==) :: Type -> Type -> Bool #

(/=) :: Type -> Type -> Bool #

Ord Type Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Type

Methods

compare :: Type -> Type -> Ordering #

(<) :: Type -> Type -> Bool #

(<=) :: Type -> Type -> Bool #

(>) :: Type -> Type -> Bool #

(>=) :: Type -> Type -> Bool #

max :: Type -> Type -> Type #

min :: Type -> Type -> Type #

Read Type Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Type

Show Type Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Type

Methods

showsPrec :: Int -> Type -> ShowS #

show :: Type -> String #

showList :: [Type] -> ShowS #

Generic Type Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Type

Associated Types

type Rep Type :: Type -> Type #

Methods

from :: Type -> Rep Type x #

to :: Rep Type x -> Type #

NFData Type Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Type

Methods

rnf :: Type -> () #

Hashable Type Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Type

Methods

hashWithSalt :: Int -> Type -> Int #

hash :: Type -> Int #

ToJSON Type Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Type

ToJSONKey Type Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Type

FromJSON Type Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Type

FromJSONKey Type Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Type

ToLog Type Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Type

ToHeader Type Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Type

Methods

toHeader :: HeaderName -> Type -> [Header] #

ToQuery Type Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Type

Methods

toQuery :: Type -> QueryString #

FromXML Type Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Type

Methods

parseXML :: [Node] -> Either String Type #

ToXML Type Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Type

Methods

toXML :: Type -> XML #

ToByteString Type Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Type

Methods

toBS :: Type -> ByteString #

FromText Type Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Type

ToText Type Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Type

Methods

toText :: Type -> Text #

type Rep Type Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Type

type Rep Type = D1 ('MetaData "Type" "Amazonka.MediaConvert.Types.Type" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Type'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Vc3Class

newtype Vc3Class Source #

Specify the VC3 class to choose the quality characteristics for this output. VC3 class, together with the settings Framerate (framerateNumerator and framerateDenominator) and Resolution (height and width), determines your output bitrate. For example, say that your video resolution is 1920x1080 and your frame rate is 29.97. Then Class 145 (CLASS_145) gives you an output with a bitrate of approximately 145 Mbps and Class 220 (CLASS_220) gives you an output with a bitrate of approximately 220 Mbps. VC3 class also specifies the color bit depth of your output.
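
The approximate bitrates quoted above for a 1920x1080, 29.97 fps output can be tabulated in a small helper; the numbers are the approximations from this description, and CLASS_145/CLASS_220 are the values named here.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (Vc3Class (..))

-- Approximate output bitrate, in Mbps, for a 1920x1080 29.97 fps
-- output, per the description above. Other classes return Nothing.
approxBitrateMbps :: Vc3Class -> Maybe Int
approxBitrateMbps (Vc3Class' c)
  | c == "CLASS_145" = Just 145
  | c == "CLASS_220" = Just 220
  | otherwise = Nothing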

Constructors

Vc3Class' 

Fields

Instances

Instances details
Eq Vc3Class Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Class

Ord Vc3Class Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Class

Read Vc3Class Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Class

Show Vc3Class Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Class

Generic Vc3Class Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Class

Associated Types

type Rep Vc3Class :: Type -> Type #

Methods

from :: Vc3Class -> Rep Vc3Class x #

to :: Rep Vc3Class x -> Vc3Class #

NFData Vc3Class Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Class

Methods

rnf :: Vc3Class -> () #

Hashable Vc3Class Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Class

Methods

hashWithSalt :: Int -> Vc3Class -> Int #

hash :: Vc3Class -> Int #

ToJSON Vc3Class Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Class

ToJSONKey Vc3Class Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Class

FromJSON Vc3Class Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Class

FromJSONKey Vc3Class Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Class

ToLog Vc3Class Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Class

ToHeader Vc3Class Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Class

Methods

toHeader :: HeaderName -> Vc3Class -> [Header] #

ToQuery Vc3Class Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Class

FromXML Vc3Class Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Class

ToXML Vc3Class Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Class

Methods

toXML :: Vc3Class -> XML #

ToByteString Vc3Class Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Class

Methods

toBS :: Vc3Class -> ByteString #

FromText Vc3Class Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Class

ToText Vc3Class Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Class

Methods

toText :: Vc3Class -> Text #

type Rep Vc3Class Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Class

type Rep Vc3Class = D1 ('MetaData "Vc3Class" "Amazonka.MediaConvert.Types.Vc3Class" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Vc3Class'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromVc3Class") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Vc3FramerateControl

newtype Vc3FramerateControl Source #

If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The frame rates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.
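
A minimal sketch of the two documented values, written with the raw Text constructor from the Rep below.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (Vc3FramerateControl (..))

-- Follow the input frame rate, or use FramerateNumerator/Denominator.
followSource, useSpecified :: Vc3FramerateControl
followSource = Vc3FramerateControl' "INITIALIZE_FROM_SOURCE"
useSpecified = Vc3FramerateControl' "SPECIFIED"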

Instances

Instances details
Eq Vc3FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateControl

Ord Vc3FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateControl

Read Vc3FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateControl

Show Vc3FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateControl

Generic Vc3FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateControl

Associated Types

type Rep Vc3FramerateControl :: Type -> Type #

NFData Vc3FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateControl

Methods

rnf :: Vc3FramerateControl -> () #

Hashable Vc3FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateControl

ToJSON Vc3FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateControl

ToJSONKey Vc3FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateControl

FromJSON Vc3FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateControl

FromJSONKey Vc3FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateControl

ToLog Vc3FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateControl

ToHeader Vc3FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateControl

ToQuery Vc3FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateControl

FromXML Vc3FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateControl

ToXML Vc3FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateControl

ToByteString Vc3FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateControl

FromText Vc3FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateControl

ToText Vc3FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateControl

type Rep Vc3FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateControl

type Rep Vc3FramerateControl = D1 ('MetaData "Vc3FramerateControl" "Amazonka.MediaConvert.Types.Vc3FramerateControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Vc3FramerateControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromVc3FramerateControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Vc3FramerateConversionAlgorithm

newtype Vc3FramerateConversionAlgorithm Source #

Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.
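
The recommendation above can be expressed as a small chooser; the ConversionComplexity type and helper are hypothetical, while the three API values are the ones named in the description.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (Vc3FramerateConversionAlgorithm (..))

-- Hypothetical classification of the frame rate conversion being done.
data ConversionComplexity = SimpleRatio | ComplexRatio | AlreadyCadenced

chooseAlgorithm :: ConversionComplexity -> Vc3FramerateConversionAlgorithm
chooseAlgorithm c = case c of
  SimpleRatio -> Vc3FramerateConversionAlgorithm' "DUPLICATE_DROP"
  ComplexRatio -> Vc3FramerateConversionAlgorithm' "INTERPOLATE"
  AlreadyCadenced -> Vc3FramerateConversionAlgorithm' "FRAMEFORMER"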

Instances

Instances details
Eq Vc3FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateConversionAlgorithm

Ord Vc3FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateConversionAlgorithm

Read Vc3FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateConversionAlgorithm

Show Vc3FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateConversionAlgorithm

Generic Vc3FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateConversionAlgorithm

Associated Types

type Rep Vc3FramerateConversionAlgorithm :: Type -> Type #

NFData Vc3FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateConversionAlgorithm

Hashable Vc3FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateConversionAlgorithm

ToJSON Vc3FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateConversionAlgorithm

ToJSONKey Vc3FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateConversionAlgorithm

FromJSON Vc3FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateConversionAlgorithm

FromJSONKey Vc3FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateConversionAlgorithm

ToLog Vc3FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateConversionAlgorithm

ToHeader Vc3FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateConversionAlgorithm

ToQuery Vc3FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateConversionAlgorithm

FromXML Vc3FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateConversionAlgorithm

ToXML Vc3FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateConversionAlgorithm

ToByteString Vc3FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateConversionAlgorithm

FromText Vc3FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateConversionAlgorithm

ToText Vc3FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateConversionAlgorithm

type Rep Vc3FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3FramerateConversionAlgorithm

type Rep Vc3FramerateConversionAlgorithm = D1 ('MetaData "Vc3FramerateConversionAlgorithm" "Amazonka.MediaConvert.Types.Vc3FramerateConversionAlgorithm" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Vc3FramerateConversionAlgorithm'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromVc3FramerateConversionAlgorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Vc3InterlaceMode

newtype Vc3InterlaceMode Source #

Optional. Choose the scan line type for this output. If you don't specify a value, MediaConvert will create a progressive output.

Instances

Instances details
Eq Vc3InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3InterlaceMode

Ord Vc3InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3InterlaceMode

Read Vc3InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3InterlaceMode

Show Vc3InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3InterlaceMode

Generic Vc3InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3InterlaceMode

Associated Types

type Rep Vc3InterlaceMode :: Type -> Type #

NFData Vc3InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3InterlaceMode

Methods

rnf :: Vc3InterlaceMode -> () #

Hashable Vc3InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3InterlaceMode

ToJSON Vc3InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3InterlaceMode

ToJSONKey Vc3InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3InterlaceMode

FromJSON Vc3InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3InterlaceMode

FromJSONKey Vc3InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3InterlaceMode

ToLog Vc3InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3InterlaceMode

ToHeader Vc3InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3InterlaceMode

ToQuery Vc3InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3InterlaceMode

FromXML Vc3InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3InterlaceMode

ToXML Vc3InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3InterlaceMode

ToByteString Vc3InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3InterlaceMode

FromText Vc3InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3InterlaceMode

ToText Vc3InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3InterlaceMode

type Rep Vc3InterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3InterlaceMode

type Rep Vc3InterlaceMode = D1 ('MetaData "Vc3InterlaceMode" "Amazonka.MediaConvert.Types.Vc3InterlaceMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Vc3InterlaceMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromVc3InterlaceMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Vc3ScanTypeConversionMode

newtype Vc3ScanTypeConversionMode Source #

Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).
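
A sketch of the two documented modes; remember the required settings noted above (Telecine set to NONE or SOFT, Interlace mode not PROGRESSIVE) before picking INTERLACED_OPTIMIZE.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (Vc3ScanTypeConversionMode (..))

-- Basic interlacing is the default; optimized interlacing is for the
-- half-of-input-frame-rate case described above.
basicInterlacing, optimizedInterlacing :: Vc3ScanTypeConversionMode
basicInterlacing = Vc3ScanTypeConversionMode' "INTERLACED"
optimizedInterlacing = Vc3ScanTypeConversionMode' "INTERLACED_OPTIMIZE"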

Instances

Instances details
Eq Vc3ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3ScanTypeConversionMode

Ord Vc3ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3ScanTypeConversionMode

Read Vc3ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3ScanTypeConversionMode

Show Vc3ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3ScanTypeConversionMode

Generic Vc3ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3ScanTypeConversionMode

Associated Types

type Rep Vc3ScanTypeConversionMode :: Type -> Type #

NFData Vc3ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3ScanTypeConversionMode

Hashable Vc3ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3ScanTypeConversionMode

ToJSON Vc3ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3ScanTypeConversionMode

ToJSONKey Vc3ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3ScanTypeConversionMode

FromJSON Vc3ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3ScanTypeConversionMode

FromJSONKey Vc3ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3ScanTypeConversionMode

ToLog Vc3ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3ScanTypeConversionMode

ToHeader Vc3ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3ScanTypeConversionMode

ToQuery Vc3ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3ScanTypeConversionMode

FromXML Vc3ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3ScanTypeConversionMode

ToXML Vc3ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3ScanTypeConversionMode

ToByteString Vc3ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3ScanTypeConversionMode

FromText Vc3ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3ScanTypeConversionMode

ToText Vc3ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3ScanTypeConversionMode

type Rep Vc3ScanTypeConversionMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3ScanTypeConversionMode

type Rep Vc3ScanTypeConversionMode = D1 ('MetaData "Vc3ScanTypeConversionMode" "Amazonka.MediaConvert.Types.Vc3ScanTypeConversionMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Vc3ScanTypeConversionMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromVc3ScanTypeConversionMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Vc3SlowPal

newtype Vc3SlowPal Source #

Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output by relabeling the video frames and resampling your audio. Note that enabling this setting will slightly reduce the duration of your video. Related settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.
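
Using the bundled patterns listed below; the related frame rate fields mentioned above live on the VC3 settings object and are only referenced in a comment here.

import Amazonka.MediaConvert.Types (Vc3SlowPal (..))

-- Relabel 23.976/24 fps frames as 25 fps. Remember to also set
-- framerateControl to SPECIFIED, framerateNumerator to 25 and
-- framerateDenominator to 1, as described above.
slowPalOn :: Vc3SlowPal
slowPalOn = Vc3SlowPal_ENABLED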

Constructors

Vc3SlowPal' 

Fields

Bundled Patterns

pattern Vc3SlowPal_DISABLED :: Vc3SlowPal 
pattern Vc3SlowPal_ENABLED :: Vc3SlowPal 

Instances

Instances details
Eq Vc3SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3SlowPal

Ord Vc3SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3SlowPal

Read Vc3SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3SlowPal

Show Vc3SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3SlowPal

Generic Vc3SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3SlowPal

Associated Types

type Rep Vc3SlowPal :: Type -> Type #

NFData Vc3SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3SlowPal

Methods

rnf :: Vc3SlowPal -> () #

Hashable Vc3SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3SlowPal

ToJSON Vc3SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3SlowPal

ToJSONKey Vc3SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3SlowPal

FromJSON Vc3SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3SlowPal

FromJSONKey Vc3SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3SlowPal

ToLog Vc3SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3SlowPal

ToHeader Vc3SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3SlowPal

ToQuery Vc3SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3SlowPal

FromXML Vc3SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3SlowPal

ToXML Vc3SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3SlowPal

Methods

toXML :: Vc3SlowPal -> XML #

ToByteString Vc3SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3SlowPal

FromText Vc3SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3SlowPal

ToText Vc3SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3SlowPal

Methods

toText :: Vc3SlowPal -> Text #

type Rep Vc3SlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3SlowPal

type Rep Vc3SlowPal = D1 ('MetaData "Vc3SlowPal" "Amazonka.MediaConvert.Types.Vc3SlowPal" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Vc3SlowPal'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromVc3SlowPal") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Vc3Telecine

newtype Vc3Telecine Source #

When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard telecine (HARD) to create a smoother picture. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 without doing anything with the field polarity to create a smoother picture.
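
With the bundled patterns listed below, the choice described above is simply:

import Amazonka.MediaConvert.Types (Vc3Telecine (..))

-- Hard telecine for a smoother 23.976 -> 29.97 interlaced conversion;
-- NONE keeps the standard frame rate conversion.
hardTelecine, noTelecine :: Vc3Telecine
hardTelecine = Vc3Telecine_HARD
noTelecine = Vc3Telecine_NONE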

Constructors

Vc3Telecine' 

Bundled Patterns

pattern Vc3Telecine_HARD :: Vc3Telecine 
pattern Vc3Telecine_NONE :: Vc3Telecine 

Instances

Instances details
Eq Vc3Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Telecine

Ord Vc3Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Telecine

Read Vc3Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Telecine

Show Vc3Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Telecine

Generic Vc3Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Telecine

Associated Types

type Rep Vc3Telecine :: Type -> Type #

NFData Vc3Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Telecine

Methods

rnf :: Vc3Telecine -> () #

Hashable Vc3Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Telecine

ToJSON Vc3Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Telecine

ToJSONKey Vc3Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Telecine

FromJSON Vc3Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Telecine

FromJSONKey Vc3Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Telecine

ToLog Vc3Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Telecine

ToHeader Vc3Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Telecine

ToQuery Vc3Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Telecine

FromXML Vc3Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Telecine

ToXML Vc3Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Telecine

Methods

toXML :: Vc3Telecine -> XML #

ToByteString Vc3Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Telecine

FromText Vc3Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Telecine

ToText Vc3Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Telecine

Methods

toText :: Vc3Telecine -> Text #

type Rep Vc3Telecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Telecine

type Rep Vc3Telecine = D1 ('MetaData "Vc3Telecine" "Amazonka.MediaConvert.Types.Vc3Telecine" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Vc3Telecine'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromVc3Telecine") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

VchipAction

newtype VchipAction Source #

The action to take on content advisory XDS packets. If you select PASSTHROUGH, packets are not changed. If you select STRIP, any such packets are removed from the output captions.
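
A short sketch of branching on the two bundled patterns; the helper is illustrative only, and the wildcard case is needed because the newtype is open over Text:

import Amazonka.MediaConvert.Types (VchipAction (..))

-- Describe what each action does to content advisory XDS packets.
describeVchipAction :: VchipAction -> String
describeVchipAction VchipAction_PASSTHROUGH = "XDS packets are passed through unchanged"
describeVchipAction VchipAction_STRIP       = "XDS packets are removed from the output captions"
describeVchipAction other                   = "unrecognized action: " <> show other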

Constructors

VchipAction' 

Bundled Patterns

pattern VchipAction_PASSTHROUGH :: VchipAction 
pattern VchipAction_STRIP :: VchipAction 

Instances

Instances details
Eq VchipAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VchipAction

Ord VchipAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VchipAction

Read VchipAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VchipAction

Show VchipAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VchipAction

Generic VchipAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VchipAction

Associated Types

type Rep VchipAction :: Type -> Type #

NFData VchipAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VchipAction

Methods

rnf :: VchipAction -> () #

Hashable VchipAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VchipAction

ToJSON VchipAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VchipAction

ToJSONKey VchipAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VchipAction

FromJSON VchipAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VchipAction

FromJSONKey VchipAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VchipAction

ToLog VchipAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VchipAction

ToHeader VchipAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VchipAction

ToQuery VchipAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VchipAction

FromXML VchipAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VchipAction

ToXML VchipAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VchipAction

Methods

toXML :: VchipAction -> XML #

ToByteString VchipAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VchipAction

FromText VchipAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VchipAction

ToText VchipAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VchipAction

Methods

toText :: VchipAction -> Text #

type Rep VchipAction Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VchipAction

type Rep VchipAction = D1 ('MetaData "VchipAction" "Amazonka.MediaConvert.Types.VchipAction" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "VchipAction'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromVchipAction") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

VideoCodec

newtype VideoCodec Source #

The type of video codec to use for this output.

Constructors

VideoCodec' 

Fields

Instances

Instances details
Eq VideoCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodec

Ord VideoCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodec

Read VideoCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodec

Show VideoCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodec

Generic VideoCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodec

Associated Types

type Rep VideoCodec :: Type -> Type #

NFData VideoCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodec

Methods

rnf :: VideoCodec -> () #

Hashable VideoCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodec

ToJSON VideoCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodec

ToJSONKey VideoCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodec

FromJSON VideoCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodec

FromJSONKey VideoCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodec

ToLog VideoCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodec

ToHeader VideoCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodec

ToQuery VideoCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodec

FromXML VideoCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodec

ToXML VideoCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodec

Methods

toXML :: VideoCodec -> XML #

ToByteString VideoCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodec

FromText VideoCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodec

ToText VideoCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodec

Methods

toText :: VideoCodec -> Text #

type Rep VideoCodec Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodec

type Rep VideoCodec = D1 ('MetaData "VideoCodec" "Amazonka.MediaConvert.Types.VideoCodec" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "VideoCodec'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromVideoCodec") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

VideoTimecodeInsertion

newtype VideoTimecodeInsertion Source #

Applies only to H.264, H.265, MPEG2, and ProRes outputs. Only enable Timecode insertion when the input frame rate is identical to the output frame rate. To include timecodes in this output, set Timecode insertion (VideoTimecodeInsertion) to PIC_TIMING_SEI. To leave them out, set it to DISABLED. Default is DISABLED. When the service inserts timecodes in an output, by default, it uses any embedded timecodes from the input. If none are present, the service will set the timecode for the first output frame to zero. To change this default behavior, adjust the settings under Timecode configuration (TimecodeConfig). In the console, these settings are located under Job > Job settings > Timecode configuration. Note - Timecode source under input settings (InputTimecodeSource) does not affect the timecodes that are inserted in the output. Source under Job settings > Timecode configuration (TimecodeSource) does.
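
A minimal sketch of setting this value programmatically. The wire values PIC_TIMING_SEI and DISABLED come from the description above; the values are built with the plain newtype constructor because the bundled patterns are omitted from this listing:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (VideoTimecodeInsertion (..))

-- Insert pic_timing SEI timecodes only when the output frame rate
-- matches the input frame rate, as the description above requires.
timecodeInsertionFor :: Bool -> VideoTimecodeInsertion
timecodeInsertionFor frameRatesMatch
  | frameRatesMatch = VideoTimecodeInsertion' "PIC_TIMING_SEI"
  | otherwise       = VideoTimecodeInsertion' "DISABLED"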

Instances

Instances details
Eq VideoTimecodeInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoTimecodeInsertion

Ord VideoTimecodeInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoTimecodeInsertion

Read VideoTimecodeInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoTimecodeInsertion

Show VideoTimecodeInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoTimecodeInsertion

Generic VideoTimecodeInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoTimecodeInsertion

Associated Types

type Rep VideoTimecodeInsertion :: Type -> Type #

NFData VideoTimecodeInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoTimecodeInsertion

Methods

rnf :: VideoTimecodeInsertion -> () #

Hashable VideoTimecodeInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoTimecodeInsertion

ToJSON VideoTimecodeInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoTimecodeInsertion

ToJSONKey VideoTimecodeInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoTimecodeInsertion

FromJSON VideoTimecodeInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoTimecodeInsertion

FromJSONKey VideoTimecodeInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoTimecodeInsertion

ToLog VideoTimecodeInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoTimecodeInsertion

ToHeader VideoTimecodeInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoTimecodeInsertion

ToQuery VideoTimecodeInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoTimecodeInsertion

FromXML VideoTimecodeInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoTimecodeInsertion

ToXML VideoTimecodeInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoTimecodeInsertion

ToByteString VideoTimecodeInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoTimecodeInsertion

FromText VideoTimecodeInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoTimecodeInsertion

ToText VideoTimecodeInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoTimecodeInsertion

type Rep VideoTimecodeInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoTimecodeInsertion

type Rep VideoTimecodeInsertion = D1 ('MetaData "VideoTimecodeInsertion" "Amazonka.MediaConvert.Types.VideoTimecodeInsertion" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "VideoTimecodeInsertion'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromVideoTimecodeInsertion") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Vp8FramerateControl

newtype Vp8FramerateControl Source #

If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.
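
As a sketch of the JSON form this setting takes in a job specification: the ToJSON instance serialises the value as a plain JSON string. The wire values are the ones named in the description above; the example assumes only aeson's standard encode and builds the values with the plain newtype constructor:

{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (encode)
import Amazonka.MediaConvert.Types (Vp8FramerateControl (..))

followSource, specified :: Vp8FramerateControl
followSource = Vp8FramerateControl' "INITIALIZE_FROM_SOURCE"
specified    = Vp8FramerateControl' "SPECIFIED"

-- The ToJSON instance is expected to render this as the JSON string "SPECIFIED".
main :: IO ()
main = print (encode specified)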

Instances

Instances details
Eq Vp8FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateControl

Ord Vp8FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateControl

Read Vp8FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateControl

Show Vp8FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateControl

Generic Vp8FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateControl

Associated Types

type Rep Vp8FramerateControl :: Type -> Type #

NFData Vp8FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateControl

Methods

rnf :: Vp8FramerateControl -> () #

Hashable Vp8FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateControl

ToJSON Vp8FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateControl

ToJSONKey Vp8FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateControl

FromJSON Vp8FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateControl

FromJSONKey Vp8FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateControl

ToLog Vp8FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateControl

ToHeader Vp8FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateControl

ToQuery Vp8FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateControl

FromXML Vp8FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateControl

ToXML Vp8FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateControl

ToByteString Vp8FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateControl

FromText Vp8FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateControl

ToText Vp8FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateControl

type Rep Vp8FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateControl

type Rep Vp8FramerateControl = D1 ('MetaData "Vp8FramerateControl" "Amazonka.MediaConvert.Types.Vp8FramerateControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Vp8FramerateControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromVp8FramerateControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Vp8FramerateConversionAlgorithm

newtype Vp8FramerateConversionAlgorithm Source #

Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.
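
A rough, illustrative decision helper that follows the guidance above. The flags and the helper name are assumptions made for the example; the wire values are the ones named in the description:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (Vp8FramerateConversionAlgorithm (..))

-- Drop/duplicate for numerically simple ratios, FrameFormer for complex
-- or re-cadenced sources, interpolation otherwise.
chooseConversion :: Bool -> Bool -> Vp8FramerateConversionAlgorithm
chooseConversion numericallySimple reCadencedSource
  | numericallySimple = Vp8FramerateConversionAlgorithm' "DUPLICATE_DROP"
  | reCadencedSource  = Vp8FramerateConversionAlgorithm' "FRAMEFORMER"
  | otherwise         = Vp8FramerateConversionAlgorithm' "INTERPOLATE"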

Instances

Instances details
Eq Vp8FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateConversionAlgorithm

Ord Vp8FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateConversionAlgorithm

Read Vp8FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateConversionAlgorithm

Show Vp8FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateConversionAlgorithm

Generic Vp8FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateConversionAlgorithm

Associated Types

type Rep Vp8FramerateConversionAlgorithm :: Type -> Type #

NFData Vp8FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateConversionAlgorithm

Hashable Vp8FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateConversionAlgorithm

ToJSON Vp8FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateConversionAlgorithm

ToJSONKey Vp8FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateConversionAlgorithm

FromJSON Vp8FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateConversionAlgorithm

FromJSONKey Vp8FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateConversionAlgorithm

ToLog Vp8FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateConversionAlgorithm

ToHeader Vp8FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateConversionAlgorithm

ToQuery Vp8FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateConversionAlgorithm

FromXML Vp8FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateConversionAlgorithm

ToXML Vp8FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateConversionAlgorithm

ToByteString Vp8FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateConversionAlgorithm

FromText Vp8FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateConversionAlgorithm

ToText Vp8FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateConversionAlgorithm

type Rep Vp8FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8FramerateConversionAlgorithm

type Rep Vp8FramerateConversionAlgorithm = D1 ('MetaData "Vp8FramerateConversionAlgorithm" "Amazonka.MediaConvert.Types.Vp8FramerateConversionAlgorithm" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Vp8FramerateConversionAlgorithm'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromVp8FramerateConversionAlgorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Vp8ParControl

newtype Vp8ParControl Source #

Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.
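
An illustrative consistency check that mirrors the rule above, namely that SPECIFIED requires parNumerator and parDenominator. The helper and its arguments are assumptions made for the example:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (Vp8ParControl (..))

-- When PAR control is SPECIFIED, both numerator and denominator must be set.
parSettingsConsistent :: Vp8ParControl -> Maybe Int -> Maybe Int -> Bool
parSettingsConsistent control parNumerator parDenominator
  | control == Vp8ParControl' "SPECIFIED" =
      parNumerator /= Nothing && parDenominator /= Nothing
  | otherwise = True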

Constructors

Vp8ParControl' 

Instances

Instances details
Eq Vp8ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8ParControl

Ord Vp8ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8ParControl

Read Vp8ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8ParControl

Show Vp8ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8ParControl

Generic Vp8ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8ParControl

Associated Types

type Rep Vp8ParControl :: Type -> Type #

NFData Vp8ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8ParControl

Methods

rnf :: Vp8ParControl -> () #

Hashable Vp8ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8ParControl

ToJSON Vp8ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8ParControl

ToJSONKey Vp8ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8ParControl

FromJSON Vp8ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8ParControl

FromJSONKey Vp8ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8ParControl

ToLog Vp8ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8ParControl

ToHeader Vp8ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8ParControl

ToQuery Vp8ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8ParControl

FromXML Vp8ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8ParControl

ToXML Vp8ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8ParControl

Methods

toXML :: Vp8ParControl -> XML #

ToByteString Vp8ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8ParControl

FromText Vp8ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8ParControl

ToText Vp8ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8ParControl

Methods

toText :: Vp8ParControl -> Text #

type Rep Vp8ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8ParControl

type Rep Vp8ParControl = D1 ('MetaData "Vp8ParControl" "Amazonka.MediaConvert.Types.Vp8ParControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Vp8ParControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromVp8ParControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Vp8QualityTuningLevel

newtype Vp8QualityTuningLevel Source #

Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, multi-pass encoding.

Instances

Instances details
Eq Vp8QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8QualityTuningLevel

Ord Vp8QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8QualityTuningLevel

Read Vp8QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8QualityTuningLevel

Show Vp8QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8QualityTuningLevel

Generic Vp8QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8QualityTuningLevel

Associated Types

type Rep Vp8QualityTuningLevel :: Type -> Type #

NFData Vp8QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8QualityTuningLevel

Methods

rnf :: Vp8QualityTuningLevel -> () #

Hashable Vp8QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8QualityTuningLevel

ToJSON Vp8QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8QualityTuningLevel

ToJSONKey Vp8QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8QualityTuningLevel

FromJSON Vp8QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8QualityTuningLevel

FromJSONKey Vp8QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8QualityTuningLevel

ToLog Vp8QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8QualityTuningLevel

ToHeader Vp8QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8QualityTuningLevel

ToQuery Vp8QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8QualityTuningLevel

FromXML Vp8QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8QualityTuningLevel

ToXML Vp8QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8QualityTuningLevel

ToByteString Vp8QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8QualityTuningLevel

FromText Vp8QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8QualityTuningLevel

ToText Vp8QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8QualityTuningLevel

type Rep Vp8QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8QualityTuningLevel

type Rep Vp8QualityTuningLevel = D1 ('MetaData "Vp8QualityTuningLevel" "Amazonka.MediaConvert.Types.Vp8QualityTuningLevel" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Vp8QualityTuningLevel'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromVp8QualityTuningLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Vp8RateControlMode

newtype Vp8RateControlMode Source #

With the VP8 codec, you can use only the variable bitrate (VBR) rate control mode.
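
Because VBR is the only supported mode, a validation sketch like the following can flag any other value found in a stored job template; the helper is illustrative only, and the wildcard case exists because the newtype is open over Text:

import Amazonka.MediaConvert.Types (Vp8RateControlMode (..))

isSupportedVp8Mode :: Vp8RateControlMode -> Bool
isSupportedVp8Mode Vp8RateControlMode_VBR = True
isSupportedVp8Mode _                      = False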

Bundled Patterns

pattern Vp8RateControlMode_VBR :: Vp8RateControlMode 

Instances

Instances details
Eq Vp8RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8RateControlMode

Ord Vp8RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8RateControlMode

Read Vp8RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8RateControlMode

Show Vp8RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8RateControlMode

Generic Vp8RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8RateControlMode

Associated Types

type Rep Vp8RateControlMode :: Type -> Type #

NFData Vp8RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8RateControlMode

Methods

rnf :: Vp8RateControlMode -> () #

Hashable Vp8RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8RateControlMode

ToJSON Vp8RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8RateControlMode

ToJSONKey Vp8RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8RateControlMode

FromJSON Vp8RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8RateControlMode

FromJSONKey Vp8RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8RateControlMode

ToLog Vp8RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8RateControlMode

ToHeader Vp8RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8RateControlMode

ToQuery Vp8RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8RateControlMode

FromXML Vp8RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8RateControlMode

ToXML Vp8RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8RateControlMode

ToByteString Vp8RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8RateControlMode

FromText Vp8RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8RateControlMode

ToText Vp8RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8RateControlMode

type Rep Vp8RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8RateControlMode

type Rep Vp8RateControlMode = D1 ('MetaData "Vp8RateControlMode" "Amazonka.MediaConvert.Types.Vp8RateControlMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Vp8RateControlMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromVp8RateControlMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Vp9FramerateControl

newtype Vp9FramerateControl Source #

If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

Instances

Instances details
Eq Vp9FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateControl

Ord Vp9FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateControl

Read Vp9FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateControl

Show Vp9FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateControl

Generic Vp9FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateControl

Associated Types

type Rep Vp9FramerateControl :: Type -> Type #

NFData Vp9FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateControl

Methods

rnf :: Vp9FramerateControl -> () #

Hashable Vp9FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateControl

ToJSON Vp9FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateControl

ToJSONKey Vp9FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateControl

FromJSON Vp9FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateControl

FromJSONKey Vp9FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateControl

ToLog Vp9FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateControl

ToHeader Vp9FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateControl

ToQuery Vp9FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateControl

FromXML Vp9FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateControl

ToXML Vp9FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateControl

ToByteString Vp9FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateControl

FromText Vp9FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateControl

ToText Vp9FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateControl

type Rep Vp9FramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateControl

type Rep Vp9FramerateControl = D1 ('MetaData "Vp9FramerateControl" "Amazonka.MediaConvert.Types.Vp9FramerateControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Vp9FramerateControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromVp9FramerateControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Vp9FramerateConversionAlgorithm

newtype Vp9FramerateConversionAlgorithm Source #

Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

Instances

Instances details
Eq Vp9FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateConversionAlgorithm

Ord Vp9FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateConversionAlgorithm

Read Vp9FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateConversionAlgorithm

Show Vp9FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateConversionAlgorithm

Generic Vp9FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateConversionAlgorithm

Associated Types

type Rep Vp9FramerateConversionAlgorithm :: Type -> Type #

NFData Vp9FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateConversionAlgorithm

Hashable Vp9FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateConversionAlgorithm

ToJSON Vp9FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateConversionAlgorithm

ToJSONKey Vp9FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateConversionAlgorithm

FromJSON Vp9FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateConversionAlgorithm

FromJSONKey Vp9FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateConversionAlgorithm

ToLog Vp9FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateConversionAlgorithm

ToHeader Vp9FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateConversionAlgorithm

ToQuery Vp9FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateConversionAlgorithm

FromXML Vp9FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateConversionAlgorithm

ToXML Vp9FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateConversionAlgorithm

ToByteString Vp9FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateConversionAlgorithm

FromText Vp9FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateConversionAlgorithm

ToText Vp9FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateConversionAlgorithm

type Rep Vp9FramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9FramerateConversionAlgorithm

type Rep Vp9FramerateConversionAlgorithm = D1 ('MetaData "Vp9FramerateConversionAlgorithm" "Amazonka.MediaConvert.Types.Vp9FramerateConversionAlgorithm" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Vp9FramerateConversionAlgorithm'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromVp9FramerateConversionAlgorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Vp9ParControl

newtype Vp9ParControl Source #

Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.

Constructors

Vp9ParControl' 

Instances

Instances details
Eq Vp9ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9ParControl

Ord Vp9ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9ParControl

Read Vp9ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9ParControl

Show Vp9ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9ParControl

Generic Vp9ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9ParControl

Associated Types

type Rep Vp9ParControl :: Type -> Type #

NFData Vp9ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9ParControl

Methods

rnf :: Vp9ParControl -> () #

Hashable Vp9ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9ParControl

ToJSON Vp9ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9ParControl

ToJSONKey Vp9ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9ParControl

FromJSON Vp9ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9ParControl

FromJSONKey Vp9ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9ParControl

ToLog Vp9ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9ParControl

ToHeader Vp9ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9ParControl

ToQuery Vp9ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9ParControl

FromXML Vp9ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9ParControl

ToXML Vp9ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9ParControl

Methods

toXML :: Vp9ParControl -> XML #

ToByteString Vp9ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9ParControl

FromText Vp9ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9ParControl

ToText Vp9ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9ParControl

Methods

toText :: Vp9ParControl -> Text #

type Rep Vp9ParControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9ParControl

type Rep Vp9ParControl = D1 ('MetaData "Vp9ParControl" "Amazonka.MediaConvert.Types.Vp9ParControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Vp9ParControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromVp9ParControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Vp9QualityTuningLevel

newtype Vp9QualityTuningLevel Source #

Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, multi-pass encoding.

Instances

Instances details
Eq Vp9QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9QualityTuningLevel

Ord Vp9QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9QualityTuningLevel

Read Vp9QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9QualityTuningLevel

Show Vp9QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9QualityTuningLevel

Generic Vp9QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9QualityTuningLevel

Associated Types

type Rep Vp9QualityTuningLevel :: Type -> Type #

NFData Vp9QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9QualityTuningLevel

Methods

rnf :: Vp9QualityTuningLevel -> () #

Hashable Vp9QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9QualityTuningLevel

ToJSON Vp9QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9QualityTuningLevel

ToJSONKey Vp9QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9QualityTuningLevel

FromJSON Vp9QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9QualityTuningLevel

FromJSONKey Vp9QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9QualityTuningLevel

ToLog Vp9QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9QualityTuningLevel

ToHeader Vp9QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9QualityTuningLevel

ToQuery Vp9QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9QualityTuningLevel

FromXML Vp9QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9QualityTuningLevel

ToXML Vp9QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9QualityTuningLevel

ToByteString Vp9QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9QualityTuningLevel

FromText Vp9QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9QualityTuningLevel

ToText Vp9QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9QualityTuningLevel

type Rep Vp9QualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9QualityTuningLevel

type Rep Vp9QualityTuningLevel = D1 ('MetaData "Vp9QualityTuningLevel" "Amazonka.MediaConvert.Types.Vp9QualityTuningLevel" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Vp9QualityTuningLevel'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromVp9QualityTuningLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Vp9RateControlMode

newtype Vp9RateControlMode Source #

With the VP9 codec, you can use only the variable bitrate (VBR) rate control mode.

Bundled Patterns

pattern Vp9RateControlMode_VBR :: Vp9RateControlMode 

Instances

Instances details
Eq Vp9RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9RateControlMode

Ord Vp9RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9RateControlMode

Read Vp9RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9RateControlMode

Show Vp9RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9RateControlMode

Generic Vp9RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9RateControlMode

Associated Types

type Rep Vp9RateControlMode :: Type -> Type #

NFData Vp9RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9RateControlMode

Methods

rnf :: Vp9RateControlMode -> () #

Hashable Vp9RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9RateControlMode

ToJSON Vp9RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9RateControlMode

ToJSONKey Vp9RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9RateControlMode

FromJSON Vp9RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9RateControlMode

FromJSONKey Vp9RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9RateControlMode

ToLog Vp9RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9RateControlMode

ToHeader Vp9RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9RateControlMode

ToQuery Vp9RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9RateControlMode

FromXML Vp9RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9RateControlMode

ToXML Vp9RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9RateControlMode

ToByteString Vp9RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9RateControlMode

FromText Vp9RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9RateControlMode

ToText Vp9RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9RateControlMode

type Rep Vp9RateControlMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9RateControlMode

type Rep Vp9RateControlMode = D1 ('MetaData "Vp9RateControlMode" "Amazonka.MediaConvert.Types.Vp9RateControlMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Vp9RateControlMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromVp9RateControlMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

WatermarkingStrength

newtype WatermarkingStrength Source #

Optional. Ignore this setting unless Nagra support directs you to specify a value. When you don't specify a value here, the Nagra NexGuard library uses its default value.

Instances

Instances details
Eq WatermarkingStrength Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WatermarkingStrength

Ord WatermarkingStrength Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WatermarkingStrength

Read WatermarkingStrength Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WatermarkingStrength

Show WatermarkingStrength Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WatermarkingStrength

Generic WatermarkingStrength Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WatermarkingStrength

Associated Types

type Rep WatermarkingStrength :: Type -> Type #

NFData WatermarkingStrength Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WatermarkingStrength

Methods

rnf :: WatermarkingStrength -> () #

Hashable WatermarkingStrength Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WatermarkingStrength

ToJSON WatermarkingStrength Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WatermarkingStrength

ToJSONKey WatermarkingStrength Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WatermarkingStrength

FromJSON WatermarkingStrength Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WatermarkingStrength

FromJSONKey WatermarkingStrength Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WatermarkingStrength

ToLog WatermarkingStrength Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WatermarkingStrength

ToHeader WatermarkingStrength Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WatermarkingStrength

ToQuery WatermarkingStrength Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WatermarkingStrength

FromXML WatermarkingStrength Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WatermarkingStrength

ToXML WatermarkingStrength Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WatermarkingStrength

ToByteString WatermarkingStrength Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WatermarkingStrength

FromText WatermarkingStrength Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WatermarkingStrength

ToText WatermarkingStrength Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WatermarkingStrength

type Rep WatermarkingStrength Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WatermarkingStrength

type Rep WatermarkingStrength = D1 ('MetaData "WatermarkingStrength" "Amazonka.MediaConvert.Types.WatermarkingStrength" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "WatermarkingStrength'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromWatermarkingStrength") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

WavFormat

newtype WavFormat Source #

The service defaults to using RIFF for WAV outputs. If your output audio is likely to exceed 4 GB in file size, or if you otherwise need the extended support of the RF64 format, set your output WAV file format to RF64.
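
For example, a small helper can pick RF64 once the expected output size crosses RIFF's 4 GB limit; a minimal sketch, with the helper name and size check chosen for illustration:

import Amazonka.MediaConvert.Types (WavFormat (..))

-- RIFF tops out at 4 GB, so switch to RF64 above that.
chooseWavFormat :: Integer -> WavFormat
chooseWavFormat expectedBytes
  | expectedBytes > 4 * 1024 ^ (3 :: Int) = WavFormat_RF64
  | otherwise                             = WavFormat_RIFF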

Constructors

WavFormat' 

Fields

Bundled Patterns

pattern WavFormat_RF64 :: WavFormat 
pattern WavFormat_RIFF :: WavFormat 

Instances

Instances details
Eq WavFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavFormat

Ord WavFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavFormat

Read WavFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavFormat

Show WavFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavFormat

Generic WavFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavFormat

Associated Types

type Rep WavFormat :: Type -> Type #

NFData WavFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavFormat

Methods

rnf :: WavFormat -> () #

Hashable WavFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavFormat

ToJSON WavFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavFormat

ToJSONKey WavFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavFormat

FromJSON WavFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavFormat

FromJSONKey WavFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavFormat

ToLog WavFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavFormat

ToHeader WavFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavFormat

ToQuery WavFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavFormat

FromXML WavFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavFormat

ToXML WavFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavFormat

Methods

toXML :: WavFormat -> XML #

ToByteString WavFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavFormat

Methods

toBS :: WavFormat -> ByteString #

FromText WavFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavFormat

ToText WavFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavFormat

Methods

toText :: WavFormat -> Text #

type Rep WavFormat Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavFormat

type Rep WavFormat = D1 ('MetaData "WavFormat" "Amazonka.MediaConvert.Types.WavFormat" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "WavFormat'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromWavFormat") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

WebvttStylePassthrough

newtype WebvttStylePassthrough Source #

Set Style passthrough (StylePassthrough) to ENABLED to use the available style, color, and position information from your input captions. MediaConvert uses default settings for any missing style and position information in your input captions. Set Style passthrough to DISABLED, or leave blank, to ignore the style and position information from your input captions and use simplified output captions.
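
A minimal sketch of choosing this setting from whether the input captions carry styling. The helper is illustrative; the wire values ENABLED and DISABLED are the ones named in the description, built with the plain newtype constructor because the bundled patterns are omitted from this listing:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (WebvttStylePassthrough (..))

-- Keep the source styling only when the input captions actually carry
-- style and position information.
stylePassthroughFor :: Bool -> WebvttStylePassthrough
stylePassthroughFor inputHasStyling
  | inputHasStyling = WebvttStylePassthrough' "ENABLED"
  | otherwise       = WebvttStylePassthrough' "DISABLED"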

Instances

Instances details
Eq WebvttStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttStylePassthrough

Ord WebvttStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttStylePassthrough

Read WebvttStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttStylePassthrough

Show WebvttStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttStylePassthrough

Generic WebvttStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttStylePassthrough

Associated Types

type Rep WebvttStylePassthrough :: Type -> Type #

NFData WebvttStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttStylePassthrough

Methods

rnf :: WebvttStylePassthrough -> () #

Hashable WebvttStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttStylePassthrough

ToJSON WebvttStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttStylePassthrough

ToJSONKey WebvttStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttStylePassthrough

FromJSON WebvttStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttStylePassthrough

FromJSONKey WebvttStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttStylePassthrough

ToLog WebvttStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttStylePassthrough

ToHeader WebvttStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttStylePassthrough

ToQuery WebvttStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttStylePassthrough

FromXML WebvttStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttStylePassthrough

ToXML WebvttStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttStylePassthrough

ToByteString WebvttStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttStylePassthrough

FromText WebvttStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttStylePassthrough

ToText WebvttStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttStylePassthrough

type Rep WebvttStylePassthrough Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttStylePassthrough

type Rep WebvttStylePassthrough = D1 ('MetaData "WebvttStylePassthrough" "Amazonka.MediaConvert.Types.WebvttStylePassthrough" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "WebvttStylePassthrough'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromWebvttStylePassthrough") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Xavc4kIntraCbgProfileClass

newtype Xavc4kIntraCbgProfileClass Source #

Specify the XAVC Intra 4k (CBG) Class to set the bitrate of your output. Outputs of the same class have similar image quality over the operating points that are valid for that class.

Instances

Instances details
Eq Xavc4kIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileClass

Ord Xavc4kIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileClass

Read Xavc4kIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileClass

Show Xavc4kIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileClass

Generic Xavc4kIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileClass

Associated Types

type Rep Xavc4kIntraCbgProfileClass :: Type -> Type #

NFData Xavc4kIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileClass

Hashable Xavc4kIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileClass

ToJSON Xavc4kIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileClass

ToJSONKey Xavc4kIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileClass

FromJSON Xavc4kIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileClass

FromJSONKey Xavc4kIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileClass

ToLog Xavc4kIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileClass

ToHeader Xavc4kIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileClass

ToQuery Xavc4kIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileClass

FromXML Xavc4kIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileClass

ToXML Xavc4kIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileClass

ToByteString Xavc4kIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileClass

FromText Xavc4kIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileClass

ToText Xavc4kIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileClass

type Rep Xavc4kIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileClass

type Rep Xavc4kIntraCbgProfileClass = D1 ('MetaData "Xavc4kIntraCbgProfileClass" "Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileClass" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Xavc4kIntraCbgProfileClass'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromXavc4kIntraCbgProfileClass") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Xavc4kIntraVbrProfileClass

newtype Xavc4kIntraVbrProfileClass Source #

Specify the XAVC Intra 4k (VBR) Class to set the bitrate of your output. Outputs of the same class have similar image quality over the operating points that are valid for that class.

Instances

Instances details
Eq Xavc4kIntraVbrProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileClass

Ord Xavc4kIntraVbrProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileClass

Read Xavc4kIntraVbrProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileClass

Show Xavc4kIntraVbrProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileClass

Generic Xavc4kIntraVbrProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileClass

Associated Types

type Rep Xavc4kIntraVbrProfileClass :: Type -> Type #

NFData Xavc4kIntraVbrProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileClass

Hashable Xavc4kIntraVbrProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileClass

ToJSON Xavc4kIntraVbrProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileClass

ToJSONKey Xavc4kIntraVbrProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileClass

FromJSON Xavc4kIntraVbrProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileClass

FromJSONKey Xavc4kIntraVbrProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileClass

ToLog Xavc4kIntraVbrProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileClass

ToHeader Xavc4kIntraVbrProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileClass

ToQuery Xavc4kIntraVbrProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileClass

FromXML Xavc4kIntraVbrProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileClass

ToXML Xavc4kIntraVbrProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileClass

ToByteString Xavc4kIntraVbrProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileClass

FromText Xavc4kIntraVbrProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileClass

ToText Xavc4kIntraVbrProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileClass

type Rep Xavc4kIntraVbrProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileClass

type Rep Xavc4kIntraVbrProfileClass = D1 ('MetaData "Xavc4kIntraVbrProfileClass" "Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileClass" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Xavc4kIntraVbrProfileClass'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromXavc4kIntraVbrProfileClass") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Xavc4kProfileBitrateClass

newtype Xavc4kProfileBitrateClass Source #

Specify the XAVC 4k (Long GOP) Bitrate Class to set the bitrate of your output. Outputs of the same class have similar image quality over the operating points that are valid for that class.

Instances

Instances details
Eq Xavc4kProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileBitrateClass

Ord Xavc4kProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileBitrateClass

Read Xavc4kProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileBitrateClass

Show Xavc4kProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileBitrateClass

Generic Xavc4kProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileBitrateClass

Associated Types

type Rep Xavc4kProfileBitrateClass :: Type -> Type #

NFData Xavc4kProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileBitrateClass

Hashable Xavc4kProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileBitrateClass

ToJSON Xavc4kProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileBitrateClass

ToJSONKey Xavc4kProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileBitrateClass

FromJSON Xavc4kProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileBitrateClass

FromJSONKey Xavc4kProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileBitrateClass

ToLog Xavc4kProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileBitrateClass

ToHeader Xavc4kProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileBitrateClass

ToQuery Xavc4kProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileBitrateClass

FromXML Xavc4kProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileBitrateClass

ToXML Xavc4kProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileBitrateClass

ToByteString Xavc4kProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileBitrateClass

FromText Xavc4kProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileBitrateClass

ToText Xavc4kProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileBitrateClass

type Rep Xavc4kProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileBitrateClass

type Rep Xavc4kProfileBitrateClass = D1 ('MetaData "Xavc4kProfileBitrateClass" "Amazonka.MediaConvert.Types.Xavc4kProfileBitrateClass" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Xavc4kProfileBitrateClass'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromXavc4kProfileBitrateClass") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Xavc4kProfileCodecProfile

newtype Xavc4kProfileCodecProfile Source #

Specify the codec profile for this output. Choose High, 8-bit, 4:2:0 (HIGH) or High, 10-bit, 4:2:2 (HIGH_422). These profiles are specified in ITU-T H.264.
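
Not part of the generated documentation: a minimal usage sketch. The enum strings HIGH and HIGH_422 come from the description above; the assumption is that Amazonka.MediaConvert.Types re-exports the newtype constructor shown in the Rep below.

{-# LANGUAGE OverloadedStrings #-}
import Amazonka.MediaConvert.Types (Xavc4kProfileCodecProfile (..))

-- The two documented profiles: High, 8-bit, 4:2:0 and High, 10-bit, 4:2:2.
high420, high422 :: Xavc4kProfileCodecProfile
high420 = Xavc4kProfileCodecProfile' "HIGH"
high422 = Xavc4kProfileCodecProfile' "HIGH_422"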

Instances

Instances details
Eq Xavc4kProfileCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileCodecProfile

Ord Xavc4kProfileCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileCodecProfile

Read Xavc4kProfileCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileCodecProfile

Show Xavc4kProfileCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileCodecProfile

Generic Xavc4kProfileCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileCodecProfile

Associated Types

type Rep Xavc4kProfileCodecProfile :: Type -> Type #

NFData Xavc4kProfileCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileCodecProfile

Hashable Xavc4kProfileCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileCodecProfile

ToJSON Xavc4kProfileCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileCodecProfile

ToJSONKey Xavc4kProfileCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileCodecProfile

FromJSON Xavc4kProfileCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileCodecProfile

FromJSONKey Xavc4kProfileCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileCodecProfile

ToLog Xavc4kProfileCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileCodecProfile

ToHeader Xavc4kProfileCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileCodecProfile

ToQuery Xavc4kProfileCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileCodecProfile

FromXML Xavc4kProfileCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileCodecProfile

ToXML Xavc4kProfileCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileCodecProfile

ToByteString Xavc4kProfileCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileCodecProfile

FromText Xavc4kProfileCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileCodecProfile

ToText Xavc4kProfileCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileCodecProfile

type Rep Xavc4kProfileCodecProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileCodecProfile

type Rep Xavc4kProfileCodecProfile = D1 ('MetaData "Xavc4kProfileCodecProfile" "Amazonka.MediaConvert.Types.Xavc4kProfileCodecProfile" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Xavc4kProfileCodecProfile'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromXavc4kProfileCodecProfile") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Xavc4kProfileQualityTuningLevel

newtype Xavc4kProfileQualityTuningLevel Source #

Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.

Instances

Instances details
Eq Xavc4kProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileQualityTuningLevel

Ord Xavc4kProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileQualityTuningLevel

Read Xavc4kProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileQualityTuningLevel

Show Xavc4kProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileQualityTuningLevel

Generic Xavc4kProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileQualityTuningLevel

Associated Types

type Rep Xavc4kProfileQualityTuningLevel :: Type -> Type #

NFData Xavc4kProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileQualityTuningLevel

Hashable Xavc4kProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileQualityTuningLevel

ToJSON Xavc4kProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileQualityTuningLevel

ToJSONKey Xavc4kProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileQualityTuningLevel

FromJSON Xavc4kProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileQualityTuningLevel

FromJSONKey Xavc4kProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileQualityTuningLevel

ToLog Xavc4kProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileQualityTuningLevel

ToHeader Xavc4kProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileQualityTuningLevel

ToQuery Xavc4kProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileQualityTuningLevel

FromXML Xavc4kProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileQualityTuningLevel

ToXML Xavc4kProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileQualityTuningLevel

ToByteString Xavc4kProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileQualityTuningLevel

FromText Xavc4kProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileQualityTuningLevel

ToText Xavc4kProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileQualityTuningLevel

type Rep Xavc4kProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileQualityTuningLevel

type Rep Xavc4kProfileQualityTuningLevel = D1 ('MetaData "Xavc4kProfileQualityTuningLevel" "Amazonka.MediaConvert.Types.Xavc4kProfileQualityTuningLevel" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "Xavc4kProfileQualityTuningLevel'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromXavc4kProfileQualityTuningLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

XavcAdaptiveQuantization

newtype XavcAdaptiveQuantization Source #

Keep the default value, Auto (AUTO), for this setting to have MediaConvert automatically apply the best types of quantization for your video content. When you want to apply your quantization settings manually, you must set Adaptive quantization (adaptiveQuantization) to a value other than Auto (AUTO). Use this setting to specify the strength of any adaptive quantization filters that you enable. If you don't want MediaConvert to do any adaptive quantization in this transcode, set Adaptive quantization to Off (OFF). Related settings: The value that you choose here applies to the following settings: Flicker adaptive quantization (flickerAdaptiveQuantization), Spatial adaptive quantization (spatialAdaptiveQuantization), and Temporal adaptive quantization (temporalAdaptiveQuantization).
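
A hedged sketch, not part of the generated documentation: the AUTO and OFF values named above can be built directly because the type is a newtype over Text, and the ToJSON instance serializes the bare enum string. Re-export of the newtype constructor from Amazonka.MediaConvert.Types is an assumption.

{-# LANGUAGE OverloadedStrings #-}
import Amazonka.MediaConvert.Types (XavcAdaptiveQuantization (..))
import qualified Data.Aeson as Aeson
import qualified Data.ByteString.Lazy.Char8 as BL

-- Keep the default so MediaConvert picks the quantization types itself.
autoAq :: XavcAdaptiveQuantization
autoAq = XavcAdaptiveQuantization' "AUTO"

-- Disable adaptive quantization for this transcode entirely.
offAq :: XavcAdaptiveQuantization
offAq = XavcAdaptiveQuantization' "OFF"

main :: IO ()
main = BL.putStrLn (Aeson.encode offAq)  -- expected output: "OFF"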

Instances

Instances details
Eq XavcAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcAdaptiveQuantization

Ord XavcAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcAdaptiveQuantization

Read XavcAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcAdaptiveQuantization

Show XavcAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcAdaptiveQuantization

Generic XavcAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcAdaptiveQuantization

Associated Types

type Rep XavcAdaptiveQuantization :: Type -> Type #

NFData XavcAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcAdaptiveQuantization

Hashable XavcAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcAdaptiveQuantization

ToJSON XavcAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcAdaptiveQuantization

ToJSONKey XavcAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcAdaptiveQuantization

FromJSON XavcAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcAdaptiveQuantization

FromJSONKey XavcAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcAdaptiveQuantization

ToLog XavcAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcAdaptiveQuantization

ToHeader XavcAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcAdaptiveQuantization

ToQuery XavcAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcAdaptiveQuantization

FromXML XavcAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcAdaptiveQuantization

ToXML XavcAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcAdaptiveQuantization

ToByteString XavcAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcAdaptiveQuantization

FromText XavcAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcAdaptiveQuantization

ToText XavcAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcAdaptiveQuantization

type Rep XavcAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcAdaptiveQuantization

type Rep XavcAdaptiveQuantization = D1 ('MetaData "XavcAdaptiveQuantization" "Amazonka.MediaConvert.Types.XavcAdaptiveQuantization" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "XavcAdaptiveQuantization'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromXavcAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

XavcEntropyEncoding

newtype XavcEntropyEncoding Source #

Optional. Choose a specific entropy encoding mode only when you want to override XAVC recommendations. If you choose the value auto, MediaConvert uses the mode that the XAVC file format specifies given this output's operating point.
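
As an illustration only (not from the generated documentation): the description mentions the value auto; the exact wire token spelled AUTO below is an assumption, as is the re-exported newtype constructor.

{-# LANGUAGE OverloadedStrings #-}
import Amazonka.MediaConvert.Types (XavcEntropyEncoding (..))

-- Defer to the entropy mode that the XAVC file format specifies for this
-- output's operating point. The token "AUTO" is assumed from "auto" above.
deferToFormat :: XavcEntropyEncoding
deferToFormat = XavcEntropyEncoding' "AUTO"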

Instances

Instances details
Eq XavcEntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcEntropyEncoding

Ord XavcEntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcEntropyEncoding

Read XavcEntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcEntropyEncoding

Show XavcEntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcEntropyEncoding

Generic XavcEntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcEntropyEncoding

Associated Types

type Rep XavcEntropyEncoding :: Type -> Type #

NFData XavcEntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcEntropyEncoding

Methods

rnf :: XavcEntropyEncoding -> () #

Hashable XavcEntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcEntropyEncoding

ToJSON XavcEntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcEntropyEncoding

ToJSONKey XavcEntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcEntropyEncoding

FromJSON XavcEntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcEntropyEncoding

FromJSONKey XavcEntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcEntropyEncoding

ToLog XavcEntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcEntropyEncoding

ToHeader XavcEntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcEntropyEncoding

ToQuery XavcEntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcEntropyEncoding

FromXML XavcEntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcEntropyEncoding

ToXML XavcEntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcEntropyEncoding

ToByteString XavcEntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcEntropyEncoding

FromText XavcEntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcEntropyEncoding

ToText XavcEntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcEntropyEncoding

type Rep XavcEntropyEncoding Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcEntropyEncoding

type Rep XavcEntropyEncoding = D1 ('MetaData "XavcEntropyEncoding" "Amazonka.MediaConvert.Types.XavcEntropyEncoding" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "XavcEntropyEncoding'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromXavcEntropyEncoding") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

XavcFlickerAdaptiveQuantization

newtype XavcFlickerAdaptiveQuantization Source #

The best way to set up adaptive quantization is to keep the default value, Auto (AUTO), for the setting Adaptive quantization (XavcAdaptiveQuantization). When you do so, MediaConvert automatically applies the best types of quantization for your video content. Include this setting in your JSON job specification only when you choose to change the default value for Adaptive quantization. Enable this setting to have the encoder reduce I-frame pop. I-frame pop appears as a visual flicker that can arise when the encoder saves bits by copying some macroblocks many times from frame to frame, and then refreshes them at the I-frame. When you enable this setting, the encoder updates these macroblocks slightly more often to smooth out the flicker. This setting is disabled by default. Related setting: In addition to enabling this setting, you must also set Adaptive quantization (adaptiveQuantization) to a value other than Off (OFF) or Auto (AUTO). Use Adaptive quantization to adjust the degree of smoothing that Flicker adaptive quantization provides.

Instances

Instances details
Eq XavcFlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFlickerAdaptiveQuantization

Ord XavcFlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFlickerAdaptiveQuantization

Read XavcFlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFlickerAdaptiveQuantization

Show XavcFlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFlickerAdaptiveQuantization

Generic XavcFlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFlickerAdaptiveQuantization

Associated Types

type Rep XavcFlickerAdaptiveQuantization :: Type -> Type #

NFData XavcFlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFlickerAdaptiveQuantization

Hashable XavcFlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFlickerAdaptiveQuantization

ToJSON XavcFlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFlickerAdaptiveQuantization

ToJSONKey XavcFlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFlickerAdaptiveQuantization

FromJSON XavcFlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFlickerAdaptiveQuantization

FromJSONKey XavcFlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFlickerAdaptiveQuantization

ToLog XavcFlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFlickerAdaptiveQuantization

ToHeader XavcFlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFlickerAdaptiveQuantization

ToQuery XavcFlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFlickerAdaptiveQuantization

FromXML XavcFlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFlickerAdaptiveQuantization

ToXML XavcFlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFlickerAdaptiveQuantization

ToByteString XavcFlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFlickerAdaptiveQuantization

FromText XavcFlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFlickerAdaptiveQuantization

ToText XavcFlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFlickerAdaptiveQuantization

type Rep XavcFlickerAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFlickerAdaptiveQuantization

type Rep XavcFlickerAdaptiveQuantization = D1 ('MetaData "XavcFlickerAdaptiveQuantization" "Amazonka.MediaConvert.Types.XavcFlickerAdaptiveQuantization" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "XavcFlickerAdaptiveQuantization'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromXavcFlickerAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

XavcFramerateControl

newtype XavcFramerateControl Source #

If you are using the console, use the Frame rate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list. The framerates shown in the dropdown list are decimal approximations of fractions. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate that you specify in the settings FramerateNumerator and FramerateDenominator.
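
A minimal sketch (not part of the generated documentation) of the two JSON-level choices named above, assuming the newtype constructor is re-exported by Amazonka.MediaConvert.Types.

{-# LANGUAGE OverloadedStrings #-}
import Amazonka.MediaConvert.Types (XavcFramerateControl (..))

-- Follow the input's frame rate, or supply one explicitly (pair SPECIFIED
-- with FramerateNumerator and FramerateDenominator in the XAVC settings).
followSource, specified :: XavcFramerateControl
followSource = XavcFramerateControl' "INITIALIZE_FROM_SOURCE"
specified    = XavcFramerateControl' "SPECIFIED"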

Instances

Instances details
Eq XavcFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateControl

Ord XavcFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateControl

Read XavcFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateControl

Show XavcFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateControl

Generic XavcFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateControl

Associated Types

type Rep XavcFramerateControl :: Type -> Type #

NFData XavcFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateControl

Methods

rnf :: XavcFramerateControl -> () #

Hashable XavcFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateControl

ToJSON XavcFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateControl

ToJSONKey XavcFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateControl

FromJSON XavcFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateControl

FromJSONKey XavcFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateControl

ToLog XavcFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateControl

ToHeader XavcFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateControl

ToQuery XavcFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateControl

FromXML XavcFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateControl

ToXML XavcFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateControl

ToByteString XavcFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateControl

FromText XavcFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateControl

ToText XavcFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateControl

type Rep XavcFramerateControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateControl

type Rep XavcFramerateControl = D1 ('MetaData "XavcFramerateControl" "Amazonka.MediaConvert.Types.XavcFramerateControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "XavcFramerateControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromXavcFramerateControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

XavcFramerateConversionAlgorithm

newtype XavcFramerateConversionAlgorithm Source #

Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.
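
Not part of the generated documentation: a hedged sketch of the three documented algorithms, assuming the newtype constructor is re-exported by Amazonka.MediaConvert.Types.

{-# LANGUAGE OverloadedStrings #-}
import Amazonka.MediaConvert.Types (XavcFramerateConversionAlgorithm (..))

-- From simplest to most expensive, per the description above.
dropDuplicate, interpolate, frameFormer :: XavcFramerateConversionAlgorithm
dropDuplicate = XavcFramerateConversionAlgorithm' "DUPLICATE_DROP"  -- simple ratios, e.g. 60 fps to 30 fps
interpolate   = XavcFramerateConversionAlgorithm' "INTERPOLATE"     -- smoother, may add artifacts
frameFormer   = XavcFramerateConversionAlgorithm' "FRAMEFORMER"     -- motion compensated, extra cost and time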

Instances

Instances details
Eq XavcFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateConversionAlgorithm

Ord XavcFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateConversionAlgorithm

Read XavcFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateConversionAlgorithm

Show XavcFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateConversionAlgorithm

Generic XavcFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateConversionAlgorithm

Associated Types

type Rep XavcFramerateConversionAlgorithm :: Type -> Type #

NFData XavcFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateConversionAlgorithm

Hashable XavcFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateConversionAlgorithm

ToJSON XavcFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateConversionAlgorithm

ToJSONKey XavcFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateConversionAlgorithm

FromJSON XavcFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateConversionAlgorithm

FromJSONKey XavcFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateConversionAlgorithm

ToLog XavcFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateConversionAlgorithm

ToHeader XavcFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateConversionAlgorithm

ToQuery XavcFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateConversionAlgorithm

FromXML XavcFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateConversionAlgorithm

ToXML XavcFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateConversionAlgorithm

ToByteString XavcFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateConversionAlgorithm

FromText XavcFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateConversionAlgorithm

ToText XavcFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateConversionAlgorithm

type Rep XavcFramerateConversionAlgorithm Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcFramerateConversionAlgorithm

type Rep XavcFramerateConversionAlgorithm = D1 ('MetaData "XavcFramerateConversionAlgorithm" "Amazonka.MediaConvert.Types.XavcFramerateConversionAlgorithm" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "XavcFramerateConversionAlgorithm'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromXavcFramerateConversionAlgorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

XavcGopBReference

newtype XavcGopBReference Source #

Specify whether the encoder uses B-frames as reference frames for other pictures in the same GOP. Choose Allow (ENABLED) to allow the encoder to use B-frames as reference frames. Choose Don't allow (DISABLED) to prevent the encoder from using B-frames as reference frames.
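
A minimal sketch, not part of the generated documentation: the ENABLED and DISABLED values come from the description above; re-export of the newtype constructor is an assumption.

{-# LANGUAGE OverloadedStrings #-}
import Amazonka.MediaConvert.Types (XavcGopBReference (..))

allowBRef, forbidBRef :: XavcGopBReference
allowBRef  = XavcGopBReference' "ENABLED"   -- B-frames may serve as reference frames
forbidBRef = XavcGopBReference' "DISABLED"  -- B-frames are never used as references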

Instances

Instances details
Eq XavcGopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcGopBReference

Ord XavcGopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcGopBReference

Read XavcGopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcGopBReference

Show XavcGopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcGopBReference

Generic XavcGopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcGopBReference

Associated Types

type Rep XavcGopBReference :: Type -> Type #

NFData XavcGopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcGopBReference

Methods

rnf :: XavcGopBReference -> () #

Hashable XavcGopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcGopBReference

ToJSON XavcGopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcGopBReference

ToJSONKey XavcGopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcGopBReference

FromJSON XavcGopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcGopBReference

FromJSONKey XavcGopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcGopBReference

ToLog XavcGopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcGopBReference

ToHeader XavcGopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcGopBReference

ToQuery XavcGopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcGopBReference

FromXML XavcGopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcGopBReference

ToXML XavcGopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcGopBReference

ToByteString XavcGopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcGopBReference

FromText XavcGopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcGopBReference

ToText XavcGopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcGopBReference

type Rep XavcGopBReference Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcGopBReference

type Rep XavcGopBReference = D1 ('MetaData "XavcGopBReference" "Amazonka.MediaConvert.Types.XavcGopBReference" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "XavcGopBReference'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromXavcGopBReference") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

XavcHdIntraCbgProfileClass

newtype XavcHdIntraCbgProfileClass Source #

Specify the XAVC Intra HD (CBG) Class to set the bitrate of your output. Outputs of the same class have similar image quality over the operating points that are valid for that class.

Instances

Instances details
Eq XavcHdIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileClass

Ord XavcHdIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileClass

Read XavcHdIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileClass

Show XavcHdIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileClass

Generic XavcHdIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileClass

Associated Types

type Rep XavcHdIntraCbgProfileClass :: Type -> Type #

NFData XavcHdIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileClass

Hashable XavcHdIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileClass

ToJSON XavcHdIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileClass

ToJSONKey XavcHdIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileClass

FromJSON XavcHdIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileClass

FromJSONKey XavcHdIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileClass

ToLog XavcHdIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileClass

ToHeader XavcHdIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileClass

ToQuery XavcHdIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileClass

FromXML XavcHdIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileClass

ToXML XavcHdIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileClass

ToByteString XavcHdIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileClass

FromText XavcHdIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileClass

ToText XavcHdIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileClass

type Rep XavcHdIntraCbgProfileClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileClass

type Rep XavcHdIntraCbgProfileClass = D1 ('MetaData "XavcHdIntraCbgProfileClass" "Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileClass" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "XavcHdIntraCbgProfileClass'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromXavcHdIntraCbgProfileClass") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

XavcHdProfileBitrateClass

newtype XavcHdProfileBitrateClass Source #

Specify the XAVC HD (Long GOP) Bitrate Class to set the bitrate of your output. Outputs of the same class have similar image quality over the operating points that are valid for that class.

Instances

Instances details
Eq XavcHdProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileBitrateClass

Ord XavcHdProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileBitrateClass

Read XavcHdProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileBitrateClass

Show XavcHdProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileBitrateClass

Generic XavcHdProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileBitrateClass

Associated Types

type Rep XavcHdProfileBitrateClass :: Type -> Type #

NFData XavcHdProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileBitrateClass

Hashable XavcHdProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileBitrateClass

ToJSON XavcHdProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileBitrateClass

ToJSONKey XavcHdProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileBitrateClass

FromJSON XavcHdProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileBitrateClass

FromJSONKey XavcHdProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileBitrateClass

ToLog XavcHdProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileBitrateClass

ToHeader XavcHdProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileBitrateClass

ToQuery XavcHdProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileBitrateClass

FromXML XavcHdProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileBitrateClass

ToXML XavcHdProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileBitrateClass

ToByteString XavcHdProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileBitrateClass

FromText XavcHdProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileBitrateClass

ToText XavcHdProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileBitrateClass

type Rep XavcHdProfileBitrateClass Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileBitrateClass

type Rep XavcHdProfileBitrateClass = D1 ('MetaData "XavcHdProfileBitrateClass" "Amazonka.MediaConvert.Types.XavcHdProfileBitrateClass" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "XavcHdProfileBitrateClass'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromXavcHdProfileBitrateClass") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

XavcHdProfileQualityTuningLevel

newtype XavcHdProfileQualityTuningLevel Source #

Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.

Instances

Instances details
Eq XavcHdProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileQualityTuningLevel

Ord XavcHdProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileQualityTuningLevel

Read XavcHdProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileQualityTuningLevel

Show XavcHdProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileQualityTuningLevel

Generic XavcHdProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileQualityTuningLevel

Associated Types

type Rep XavcHdProfileQualityTuningLevel :: Type -> Type #

NFData XavcHdProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileQualityTuningLevel

Hashable XavcHdProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileQualityTuningLevel

ToJSON XavcHdProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileQualityTuningLevel

ToJSONKey XavcHdProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileQualityTuningLevel

FromJSON XavcHdProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileQualityTuningLevel

FromJSONKey XavcHdProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileQualityTuningLevel

ToLog XavcHdProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileQualityTuningLevel

ToHeader XavcHdProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileQualityTuningLevel

ToQuery XavcHdProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileQualityTuningLevel

FromXML XavcHdProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileQualityTuningLevel

ToXML XavcHdProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileQualityTuningLevel

ToByteString XavcHdProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileQualityTuningLevel

FromText XavcHdProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileQualityTuningLevel

ToText XavcHdProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileQualityTuningLevel

type Rep XavcHdProfileQualityTuningLevel Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileQualityTuningLevel

type Rep XavcHdProfileQualityTuningLevel = D1 ('MetaData "XavcHdProfileQualityTuningLevel" "Amazonka.MediaConvert.Types.XavcHdProfileQualityTuningLevel" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "XavcHdProfileQualityTuningLevel'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromXavcHdProfileQualityTuningLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

XavcHdProfileTelecine

newtype XavcHdProfileTelecine Source #

Ignore this setting unless you set Frame rate (framerateNumerator divided by framerateDenominator) to 29.970. If your input framerate is 23.976, choose Hard (HARD). Otherwise, keep the default value None (NONE). For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/working-with-telecine-and-inverse-telecine.html.
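
As a hedged illustration (not from the generated documentation), using the HARD and NONE values named above; the re-exported newtype constructor is an assumption.

{-# LANGUAGE OverloadedStrings #-}
import Amazonka.MediaConvert.Types (XavcHdProfileTelecine (..))

-- Hard telecine for 23.976 fps inputs going to a 29.970 fps output;
-- None is the documented default.
hardTelecine, noTelecine :: XavcHdProfileTelecine
hardTelecine = XavcHdProfileTelecine' "HARD"
noTelecine   = XavcHdProfileTelecine' "NONE"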

Instances

Instances details
Eq XavcHdProfileTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileTelecine

Ord XavcHdProfileTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileTelecine

Read XavcHdProfileTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileTelecine

Show XavcHdProfileTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileTelecine

Generic XavcHdProfileTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileTelecine

Associated Types

type Rep XavcHdProfileTelecine :: Type -> Type #

NFData XavcHdProfileTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileTelecine

Methods

rnf :: XavcHdProfileTelecine -> () #

Hashable XavcHdProfileTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileTelecine

ToJSON XavcHdProfileTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileTelecine

ToJSONKey XavcHdProfileTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileTelecine

FromJSON XavcHdProfileTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileTelecine

FromJSONKey XavcHdProfileTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileTelecine

ToLog XavcHdProfileTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileTelecine

ToHeader XavcHdProfileTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileTelecine

ToQuery XavcHdProfileTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileTelecine

FromXML XavcHdProfileTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileTelecine

ToXML XavcHdProfileTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileTelecine

ToByteString XavcHdProfileTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileTelecine

FromText XavcHdProfileTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileTelecine

ToText XavcHdProfileTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileTelecine

type Rep XavcHdProfileTelecine Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileTelecine

type Rep XavcHdProfileTelecine = D1 ('MetaData "XavcHdProfileTelecine" "Amazonka.MediaConvert.Types.XavcHdProfileTelecine" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "XavcHdProfileTelecine'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromXavcHdProfileTelecine") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

XavcInterlaceMode

newtype XavcInterlaceMode Source #

Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE), to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first polarity, depending on which of the Follow options you choose.
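
Not part of the generated documentation: a minimal sketch of the five scan-type values named above, assuming the newtype constructor is re-exported by Amazonka.MediaConvert.Types.

{-# LANGUAGE OverloadedStrings #-}
import Amazonka.MediaConvert.Types (XavcInterlaceMode (..))

progressive, topField, bottomField, followTop, followBottom :: XavcInterlaceMode
progressive  = XavcInterlaceMode' "PROGRESSIVE"         -- the default
topField     = XavcInterlaceMode' "TOP_FIELD"           -- fixed polarity
bottomField  = XavcInterlaceMode' "BOTTOM_FIELD"        -- fixed polarity
followTop    = XavcInterlaceMode' "FOLLOW_TOP_FIELD"    -- polarity follows the source
followBottom = XavcInterlaceMode' "FOLLOW_BOTTOM_FIELD" -- polarity follows the source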

Instances

Instances details
Eq XavcInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcInterlaceMode

Ord XavcInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcInterlaceMode

Read XavcInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcInterlaceMode

Show XavcInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcInterlaceMode

Generic XavcInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcInterlaceMode

Associated Types

type Rep XavcInterlaceMode :: Type -> Type #

NFData XavcInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcInterlaceMode

Methods

rnf :: XavcInterlaceMode -> () #

Hashable XavcInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcInterlaceMode

ToJSON XavcInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcInterlaceMode

ToJSONKey XavcInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcInterlaceMode

FromJSON XavcInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcInterlaceMode

FromJSONKey XavcInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcInterlaceMode

ToLog XavcInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcInterlaceMode

ToHeader XavcInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcInterlaceMode

ToQuery XavcInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcInterlaceMode

FromXML XavcInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcInterlaceMode

ToXML XavcInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcInterlaceMode

ToByteString XavcInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcInterlaceMode

FromText XavcInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcInterlaceMode

ToText XavcInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcInterlaceMode

type Rep XavcInterlaceMode Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcInterlaceMode

type Rep XavcInterlaceMode = D1 ('MetaData "XavcInterlaceMode" "Amazonka.MediaConvert.Types.XavcInterlaceMode" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "XavcInterlaceMode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromXavcInterlaceMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

XavcProfile

newtype XavcProfile Source #

Specify the XAVC profile for this output. For more information, see the Sony documentation at https://www.xavc-info.org/. Note that MediaConvert doesn't support the interlaced video XAVC operating points for XAVC_HD_INTRA_CBG. To create an interlaced XAVC output, choose the profile XAVC_HD.

Constructors

XavcProfile' 
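
Not part of the generated documentation: a minimal sketch using the exported constructor with the profile names from the description above.

{-# LANGUAGE OverloadedStrings #-}
import Amazonka.MediaConvert.Types (XavcProfile (..))

-- XAVC_HD supports interlaced operating points; XAVC_HD_INTRA_CBG does not.
xavcHd, xavcHdIntraCbg :: XavcProfile
xavcHd         = XavcProfile' "XAVC_HD"
xavcHdIntraCbg = XavcProfile' "XAVC_HD_INTRA_CBG"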

Instances

Instances details
Eq XavcProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcProfile

Ord XavcProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcProfile

Read XavcProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcProfile

Show XavcProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcProfile

Generic XavcProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcProfile

Associated Types

type Rep XavcProfile :: Type -> Type #

NFData XavcProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcProfile

Methods

rnf :: XavcProfile -> () #

Hashable XavcProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcProfile

ToJSON XavcProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcProfile

ToJSONKey XavcProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcProfile

FromJSON XavcProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcProfile

FromJSONKey XavcProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcProfile

ToLog XavcProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcProfile

ToHeader XavcProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcProfile

ToQuery XavcProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcProfile

FromXML XavcProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcProfile

ToXML XavcProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcProfile

Methods

toXML :: XavcProfile -> XML #

ToByteString XavcProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcProfile

FromText XavcProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcProfile

ToText XavcProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcProfile

Methods

toText :: XavcProfile -> Text #

type Rep XavcProfile Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcProfile

type Rep XavcProfile = D1 ('MetaData "XavcProfile" "Amazonka.MediaConvert.Types.XavcProfile" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "XavcProfile'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromXavcProfile") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))
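
A minimal usage sketch: XavcProfile is a Text-wrapped newtype (see the Rep above) whose XavcProfile' constructor is exported, so a profile value can be built directly from its wire string. "XAVC_HD" is the interlaced-capable profile named in the description; the binding name is illustrative.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types (XavcProfile (..))

-- Wrap the wire value directly with the exported XavcProfile' constructor.
-- "XAVC_HD" is the profile the description recommends for interlaced output.
interlacedCapableProfile :: XavcProfile
interlacedCapableProfile = XavcProfile' "XAVC_HD"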

XavcSlowPal

newtype XavcSlowPal Source #

Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output by relabeling the video frames and resampling your audio. Note that enabling this setting will slightly reduce the duration of your video. Related settings: You must also set Frame rate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.

Constructors

XavcSlowPal' 

Bundled Patterns

pattern XavcSlowPal_DISABLED :: XavcSlowPal 
pattern XavcSlowPal_ENABLED :: XavcSlowPal 

Instances

Instances details
Eq XavcSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSlowPal

Ord XavcSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSlowPal

Read XavcSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSlowPal

Show XavcSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSlowPal

Generic XavcSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSlowPal

Associated Types

type Rep XavcSlowPal :: Type -> Type #

NFData XavcSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSlowPal

Methods

rnf :: XavcSlowPal -> () #

Hashable XavcSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSlowPal

ToJSON XavcSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSlowPal

ToJSONKey XavcSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSlowPal

FromJSON XavcSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSlowPal

FromJSONKey XavcSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSlowPal

ToLog XavcSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSlowPal

ToHeader XavcSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSlowPal

ToQuery XavcSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSlowPal

FromXML XavcSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSlowPal

ToXML XavcSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSlowPal

Methods

toXML :: XavcSlowPal -> XML #

ToByteString XavcSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSlowPal

FromText XavcSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSlowPal

ToText XavcSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSlowPal

Methods

toText :: XavcSlowPal -> Text #

type Rep XavcSlowPal Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSlowPal

type Rep XavcSlowPal = D1 ('MetaData "XavcSlowPal" "Amazonka.MediaConvert.Types.XavcSlowPal" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "XavcSlowPal'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromXavcSlowPal") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))
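
A short sketch using the bundled XavcSlowPal_ENABLED pattern listed above, with the related 25/1 frame-rate requirement from the description noted alongside it. The binding names are illustrative, and the JSON step assumes the ToJSON instance listed above is aeson's class.

import qualified Data.Aeson as Aeson

import Amazonka.MediaConvert.Types

-- Relabel 23.976/24 fps input as 25 fps output; the related frame-rate
-- settings must then be SPECIFIED with numerator 25 and denominator 1.
slowPal :: XavcSlowPal
slowPal = XavcSlowPal_ENABLED

-- Assuming the ToJSON instance above is aeson's class, this yields the
-- value used in a JSON job specification.
slowPalJson :: Aeson.Value
slowPalJson = Aeson.toJSON slowPal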

XavcSpatialAdaptiveQuantization

newtype XavcSpatialAdaptiveQuantization Source #

The best way to set up adaptive quantization is to keep the default value, Auto (AUTO), for the setting Adaptive quantization (adaptiveQuantization). When you do so, MediaConvert automatically applies the best types of quantization for your video content. Include this setting in your JSON job specification only when you choose to change the default value for Adaptive quantization. For this setting, keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (adaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.

Instances

Instances details
Eq XavcSpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSpatialAdaptiveQuantization

Ord XavcSpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSpatialAdaptiveQuantization

Read XavcSpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSpatialAdaptiveQuantization

Show XavcSpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSpatialAdaptiveQuantization

Generic XavcSpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSpatialAdaptiveQuantization

Associated Types

type Rep XavcSpatialAdaptiveQuantization :: Type -> Type #

NFData XavcSpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSpatialAdaptiveQuantization

Hashable XavcSpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSpatialAdaptiveQuantization

ToJSON XavcSpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSpatialAdaptiveQuantization

ToJSONKey XavcSpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSpatialAdaptiveQuantization

FromJSON XavcSpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSpatialAdaptiveQuantization

FromJSONKey XavcSpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSpatialAdaptiveQuantization

ToLog XavcSpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSpatialAdaptiveQuantization

ToHeader XavcSpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSpatialAdaptiveQuantization

ToQuery XavcSpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSpatialAdaptiveQuantization

FromXML XavcSpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSpatialAdaptiveQuantization

ToXML XavcSpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSpatialAdaptiveQuantization

ToByteString XavcSpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSpatialAdaptiveQuantization

FromText XavcSpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSpatialAdaptiveQuantization

ToText XavcSpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSpatialAdaptiveQuantization

type Rep XavcSpatialAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSpatialAdaptiveQuantization

type Rep XavcSpatialAdaptiveQuantization = D1 ('MetaData "XavcSpatialAdaptiveQuantization" "Amazonka.MediaConvert.Types.XavcSpatialAdaptiveQuantization" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "XavcSpatialAdaptiveQuantization'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromXavcSpatialAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

XavcTemporalAdaptiveQuantization

newtype XavcTemporalAdaptiveQuantization Source #

The best way to set up adaptive quantization is to keep the default value, Auto (AUTO), for the setting Adaptive quantization (adaptiveQuantization). When you do so, MediaConvert automatically applies the best types of quantization for your video content. Include this setting in your JSON job specification only when you choose to change the default value for Adaptive quantization. For this setting, keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on temporal variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas of the frame that aren't moving and uses more bits on complex objects with sharp edges that move a lot. For example, this feature improves the readability of text tickers on newscasts and scoreboards on sports matches. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen that doesn't have moving objects with sharp edges, such as sports athletes' faces, you might choose to disable this feature. Related setting: When you enable temporal adaptive quantization, adjust the strength of the filter with the setting Adaptive quantization (adaptiveQuantization).

Instances

Instances details
Eq XavcTemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcTemporalAdaptiveQuantization

Ord XavcTemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcTemporalAdaptiveQuantization

Read XavcTemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcTemporalAdaptiveQuantization

Show XavcTemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcTemporalAdaptiveQuantization

Generic XavcTemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcTemporalAdaptiveQuantization

Associated Types

type Rep XavcTemporalAdaptiveQuantization :: Type -> Type #

NFData XavcTemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcTemporalAdaptiveQuantization

Hashable XavcTemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcTemporalAdaptiveQuantization

ToJSON XavcTemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcTemporalAdaptiveQuantization

ToJSONKey XavcTemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcTemporalAdaptiveQuantization

FromJSON XavcTemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcTemporalAdaptiveQuantization

FromJSONKey XavcTemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcTemporalAdaptiveQuantization

ToLog XavcTemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcTemporalAdaptiveQuantization

ToHeader XavcTemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcTemporalAdaptiveQuantization

ToQuery XavcTemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcTemporalAdaptiveQuantization

FromXML XavcTemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcTemporalAdaptiveQuantization

ToXML XavcTemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcTemporalAdaptiveQuantization

ToByteString XavcTemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcTemporalAdaptiveQuantization

FromText XavcTemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcTemporalAdaptiveQuantization

ToText XavcTemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcTemporalAdaptiveQuantization

type Rep XavcTemporalAdaptiveQuantization Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcTemporalAdaptiveQuantization

type Rep XavcTemporalAdaptiveQuantization = D1 ('MetaData "XavcTemporalAdaptiveQuantization" "Amazonka.MediaConvert.Types.XavcTemporalAdaptiveQuantization" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'True) (C1 ('MetaCons "XavcTemporalAdaptiveQuantization'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromXavcTemporalAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))
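
Both adaptive-quantization toggles above wrap a Text wire value, so overriding the Enabled default (for example, for content where the viewer's attention sits on complex texture or sharp moving edges) means supplying the opposite value. A hedged sketch: the newtype constructors are taken from the Rep shown above, and "DISABLED" is assumed to mirror the documented ENABLED value.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types

-- Only include these in a job specification when overriding the default
-- adaptive-quantization behaviour. "DISABLED" is an assumed wire value,
-- the opposite of the documented ENABLED default.
spatialAq :: XavcSpatialAdaptiveQuantization
spatialAq = XavcSpatialAdaptiveQuantization' "DISABLED"

temporalAq :: XavcTemporalAdaptiveQuantization
temporalAq = XavcTemporalAdaptiveQuantization' "DISABLED"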

AacSettings

data AacSettings Source #

Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value AAC. The service accepts one of two mutually exclusive groups of AAC settings--VBR and CBR. To select one of these modes, set the value of Bitrate control mode (rateControlMode) to "VBR" or "CBR". In VBR mode, you control the audio quality with the setting VBR quality (vbrQuality). In CBR mode, you use the setting Bitrate (bitrate). Defaults and valid values depend on the rate control mode.

See: newAacSettings smart constructor.

Constructors

AacSettings' 

Fields

  • audioDescriptionBroadcasterMix :: Maybe AacAudioDescriptionBroadcasterMix

    Choose BROADCASTER_MIXED_AD when the input contains pre-mixed main audio + audio description (AD) as a stereo pair. The value for AudioType will be set to 3, which signals to downstream systems that this stream contains "broadcaster mixed AD". Note that the input received by the encoder must contain pre-mixed audio; the encoder does not perform the mixing. When you choose BROADCASTER_MIXED_AD, the encoder ignores any values you provide in AudioType and FollowInputAudioType. Choose NORMAL when the input does not contain pre-mixed audio + audio description (AD). In this case, the encoder will use any values you provide for AudioType and FollowInputAudioType.

  • rawFormat :: Maybe AacRawFormat

    Enables LATM/LOAS AAC output. Note that if you use LATM/LOAS AAC in an output, you must choose "No container" for the output container.

  • codingMode :: Maybe AacCodingMode

    Mono (Audio Description), Mono, Stereo, or 5.1 channel layout. Valid values depend on rate control mode and profile. "1.0 - Audio Description (Receiver Mix)" setting receives a stereo description plus control track and emits a mono AAC encode of the description track, with control data emitted in the PES header as per ETSI TS 101 154 Annex E.

  • rateControlMode :: Maybe AacRateControlMode

    Rate Control Mode.

  • sampleRate :: Maybe Natural

    Sample rate in Hz. Valid values depend on rate control mode and profile.

  • specification :: Maybe AacSpecification

    Use MPEG-2 AAC instead of MPEG-4 AAC audio for raw or MPEG-2 Transport Stream containers.

  • codecProfile :: Maybe AacCodecProfile

    AAC Profile.

  • bitrate :: Maybe Natural

    Specify the average bitrate in bits per second. The set of valid values for this setting is: 6000, 8000, 10000, 12000, 14000, 16000, 20000, 24000, 28000, 32000, 40000, 48000, 56000, 64000, 80000, 96000, 112000, 128000, 160000, 192000, 224000, 256000, 288000, 320000, 384000, 448000, 512000, 576000, 640000, 768000, 896000, 1024000. The value you set is also constrained by the values that you choose for Profile (codecProfile), Coding mode (codingMode), and Sample rate (sampleRate). Default values depend on Bitrate control mode and Profile.

  • vbrQuality :: Maybe AacVbrQuality

    VBR Quality Level - Only used if rate_control_mode is VBR.

Instances

Instances details
Eq AacSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSettings

Read AacSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSettings

Show AacSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSettings

Generic AacSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSettings

Associated Types

type Rep AacSettings :: Type -> Type #

NFData AacSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSettings

Methods

rnf :: AacSettings -> () #

Hashable AacSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSettings

ToJSON AacSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSettings

FromJSON AacSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSettings

type Rep AacSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AacSettings

newAacSettings :: AacSettings Source #

Create a value of AacSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:audioDescriptionBroadcasterMix:AacSettings', aacSettings_audioDescriptionBroadcasterMix - Choose BROADCASTER_MIXED_AD when the input contains pre-mixed main audio + audio description (AD) as a stereo pair. The value for AudioType will be set to 3, which signals to downstream systems that this stream contains "broadcaster mixed AD". Note that the input received by the encoder must contain pre-mixed audio; the encoder does not perform the mixing. When you choose BROADCASTER_MIXED_AD, the encoder ignores any values you provide in AudioType and FollowInputAudioType. Choose NORMAL when the input does not contain pre-mixed audio + audio description (AD). In this case, the encoder will use any values you provide for AudioType and FollowInputAudioType.

$sel:rawFormat:AacSettings', aacSettings_rawFormat - Enables LATM/LOAS AAC output. Note that if you use LATM/LOAS AAC in an output, you must choose "No container" for the output container.

$sel:codingMode:AacSettings', aacSettings_codingMode - Mono (Audio Description), Mono, Stereo, or 5.1 channel layout. Valid values depend on rate control mode and profile. "1.0 - Audio Description (Receiver Mix)" setting receives a stereo description plus control track and emits a mono AAC encode of the description track, with control data emitted in the PES header as per ETSI TS 101 154 Annex E.

$sel:rateControlMode:AacSettings', aacSettings_rateControlMode - Rate Control Mode.

$sel:sampleRate:AacSettings', aacSettings_sampleRate - Sample rate in Hz. Valid values depend on rate control mode and profile.

$sel:specification:AacSettings', aacSettings_specification - Use MPEG-2 AAC instead of MPEG-4 AAC audio for raw or MPEG-2 Transport Stream containers.

$sel:codecProfile:AacSettings', aacSettings_codecProfile - AAC Profile.

$sel:bitrate:AacSettings', aacSettings_bitrate - Specify the average bitrate in bits per second. The set of valid values for this setting is: 6000, 8000, 10000, 12000, 14000, 16000, 20000, 24000, 28000, 32000, 40000, 48000, 56000, 64000, 80000, 96000, 112000, 128000, 160000, 192000, 224000, 256000, 288000, 320000, 384000, 448000, 512000, 576000, 640000, 768000, 896000, 1024000. The value you set is also constrained by the values that you choose for Profile (codecProfile), Coding mode (codingMode), and Sample rate (sampleRate). Default values depend on Bitrate control mode and Profile.

$sel:vbrQuality:AacSettings', aacSettings_vbrQuality - VBR Quality Level - Only used if rate_control_mode is VBR.

aacSettings_audioDescriptionBroadcasterMix :: Lens' AacSettings (Maybe AacAudioDescriptionBroadcasterMix) Source #

Choose BROADCASTER_MIXED_AD when the input contains pre-mixed main audio + audio description (AD) as a stereo pair. The value for AudioType will be set to 3, which signals to downstream systems that this stream contains "broadcaster mixed AD". Note that the input received by the encoder must contain pre-mixed audio; the encoder does not perform the mixing. When you choose BROADCASTER_MIXED_AD, the encoder ignores any values you provide in AudioType and FollowInputAudioType. Choose NORMAL when the input does not contain pre-mixed audio + audio description (AD). In this case, the encoder will use any values you provide for AudioType and FollowInputAudioType.

aacSettings_rawFormat :: Lens' AacSettings (Maybe AacRawFormat) Source #

Enables LATM/LOAS AAC output. Note that if you use LATM/LOAS AAC in an output, you must choose "No container" for the output container.

aacSettings_codingMode :: Lens' AacSettings (Maybe AacCodingMode) Source #

Mono (Audio Description), Mono, Stereo, or 5.1 channel layout. Valid values depend on rate control mode and profile. "1.0 - Audio Description (Receiver Mix)" setting receives a stereo description plus control track and emits a mono AAC encode of the description track, with control data emitted in the PES header as per ETSI TS 101 154 Annex E.

aacSettings_rateControlMode :: Lens' AacSettings (Maybe AacRateControlMode) Source #

Rate Control Mode.

aacSettings_sampleRate :: Lens' AacSettings (Maybe Natural) Source #

Sample rate in Hz. Valid values depend on rate control mode and profile.

aacSettings_specification :: Lens' AacSettings (Maybe AacSpecification) Source #

Use MPEG-2 AAC instead of MPEG-4 AAC audio for raw or MPEG-2 Transport Stream containers.

aacSettings_codecProfile :: Lens' AacSettings (Maybe AacCodecProfile) Source #

AAC Profile.

aacSettings_bitrate :: Lens' AacSettings (Maybe Natural) Source #

Specify the average bitrate in bits per second. The set of valid values for this setting is: 6000, 8000, 10000, 12000, 14000, 16000, 20000, 24000, 28000, 32000, 40000, 48000, 56000, 64000, 80000, 96000, 112000, 128000, 160000, 192000, 224000, 256000, 288000, 320000, 384000, 448000, 512000, 576000, 640000, 768000, 896000, 1024000. The value you set is also constrained by the values that you choose for Profile (codecProfile), Coding mode (codingMode), and Sample rate (sampleRate). Default values depend on Bitrate control mode and Profile.

aacSettings_vbrQuality :: Lens' AacSettings (Maybe AacVbrQuality) Source #

VBR Quality Level - Only used if rate_control_mode is VBR.
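
The rate-control choice described above is easiest to see side by side. A hedged sketch using newAacSettings and the lenses on this page, with (&) and (?~) from the lens package (generic-lens or optics work equally well); the AacRateControlMode_CBR, AacRateControlMode_VBR and AacVbrQuality_MEDIUM_HIGH pattern names are assumptions following the TypeName_VALUE convention shown for XavcSlowPal above.

import Control.Lens ((&), (?~))

import Amazonka.MediaConvert.Types

-- CBR: control quality with an average bitrate from the valid-values list.
cbrAac :: AacSettings
cbrAac =
  newAacSettings
    & aacSettings_rateControlMode ?~ AacRateControlMode_CBR  -- assumed pattern name
    & aacSettings_bitrate ?~ 96000

-- VBR: control quality with vbrQuality instead of a bitrate.
vbrAac :: AacSettings
vbrAac =
  newAacSettings
    & aacSettings_rateControlMode ?~ AacRateControlMode_VBR      -- assumed pattern name
    & aacSettings_vbrQuality ?~ AacVbrQuality_MEDIUM_HIGH        -- assumed pattern name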

Ac3Settings

data Ac3Settings Source #

Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value AC3.

See: newAc3Settings smart constructor.

Constructors

Ac3Settings' 

Fields

  • lfeFilter :: Maybe Ac3LfeFilter

    Applies a 120Hz lowpass filter to the LFE channel prior to encoding. Only valid with 3_2_LFE coding mode.

  • dynamicRangeCompressionLine :: Maybe Ac3DynamicRangeCompressionLine

    Choose the Dolby Digital dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby Digital stream for the line operating mode. Related setting: When you use this setting, MediaConvert ignores any value you provide for Dynamic range compression profile (DynamicRangeCompressionProfile). For information about the Dolby Digital DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.

  • metadataControl :: Maybe Ac3MetadataControl

    When set to FOLLOW_INPUT, encoder metadata will be sourced from the DD, DD+, or DolbyE decoder that supplied this audio data. If audio was not supplied from one of these streams, then the static metadata settings will be used.

  • bitstreamMode :: Maybe Ac3BitstreamMode

    Specify the bitstream mode for the AC-3 stream that the encoder emits. For more information about the AC3 bitstream mode, see ATSC A/52-2012 (Annex E).

  • dynamicRangeCompressionRf :: Maybe Ac3DynamicRangeCompressionRf

    Choose the Dolby Digital dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby Digital stream for the RF operating mode. Related setting: When you use this setting, MediaConvert ignores any value you provide for Dynamic range compression profile (DynamicRangeCompressionProfile). For information about the Dolby Digital DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.

  • codingMode :: Maybe Ac3CodingMode

    Dolby Digital coding mode. Determines number of channels.

  • sampleRate :: Maybe Natural

    This value is always 48000. It represents the sample rate in Hz.

  • dynamicRangeCompressionProfile :: Maybe Ac3DynamicRangeCompressionProfile

    When you want to add Dolby dynamic range compression (DRC) signaling to your output stream, we recommend that you use the mode-specific settings instead of Dynamic range compression profile (DynamicRangeCompressionProfile). The mode-specific settings are Dynamic range compression profile, line mode (dynamicRangeCompressionLine) and Dynamic range compression profile, RF mode (dynamicRangeCompressionRf). Note that when you specify values for all three settings, MediaConvert ignores the value of this setting in favor of the mode-specific settings. If you do use this setting instead of the mode-specific settings, choose None (NONE) to leave out DRC signaling. Keep the default Film standard (FILM_STANDARD) to set the profile to Dolby's film standard profile for all operating modes.

  • bitrate :: Maybe Natural

    Specify the average bitrate in bits per second. Valid bitrates depend on the coding mode.

  • dialnorm :: Maybe Natural

    Sets the dialnorm for the output. If blank and input audio is Dolby Digital, dialnorm will be passed through.

Instances

Instances details
Eq Ac3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3Settings

Read Ac3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3Settings

Show Ac3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3Settings

Generic Ac3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3Settings

Associated Types

type Rep Ac3Settings :: Type -> Type #

NFData Ac3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3Settings

Methods

rnf :: Ac3Settings -> () #

Hashable Ac3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3Settings

ToJSON Ac3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3Settings

FromJSON Ac3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3Settings

type Rep Ac3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Ac3Settings

type Rep Ac3Settings = D1 ('MetaData "Ac3Settings" "Amazonka.MediaConvert.Types.Ac3Settings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "Ac3Settings'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "lfeFilter") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Ac3LfeFilter)) :*: S1 ('MetaSel ('Just "dynamicRangeCompressionLine") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Ac3DynamicRangeCompressionLine))) :*: (S1 ('MetaSel ('Just "metadataControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Ac3MetadataControl)) :*: (S1 ('MetaSel ('Just "bitstreamMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Ac3BitstreamMode)) :*: S1 ('MetaSel ('Just "dynamicRangeCompressionRf") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Ac3DynamicRangeCompressionRf))))) :*: ((S1 ('MetaSel ('Just "codingMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Ac3CodingMode)) :*: S1 ('MetaSel ('Just "sampleRate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "dynamicRangeCompressionProfile") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Ac3DynamicRangeCompressionProfile)) :*: (S1 ('MetaSel ('Just "bitrate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "dialnorm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))))

newAc3Settings :: Ac3Settings Source #

Create a value of Ac3Settings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:lfeFilter:Ac3Settings', ac3Settings_lfeFilter - Applies a 120Hz lowpass filter to the LFE channel prior to encoding. Only valid with 3_2_LFE coding mode.

$sel:dynamicRangeCompressionLine:Ac3Settings', ac3Settings_dynamicRangeCompressionLine - Choose the Dolby Digital dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby Digital stream for the line operating mode. Related setting: When you use this setting, MediaConvert ignores any value you provide for Dynamic range compression profile (DynamicRangeCompressionProfile). For information about the Dolby Digital DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.

$sel:metadataControl:Ac3Settings', ac3Settings_metadataControl - When set to FOLLOW_INPUT, encoder metadata will be sourced from the DD, DD+, or DolbyE decoder that supplied this audio data. If audio was not supplied from one of these streams, then the static metadata settings will be used.

$sel:bitstreamMode:Ac3Settings', ac3Settings_bitstreamMode - Specify the bitstream mode for the AC-3 stream that the encoder emits. For more information about the AC3 bitstream mode, see ATSC A/52-2012 (Annex E).

$sel:dynamicRangeCompressionRf:Ac3Settings', ac3Settings_dynamicRangeCompressionRf - Choose the Dolby Digital dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby Digital stream for the RF operating mode. Related setting: When you use this setting, MediaConvert ignores any value you provide for Dynamic range compression profile (DynamicRangeCompressionProfile). For information about the Dolby Digital DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.

$sel:codingMode:Ac3Settings', ac3Settings_codingMode - Dolby Digital coding mode. Determines number of channels.

$sel:sampleRate:Ac3Settings', ac3Settings_sampleRate - This value is always 48000. It represents the sample rate in Hz.

$sel:dynamicRangeCompressionProfile:Ac3Settings', ac3Settings_dynamicRangeCompressionProfile - When you want to add Dolby dynamic range compression (DRC) signaling to your output stream, we recommend that you use the mode-specific settings instead of Dynamic range compression profile (DynamicRangeCompressionProfile). The mode-specific settings are Dynamic range compression profile, line mode (dynamicRangeCompressionLine) and Dynamic range compression profile, RF mode (dynamicRangeCompressionRf). Note that when you specify values for all three settings, MediaConvert ignores the value of this setting in favor of the mode-specific settings. If you do use this setting instead of the mode-specific settings, choose None (NONE) to leave out DRC signaling. Keep the default Film standard (FILM_STANDARD) to set the profile to Dolby's film standard profile for all operating modes.

$sel:bitrate:Ac3Settings', ac3Settings_bitrate - Specify the average bitrate in bits per second. Valid bitrates depend on the coding mode.

$sel:dialnorm:Ac3Settings', ac3Settings_dialnorm - Sets the dialnorm for the output. If blank and input audio is Dolby Digital, dialnorm will be passed through.

ac3Settings_lfeFilter :: Lens' Ac3Settings (Maybe Ac3LfeFilter) Source #

Applies a 120Hz lowpass filter to the LFE channel prior to encoding. Only valid with 3_2_LFE coding mode.

ac3Settings_dynamicRangeCompressionLine :: Lens' Ac3Settings (Maybe Ac3DynamicRangeCompressionLine) Source #

Choose the Dolby Digital dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby Digital stream for the line operating mode. Related setting: When you use this setting, MediaConvert ignores any value you provide for Dynamic range compression profile (DynamicRangeCompressionProfile). For information about the Dolby Digital DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.

ac3Settings_metadataControl :: Lens' Ac3Settings (Maybe Ac3MetadataControl) Source #

When set to FOLLOW_INPUT, encoder metadata will be sourced from the DD, DD+, or DolbyE decoder that supplied this audio data. If audio was not supplied from one of these streams, then the static metadata settings will be used.

ac3Settings_bitstreamMode :: Lens' Ac3Settings (Maybe Ac3BitstreamMode) Source #

Specify the bitstream mode for the AC-3 stream that the encoder emits. For more information about the AC3 bitstream mode, see ATSC A/52-2012 (Annex E).

ac3Settings_dynamicRangeCompressionRf :: Lens' Ac3Settings (Maybe Ac3DynamicRangeCompressionRf) Source #

Choose the Dolby Digital dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby Digital stream for the RF operating mode. Related setting: When you use this setting, MediaConvert ignores any value you provide for Dynamic range compression profile (DynamicRangeCompressionProfile). For information about the Dolby Digital DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.

ac3Settings_codingMode :: Lens' Ac3Settings (Maybe Ac3CodingMode) Source #

Dolby Digital coding mode. Determines number of channels.

ac3Settings_sampleRate :: Lens' Ac3Settings (Maybe Natural) Source #

This value is always 48000. It represents the sample rate in Hz.

ac3Settings_dynamicRangeCompressionProfile :: Lens' Ac3Settings (Maybe Ac3DynamicRangeCompressionProfile) Source #

When you want to add Dolby dynamic range compression (DRC) signaling to your output stream, we recommend that you use the mode-specific settings instead of Dynamic range compression profile (DynamicRangeCompressionProfile). The mode-specific settings are Dynamic range compression profile, line mode (dynamicRangeCompressionLine) and Dynamic range compression profile, RF mode (dynamicRangeCompressionRf). Note that when you specify values for all three settings, MediaConvert ignores the value of this setting in favor of the mode-specific settings. If you do use this setting instead of the mode-specific settings, choose None (NONE) to leave out DRC signaling. Keep the default Film standard (FILM_STANDARD) to set the profile to Dolby's film standard profile for all operating modes.

ac3Settings_bitrate :: Lens' Ac3Settings (Maybe Natural) Source #

Specify the average bitrate in bits per second. Valid bitrates depend on the coding mode.

ac3Settings_dialnorm :: Lens' Ac3Settings (Maybe Natural) Source #

Sets the dialnorm for the output. If blank and input audio is Dolby Digital, dialnorm will be passed through.
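
A hedged sketch of a 5.1 Dolby Digital configuration built from newAc3Settings and the lenses above, with (&) and (?~) from the lens package; the bitrate and dialnorm values are illustrative, and the Ac3CodingMode_CODING_MODE_3_2_LFE and Ac3LfeFilter_ENABLED pattern names are assumptions following the TypeName_VALUE convention on this page.

import Control.Lens ((&), (?~))

import Amazonka.MediaConvert.Types

-- The LFE lowpass filter is only valid with the 3_2_LFE coding mode, so the
-- two settings are configured together here.
surroundAc3 :: Ac3Settings
surroundAc3 =
  newAc3Settings
    & ac3Settings_codingMode ?~ Ac3CodingMode_CODING_MODE_3_2_LFE  -- assumed pattern name
    & ac3Settings_lfeFilter  ?~ Ac3LfeFilter_ENABLED               -- assumed pattern name
    & ac3Settings_bitrate    ?~ 384000                             -- illustrative value
    & ac3Settings_dialnorm   ?~ 27                                 -- illustrative value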

AccelerationSettings

data AccelerationSettings Source #

Accelerated transcoding can significantly speed up jobs with long, visually complex content.

See: newAccelerationSettings smart constructor.

Constructors

AccelerationSettings' 

Fields

  • mode :: AccelerationMode

    Specify the conditions when the service will run your job with accelerated transcoding.

Instances

Instances details
Eq AccelerationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationSettings

Read AccelerationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationSettings

Show AccelerationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationSettings

Generic AccelerationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationSettings

Associated Types

type Rep AccelerationSettings :: Type -> Type #

NFData AccelerationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationSettings

Methods

rnf :: AccelerationSettings -> () #

Hashable AccelerationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationSettings

ToJSON AccelerationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationSettings

FromJSON AccelerationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationSettings

type Rep AccelerationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AccelerationSettings

type Rep AccelerationSettings = D1 ('MetaData "AccelerationSettings" "Amazonka.MediaConvert.Types.AccelerationSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "AccelerationSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "mode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 AccelerationMode)))

newAccelerationSettings Source #

Create a value of AccelerationSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:mode:AccelerationSettings', accelerationSettings_mode - Specify the conditions when the service will run your job with accelerated transcoding.

accelerationSettings_mode :: Lens' AccelerationSettings AccelerationMode Source #

Specify the conditions when the service will run your job with accelerated transcoding.
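
Because mode is the only field and it is required rather than optional, the smart constructor takes it directly. A hedged sketch, assuming the collapsed signature above is AccelerationMode -> AccelerationSettings; AccelerationMode_PREFERRED is an assumed pattern name.

import Amazonka.MediaConvert.Types

-- Pass the required acceleration mode straight to the smart constructor.
acceleration :: AccelerationSettings
acceleration = newAccelerationSettings AccelerationMode_PREFERRED  -- assumed pattern name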

AiffSettings

data AiffSettings Source #

Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value AIFF.

See: newAiffSettings smart constructor.

Constructors

AiffSettings' 

Fields

  • bitDepth :: Maybe Natural

    Specify Bit depth (BitDepth), in bits per sample, to choose the encoding quality for this audio track.

  • channels :: Maybe Natural

    Specify the number of channels in this output audio track. Valid values are 1 and even numbers up to 64. For example, 1, 2, 4, 6, and so on, up to 64.

  • sampleRate :: Maybe Natural

    Sample rate in Hz.

Instances

Instances details
Eq AiffSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AiffSettings

Read AiffSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AiffSettings

Show AiffSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AiffSettings

Generic AiffSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AiffSettings

Associated Types

type Rep AiffSettings :: Type -> Type #

NFData AiffSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AiffSettings

Methods

rnf :: AiffSettings -> () #

Hashable AiffSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AiffSettings

ToJSON AiffSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AiffSettings

FromJSON AiffSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AiffSettings

type Rep AiffSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AiffSettings

type Rep AiffSettings = D1 ('MetaData "AiffSettings" "Amazonka.MediaConvert.Types.AiffSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "AiffSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "bitDepth") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "channels") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "sampleRate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))

newAiffSettings :: AiffSettings Source #

Create a value of AiffSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:bitDepth:AiffSettings', aiffSettings_bitDepth - Specify Bit depth (BitDepth), in bits per sample, to choose the encoding quality for this audio track.

$sel:channels:AiffSettings', aiffSettings_channels - Specify the number of channels in this output audio track. Valid values are 1 and even numbers up to 64. For example, 1, 2, 4, 6, and so on, up to 64.

$sel:sampleRate:AiffSettings', aiffSettings_sampleRate - Sample rate in Hz.

aiffSettings_bitDepth :: Lens' AiffSettings (Maybe Natural) Source #

Specify Bit depth (BitDepth), in bits per sample, to choose the encoding quality for this audio track.

aiffSettings_channels :: Lens' AiffSettings (Maybe Natural) Source #

Specify the number of channels in this output audio track. Valid values are 1 and even numbers up to 64. For example, 1, 2, 4, 6, and so on, up to 64.

aiffSettings_sampleRate :: Lens' AiffSettings (Maybe Natural) Source #

Sample rate in Hz.
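
A hedged sketch of a 24-bit, stereo, 48 kHz AIFF track built from newAiffSettings and the lenses above, with (&) and (?~) from the lens package; the numeric values are illustrative.

import Control.Lens ((&), (?~))

import Amazonka.MediaConvert.Types

-- All three fields are optional Naturals, so each is set through its lens.
studioAiff :: AiffSettings
studioAiff =
  newAiffSettings
    & aiffSettings_bitDepth   ?~ 24
    & aiffSettings_channels   ?~ 2
    & aiffSettings_sampleRate ?~ 48000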

AncillarySourceSettings

data AncillarySourceSettings Source #

Settings for ancillary captions source.

See: newAncillarySourceSettings smart constructor.

Constructors

AncillarySourceSettings' 

Fields

  • convert608To708 :: Maybe AncillaryConvert608To708

    Specify whether this set of input captions appears in your outputs in both 608 and 708 format. If you choose Upconvert (UPCONVERT), MediaConvert includes the captions data in two ways: it passes the 608 data through using the 608 compatibility bytes fields of the 708 wrapper, and it also translates the 608 data into 708.

  • terminateCaptions :: Maybe AncillaryTerminateCaptions

    By default, the service terminates any unterminated captions at the end of each input. If you want the caption to continue onto your next input, disable this setting.

  • sourceAncillaryChannelNumber :: Maybe Natural

    Specifies the 608 channel number in the ancillary data track from which to extract captions. Unused for passthrough.

Instances

Instances details
Eq AncillarySourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillarySourceSettings

Read AncillarySourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillarySourceSettings

Show AncillarySourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillarySourceSettings

Generic AncillarySourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillarySourceSettings

Associated Types

type Rep AncillarySourceSettings :: Type -> Type #

NFData AncillarySourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillarySourceSettings

Methods

rnf :: AncillarySourceSettings -> () #

Hashable AncillarySourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillarySourceSettings

ToJSON AncillarySourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillarySourceSettings

FromJSON AncillarySourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillarySourceSettings

type Rep AncillarySourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AncillarySourceSettings

type Rep AncillarySourceSettings = D1 ('MetaData "AncillarySourceSettings" "Amazonka.MediaConvert.Types.AncillarySourceSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "AncillarySourceSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "convert608To708") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AncillaryConvert608To708)) :*: (S1 ('MetaSel ('Just "terminateCaptions") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AncillaryTerminateCaptions)) :*: S1 ('MetaSel ('Just "sourceAncillaryChannelNumber") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))

newAncillarySourceSettings :: AncillarySourceSettings Source #

Create a value of AncillarySourceSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:convert608To708:AncillarySourceSettings', ancillarySourceSettings_convert608To708 - Specify whether this set of input captions appears in your outputs in both 608 and 708 format. If you choose Upconvert (UPCONVERT), MediaConvert includes the captions data in two ways: it passes the 608 data through using the 608 compatibility bytes fields of the 708 wrapper, and it also translates the 608 data into 708.

$sel:terminateCaptions:AncillarySourceSettings', ancillarySourceSettings_terminateCaptions - By default, the service terminates any unterminated captions at the end of each input. If you want the caption to continue onto your next input, disable this setting.

$sel:sourceAncillaryChannelNumber:AncillarySourceSettings', ancillarySourceSettings_sourceAncillaryChannelNumber - Specifies the 608 channel number in the ancillary data track from which to extract captions. Unused for passthrough.

ancillarySourceSettings_convert608To708 :: Lens' AncillarySourceSettings (Maybe AncillaryConvert608To708) Source #

Specify whether this set of input captions appears in your outputs in both 608 and 708 format. If you choose Upconvert (UPCONVERT), MediaConvert includes the captions data in two ways: it passes the 608 data through using the 608 compatibility bytes fields of the 708 wrapper, and it also translates the 608 data into 708.

ancillarySourceSettings_terminateCaptions :: Lens' AncillarySourceSettings (Maybe AncillaryTerminateCaptions) Source #

By default, the service terminates any unterminated captions at the end of each input. If you want the caption to continue onto your next input, disable this setting.

ancillarySourceSettings_sourceAncillaryChannelNumber :: Lens' AncillarySourceSettings (Maybe Natural) Source #

Specifies the 608 channel number in the ancillary data track from which to extract captions. Unused for passthrough.
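
A hedged sketch that extracts 608 captions from ancillary channel 1 and upconverts them so the output carries both 608 and 708 data; the channel number is illustrative, and AncillaryConvert608To708_UPCONVERT is an assumed pattern name for the UPCONVERT value described above.

import Control.Lens ((&), (?~))

import Amazonka.MediaConvert.Types

-- UPCONVERT passes the 608 data through and also translates it into 708,
-- as described above. Channel 1 is an illustrative choice.
ancillaryCaptions :: AncillarySourceSettings
ancillaryCaptions =
  newAncillarySourceSettings
    & ancillarySourceSettings_sourceAncillaryChannelNumber ?~ 1
    & ancillarySourceSettings_convert608To708 ?~ AncillaryConvert608To708_UPCONVERT  -- assumed pattern name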

AudioChannelTaggingSettings

data AudioChannelTaggingSettings Source #

When you mimic a multi-channel audio layout with multiple mono-channel tracks, you can tag each channel layout manually. For example, you would tag the tracks that contain your left, right, and center audio with Left (L), Right (R), and Center (C), respectively. When you don't specify a value, MediaConvert labels your track as Center (C) by default. To use audio layout tagging, your output must be in a QuickTime (.mov) container; your audio codec must be AAC, WAV, or AIFF; and you must set up your audio track to have only one channel.

See: newAudioChannelTaggingSettings smart constructor.

Constructors

AudioChannelTaggingSettings' 

Fields

  • channelTag :: Maybe AudioChannelTag

    You can add a tag for this mono-channel audio track to mimic its placement in a multi-channel layout. For example, if this track is the left surround channel, choose Left surround (LS).

Instances

Instances details
Eq AudioChannelTaggingSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTaggingSettings

Read AudioChannelTaggingSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTaggingSettings

Show AudioChannelTaggingSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTaggingSettings

Generic AudioChannelTaggingSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTaggingSettings

Associated Types

type Rep AudioChannelTaggingSettings :: Type -> Type #

NFData AudioChannelTaggingSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTaggingSettings

Hashable AudioChannelTaggingSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTaggingSettings

ToJSON AudioChannelTaggingSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTaggingSettings

FromJSON AudioChannelTaggingSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTaggingSettings

type Rep AudioChannelTaggingSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioChannelTaggingSettings

type Rep AudioChannelTaggingSettings = D1 ('MetaData "AudioChannelTaggingSettings" "Amazonka.MediaConvert.Types.AudioChannelTaggingSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "AudioChannelTaggingSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "channelTag") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AudioChannelTag))))

newAudioChannelTaggingSettings :: AudioChannelTaggingSettings Source #

Create a value of AudioChannelTaggingSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:channelTag:AudioChannelTaggingSettings', audioChannelTaggingSettings_channelTag - You can add a tag for this mono-channel audio track to mimic its placement in a multi-channel layout. For example, if this track is the left surround channel, choose Left surround (LS).

audioChannelTaggingSettings_channelTag :: Lens' AudioChannelTaggingSettings (Maybe AudioChannelTag) Source #

You can add a tag for this mono-channel audio track to mimic its placement in a multi-channel layout. For example, if this track is the left surround channel, choose Left surround (LS).
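
A hedged sketch tagging a mono track as the left surround channel; AudioChannelTag_LS is an assumed pattern name for the Left surround (LS) value named above.

import Control.Lens ((&), (?~))

import Amazonka.MediaConvert.Types

-- Tag this mono-channel track as Left surround in the mimicked layout.
leftSurroundTag :: AudioChannelTaggingSettings
leftSurroundTag =
  newAudioChannelTaggingSettings
    & audioChannelTaggingSettings_channelTag ?~ AudioChannelTag_LS  -- assumed pattern name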

AudioCodecSettings

data AudioCodecSettings Source #

Settings related to audio encoding. The settings in this group vary depending on the value that you choose for your audio codec.

See: newAudioCodecSettings smart constructor.

Constructors

AudioCodecSettings' 

Fields

  • aiffSettings :: Maybe AiffSettings

    Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value AIFF.

  • codec :: Maybe AudioCodec

    Choose the audio codec for this output. Note that the option Dolby Digital passthrough (PASSTHROUGH) applies only to Dolby Digital and Dolby Digital Plus audio inputs. Make sure that you choose a codec that's supported with your output container: https://docs.aws.amazon.com/mediaconvert/latest/ug/reference-codecs-containers.html#reference-codecs-containers-output-audio For audio-only outputs, make sure that both your input audio codec and your output audio codec are supported for audio-only workflows. For more information, see: https://docs.aws.amazon.com/mediaconvert/latest/ug/reference-codecs-containers-input.html#reference-codecs-containers-input-audio-only and https://docs.aws.amazon.com/mediaconvert/latest/ug/reference-codecs-containers.html#audio-only-output

  • ac3Settings :: Maybe Ac3Settings

    Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value AC3.

  • opusSettings :: Maybe OpusSettings

    Required when you set Codec, under AudioDescriptions>CodecSettings, to the value OPUS.

  • mp2Settings :: Maybe Mp2Settings

    Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value MP2.

  • wavSettings :: Maybe WavSettings

    Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value WAV.

  • eac3AtmosSettings :: Maybe Eac3AtmosSettings

    Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value EAC3_ATMOS.

  • mp3Settings :: Maybe Mp3Settings

    Required when you set Codec, under AudioDescriptions>CodecSettings, to the value MP3.

  • vorbisSettings :: Maybe VorbisSettings

    Required when you set Codec, under AudioDescriptions>CodecSettings, to the value Vorbis.

  • aacSettings :: Maybe AacSettings

    Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value AAC. The service accepts one of two mutually exclusive groups of AAC settings--VBR and CBR. To select one of these modes, set the value of Bitrate control mode (rateControlMode) to "VBR" or "CBR". In VBR mode, you control the audio quality with the setting VBR quality (vbrQuality). In CBR mode, you use the setting Bitrate (bitrate). Defaults and valid values depend on the rate control mode.

  • eac3Settings :: Maybe Eac3Settings

    Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value EAC3.

Instances

Instances details
Eq AudioCodecSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodecSettings

Read AudioCodecSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodecSettings

Show AudioCodecSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodecSettings

Generic AudioCodecSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodecSettings

Associated Types

type Rep AudioCodecSettings :: Type -> Type #

NFData AudioCodecSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodecSettings

Methods

rnf :: AudioCodecSettings -> () #

Hashable AudioCodecSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodecSettings

ToJSON AudioCodecSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodecSettings

FromJSON AudioCodecSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodecSettings

type Rep AudioCodecSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioCodecSettings

type Rep AudioCodecSettings = D1 ('MetaData "AudioCodecSettings" "Amazonka.MediaConvert.Types.AudioCodecSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "AudioCodecSettings'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "aiffSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AiffSettings)) :*: S1 ('MetaSel ('Just "codec") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AudioCodec))) :*: (S1 ('MetaSel ('Just "ac3Settings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Ac3Settings)) :*: (S1 ('MetaSel ('Just "opusSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe OpusSettings)) :*: S1 ('MetaSel ('Just "mp2Settings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mp2Settings))))) :*: ((S1 ('MetaSel ('Just "wavSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe WavSettings)) :*: (S1 ('MetaSel ('Just "eac3AtmosSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3AtmosSettings)) :*: S1 ('MetaSel ('Just "mp3Settings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mp3Settings)))) :*: (S1 ('MetaSel ('Just "vorbisSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe VorbisSettings)) :*: (S1 ('MetaSel ('Just "aacSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AacSettings)) :*: S1 ('MetaSel ('Just "eac3Settings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3Settings)))))))

newAudioCodecSettings :: AudioCodecSettings Source #

Create a value of AudioCodecSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:aiffSettings:AudioCodecSettings', audioCodecSettings_aiffSettings - Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value AIFF.

$sel:codec:AudioCodecSettings', audioCodecSettings_codec - Choose the audio codec for this output. Note that the option Dolby Digital passthrough (PASSTHROUGH) applies only to Dolby Digital and Dolby Digital Plus audio inputs. Make sure that you choose a codec that's supported with your output container: https://docs.aws.amazon.com/mediaconvert/latest/ug/reference-codecs-containers.html#reference-codecs-containers-output-audio For audio-only outputs, make sure that both your input audio codec and your output audio codec are supported for audio-only workflows. For more information, see: https://docs.aws.amazon.com/mediaconvert/latest/ug/reference-codecs-containers-input.html#reference-codecs-containers-input-audio-only and https://docs.aws.amazon.com/mediaconvert/latest/ug/reference-codecs-containers.html#audio-only-output

$sel:ac3Settings:AudioCodecSettings', audioCodecSettings_ac3Settings - Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value AC3.

$sel:opusSettings:AudioCodecSettings', audioCodecSettings_opusSettings - Required when you set Codec, under AudioDescriptions>CodecSettings, to the value OPUS.

$sel:mp2Settings:AudioCodecSettings', audioCodecSettings_mp2Settings - Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value MP2.

$sel:wavSettings:AudioCodecSettings', audioCodecSettings_wavSettings - Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value WAV.

$sel:eac3AtmosSettings:AudioCodecSettings', audioCodecSettings_eac3AtmosSettings - Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value EAC3_ATMOS.

$sel:mp3Settings:AudioCodecSettings', audioCodecSettings_mp3Settings - Required when you set Codec, under AudioDescriptions>CodecSettings, to the value MP3.

$sel:vorbisSettings:AudioCodecSettings', audioCodecSettings_vorbisSettings - Required when you set Codec, under AudioDescriptions>CodecSettings, to the value Vorbis.

$sel:aacSettings:AudioCodecSettings', audioCodecSettings_aacSettings - Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value AAC. The service accepts one of two mutually exclusive groups of AAC settings--VBR and CBR. To select one of these modes, set the value of Bitrate control mode (rateControlMode) to "VBR" or "CBR". In VBR mode, you control the audio quality with the setting VBR quality (vbrQuality). In CBR mode, you use the setting Bitrate (bitrate). Defaults and valid values depend on the rate control mode.

$sel:eac3Settings:AudioCodecSettings', audioCodecSettings_eac3Settings - Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value EAC3.

audioCodecSettings_aiffSettings :: Lens' AudioCodecSettings (Maybe AiffSettings) Source #

Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value AIFF.

audioCodecSettings_codec :: Lens' AudioCodecSettings (Maybe AudioCodec) Source #

Choose the audio codec for this output. Note that the option Dolby Digital passthrough (PASSTHROUGH) applies only to Dolby Digital and Dolby Digital Plus audio inputs. Make sure that you choose a codec that's supported with your output container: https://docs.aws.amazon.com/mediaconvert/latest/ug/reference-codecs-containers.html#reference-codecs-containers-output-audio For audio-only outputs, make sure that both your input audio codec and your output audio codec are supported for audio-only workflows. For more information, see: https://docs.aws.amazon.com/mediaconvert/latest/ug/reference-codecs-containers-input.html#reference-codecs-containers-input-audio-only and https://docs.aws.amazon.com/mediaconvert/latest/ug/reference-codecs-containers.html#audio-only-output

audioCodecSettings_ac3Settings :: Lens' AudioCodecSettings (Maybe Ac3Settings) Source #

Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value AC3.

audioCodecSettings_opusSettings :: Lens' AudioCodecSettings (Maybe OpusSettings) Source #

Required when you set Codec, under AudioDescriptions>CodecSettings, to the value OPUS.

audioCodecSettings_mp2Settings :: Lens' AudioCodecSettings (Maybe Mp2Settings) Source #

Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value MP2.

audioCodecSettings_wavSettings :: Lens' AudioCodecSettings (Maybe WavSettings) Source #

Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value WAV.

audioCodecSettings_eac3AtmosSettings :: Lens' AudioCodecSettings (Maybe Eac3AtmosSettings) Source #

Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value EAC3_ATMOS.

audioCodecSettings_mp3Settings :: Lens' AudioCodecSettings (Maybe Mp3Settings) Source #

Required when you set Codec, under AudioDescriptions>CodecSettings, to the value MP3.

audioCodecSettings_vorbisSettings :: Lens' AudioCodecSettings (Maybe VorbisSettings) Source #

Required when you set Codec, under AudioDescriptions>CodecSettings, to the value Vorbis.

audioCodecSettings_aacSettings :: Lens' AudioCodecSettings (Maybe AacSettings) Source #

Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value AAC. The service accepts one of two mutually exclusive groups of AAC settings--VBR and CBR. To select one of these modes, set the value of Bitrate control mode (rateControlMode) to "VBR" or "CBR". In VBR mode, you control the audio quality with the setting VBR quality (vbrQuality). In CBR mode, you use the setting Bitrate (bitrate). Defaults and valid values depend on the rate control mode.

audioCodecSettings_eac3Settings :: Lens' AudioCodecSettings (Maybe Eac3Settings) Source #

Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value EAC3.
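
A minimal sketch of how this constructor and these lenses compose, assuming the (&) and (?~) operators from Control.Lens, and assuming that newAacSettings and aacSettings_bitrate (documented elsewhere in this module) follow the same conventions as the definitions above:

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Codec settings for an AAC output at 96 kb/s; every field left untouched
-- keeps its omitted (Nothing) default. newAacSettings and
-- aacSettings_bitrate are assumptions; see their entries in this module.
aacCodecSettings :: AudioCodecSettings
aacCodecSettings =
  newAudioCodecSettings
    & audioCodecSettings_aacSettings
        ?~ (newAacSettings & aacSettings_bitrate ?~ 96000)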

AudioDescription

data AudioDescription Source #

Settings related to one audio tab on the MediaConvert console. In your job JSON, an instance of AudioDescription is equivalent to one audio tab in the console. Usually, one audio tab corresponds to one output audio track. Depending on how you set up your input audio selectors and whether you use audio selector groups, one audio tab can correspond to a group of output audio tracks.

See: newAudioDescription smart constructor.

Constructors

AudioDescription' 

Fields

  • audioSourceName :: Maybe Text

    Specifies which audio data to use from each input. In the simplest case, specify an "Audio Selector" by name based on its order within each input. For example, if you specify "Audio Selector 3", then the third audio selector will be used from each input. If an input does not have an "Audio Selector 3", then the audio selector marked as "default" in that input will be used. If there is no audio selector marked as "default", silence will be inserted for the duration of that input. Alternatively, an "Audio Selector Group" name may be specified, with similar default/silence behavior. If no audio_source_name is specified, then "Audio Selector 1" will be chosen automatically.

  • customLanguageCode :: Maybe Text

    Specify the language for this audio output track. The service puts this language code into your output audio track when you set Language code control (AudioLanguageCodeControl) to Use configured (USE_CONFIGURED). The service also uses your specified custom language code when you set Language code control (AudioLanguageCodeControl) to Follow input (FOLLOW_INPUT), but your input file doesn't specify a language code. For all outputs, you can use an ISO 639-2 or ISO 639-3 code. For streaming outputs, you can also use any other code in the full RFC-5646 specification. Streaming outputs are those that are in one of the following output groups: CMAF, DASH ISO, Apple HLS, or Microsoft Smooth Streaming.

  • languageCode :: Maybe LanguageCode

    Indicates the language of the audio output track. The ISO 639 language specified in the 'Language Code' drop down will be used when 'Follow Input Language Code' is not selected or when 'Follow Input Language Code' is selected but there is no ISO 639 language code specified by the input.

  • audioChannelTaggingSettings :: Maybe AudioChannelTaggingSettings

    When you mimic a multi-channel audio layout with multiple mono-channel tracks, you can tag each channel layout manually. For example, you would tag the tracks that contain your left, right, and center audio with Left (L), Right (R), and Center (C), respectively. When you don't specify a value, MediaConvert labels your track as Center (C) by default. To use audio layout tagging, your output must be in a QuickTime (.mov) container; your audio codec must be AAC, WAV, or AIFF; and you must set up your audio track to have only one channel.

  • audioType :: Maybe Natural

    Applies only if Follow Input Audio Type is unchecked (false). A number between 0 and 255. The following are defined in ISO/IEC 13818-1: 0 = Undefined, 1 = Clean Effects, 2 = Hearing Impaired, 3 = Visually Impaired Commentary, 4-255 = Reserved.

  • audioNormalizationSettings :: Maybe AudioNormalizationSettings

    Advanced audio normalization settings. Ignore these settings unless you need to comply with a loudness standard.

  • languageCodeControl :: Maybe AudioLanguageCodeControl

    Specify which source for language code takes precedence for this audio track. When you choose Follow input (FOLLOW_INPUT), the service uses the language code from the input track if it's present. If there's no language code on the input track, the service uses the code that you specify in the setting Language code (languageCode or customLanguageCode). When you choose Use configured (USE_CONFIGURED), the service uses the language code that you specify.

  • codecSettings :: Maybe AudioCodecSettings

    Settings related to audio encoding. The settings in this group vary depending on the value that you choose for your audio codec.

  • streamName :: Maybe Text

    Specify a label for this output audio stream. For example, "English", "Director commentary", or "track_2". For streaming outputs, MediaConvert passes this information into destination manifests for display on the end-viewer's player device. For outputs in other output groups, the service ignores this setting.

  • remixSettings :: Maybe RemixSettings

    Advanced audio remixing settings.

  • audioTypeControl :: Maybe AudioTypeControl

    When set to FOLLOW_INPUT, if the input contains an ISO 639 audio_type, then that value is passed through to the output. If the input contains no ISO 639 audio_type, the value in Audio Type is included in the output instead. When set to USE_CONFIGURED, the value in Audio Type is always included in the output. Note that this field and audioType are both ignored if audioDescriptionBroadcasterMix is set to BROADCASTER_MIXED_AD.

Instances

Instances details
Eq AudioDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDescription

Read AudioDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDescription

Show AudioDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDescription

Generic AudioDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDescription

Associated Types

type Rep AudioDescription :: Type -> Type #

NFData AudioDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDescription

Methods

rnf :: AudioDescription -> () #

Hashable AudioDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDescription

ToJSON AudioDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDescription

FromJSON AudioDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDescription

type Rep AudioDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioDescription

type Rep AudioDescription = D1 ('MetaData "AudioDescription" "Amazonka.MediaConvert.Types.AudioDescription" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "AudioDescription'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "audioSourceName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "customLanguageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "languageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe LanguageCode)) :*: (S1 ('MetaSel ('Just "audioChannelTaggingSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AudioChannelTaggingSettings)) :*: S1 ('MetaSel ('Just "audioType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))) :*: ((S1 ('MetaSel ('Just "audioNormalizationSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AudioNormalizationSettings)) :*: (S1 ('MetaSel ('Just "languageCodeControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AudioLanguageCodeControl)) :*: S1 ('MetaSel ('Just "codecSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AudioCodecSettings)))) :*: (S1 ('MetaSel ('Just "streamName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "remixSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe RemixSettings)) :*: S1 ('MetaSel ('Just "audioTypeControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AudioTypeControl)))))))

newAudioDescription :: AudioDescription Source #

Create a value of AudioDescription with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:audioSourceName:AudioDescription', audioDescription_audioSourceName - Specifies which audio data to use from each input. In the simplest case, specify an "Audio Selector" by name based on its order within each input. For example, if you specify "Audio Selector 3", then the third audio selector will be used from each input. If an input does not have an "Audio Selector 3", then the audio selector marked as "default" in that input will be used. If there is no audio selector marked as "default", silence will be inserted for the duration of that input. Alternatively, an "Audio Selector Group" name may be specified, with similar default/silence behavior. If no audio_source_name is specified, then "Audio Selector 1" will be chosen automatically.

$sel:customLanguageCode:AudioDescription', audioDescription_customLanguageCode - Specify the language for this audio output track. The service puts this language code into your output audio track when you set Language code control (AudioLanguageCodeControl) to Use configured (USE_CONFIGURED). The service also uses your specified custom language code when you set Language code control (AudioLanguageCodeControl) to Follow input (FOLLOW_INPUT), but your input file doesn't specify a language code. For all outputs, you can use an ISO 639-2 or ISO 639-3 code. For streaming outputs, you can also use any other code in the full RFC-5646 specification. Streaming outputs are those that are in one of the following output groups: CMAF, DASH ISO, Apple HLS, or Microsoft Smooth Streaming.

$sel:languageCode:AudioDescription', audioDescription_languageCode - Indicates the language of the audio output track. The ISO 639 language specified in the 'Language Code' drop down will be used when 'Follow Input Language Code' is not selected or when 'Follow Input Language Code' is selected but there is no ISO 639 language code specified by the input.

$sel:audioChannelTaggingSettings:AudioDescription', audioDescription_audioChannelTaggingSettings - When you mimic a multi-channel audio layout with multiple mono-channel tracks, you can tag each channel layout manually. For example, you would tag the tracks that contain your left, right, and center audio with Left (L), Right (R), and Center (C), respectively. When you don't specify a value, MediaConvert labels your track as Center (C) by default. To use audio layout tagging, your output must be in a QuickTime (.mov) container; your audio codec must be AAC, WAV, or AIFF; and you must set up your audio track to have only one channel.

$sel:audioType:AudioDescription', audioDescription_audioType - Applies only if Follow Input Audio Type is unchecked (false). A number between 0 and 255. The following are defined in ISO/IEC 13818-1: 0 = Undefined, 1 = Clean Effects, 2 = Hearing Impaired, 3 = Visually Impaired Commentary, 4-255 = Reserved.

$sel:audioNormalizationSettings:AudioDescription', audioDescription_audioNormalizationSettings - Advanced audio normalization settings. Ignore these settings unless you need to comply with a loudness standard.

$sel:languageCodeControl:AudioDescription', audioDescription_languageCodeControl - Specify which source for language code takes precedence for this audio track. When you choose Follow input (FOLLOW_INPUT), the service uses the language code from the input track if it's present. If there's no language code on the input track, the service uses the code that you specify in the setting Language code (languageCode or customLanguageCode). When you choose Use configured (USE_CONFIGURED), the service uses the language code that you specify.

$sel:codecSettings:AudioDescription', audioDescription_codecSettings - Settings related to audio encoding. The settings in this group vary depending on the value that you choose for your audio codec.

$sel:streamName:AudioDescription', audioDescription_streamName - Specify a label for this output audio stream. For example, "English", "Director commentary", or "track_2". For streaming outputs, MediaConvert passes this information into destination manifests for display on the end-viewer's player device. For outputs in other output groups, the service ignores this setting.

$sel:remixSettings:AudioDescription', audioDescription_remixSettings - Advanced audio remixing settings.

$sel:audioTypeControl:AudioDescription', audioDescription_audioTypeControl - When set to FOLLOW_INPUT, if the input contains an ISO 639 audio_type, then that value is passed through to the output. If the input contains no ISO 639 audio_type, the value in Audio Type is included in the output instead. When set to USE_CONFIGURED, the value in Audio Type is always included in the output. Note that this field and audioType are both ignored if audioDescriptionBroadcasterMix is set to BROADCASTER_MIXED_AD.

audioDescription_audioSourceName :: Lens' AudioDescription (Maybe Text) Source #

Specifies which audio data to use from each input. In the simplest case, specify an "Audio Selector" by name based on its order within each input. For example, if you specify "Audio Selector 3", then the third audio selector will be used from each input. If an input does not have an "Audio Selector 3", then the audio selector marked as "default" in that input will be used. If there is no audio selector marked as "default", silence will be inserted for the duration of that input. Alternatively, an "Audio Selector Group" name may be specified, with similar default/silence behavior. If no audio_source_name is specified, then "Audio Selector 1" will be chosen automatically.

audioDescription_customLanguageCode :: Lens' AudioDescription (Maybe Text) Source #

Specify the language for this audio output track. The service puts this language code into your output audio track when you set Language code control (AudioLanguageCodeControl) to Use configured (USE_CONFIGURED). The service also uses your specified custom language code when you set Language code control (AudioLanguageCodeControl) to Follow input (FOLLOW_INPUT), but your input file doesn't specify a language code. For all outputs, you can use an ISO 639-2 or ISO 639-3 code. For streaming outputs, you can also use any other code in the full RFC-5646 specification. Streaming outputs are those that are in one of the following output groups: CMAF, DASH ISO, Apple HLS, or Microsoft Smooth Streaming.

audioDescription_languageCode :: Lens' AudioDescription (Maybe LanguageCode) Source #

Indicates the language of the audio output track. The ISO 639 language specified in the 'Language Code' drop down will be used when 'Follow Input Language Code' is not selected or when 'Follow Input Language Code' is selected but there is no ISO 639 language code specified by the input.

audioDescription_audioChannelTaggingSettings :: Lens' AudioDescription (Maybe AudioChannelTaggingSettings) Source #

When you mimic a multi-channel audio layout with multiple mono-channel tracks, you can tag each channel layout manually. For example, you would tag the tracks that contain your left, right, and center audio with Left (L), Right (R), and Center (C), respectively. When you don't specify a value, MediaConvert labels your track as Center (C) by default. To use audio layout tagging, your output must be in a QuickTime (.mov) container; your audio codec must be AAC, WAV, or AIFF; and you must set up your audio track to have only one channel.

audioDescription_audioType :: Lens' AudioDescription (Maybe Natural) Source #

Applies only if Follow Input Audio Type is unchecked (false). A number between 0 and 255. The following are defined in ISO/IEC 13818-1: 0 = Undefined, 1 = Clean Effects, 2 = Hearing Impaired, 3 = Visually Impaired Commentary, 4-255 = Reserved.

audioDescription_audioNormalizationSettings :: Lens' AudioDescription (Maybe AudioNormalizationSettings) Source #

Advanced audio normalization settings. Ignore these settings unless you need to comply with a loudness standard.

audioDescription_languageCodeControl :: Lens' AudioDescription (Maybe AudioLanguageCodeControl) Source #

Specify which source for language code takes precedence for this audio track. When you choose Follow input (FOLLOW_INPUT), the service uses the language code from the input track if it's present. If there's no language code on the input track, the service uses the code that you specify in the setting Language code (languageCode or customLanguageCode). When you choose Use configured (USE_CONFIGURED), the service uses the language code that you specify.

audioDescription_codecSettings :: Lens' AudioDescription (Maybe AudioCodecSettings) Source #

Settings related to audio encoding. The settings in this group vary depending on the value that you choose for your audio codec.

audioDescription_streamName :: Lens' AudioDescription (Maybe Text) Source #

Specify a label for this output audio stream. For example, "English", "Director commentary", or "track_2". For streaming outputs, MediaConvert passes this information into destination manifests for display on the end-viewer's player device. For outputs in other output groups, the service ignores this setting.

audioDescription_audioTypeControl :: Lens' AudioDescription (Maybe AudioTypeControl) Source #

When set to FOLLOW_INPUT, if the input contains an ISO 639 audio_type, then that value is passed through to the output. If the input contains no ISO 639 audio_type, the value in Audio Type is included in the output instead. When set to USE_CONFIGURED, the value in Audio Type is always included in the output. Note that this field and audioType are both ignored if audioDescriptionBroadcasterMix is set to BROADCASTER_MIXED_AD.
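
A minimal sketch of a single output audio track built from this constructor and its lenses, assuming OverloadedStrings for the Text-valued fields and the (&) and (?~) operators from Control.Lens:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- One audio "tab": read from "Audio Selector 1", label the stream, and
-- attach codec settings built separately; all other fields stay Nothing.
englishTrack :: AudioCodecSettings -> AudioDescription
englishTrack codec =
  newAudioDescription
    & audioDescription_audioSourceName ?~ "Audio Selector 1"
    & audioDescription_streamName ?~ "English"
    & audioDescription_codecSettings ?~ codec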

AudioNormalizationSettings

data AudioNormalizationSettings Source #

Advanced audio normalization settings. Ignore these settings unless you need to comply with a loudness standard.

See: newAudioNormalizationSettings smart constructor.

Constructors

AudioNormalizationSettings' 

Fields

  • algorithmControl :: Maybe AudioNormalizationAlgorithmControl

    When enabled, the output audio is corrected using the chosen algorithm. If disabled, the audio will be measured but not adjusted.

  • targetLkfs :: Maybe Double

    When you use Audio normalization (AudioNormalizationSettings), optionally use this setting to specify a target loudness. If you don't specify a value here, the encoder chooses a value for you, based on the algorithm that you choose for Algorithm (algorithm). If you choose algorithm 1770-1, the encoder will choose -24 LKFS; otherwise, the encoder will choose -23 LKFS.

  • peakCalculation :: Maybe AudioNormalizationPeakCalculation

    If set to TRUE_PEAK, calculate and log the TruePeak for each output's audio track loudness.

  • correctionGateLevel :: Maybe Int

    Content measuring above this level will be corrected to the target level. Content measuring below this level will not be corrected.

  • algorithm :: Maybe AudioNormalizationAlgorithm

    Choose one of the following audio normalization algorithms: ITU-R BS.1770-1: Ungated loudness. A measurement of ungated average loudness for an entire piece of content, suitable for measurement of short-form content under ATSC recommendation A/85. Supports up to 5.1 audio channels. ITU-R BS.1770-2: Gated loudness. A measurement of gated average loudness compliant with the requirements of EBU-R128. Supports up to 5.1 audio channels. ITU-R BS.1770-3: Modified peak. The same loudness measurement algorithm as 1770-2, with an updated true peak measurement. ITU-R BS.1770-4: Higher channel count. Allows for more audio channels than the other algorithms, including configurations such as 7.1.

  • loudnessLogging :: Maybe AudioNormalizationLoudnessLogging

    If set to LOG, log each output's audio track loudness to a CSV file.

Instances

Instances details
Eq AudioNormalizationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationSettings

Read AudioNormalizationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationSettings

Show AudioNormalizationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationSettings

Generic AudioNormalizationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationSettings

Associated Types

type Rep AudioNormalizationSettings :: Type -> Type #

NFData AudioNormalizationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationSettings

Hashable AudioNormalizationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationSettings

ToJSON AudioNormalizationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationSettings

FromJSON AudioNormalizationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationSettings

type Rep AudioNormalizationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioNormalizationSettings

type Rep AudioNormalizationSettings = D1 ('MetaData "AudioNormalizationSettings" "Amazonka.MediaConvert.Types.AudioNormalizationSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "AudioNormalizationSettings'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "algorithmControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AudioNormalizationAlgorithmControl)) :*: (S1 ('MetaSel ('Just "targetLkfs") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: S1 ('MetaSel ('Just "peakCalculation") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AudioNormalizationPeakCalculation)))) :*: (S1 ('MetaSel ('Just "correctionGateLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)) :*: (S1 ('MetaSel ('Just "algorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AudioNormalizationAlgorithm)) :*: S1 ('MetaSel ('Just "loudnessLogging") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AudioNormalizationLoudnessLogging))))))

newAudioNormalizationSettings :: AudioNormalizationSettings Source #

Create a value of AudioNormalizationSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:algorithmControl:AudioNormalizationSettings', audioNormalizationSettings_algorithmControl - When enabled, the output audio is corrected using the chosen algorithm. If disabled, the audio will be measured but not adjusted.

$sel:targetLkfs:AudioNormalizationSettings', audioNormalizationSettings_targetLkfs - When you use Audio normalization (AudioNormalizationSettings), optionally use this setting to specify a target loudness. If you don't specify a value here, the encoder chooses a value for you, based on the algorithm that you choose for Algorithm (algorithm). If you choose algorithm 1770-1, the encoder will choose -24 LKFS; otherwise, the encoder will choose -23 LKFS.

$sel:peakCalculation:AudioNormalizationSettings', audioNormalizationSettings_peakCalculation - If set to TRUE_PEAK, calculate and log the TruePeak for each output's audio track loudness.

$sel:correctionGateLevel:AudioNormalizationSettings', audioNormalizationSettings_correctionGateLevel - Content measuring above this level will be corrected to the target level. Content measuring below this level will not be corrected.

$sel:algorithm:AudioNormalizationSettings', audioNormalizationSettings_algorithm - Choose one of the following audio normalization algorithms: ITU-R BS.1770-1: Ungated loudness. A measurement of ungated average loudness for an entire piece of content, suitable for measurement of short-form content under ATSC recommendation A/85. Supports up to 5.1 audio channels. ITU-R BS.1770-2: Gated loudness. A measurement of gated average loudness compliant with the requirements of EBU-R128. Supports up to 5.1 audio channels. ITU-R BS.1770-3: Modified peak. The same loudness measurement algorithm as 1770-2, with an updated true peak measurement. ITU-R BS.1770-4: Higher channel count. Allows for more audio channels than the other algorithms, including configurations such as 7.1.

$sel:loudnessLogging:AudioNormalizationSettings', audioNormalizationSettings_loudnessLogging - If set to LOG, log each output's audio track loudness to a CSV file.

audioNormalizationSettings_algorithmControl :: Lens' AudioNormalizationSettings (Maybe AudioNormalizationAlgorithmControl) Source #

When enabled, the output audio is corrected using the chosen algorithm. If disabled, the audio will be measured but not adjusted.

audioNormalizationSettings_targetLkfs :: Lens' AudioNormalizationSettings (Maybe Double) Source #

When you use Audio normalization (AudioNormalizationSettings), optionally use this setting to specify a target loudness. If you don't specify a value here, the encoder chooses a value for you, based on the algorithm that you choose for Algorithm (algorithm). If you choose algorithm 1770-1, the encoder will choose -24 LKFS; otherwise, the encoder will choose -23 LKFS.

audioNormalizationSettings_peakCalculation :: Lens' AudioNormalizationSettings (Maybe AudioNormalizationPeakCalculation) Source #

If set to TRUE_PEAK, calculate and log the TruePeak for each output's audio track loudness.

audioNormalizationSettings_correctionGateLevel :: Lens' AudioNormalizationSettings (Maybe Int) Source #

Content measuring above this level will be corrected to the target level. Content measuring below this level will not be corrected.

audioNormalizationSettings_algorithm :: Lens' AudioNormalizationSettings (Maybe AudioNormalizationAlgorithm) Source #

Choose one of the following audio normalization algorithms: ITU-R BS.1770-1: Ungated loudness. A measurement of ungated average loudness for an entire piece of content, suitable for measurement of short-form content under ATSC recommendation A/85. Supports up to 5.1 audio channels. ITU-R BS.1770-2: Gated loudness. A measurement of gated average loudness compliant with the requirements of EBU-R128. Supports up to 5.1 audio channels. ITU-R BS.1770-3: Modified peak. The same loudness measurement algorithm as 1770-2, with an updated true peak measurement. ITU-R BS.1770-4: Higher channel count. Allows for more audio channels than the other algorithms, including configurations such as 7.1.

audioNormalizationSettings_loudnessLogging :: Lens' AudioNormalizationSettings (Maybe AudioNormalizationLoudnessLogging) Source #

If set to LOG, log each output's audio track loudness to a CSV file.
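
A minimal sketch that targets -23 LKFS and sets the correction gate, assuming the (&) and (?~) operators from Control.Lens; the enum-valued fields (algorithm, algorithmControl, peakCalculation, loudnessLogging) are left at their omitted defaults here:

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Normalize toward -23 LKFS and correct content measuring above -70.
ebuLoudness :: AudioNormalizationSettings
ebuLoudness =
  newAudioNormalizationSettings
    & audioNormalizationSettings_targetLkfs ?~ (-23)
    & audioNormalizationSettings_correctionGateLevel ?~ (-70)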

AudioSelector

data AudioSelector Source #

Use Audio selectors (AudioSelectors) to specify a track or set of tracks from the input that you will use in your outputs. You can use multiple Audio selectors per input.

See: newAudioSelector smart constructor.

Constructors

AudioSelector' 

Fields

  • tracks :: Maybe [Natural]

    Identify a track from the input audio to include in this selector by entering the track index number. To include several tracks in a single audio selector, specify multiple tracks as follows. Using the console, enter a comma-separated list. For example, type "1,2,3" to include tracks 1 through 3. When specifying directly in your JSON job file, provide the track numbers in an array. For example, "tracks": [1,2,3].

  • customLanguageCode :: Maybe Text

    Selects a specific language code from within an audio source, using the ISO 639-2 or ISO 639-3 three-letter language code

  • programSelection :: Maybe Natural

    Use this setting for input streams that contain Dolby E, to have the service extract specific program data from the track. To select multiple programs, create multiple selectors with the same Track and different Program numbers. In the console, this setting is visible when you set Selector type to Track. Choose the program number from the dropdown list. If you are sending a JSON file, provide the program ID, which is part of the audio metadata. If your input file has incorrect metadata, you can choose All channels instead of a program number to have the service ignore the program IDs and include all the programs in the track.

  • languageCode :: Maybe LanguageCode

    Selects a specific language code from within an audio source.

  • offset :: Maybe Int

    Specifies a time delta in milliseconds to offset the audio from the input video.

  • defaultSelection :: Maybe AudioDefaultSelection

    Enable this setting on one audio selector to set it as the default for the job. The service uses this default for outputs where it can't find the specified input audio. If you don't set a default, those outputs have no audio.

  • pids :: Maybe [Natural]

    Selects a specific PID from within an audio source (e.g. 257 selects PID 0x101).

  • hlsRenditionGroupSettings :: Maybe HlsRenditionGroupSettings

    Settings specific to audio sources in an HLS alternate rendition group. Specify the properties (renditionGroupId, renditionName or renditionLanguageCode) to identify the unique audio track among the alternative rendition groups present in the HLS manifest. If no unique track is found, or multiple tracks match the properties provided, the job fails. If no properties in hlsRenditionGroupSettings are specified, the default audio track within the video segment is chosen. If there is no audio within the video segment, the alternative audio with DEFAULT=YES is chosen instead.

  • selectorType :: Maybe AudioSelectorType

    Specifies the type of the audio selector.

  • externalAudioFileInput :: Maybe Text

    Specifies audio data from an external file source.

  • remixSettings :: Maybe RemixSettings

    Use these settings to reorder the audio channels of one input to match those of another input. This allows you to combine the two files into a single output, one after the other.

Instances

Instances details
Eq AudioSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelector

Read AudioSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelector

Show AudioSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelector

Generic AudioSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelector

Associated Types

type Rep AudioSelector :: Type -> Type #

NFData AudioSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelector

Methods

rnf :: AudioSelector -> () #

Hashable AudioSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelector

ToJSON AudioSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelector

FromJSON AudioSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelector

type Rep AudioSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelector

type Rep AudioSelector = D1 ('MetaData "AudioSelector" "Amazonka.MediaConvert.Types.AudioSelector" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "AudioSelector'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "tracks") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [Natural])) :*: S1 ('MetaSel ('Just "customLanguageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "programSelection") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "languageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe LanguageCode)) :*: S1 ('MetaSel ('Just "offset") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int))))) :*: ((S1 ('MetaSel ('Just "defaultSelection") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AudioDefaultSelection)) :*: (S1 ('MetaSel ('Just "pids") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [Natural])) :*: S1 ('MetaSel ('Just "hlsRenditionGroupSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsRenditionGroupSettings)))) :*: (S1 ('MetaSel ('Just "selectorType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AudioSelectorType)) :*: (S1 ('MetaSel ('Just "externalAudioFileInput") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "remixSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe RemixSettings)))))))

newAudioSelector :: AudioSelector Source #

Create a value of AudioSelector with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:tracks:AudioSelector', audioSelector_tracks - Identify a track from the input audio to include in this selector by entering the track index number. To include several tracks in a single audio selector, specify multiple tracks as follows. Using the console, enter a comma-separated list. For example, type "1,2,3" to include tracks 1 through 3. When specifying directly in your JSON job file, provide the track numbers in an array. For example, "tracks": [1,2,3].

$sel:customLanguageCode:AudioSelector', audioSelector_customLanguageCode - Selects a specific language code from within an audio source, using the ISO 639-2 or ISO 639-3 three-letter language code

$sel:programSelection:AudioSelector', audioSelector_programSelection - Use this setting for input streams that contain Dolby E, to have the service extract specific program data from the track. To select multiple programs, create multiple selectors with the same Track and different Program numbers. In the console, this setting is visible when you set Selector type to Track. Choose the program number from the dropdown list. If you are sending a JSON file, provide the program ID, which is part of the audio metadata. If your input file has incorrect metadata, you can choose All channels instead of a program number to have the service ignore the program IDs and include all the programs in the track.

$sel:languageCode:AudioSelector', audioSelector_languageCode - Selects a specific language code from within an audio source.

$sel:offset:AudioSelector', audioSelector_offset - Specifies a time delta in milliseconds to offset the audio from the input video.

$sel:defaultSelection:AudioSelector', audioSelector_defaultSelection - Enable this setting on one audio selector to set it as the default for the job. The service uses this default for outputs where it can't find the specified input audio. If you don't set a default, those outputs have no audio.

$sel:pids:AudioSelector', audioSelector_pids - Selects a specific PID from within an audio source (e.g. 257 selects PID 0x101).

$sel:hlsRenditionGroupSettings:AudioSelector', audioSelector_hlsRenditionGroupSettings - Settings specific to audio sources in an HLS alternate rendition group. Specify the properties (renditionGroupId, renditionName or renditionLanguageCode) to identify the unique audio track among the alternative rendition groups present in the HLS manifest. If no unique track is found, or multiple tracks match the properties provided, the job fails. If no properties in hlsRenditionGroupSettings are specified, the default audio track within the video segment is chosen. If there is no audio within the video segment, the alternative audio with DEFAULT=YES is chosen instead.

$sel:selectorType:AudioSelector', audioSelector_selectorType - Specifies the type of the audio selector.

$sel:externalAudioFileInput:AudioSelector', audioSelector_externalAudioFileInput - Specifies audio data from an external file source.

$sel:remixSettings:AudioSelector', audioSelector_remixSettings - Use these settings to reorder the audio channels of one input to match those of another input. This allows you to combine the two files into a single output, one after the other.

audioSelector_tracks :: Lens' AudioSelector (Maybe [Natural]) Source #

Identify a track from the input audio to include in this selector by entering the track index number. To include several tracks in a single audio selector, specify multiple tracks as follows. Using the console, enter a comma-separated list. For example, type "1,2,3" to include tracks 1 through 3. When specifying directly in your JSON job file, provide the track numbers in an array. For example, "tracks": [1,2,3].

audioSelector_customLanguageCode :: Lens' AudioSelector (Maybe Text) Source #

Selects a specific language code from within an audio source, using the ISO 639-2 or ISO 639-3 three-letter language code

audioSelector_programSelection :: Lens' AudioSelector (Maybe Natural) Source #

Use this setting for input streams that contain Dolby E, to have the service extract specific program data from the track. To select multiple programs, create multiple selectors with the same Track and different Program numbers. In the console, this setting is visible when you set Selector type to Track. Choose the program number from the dropdown list. If you are sending a JSON file, provide the program ID, which is part of the audio metadata. If your input file has incorrect metadata, you can choose All channels instead of a program number to have the service ignore the program IDs and include all the programs in the track.

audioSelector_languageCode :: Lens' AudioSelector (Maybe LanguageCode) Source #

Selects a specific language code from within an audio source.

audioSelector_offset :: Lens' AudioSelector (Maybe Int) Source #

Specifies a time delta in milliseconds to offset the audio from the input video.

audioSelector_defaultSelection :: Lens' AudioSelector (Maybe AudioDefaultSelection) Source #

Enable this setting on one audio selector to set it as the default for the job. The service uses this default for outputs where it can't find the specified input audio. If you don't set a default, those outputs have no audio.

audioSelector_pids :: Lens' AudioSelector (Maybe [Natural]) Source #

Selects a specific PID from within an audio source (e.g. 257 selects PID 0x101).

audioSelector_hlsRenditionGroupSettings :: Lens' AudioSelector (Maybe HlsRenditionGroupSettings) Source #

Settings specific to audio sources in an HLS alternate rendition group. Specify the properties (renditionGroupId, renditionName or renditionLanguageCode) to identify the unique audio track among the alternative rendition groups present in the HLS manifest. If no unique track is found, or multiple tracks match the properties provided, the job fails. If no properties in hlsRenditionGroupSettings are specified, the default audio track within the video segment is chosen. If there is no audio within the video segment, the alternative audio with DEFAULT=YES is chosen instead.

audioSelector_selectorType :: Lens' AudioSelector (Maybe AudioSelectorType) Source #

Specifies the type of the audio selector.

audioSelector_externalAudioFileInput :: Lens' AudioSelector (Maybe Text) Source #

Specifies audio data from an external file source.

audioSelector_remixSettings :: Lens' AudioSelector (Maybe RemixSettings) Source #

Use these settings to reorder the audio channels of one input to match those of another input. This allows you to combine the two files into a single output, one after the other.
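
A minimal sketch that combines input tracks 1 through 3 into one selector and shifts the audio by 100 ms, assuming the (&) and (?~) operators from Control.Lens; enum-valued fields such as selectorType and defaultSelection are left unset:

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Pull tracks 1-3 from each input and delay the audio by 100 ms
-- relative to the video.
multiTrackSelector :: AudioSelector
multiTrackSelector =
  newAudioSelector
    & audioSelector_tracks ?~ [1, 2, 3]
    & audioSelector_offset ?~ 100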

AudioSelectorGroup

data AudioSelectorGroup Source #

Use audio selector groups to combine multiple sidecar audio inputs so that you can assign them to a single output audio tab (AudioDescription). Note that, if you're working with embedded audio, it's simpler to assign multiple input tracks into a single audio selector rather than use an audio selector group.

See: newAudioSelectorGroup smart constructor.

Constructors

AudioSelectorGroup' 

Fields

  • audioSelectorNames :: Maybe [Text]

    Name of an Audio Selector within the same input to include in the group. Audio selector names are standardized, based on their order within the input (e.g., "Audio Selector 1"). The audio selector name parameter can be repeated to add any number of audio selectors to the group.

Instances

Instances details
Eq AudioSelectorGroup Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorGroup

Read AudioSelectorGroup Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorGroup

Show AudioSelectorGroup Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorGroup

Generic AudioSelectorGroup Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorGroup

Associated Types

type Rep AudioSelectorGroup :: Type -> Type #

NFData AudioSelectorGroup Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorGroup

Methods

rnf :: AudioSelectorGroup -> () #

Hashable AudioSelectorGroup Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorGroup

ToJSON AudioSelectorGroup Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorGroup

FromJSON AudioSelectorGroup Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorGroup

type Rep AudioSelectorGroup Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AudioSelectorGroup

type Rep AudioSelectorGroup = D1 ('MetaData "AudioSelectorGroup" "Amazonka.MediaConvert.Types.AudioSelectorGroup" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "AudioSelectorGroup'" 'PrefixI 'True) (S1 ('MetaSel ('Just "audioSelectorNames") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [Text]))))

newAudioSelectorGroup :: AudioSelectorGroup Source #

Create a value of AudioSelectorGroup with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:audioSelectorNames:AudioSelectorGroup', audioSelectorGroup_audioSelectorNames - Name of an Audio Selector within the same input to include in the group. Audio selector names are standardized, based on their order within the input (e.g., "Audio Selector 1"). The audio selector name parameter can be repeated to add any number of audio selectors to the group.

audioSelectorGroup_audioSelectorNames :: Lens' AudioSelectorGroup (Maybe [Text]) Source #

Name of an Audio Selector within the same input to include in the group. Audio selector names are standardized, based on their order within the input (e.g., "Audio Selector 1"). The audio selector name parameter can be repeated to add any number of audio selectors to the group.
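
A minimal sketch that groups two sidecar selectors so a single AudioDescription can reference them through its audioSourceName, assuming OverloadedStrings and the (&) and (?~) operators from Control.Lens:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Two standardized selector names grouped under one key; an
-- AudioDescription whose audioSourceName matches that key picks up both.
sidecarGroup :: AudioSelectorGroup
sidecarGroup =
  newAudioSelectorGroup
    & audioSelectorGroup_audioSelectorNames
        ?~ ["Audio Selector 2", "Audio Selector 3"]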

AutomatedAbrSettings

data AutomatedAbrSettings Source #

Use automated ABR to have MediaConvert set up the renditions in your ABR package for you automatically, based on characteristics of your input video. This feature optimizes video quality while minimizing the overall size of your ABR package.

See: newAutomatedAbrSettings smart constructor.

Constructors

AutomatedAbrSettings' 

Fields

  • maxRenditions :: Maybe Natural

    Optional. The maximum number of renditions that MediaConvert will create in your automated ABR stack. The number of renditions is determined automatically, based on analysis of each job, but will never exceed this limit. When you set this to Auto in the console, which is equivalent to excluding it from your JSON job specification, MediaConvert defaults to a limit of 15.

  • maxAbrBitrate :: Maybe Natural

    Optional. The maximum target bit rate used in your automated ABR stack. Use this value to set an upper limit on the bandwidth consumed by the highest-quality rendition. This is the rendition that is delivered to viewers with the fastest internet connections. If you don't specify a value, MediaConvert uses 8,000,000 (8 Mb/s) by default.

  • minAbrBitrate :: Maybe Natural

    Optional. The minimum target bitrate used in your automated ABR stack. Use this value to set a lower limit on the bitrate of video delivered to viewers with slow internet connections. If you don't specify a value, MediaConvert uses 600,000 (600 kb/s) by default.

Instances

Instances details
Eq AutomatedAbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AutomatedAbrSettings

Read AutomatedAbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AutomatedAbrSettings

Show AutomatedAbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AutomatedAbrSettings

Generic AutomatedAbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AutomatedAbrSettings

Associated Types

type Rep AutomatedAbrSettings :: Type -> Type #

NFData AutomatedAbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AutomatedAbrSettings

Methods

rnf :: AutomatedAbrSettings -> () #

Hashable AutomatedAbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AutomatedAbrSettings

ToJSON AutomatedAbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AutomatedAbrSettings

FromJSON AutomatedAbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AutomatedAbrSettings

type Rep AutomatedAbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AutomatedAbrSettings

type Rep AutomatedAbrSettings = D1 ('MetaData "AutomatedAbrSettings" "Amazonka.MediaConvert.Types.AutomatedAbrSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "AutomatedAbrSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "maxRenditions") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "maxAbrBitrate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "minAbrBitrate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))

newAutomatedAbrSettings :: AutomatedAbrSettings Source #

Create a value of AutomatedAbrSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:maxRenditions:AutomatedAbrSettings', automatedAbrSettings_maxRenditions - Optional. The maximum number of renditions that MediaConvert will create in your automated ABR stack. The number of renditions is determined automatically, based on analysis of each job, but will never exceed this limit. When you set this to Auto in the console, which is equivalent to excluding it from your JSON job specification, MediaConvert defaults to a limit of 15.

$sel:maxAbrBitrate:AutomatedAbrSettings', automatedAbrSettings_maxAbrBitrate - Optional. The maximum target bit rate used in your automated ABR stack. Use this value to set an upper limit on the bandwidth consumed by the highest-quality rendition. This is the rendition that is delivered to viewers with the fastest internet connections. If you don't specify a value, MediaConvert uses 8,000,000 (8 Mb/s) by default.

$sel:minAbrBitrate:AutomatedAbrSettings', automatedAbrSettings_minAbrBitrate - Optional. The minimum target bitrate used in your automated ABR stack. Use this value to set a lower limit on the bitrate of video delivered to viewers with slow internet connections. If you don't specify a value, MediaConvert uses 600,000 (600 kb/s) by default.

automatedAbrSettings_maxRenditions :: Lens' AutomatedAbrSettings (Maybe Natural) Source #

Optional. The maximum number of renditions that MediaConvert will create in your automated ABR stack. The number of renditions is determined automatically, based on analysis of each job, but will never exceed this limit. When you set this to Auto in the console, which is equivalent to excluding it from your JSON job specification, MediaConvert defaults to a limit of 15.

automatedAbrSettings_maxAbrBitrate :: Lens' AutomatedAbrSettings (Maybe Natural) Source #

Optional. The maximum target bit rate used in your automated ABR stack. Use this value to set an upper limit on the bandwidth consumed by the highest-quality rendition. This is the rendition that is delivered to viewers with the fastest internet connections. If you don't specify a value, MediaConvert uses 8,000,000 (8 Mb/s) by default.

automatedAbrSettings_minAbrBitrate :: Lens' AutomatedAbrSettings (Maybe Natural) Source #

Optional. The minimum target bitrate used in your automated ABR stack. Use this value to set a lower limit on the bitrate of video delivered to viewers with slow internet connections. If you don't specify a value, MediaConvert uses 600,000 (600 kb/s) by default.
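
A minimal sketch of capping the automated ABR stack, assuming the (&) and (?~) operators from Control.Lens; bitrates are plain Natural values in bits per second:

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- At most 6 renditions, between 600 kb/s and 5 Mb/s.
abrLimits :: AutomatedAbrSettings
abrLimits =
  newAutomatedAbrSettings
    & automatedAbrSettings_maxRenditions ?~ 6
    & automatedAbrSettings_minAbrBitrate ?~ 600000
    & automatedAbrSettings_maxAbrBitrate ?~ 5000000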

AutomatedEncodingSettings

data AutomatedEncodingSettings Source #

Use automated encoding to have MediaConvert choose your encoding settings for you, based on characteristics of your input video.

See: newAutomatedEncodingSettings smart constructor.

Constructors

AutomatedEncodingSettings' 

Fields

  • abrSettings :: Maybe AutomatedAbrSettings

    Use automated ABR to have MediaConvert set up the renditions in your ABR package for you automatically, based on characteristics of your input video. This feature optimizes video quality while minimizing the overall size of your ABR package.

Instances

Instances details
Eq AutomatedEncodingSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AutomatedEncodingSettings

Read AutomatedEncodingSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AutomatedEncodingSettings

Show AutomatedEncodingSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AutomatedEncodingSettings

Generic AutomatedEncodingSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AutomatedEncodingSettings

Associated Types

type Rep AutomatedEncodingSettings :: Type -> Type #

NFData AutomatedEncodingSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AutomatedEncodingSettings

Hashable AutomatedEncodingSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AutomatedEncodingSettings

ToJSON AutomatedEncodingSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AutomatedEncodingSettings

FromJSON AutomatedEncodingSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AutomatedEncodingSettings

type Rep AutomatedEncodingSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AutomatedEncodingSettings

type Rep AutomatedEncodingSettings = D1 ('MetaData "AutomatedEncodingSettings" "Amazonka.MediaConvert.Types.AutomatedEncodingSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "AutomatedEncodingSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "abrSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AutomatedAbrSettings))))

newAutomatedEncodingSettings :: AutomatedEncodingSettings Source #

Create a value of AutomatedEncodingSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:abrSettings:AutomatedEncodingSettings', automatedEncodingSettings_abrSettings - Use automated ABR to have MediaConvert set up the renditions in your ABR package for you automatically, based on characteristics of your input video. This feature optimizes video quality while minimizing the overall size of your ABR package.

automatedEncodingSettings_abrSettings :: Lens' AutomatedEncodingSettings (Maybe AutomatedAbrSettings) Source #

Use automated ABR to have MediaConvert set up the renditions in your ABR package for you automatically, based on characteristics of your input video. This feature optimizes video quality while minimizing the overall size of your ABR package.
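
For illustration, a minimal sketch that enables automated encoding with an automated ABR stack capped at 8 Mb/s, again assuming the (&) and (?~) operators from the lens package:

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Turn on automated encoding and cap the top rendition of the ABR stack.
autoEncoding :: AutomatedEncodingSettings
autoEncoding =
  newAutomatedEncodingSettings
    & automatedEncodingSettings_abrSettings
        ?~ (newAutomatedAbrSettings & automatedAbrSettings_maxAbrBitrate ?~ 8000000)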

Av1QvbrSettings

data Av1QvbrSettings Source #

Settings for quality-defined variable bitrate encoding with the AV1 codec. Use these settings only when you set QVBR for Rate control mode (RateControlMode).

See: newAv1QvbrSettings smart constructor.

Constructors

Av1QvbrSettings' 

Fields

  • qvbrQualityLevelFineTune :: Maybe Double

    Optional. Specify a value here to set the QVBR quality to a level that is between whole numbers. For example, if you want your QVBR quality level to be 7.33, set qvbrQualityLevel to 7 and set qvbrQualityLevelFineTune to .33. MediaConvert rounds your QVBR quality level to the nearest third of a whole number. For example, if you set qvbrQualityLevel to 7 and you set qvbrQualityLevelFineTune to .25, your actual QVBR quality level is 7.33.

  • qvbrQualityLevel :: Maybe Natural

    Use this setting only when you set Rate control mode (RateControlMode) to QVBR. Specify the target quality level for this output. MediaConvert determines the right number of bits to use for each part of the video to maintain the video quality that you specify. When you keep the default value, AUTO, MediaConvert picks a quality level for you, based on characteristics of your input video. If you prefer to specify a quality level, specify a number from 1 through 10. Use higher numbers for greater quality. Level 10 results in nearly lossless compression. The quality level for most broadcast-quality transcodes is between 6 and 9. Optionally, to specify a value between whole numbers, also provide a value for the setting qvbrQualityLevelFineTune. For example, if you want your QVBR quality level to be 7.33, set qvbrQualityLevel to 7 and set qvbrQualityLevelFineTune to .33.

Instances

Instances details
Eq Av1QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1QvbrSettings

Read Av1QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1QvbrSettings

Show Av1QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1QvbrSettings

Generic Av1QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1QvbrSettings

Associated Types

type Rep Av1QvbrSettings :: Type -> Type #

NFData Av1QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1QvbrSettings

Methods

rnf :: Av1QvbrSettings -> () #

Hashable Av1QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1QvbrSettings

ToJSON Av1QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1QvbrSettings

FromJSON Av1QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1QvbrSettings

type Rep Av1QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1QvbrSettings

type Rep Av1QvbrSettings = D1 ('MetaData "Av1QvbrSettings" "Amazonka.MediaConvert.Types.Av1QvbrSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "Av1QvbrSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "qvbrQualityLevelFineTune") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: S1 ('MetaSel ('Just "qvbrQualityLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))

newAv1QvbrSettings :: Av1QvbrSettings Source #

Create a value of Av1QvbrSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:qvbrQualityLevelFineTune:Av1QvbrSettings', av1QvbrSettings_qvbrQualityLevelFineTune - Optional. Specify a value here to set the QVBR quality to a level that is between whole numbers. For example, if you want your QVBR quality level to be 7.33, set qvbrQualityLevel to 7 and set qvbrQualityLevelFineTune to .33. MediaConvert rounds your QVBR quality level to the nearest third of a whole number. For example, if you set qvbrQualityLevel to 7 and you set qvbrQualityLevelFineTune to .25, your actual QVBR quality level is 7.33.

$sel:qvbrQualityLevel:Av1QvbrSettings', av1QvbrSettings_qvbrQualityLevel - Use this setting only when you set Rate control mode (RateControlMode) to QVBR. Specify the target quality level for this output. MediaConvert determines the right number of bits to use for each part of the video to maintain the video quality that you specify. When you keep the default value, AUTO, MediaConvert picks a quality level for you, based on characteristics of your input video. If you prefer to specify a quality level, specify a number from 1 through 10. Use higher numbers for greater quality. Level 10 results in nearly lossless compression. The quality level for most broadcast-quality transcodes is between 6 and 9. Optionally, to specify a value between whole numbers, also provide a value for the setting qvbrQualityLevelFineTune. For example, if you want your QVBR quality level to be 7.33, set qvbrQualityLevel to 7 and set qvbrQualityLevelFineTune to .33.

av1QvbrSettings_qvbrQualityLevelFineTune :: Lens' Av1QvbrSettings (Maybe Double) Source #

Optional. Specify a value here to set the QVBR quality to a level that is between whole numbers. For example, if you want your QVBR quality level to be 7.33, set qvbrQualityLevel to 7 and set qvbrQualityLevelFineTune to .33. MediaConvert rounds your QVBR quality level to the nearest third of a whole number. For example, if you set qvbrQualityLevel to 7 and you set qvbrQualityLevelFineTune to .25, your actual QVBR quality level is 7.33.

av1QvbrSettings_qvbrQualityLevel :: Lens' Av1QvbrSettings (Maybe Natural) Source #

Use this setting only when you set Rate control mode (RateControlMode) to QVBR. Specify the target quality level for this output. MediaConvert determines the right number of bits to use for each part of the video to maintain the video quality that you specify. When you keep the default value, AUTO, MediaConvert picks a quality level for you, based on characteristics of your input video. If you prefer to specify a quality level, specify a number from 1 through 10. Use higher numbers for greater quality. Level 10 results in nearly lossless compression. The quality level for most broadcast-quality transcodes is between 6 and 9. Optionally, to specify a value between whole numbers, also provide a value for the setting qvbrQualityLevelFineTune. For example, if you want your QVBR quality level to be 7.33, set qvbrQualityLevel to 7 and set qvbrQualityLevelFineTune to .33.
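
A minimal sketch of the fine-tune interaction described above, targeting an effective QVBR quality level of 7.33, assuming the lens package's (&) and (?~) operators:

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Target a QVBR quality level of 7.33 (whole-number level 7 plus a 0.33 fine tune).
qvbr733 :: Av1QvbrSettings
qvbr733 =
  newAv1QvbrSettings
    & av1QvbrSettings_qvbrQualityLevel         ?~ 7
    & av1QvbrSettings_qvbrQualityLevelFineTune ?~ 0.33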

Av1Settings

data Av1Settings Source #

Required when you set Codec (under VideoDescription>CodecSettings) to the value AV1.

See: newAv1Settings smart constructor.

Constructors

Av1Settings' 

Fields

  • gopSize :: Maybe Double

    Specify the GOP length (keyframe interval) in frames. With AV1, MediaConvert doesn't support GOP length in seconds. This value must be greater than zero and preferably equal to 1 + ((numberBFrames + 1) * x), where x is an integer value.

  • numberBFramesBetweenReferenceFrames :: Maybe Natural

    Specify the number of B-frames, in the range of 0-15. For AV1 encoding, we recommend using 7 or 15. Choose a larger number for a lower bitrate and smaller file size; choose a smaller number for better video quality.

  • slices :: Maybe Natural

    Specify the number of slices per picture. This value must be 1, 2, 4, 8, 16, or 32. For progressive pictures, this value must be less than or equal to the number of macroblock rows. For interlaced pictures, this value must be less than or equal to half the number of macroblock rows.

  • rateControlMode :: Maybe Av1RateControlMode

    With AV1 outputs, for rate control mode, MediaConvert supports only quality-defined variable bitrate (QVBR). You can't use CBR or VBR.

  • qvbrSettings :: Maybe Av1QvbrSettings

    Settings for quality-defined variable bitrate encoding with the AV1 codec. Use these settings only when you set QVBR for Rate control mode (RateControlMode).

  • framerateDenominator :: Maybe Natural

    When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

  • framerateConversionAlgorithm :: Maybe Av1FramerateConversionAlgorithm

    Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

  • framerateControl :: Maybe Av1FramerateControl

    If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

  • adaptiveQuantization :: Maybe Av1AdaptiveQuantization

    Specify the strength of any adaptive quantization filters that you enable. The value that you choose here applies to Spatial adaptive quantization (spatialAdaptiveQuantization).

  • framerateNumerator :: Maybe Natural

    When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

  • maxBitrate :: Maybe Natural

    Maximum bitrate in bits/second. For example, enter five megabits per second as 5000000. Required when Rate control mode is QVBR.

  • spatialAdaptiveQuantization :: Maybe Av1SpatialAdaptiveQuantization

    Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (adaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.

Instances

Instances details
Eq Av1Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1Settings

Read Av1Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1Settings

Show Av1Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1Settings

Generic Av1Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1Settings

Associated Types

type Rep Av1Settings :: Type -> Type #

NFData Av1Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1Settings

Methods

rnf :: Av1Settings -> () #

Hashable Av1Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1Settings

ToJSON Av1Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1Settings

FromJSON Av1Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1Settings

type Rep Av1Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Av1Settings

type Rep Av1Settings = D1 ('MetaData "Av1Settings" "Amazonka.MediaConvert.Types.Av1Settings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "Av1Settings'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "gopSize") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: (S1 ('MetaSel ('Just "numberBFramesBetweenReferenceFrames") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "slices") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))) :*: (S1 ('MetaSel ('Just "rateControlMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Av1RateControlMode)) :*: (S1 ('MetaSel ('Just "qvbrSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Av1QvbrSettings)) :*: S1 ('MetaSel ('Just "framerateDenominator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))) :*: ((S1 ('MetaSel ('Just "framerateConversionAlgorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Av1FramerateConversionAlgorithm)) :*: (S1 ('MetaSel ('Just "framerateControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Av1FramerateControl)) :*: S1 ('MetaSel ('Just "adaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Av1AdaptiveQuantization)))) :*: (S1 ('MetaSel ('Just "framerateNumerator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "maxBitrate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "spatialAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Av1SpatialAdaptiveQuantization)))))))

newAv1Settings :: Av1Settings Source #

Create a value of Av1Settings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:gopSize:Av1Settings', av1Settings_gopSize - Specify the GOP length (keyframe interval) in frames. With AV1, MediaConvert doesn't support GOP length in seconds. This value must be greater than zero and preferably equal to 1 + ((numberBFrames + 1) * x), where x is an integer value.

$sel:numberBFramesBetweenReferenceFrames:Av1Settings', av1Settings_numberBFramesBetweenReferenceFrames - Specify the number of B-frames, in the range of 0-15. For AV1 encoding, we recommend using 7 or 15. Choose a larger number for a lower bitrate and smaller file size; choose a smaller number for better video quality.

$sel:slices:Av1Settings', av1Settings_slices - Specify the number of slices per picture. This value must be 1, 2, 4, 8, 16, or 32. For progressive pictures, this value must be less than or equal to the number of macroblock rows. For interlaced pictures, this value must be less than or equal to half the number of macroblock rows.

$sel:rateControlMode:Av1Settings', av1Settings_rateControlMode - With AV1 outputs, for rate control mode, MediaConvert supports only quality-defined variable bitrate (QVBR). You can't use CBR or VBR.

$sel:qvbrSettings:Av1Settings', av1Settings_qvbrSettings - Settings for quality-defined variable bitrate encoding with the AV1 codec. Use these settings only when you set QVBR for Rate control mode (RateControlMode).

$sel:framerateDenominator:Av1Settings', av1Settings_framerateDenominator - When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

$sel:framerateConversionAlgorithm:Av1Settings', av1Settings_framerateConversionAlgorithm - Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

$sel:framerateControl:Av1Settings', av1Settings_framerateControl - If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

$sel:adaptiveQuantization:Av1Settings', av1Settings_adaptiveQuantization - Specify the strength of any adaptive quantization filters that you enable. The value that you choose here applies to Spatial adaptive quantization (spatialAdaptiveQuantization).

$sel:framerateNumerator:Av1Settings', av1Settings_framerateNumerator - When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

$sel:maxBitrate:Av1Settings', av1Settings_maxBitrate - Maximum bitrate in bits/second. For example, enter five megabits per second as 5000000. Required when Rate control mode is QVBR.

$sel:spatialAdaptiveQuantization:Av1Settings', av1Settings_spatialAdaptiveQuantization - Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (adaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.

av1Settings_gopSize :: Lens' Av1Settings (Maybe Double) Source #

Specify the GOP length (keyframe interval) in frames. With AV1, MediaConvert doesn't support GOP length in seconds. This value must be greater than zero and preferably equal to 1 + ((numberBFrames + 1) * x), where x is an integer value.

av1Settings_numberBFramesBetweenReferenceFrames :: Lens' Av1Settings (Maybe Natural) Source #

Specify the number of B-frames, in the range of 0-15. For AV1 encoding, we recommend using 7 or 15. Choose a larger number for a lower bitrate and smaller file size; choose a smaller number for better video quality.

av1Settings_slices :: Lens' Av1Settings (Maybe Natural) Source #

Specify the number of slices per picture. This value must be 1, 2, 4, 8, 16, or 32. For progressive pictures, this value must be less than or equal to the number of macroblock rows. For interlaced pictures, this value must be less than or equal to half the number of macroblock rows.

av1Settings_rateControlMode :: Lens' Av1Settings (Maybe Av1RateControlMode) Source #

With AV1 outputs, for rate control mode, MediaConvert supports only quality-defined variable bitrate (QVBR). You can't use CBR or VBR.

av1Settings_qvbrSettings :: Lens' Av1Settings (Maybe Av1QvbrSettings) Source #

Settings for quality-defined variable bitrate encoding with the AV1 codec. Use these settings only when you set QVBR for Rate control mode (RateControlMode).

av1Settings_framerateDenominator :: Lens' Av1Settings (Maybe Natural) Source #

When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

av1Settings_framerateConversionAlgorithm :: Lens' Av1Settings (Maybe Av1FramerateConversionAlgorithm) Source #

Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

av1Settings_framerateControl :: Lens' Av1Settings (Maybe Av1FramerateControl) Source #

If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

av1Settings_adaptiveQuantization :: Lens' Av1Settings (Maybe Av1AdaptiveQuantization) Source #

Specify the strength of any adaptive quantization filters that you enable. The value that you choose here applies to Spatial adaptive quantization (spatialAdaptiveQuantization).

av1Settings_framerateNumerator :: Lens' Av1Settings (Maybe Natural) Source #

When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

av1Settings_maxBitrate :: Lens' Av1Settings (Maybe Natural) Source #

Maximum bitrate in bits/second. For example, enter five megabits per second as 5000000. Required when Rate control mode is QVBR.

av1Settings_spatialAdaptiveQuantization :: Lens' Av1Settings (Maybe Av1SpatialAdaptiveQuantization) Source #

Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (adaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.
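
Putting a few of these lenses together, here is a minimal sketch of an AV1 output at 23.976 fps with a 5 Mb/s QVBR ceiling. It assumes the lens package's (&) and (?~) operators; a real job would also set rateControlMode and framerateControl through their enum pattern synonyms, which are omitted here to keep the sketch focused on the numeric fields.

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- 23.976 fps expressed as the fraction 24000/1001, with a 5 Mb/s QVBR ceiling
-- and a target quality level of 7.
av1Output :: Av1Settings
av1Output =
  newAv1Settings
    & av1Settings_framerateNumerator   ?~ 24000
    & av1Settings_framerateDenominator ?~ 1001
    & av1Settings_maxBitrate           ?~ 5000000
    & av1Settings_qvbrSettings
        ?~ (newAv1QvbrSettings & av1QvbrSettings_qvbrQualityLevel ?~ 7)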

AvailBlanking

data AvailBlanking Source #

Use ad avail blanking settings to specify your output content during SCTE-35 triggered ad avails. You can blank your video or overlay it with an image. MediaConvert also removes any audio and embedded captions during the ad avail. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/ad-avail-blanking.html.

See: newAvailBlanking smart constructor.

Constructors

AvailBlanking' 

Fields

Instances

Instances details
Eq AvailBlanking Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvailBlanking

Read AvailBlanking Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvailBlanking

Show AvailBlanking Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvailBlanking

Generic AvailBlanking Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvailBlanking

Associated Types

type Rep AvailBlanking :: Type -> Type #

NFData AvailBlanking Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvailBlanking

Methods

rnf :: AvailBlanking -> () #

Hashable AvailBlanking Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvailBlanking

ToJSON AvailBlanking Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvailBlanking

FromJSON AvailBlanking Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvailBlanking

type Rep AvailBlanking Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvailBlanking

type Rep AvailBlanking = D1 ('MetaData "AvailBlanking" "Amazonka.MediaConvert.Types.AvailBlanking" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "AvailBlanking'" 'PrefixI 'True) (S1 ('MetaSel ('Just "availBlankingImage") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))

newAvailBlanking :: AvailBlanking Source #

Create a value of AvailBlanking with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:availBlankingImage:AvailBlanking', availBlanking_availBlankingImage - Blanking image to be used. Leave empty for solid black. Only bmp and png images are supported.

availBlanking_availBlankingImage :: Lens' AvailBlanking (Maybe Text) Source #

Blanking image to be used. Leave empty for solid black. Only bmp and png images are supported.
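
A minimal sketch of supplying a blanking image, assuming the lens package's (&) and (?~) operators; the S3 URI shown is a placeholder, not a real object:

{-# LANGUAGE OverloadedStrings #-}
import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Overlay a PNG slate during SCTE-35 ad avails (placeholder S3 URI).
availBlank :: AvailBlanking
availBlank =
  newAvailBlanking
    & availBlanking_availBlankingImage ?~ "s3://my-bucket/slates/black-slate.png"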

AvcIntraSettings

data AvcIntraSettings Source #

Required when you choose AVC-Intra for your output video codec. For more information about the AVC-Intra settings, see the relevant specification. For detailed information about SD and HD in AVC-Intra, see https://ieeexplore.ieee.org/document/7290936. For information about 4K/2K in AVC-Intra, see https://pro-av.panasonic.net/en/avc-ultra/AVC-ULTRAoverview.pdf.

See: newAvcIntraSettings smart constructor.

Constructors

AvcIntraSettings' 

Fields

  • slowPal :: Maybe AvcIntraSlowPal

    Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.

  • avcIntraUhdSettings :: Maybe AvcIntraUhdSettings

    Optional when you set AVC-Intra class (avcIntraClass) to Class 4K/2K (CLASS_4K_2K). When you set AVC-Intra class to a different value, this object isn't allowed.

  • telecine :: Maybe AvcIntraTelecine

    When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard telecine (HARD) to create a smoother picture. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 without doing anything with the field polarity to create a smoother picture.

  • interlaceMode :: Maybe AvcIntraInterlaceMode

    Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE), to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first, depending on which of the Follow options you choose.

  • scanTypeConversionMode :: Maybe AvcIntraScanTypeConversionMode

    Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).

  • avcIntraClass :: Maybe AvcIntraClass

    Specify the AVC-Intra class of your output. The AVC-Intra class selection determines the output video bit rate depending on the frame rate of the output. Outputs with higher class values have higher bitrates and improved image quality. Note that for Class 4K/2K, MediaConvert supports only 4:2:2 chroma subsampling.

  • framerateDenominator :: Maybe Natural

    When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

  • framerateConversionAlgorithm :: Maybe AvcIntraFramerateConversionAlgorithm

    Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

  • framerateControl :: Maybe AvcIntraFramerateControl

    If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

  • framerateNumerator :: Maybe Natural

    When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

Instances

Instances details
Eq AvcIntraSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSettings

Read AvcIntraSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSettings

Show AvcIntraSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSettings

Generic AvcIntraSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSettings

Associated Types

type Rep AvcIntraSettings :: Type -> Type #

NFData AvcIntraSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSettings

Methods

rnf :: AvcIntraSettings -> () #

Hashable AvcIntraSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSettings

ToJSON AvcIntraSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSettings

FromJSON AvcIntraSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSettings

type Rep AvcIntraSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraSettings

type Rep AvcIntraSettings = D1 ('MetaData "AvcIntraSettings" "Amazonka.MediaConvert.Types.AvcIntraSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "AvcIntraSettings'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "slowPal") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AvcIntraSlowPal)) :*: S1 ('MetaSel ('Just "avcIntraUhdSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AvcIntraUhdSettings))) :*: (S1 ('MetaSel ('Just "telecine") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AvcIntraTelecine)) :*: (S1 ('MetaSel ('Just "interlaceMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AvcIntraInterlaceMode)) :*: S1 ('MetaSel ('Just "scanTypeConversionMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AvcIntraScanTypeConversionMode))))) :*: ((S1 ('MetaSel ('Just "avcIntraClass") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AvcIntraClass)) :*: S1 ('MetaSel ('Just "framerateDenominator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "framerateConversionAlgorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AvcIntraFramerateConversionAlgorithm)) :*: (S1 ('MetaSel ('Just "framerateControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AvcIntraFramerateControl)) :*: S1 ('MetaSel ('Just "framerateNumerator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))))

newAvcIntraSettings :: AvcIntraSettings Source #

Create a value of AvcIntraSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:slowPal:AvcIntraSettings', avcIntraSettings_slowPal - Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.

$sel:avcIntraUhdSettings:AvcIntraSettings', avcIntraSettings_avcIntraUhdSettings - Optional when you set AVC-Intra class (avcIntraClass) to Class 4K/2K (CLASS_4K_2K). When you set AVC-Intra class to a different value, this object isn't allowed.

$sel:telecine:AvcIntraSettings', avcIntraSettings_telecine - When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard telecine (HARD) to create a smoother picture. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 without doing anything with the field polarity to create a smoother picture.

$sel:interlaceMode:AvcIntraSettings', avcIntraSettings_interlaceMode - Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE), to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first, depending on which of the Follow options you choose.

$sel:scanTypeConversionMode:AvcIntraSettings', avcIntraSettings_scanTypeConversionMode - Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).

$sel:avcIntraClass:AvcIntraSettings', avcIntraSettings_avcIntraClass - Specify the AVC-Intra class of your output. The AVC-Intra class selection determines the output video bit rate depending on the frame rate of the output. Outputs with higher class values have higher bitrates and improved image quality. Note that for Class 4K/2K, MediaConvert supports only 4:2:2 chroma subsampling.

$sel:framerateDenominator:AvcIntraSettings', avcIntraSettings_framerateDenominator - When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

$sel:framerateConversionAlgorithm:AvcIntraSettings', avcIntraSettings_framerateConversionAlgorithm - Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

$sel:framerateControl:AvcIntraSettings', avcIntraSettings_framerateControl - If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

$sel:framerateNumerator:AvcIntraSettings', avcIntraSettings_framerateNumerator - When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

avcIntraSettings_slowPal :: Lens' AvcIntraSettings (Maybe AvcIntraSlowPal) Source #

Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.

avcIntraSettings_avcIntraUhdSettings :: Lens' AvcIntraSettings (Maybe AvcIntraUhdSettings) Source #

Optional when you set AVC-Intra class (avcIntraClass) to Class 4K/2K (CLASS_4K_2K). When you set AVC-Intra class to a different value, this object isn't allowed.

avcIntraSettings_telecine :: Lens' AvcIntraSettings (Maybe AvcIntraTelecine) Source #

When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard telecine (HARD) to create a smoother picture. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 without doing anything with the field polarity to create a smoother picture.

avcIntraSettings_interlaceMode :: Lens' AvcIntraSettings (Maybe AvcIntraInterlaceMode) Source #

Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE), to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first, depending on which of the Follow options you choose.

avcIntraSettings_scanTypeConversionMode :: Lens' AvcIntraSettings (Maybe AvcIntraScanTypeConversionMode) Source #

Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).

avcIntraSettings_avcIntraClass :: Lens' AvcIntraSettings (Maybe AvcIntraClass) Source #

Specify the AVC-Intra class of your output. The AVC-Intra class selection determines the output video bit rate depending on the frame rate of the output. Outputs with higher class values have higher bitrates and improved image quality. Note that for Class 4K/2K, MediaConvert supports only 4:2:2 chroma subsampling.

avcIntraSettings_framerateDenominator :: Lens' AvcIntraSettings (Maybe Natural) Source #

When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

avcIntraSettings_framerateConversionAlgorithm :: Lens' AvcIntraSettings (Maybe AvcIntraFramerateConversionAlgorithm) Source #

Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

avcIntraSettings_framerateControl :: Lens' AvcIntraSettings (Maybe AvcIntraFramerateControl) Source #

If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

avcIntraSettings_framerateNumerator :: Lens' AvcIntraSettings (Maybe Natural) Source #

When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.
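
A minimal sketch of expressing 23.976 fps as the fraction 24000/1001 for an AVC-Intra output, assuming the lens package's (&) and (?~) operators:

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Express 23.976 fps as the fraction 24000/1001, as the API expects.
avcIntra23976 :: AvcIntraSettings
avcIntra23976 =
  newAvcIntraSettings
    & avcIntraSettings_framerateNumerator   ?~ 24000
    & avcIntraSettings_framerateDenominator ?~ 1001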

AvcIntraUhdSettings

data AvcIntraUhdSettings Source #

Optional when you set AVC-Intra class (avcIntraClass) to Class 4K/2K (CLASS_4K_2K). When you set AVC-Intra class to a different value, this object isn't allowed.

See: newAvcIntraUhdSettings smart constructor.

Constructors

AvcIntraUhdSettings' 

Fields

  • qualityTuningLevel :: Maybe AvcIntraUhdQualityTuningLevel

    Optional. Use Quality tuning level (qualityTuningLevel) to choose how many transcoding passes MediaConvert does with your video. When you choose Multi-pass (MULTI_PASS), your video quality is better and your output bitrate is more accurate. That is, the actual bitrate of your output is closer to the target bitrate defined in the specification. When you choose Single-pass (SINGLE_PASS), your encoding time is faster. The default behavior is Single-pass (SINGLE_PASS).

Instances

Instances details
Eq AvcIntraUhdSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdSettings

Read AvcIntraUhdSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdSettings

Show AvcIntraUhdSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdSettings

Generic AvcIntraUhdSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdSettings

Associated Types

type Rep AvcIntraUhdSettings :: Type -> Type #

NFData AvcIntraUhdSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdSettings

Methods

rnf :: AvcIntraUhdSettings -> () #

Hashable AvcIntraUhdSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdSettings

ToJSON AvcIntraUhdSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdSettings

FromJSON AvcIntraUhdSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdSettings

type Rep AvcIntraUhdSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.AvcIntraUhdSettings

type Rep AvcIntraUhdSettings = D1 ('MetaData "AvcIntraUhdSettings" "Amazonka.MediaConvert.Types.AvcIntraUhdSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "AvcIntraUhdSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "qualityTuningLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AvcIntraUhdQualityTuningLevel))))

newAvcIntraUhdSettings :: AvcIntraUhdSettings Source #

Create a value of AvcIntraUhdSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:qualityTuningLevel:AvcIntraUhdSettings', avcIntraUhdSettings_qualityTuningLevel - Optional. Use Quality tuning level (qualityTuningLevel) to choose how many transcoding passes MediaConvert does with your video. When you choose Multi-pass (MULTI_PASS), your video quality is better and your output bitrate is more accurate. That is, the actual bitrate of your output is closer to the target bitrate defined in the specification. When you choose Single-pass (SINGLE_PASS), your encoding time is faster. The default behavior is Single-pass (SINGLE_PASS).

avcIntraUhdSettings_qualityTuningLevel :: Lens' AvcIntraUhdSettings (Maybe AvcIntraUhdQualityTuningLevel) Source #

Optional. Use Quality tuning level (qualityTuningLevel) to choose how many transcoding passes MediaConvert does with your video. When you choose Multi-pass (MULTI_PASS), your video quality is better and your output bitrate is more accurate. That is, the actual bitrate of your output is closer to the target bitrate defined in the specification. When you choose Single-pass (SINGLE_PASS), your encoding time is faster. The default behavior is Single-pass (SINGLE_PASS).
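
A minimal sketch of requesting multi-pass encoding, assuming the lens package's (&) and (?~) operators and that the generated pattern synonym for this enum follows the usual TypeName_VALUE convention (AvcIntraUhdQualityTuningLevel_MULTI_PASS):

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Ask for multi-pass encoding of a Class 4K/2K AVC-Intra output.
-- The pattern synonym name below is assumed from the library's naming convention.
uhdTuning :: AvcIntraUhdSettings
uhdTuning =
  newAvcIntraUhdSettings
    & avcIntraUhdSettings_qualityTuningLevel ?~ AvcIntraUhdQualityTuningLevel_MULTI_PASS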

BurninDestinationSettings

data BurninDestinationSettings Source #

Burn-in is a captions delivery method, rather than a captions format. Burn-in writes the captions directly on your video frames, replacing pixels of video content with the captions. Set up burn-in captions in the same output as your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/burn-in-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to BURN_IN.

See: newBurninDestinationSettings smart constructor.

Constructors

BurninDestinationSettings' 

Fields

  • backgroundOpacity :: Maybe Natural

    Specify the opacity of the background rectangle. Enter a value from 0 to 255, where 0 is transparent and 255 is opaque. If Style passthrough (StylePassthrough) is set to enabled, leave blank to pass through the background style information in your input captions to your output captions. If Style passthrough is set to disabled, leave blank to use a value of 0 and remove all backgrounds from your output captions.

  • fallbackFont :: Maybe BurninSubtitleFallbackFont

    Specify the font that you want the service to use for your burn in captions when your input captions specify a font that MediaConvert doesn't support. When you set Fallback font (FallbackFont) to best match (BEST_MATCH), or leave blank, MediaConvert uses a supported font that most closely matches the font that your input captions specify. When there are multiple unsupported fonts in your input captions, MediaConvert matches each font with the supported font that matches best. When you explicitly choose a replacement font, MediaConvert uses that font to replace all unsupported fonts from your input.

  • fontOpacity :: Maybe Natural

    Specify the opacity of the burned-in captions. 255 is opaque; 0 is transparent.

  • shadowYOffset :: Maybe Int

    Specify the vertical offset of the shadow relative to the captions in pixels. A value of -2 would result in a shadow offset 2 pixels above the text. Leave Shadow y-offset (ShadowYOffset) blank and set Style passthrough (StylePassthrough) to enabled to use the shadow y-offset data from your input captions, if present.

  • fontResolution :: Maybe Natural

    Specify the Font resolution (FontResolution) in DPI (dots per inch).

  • yPosition :: Maybe Natural

    Specify the vertical position (YPosition) of the captions, relative to the top of the output in pixels. A value of 10 would result in the captions starting 10 pixels from the top of the output. If no explicit y_position is provided, the caption will be positioned towards the bottom of the output.

  • backgroundColor :: Maybe BurninSubtitleBackgroundColor

    Specify the color of the rectangle behind the captions. Leave background color (BackgroundColor) blank and set Style passthrough (StylePassthrough) to enabled to use the background color data from your input captions, if present. Within your job settings, all of your DVB-Sub settings must be identical.

  • shadowXOffset :: Maybe Int

    Specify the horizontal offset of the shadow, relative to the captions in pixels. A value of -2 would result in a shadow offset 2 pixels to the left.

  • fontSize :: Maybe Natural

    Specify the Font size (FontSize) in pixels. Must be a positive integer. Set to 0, or leave blank, for automatic font size.

  • xPosition :: Maybe Natural

    Specify the horizontal position (XPosition) of the captions, relative to the left side of the output in pixels. A value of 10 would result in the captions starting 10 pixels from the left of the output. If no explicit x_position is provided, the horizontal caption position will be determined by the alignment parameter.

  • teletextSpacing :: Maybe BurninSubtitleTeletextSpacing

    Specify whether the text spacing (TeletextSpacing) in your captions is set by the captions grid, or varies depending on letter width. Choose fixed grid (FIXED_GRID) to conform to the spacing specified in the captions file more accurately. Choose proportional (PROPORTIONAL) to make the text easier to read for closed captions.

  • fontScript :: Maybe FontScript

    Set Font script (FontScript) to Automatically determined (AUTOMATIC), or leave blank, to automatically determine the font script in your input captions. Otherwise, set to Simplified Chinese (HANS) or Traditional Chinese (HANT) if your input font script uses Simplified or Traditional Chinese.

  • alignment :: Maybe BurninSubtitleAlignment

    Specify the alignment of your captions. If no explicit x_position is provided, setting alignment to centered will place the captions at the bottom center of the output. Similarly, setting a left alignment will align captions to the bottom left of the output. If x and y positions are given in conjunction with the alignment parameter, the font will be justified (either left or centered) relative to those coordinates.

  • shadowOpacity :: Maybe Natural

    Specify the opacity of the shadow. Enter a value from 0 to 255, where 0 is transparent and 255 is opaque. If Style passthrough (StylePassthrough) is set to Enabled, leave Shadow opacity (ShadowOpacity) blank to pass through the shadow style information in your input captions to your output captions. If Style passthrough is set to disabled, leave blank to use a value of 0 and remove all shadows from your output captions.

  • applyFontColor :: Maybe BurninSubtitleApplyFontColor

    Ignore this setting unless Style passthrough (StylePassthrough) is set to Enabled and Font color (FontColor) is set to Black, Yellow, Red, Green, Blue, or Hex. Use Apply font color (ApplyFontColor) for additional font color controls. When you choose White text only (WHITE_TEXT_ONLY), or leave blank, your font color setting only applies to white text in your input captions. For example, if your font color setting is Yellow, and your input captions have red and white text, your output captions will have red and yellow text. When you choose ALL_TEXT, your font color setting applies to all of your output captions text.

  • stylePassthrough :: Maybe BurnInSubtitleStylePassthrough

    Set Style passthrough (StylePassthrough) to ENABLED to use the available style, color, and position information from your input captions. MediaConvert uses default settings for any missing style and position information in your input captions. Set Style passthrough to DISABLED, or leave blank, to ignore the style and position information from your input captions and use default settings: white text with black outlining, bottom-center positioning, and automatic sizing. Whether you set Style passthrough to enabled or not, you can also choose to manually override any of the individual style and position settings.

  • outlineColor :: Maybe BurninSubtitleOutlineColor

    Specify font outline color. Leave Outline color (OutlineColor) blank and set Style passthrough (StylePassthrough) to enabled to use the font outline color data from your input captions, if present.

  • outlineSize :: Maybe Natural

    Specify the Outline size (OutlineSize) of the caption text, in pixels. Leave Outline size blank and set Style passthrough (StylePassthrough) to enabled to use the outline size data from your input captions, if present.

  • shadowColor :: Maybe BurninSubtitleShadowColor

    Specify the color of the shadow cast by the captions. Leave Shadow color (ShadowColor) blank and set Style passthrough (StylePassthrough) to enabled to use the shadow color data from your input captions, if present.

  • hexFontColor :: Maybe Text

    Ignore this setting unless your Font color is set to Hex. Enter either six or eight hexadecimal digits, representing red, green, and blue, with two optional extra digits for alpha. For example, a value of 1122AABB is a red value of 0x11, a green value of 0x22, a blue value of 0xAA, and an alpha value of 0xBB.

  • fontColor :: Maybe BurninSubtitleFontColor

    Specify the color of the burned-in captions text. Leave Font color (FontColor) blank and set Style passthrough (StylePassthrough) to enabled to use the font color data from your input captions, if present.

Instances

Instances details
Eq BurninDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninDestinationSettings

Read BurninDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninDestinationSettings

Show BurninDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninDestinationSettings

Generic BurninDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninDestinationSettings

Associated Types

type Rep BurninDestinationSettings :: Type -> Type #

NFData BurninDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninDestinationSettings

Hashable BurninDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninDestinationSettings

ToJSON BurninDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninDestinationSettings

FromJSON BurninDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninDestinationSettings

type Rep BurninDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.BurninDestinationSettings

type Rep BurninDestinationSettings = D1 ('MetaData "BurninDestinationSettings" "Amazonka.MediaConvert.Types.BurninDestinationSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "BurninDestinationSettings'" 'PrefixI 'True) ((((S1 ('MetaSel ('Just "backgroundOpacity") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "fallbackFont") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe BurninSubtitleFallbackFont))) :*: (S1 ('MetaSel ('Just "fontOpacity") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "shadowYOffset") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)) :*: S1 ('MetaSel ('Just "fontResolution") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))) :*: ((S1 ('MetaSel ('Just "yPosition") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "backgroundColor") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe BurninSubtitleBackgroundColor))) :*: (S1 ('MetaSel ('Just "shadowXOffset") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)) :*: (S1 ('MetaSel ('Just "fontSize") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "xPosition") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))) :*: (((S1 ('MetaSel ('Just "teletextSpacing") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe BurninSubtitleTeletextSpacing)) :*: S1 ('MetaSel ('Just "fontScript") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe FontScript))) :*: (S1 ('MetaSel ('Just "alignment") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe BurninSubtitleAlignment)) :*: (S1 ('MetaSel ('Just "shadowOpacity") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "applyFontColor") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe BurninSubtitleApplyFontColor))))) :*: ((S1 ('MetaSel ('Just "stylePassthrough") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe BurnInSubtitleStylePassthrough)) :*: (S1 ('MetaSel ('Just "outlineColor") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe BurninSubtitleOutlineColor)) :*: S1 ('MetaSel ('Just "outlineSize") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))) :*: (S1 ('MetaSel ('Just "shadowColor") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe BurninSubtitleShadowColor)) :*: (S1 ('MetaSel ('Just "hexFontColor") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "fontColor") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe BurninSubtitleFontColor))))))))

newBurninDestinationSettings :: BurninDestinationSettings Source #

Create a value of BurninDestinationSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:backgroundOpacity:BurninDestinationSettings', burninDestinationSettings_backgroundOpacity - Specify the opacity of the background rectangle. Enter a value from 0 to 255, where 0 is transparent and 255 is opaque. If Style passthrough (StylePassthrough) is set to enabled, leave blank to pass through the background style information in your input captions to your output captions. If Style passthrough is set to disabled, leave blank to use a value of 0 and remove all backgrounds from your output captions.

$sel:fallbackFont:BurninDestinationSettings', burninDestinationSettings_fallbackFont - Specify the font that you want the service to use for your burn in captions when your input captions specify a font that MediaConvert doesn't support. When you set Fallback font (FallbackFont) to best match (BEST_MATCH), or leave blank, MediaConvert uses a supported font that most closely matches the font that your input captions specify. When there are multiple unsupported fonts in your input captions, MediaConvert matches each font with the supported font that matches best. When you explicitly choose a replacement font, MediaConvert uses that font to replace all unsupported fonts from your input.

$sel:fontOpacity:BurninDestinationSettings', burninDestinationSettings_fontOpacity - Specify the opacity of the burned-in captions. 255 is opaque; 0 is transparent.

$sel:shadowYOffset:BurninDestinationSettings', burninDestinationSettings_shadowYOffset - Specify the vertical offset of the shadow relative to the captions in pixels. A value of -2 would result in a shadow offset 2 pixels above the text. Leave Shadow y-offset (ShadowYOffset) blank and set Style passthrough (StylePassthrough) to enabled to use the shadow y-offset data from your input captions, if present.

$sel:fontResolution:BurninDestinationSettings', burninDestinationSettings_fontResolution - Specify the Font resolution (FontResolution) in DPI (dots per inch).

$sel:yPosition:BurninDestinationSettings', burninDestinationSettings_yPosition - Specify the vertical position (YPosition) of the captions, relative to the top of the output in pixels. A value of 10 would result in the captions starting 10 pixels from the top of the output. If no explicit y_position is provided, the caption will be positioned towards the bottom of the output.

$sel:backgroundColor:BurninDestinationSettings', burninDestinationSettings_backgroundColor - Specify the color of the rectangle behind the captions. Leave background color (BackgroundColor) blank and set Style passthrough (StylePassthrough) to enabled to use the background color data from your input captions, if present. Within your job settings, all of your DVB-Sub settings must be identical.

$sel:shadowXOffset:BurninDestinationSettings', burninDestinationSettings_shadowXOffset - Specify the horizontal offset of the shadow, relative to the captions in pixels. A value of -2 would result in a shadow offset 2 pixels to the left.

$sel:fontSize:BurninDestinationSettings', burninDestinationSettings_fontSize - Specify the Font size (FontSize) in pixels. Must be a positive integer. Set to 0, or leave blank, for automatic font size.

$sel:xPosition:BurninDestinationSettings', burninDestinationSettings_xPosition - Specify the horizontal position (XPosition) of the captions, relative to the left side of the output in pixels. A value of 10 would result in the captions starting 10 pixels from the left of the output. If no explicit x_position is provided, the horizontal caption position will be determined by the alignment parameter.

$sel:teletextSpacing:BurninDestinationSettings', burninDestinationSettings_teletextSpacing - Specify whether the text spacing (TeletextSpacing) in your captions is set by the captions grid, or varies depending on letter width. Choose fixed grid (FIXED_GRID) to conform to the spacing specified in the captions file more accurately. Choose proportional (PROPORTIONAL) to make the text easier to read for closed captions.

$sel:fontScript:BurninDestinationSettings', burninDestinationSettings_fontScript - Set Font script (FontScript) to Automatically determined (AUTOMATIC), or leave blank, to automatically determine the font script in your input captions. Otherwise, set to Simplified Chinese (HANS) or Traditional Chinese (HANT) if your input font script uses Simplified or Traditional Chinese.

$sel:alignment:BurninDestinationSettings', burninDestinationSettings_alignment - Specify the alignment of your captions. If no explicit x_position is provided, setting alignment to centered will place the captions at the bottom center of the output. Similarly, setting a left alignment will align captions to the bottom left of the output. If x and y positions are given in conjunction with the alignment parameter, the font will be justified (either left or centered) relative to those coordinates.

$sel:shadowOpacity:BurninDestinationSettings', burninDestinationSettings_shadowOpacity - Specify the opacity of the shadow. Enter a value from 0 to 255, where 0 is transparent and 255 is opaque. If Style passthrough (StylePassthrough) is set to Enabled, leave Shadow opacity (ShadowOpacity) blank to pass through the shadow style information in your input captions to your output captions. If Style passthrough is set to disabled, leave blank to use a value of 0 and remove all shadows from your output captions.

$sel:applyFontColor:BurninDestinationSettings', burninDestinationSettings_applyFontColor - Ignore this setting unless Style passthrough (StylePassthrough) is set to Enabled and Font color (FontColor) is set to Black, Yellow, Red, Green, Blue, or Hex. Use Apply font color (ApplyFontColor) for additional font color controls. When you choose White text only (WHITE_TEXT_ONLY), or leave blank, your font color setting only applies to white text in your input captions. For example, if your font color setting is Yellow, and your input captions have red and white text, your output captions will have red and yellow text. When you choose ALL_TEXT, your font color setting applies to all of your output captions text.

$sel:stylePassthrough:BurninDestinationSettings', burninDestinationSettings_stylePassthrough - Set Style passthrough (StylePassthrough) to ENABLED to use the available style, color, and position information from your input captions. MediaConvert uses default settings for any missing style and position information in your input captions. Set Style passthrough to DISABLED, or leave blank, to ignore the style and position information from your input captions and use default settings: white text with black outlining, bottom-center positioning, and automatic sizing. Whether you set Style passthrough to enabled or not, you can also choose to manually override any of the individual style and position settings.

$sel:outlineColor:BurninDestinationSettings', burninDestinationSettings_outlineColor - Specify font outline color. Leave Outline color (OutlineColor) blank and set Style passthrough (StylePassthrough) to enabled to use the font outline color data from your input captions, if present.

$sel:outlineSize:BurninDestinationSettings', burninDestinationSettings_outlineSize - Specify the Outline size (OutlineSize) of the caption text, in pixels. Leave Outline size blank and set Style passthrough (StylePassthrough) to enabled to use the outline size data from your input captions, if present.

$sel:shadowColor:BurninDestinationSettings', burninDestinationSettings_shadowColor - Specify the color of the shadow cast by the captions. Leave Shadow color (ShadowColor) blank and set Style passthrough (StylePassthrough) to enabled to use the shadow color data from your input captions, if present.

$sel:hexFontColor:BurninDestinationSettings', burninDestinationSettings_hexFontColor - Ignore this setting unless your Font color is set to Hex. Enter either six or eight hexadecimal digits, representing red, green, and blue, with two optional extra digits for alpha. For example, a value of 1122AABB is a red value of 0x11, a green value of 0x22, a blue value of 0xAA, and an alpha value of 0xBB.

$sel:fontColor:BurninDestinationSettings', burninDestinationSettings_fontColor - Specify the color of the burned-in captions text. Leave Font color (FontColor) blank and set Style passthrough (StylePassthrough) to enabled to use the font color data from your input captions, if present.

burninDestinationSettings_backgroundOpacity :: Lens' BurninDestinationSettings (Maybe Natural) Source #

Specify the opacity of the background rectangle. Enter a value from 0 to 255, where 0 is transparent and 255 is opaque. If Style passthrough (StylePassthrough) is set to enabled, leave blank to pass through the background style information in your input captions to your output captions. If Style passthrough is set to disabled, leave blank to use a value of 0 and remove all backgrounds from your output captions.

burninDestinationSettings_fallbackFont :: Lens' BurninDestinationSettings (Maybe BurninSubtitleFallbackFont) Source #

Specify the font that you want the service to use for your burn in captions when your input captions specify a font that MediaConvert doesn't support. When you set Fallback font (FallbackFont) to best match (BEST_MATCH), or leave blank, MediaConvert uses a supported font that most closely matches the font that your input captions specify. When there are multiple unsupported fonts in your input captions, MediaConvert matches each font with the supported font that matches best. When you explicitly choose a replacement font, MediaConvert uses that font to replace all unsupported fonts from your input.

burninDestinationSettings_fontOpacity :: Lens' BurninDestinationSettings (Maybe Natural) Source #

Specify the opacity of the burned-in captions. 255 is opaque; 0 is transparent.

burninDestinationSettings_shadowYOffset :: Lens' BurninDestinationSettings (Maybe Int) Source #

Specify the vertical offset of the shadow relative to the captions in pixels. A value of -2 would result in a shadow offset 2 pixels above the text. Leave Shadow y-offset (ShadowYOffset) blank and set Style passthrough (StylePassthrough) to enabled to use the shadow y-offset data from your input captions, if present.

burninDestinationSettings_fontResolution :: Lens' BurninDestinationSettings (Maybe Natural) Source #

Specify the Font resolution (FontResolution) in DPI (dots per inch).

burninDestinationSettings_yPosition :: Lens' BurninDestinationSettings (Maybe Natural) Source #

Specify the vertical position (YPosition) of the captions, relative to the top of the output in pixels. A value of 10 would result in the captions starting 10 pixels from the top of the output. If no explicit y_position is provided, the caption will be positioned towards the bottom of the output.

burninDestinationSettings_backgroundColor :: Lens' BurninDestinationSettings (Maybe BurninSubtitleBackgroundColor) Source #

Specify the color of the rectangle behind the captions. Leave background color (BackgroundColor) blank and set Style passthrough (StylePassthrough) to enabled to use the background color data from your input captions, if present. Within your job settings, all of your DVB-Sub settings must be identical.

burninDestinationSettings_shadowXOffset :: Lens' BurninDestinationSettings (Maybe Int) Source #

Specify the horizontal offset of the shadow, relative to the captions in pixels. A value of -2 would result in a shadow offset 2 pixels to the left.

burninDestinationSettings_fontSize :: Lens' BurninDestinationSettings (Maybe Natural) Source #

Specify the Font size (FontSize) in pixels. Must be a positive integer. Set to 0, or leave blank, for automatic font size.

burninDestinationSettings_xPosition :: Lens' BurninDestinationSettings (Maybe Natural) Source #

Specify the horizontal position (XPosition) of the captions, relative to the left side of the output in pixels. A value of 10 would result in the captions starting 10 pixels from the left of the output. If no explicit x_position is provided, the horizontal caption position will be determined by the alignment parameter.

burninDestinationSettings_teletextSpacing :: Lens' BurninDestinationSettings (Maybe BurninSubtitleTeletextSpacing) Source #

Specify whether the text spacing (TeletextSpacing) in your captions is set by the captions grid, or varies depending on letter width. Choose fixed grid (FIXED_GRID) to conform to the spacing specified in the captions file more accurately. Choose proportional (PROPORTIONAL) to make the text easier to read for closed captions.

burninDestinationSettings_fontScript :: Lens' BurninDestinationSettings (Maybe FontScript) Source #

Set Font script (FontScript) to Automatically determined (AUTOMATIC), or leave blank, to automatically determine the font script in your input captions. Otherwise, set to Simplified Chinese (HANS) or Traditional Chinese (HANT) if your input font script uses Simplified or Traditional Chinese.

burninDestinationSettings_alignment :: Lens' BurninDestinationSettings (Maybe BurninSubtitleAlignment) Source #

Specify the alignment of your captions. If no explicit x_position is provided, setting alignment to centered will place the captions at the bottom center of the output. Similarly, setting a left alignment will align captions to the bottom left of the output. If x and y positions are given in conjunction with the alignment parameter, the font will be justified (either left or centered) relative to those coordinates.

burninDestinationSettings_shadowOpacity :: Lens' BurninDestinationSettings (Maybe Natural) Source #

Specify the opacity of the shadow. Enter a value from 0 to 255, where 0 is transparent and 255 is opaque. If Style passthrough (StylePassthrough) is set to Enabled, leave Shadow opacity (ShadowOpacity) blank to pass through the shadow style information in your input captions to your output captions. If Style passthrough is set to disabled, leave blank to use a value of 0 and remove all shadows from your output captions.

burninDestinationSettings_applyFontColor :: Lens' BurninDestinationSettings (Maybe BurninSubtitleApplyFontColor) Source #

Ignore this setting unless Style passthrough (StylePassthrough) is set to Enabled and Font color (FontColor) is set to Black, Yellow, Red, Green, Blue, or Hex. Use Apply font color (ApplyFontColor) for additional font color controls. When you choose White text only (WHITE_TEXT_ONLY), or leave blank, your font color setting only applies to white text in your input captions. For example, if your font color setting is Yellow, and your input captions have red and white text, your output captions will have red and yellow text. When you choose ALL_TEXT, your font color setting applies to all of your output captions text.

burninDestinationSettings_stylePassthrough :: Lens' BurninDestinationSettings (Maybe BurnInSubtitleStylePassthrough) Source #

Set Style passthrough (StylePassthrough) to ENABLED to use the available style, color, and position information from your input captions. MediaConvert uses default settings for any missing style and position information in your input captions. Set Style passthrough to DISABLED, or leave blank, to ignore the style and position information from your input captions and use default settings: white text with black outlining, bottom-center positioning, and automatic sizing. Whether you set Style passthrough to enabled or not, you can also choose to manually override any of the individual style and position settings.

burninDestinationSettings_outlineColor :: Lens' BurninDestinationSettings (Maybe BurninSubtitleOutlineColor) Source #

Specify font outline color. Leave Outline color (OutlineColor) blank and set Style passthrough (StylePassthrough) to enabled to use the font outline color data from your input captions, if present.

burninDestinationSettings_outlineSize :: Lens' BurninDestinationSettings (Maybe Natural) Source #

Specify the Outline size (OutlineSize) of the caption text, in pixels. Leave Outline size blank and set Style passthrough (StylePassthrough) to enabled to use the outline size data from your input captions, if present.

burninDestinationSettings_shadowColor :: Lens' BurninDestinationSettings (Maybe BurninSubtitleShadowColor) Source #

Specify the color of the shadow cast by the captions. Leave Shadow color (ShadowColor) blank and set Style passthrough (StylePassthrough) to enabled to use the shadow color data from your input captions, if present.

burninDestinationSettings_hexFontColor :: Lens' BurninDestinationSettings (Maybe Text) Source #

Ignore this setting unless your Font color is set to Hex. Enter either six or eight hexadecimal digits, representing red, green, and blue, with two optional extra digits for alpha. For example, a value of 1122AABB is a red value of 0x11, a green value of 0x22, a blue value of 0xAA, and an alpha value of 0xBB.

burninDestinationSettings_fontColor :: Lens' BurninDestinationSettings (Maybe BurninSubtitleFontColor) Source #

Specify the color of the burned-in captions text. Leave Font color (FontColor) blank and set Style passthrough (StylePassthrough) to enabled to use the font color data from your input captions, if present.
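
A minimal usage sketch (not from the generated docs), assuming the (&) and (?~) operators from the lens package; only numeric fields are set and everything else keeps its service default:

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Fully opaque, 24-pixel captions placed 10 pixels from the top-left
-- corner of the output; unset fields fall back to the defaults.
burnInStyle :: BurninDestinationSettings
burnInStyle =
  newBurninDestinationSettings
    & burninDestinationSettings_fontOpacity ?~ 255
    & burninDestinationSettings_fontSize    ?~ 24
    & burninDestinationSettings_xPosition   ?~ 10
    & burninDestinationSettings_yPosition   ?~ 10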

CaptionDescription

data CaptionDescription Source #

This object holds groups of settings related to captions for one output. For each output that has captions, include one instance of CaptionDescriptions.

See: newCaptionDescription smart constructor.

Constructors

CaptionDescription' 

Fields

  • captionSelectorName :: Maybe Text

    Specifies which "Caption Selector":#inputs-caption_selector to use from each input when generating captions. The name should be of the format "Caption Selector ", which denotes that the Nth Caption Selector will be used from each input.

  • customLanguageCode :: Maybe Text

    Specify the language for this captions output track. For most captions output formats, the encoder puts this language information in the output captions metadata. If your output captions format is DVB-Sub or Burn in, the encoder uses this language information when automatically selecting the font script for rendering the captions text. For all outputs, you can use an ISO 639-2 or ISO 639-3 code. For streaming outputs, you can also use any other code in the full RFC-5646 specification. Streaming outputs are those that are in one of the following output groups: CMAF, DASH ISO, Apple HLS, or Microsoft Smooth Streaming.

  • languageCode :: Maybe LanguageCode

    Specify the language of this captions output track. For most captions output formats, the encoder puts this language information in the output captions metadata. If your output captions format is DVB-Sub or Burn in, the encoder uses this language information to choose the font language for rendering the captions text.

  • destinationSettings :: Maybe CaptionDestinationSettings

    Settings related to one captions tab on the MediaConvert console. In your job JSON, an instance of captions DestinationSettings is equivalent to one captions tab in the console. Usually, one captions tab corresponds to one output captions track. Depending on your output captions format, one tab might correspond to a set of output captions tracks. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/including-captions.html.

  • languageDescription :: Maybe Text

    Specify a label for this set of output captions. For example, "English", "Director commentary", or "track_2". For streaming outputs, MediaConvert passes this information into destination manifests for display on the end-viewer's player device. For outputs in other output groups, the service ignores this setting.

Instances

Instances details
Eq CaptionDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDescription

Read CaptionDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDescription

Show CaptionDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDescription

Generic CaptionDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDescription

Associated Types

type Rep CaptionDescription :: Type -> Type #

NFData CaptionDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDescription

Methods

rnf :: CaptionDescription -> () #

Hashable CaptionDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDescription

ToJSON CaptionDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDescription

FromJSON CaptionDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDescription

type Rep CaptionDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDescription

type Rep CaptionDescription = D1 ('MetaData "CaptionDescription" "Amazonka.MediaConvert.Types.CaptionDescription" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "CaptionDescription'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "captionSelectorName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "customLanguageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "languageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe LanguageCode)) :*: (S1 ('MetaSel ('Just "destinationSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CaptionDestinationSettings)) :*: S1 ('MetaSel ('Just "languageDescription") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))))

newCaptionDescription :: CaptionDescription Source #

Create a value of CaptionDescription with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:captionSelectorName:CaptionDescription', captionDescription_captionSelectorName - Specifies which "Caption Selector":#inputs-caption_selector to use from each input when generating captions. The name should be of the format "Caption Selector <N>", which denotes that the Nth Caption Selector will be used from each input.

$sel:customLanguageCode:CaptionDescription', captionDescription_customLanguageCode - Specify the language for this captions output track. For most captions output formats, the encoder puts this language information in the output captions metadata. If your output captions format is DVB-Sub or Burn in, the encoder uses this language information when automatically selecting the font script for rendering the captions text. For all outputs, you can use an ISO 639-2 or ISO 639-3 code. For streaming outputs, you can also use any other code in the full RFC-5646 specification. Streaming outputs are those that are in one of the following output groups: CMAF, DASH ISO, Apple HLS, or Microsoft Smooth Streaming.

$sel:languageCode:CaptionDescription', captionDescription_languageCode - Specify the language of this captions output track. For most captions output formats, the encoder puts this language information in the output captions metadata. If your output captions format is DVB-Sub or Burn in, the encoder uses this language information to choose the font language for rendering the captions text.

$sel:destinationSettings:CaptionDescription', captionDescription_destinationSettings - Settings related to one captions tab on the MediaConvert console. In your job JSON, an instance of captions DestinationSettings is equivalent to one captions tab in the console. Usually, one captions tab corresponds to one output captions track. Depending on your output captions format, one tab might correspond to a set of output captions tracks. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/including-captions.html.

$sel:languageDescription:CaptionDescription', captionDescription_languageDescription - Specify a label for this set of output captions. For example, "English", "Director commentary", or "track_2". For streaming outputs, MediaConvert passes this information into destination manifests for display on the end-viewer's player device. For outputs in other output groups, the service ignores this setting.

captionDescription_captionSelectorName :: Lens' CaptionDescription (Maybe Text) Source #

Specifies which "Caption Selector":#inputs-caption_selector to use from each input when generating captions. The name should be of the format "Caption Selector ", which denotes that the Nth Caption Selector will be used from each input.

captionDescription_customLanguageCode :: Lens' CaptionDescription (Maybe Text) Source #

Specify the language for this captions output track. For most captions output formats, the encoder puts this language information in the output captions metadata. If your output captions format is DVB-Sub or Burn in, the encoder uses this language information when automatically selecting the font script for rendering the captions text. For all outputs, you can use an ISO 639-2 or ISO 639-3 code. For streaming outputs, you can also use any other code in the full RFC-5646 specification. Streaming outputs are those that are in one of the following output groups: CMAF, DASH ISO, Apple HLS, or Microsoft Smooth Streaming.

captionDescription_languageCode :: Lens' CaptionDescription (Maybe LanguageCode) Source #

Specify the language of this captions output track. For most captions output formats, the encoder puts this language information in the output captions metadata. If your output captions format is DVB-Sub or Burn in, the encoder uses this language information to choose the font language for rendering the captions text.

captionDescription_destinationSettings :: Lens' CaptionDescription (Maybe CaptionDestinationSettings) Source #

Settings related to one captions tab on the MediaConvert console. In your job JSON, an instance of captions DestinationSettings is equivalent to one captions tab in the console. Usually, one captions tab corresponds to one output captions track. Depending on your output captions format, one tab might correspond to a set of output captions tracks. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/including-captions.html.

captionDescription_languageDescription :: Lens' CaptionDescription (Maybe Text) Source #

Specify a label for this set of output captions. For example, "English", "Director commentary", or "track_2". For streaming outputs, MediaConvert passes this information into destination manifests for display on the end-viewer's player device. For outputs in other output groups, the service ignores this setting.
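
To see how these lenses compose, here is a hedged sketch (assuming OverloadedStrings and the lens-package operators; the selector name is just an example value in the "Caption Selector <N>" format described above):

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- One captions output track, fed by the first caption selector of each
-- input and labelled for display on the end viewer's player.
englishCaptions :: CaptionDescription
englishCaptions =
  newCaptionDescription
    & captionDescription_captionSelectorName ?~ "Caption Selector 1"
    & captionDescription_customLanguageCode  ?~ "eng"
    & captionDescription_languageDescription ?~ "English"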

CaptionDescriptionPreset

data CaptionDescriptionPreset Source #

Caption Description for preset

See: newCaptionDescriptionPreset smart constructor.

Constructors

CaptionDescriptionPreset' 

Fields

  • customLanguageCode :: Maybe Text

    Specify the language for this captions output track. For most captions output formats, the encoder puts this language information in the output captions metadata. If your output captions format is DVB-Sub or Burn in, the encoder uses this language information when automatically selecting the font script for rendering the captions text. For all outputs, you can use an ISO 639-2 or ISO 639-3 code. For streaming outputs, you can also use any other code in the full RFC-5646 specification. Streaming outputs are those that are in one of the following output groups: CMAF, DASH ISO, Apple HLS, or Microsoft Smooth Streaming.

  • languageCode :: Maybe LanguageCode

    Specify the language of this captions output track. For most captions output formats, the encoder puts this language information in the output captions metadata. If your output captions format is DVB-Sub or Burn in, the encoder uses this language information to choose the font language for rendering the captions text.

  • destinationSettings :: Maybe CaptionDestinationSettings

    Settings related to one captions tab on the MediaConvert console. In your job JSON, an instance of captions DestinationSettings is equivalent to one captions tab in the console. Usually, one captions tab corresponds to one output captions track. Depending on your output captions format, one tab might correspond to a set of output captions tracks. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/including-captions.html.

  • languageDescription :: Maybe Text

    Specify a label for this set of output captions. For example, "English", "Director commentary", or "track_2". For streaming outputs, MediaConvert passes this information into destination manifests for display on the end-viewer's player device. For outputs in other output groups, the service ignores this setting.

Instances

Instances details
Eq CaptionDescriptionPreset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDescriptionPreset

Read CaptionDescriptionPreset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDescriptionPreset

Show CaptionDescriptionPreset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDescriptionPreset

Generic CaptionDescriptionPreset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDescriptionPreset

Associated Types

type Rep CaptionDescriptionPreset :: Type -> Type #

NFData CaptionDescriptionPreset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDescriptionPreset

Hashable CaptionDescriptionPreset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDescriptionPreset

ToJSON CaptionDescriptionPreset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDescriptionPreset

FromJSON CaptionDescriptionPreset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDescriptionPreset

type Rep CaptionDescriptionPreset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDescriptionPreset

type Rep CaptionDescriptionPreset = D1 ('MetaData "CaptionDescriptionPreset" "Amazonka.MediaConvert.Types.CaptionDescriptionPreset" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "CaptionDescriptionPreset'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "customLanguageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "languageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe LanguageCode))) :*: (S1 ('MetaSel ('Just "destinationSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CaptionDestinationSettings)) :*: S1 ('MetaSel ('Just "languageDescription") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))))

newCaptionDescriptionPreset :: CaptionDescriptionPreset Source #

Create a value of CaptionDescriptionPreset with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:customLanguageCode:CaptionDescriptionPreset', captionDescriptionPreset_customLanguageCode - Specify the language for this captions output track. For most captions output formats, the encoder puts this language information in the output captions metadata. If your output captions format is DVB-Sub or Burn in, the encoder uses this language information when automatically selecting the font script for rendering the captions text. For all outputs, you can use an ISO 639-2 or ISO 639-3 code. For streaming outputs, you can also use any other code in the full RFC-5646 specification. Streaming outputs are those that are in one of the following output groups: CMAF, DASH ISO, Apple HLS, or Microsoft Smooth Streaming.

$sel:languageCode:CaptionDescriptionPreset', captionDescriptionPreset_languageCode - Specify the language of this captions output track. For most captions output formats, the encoder puts this language information in the output captions metadata. If your output captions format is DVB-Sub or Burn in, the encoder uses this language information to choose the font language for rendering the captions text.

$sel:destinationSettings:CaptionDescriptionPreset', captionDescriptionPreset_destinationSettings - Settings related to one captions tab on the MediaConvert console. In your job JSON, an instance of captions DestinationSettings is equivalent to one captions tab in the console. Usually, one captions tab corresponds to one output captions track. Depending on your output captions format, one tab might correspond to a set of output captions tracks. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/including-captions.html.

$sel:languageDescription:CaptionDescriptionPreset', captionDescriptionPreset_languageDescription - Specify a label for this set of output captions. For example, "English", "Director commentary", or "track_2". For streaming outputs, MediaConvert passes this information into destination manifests for display on the end-viewer's player device. For outputs in other output groups, the service ignores this setting.

captionDescriptionPreset_customLanguageCode :: Lens' CaptionDescriptionPreset (Maybe Text) Source #

Specify the language for this captions output track. For most captions output formats, the encoder puts this language information in the output captions metadata. If your output captions format is DVB-Sub or Burn in, the encoder uses this language information when automatically selecting the font script for rendering the captions text. For all outputs, you can use an ISO 639-2 or ISO 639-3 code. For streaming outputs, you can also use any other code in the full RFC-5646 specification. Streaming outputs are those that are in one of the following output groups: CMAF, DASH ISO, Apple HLS, or Microsoft Smooth Streaming.

captionDescriptionPreset_languageCode :: Lens' CaptionDescriptionPreset (Maybe LanguageCode) Source #

Specify the language of this captions output track. For most captions output formats, the encoder puts this language information in the output captions metadata. If your output captions format is DVB-Sub or Burn in, the encoder uses this language information to choose the font language for rendering the captions text.

captionDescriptionPreset_destinationSettings :: Lens' CaptionDescriptionPreset (Maybe CaptionDestinationSettings) Source #

Settings related to one captions tab on the MediaConvert console. In your job JSON, an instance of captions DestinationSettings is equivalent to one captions tab in the console. Usually, one captions tab corresponds to one output captions track. Depending on your output captions format, one tab might correspond to a set of output captions tracks. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/including-captions.html.

captionDescriptionPreset_languageDescription :: Lens' CaptionDescriptionPreset (Maybe Text) Source #

Specify a label for this set of output captions. For example, "English", "Director commentary", or "track_2". For streaming outputs, MediaConvert passes this information into destination manifests for display on the end-viewer's player device. For outputs in other output groups, the service ignores this setting.
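
CaptionDescriptionPreset mirrors CaptionDescription without the caption selector name, so the same lens pattern applies. A brief sketch (the LanguageCode_ENG pattern synonym name is an assumption based on the library's naming convention for enum values):

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- A preset-level captions description carrying only language metadata.
englishPresetCaptions :: CaptionDescriptionPreset
englishPresetCaptions =
  newCaptionDescriptionPreset
    & captionDescriptionPreset_languageCode        ?~ LanguageCode_ENG
    & captionDescriptionPreset_languageDescription ?~ "English"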

CaptionDestinationSettings

data CaptionDestinationSettings Source #

Settings related to one captions tab on the MediaConvert console. In your job JSON, an instance of captions DestinationSettings is equivalent to one captions tab in the console. Usually, one captions tab corresponds to one output captions track. Depending on your output captions format, one tab might correspond to a set of output captions tracks. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/including-captions.html.

See: newCaptionDestinationSettings smart constructor.

Constructors

CaptionDestinationSettings' 

Fields

  • srtDestinationSettings :: Maybe SrtDestinationSettings

    Settings related to SRT captions. SRT is a sidecar format that holds captions in a file that is separate from the video container. Set up sidecar captions in the same output group, but different output from your video. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to SRT.

  • teletextDestinationSettings :: Maybe TeletextDestinationSettings

    Settings related to teletext captions. Set up teletext captions in the same output as your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/teletext-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to TELETEXT.

  • dvbSubDestinationSettings :: Maybe DvbSubDestinationSettings

    Settings related to DVB-Sub captions. Set up DVB-Sub captions in the same output as your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/dvb-sub-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to DVB_SUB.

  • ttmlDestinationSettings :: Maybe TtmlDestinationSettings

    Settings related to TTML captions. TTML is a sidecar format that holds captions in a file that is separate from the video container. Set up sidecar captions in the same output group, but different output from your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/ttml-and-webvtt-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to TTML.

  • destinationType :: Maybe CaptionDestinationType

    Specify the format for this set of captions on this output. The default format is embedded without SCTE-20. Note that your choice of video output container constrains your choice of output captions format. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/captions-support-tables.html. If you are using SCTE-20 and you want to create an output that complies with the SCTE-43 spec, choose SCTE-20 plus embedded (SCTE20_PLUS_EMBEDDED). To create a non-compliant output where the embedded captions come first, choose Embedded plus SCTE-20 (EMBEDDED_PLUS_SCTE20).

  • webvttDestinationSettings :: Maybe WebvttDestinationSettings

    Settings related to WebVTT captions. WebVTT is a sidecar format that holds captions in a file that is separate from the video container. Set up sidecar captions in the same output group, but different output from your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/ttml-and-webvtt-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to WebVTT.

  • embeddedDestinationSettings :: Maybe EmbeddedDestinationSettings

    Settings related to CEA/EIA-608 and CEA/EIA-708 (also called embedded or ancillary) captions. Set up embedded captions in the same output as your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/embedded-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to EMBEDDED, EMBEDDED_PLUS_SCTE20, or SCTE20_PLUS_EMBEDDED.

  • sccDestinationSettings :: Maybe SccDestinationSettings

    Settings related to SCC captions. SCC is a sidecar format that holds captions in a file that is separate from the video container. Set up sidecar captions in the same output group, but different output from your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/scc-srt-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to SCC.

  • burninDestinationSettings :: Maybe BurninDestinationSettings

    Burn-in is a captions delivery method, rather than a captions format. Burn-in writes the captions directly on your video frames, replacing pixels of video content with the captions. Set up burn-in captions in the same output as your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/burn-in-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to BURN_IN.

  • imscDestinationSettings :: Maybe ImscDestinationSettings

    Settings related to IMSC captions. IMSC is a sidecar format that holds captions in a file that is separate from the video container. Set up sidecar captions in the same output group, but different output from your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/ttml-and-webvtt-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to IMSC.
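
To make the relationship between destinationType and its child settings concrete, a hedged sketch follows (assuming the lens-package operators; the CaptionDestinationType_BURN_IN pattern synonym and the captionDestinationSettings_burninDestinationSettings lens name are inferred from the library's naming conventions):

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Route this captions track to burn-in and attach (default) burn-in
-- style settings, as destinationType BURN_IN expects.
burnInDestination :: CaptionDestinationSettings
burnInDestination =
  newCaptionDestinationSettings
    & captionDestinationSettings_destinationType
        ?~ CaptionDestinationType_BURN_IN
    & captionDestinationSettings_burninDestinationSettings
        ?~ newBurninDestinationSettings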

Instances

Instances details
Eq CaptionDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationSettings

Read CaptionDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationSettings

Show CaptionDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationSettings

Generic CaptionDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationSettings

Associated Types

type Rep CaptionDestinationSettings :: Type -> Type #

NFData CaptionDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationSettings

Hashable CaptionDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationSettings

ToJSON CaptionDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationSettings

FromJSON CaptionDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationSettings

type Rep CaptionDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionDestinationSettings

type Rep CaptionDestinationSettings = D1 ('MetaData "CaptionDestinationSettings" "Amazonka.MediaConvert.Types.CaptionDestinationSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "CaptionDestinationSettings'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "srtDestinationSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe SrtDestinationSettings)) :*: S1 ('MetaSel ('Just "teletextDestinationSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe TeletextDestinationSettings))) :*: (S1 ('MetaSel ('Just "dvbSubDestinationSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DvbSubDestinationSettings)) :*: (S1 ('MetaSel ('Just "ttmlDestinationSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe TtmlDestinationSettings)) :*: S1 ('MetaSel ('Just "destinationType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CaptionDestinationType))))) :*: ((S1 ('MetaSel ('Just "webvttDestinationSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe WebvttDestinationSettings)) :*: S1 ('MetaSel ('Just "embeddedDestinationSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe EmbeddedDestinationSettings))) :*: (S1 ('MetaSel ('Just "sccDestinationSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe SccDestinationSettings)) :*: (S1 ('MetaSel ('Just "burninDestinationSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe BurninDestinationSettings)) :*: S1 ('MetaSel ('Just "imscDestinationSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ImscDestinationSettings)))))))

newCaptionDestinationSettings :: CaptionDestinationSettings Source #

Create a value of CaptionDestinationSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:srtDestinationSettings:CaptionDestinationSettings', captionDestinationSettings_srtDestinationSettings - Settings related to SRT captions. SRT is a sidecar format that holds captions in a file that is separate from the video container. Set up sidecar captions in the same output group, but different output from your video. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to SRT.

$sel:teletextDestinationSettings:CaptionDestinationSettings', captionDestinationSettings_teletextDestinationSettings - Settings related to teletext captions. Set up teletext captions in the same output as your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/teletext-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to TELETEXT.

$sel:dvbSubDestinationSettings:CaptionDestinationSettings', captionDestinationSettings_dvbSubDestinationSettings - Settings related to DVB-Sub captions. Set up DVB-Sub captions in the same output as your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/dvb-sub-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to DVB_SUB.

$sel:ttmlDestinationSettings:CaptionDestinationSettings', captionDestinationSettings_ttmlDestinationSettings - Settings related to TTML captions. TTML is a sidecar format that holds captions in a file that is separate from the video container. Set up sidecar captions in the same output group, but different output from your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/ttml-and-webvtt-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to TTML.

$sel:destinationType:CaptionDestinationSettings', captionDestinationSettings_destinationType - Specify the format for this set of captions on this output. The default format is embedded without SCTE-20. Note that your choice of video output container constrains your choice of output captions format. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/captions-support-tables.html. If you are using SCTE-20 and you want to create an output that complies with the SCTE-43 spec, choose SCTE-20 plus embedded (SCTE20_PLUS_EMBEDDED). To create a non-compliant output where the embedded captions come first, choose Embedded plus SCTE-20 (EMBEDDED_PLUS_SCTE20).

$sel:webvttDestinationSettings:CaptionDestinationSettings', captionDestinationSettings_webvttDestinationSettings - Settings related to WebVTT captions. WebVTT is a sidecar format that holds captions in a file that is separate from the video container. Set up sidecar captions in the same output group, but different output from your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/ttml-and-webvtt-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to WebVTT.

$sel:embeddedDestinationSettings:CaptionDestinationSettings', captionDestinationSettings_embeddedDestinationSettings - Settings related to CEA/EIA-608 and CEA/EIA-708 (also called embedded or ancillary) captions. Set up embedded captions in the same output as your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/embedded-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to EMBEDDED, EMBEDDED_PLUS_SCTE20, or SCTE20_PLUS_EMBEDDED.

$sel:sccDestinationSettings:CaptionDestinationSettings', captionDestinationSettings_sccDestinationSettings - Settings related to SCC captions. SCC is a sidecar format that holds captions in a file that is separate from the video container. Set up sidecar captions in the same output group, but different output from your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/scc-srt-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to SCC.

$sel:burninDestinationSettings:CaptionDestinationSettings', captionDestinationSettings_burninDestinationSettings - Burn-in is a captions delivery method, rather than a captions format. Burn-in writes the captions directly on your video frames, replacing pixels of video content with the captions. Set up burn-in captions in the same output as your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/burn-in-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to BURN_IN.

$sel:imscDestinationSettings:CaptionDestinationSettings', captionDestinationSettings_imscDestinationSettings - Settings related to IMSC captions. IMSC is a sidecar format that holds captions in a file that is separate from the video container. Set up sidecar captions in the same output group, but different output from your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/ttml-and-webvtt-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to IMSC.

captionDestinationSettings_srtDestinationSettings :: Lens' CaptionDestinationSettings (Maybe SrtDestinationSettings) Source #

Settings related to SRT captions. SRT is a sidecar format that holds captions in a file that is separate from the video container. Set up sidecar captions in the same output group, but different output from your video. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to SRT.

captionDestinationSettings_teletextDestinationSettings :: Lens' CaptionDestinationSettings (Maybe TeletextDestinationSettings) Source #

Settings related to teletext captions. Set up teletext captions in the same output as your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/teletext-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to TELETEXT.

captionDestinationSettings_dvbSubDestinationSettings :: Lens' CaptionDestinationSettings (Maybe DvbSubDestinationSettings) Source #

Settings related to DVB-Sub captions. Set up DVB-Sub captions in the same output as your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/dvb-sub-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to DVB_SUB.

captionDestinationSettings_ttmlDestinationSettings :: Lens' CaptionDestinationSettings (Maybe TtmlDestinationSettings) Source #

Settings related to TTML captions. TTML is a sidecar format that holds captions in a file that is separate from the video container. Set up sidecar captions in the same output group, but different output from your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/ttml-and-webvtt-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to TTML.

captionDestinationSettings_destinationType :: Lens' CaptionDestinationSettings (Maybe CaptionDestinationType) Source #

Specify the format for this set of captions on this output. The default format is embedded without SCTE-20. Note that your choice of video output container constrains your choice of output captions format. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/captions-support-tables.html. If you are using SCTE-20 and you want to create an output that complies with the SCTE-43 spec, choose SCTE-20 plus embedded (SCTE20_PLUS_EMBEDDED). To create a non-compliant output where the embedded captions come first, choose Embedded plus SCTE-20 (EMBEDDED_PLUS_SCTE20).

captionDestinationSettings_webvttDestinationSettings :: Lens' CaptionDestinationSettings (Maybe WebvttDestinationSettings) Source #

Settings related to WebVTT captions. WebVTT is a sidecar format that holds captions in a file that is separate from the video container. Set up sidecar captions in the same output group, but different output from your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/ttml-and-webvtt-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to WebVTT.

captionDestinationSettings_embeddedDestinationSettings :: Lens' CaptionDestinationSettings (Maybe EmbeddedDestinationSettings) Source #

Settings related to CEA/EIA-608 and CEA/EIA-708 (also called embedded or ancillary) captions. Set up embedded captions in the same output as your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/embedded-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to EMBEDDED, EMBEDDED_PLUS_SCTE20, or SCTE20_PLUS_EMBEDDED.

captionDestinationSettings_sccDestinationSettings :: Lens' CaptionDestinationSettings (Maybe SccDestinationSettings) Source #

Settings related to SCC captions. SCC is a sidecar format that holds captions in a file that is separate from the video container. Set up sidecar captions in the same output group, but different output from your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/scc-srt-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to SCC.

captionDestinationSettings_burninDestinationSettings :: Lens' CaptionDestinationSettings (Maybe BurninDestinationSettings) Source #

Burn-in is a captions delivery method, rather than a captions format. Burn-in writes the captions directly on your video frames, replacing pixels of video content with the captions. Set up burn-in captions in the same output as your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/burn-in-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to BURN_IN.

captionDestinationSettings_imscDestinationSettings :: Lens' CaptionDestinationSettings (Maybe ImscDestinationSettings) Source #

Settings related to IMSC captions. IMSC is a sidecar format that holds captions in a file that is separate from the video container. Set up sidecar captions in the same output group, but different output from your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/ttml-and-webvtt-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to IMSC.
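
As an illustration, the following is a minimal sketch of building sidecar SRT caption destination settings with the smart constructor and lenses above. It assumes the (&) and (?~) operators from Control.Lens, and that the CaptionDestinationType_SRT pattern, the SrtDestinationSettings module, and its newSrtDestinationSettings constructor follow the generated naming scheme; adjust those names to match your installed version.

  import Control.Lens ((&), (?~))
  import Amazonka.MediaConvert.Types.CaptionDestinationSettings
  import Amazonka.MediaConvert.Types.CaptionDestinationType  -- assumed module, exporting CaptionDestinationType_SRT
  import Amazonka.MediaConvert.Types.SrtDestinationSettings  -- assumed module, exporting newSrtDestinationSettings

  -- Sidecar SRT captions: set the destination type and attach the matching
  -- child settings object, leaving every other optional field omitted.
  srtCaptions :: CaptionDestinationSettings
  srtCaptions =
    newCaptionDestinationSettings
      & captionDestinationSettings_destinationType ?~ CaptionDestinationType_SRT        -- assumed pattern name
      & captionDestinationSettings_srtDestinationSettings ?~ newSrtDestinationSettings  -- assumed constructor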

CaptionSelector

data CaptionSelector Source #

Use captions selectors to specify the captions data from your input that you use in your outputs. You can use up to 20 captions selectors per input.

See: newCaptionSelector smart constructor.

Constructors

CaptionSelector' 

Fields

  • customLanguageCode :: Maybe Text

    The specific language to extract from source, using the ISO 639-2 or ISO 639-3 three-letter language code. If input is SCTE-27, complete this field and/or PID to select the caption language to extract. If input is DVB-Sub and output is Burn-in or SMPTE-TT, complete this field and/or PID to select the caption language to extract. If input is DVB-Sub that is being passed through, omit this field (and PID field); there is no way to extract a specific language with pass-through captions.

  • languageCode :: Maybe LanguageCode

    The specific language to extract from source. If input is SCTE-27, complete this field and/or PID to select the caption language to extract. If input is DVB-Sub and output is Burn-in or SMPTE-TT, complete this field and/or PID to select the caption language to extract. If input is DVB-Sub that is being passed through, omit this field (and PID field); there is no way to extract a specific language with pass-through captions.

  • sourceSettings :: Maybe CaptionSourceSettings

    If your input captions are SCC, TTML, STL, SMI, SRT, or IMSC in an XML file, specify the URI of the input captions source file. If your input captions are IMSC in an IMF package, use TrackSourceSettings instead of FileSourceSettings.

Instances

Instances details
Eq CaptionSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSelector

Read CaptionSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSelector

Show CaptionSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSelector

Generic CaptionSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSelector

Associated Types

type Rep CaptionSelector :: Type -> Type #

NFData CaptionSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSelector

Methods

rnf :: CaptionSelector -> () #

Hashable CaptionSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSelector

ToJSON CaptionSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSelector

FromJSON CaptionSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSelector

type Rep CaptionSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSelector

type Rep CaptionSelector = D1 ('MetaData "CaptionSelector" "Amazonka.MediaConvert.Types.CaptionSelector" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "CaptionSelector'" 'PrefixI 'True) (S1 ('MetaSel ('Just "customLanguageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "languageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe LanguageCode)) :*: S1 ('MetaSel ('Just "sourceSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CaptionSourceSettings)))))

newCaptionSelector :: CaptionSelector Source #

Create a value of CaptionSelector with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:customLanguageCode:CaptionSelector', captionSelector_customLanguageCode - The specific language to extract from source, using the ISO 639-2 or ISO 639-3 three-letter language code. If input is SCTE-27, complete this field and/or PID to select the caption language to extract. If input is DVB-Sub and output is Burn-in or SMPTE-TT, complete this field and/or PID to select the caption language to extract. If input is DVB-Sub that is being passed through, omit this field (and PID field); there is no way to extract a specific language with pass-through captions.

$sel:languageCode:CaptionSelector', captionSelector_languageCode - The specific language to extract from source. If input is SCTE-27, complete this field and/or PID to select the caption language to extract. If input is DVB-Sub and output is Burn-in or SMPTE-TT, complete this field and/or PID to select the caption language to extract. If input is DVB-Sub that is being passed through, omit this field (and PID field); there is no way to extract a specific language with pass-through captions.

$sel:sourceSettings:CaptionSelector', captionSelector_sourceSettings - If your input captions are SCC, TTML, STL, SMI, SRT, or IMSC in an XML file, specify the URI of the input captions source file. If your input captions are IMSC in an IMF package, use TrackSourceSettings instead of FileSourceSettings.

captionSelector_customLanguageCode :: Lens' CaptionSelector (Maybe Text) Source #

The specific language to extract from source, using the ISO 639-2 or ISO 639-3 three-letter language code. If input is SCTE-27, complete this field and/or PID to select the caption language to extract. If input is DVB-Sub and output is Burn-in or SMPTE-TT, complete this field and/or PID to select the caption language to extract. If input is DVB-Sub that is being passed through, omit this field (and PID field); there is no way to extract a specific language with pass-through captions.

captionSelector_languageCode :: Lens' CaptionSelector (Maybe LanguageCode) Source #

The specific language to extract from source. If input is SCTE-27, complete this field and/or PID to select the caption language to extract. If input is DVB-Sub and output is Burn-in or SMPTE-TT, complete this field and/or PID to select the caption language to extract. If input is DVB-Sub that is being passed through, omit this field (and PID field); there is no way to extract a specific language with pass-through captions.

captionSelector_sourceSettings :: Lens' CaptionSelector (Maybe CaptionSourceSettings) Source #

If your input captions are SCC, TTML, STL, SMI, SRT, or IMSC in an XML file, specify the URI of the input captions source file. If your input captions are IMSC in an IMF package, use TrackSourceSettings instead of FileSourceSettings.
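
For example, a selector that pulls English captions out of the input by its three-letter language code might look like the following minimal sketch. It assumes the (&) and (?~) operators from Control.Lens; the nested CaptionSourceSettings value is left at its defaults here and is covered in the CaptionSourceSettings section below.

  {-# LANGUAGE OverloadedStrings #-}

  import Control.Lens ((&), (?~))
  import Amazonka.MediaConvert.Types.CaptionSelector
  import Amazonka.MediaConvert.Types.CaptionSourceSettings (newCaptionSourceSettings)

  -- Extract English ("eng", ISO 639-2) captions from the input; the source
  -- settings object would normally also identify the captions format.
  englishSelector :: CaptionSelector
  englishSelector =
    newCaptionSelector
      & captionSelector_customLanguageCode ?~ "eng"
      & captionSelector_sourceSettings ?~ newCaptionSourceSettings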

CaptionSourceFramerate

data CaptionSourceFramerate Source #

Ignore this setting unless your input captions format is SCC. To have the service compensate for differing frame rates between your input captions and input video, specify the frame rate of the captions file. Specify this value as a fraction. When you work directly in your JSON job specification, use the settings framerateNumerator and framerateDenominator. For example, you might specify 24 / 1 for 24 fps, 25 / 1 for 25 fps, 24000 / 1001 for 23.976 fps, or 30000 / 1001 for 29.97 fps.

See: newCaptionSourceFramerate smart constructor.

Constructors

CaptionSourceFramerate' 

Fields

  • framerateDenominator :: Maybe Natural

    Specify the denominator of the fraction that represents the frame rate for the setting Caption source frame rate (CaptionSourceFramerate). Use this setting along with the setting Framerate numerator (framerateNumerator).

  • framerateNumerator :: Maybe Natural

    Specify the numerator of the fraction that represents the frame rate for the setting Caption source frame rate (CaptionSourceFramerate). Use this setting along with the setting Framerate denominator (framerateDenominator).

Instances

Instances details
Eq CaptionSourceFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceFramerate

Read CaptionSourceFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceFramerate

Show CaptionSourceFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceFramerate

Generic CaptionSourceFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceFramerate

Associated Types

type Rep CaptionSourceFramerate :: Type -> Type #

NFData CaptionSourceFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceFramerate

Methods

rnf :: CaptionSourceFramerate -> () #

Hashable CaptionSourceFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceFramerate

ToJSON CaptionSourceFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceFramerate

FromJSON CaptionSourceFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceFramerate

type Rep CaptionSourceFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceFramerate

type Rep CaptionSourceFramerate = D1 ('MetaData "CaptionSourceFramerate" "Amazonka.MediaConvert.Types.CaptionSourceFramerate" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "CaptionSourceFramerate'" 'PrefixI 'True) (S1 ('MetaSel ('Just "framerateDenominator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "framerateNumerator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))

newCaptionSourceFramerate :: CaptionSourceFramerate Source #

Create a value of CaptionSourceFramerate with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:framerateDenominator:CaptionSourceFramerate', captionSourceFramerate_framerateDenominator - Specify the denominator of the fraction that represents the frame rate for the setting Caption source frame rate (CaptionSourceFramerate). Use this setting along with the setting Framerate numerator (framerateNumerator).

$sel:framerateNumerator:CaptionSourceFramerate', captionSourceFramerate_framerateNumerator - Specify the numerator of the fraction that represents the frame rate for the setting Caption source frame rate (CaptionSourceFramerate). Use this setting along with the setting Framerate denominator (framerateDenominator).

captionSourceFramerate_framerateDenominator :: Lens' CaptionSourceFramerate (Maybe Natural) Source #

Specify the denominator of the fraction that represents the frame rate for the setting Caption source frame rate (CaptionSourceFramerate). Use this setting along with the setting Framerate numerator (framerateNumerator).

captionSourceFramerate_framerateNumerator :: Lens' CaptionSourceFramerate (Maybe Natural) Source #

Specify the numerator of the fraction that represents the frame rate for the setting Caption source frame rate (CaptionSourceFramerate). Use this setting along with the setting Framerate denominator (framerateDenominator).
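
For instance, the 23.976 fps case mentioned above is the fraction 24000/1001. A minimal sketch using the smart constructor and the two lenses, assuming the (&) and (?~) operators from Control.Lens:

  import Control.Lens ((&), (?~))
  import Amazonka.MediaConvert.Types.CaptionSourceFramerate

  -- 23.976 fps expressed as 24000/1001, i.e. framerateNumerator = 24000
  -- and framerateDenominator = 1001.
  sccCaptionFramerate :: CaptionSourceFramerate
  sccCaptionFramerate =
    newCaptionSourceFramerate
      & captionSourceFramerate_framerateNumerator ?~ 24000
      & captionSourceFramerate_framerateDenominator ?~ 1001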

CaptionSourceSettings

data CaptionSourceSettings Source #

If your input captions are SCC, TTML, STL, SMI, SRT, or IMSC in an XML file, specify the URI of the input captions source file. If your input captions are IMSC in an IMF package, use TrackSourceSettings instead of FileSourceSettings.

See: newCaptionSourceSettings smart constructor.

Constructors

CaptionSourceSettings' 

Fields

Instances

Instances details
Eq CaptionSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceSettings

Read CaptionSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceSettings

Show CaptionSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceSettings

Generic CaptionSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceSettings

Associated Types

type Rep CaptionSourceSettings :: Type -> Type #

NFData CaptionSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceSettings

Methods

rnf :: CaptionSourceSettings -> () #

Hashable CaptionSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceSettings

ToJSON CaptionSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceSettings

FromJSON CaptionSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceSettings

type Rep CaptionSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CaptionSourceSettings

type Rep CaptionSourceSettings = D1 ('MetaData "CaptionSourceSettings" "Amazonka.MediaConvert.Types.CaptionSourceSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "CaptionSourceSettings'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "teletextSourceSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe TeletextSourceSettings)) :*: S1 ('MetaSel ('Just "sourceType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CaptionSourceType))) :*: (S1 ('MetaSel ('Just "fileSourceSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe FileSourceSettings)) :*: S1 ('MetaSel ('Just "webvttHlsSourceSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe WebvttHlsSourceSettings)))) :*: ((S1 ('MetaSel ('Just "dvbSubSourceSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DvbSubSourceSettings)) :*: S1 ('MetaSel ('Just "trackSourceSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe TrackSourceSettings))) :*: (S1 ('MetaSel ('Just "ancillarySourceSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AncillarySourceSettings)) :*: S1 ('MetaSel ('Just "embeddedSourceSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe EmbeddedSourceSettings))))))

newCaptionSourceSettings :: CaptionSourceSettings Source #

Create a value of CaptionSourceSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:teletextSourceSettings:CaptionSourceSettings', captionSourceSettings_teletextSourceSettings - Settings specific to Teletext caption sources, including Page number.

$sel:sourceType:CaptionSourceSettings', captionSourceSettings_sourceType - Use Source (SourceType) to identify the format of your input captions. The service cannot auto-detect caption format.

$sel:fileSourceSettings:CaptionSourceSettings', captionSourceSettings_fileSourceSettings - If your input captions are SCC, SMI, SRT, STL, TTML, WebVTT, or IMSC 1.1 in an XML file, specify the URI of the input caption source file. If your caption source is IMSC in an IMF package, use TrackSourceSettings instead of FileSourceSettings.

$sel:webvttHlsSourceSettings:CaptionSourceSettings', captionSourceSettings_webvttHlsSourceSettings - Settings specific to WebVTT sources in an HLS alternative rendition group. Specify the properties (renditionGroupId, renditionName, or renditionLanguageCode) to identify the unique subtitle track among the alternative rendition groups present in the HLS manifest. If no unique track is found, or if multiple tracks match the specified properties, the job fails. If there is only one subtitle track in the rendition group, the settings can be left empty and the default subtitle track will be chosen. If your caption source is a sidecar file, use FileSourceSettings instead of WebvttHlsSourceSettings.

$sel:dvbSubSourceSettings:CaptionSourceSettings', captionSourceSettings_dvbSubSourceSettings - Settings for DVB-Sub caption sources.

$sel:trackSourceSettings:CaptionSourceSettings', captionSourceSettings_trackSourceSettings - Settings specific to caption sources that are specified by track number. Currently, this is only IMSC captions in an IMF package. If your caption source is IMSC 1.1 in a separate xml file, use FileSourceSettings instead of TrackSourceSettings.

$sel:ancillarySourceSettings:CaptionSourceSettings', captionSourceSettings_ancillarySourceSettings - Settings for ancillary caption sources.

$sel:embeddedSourceSettings:CaptionSourceSettings', captionSourceSettings_embeddedSourceSettings - Settings for embedded caption sources.

captionSourceSettings_teletextSourceSettings :: Lens' CaptionSourceSettings (Maybe TeletextSourceSettings) Source #

Settings specific to Teletext caption sources, including Page number.

captionSourceSettings_sourceType :: Lens' CaptionSourceSettings (Maybe CaptionSourceType) Source #

Use Source (SourceType) to identify the format of your input captions. The service cannot auto-detect caption format.

captionSourceSettings_fileSourceSettings :: Lens' CaptionSourceSettings (Maybe FileSourceSettings) Source #

If your input captions are SCC, SMI, SRT, STL, TTML, WebVTT, or IMSC 1.1 in an XML file, specify the URI of the input caption source file. If your caption source is IMSC in an IMF package, use TrackSourceSettings instead of FileSourceSettings.

captionSourceSettings_webvttHlsSourceSettings :: Lens' CaptionSourceSettings (Maybe WebvttHlsSourceSettings) Source #

Settings specific to WebVTT sources in an HLS alternative rendition group. Specify the properties (renditionGroupId, renditionName, or renditionLanguageCode) to identify the unique subtitle track among the alternative rendition groups present in the HLS manifest. If no unique track is found, or if multiple tracks match the specified properties, the job fails. If there is only one subtitle track in the rendition group, the settings can be left empty and the default subtitle track will be chosen. If your caption source is a sidecar file, use FileSourceSettings instead of WebvttHlsSourceSettings.

captionSourceSettings_trackSourceSettings :: Lens' CaptionSourceSettings (Maybe TrackSourceSettings) Source #

Settings specific to caption sources that are specified by track number. Currently, this is only IMSC captions in an IMF package. If your caption source is IMSC 1.1 in a separate xml file, use FileSourceSettings instead of TrackSourceSettings.
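
A minimal sketch of a sidecar-file caption source, tying the explicit format declaration to FileSourceSettings. It assumes the (&) and (?~) operators from Control.Lens; the CaptionSourceType_SRT pattern, the FileSourceSettings module, and its newFileSourceSettings constructor are assumptions based on the generated naming scheme.

  import Control.Lens ((&), (?~))
  import Amazonka.MediaConvert.Types.CaptionSourceSettings
  import Amazonka.MediaConvert.Types.CaptionSourceType   -- assumed module, exporting CaptionSourceType_SRT
  import Amazonka.MediaConvert.Types.FileSourceSettings  -- assumed module, exporting newFileSourceSettings

  -- Declare the captions format explicitly (the service cannot auto-detect it)
  -- and attach FileSourceSettings, which would normally also carry the URI of
  -- the sidecar captions file.
  srtFileCaptionSource :: CaptionSourceSettings
  srtFileCaptionSource =
    newCaptionSourceSettings
      & captionSourceSettings_sourceType ?~ CaptionSourceType_SRT           -- assumed pattern name
      & captionSourceSettings_fileSourceSettings ?~ newFileSourceSettings   -- assumed constructor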

ChannelMapping

data ChannelMapping Source #

Channel mapping (ChannelMapping) contains the group of fields that hold the remixing value for each channel, in dB. Specify remix values to indicate how much of the content from your input audio channel you want in your output audio channels. Each instance of the InputChannels or InputChannelsFineTune array specifies these values for one output channel. Use one instance of this array for each output channel. In the console, each array corresponds to a column in the graphical depiction of the mapping matrix. The rows of the graphical matrix correspond to input channels. Valid values are within the range from -60 (mute) through 6. A setting of 0 passes the input channel unchanged to the output channel (no attenuation or amplification). Use InputChannels or InputChannelsFineTune to specify your remix values. Don't use both.

See: newChannelMapping smart constructor.

Constructors

ChannelMapping' 

Fields

  • outputChannels :: Maybe [OutputChannelMapping]

    In your JSON job specification, include one child of OutputChannels for each audio channel that you want in your output. Each child should contain one instance of InputChannels or InputChannelsFineTune.

Instances

Instances details
Eq ChannelMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ChannelMapping

Read ChannelMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ChannelMapping

Show ChannelMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ChannelMapping

Generic ChannelMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ChannelMapping

Associated Types

type Rep ChannelMapping :: Type -> Type #

NFData ChannelMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ChannelMapping

Methods

rnf :: ChannelMapping -> () #

Hashable ChannelMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ChannelMapping

ToJSON ChannelMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ChannelMapping

FromJSON ChannelMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ChannelMapping

type Rep ChannelMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ChannelMapping

type Rep ChannelMapping = D1 ('MetaData "ChannelMapping" "Amazonka.MediaConvert.Types.ChannelMapping" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "ChannelMapping'" 'PrefixI 'True) (S1 ('MetaSel ('Just "outputChannels") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [OutputChannelMapping]))))

newChannelMapping :: ChannelMapping Source #

Create a value of ChannelMapping with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:outputChannels:ChannelMapping', channelMapping_outputChannels - In your JSON job specification, include one child of OutputChannels for each audio channel that you want in your output. Each child should contain one instance of InputChannels or InputChannelsFineTune.

channelMapping_outputChannels :: Lens' ChannelMapping (Maybe [OutputChannelMapping]) Source #

In your JSON job specification, include one child of OutputChannels for each audio channel that you want in your output. Each child should contain one instance of InputChannels or InputChannelsFineTune.
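
To make the matrix concrete, here is a minimal stereo pass-through sketch: each OutputChannels entry is one output channel and lists a remix value for every input channel (0 dB keeps a channel, -60 mutes it). The (&) and (?~) operators come from Control.Lens; the OutputChannelMapping helpers shown (newOutputChannelMapping, outputChannelMapping_inputChannels) are assumptions based on the generated naming scheme.

  import Control.Lens ((&), (?~))
  import Amazonka.MediaConvert.Types.ChannelMapping
  import Amazonka.MediaConvert.Types.OutputChannelMapping  -- assumed module and helper names

  -- Stereo pass-through: output 1 takes input 1 at 0 dB and mutes input 2;
  -- output 2 does the reverse.
  stereoPassthrough :: ChannelMapping
  stereoPassthrough =
    newChannelMapping
      & channelMapping_outputChannels ?~
          [ newOutputChannelMapping & outputChannelMapping_inputChannels ?~ [0, -60]
          , newOutputChannelMapping & outputChannelMapping_inputChannels ?~ [-60, 0]
          ]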

CmafAdditionalManifest

data CmafAdditionalManifest Source #

Specify the details for each pair of HLS and DASH additional manifests that you want the service to generate for this CMAF output group. Each pair of manifests can reference a different subset of outputs in the group.

See: newCmafAdditionalManifest smart constructor.

Constructors

CmafAdditionalManifest' 

Fields

  • manifestNameModifier :: Maybe Text

    Specify a name modifier that the service adds to the name of this manifest to make it different from the file names of the other main manifests in the output group. For example, say that the default main manifest for your HLS group is film-name.m3u8. If you enter "-no-premium" for this setting, then the file name the service generates for this top-level manifest is film-name-no-premium.m3u8. For HLS output groups, specify a manifestNameModifier that is different from the nameModifier of the output. The service uses the output name modifier to create unique names for the individual variant manifests.

  • selectedOutputs :: Maybe [Text]

    Specify the outputs that you want this additional top-level manifest to reference.

Instances

Instances details
Eq CmafAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafAdditionalManifest

Read CmafAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafAdditionalManifest

Show CmafAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafAdditionalManifest

Generic CmafAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafAdditionalManifest

Associated Types

type Rep CmafAdditionalManifest :: Type -> Type #

NFData CmafAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafAdditionalManifest

Methods

rnf :: CmafAdditionalManifest -> () #

Hashable CmafAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafAdditionalManifest

ToJSON CmafAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafAdditionalManifest

FromJSON CmafAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafAdditionalManifest

type Rep CmafAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafAdditionalManifest

type Rep CmafAdditionalManifest = D1 ('MetaData "CmafAdditionalManifest" "Amazonka.MediaConvert.Types.CmafAdditionalManifest" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "CmafAdditionalManifest'" 'PrefixI 'True) (S1 ('MetaSel ('Just "manifestNameModifier") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "selectedOutputs") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [Text]))))

newCmafAdditionalManifest :: CmafAdditionalManifest Source #

Create a value of CmafAdditionalManifest with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:manifestNameModifier:CmafAdditionalManifest', cmafAdditionalManifest_manifestNameModifier - Specify a name modifier that the service adds to the name of this manifest to make it different from the file names of the other main manifests in the output group. For example, say that the default main manifest for your HLS group is film-name.m3u8. If you enter "-no-premium" for this setting, then the file name the service generates for this top-level manifest is film-name-no-premium.m3u8. For HLS output groups, specify a manifestNameModifier that is different from the nameModifier of the output. The service uses the output name modifier to create unique names for the individual variant manifests.

$sel:selectedOutputs:CmafAdditionalManifest', cmafAdditionalManifest_selectedOutputs - Specify the outputs that you want this additional top-level manifest to reference.

cmafAdditionalManifest_manifestNameModifier :: Lens' CmafAdditionalManifest (Maybe Text) Source #

Specify a name modifier that the service adds to the name of this manifest to make it different from the file names of the other main manifests in the output group. For example, say that the default main manifest for your HLS group is film-name.m3u8. If you enter "-no-premium" for this setting, then the file name the service generates for this top-level manifest is film-name-no-premium.m3u8. For HLS output groups, specify a manifestNameModifier that is different from the nameModifier of the output. The service uses the output name modifier to create unique names for the individual variant manifests.

cmafAdditionalManifest_selectedOutputs :: Lens' CmafAdditionalManifest (Maybe [Text]) Source #

Specify the outputs that you want this additional top-level manifest to reference.
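
Following the film-name example above, a minimal sketch of an additional manifest that references only a subset of outputs. It assumes the (&) and (?~) operators from Control.Lens; the two selected output name modifiers are hypothetical placeholders.

  {-# LANGUAGE OverloadedStrings #-}

  import Control.Lens ((&), (?~))
  import Amazonka.MediaConvert.Types.CmafAdditionalManifest

  -- Produces film-name-no-premium.m3u8 (and the matching .mpd), referencing
  -- only the outputs whose name modifiers are listed below (hypothetical values).
  noPremiumManifest :: CmafAdditionalManifest
  noPremiumManifest =
    newCmafAdditionalManifest
      & cmafAdditionalManifest_manifestNameModifier ?~ "-no-premium"
      & cmafAdditionalManifest_selectedOutputs ?~ ["-720p", "-audio"]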

CmafEncryptionSettings

data CmafEncryptionSettings Source #

Settings for CMAF encryption

See: newCmafEncryptionSettings smart constructor.

Constructors

CmafEncryptionSettings' 

Fields

  • encryptionMethod :: Maybe CmafEncryptionType

    Specify the encryption scheme that you want the service to use when encrypting your CMAF segments. Choose AES-CBC subsample (SAMPLE-AES) or AES_CTR (AES-CTR).

  • constantInitializationVector :: Maybe Text

    This is a 128-bit, 16-byte hex value represented by a 32-character text string. If this parameter is not set then the Initialization Vector will follow the segment number by default.

  • type' :: Maybe CmafKeyProviderType

    Specify whether your DRM encryption key is static or from a key provider that follows the SPEKE standard. For more information about SPEKE, see https://docs.aws.amazon.com/speke/latest/documentation/what-is-speke.html.

  • staticKeyProvider :: Maybe StaticKeyProvider

    Use these settings to set up encryption with a static key provider.

  • spekeKeyProvider :: Maybe SpekeKeyProviderCmaf

    If your output group type is CMAF, use these settings when doing DRM encryption with a SPEKE-compliant key provider. If your output group type is HLS, DASH, or Microsoft Smooth, use the SpekeKeyProvider settings instead.

  • initializationVectorInManifest :: Maybe CmafInitializationVectorInManifest

    When you use DRM with CMAF outputs, choose whether the service writes the 128-bit encryption initialization vector in the HLS and DASH manifests.

Instances

Instances details
Eq CmafEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionSettings

Read CmafEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionSettings

Show CmafEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionSettings

Generic CmafEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionSettings

Associated Types

type Rep CmafEncryptionSettings :: Type -> Type #

NFData CmafEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionSettings

Methods

rnf :: CmafEncryptionSettings -> () #

Hashable CmafEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionSettings

ToJSON CmafEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionSettings

FromJSON CmafEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionSettings

type Rep CmafEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafEncryptionSettings

type Rep CmafEncryptionSettings = D1 ('MetaData "CmafEncryptionSettings" "Amazonka.MediaConvert.Types.CmafEncryptionSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "CmafEncryptionSettings'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "encryptionMethod") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CmafEncryptionType)) :*: (S1 ('MetaSel ('Just "constantInitializationVector") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "type'") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CmafKeyProviderType)))) :*: (S1 ('MetaSel ('Just "staticKeyProvider") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe StaticKeyProvider)) :*: (S1 ('MetaSel ('Just "spekeKeyProvider") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe SpekeKeyProviderCmaf)) :*: S1 ('MetaSel ('Just "initializationVectorInManifest") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CmafInitializationVectorInManifest))))))

newCmafEncryptionSettings :: CmafEncryptionSettings Source #

Create a value of CmafEncryptionSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:encryptionMethod:CmafEncryptionSettings', cmafEncryptionSettings_encryptionMethod - Specify the encryption scheme that you want the service to use when encrypting your CMAF segments. Choose AES-CBC subsample (SAMPLE-AES) or AES_CTR (AES-CTR).

$sel:constantInitializationVector:CmafEncryptionSettings', cmafEncryptionSettings_constantInitializationVector - This is a 128-bit, 16-byte hex value represented by a 32-character text string. If this parameter is not set then the Initialization Vector will follow the segment number by default.

$sel:type':CmafEncryptionSettings', cmafEncryptionSettings_type - Specify whether your DRM encryption key is static or from a key provider that follows the SPEKE standard. For more information about SPEKE, see https://docs.aws.amazon.com/speke/latest/documentation/what-is-speke.html.

$sel:staticKeyProvider:CmafEncryptionSettings', cmafEncryptionSettings_staticKeyProvider - Use these settings to set up encryption with a static key provider.

$sel:spekeKeyProvider:CmafEncryptionSettings', cmafEncryptionSettings_spekeKeyProvider - If your output group type is CMAF, use these settings when doing DRM encryption with a SPEKE-compliant key provider. If your output group type is HLS, DASH, or Microsoft Smooth, use the SpekeKeyProvider settings instead.

$sel:initializationVectorInManifest:CmafEncryptionSettings', cmafEncryptionSettings_initializationVectorInManifest - When you use DRM with CMAF outputs, choose whether the service writes the 128-bit encryption initialization vector in the HLS and DASH manifests.

cmafEncryptionSettings_encryptionMethod :: Lens' CmafEncryptionSettings (Maybe CmafEncryptionType) Source #

Specify the encryption scheme that you want the service to use when encrypting your CMAF segments. Choose AES-CBC subsample (SAMPLE-AES) or AES_CTR (AES-CTR).

cmafEncryptionSettings_constantInitializationVector :: Lens' CmafEncryptionSettings (Maybe Text) Source #

This is a 128-bit, 16-byte hex value represented by a 32-character text string. If this parameter is not set then the Initialization Vector will follow the segment number by default.

cmafEncryptionSettings_type :: Lens' CmafEncryptionSettings (Maybe CmafKeyProviderType) Source #

Specify whether your DRM encryption key is static or from a key provider that follows the SPEKE standard. For more information about SPEKE, see https://docs.aws.amazon.com/speke/latest/documentation/what-is-speke.html.

cmafEncryptionSettings_staticKeyProvider :: Lens' CmafEncryptionSettings (Maybe StaticKeyProvider) Source #

Use these settings to set up encryption with a static key provider.

cmafEncryptionSettings_spekeKeyProvider :: Lens' CmafEncryptionSettings (Maybe SpekeKeyProviderCmaf) Source #

If your output group type is CMAF, use these settings when doing DRM encryption with a SPEKE-compliant key provider. If your output group type is HLS, DASH, or Microsoft Smooth, use the SpekeKeyProvider settings instead.

cmafEncryptionSettings_initializationVectorInManifest :: Lens' CmafEncryptionSettings (Maybe CmafInitializationVectorInManifest) Source #

When you use DRM with CMAF outputs, choose whether the service writes the 128-bit encryption initialization vector in the HLS and DASH manifests.
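
A minimal sketch of SPEKE-based CMAF encryption using the lenses above, assuming the (&) and (?~) operators from Control.Lens. The CmafEncryptionType_SAMPLE_AES and CmafKeyProviderType_SPEKE pattern names and the newSpekeKeyProviderCmaf constructor are assumptions based on the generated naming scheme.

  import Control.Lens ((&), (?~))
  import Amazonka.MediaConvert.Types.CmafEncryptionSettings
  import Amazonka.MediaConvert.Types.CmafEncryptionType    -- assumed module, exporting CmafEncryptionType_SAMPLE_AES
  import Amazonka.MediaConvert.Types.CmafKeyProviderType   -- assumed module, exporting CmafKeyProviderType_SPEKE
  import Amazonka.MediaConvert.Types.SpekeKeyProviderCmaf  -- assumed module, exporting newSpekeKeyProviderCmaf

  -- SPEKE-backed DRM for a CMAF output group: pick the encryption scheme,
  -- mark the key provider as SPEKE, and attach the (still empty) provider
  -- settings, which would normally carry the resource ID, system IDs and URL.
  spekeCmafEncryption :: CmafEncryptionSettings
  spekeCmafEncryption =
    newCmafEncryptionSettings
      & cmafEncryptionSettings_encryptionMethod ?~ CmafEncryptionType_SAMPLE_AES  -- assumed pattern name
      & cmafEncryptionSettings_type ?~ CmafKeyProviderType_SPEKE                  -- assumed pattern name
      & cmafEncryptionSettings_spekeKeyProvider ?~ newSpekeKeyProviderCmaf        -- assumed constructor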

CmafGroupSettings

data CmafGroupSettings Source #

Settings related to your CMAF output package. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/outputs-file-ABR.html. When you work directly in your JSON job specification, include this object and any required children when you set Type, under OutputGroupSettings, to CMAF_GROUP_SETTINGS.

See: newCmafGroupSettings smart constructor.

Constructors

CmafGroupSettings' 

Fields

  • fragmentLength :: Maybe Natural

    Specify the length, in whole seconds, of the mp4 fragments. When you don't specify a value, MediaConvert defaults to 2. Related setting: Use Fragment length control (FragmentLengthControl) to specify whether the encoder enforces this value strictly.

  • segmentControl :: Maybe CmafSegmentControl

    When set to SINGLE_FILE, a single output file is generated, which is internally segmented using the Fragment Length and Segment Length. When set to SEGMENTED_FILES, separate segment files will be created.

  • destination :: Maybe Text

    Use Destination (Destination) to specify the S3 output location and the output filename base. Destination accepts format identifiers. If you do not specify the base filename in the URI, the service will use the filename of the input file. If your job has multiple inputs, the service uses the filename of the first input file.

  • minBufferTime :: Maybe Natural

    Minimum time of initially buffered media that is needed to ensure smooth playout.

  • mpdProfile :: Maybe CmafMpdProfile

    Specify whether your DASH profile is on-demand or main. When you choose Main profile (MAIN_PROFILE), the service signals urn:mpeg:dash:profile:isoff-main:2011 in your .mpd DASH manifest. When you choose On-demand (ON_DEMAND_PROFILE), the service signals urn:mpeg:dash:profile:isoff-on-demand:2011 in your .mpd. When you choose On-demand, you must also set the output group setting Segment control (SegmentControl) to Single file (SINGLE_FILE).

  • targetDurationCompatibilityMode :: Maybe CmafTargetDurationCompatibilityMode

    When set to LEGACY, the segment target duration is always rounded up to the nearest integer value above its current value in seconds. When set to SPEC_COMPLIANT, the segment target duration is rounded up to the nearest integer value if fraction seconds are greater than or equal to 0.5 (>= 0.5) and rounded down if less than 0.5 (< 0.5). You may need to use LEGACY if your client needs to ensure that the target duration is always longer than the actual duration of the segment. Some older players may experience interrupted playback when the actual duration of a track in a segment is longer than the target duration.

  • imageBasedTrickPlay :: Maybe CmafImageBasedTrickPlay

    Specify whether MediaConvert generates images for trick play. Keep the default value, None (NONE), to not generate any images. Choose Thumbnail (THUMBNAIL) to generate tiled thumbnails. Choose Thumbnail and full frame (THUMBNAIL_AND_FULLFRAME) to generate tiled thumbnails and full-resolution images of single frames. When you enable Write HLS manifest (WriteHlsManifest), MediaConvert creates a child manifest for each set of images that you generate and adds corresponding entries to the parent manifest. When you enable Write DASH manifest (WriteDashManifest), MediaConvert adds an entry in the .mpd manifest for each set of images that you generate. A common application for these images is Roku trick mode. The thumbnails and full-frame images that MediaConvert creates with this feature are compatible with this Roku specification: https://developer.roku.com/docs/developer-program/media-playback/trick-mode/hls-and-dash.md

  • writeHlsManifest :: Maybe CmafWriteHLSManifest

    When set to ENABLED, an Apple HLS manifest will be generated for this output.

  • additionalManifests :: Maybe [CmafAdditionalManifest]

    By default, the service creates one top-level .m3u8 HLS manifest and one top-level .mpd DASH manifest for each CMAF output group in your job. These default manifests reference every output in the output group. To create additional top-level manifests that reference a subset of the outputs in the output group, specify a list of them here. For each additional manifest that you specify, the service creates one HLS manifest and one DASH manifest.

  • segmentLengthControl :: Maybe CmafSegmentLengthControl

    Specify how you want MediaConvert to determine the segment length. Choose Exact (EXACT) to have the encoder use the exact length that you specify with the setting Segment length (SegmentLength). This might result in extra I-frames. Choose Multiple of GOP (GOP_MULTIPLE) to have the encoder round up the segment lengths to match the next GOP boundary.

  • imageBasedTrickPlaySettings :: Maybe CmafImageBasedTrickPlaySettings

    Tile and thumbnail settings applicable when imageBasedTrickPlay is ADVANCED

  • codecSpecification :: Maybe CmafCodecSpecification

    Specification to use (RFC-6381 or the default RFC-4281) during m3u8 playlist generation.

  • baseUrl :: Maybe Text

    A partial URI prefix that will be put in the manifest file at the top level BaseURL element. Can be used if streams are delivered from a different URL than the manifest file.

  • destinationSettings :: Maybe DestinationSettings

    Settings associated with the destination. Will vary based on the type of destination

  • minFinalSegmentLength :: Maybe Double

    Keep this setting at the default value of 0, unless you are troubleshooting a problem with how devices play back the end of your video asset. If you know that player devices are hanging on the final segment of your video because the length of your final segment is too short, use this setting to specify a minimum final segment length, in seconds. Choose a value that is greater than or equal to 1 and less than your segment length. When you specify a value for this setting, the encoder will combine any final segment that is shorter than the length that you specify with the previous segment. For example, your segment length is 3 seconds and your final segment is .5 seconds without a minimum final segment length; when you set the minimum final segment length to 1, your final segment is 3.5 seconds.

  • writeDashManifest :: Maybe CmafWriteDASHManifest

    When set to ENABLED, a DASH MPD manifest will be generated for this output.

  • encryption :: Maybe CmafEncryptionSettings

    DRM settings.

  • segmentLength :: Maybe Natural

    Specify the length, in whole seconds, of each segment. When you don't specify a value, MediaConvert defaults to 10. Related settings: Use Segment length control (SegmentLengthControl) to specify whether the encoder enforces this value strictly. Use Segment control (CmafSegmentControl) to specify whether MediaConvert creates separate segment files or one content file that has metadata to mark the segment boundaries.

  • ptsOffsetHandlingForBFrames :: Maybe CmafPtsOffsetHandlingForBFrames

    Use this setting only when your output video stream has B-frames, which causes the initial presentation time stamp (PTS) to be offset from the initial decode time stamp (DTS). Specify how MediaConvert handles PTS when writing time stamps in output DASH manifests. Choose Match initial PTS (MATCH_INITIAL_PTS) when you want MediaConvert to use the initial PTS as the first time stamp in the manifest. Choose Zero-based (ZERO_BASED) to have MediaConvert ignore the initial PTS in the video stream and instead write the initial time stamp as zero in the manifest. For outputs that don't have B-frames, the time stamps in your DASH manifests start at zero regardless of your choice here.

  • manifestDurationFormat :: Maybe CmafManifestDurationFormat

    Indicates whether the output manifest should use floating point values for segment duration.

  • clientCache :: Maybe CmafClientCache

    Disable this setting only when your workflow requires the #EXT-X-ALLOW-CACHE:no tag. Otherwise, keep the default value Enabled (ENABLED) and control caching in your video distribution set up. For example, use the Cache-Control http header.

  • writeSegmentTimelineInRepresentation :: Maybe CmafWriteSegmentTimelineInRepresentation

    When you enable Precise segment duration in DASH manifests (writeSegmentTimelineInRepresentation), your DASH manifest shows precise segment durations. The segment duration information appears inside the SegmentTimeline element, inside SegmentTemplate at the Representation level. When this feature isn't enabled, the segment durations in your DASH manifest are approximate. The segment duration information appears in the duration attribute of the SegmentTemplate element.

  • streamInfResolution :: Maybe CmafStreamInfResolution

    Include or exclude RESOLUTION attribute for video in EXT-X-STREAM-INF tag of variant manifest.

  • manifestCompression :: Maybe CmafManifestCompression

    When set to GZIP, compresses HLS playlist.

Instances

Instances details
Eq CmafGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafGroupSettings

Read CmafGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafGroupSettings

Show CmafGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafGroupSettings

Generic CmafGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafGroupSettings

Associated Types

type Rep CmafGroupSettings :: Type -> Type #

NFData CmafGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafGroupSettings

Methods

rnf :: CmafGroupSettings -> () #

Hashable CmafGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafGroupSettings

ToJSON CmafGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafGroupSettings

FromJSON CmafGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafGroupSettings

type Rep CmafGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafGroupSettings

type Rep CmafGroupSettings = D1 ('MetaData "CmafGroupSettings" "Amazonka.MediaConvert.Types.CmafGroupSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "CmafGroupSettings'" 'PrefixI 'True) ((((S1 ('MetaSel ('Just "fragmentLength") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "segmentControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CmafSegmentControl)) :*: S1 ('MetaSel ('Just "destination") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))) :*: (S1 ('MetaSel ('Just "minBufferTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "mpdProfile") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CmafMpdProfile)) :*: S1 ('MetaSel ('Just "targetDurationCompatibilityMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CmafTargetDurationCompatibilityMode))))) :*: ((S1 ('MetaSel ('Just "imageBasedTrickPlay") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CmafImageBasedTrickPlay)) :*: (S1 ('MetaSel ('Just "writeHlsManifest") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CmafWriteHLSManifest)) :*: S1 ('MetaSel ('Just "additionalManifests") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [CmafAdditionalManifest])))) :*: (S1 ('MetaSel ('Just "segmentLengthControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CmafSegmentLengthControl)) :*: (S1 ('MetaSel ('Just "imageBasedTrickPlaySettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CmafImageBasedTrickPlaySettings)) :*: S1 ('MetaSel ('Just "codecSpecification") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CmafCodecSpecification)))))) :*: (((S1 ('MetaSel ('Just "baseUrl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "destinationSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DestinationSettings)) :*: S1 ('MetaSel ('Just "minFinalSegmentLength") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)))) :*: (S1 ('MetaSel ('Just "writeDashManifest") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CmafWriteDASHManifest)) :*: (S1 ('MetaSel ('Just "encryption") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CmafEncryptionSettings)) :*: S1 ('MetaSel ('Just "segmentLength") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))) :*: ((S1 ('MetaSel ('Just "ptsOffsetHandlingForBFrames") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CmafPtsOffsetHandlingForBFrames)) :*: (S1 ('MetaSel ('Just "manifestDurationFormat") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CmafManifestDurationFormat)) :*: S1 ('MetaSel ('Just "clientCache") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CmafClientCache)))) :*: (S1 ('MetaSel ('Just "writeSegmentTimelineInRepresentation") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CmafWriteSegmentTimelineInRepresentation)) :*: (S1 ('MetaSel ('Just "streamInfResolution") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CmafStreamInfResolution)) :*: S1 ('MetaSel ('Just "manifestCompression") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CmafManifestCompression))))))))

newCmafGroupSettings :: CmafGroupSettings Source #

Create a value of CmafGroupSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:fragmentLength:CmafGroupSettings', cmafGroupSettings_fragmentLength - Specify the length, in whole seconds, of the mp4 fragments. When you don't specify a value, MediaConvert defaults to 2. Related setting: Use Fragment length control (FragmentLengthControl) to specify whether the encoder enforces this value strictly.

$sel:segmentControl:CmafGroupSettings', cmafGroupSettings_segmentControl - When set to SINGLE_FILE, a single output file is generated, which is internally segmented using the Fragment Length and Segment Length. When set to SEGMENTED_FILES, separate segment files will be created.

$sel:destination:CmafGroupSettings', cmafGroupSettings_destination - Use Destination (Destination) to specify the S3 output location and the output filename base. Destination accepts format identifiers. If you do not specify the base filename in the URI, the service will use the filename of the input file. If your job has multiple inputs, the service uses the filename of the first input file.

$sel:minBufferTime:CmafGroupSettings', cmafGroupSettings_minBufferTime - Minimum time of initially buffered media that is needed to ensure smooth playout.

$sel:mpdProfile:CmafGroupSettings', cmafGroupSettings_mpdProfile - Specify whether your DASH profile is on-demand or main. When you choose Main profile (MAIN_PROFILE), the service signals urn:mpeg:dash:profile:isoff-main:2011 in your .mpd DASH manifest. When you choose On-demand (ON_DEMAND_PROFILE), the service signals urn:mpeg:dash:profile:isoff-on-demand:2011 in your .mpd. When you choose On-demand, you must also set the output group setting Segment control (SegmentControl) to Single file (SINGLE_FILE).

$sel:targetDurationCompatibilityMode:CmafGroupSettings', cmafGroupSettings_targetDurationCompatibilityMode - When set to LEGACY, the segment target duration is always rounded up to the nearest integer value above its current value in seconds. When set to SPEC_COMPLIANT, the segment target duration is rounded up to the nearest integer value if fractional seconds are greater than or equal to 0.5 (>= 0.5) and rounded down if less than 0.5 (< 0.5). You may need to use LEGACY if your client needs to ensure that the target duration is always longer than the actual duration of the segment. Some older players may experience interrupted playback when the actual duration of a track in a segment is longer than the target duration.

$sel:imageBasedTrickPlay:CmafGroupSettings', cmafGroupSettings_imageBasedTrickPlay - Specify whether MediaConvert generates images for trick play. Keep the default value, None (NONE), to not generate any images. Choose Thumbnail (THUMBNAIL) to generate tiled thumbnails. Choose Thumbnail and full frame (THUMBNAIL_AND_FULLFRAME) to generate tiled thumbnails and full-resolution images of single frames. When you enable Write HLS manifest (WriteHlsManifest), MediaConvert creates a child manifest for each set of images that you generate and adds corresponding entries to the parent manifest. When you enable Write DASH manifest (WriteDashManifest), MediaConvert adds an entry in the .mpd manifest for each set of images that you generate. A common application for these images is Roku trick mode. The thumbnails and full-frame images that MediaConvert creates with this feature are compatible with this Roku specification: https://developer.roku.com/docs/developer-program/media-playback/trick-mode/hls-and-dash.md

$sel:writeHlsManifest:CmafGroupSettings', cmafGroupSettings_writeHlsManifest - When set to ENABLED, an Apple HLS manifest will be generated for this output.

$sel:additionalManifests:CmafGroupSettings', cmafGroupSettings_additionalManifests - By default, the service creates one top-level .m3u8 HLS manifest and one top-level .mpd DASH manifest for each CMAF output group in your job. These default manifests reference every output in the output group. To create additional top-level manifests that reference a subset of the outputs in the output group, specify a list of them here. For each additional manifest that you specify, the service creates one HLS manifest and one DASH manifest.

$sel:segmentLengthControl:CmafGroupSettings', cmafGroupSettings_segmentLengthControl - Specify how you want MediaConvert to determine the segment length. Choose Exact (EXACT) to have the encoder use the exact length that you specify with the setting Segment length (SegmentLength). This might result in extra I-frames. Choose Multiple of GOP (GOP_MULTIPLE) to have the encoder round up the segment lengths to match the next GOP boundary.

$sel:imageBasedTrickPlaySettings:CmafGroupSettings', cmafGroupSettings_imageBasedTrickPlaySettings - Tile and thumbnail settings applicable when imageBasedTrickPlay is ADVANCED.

$sel:codecSpecification:CmafGroupSettings', cmafGroupSettings_codecSpecification - Specification to use (RFC-6381 or the default RFC-4281) during m3u8 playlist generation.

$sel:baseUrl:CmafGroupSettings', cmafGroupSettings_baseUrl - A partial URI prefix that will be put in the manifest file at the top level BaseURL element. Can be used if streams are delivered from a different URL than the manifest file.

$sel:destinationSettings:CmafGroupSettings', cmafGroupSettings_destinationSettings - Settings associated with the destination. These settings vary based on the type of destination.

$sel:minFinalSegmentLength:CmafGroupSettings', cmafGroupSettings_minFinalSegmentLength - Keep this setting at the default value of 0, unless you are troubleshooting a problem with how devices play back the end of your video asset. If you know that player devices are hanging on the final segment of your video because the length of your final segment is too short, use this setting to specify a minimum final segment length, in seconds. Choose a value that is greater than or equal to 1 and less than your segment length. When you specify a value for this setting, the encoder will combine any final segment that is shorter than the length that you specify with the previous segment. For example, your segment length is 3 seconds and your final segment is .5 seconds without a minimum final segment length; when you set the minimum final segment length to 1, your final segment is 3.5 seconds.

$sel:writeDashManifest:CmafGroupSettings', cmafGroupSettings_writeDashManifest - When set to ENABLED, a DASH MPD manifest will be generated for this output.

$sel:encryption:CmafGroupSettings', cmafGroupSettings_encryption - DRM settings.

$sel:segmentLength:CmafGroupSettings', cmafGroupSettings_segmentLength - Specify the length, in whole seconds, of each segment. When you don't specify a value, MediaConvert defaults to 10. Related settings: Use Segment length control (SegmentLengthControl) to specify whether the encoder enforces this value strictly. Use Segment control (CmafSegmentControl) to specify whether MediaConvert creates separate segment files or one content file that has metadata to mark the segment boundaries.

$sel:ptsOffsetHandlingForBFrames:CmafGroupSettings', cmafGroupSettings_ptsOffsetHandlingForBFrames - Use this setting only when your output video stream has B-frames, which causes the initial presentation time stamp (PTS) to be offset from the initial decode time stamp (DTS). Specify how MediaConvert handles PTS when writing time stamps in output DASH manifests. Choose Match initial PTS (MATCH_INITIAL_PTS) when you want MediaConvert to use the initial PTS as the first time stamp in the manifest. Choose Zero-based (ZERO_BASED) to have MediaConvert ignore the initial PTS in the video stream and instead write the initial time stamp as zero in the manifest. For outputs that don't have B-frames, the time stamps in your DASH manifests start at zero regardless of your choice here.

$sel:manifestDurationFormat:CmafGroupSettings', cmafGroupSettings_manifestDurationFormat - Indicates whether the output manifest should use floating point values for segment duration.

$sel:clientCache:CmafGroupSettings', cmafGroupSettings_clientCache - Disable this setting only when your workflow requires the #EXT-X-ALLOW-CACHE:no tag. Otherwise, keep the default value Enabled (ENABLED) and control caching in your video distribution setup. For example, use the Cache-Control HTTP header.

$sel:writeSegmentTimelineInRepresentation:CmafGroupSettings', cmafGroupSettings_writeSegmentTimelineInRepresentation - When you enable Precise segment duration in DASH manifests (writeSegmentTimelineInRepresentation), your DASH manifest shows precise segment durations. The segment duration information appears inside the SegmentTimeline element, inside SegmentTemplate at the Representation level. When this feature isn't enabled, the segment durations in your DASH manifest are approximate. The segment duration information appears in the duration attribute of the SegmentTemplate element.

$sel:streamInfResolution:CmafGroupSettings', cmafGroupSettings_streamInfResolution - Include or exclude RESOLUTION attribute for video in EXT-X-STREAM-INF tag of variant manifest.

$sel:manifestCompression:CmafGroupSettings', cmafGroupSettings_manifestCompression - When set to GZIP, compresses the HLS playlist.
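
A minimal usage sketch for this smart constructor, assuming the standard Control.Lens operators (&) and (?~); the S3 destination and the segment and fragment lengths below are illustrative values, not service defaults:

{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- Hypothetical CMAF output group: 6-second segments and 2-second fragments,
-- written to an illustrative S3 destination. Fields left unset keep the
-- service behavior described above.
exampleCmafGroup :: CmafGroupSettings
exampleCmafGroup =
  newCmafGroupSettings
    & cmafGroupSettings_destination ?~ "s3://example-bucket/cmaf/out"
    & cmafGroupSettings_segmentLength ?~ 6
    & cmafGroupSettings_fragmentLength ?~ 2

The same record can equally be built with generic-lens or optics, as noted above; the lens-style accessors below are provided for backwards compatibility.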

cmafGroupSettings_fragmentLength :: Lens' CmafGroupSettings (Maybe Natural) Source #

Specify the length, in whole seconds, of the mp4 fragments. When you don't specify a value, MediaConvert defaults to 2. Related setting: Use Fragment length control (FragmentLengthControl) to specify whether the encoder enforces this value strictly.

cmafGroupSettings_segmentControl :: Lens' CmafGroupSettings (Maybe CmafSegmentControl) Source #

When set to SINGLE_FILE, a single output file is generated, which is internally segmented using the Fragment Length and Segment Length. When set to SEGMENTED_FILES, separate segment files will be created.

cmafGroupSettings_destination :: Lens' CmafGroupSettings (Maybe Text) Source #

Use Destination (Destination) to specify the S3 output location and the output filename base. Destination accepts format identifiers. If you do not specify the base filename in the URI, the service will use the filename of the input file. If your job has multiple inputs, the service uses the filename of the first input file.

cmafGroupSettings_minBufferTime :: Lens' CmafGroupSettings (Maybe Natural) Source #

Minimum time of initially buffered media that is needed to ensure smooth playout.

cmafGroupSettings_mpdProfile :: Lens' CmafGroupSettings (Maybe CmafMpdProfile) Source #

Specify whether your DASH profile is on-demand or main. When you choose Main profile (MAIN_PROFILE), the service signals urn:mpeg:dash:profile:isoff-main:2011 in your .mpd DASH manifest. When you choose On-demand (ON_DEMAND_PROFILE), the service signals urn:mpeg:dash:profile:isoff-on-demand:2011 in your .mpd. When you choose On-demand, you must also set the output group setting Segment control (SegmentControl) to Single file (SINGLE_FILE).

cmafGroupSettings_targetDurationCompatibilityMode :: Lens' CmafGroupSettings (Maybe CmafTargetDurationCompatibilityMode) Source #

When set to LEGACY, the segment target duration is always rounded up to the nearest integer value above its current value in seconds. When set to SPEC_COMPLIANT, the segment target duration is rounded up to the nearest integer value if fractional seconds are greater than or equal to 0.5 (>= 0.5) and rounded down if less than 0.5 (< 0.5). You may need to use LEGACY if your client needs to ensure that the target duration is always longer than the actual duration of the segment. Some older players may experience interrupted playback when the actual duration of a track in a segment is longer than the target duration.

cmafGroupSettings_imageBasedTrickPlay :: Lens' CmafGroupSettings (Maybe CmafImageBasedTrickPlay) Source #

Specify whether MediaConvert generates images for trick play. Keep the default value, None (NONE), to not generate any images. Choose Thumbnail (THUMBNAIL) to generate tiled thumbnails. Choose Thumbnail and full frame (THUMBNAIL_AND_FULLFRAME) to generate tiled thumbnails and full-resolution images of single frames. When you enable Write HLS manifest (WriteHlsManifest), MediaConvert creates a child manifest for each set of images that you generate and adds corresponding entries to the parent manifest. When you enable Write DASH manifest (WriteDashManifest), MediaConvert adds an entry in the .mpd manifest for each set of images that you generate. A common application for these images is Roku trick mode. The thumbnails and full-frame images that MediaConvert creates with this feature are compatible with this Roku specification: https://developer.roku.com/docs/developer-program/media-playback/trick-mode/hls-and-dash.md

cmafGroupSettings_writeHlsManifest :: Lens' CmafGroupSettings (Maybe CmafWriteHLSManifest) Source #

When set to ENABLED, an Apple HLS manifest will be generated for this output.

cmafGroupSettings_additionalManifests :: Lens' CmafGroupSettings (Maybe [CmafAdditionalManifest]) Source #

By default, the service creates one top-level .m3u8 HLS manifest and one top-level .mpd DASH manifest for each CMAF output group in your job. These default manifests reference every output in the output group. To create additional top-level manifests that reference a subset of the outputs in the output group, specify a list of them here. For each additional manifest that you specify, the service creates one HLS manifest and one DASH manifest.

cmafGroupSettings_segmentLengthControl :: Lens' CmafGroupSettings (Maybe CmafSegmentLengthControl) Source #

Specify how you want MediaConvert to determine the segment length. Choose Exact (EXACT) to have the encoder use the exact length that you specify with the setting Segment length (SegmentLength). This might result in extra I-frames. Choose Multiple of GOP (GOP_MULTIPLE) to have the encoder round up the segment lengths to match the next GOP boundary.

cmafGroupSettings_imageBasedTrickPlaySettings :: Lens' CmafGroupSettings (Maybe CmafImageBasedTrickPlaySettings) Source #

Tile and thumbnail settings applicable when imageBasedTrickPlay is ADVANCED.

cmafGroupSettings_codecSpecification :: Lens' CmafGroupSettings (Maybe CmafCodecSpecification) Source #

Specification to use (RFC-6381 or the default RFC-4281) during m3u8 playlist generation.

cmafGroupSettings_baseUrl :: Lens' CmafGroupSettings (Maybe Text) Source #

A partial URI prefix that will be put in the manifest file at the top level BaseURL element. Can be used if streams are delivered from a different URL than the manifest file.

cmafGroupSettings_destinationSettings :: Lens' CmafGroupSettings (Maybe DestinationSettings) Source #

Settings associated with the destination. These settings vary based on the type of destination.

cmafGroupSettings_minFinalSegmentLength :: Lens' CmafGroupSettings (Maybe Double) Source #

Keep this setting at the default value of 0, unless you are troubleshooting a problem with how devices play back the end of your video asset. If you know that player devices are hanging on the final segment of your video because the length of your final segment is too short, use this setting to specify a minimum final segment length, in seconds. Choose a value that is greater than or equal to 1 and less than your segment length. When you specify a value for this setting, the encoder will combine any final segment that is shorter than the length that you specify with the previous segment. For example, your segment length is 3 seconds and your final segment is .5 seconds without a minimum final segment length; when you set the minimum final segment length to 1, your final segment is 3.5 seconds.

cmafGroupSettings_writeDashManifest :: Lens' CmafGroupSettings (Maybe CmafWriteDASHManifest) Source #

When set to ENABLED, a DASH MPD manifest will be generated for this output.

cmafGroupSettings_segmentLength :: Lens' CmafGroupSettings (Maybe Natural) Source #

Specify the length, in whole seconds, of each segment. When you don't specify a value, MediaConvert defaults to 10. Related settings: Use Segment length control (SegmentLengthControl) to specify whether the encoder enforces this value strictly. Use Segment control (CmafSegmentControl) to specify whether MediaConvert creates separate segment files or one content file that has metadata to mark the segment boundaries.

cmafGroupSettings_ptsOffsetHandlingForBFrames :: Lens' CmafGroupSettings (Maybe CmafPtsOffsetHandlingForBFrames) Source #

Use this setting only when your output video stream has B-frames, which causes the initial presentation time stamp (PTS) to be offset from the initial decode time stamp (DTS). Specify how MediaConvert handles PTS when writing time stamps in output DASH manifests. Choose Match initial PTS (MATCH_INITIAL_PTS) when you want MediaConvert to use the initial PTS as the first time stamp in the manifest. Choose Zero-based (ZERO_BASED) to have MediaConvert ignore the initial PTS in the video stream and instead write the initial time stamp as zero in the manifest. For outputs that don't have B-frames, the time stamps in your DASH manifests start at zero regardless of your choice here.

cmafGroupSettings_manifestDurationFormat :: Lens' CmafGroupSettings (Maybe CmafManifestDurationFormat) Source #

Indicates whether the output manifest should use floating point values for segment duration.

cmafGroupSettings_clientCache :: Lens' CmafGroupSettings (Maybe CmafClientCache) Source #

Disable this setting only when your workflow requires the #EXT-X-ALLOW-CACHE:no tag. Otherwise, keep the default value Enabled (ENABLED) and control caching in your video distribution setup. For example, use the Cache-Control HTTP header.

cmafGroupSettings_writeSegmentTimelineInRepresentation :: Lens' CmafGroupSettings (Maybe CmafWriteSegmentTimelineInRepresentation) Source #

When you enable Precise segment duration in DASH manifests (writeSegmentTimelineInRepresentation), your DASH manifest shows precise segment durations. The segment duration information appears inside the SegmentTimeline element, inside SegmentTemplate at the Representation level. When this feature isn't enabled, the segment durations in your DASH manifest are approximate. The segment duration information appears in the duration attribute of the SegmentTemplate element.

cmafGroupSettings_streamInfResolution :: Lens' CmafGroupSettings (Maybe CmafStreamInfResolution) Source #

Include or exclude RESOLUTION attribute for video in EXT-X-STREAM-INF tag of variant manifest.

CmafImageBasedTrickPlaySettings

data CmafImageBasedTrickPlaySettings Source #

Tile and thumbnail settings applicable when imageBasedTrickPlay is ADVANCED.

See: newCmafImageBasedTrickPlaySettings smart constructor.

Constructors

CmafImageBasedTrickPlaySettings' 

Fields

  • tileWidth :: Maybe Natural

    Number of thumbnails in each row of a tile image. Set a value between 1 and 512.

  • thumbnailHeight :: Maybe Natural

    Height of each thumbnail within each tile image, in pixels. Leave blank to maintain aspect ratio with thumbnail width. If following the aspect ratio would lead to a total tile height greater than 4096, then the job will be rejected. Must be divisible by 2.

  • intervalCadence :: Maybe CmafIntervalCadence

    The cadence MediaConvert follows for generating thumbnails. If set to FOLLOW_IFRAME, MediaConvert generates thumbnails for each IDR frame in the output (matching the GOP cadence). If set to FOLLOW_CUSTOM, MediaConvert generates thumbnails according to the interval you specify in thumbnailInterval.

  • thumbnailWidth :: Maybe Natural

    Width of each thumbnail within each tile image, in pixels. Default is 312. Must be divisible by 8.

  • thumbnailInterval :: Maybe Double

    Enter the interval, in seconds, that MediaConvert uses to generate thumbnails. If the interval you enter doesn't align with the output frame rate, MediaConvert automatically rounds the interval to align with the output frame rate. For example, if the output frame rate is 29.97 frames per second and you enter 5, MediaConvert uses a 150 frame interval to generate thumbnails.

  • tileHeight :: Maybe Natural

    Number of thumbnails in each column of a tile image. Set a value between 2 and 2048. Must be divisible by 2.

Instances

Instances details
Eq CmafImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlaySettings

Read CmafImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlaySettings

Show CmafImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlaySettings

Generic CmafImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlaySettings

Associated Types

type Rep CmafImageBasedTrickPlaySettings :: Type -> Type #

NFData CmafImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlaySettings

Hashable CmafImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlaySettings

ToJSON CmafImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlaySettings

FromJSON CmafImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlaySettings

type Rep CmafImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmafImageBasedTrickPlaySettings

type Rep CmafImageBasedTrickPlaySettings = D1 ('MetaData "CmafImageBasedTrickPlaySettings" "Amazonka.MediaConvert.Types.CmafImageBasedTrickPlaySettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "CmafImageBasedTrickPlaySettings'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "tileWidth") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "thumbnailHeight") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "intervalCadence") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CmafIntervalCadence)))) :*: (S1 ('MetaSel ('Just "thumbnailWidth") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "thumbnailInterval") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: S1 ('MetaSel ('Just "tileHeight") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))))

newCmafImageBasedTrickPlaySettings :: CmafImageBasedTrickPlaySettings Source #

Create a value of CmafImageBasedTrickPlaySettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:tileWidth:CmafImageBasedTrickPlaySettings', cmafImageBasedTrickPlaySettings_tileWidth - Number of thumbnails in each row of a tile image. Set a value between 1 and 512.

$sel:thumbnailHeight:CmafImageBasedTrickPlaySettings', cmafImageBasedTrickPlaySettings_thumbnailHeight - Height of each thumbnail within each tile image, in pixels. Leave blank to maintain aspect ratio with thumbnail width. If following the aspect ratio would lead to a total tile height greater than 4096, then the job will be rejected. Must be divisible by 2.

$sel:intervalCadence:CmafImageBasedTrickPlaySettings', cmafImageBasedTrickPlaySettings_intervalCadence - The cadence MediaConvert follows for generating thumbnails. If set to FOLLOW_IFRAME, MediaConvert generates thumbnails for each IDR frame in the output (matching the GOP cadence). If set to FOLLOW_CUSTOM, MediaConvert generates thumbnails according to the interval you specify in thumbnailInterval.

$sel:thumbnailWidth:CmafImageBasedTrickPlaySettings', cmafImageBasedTrickPlaySettings_thumbnailWidth - Width of each thumbnail within each tile image, in pixels. Default is 312. Must be divisible by 8.

$sel:thumbnailInterval:CmafImageBasedTrickPlaySettings', cmafImageBasedTrickPlaySettings_thumbnailInterval - Enter the interval, in seconds, that MediaConvert uses to generate thumbnails. If the interval you enter doesn't align with the output frame rate, MediaConvert automatically rounds the interval to align with the output frame rate. For example, if the output frame rate is 29.97 frames per second and you enter 5, MediaConvert uses a 150 frame interval to generate thumbnails.

$sel:tileHeight:CmafImageBasedTrickPlaySettings', cmafImageBasedTrickPlaySettings_tileHeight - Number of thumbnails in each column of a tile image. Set a value between 2 and 2048. Must be divisible by 2.
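
A minimal sketch of an ADVANCED trick-play configuration, using the lenses below with the Control.Lens operators (&) and (?~); the grid and thumbnail dimensions are illustrative values chosen to satisfy the divisibility constraints described above:

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- Hypothetical 10x10 tile of 312x176 thumbnails, one thumbnail every 5 seconds.
-- intervalCadence is left unset here.
exampleTrickPlay :: CmafImageBasedTrickPlaySettings
exampleTrickPlay =
  newCmafImageBasedTrickPlaySettings
    & cmafImageBasedTrickPlaySettings_tileWidth ?~ 10
    & cmafImageBasedTrickPlaySettings_tileHeight ?~ 10
    & cmafImageBasedTrickPlaySettings_thumbnailWidth ?~ 312
    & cmafImageBasedTrickPlaySettings_thumbnailHeight ?~ 176
    & cmafImageBasedTrickPlaySettings_thumbnailInterval ?~ 5

A value like this would typically be supplied through cmafGroupSettings_imageBasedTrickPlaySettings when imageBasedTrickPlay is ADVANCED.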

cmafImageBasedTrickPlaySettings_tileWidth :: Lens' CmafImageBasedTrickPlaySettings (Maybe Natural) Source #

Number of thumbnails in each row of a tile image. Set a value between 1 and 512.

cmafImageBasedTrickPlaySettings_thumbnailHeight :: Lens' CmafImageBasedTrickPlaySettings (Maybe Natural) Source #

Height of each thumbnail within each tile image, in pixels. Leave blank to maintain aspect ratio with thumbnail width. If following the aspect ratio would lead to a total tile height greater than 4096, then the job will be rejected. Must be divisible by 2.

cmafImageBasedTrickPlaySettings_intervalCadence :: Lens' CmafImageBasedTrickPlaySettings (Maybe CmafIntervalCadence) Source #

The cadence MediaConvert follows for generating thumbnails. If set to FOLLOW_IFRAME, MediaConvert generates thumbnails for each IDR frame in the output (matching the GOP cadence). If set to FOLLOW_CUSTOM, MediaConvert generates thumbnails according to the interval you specify in thumbnailInterval.

cmafImageBasedTrickPlaySettings_thumbnailWidth :: Lens' CmafImageBasedTrickPlaySettings (Maybe Natural) Source #

Width of each thumbnail within each tile image, in pixels. Default is 312. Must be divisible by 8.

cmafImageBasedTrickPlaySettings_thumbnailInterval :: Lens' CmafImageBasedTrickPlaySettings (Maybe Double) Source #

Enter the interval, in seconds, that MediaConvert uses to generate thumbnails. If the interval you enter doesn't align with the output frame rate, MediaConvert automatically rounds the interval to align with the output frame rate. For example, if the output frame rate is 29.97 frames per second and you enter 5, MediaConvert uses a 150 frame interval to generate thumbnails.

cmafImageBasedTrickPlaySettings_tileHeight :: Lens' CmafImageBasedTrickPlaySettings (Maybe Natural) Source #

Number of thumbnails in each column of a tile image. Set a value between 2 and 2048. Must be divisible by 2.

CmfcSettings

data CmfcSettings Source #

These settings relate to the fragmented MP4 container for the segments in your CMAF outputs.

See: newCmfcSettings smart constructor.

Constructors

CmfcSettings' 

Fields

  • descriptiveVideoServiceFlag :: Maybe CmfcDescriptiveVideoServiceFlag

    Specify whether to flag this audio track as descriptive video service (DVS) in your HLS parent manifest. When you choose Flag (FLAG), MediaConvert includes the parameter CHARACTERISTICS="public.accessibility.describes-video" in the EXT-X-MEDIA entry for this track. When you keep the default choice, Don't flag (DONT_FLAG), MediaConvert leaves this parameter out. The DVS flag can help with accessibility on Apple devices. For more information, see the Apple documentation.

  • audioRenditionSets :: Maybe Text

    List the audio rendition groups that you want included with this video rendition. Use a comma-separated list. For example, say you want to include the audio rendition groups that have the audio group IDs "audio_aac_1" and "audio_dolby". Then you would specify this value: "audio_aac_1, audio_dolby". Related setting: The rendition groups that you include in your comma-separated list should all match values that you specify in the setting Audio group ID (AudioGroupId) for audio renditions in the same output group as this video rendition. Default behavior: If you don't specify anything here and for Audio group ID, MediaConvert puts each audio variant in its own audio rendition group and associates it with every video variant. Each value in your list appears in your HLS parent manifest in the EXT-X-STREAM-INF tag as the value for the AUDIO attribute. To continue the previous example, say that the file name for the child manifest for your video rendition is "amazing_video_1.m3u8". Then, in your parent manifest, each value will appear on separate lines, like this: #EXT-X-STREAM-INF:AUDIO="audio_aac_1"... amazing_video_1.m3u8 #EXT-X-STREAM-INF:AUDIO="audio_dolby"... amazing_video_1.m3u8

  • iFrameOnlyManifest :: Maybe CmfcIFrameOnlyManifest

    Choose Include (INCLUDE) to have MediaConvert generate an HLS child manifest that lists only the I-frames for this rendition, in addition to your regular manifest for this rendition. You might use this manifest as part of a workflow that creates preview functions for your video. MediaConvert adds both the I-frame only child manifest and the regular child manifest to the parent manifest. When you don't need the I-frame only child manifest, keep the default value Exclude (EXCLUDE).

  • scte35Esam :: Maybe CmfcScte35Esam

    Use this setting only when you specify SCTE-35 markers from ESAM. Choose INSERT to put SCTE-35 markers in this output at the insertion points that you specify in an ESAM XML document. Provide the document in the setting SCC XML (sccXml).

  • audioDuration :: Maybe CmfcAudioDuration

    Specify this setting only when your output will be consumed by a downstream repackaging workflow that is sensitive to very small duration differences between video and audio. For this situation, choose Match video duration (MATCH_VIDEO_DURATION). In all other cases, keep the default value, Default codec duration (DEFAULT_CODEC_DURATION). When you choose Match video duration, MediaConvert pads the output audio streams with silence or trims them to ensure that the total duration of each audio stream is at least as long as the total duration of the video stream. After padding or trimming, the audio stream duration is no more than one frame longer than the video stream. MediaConvert applies audio padding or trimming only to the end of the last segment of the output. For unsegmented outputs, MediaConvert adds padding only to the end of the file. When you keep the default value, any minor discrepancies between audio and video duration will depend on your output audio codec.

  • audioGroupId :: Maybe Text

    Specify the audio rendition group for this audio rendition. Specify up to one value for each audio output in your output group. This value appears in your HLS parent manifest in the EXT-X-MEDIA tag of TYPE=AUDIO, as the value for the GROUP-ID attribute. For example, if you specify "audio_aac_1" for Audio group ID, it appears in your manifest like this: #EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio_aac_1". Related setting: To associate the rendition group that this audio track belongs to with a video rendition, include the same value that you provide here for that video output's setting Audio rendition sets (audioRenditionSets).

  • scte35Source :: Maybe CmfcScte35Source

    Ignore this setting unless you have SCTE-35 markers in your input video file. Choose Passthrough (PASSTHROUGH) if you want SCTE-35 markers that appear in your input to also appear in this output. Choose None (NONE) if you don't want those SCTE-35 markers in this output.

  • audioTrackType :: Maybe CmfcAudioTrackType

    Use this setting to control the values that MediaConvert puts in your HLS parent playlist to control how the client player selects which audio track to play. The other options for this setting determine the values that MediaConvert writes for the DEFAULT and AUTOSELECT attributes of the EXT-X-MEDIA entry for the audio variant. For more information about these attributes, see the Apple documentation article https://developer.apple.com/documentation/http_live_streaming/example_playlists_for_http_live_streaming/adding_alternate_media_to_a_playlist. Choose Alternate audio, auto select, default (ALTERNATE_AUDIO_AUTO_SELECT_DEFAULT) to set DEFAULT=YES and AUTOSELECT=YES. Choose this value for only one variant in your output group. Choose Alternate audio, auto select, not default (ALTERNATE_AUDIO_AUTO_SELECT) to set DEFAULT=NO and AUTOSELECT=YES. Choose Alternate Audio, Not Auto Select to set DEFAULT=NO and AUTOSELECT=NO. When you don't specify a value for this setting, MediaConvert defaults to Alternate audio, auto select, default. When there is more than one variant in your output group, you must explicitly choose a value for this setting.

Instances

Instances details
Eq CmfcSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcSettings

Read CmfcSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcSettings

Show CmfcSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcSettings

Generic CmfcSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcSettings

Associated Types

type Rep CmfcSettings :: Type -> Type #

NFData CmfcSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcSettings

Methods

rnf :: CmfcSettings -> () #

Hashable CmfcSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcSettings

ToJSON CmfcSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcSettings

FromJSON CmfcSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcSettings

type Rep CmfcSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.CmfcSettings

newCmfcSettings :: CmfcSettings Source #

Create a value of CmfcSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:descriptiveVideoServiceFlag:CmfcSettings', cmfcSettings_descriptiveVideoServiceFlag - Specify whether to flag this audio track as descriptive video service (DVS) in your HLS parent manifest. When you choose Flag (FLAG), MediaConvert includes the parameter CHARACTERISTICS="public.accessibility.describes-video" in the EXT-X-MEDIA entry for this track. When you keep the default choice, Don't flag (DONT_FLAG), MediaConvert leaves this parameter out. The DVS flag can help with accessibility on Apple devices. For more information, see the Apple documentation.

$sel:audioRenditionSets:CmfcSettings', cmfcSettings_audioRenditionSets - List the audio rendition groups that you want included with this video rendition. Use a comma-separated list. For example, say you want to include the audio rendition groups that have the audio group IDs "audio_aac_1" and "audio_dolby". Then you would specify this value: "audio_aac_1, audio_dolby". Related setting: The rendition groups that you include in your comma-separated list should all match values that you specify in the setting Audio group ID (AudioGroupId) for audio renditions in the same output group as this video rendition. Default behavior: If you don't specify anything here and for Audio group ID, MediaConvert puts each audio variant in its own audio rendition group and associates it with every video variant. Each value in your list appears in your HLS parent manifest in the EXT-X-STREAM-INF tag as the value for the AUDIO attribute. To continue the previous example, say that the file name for the child manifest for your video rendition is "amazing_video_1.m3u8". Then, in your parent manifest, each value will appear on separate lines, like this: #EXT-X-STREAM-INF:AUDIO="audio_aac_1"... amazing_video_1.m3u8 #EXT-X-STREAM-INF:AUDIO="audio_dolby"... amazing_video_1.m3u8

$sel:iFrameOnlyManifest:CmfcSettings', cmfcSettings_iFrameOnlyManifest - Choose Include (INCLUDE) to have MediaConvert generate an HLS child manifest that lists only the I-frames for this rendition, in addition to your regular manifest for this rendition. You might use this manifest as part of a workflow that creates preview functions for your video. MediaConvert adds both the I-frame only child manifest and the regular child manifest to the parent manifest. When you don't need the I-frame only child manifest, keep the default value Exclude (EXCLUDE).

$sel:scte35Esam:CmfcSettings', cmfcSettings_scte35Esam - Use this setting only when you specify SCTE-35 markers from ESAM. Choose INSERT to put SCTE-35 markers in this output at the insertion points that you specify in an ESAM XML document. Provide the document in the setting SCC XML (sccXml).

$sel:audioDuration:CmfcSettings', cmfcSettings_audioDuration - Specify this setting only when your output will be consumed by a downstream repackaging workflow that is sensitive to very small duration differences between video and audio. For this situation, choose Match video duration (MATCH_VIDEO_DURATION). In all other cases, keep the default value, Default codec duration (DEFAULT_CODEC_DURATION). When you choose Match video duration, MediaConvert pads the output audio streams with silence or trims them to ensure that the total duration of each audio stream is at least as long as the total duration of the video stream. After padding or trimming, the audio stream duration is no more than one frame longer than the video stream. MediaConvert applies audio padding or trimming only to the end of the last segment of the output. For unsegmented outputs, MediaConvert adds padding only to the end of the file. When you keep the default value, any minor discrepancies between audio and video duration will depend on your output audio codec.

$sel:audioGroupId:CmfcSettings', cmfcSettings_audioGroupId - Specify the audio rendition group for this audio rendition. Specify up to one value for each audio output in your output group. This value appears in your HLS parent manifest in the EXT-X-MEDIA tag of TYPE=AUDIO, as the value for the GROUP-ID attribute. For example, if you specify "audio_aac_1" for Audio group ID, it appears in your manifest like this: #EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio_aac_1". Related setting: To associate the rendition group that this audio track belongs to with a video rendition, include the same value that you provide here for that video output's setting Audio rendition sets (audioRenditionSets).

$sel:scte35Source:CmfcSettings', cmfcSettings_scte35Source - Ignore this setting unless you have SCTE-35 markers in your input video file. Choose Passthrough (PASSTHROUGH) if you want SCTE-35 markers that appear in your input to also appear in this output. Choose None (NONE) if you don't want those SCTE-35 markers in this output.

$sel:audioTrackType:CmfcSettings', cmfcSettings_audioTrackType - Use this setting to control the values that MediaConvert puts in your HLS parent playlist to control how the client player selects which audio track to play. The other options for this setting determine the values that MediaConvert writes for the DEFAULT and AUTOSELECT attributes of the EXT-X-MEDIA entry for the audio variant. For more information about these attributes, see the Apple documentation article https://developer.apple.com/documentation/http_live_streaming/example_playlists_for_http_live_streaming/adding_alternate_media_to_a_playlist. Choose Alternate audio, auto select, default (ALTERNATE_AUDIO_AUTO_SELECT_DEFAULT) to set DEFAULT=YES and AUTOSELECT=YES. Choose this value for only one variant in your output group. Choose Alternate audio, auto select, not default (ALTERNATE_AUDIO_AUTO_SELECT) to set DEFAULT=NO and AUTOSELECT=YES. Choose Alternate Audio, Not Auto Select to set DEFAULT=NO and AUTOSELECT=NO. When you don't specify a value for this setting, MediaConvert defaults to Alternate audio, auto select, default. When there is more than one variant in your output group, you must explicitly choose a value for this setting.
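
A minimal sketch of pairing a video rendition with an audio rendition group through these settings, using the lenses below with Control.Lens; the group name "audio_aac_1" is illustrative only:

{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- Audio output: place it in a hypothetical rendition group "audio_aac_1".
exampleAudioCmfc :: CmfcSettings
exampleAudioCmfc =
  newCmfcSettings
    & cmfcSettings_audioGroupId ?~ "audio_aac_1"

-- Video output: reference that rendition group so the HLS parent manifest
-- associates the two, as described for audioRenditionSets above.
exampleVideoCmfc :: CmfcSettings
exampleVideoCmfc =
  newCmfcSettings
    & cmfcSettings_audioRenditionSets ?~ "audio_aac_1"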

cmfcSettings_descriptiveVideoServiceFlag :: Lens' CmfcSettings (Maybe CmfcDescriptiveVideoServiceFlag) Source #

Specify whether to flag this audio track as descriptive video service (DVS) in your HLS parent manifest. When you choose Flag (FLAG), MediaConvert includes the parameter CHARACTERISTICS="public.accessibility.describes-video" in the EXT-X-MEDIA entry for this track. When you keep the default choice, Don't flag (DONT_FLAG), MediaConvert leaves this parameter out. The DVS flag can help with accessibility on Apple devices. For more information, see the Apple documentation.

cmfcSettings_audioRenditionSets :: Lens' CmfcSettings (Maybe Text) Source #

List the audio rendition groups that you want included with this video rendition. Use a comma-separated list. For example, say you want to include the audio rendition groups that have the audio group IDs "audio_aac_1" and "audio_dolby". Then you would specify this value: "audio_aac_1, audio_dolby". Related setting: The rendition groups that you include in your comma-separated list should all match values that you specify in the setting Audio group ID (AudioGroupId) for audio renditions in the same output group as this video rendition. Default behavior: If you don't specify anything here and for Audio group ID, MediaConvert puts each audio variant in its own audio rendition group and associates it with every video variant. Each value in your list appears in your HLS parent manifest in the EXT-X-STREAM-INF tag as the value for the AUDIO attribute. To continue the previous example, say that the file name for the child manifest for your video rendition is "amazing_video_1.m3u8". Then, in your parent manifest, each value will appear on separate lines, like this: #EXT-X-STREAM-INF:AUDIO="audio_aac_1"... amazing_video_1.m3u8 #EXT-X-STREAM-INF:AUDIO="audio_dolby"... amazing_video_1.m3u8

cmfcSettings_iFrameOnlyManifest :: Lens' CmfcSettings (Maybe CmfcIFrameOnlyManifest) Source #

Choose Include (INCLUDE) to have MediaConvert generate an HLS child manifest that lists only the I-frames for this rendition, in addition to your regular manifest for this rendition. You might use this manifest as part of a workflow that creates preview functions for your video. MediaConvert adds both the I-frame only child manifest and the regular child manifest to the parent manifest. When you don't need the I-frame only child manifest, keep the default value Exclude (EXCLUDE).

cmfcSettings_scte35Esam :: Lens' CmfcSettings (Maybe CmfcScte35Esam) Source #

Use this setting only when you specify SCTE-35 markers from ESAM. Choose INSERT to put SCTE-35 markers in this output at the insertion points that you specify in an ESAM XML document. Provide the document in the setting SCC XML (sccXml).

cmfcSettings_audioDuration :: Lens' CmfcSettings (Maybe CmfcAudioDuration) Source #

Specify this setting only when your output will be consumed by a downstream repackaging workflow that is sensitive to very small duration differences between video and audio. For this situation, choose Match video duration (MATCH_VIDEO_DURATION). In all other cases, keep the default value, Default codec duration (DEFAULT_CODEC_DURATION). When you choose Match video duration, MediaConvert pads the output audio streams with silence or trims them to ensure that the total duration of each audio stream is at least as long as the total duration of the video stream. After padding or trimming, the audio stream duration is no more than one frame longer than the video stream. MediaConvert applies audio padding or trimming only to the end of the last segment of the output. For unsegmented outputs, MediaConvert adds padding only to the end of the file. When you keep the default value, any minor discrepancies between audio and video duration will depend on your output audio codec.

cmfcSettings_audioGroupId :: Lens' CmfcSettings (Maybe Text) Source #

Specify the audio rendition group for this audio rendition. Specify up to one value for each audio output in your output group. This value appears in your HLS parent manifest in the EXT-X-MEDIA tag of TYPE=AUDIO, as the value for the GROUP-ID attribute. For example, if you specify "audio_aac_1" for Audio group ID, it appears in your manifest like this: #EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio_aac_1". Related setting: To associate the rendition group that this audio track belongs to with a video rendition, include the same value that you provide here for that video output's setting Audio rendition sets (audioRenditionSets).

cmfcSettings_scte35Source :: Lens' CmfcSettings (Maybe CmfcScte35Source) Source #

Ignore this setting unless you have SCTE-35 markers in your input video file. Choose Passthrough (PASSTHROUGH) if you want SCTE-35 markers that appear in your input to also appear in this output. Choose None (NONE) if you don't want those SCTE-35 markers in this output.

cmfcSettings_audioTrackType :: Lens' CmfcSettings (Maybe CmfcAudioTrackType) Source #

Use this setting to control the values that MediaConvert puts in your HLS parent playlist to control how the client player selects which audio track to play. The other options for this setting determine the values that MediaConvert writes for the DEFAULT and AUTOSELECT attributes of the EXT-X-MEDIA entry for the audio variant. For more information about these attributes, see the Apple documentation article https://developer.apple.com/documentation/http_live_streaming/example_playlists_for_http_live_streaming/adding_alternate_media_to_a_playlist. Choose Alternate audio, auto select, default (ALTERNATE_AUDIO_AUTO_SELECT_DEFAULT) to set DEFAULT=YES and AUTOSELECT=YES. Choose this value for only one variant in your output group. Choose Alternate audio, auto select, not default (ALTERNATE_AUDIO_AUTO_SELECT) to set DEFAULT=NO and AUTOSELECT=YES. Choose Alternate Audio, Not Auto Select to set DEFAULT=NO and AUTOSELECT=NO. When you don't specify a value for this setting, MediaConvert defaults to Alternate audio, auto select, default. When there is more than one variant in your output group, you must explicitly choose a value for this setting.

ColorCorrector

data ColorCorrector Source #

Settings for color correction.

See: newColorCorrector smart constructor.

Constructors

ColorCorrector' 

Fields

  • saturation :: Maybe Natural

    Saturation level.

  • hue :: Maybe Int

    Hue in degrees.

  • sampleRangeConversion :: Maybe SampleRangeConversion

    Specify the video color sample range for this output. To create a full range output, you must start with a full range YUV input and keep the default value, None (NONE). To create a limited range output from a full range input, choose Limited range (LIMITED_RANGE_SQUEEZE). With RGB inputs, your output is always limited range, regardless of your choice here. When you create a limited range output from a full range input, MediaConvert limits the active pixel values in a way that depends on the output's bit depth: 8-bit outputs contain only values from 16 through 235 and 10-bit outputs contain only values from 64 through 940. With this conversion, MediaConvert also changes the output metadata to note the limited range.

  • colorSpaceConversion :: Maybe ColorSpaceConversion

    Specify the color space you want for this output. The service supports conversion between HDR formats, between SDR formats, from SDR to HDR, and from HDR to SDR. SDR to HDR conversion doesn't upgrade the dynamic range. The converted video has an HDR format, but visually appears the same as an unconverted output. HDR to SDR conversion uses Elemental tone mapping technology to approximate the outcome of manually regrading from HDR to SDR.

  • hdr10Metadata :: Maybe Hdr10Metadata

    Use these settings when you convert to the HDR 10 color space. Specify the SMPTE ST 2086 Mastering Display Color Volume static metadata that you want signaled in the output. These values don't affect the pixel values that are encoded in the video stream. They are intended to help the downstream video player display content in a way that reflects the intentions of the content creator. When you set Color space conversion (ColorSpaceConversion) to HDR 10 (FORCE_HDR10), these settings are required. You must set values for Max frame average light level (maxFrameAverageLightLevel) and Max content light level (maxContentLightLevel); these settings don't have a default value. The default values for the other HDR 10 metadata settings are defined by the P3D65 color space. For more information about MediaConvert HDR jobs, see https://docs.aws.amazon.com/console/mediaconvert/hdr.

  • contrast :: Maybe Natural

    Contrast level.

  • brightness :: Maybe Natural

    Brightness level.

Instances

Instances details
Eq ColorCorrector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorCorrector

Read ColorCorrector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorCorrector

Show ColorCorrector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorCorrector

Generic ColorCorrector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorCorrector

Associated Types

type Rep ColorCorrector :: Type -> Type #

NFData ColorCorrector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorCorrector

Methods

rnf :: ColorCorrector -> () #

Hashable ColorCorrector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorCorrector

ToJSON ColorCorrector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorCorrector

FromJSON ColorCorrector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorCorrector

type Rep ColorCorrector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ColorCorrector

type Rep ColorCorrector = D1 ('MetaData "ColorCorrector" "Amazonka.MediaConvert.Types.ColorCorrector" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "ColorCorrector'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "saturation") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "hue") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)) :*: S1 ('MetaSel ('Just "sampleRangeConversion") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe SampleRangeConversion)))) :*: ((S1 ('MetaSel ('Just "colorSpaceConversion") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ColorSpaceConversion)) :*: S1 ('MetaSel ('Just "hdr10Metadata") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Hdr10Metadata))) :*: (S1 ('MetaSel ('Just "contrast") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "brightness") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))))

newColorCorrector :: ColorCorrector Source #

Create a value of ColorCorrector with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:saturation:ColorCorrector', colorCorrector_saturation - Saturation level.

$sel:hue:ColorCorrector', colorCorrector_hue - Hue in degrees.

$sel:sampleRangeConversion:ColorCorrector', colorCorrector_sampleRangeConversion - Specify the video color sample range for this output. To create a full range output, you must start with a full range YUV input and keep the default value, None (NONE). To create a limited range output from a full range input, choose Limited range (LIMITED_RANGE_SQUEEZE). With RGB inputs, your output is always limited range, regardless of your choice here. When you create a limited range output from a full range input, MediaConvert limits the active pixel values in a way that depends on the output's bit depth: 8-bit outputs contain only values from 16 through 235 and 10-bit outputs contain only values from 64 through 940. With this conversion, MediaConvert also changes the output metadata to note the limited range.

$sel:colorSpaceConversion:ColorCorrector', colorCorrector_colorSpaceConversion - Specify the color space you want for this output. The service supports conversion between HDR formats, between SDR formats, from SDR to HDR, and from HDR to SDR. SDR to HDR conversion doesn't upgrade the dynamic range. The converted video has an HDR format, but visually appears the same as an unconverted output. HDR to SDR conversion uses Elemental tone mapping technology to approximate the outcome of manually regrading from HDR to SDR.

$sel:hdr10Metadata:ColorCorrector', colorCorrector_hdr10Metadata - Use these settings when you convert to the HDR 10 color space. Specify the SMPTE ST 2086 Mastering Display Color Volume static metadata that you want signaled in the output. These values don't affect the pixel values that are encoded in the video stream. They are intended to help the downstream video player display content in a way that reflects the intentions of the content creator. When you set Color space conversion (ColorSpaceConversion) to HDR 10 (FORCE_HDR10), these settings are required. You must set values for Max frame average light level (maxFrameAverageLightLevel) and Max content light level (maxContentLightLevel); these settings don't have a default value. The default values for the other HDR 10 metadata settings are defined by the P3D65 color space. For more information about MediaConvert HDR jobs, see https://docs.aws.amazon.com/console/mediaconvert/hdr.

$sel:contrast:ColorCorrector', colorCorrector_contrast - Contrast level.

$sel:brightness:ColorCorrector', colorCorrector_brightness - Brightness level.
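
A minimal sketch of a mild correction using the numeric fields above with Control.Lens; the specific levels are illustrative only, and the remaining fields keep their defaults:

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- Hypothetical correction: slightly raise brightness and contrast and shift
-- the hue by -10 degrees. Saturation and the HDR-related fields are left unset.
exampleColorCorrector :: ColorCorrector
exampleColorCorrector =
  newColorCorrector
    & colorCorrector_brightness ?~ 55
    & colorCorrector_contrast ?~ 55
    & colorCorrector_hue ?~ (-10)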

colorCorrector_sampleRangeConversion :: Lens' ColorCorrector (Maybe SampleRangeConversion) Source #

Specify the video color sample range for this output. To create a full range output, you must start with a full range YUV input and keep the default value, None (NONE). To create a limited range output from a full range input, choose Limited range (LIMITED_RANGE_SQUEEZE). With RGB inputs, your output is always limited range, regardless of your choice here. When you create a limited range output from a full range input, MediaConvert limits the active pixel values in a way that depends on the output's bit depth: 8-bit outputs contain only values from 16 through 235 and 10-bit outputs contain only values from 64 through 940. With this conversion, MediaConvert also changes the output metadata to note the limited range.

colorCorrector_colorSpaceConversion :: Lens' ColorCorrector (Maybe ColorSpaceConversion) Source #

Specify the color space you want for this output. The service supports conversion between HDR formats, between SDR formats, from SDR to HDR, and from HDR to SDR. SDR to HDR conversion doesn't upgrade the dynamic range. The converted video has an HDR format, but visually appears the same as an unconverted output. HDR to SDR conversion uses Elemental tone mapping technology to approximate the outcome of manually regrading from HDR to SDR.

colorCorrector_hdr10Metadata :: Lens' ColorCorrector (Maybe Hdr10Metadata) Source #

Use these settings when you convert to the HDR 10 color space. Specify the SMPTE ST 2086 Mastering Display Color Volume static metadata that you want signaled in the output. These values don't affect the pixel values that are encoded in the video stream. They are intended to help the downstream video player display content in a way that reflects the intentions of the content creator. When you set Color space conversion (ColorSpaceConversion) to HDR 10 (FORCE_HDR10), these settings are required. You must set values for Max frame average light level (maxFrameAverageLightLevel) and Max content light level (maxContentLightLevel); these settings don't have a default value. The default values for the other HDR 10 metadata settings are defined by the P3D65 color space. For more information about MediaConvert HDR jobs, see https://docs.aws.amazon.com/console/mediaconvert/hdr.

ContainerSettings

data ContainerSettings Source #

Container specific settings.

See: newContainerSettings smart constructor.

Constructors

ContainerSettings' 

Fields

  • m2tsSettings :: Maybe M2tsSettings

    MPEG-2 TS container settings. These apply to outputs in a File output group when the output's container (ContainerType) is MPEG-2 Transport Stream (M2TS). In these assets, data is organized by the program map table (PMT). Each transport stream program contains subsets of data, including audio, video, and metadata. Each of these subsets of data has a numerical label called a packet identifier (PID). Each transport stream program corresponds to one MediaConvert output. The PMT lists the types of data in a program along with their PID. Downstream systems and players use the program map table to look up the PID for each type of data it accesses and then uses the PIDs to locate specific data within the asset.

  • mxfSettings :: Maybe MxfSettings

    These settings relate to your MXF output container.

  • m3u8Settings :: Maybe M3u8Settings

    These settings relate to the MPEG-2 transport stream (MPEG2-TS) container for the MPEG2-TS segments in your HLS outputs.

  • cmfcSettings :: Maybe CmfcSettings

    These settings relate to the fragmented MP4 container for the segments in your CMAF outputs.

  • movSettings :: Maybe MovSettings

    These settings relate to your QuickTime MOV output container.

  • mp4Settings :: Maybe Mp4Settings

    These settings relate to your MP4 output container. You can create audio only outputs with this container. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/supported-codecs-containers-audio-only.html#output-codecs-and-containers-supported-for-audio-only.

  • mpdSettings :: Maybe MpdSettings

    These settings relate to the fragmented MP4 container for the segments in your DASH outputs.

  • container :: Maybe ContainerType

    Container for this output. Some containers require a container settings object. If not specified, the default object will be created.

  • f4vSettings :: Maybe F4vSettings

    Settings for F4v container

Instances

Instances details
Eq ContainerSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerSettings

Read ContainerSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerSettings

Show ContainerSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerSettings

Generic ContainerSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerSettings

Associated Types

type Rep ContainerSettings :: Type -> Type #

NFData ContainerSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerSettings

Methods

rnf :: ContainerSettings -> () #

Hashable ContainerSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerSettings

ToJSON ContainerSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerSettings

FromJSON ContainerSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerSettings

type Rep ContainerSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ContainerSettings

newContainerSettings :: ContainerSettings Source #

Create a value of ContainerSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:m2tsSettings:ContainerSettings', containerSettings_m2tsSettings - MPEG-2 TS container settings. These apply to outputs in a File output group when the output's container (ContainerType) is MPEG-2 Transport Stream (M2TS). In these assets, data is organized by the program map table (PMT). Each transport stream program contains subsets of data, including audio, video, and metadata. Each of these subsets of data has a numerical label called a packet identifier (PID). Each transport stream program corresponds to one MediaConvert output. The PMT lists the types of data in a program along with their PID. Downstream systems and players use the program map table to look up the PID for each type of data they access and then use the PIDs to locate specific data within the asset.

$sel:mxfSettings:ContainerSettings', containerSettings_mxfSettings - These settings relate to your MXF output container.

$sel:m3u8Settings:ContainerSettings', containerSettings_m3u8Settings - These settings relate to the MPEG-2 transport stream (MPEG2-TS) container for the MPEG2-TS segments in your HLS outputs.

$sel:cmfcSettings:ContainerSettings', containerSettings_cmfcSettings - These settings relate to the fragmented MP4 container for the segments in your CMAF outputs.

$sel:movSettings:ContainerSettings', containerSettings_movSettings - These settings relate to your QuickTime MOV output container.

$sel:mp4Settings:ContainerSettings', containerSettings_mp4Settings - These settings relate to your MP4 output container. You can create audio only outputs with this container. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/supported-codecs-containers-audio-only.html#output-codecs-and-containers-supported-for-audio-only.

$sel:mpdSettings:ContainerSettings', containerSettings_mpdSettings - These settings relate to the fragmented MP4 container for the segments in your DASH outputs.

$sel:container:ContainerSettings', containerSettings_container - Container for this output. Some containers require a container settings object. If not specified, the default object will be created.

$sel:f4vSettings:ContainerSettings', containerSettings_f4vSettings - Settings for F4v container

containerSettings_m2tsSettings :: Lens' ContainerSettings (Maybe M2tsSettings) Source #

MPEG-2 TS container settings. These apply to outputs in a File output group when the output's container (ContainerType) is MPEG-2 Transport Stream (M2TS). In these assets, data is organized by the program map table (PMT). Each transport stream program contains subsets of data, including audio, video, and metadata. Each of these subsets of data has a numerical label called a packet identifier (PID). Each transport stream program corresponds to one MediaConvert output. The PMT lists the types of data in a program along with their PID. Downstream systems and players use the program map table to look up the PID for each type of data they access and then use the PIDs to locate specific data within the asset.

containerSettings_mxfSettings :: Lens' ContainerSettings (Maybe MxfSettings) Source #

These settings relate to your MXF output container.

containerSettings_m3u8Settings :: Lens' ContainerSettings (Maybe M3u8Settings) Source #

These settings relate to the MPEG-2 transport stream (MPEG2-TS) container for the MPEG2-TS segments in your HLS outputs.

containerSettings_cmfcSettings :: Lens' ContainerSettings (Maybe CmfcSettings) Source #

These settings relate to the fragmented MP4 container for the segments in your CMAF outputs.

containerSettings_movSettings :: Lens' ContainerSettings (Maybe MovSettings) Source #

These settings relate to your QuickTime MOV output container.

containerSettings_mp4Settings :: Lens' ContainerSettings (Maybe Mp4Settings) Source #

These settings relate to your MP4 output container. You can create audio only outputs with this container. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/supported-codecs-containers-audio-only.html#output-codecs-and-containers-supported-for-audio-only.

containerSettings_mpdSettings :: Lens' ContainerSettings (Maybe MpdSettings) Source #

These settings relate to the fragmented MP4 container for the segments in your DASH outputs.

containerSettings_container :: Lens' ContainerSettings (Maybe ContainerType) Source #

Container for this output. Some containers require a container settings object. If not specified, the default object will be created.
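
Putting these pieces together, here is a minimal sketch of an MP4 output container built with newContainerSettings and the lenses above. It assumes the (&) and (?~) operators from the lens package; the ContainerType_MP4 pattern synonym and the newMp4Settings constructor are not shown in this section and are assumed to follow the library's usual naming.

import qualified Amazonka.MediaConvert.Types as MC
import Control.Lens ((&), (?~))

-- Choose the MP4 container and attach its (all-defaults) settings object.
mp4Container :: MC.ContainerSettings
mp4Container =
  MC.newContainerSettings
    & MC.containerSettings_container ?~ MC.ContainerType_MP4
    & MC.containerSettings_mp4Settings ?~ MC.newMp4Settings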

DashAdditionalManifest

data DashAdditionalManifest Source #

Specify the details for each additional DASH manifest that you want the service to generate for this output group. Each manifest can reference a different subset of outputs in the group.

See: newDashAdditionalManifest smart constructor.

Constructors

DashAdditionalManifest' 

Fields

  • manifestNameModifier :: Maybe Text

    Specify a name modifier that the service adds to the name of this manifest to make it different from the file names of the other main manifests in the output group. For example, say that the default main manifest for your DASH group is film-name.mpd. If you enter "-no-premium" for this setting, then the file name the service generates for this top-level manifest is film-name-no-premium.mpd.

  • selectedOutputs :: Maybe [Text]

    Specify the outputs that you want this additional top-level manifest to reference.

Instances

Instances details
Eq DashAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashAdditionalManifest

Read DashAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashAdditionalManifest

Show DashAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashAdditionalManifest

Generic DashAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashAdditionalManifest

Associated Types

type Rep DashAdditionalManifest :: Type -> Type #

NFData DashAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashAdditionalManifest

Methods

rnf :: DashAdditionalManifest -> () #

Hashable DashAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashAdditionalManifest

ToJSON DashAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashAdditionalManifest

FromJSON DashAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashAdditionalManifest

type Rep DashAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashAdditionalManifest

type Rep DashAdditionalManifest = D1 ('MetaData "DashAdditionalManifest" "Amazonka.MediaConvert.Types.DashAdditionalManifest" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "DashAdditionalManifest'" 'PrefixI 'True) (S1 ('MetaSel ('Just "manifestNameModifier") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "selectedOutputs") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [Text]))))

newDashAdditionalManifest :: DashAdditionalManifest Source #

Create a value of DashAdditionalManifest with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:manifestNameModifier:DashAdditionalManifest', dashAdditionalManifest_manifestNameModifier - Specify a name modifier that the service adds to the name of this manifest to make it different from the file names of the other main manifests in the output group. For example, say that the default main manifest for your DASH group is film-name.mpd. If you enter "-no-premium" for this setting, then the file name the service generates for this top-level manifest is film-name-no-premium.mpd.

$sel:selectedOutputs:DashAdditionalManifest', dashAdditionalManifest_selectedOutputs - Specify the outputs that you want this additional top-level manifest to reference.

dashAdditionalManifest_manifestNameModifier :: Lens' DashAdditionalManifest (Maybe Text) Source #

Specify a name modifier that the service adds to the name of this manifest to make it different from the file names of the other main manifests in the output group. For example, say that the default main manifest for your DASH group is film-name.mpd. If you enter "-no-premium" for this setting, then the file name the service generates for this top-level manifest is film-name-no-premium.mpd.

dashAdditionalManifest_selectedOutputs :: Lens' DashAdditionalManifest (Maybe [Text]) Source #

Specify the outputs that you want this additional top-level manifest to reference.
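
For example, the following sketch creates an additional manifest named film-name-no-premium.mpd that references two hypothetical outputs. It assumes the (&) and (?~) operators from the lens package and OverloadedStrings for the Text literals.

{-# LANGUAGE OverloadedStrings #-}

import qualified Amazonka.MediaConvert.Types as MC
import Control.Lens ((&), (?~))

-- The output names below are placeholders for outputs defined elsewhere in the job.
noPremiumManifest :: MC.DashAdditionalManifest
noPremiumManifest =
  MC.newDashAdditionalManifest
    & MC.dashAdditionalManifest_manifestNameModifier ?~ "-no-premium"
    & MC.dashAdditionalManifest_selectedOutputs ?~ ["dash_720p", "dash_480p"]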

DashIsoEncryptionSettings

data DashIsoEncryptionSettings Source #

Specifies DRM settings for DASH outputs.

See: newDashIsoEncryptionSettings smart constructor.

Constructors

DashIsoEncryptionSettings' 

Fields

  • playbackDeviceCompatibility :: Maybe DashIsoPlaybackDeviceCompatibility

    This setting can improve the compatibility of your output with video players on obsolete devices. It applies only to DASH H.264 outputs with DRM encryption. Choose Unencrypted SEI (UNENCRYPTED_SEI) only to correct problems with playback on older devices. Otherwise, keep the default setting CENC v1 (CENC_V1). If you choose Unencrypted SEI, for that output, the service will exclude the access unit delimiter and will leave the SEI NAL units unencrypted.

  • spekeKeyProvider :: Maybe SpekeKeyProvider

    If your output group type is HLS, DASH, or Microsoft Smooth, use these settings when doing DRM encryption with a SPEKE-compliant key provider. If your output group type is CMAF, use the SpekeKeyProviderCmaf settings instead.

Instances

Instances details
Eq DashIsoEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoEncryptionSettings

Read DashIsoEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoEncryptionSettings

Show DashIsoEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoEncryptionSettings

Generic DashIsoEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoEncryptionSettings

Associated Types

type Rep DashIsoEncryptionSettings :: Type -> Type #

NFData DashIsoEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoEncryptionSettings

Hashable DashIsoEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoEncryptionSettings

ToJSON DashIsoEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoEncryptionSettings

FromJSON DashIsoEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoEncryptionSettings

type Rep DashIsoEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoEncryptionSettings

type Rep DashIsoEncryptionSettings = D1 ('MetaData "DashIsoEncryptionSettings" "Amazonka.MediaConvert.Types.DashIsoEncryptionSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "DashIsoEncryptionSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "playbackDeviceCompatibility") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DashIsoPlaybackDeviceCompatibility)) :*: S1 ('MetaSel ('Just "spekeKeyProvider") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe SpekeKeyProvider))))

newDashIsoEncryptionSettings :: DashIsoEncryptionSettings Source #

Create a value of DashIsoEncryptionSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:playbackDeviceCompatibility:DashIsoEncryptionSettings', dashIsoEncryptionSettings_playbackDeviceCompatibility - This setting can improve the compatibility of your output with video players on obsolete devices. It applies only to DASH H.264 outputs with DRM encryption. Choose Unencrypted SEI (UNENCRYPTED_SEI) only to correct problems with playback on older devices. Otherwise, keep the default setting CENC v1 (CENC_V1). If you choose Unencrypted SEI, for that output, the service will exclude the access unit delimiter and will leave the SEI NAL units unencrypted.

$sel:spekeKeyProvider:DashIsoEncryptionSettings', dashIsoEncryptionSettings_spekeKeyProvider - If your output group type is HLS, DASH, or Microsoft Smooth, use these settings when doing DRM encryption with a SPEKE-compliant key provider. If your output group type is CMAF, use the SpekeKeyProviderCmaf settings instead.

dashIsoEncryptionSettings_playbackDeviceCompatibility :: Lens' DashIsoEncryptionSettings (Maybe DashIsoPlaybackDeviceCompatibility) Source #

This setting can improve the compatibility of your output with video players on obsolete devices. It applies only to DASH H.264 outputs with DRM encryption. Choose Unencrypted SEI (UNENCRYPTED_SEI) only to correct problems with playback on older devices. Otherwise, keep the default setting CENC v1 (CENC_V1). If you choose Unencrypted SEI, for that output, the service will exclude the access unit delimiter and will leave the SEI NAL units unencrypted.

dashIsoEncryptionSettings_spekeKeyProvider :: Lens' DashIsoEncryptionSettings (Maybe SpekeKeyProvider) Source #

If your output group type is HLS, DASH, or Microsoft Smooth, use these settings when doing DRM encryption with a SPEKE-compliant key provider. If your output group type is CMAF, use the SpekeKeyProviderCmaf settings instead.
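
A minimal sketch of these DRM settings, assuming the (&) and (?~) operators from the lens package and that the DashIsoPlaybackDeviceCompatibility pattern synonym follows the library's usual TypeName_VALUE naming. A real job would also set dashIsoEncryptionSettings_spekeKeyProvider to point at a SPEKE-compliant key provider.

import qualified Amazonka.MediaConvert.Types as MC
import Control.Lens ((&), (?~))

-- Keep the default CENC v1 signaling explicit for broad player compatibility.
dashDrm :: MC.DashIsoEncryptionSettings
dashDrm =
  MC.newDashIsoEncryptionSettings
    & MC.dashIsoEncryptionSettings_playbackDeviceCompatibility
        ?~ MC.DashIsoPlaybackDeviceCompatibility_CENC_V1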

DashIsoGroupSettings

data DashIsoGroupSettings Source #

Settings related to your DASH output package. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/outputs-file-ABR.html. When you work directly in your JSON job specification, include this object and any required children when you set Type, under OutputGroupSettings, to DASH_ISO_GROUP_SETTINGS.

See: newDashIsoGroupSettings smart constructor.

Constructors

DashIsoGroupSettings' 

Fields

  • fragmentLength :: Maybe Natural

    Length of fragments to generate (in seconds). Fragment length must be compatible with GOP size and Framerate. Note that fragments will end on the next keyframe after this number of seconds, so actual fragment length may be longer. When Emit Single File is checked, the fragmentation is internal to a single output file and it does not cause the creation of many output files as in other output types.

  • segmentControl :: Maybe DashIsoSegmentControl

    When set to SINGLE_FILE, a single output file is generated, which is internally segmented using the Fragment Length and Segment Length. When set to SEGMENTED_FILES, separate segment files will be created.

  • destination :: Maybe Text

    Use Destination (Destination) to specify the S3 output location and the output filename base. Destination accepts format identifiers. If you do not specify the base filename in the URI, the service will use the filename of the input file. If your job has multiple inputs, the service uses the filename of the first input file.

  • hbbtvCompliance :: Maybe DashIsoHbbtvCompliance

    Supports HbbTV specification as indicated

  • minBufferTime :: Maybe Natural

    Minimum time of initially buffered media that is needed to ensure smooth playout.

  • mpdProfile :: Maybe DashIsoMpdProfile

    Specify whether your DASH profile is on-demand or main. When you choose Main profile (MAIN_PROFILE), the service signals urn:mpeg:dash:profile:isoff-main:2011 in your .mpd DASH manifest. When you choose On-demand (ON_DEMAND_PROFILE), the service signals urn:mpeg:dash:profile:isoff-on-demand:2011 in your .mpd. When you choose On-demand, you must also set the output group setting Segment control (SegmentControl) to Single file (SINGLE_FILE).

  • imageBasedTrickPlay :: Maybe DashIsoImageBasedTrickPlay

    Specify whether MediaConvert generates images for trick play. Keep the default value, None (NONE), to not generate any images. Choose Thumbnail (THUMBNAIL) to generate tiled thumbnails. Choose Thumbnail and full frame (THUMBNAIL_AND_FULLFRAME) to generate tiled thumbnails and full-resolution images of single frames. MediaConvert adds an entry in the .mpd manifest for each set of images that you generate. A common application for these images is Roku trick mode. The thumbnails and full-frame images that MediaConvert creates with this feature are compatible with this Roku specification: https://developer.roku.com/docs/developer-program/media-playback/trick-mode/hls-and-dash.md

  • additionalManifests :: Maybe [DashAdditionalManifest]

    By default, the service creates one .mpd DASH manifest for each DASH ISO output group in your job. This default manifest references every output in the output group. To create additional DASH manifests that reference a subset of the outputs in the output group, specify a list of them here.

  • segmentLengthControl :: Maybe DashIsoSegmentLengthControl

    Specify how you want MediaConvert to determine the segment length. Choose Exact (EXACT) to have the encoder use the exact length that you specify with the setting Segment length (SegmentLength). This might result in extra I-frames. Choose Multiple of GOP (GOP_MULTIPLE) to have the encoder round up the segment lengths to match the next GOP boundary.

  • imageBasedTrickPlaySettings :: Maybe DashIsoImageBasedTrickPlaySettings

    Tile and thumbnail settings applicable when imageBasedTrickPlay is ADVANCED

  • baseUrl :: Maybe Text

    A partial URI prefix that will be put in the manifest (.mpd) file at the top level BaseURL element. Can be used if streams are delivered from a different URL than the manifest file.

  • destinationSettings :: Maybe DestinationSettings

    Settings associated with the destination. Will vary based on the type of destination

  • minFinalSegmentLength :: Maybe Double

    Keep this setting at the default value of 0, unless you are troubleshooting a problem with how devices play back the end of your video asset. If you know that player devices are hanging on the final segment of your video because the length of your final segment is too short, use this setting to specify a minimum final segment length, in seconds. Choose a value that is greater than or equal to 1 and less than your segment length. When you specify a value for this setting, the encoder will combine any final segment that is shorter than the length that you specify with the previous segment. For example, if your segment length is 3 seconds and your final segment would otherwise be 0.5 seconds, setting the minimum final segment length to 1 results in a final segment of 3.5 seconds.

  • audioChannelConfigSchemeIdUri :: Maybe DashIsoGroupAudioChannelConfigSchemeIdUri

    Use this setting only when your audio codec is a Dolby one (AC3, EAC3, or Atmos) and your downstream workflow requires that your DASH manifest use the Dolby channel configuration tag, rather than the MPEG one. For example, you might need to use this to make dynamic ad insertion work. Specify which audio channel configuration scheme ID URI MediaConvert writes in your DASH manifest. Keep the default value, MPEG channel configuration (MPEG_CHANNEL_CONFIGURATION), to have MediaConvert write this: urn:mpeg:mpegB:cicp:ChannelConfiguration. Choose Dolby channel configuration (DOLBY_CHANNEL_CONFIGURATION) to have MediaConvert write this instead: tag:dolby.com,2014:dash:audio_channel_configuration:2011.

  • encryption :: Maybe DashIsoEncryptionSettings

    DRM settings.

  • segmentLength :: Maybe Natural

    Specify the length, in whole seconds, of each segment. When you don't specify a value, MediaConvert defaults to 30. Related settings: Use Segment length control (SegmentLengthControl) to specify whether the encoder enforces this value strictly. Use Segment control (DashIsoSegmentControl) to specify whether MediaConvert creates separate segment files or one content file that has metadata to mark the segment boundaries.

  • ptsOffsetHandlingForBFrames :: Maybe DashIsoPtsOffsetHandlingForBFrames

    Use this setting only when your output video stream has B-frames, which causes the initial presentation time stamp (PTS) to be offset from the initial decode time stamp (DTS). Specify how MediaConvert handles PTS when writing time stamps in output DASH manifests. Choose Match initial PTS (MATCH_INITIAL_PTS) when you want MediaConvert to use the initial PTS as the first time stamp in the manifest. Choose Zero-based (ZERO_BASED) to have MediaConvert ignore the initial PTS in the video stream and instead write the initial time stamp as zero in the manifest. For outputs that don't have B-frames, the time stamps in your DASH manifests start at zero regardless of your choice here.

  • writeSegmentTimelineInRepresentation :: Maybe DashIsoWriteSegmentTimelineInRepresentation

    If you get an HTTP error in the 400 range when you play back your DASH output, enable this setting and run your transcoding job again. When you enable this setting, the service writes precise segment durations in the DASH manifest. The segment duration information appears inside the SegmentTimeline element, inside SegmentTemplate at the Representation level. When you don't enable this setting, the service writes approximate segment durations in your DASH manifest.

Instances

Instances details
Eq DashIsoGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupSettings

Read DashIsoGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupSettings

Show DashIsoGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupSettings

Generic DashIsoGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupSettings

Associated Types

type Rep DashIsoGroupSettings :: Type -> Type #

NFData DashIsoGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupSettings

Methods

rnf :: DashIsoGroupSettings -> () #

Hashable DashIsoGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupSettings

ToJSON DashIsoGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupSettings

FromJSON DashIsoGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupSettings

type Rep DashIsoGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoGroupSettings

type Rep DashIsoGroupSettings = D1 ('MetaData "DashIsoGroupSettings" "Amazonka.MediaConvert.Types.DashIsoGroupSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "DashIsoGroupSettings'" 'PrefixI 'True) ((((S1 ('MetaSel ('Just "fragmentLength") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "segmentControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DashIsoSegmentControl))) :*: (S1 ('MetaSel ('Just "destination") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "hbbtvCompliance") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DashIsoHbbtvCompliance)))) :*: ((S1 ('MetaSel ('Just "minBufferTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "mpdProfile") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DashIsoMpdProfile))) :*: (S1 ('MetaSel ('Just "imageBasedTrickPlay") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DashIsoImageBasedTrickPlay)) :*: (S1 ('MetaSel ('Just "additionalManifests") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [DashAdditionalManifest])) :*: S1 ('MetaSel ('Just "segmentLengthControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DashIsoSegmentLengthControl)))))) :*: (((S1 ('MetaSel ('Just "imageBasedTrickPlaySettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DashIsoImageBasedTrickPlaySettings)) :*: S1 ('MetaSel ('Just "baseUrl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "destinationSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DestinationSettings)) :*: S1 ('MetaSel ('Just "minFinalSegmentLength") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)))) :*: ((S1 ('MetaSel ('Just "audioChannelConfigSchemeIdUri") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DashIsoGroupAudioChannelConfigSchemeIdUri)) :*: S1 ('MetaSel ('Just "encryption") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DashIsoEncryptionSettings))) :*: (S1 ('MetaSel ('Just "segmentLength") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "ptsOffsetHandlingForBFrames") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DashIsoPtsOffsetHandlingForBFrames)) :*: S1 ('MetaSel ('Just "writeSegmentTimelineInRepresentation") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DashIsoWriteSegmentTimelineInRepresentation))))))))

newDashIsoGroupSettings :: DashIsoGroupSettings Source #

Create a value of DashIsoGroupSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:fragmentLength:DashIsoGroupSettings', dashIsoGroupSettings_fragmentLength - Length of fragments to generate (in seconds). Fragment length must be compatible with GOP size and Framerate. Note that fragments will end on the next keyframe after this number of seconds, so actual fragment length may be longer. When Emit Single File is checked, the fragmentation is internal to a single output file and it does not cause the creation of many output files as in other output types.

$sel:segmentControl:DashIsoGroupSettings', dashIsoGroupSettings_segmentControl - When set to SINGLE_FILE, a single output file is generated, which is internally segmented using the Fragment Length and Segment Length. When set to SEGMENTED_FILES, separate segment files will be created.

$sel:destination:DashIsoGroupSettings', dashIsoGroupSettings_destination - Use Destination (Destination) to specify the S3 output location and the output filename base. Destination accepts format identifiers. If you do not specify the base filename in the URI, the service will use the filename of the input file. If your job has multiple inputs, the service uses the filename of the first input file.

$sel:hbbtvCompliance:DashIsoGroupSettings', dashIsoGroupSettings_hbbtvCompliance - Supports HbbTV specification as indicated

$sel:minBufferTime:DashIsoGroupSettings', dashIsoGroupSettings_minBufferTime - Minimum time of initially buffered media that is needed to ensure smooth playout.

$sel:mpdProfile:DashIsoGroupSettings', dashIsoGroupSettings_mpdProfile - Specify whether your DASH profile is on-demand or main. When you choose Main profile (MAIN_PROFILE), the service signals urn:mpeg:dash:profile:isoff-main:2011 in your .mpd DASH manifest. When you choose On-demand (ON_DEMAND_PROFILE), the service signals urn:mpeg:dash:profile:isoff-on-demand:2011 in your .mpd. When you choose On-demand, you must also set the output group setting Segment control (SegmentControl) to Single file (SINGLE_FILE).

$sel:imageBasedTrickPlay:DashIsoGroupSettings', dashIsoGroupSettings_imageBasedTrickPlay - Specify whether MediaConvert generates images for trick play. Keep the default value, None (NONE), to not generate any images. Choose Thumbnail (THUMBNAIL) to generate tiled thumbnails. Choose Thumbnail and full frame (THUMBNAIL_AND_FULLFRAME) to generate tiled thumbnails and full-resolution images of single frames. MediaConvert adds an entry in the .mpd manifest for each set of images that you generate. A common application for these images is Roku trick mode. The thumbnails and full-frame images that MediaConvert creates with this feature are compatible with this Roku specification: https://developer.roku.com/docs/developer-program/media-playback/trick-mode/hls-and-dash.md

$sel:additionalManifests:DashIsoGroupSettings', dashIsoGroupSettings_additionalManifests - By default, the service creates one .mpd DASH manifest for each DASH ISO output group in your job. This default manifest references every output in the output group. To create additional DASH manifests that reference a subset of the outputs in the output group, specify a list of them here.

$sel:segmentLengthControl:DashIsoGroupSettings', dashIsoGroupSettings_segmentLengthControl - Specify how you want MediaConvert to determine the segment length. Choose Exact (EXACT) to have the encoder use the exact length that you specify with the setting Segment length (SegmentLength). This might result in extra I-frames. Choose Multiple of GOP (GOP_MULTIPLE) to have the encoder round up the segment lengths to match the next GOP boundary.

$sel:imageBasedTrickPlaySettings:DashIsoGroupSettings', dashIsoGroupSettings_imageBasedTrickPlaySettings - Tile and thumbnail settings applicable when imageBasedTrickPlay is ADVANCED

$sel:baseUrl:DashIsoGroupSettings', dashIsoGroupSettings_baseUrl - A partial URI prefix that will be put in the manifest (.mpd) file at the top level BaseURL element. Can be used if streams are delivered from a different URL than the manifest file.

$sel:destinationSettings:DashIsoGroupSettings', dashIsoGroupSettings_destinationSettings - Settings associated with the destination. Will vary based on the type of destination

$sel:minFinalSegmentLength:DashIsoGroupSettings', dashIsoGroupSettings_minFinalSegmentLength - Keep this setting at the default value of 0, unless you are troubleshooting a problem with how devices play back the end of your video asset. If you know that player devices are hanging on the final segment of your video because the length of your final segment is too short, use this setting to specify a minimum final segment length, in seconds. Choose a value that is greater than or equal to 1 and less than your segment length. When you specify a value for this setting, the encoder will combine any final segment that is shorter than the length that you specify with the previous segment. For example, if your segment length is 3 seconds and your final segment would otherwise be 0.5 seconds, setting the minimum final segment length to 1 results in a final segment of 3.5 seconds.

$sel:audioChannelConfigSchemeIdUri:DashIsoGroupSettings', dashIsoGroupSettings_audioChannelConfigSchemeIdUri - Use this setting only when your audio codec is a Dolby one (AC3, EAC3, or Atmos) and your downstream workflow requires that your DASH manifest use the Dolby channel configuration tag, rather than the MPEG one. For example, you might need to use this to make dynamic ad insertion work. Specify which audio channel configuration scheme ID URI MediaConvert writes in your DASH manifest. Keep the default value, MPEG channel configuration (MPEG_CHANNEL_CONFIGURATION), to have MediaConvert write this: urn:mpeg:mpegB:cicp:ChannelConfiguration. Choose Dolby channel configuration (DOLBY_CHANNEL_CONFIGURATION) to have MediaConvert write this instead: tag:dolby.com,2014:dash:audio_channel_configuration:2011.

$sel:encryption:DashIsoGroupSettings', dashIsoGroupSettings_encryption - DRM settings.

$sel:segmentLength:DashIsoGroupSettings', dashIsoGroupSettings_segmentLength - Specify the length, in whole seconds, of each segment. When you don't specify a value, MediaConvert defaults to 30. Related settings: Use Segment length control (SegmentLengthControl) to specify whether the encoder enforces this value strictly. Use Segment control (DashIsoSegmentControl) to specify whether MediaConvert creates separate segment files or one content file that has metadata to mark the segment boundaries.

$sel:ptsOffsetHandlingForBFrames:DashIsoGroupSettings', dashIsoGroupSettings_ptsOffsetHandlingForBFrames - Use this setting only when your output video stream has B-frames, which causes the initial presentation time stamp (PTS) to be offset from the initial decode time stamp (DTS). Specify how MediaConvert handles PTS when writing time stamps in output DASH manifests. Choose Match initial PTS (MATCH_INITIAL_PTS) when you want MediaConvert to use the initial PTS as the first time stamp in the manifest. Choose Zero-based (ZERO_BASED) to have MediaConvert ignore the initial PTS in the video stream and instead write the initial time stamp as zero in the manifest. For outputs that don't have B-frames, the time stamps in your DASH manifests start at zero regardless of your choice here.

$sel:writeSegmentTimelineInRepresentation:DashIsoGroupSettings', dashIsoGroupSettings_writeSegmentTimelineInRepresentation - If you get an HTTP error in the 400 range when you play back your DASH output, enable this setting and run your transcoding job again. When you enable this setting, the service writes precise segment durations in the DASH manifest. The segment duration information appears inside the SegmentTimeline element, inside SegmentTemplate at the Representation level. When you don't enable this setting, the service writes approximate segment durations in your DASH manifest.

dashIsoGroupSettings_fragmentLength :: Lens' DashIsoGroupSettings (Maybe Natural) Source #

Length of fragments to generate (in seconds). Fragment length must be compatible with GOP size and Framerate. Note that fragments will end on the next keyframe after this number of seconds, so actual fragment length may be longer. When Emit Single File is checked, the fragmentation is internal to a single output file and it does not cause the creation of many output files as in other output types.

dashIsoGroupSettings_segmentControl :: Lens' DashIsoGroupSettings (Maybe DashIsoSegmentControl) Source #

When set to SINGLE_FILE, a single output file is generated, which is internally segmented using the Fragment Length and Segment Length. When set to SEGMENTED_FILES, separate segment files will be created.

dashIsoGroupSettings_destination :: Lens' DashIsoGroupSettings (Maybe Text) Source #

Use Destination (Destination) to specify the S3 output location and the output filename base. Destination accepts format identifiers. If you do not specify the base filename in the URI, the service will use the filename of the input file. If your job has multiple inputs, the service uses the filename of the first input file.

dashIsoGroupSettings_minBufferTime :: Lens' DashIsoGroupSettings (Maybe Natural) Source #

Minimum time of initially buffered media that is needed to ensure smooth playout.

dashIsoGroupSettings_mpdProfile :: Lens' DashIsoGroupSettings (Maybe DashIsoMpdProfile) Source #

Specify whether your DASH profile is on-demand or main. When you choose Main profile (MAIN_PROFILE), the service signals urn:mpeg:dash:profile:isoff-main:2011 in your .mpd DASH manifest. When you choose On-demand (ON_DEMAND_PROFILE), the service signals urn:mpeg:dash:profile:isoff-on-demand:2011 in your .mpd. When you choose On-demand, you must also set the output group setting Segment control (SegmentControl) to Single file (SINGLE_FILE).

dashIsoGroupSettings_imageBasedTrickPlay :: Lens' DashIsoGroupSettings (Maybe DashIsoImageBasedTrickPlay) Source #

Specify whether MediaConvert generates images for trick play. Keep the default value, None (NONE), to not generate any images. Choose Thumbnail (THUMBNAIL) to generate tiled thumbnails. Choose Thumbnail and full frame (THUMBNAIL_AND_FULLFRAME) to generate tiled thumbnails and full-resolution images of single frames. MediaConvert adds an entry in the .mpd manifest for each set of images that you generate. A common application for these images is Roku trick mode. The thumbnails and full-frame images that MediaConvert creates with this feature are compatible with this Roku specification: https://developer.roku.com/docs/developer-program/media-playback/trick-mode/hls-and-dash.md

dashIsoGroupSettings_additionalManifests :: Lens' DashIsoGroupSettings (Maybe [DashAdditionalManifest]) Source #

By default, the service creates one .mpd DASH manifest for each DASH ISO output group in your job. This default manifest references every output in the output group. To create additional DASH manifests that reference a subset of the outputs in the output group, specify a list of them here.

dashIsoGroupSettings_segmentLengthControl :: Lens' DashIsoGroupSettings (Maybe DashIsoSegmentLengthControl) Source #

Specify how you want MediaConvert to determine the segment length. Choose Exact (EXACT) to have the encoder use the exact length that you specify with the setting Segment length (SegmentLength). This might result in extra I-frames. Choose Multiple of GOP (GOP_MULTIPLE) to have the encoder round up the segment lengths to match the next GOP boundary.

dashIsoGroupSettings_imageBasedTrickPlaySettings :: Lens' DashIsoGroupSettings (Maybe DashIsoImageBasedTrickPlaySettings) Source #

Tile and thumbnail settings applicable when imageBasedTrickPlay is ADVANCED

dashIsoGroupSettings_baseUrl :: Lens' DashIsoGroupSettings (Maybe Text) Source #

A partial URI prefix that will be put in the manifest (.mpd) file at the top level BaseURL element. Can be used if streams are delivered from a different URL than the manifest file.

dashIsoGroupSettings_destinationSettings :: Lens' DashIsoGroupSettings (Maybe DestinationSettings) Source #

Settings associated with the destination. Will vary based on the type of destination

dashIsoGroupSettings_minFinalSegmentLength :: Lens' DashIsoGroupSettings (Maybe Double) Source #

Keep this setting at the default value of 0, unless you are troubleshooting a problem with how devices play back the end of your video asset. If you know that player devices are hanging on the final segment of your video because the length of your final segment is too short, use this setting to specify a minimum final segment length, in seconds. Choose a value that is greater than or equal to 1 and less than your segment length. When you specify a value for this setting, the encoder will combine any final segment that is shorter than the length that you specify with the previous segment. For example, if your segment length is 3 seconds and your final segment would otherwise be 0.5 seconds, setting the minimum final segment length to 1 results in a final segment of 3.5 seconds.

dashIsoGroupSettings_audioChannelConfigSchemeIdUri :: Lens' DashIsoGroupSettings (Maybe DashIsoGroupAudioChannelConfigSchemeIdUri) Source #

Use this setting only when your audio codec is a Dolby one (AC3, EAC3, or Atmos) and your downstream workflow requires that your DASH manifest use the Dolby channel configuration tag, rather than the MPEG one. For example, you might need to use this to make dynamic ad insertion work. Specify which audio channel configuration scheme ID URI MediaConvert writes in your DASH manifest. Keep the default value, MPEG channel configuration (MPEG_CHANNEL_CONFIGURATION), to have MediaConvert write this: urn:mpeg:mpegB:cicp:ChannelConfiguration. Choose Dolby channel configuration (DOLBY_CHANNEL_CONFIGURATION) to have MediaConvert write this instead: tag:dolby.com,2014:dash:audio_channel_configuration:2011.

dashIsoGroupSettings_segmentLength :: Lens' DashIsoGroupSettings (Maybe Natural) Source #

Specify the length, in whole seconds, of each segment. When you don't specify a value, MediaConvert defaults to 30. Related settings: Use Segment length control (SegmentLengthControl) to specify whether the encoder enforces this value strictly. Use Segment control (DashIsoSegmentControl) to specify whether MediaConvert creates separate segment files or one content file that has metadata to mark the segment boundaries.

dashIsoGroupSettings_ptsOffsetHandlingForBFrames :: Lens' DashIsoGroupSettings (Maybe DashIsoPtsOffsetHandlingForBFrames) Source #

Use this setting only when your output video stream has B-frames, which causes the initial presentation time stamp (PTS) to be offset from the initial decode time stamp (DTS). Specify how MediaConvert handles PTS when writing time stamps in output DASH manifests. Choose Match initial PTS (MATCH_INITIAL_PTS) when you want MediaConvert to use the initial PTS as the first time stamp in the manifest. Choose Zero-based (ZERO_BASED) to have MediaConvert ignore the initial PTS in the video stream and instead write the initial time stamp as zero in the manifest. For outputs that don't have B-frames, the time stamps in your DASH manifests start at zero regardless of your choice here.

dashIsoGroupSettings_writeSegmentTimelineInRepresentation :: Lens' DashIsoGroupSettings (Maybe DashIsoWriteSegmentTimelineInRepresentation) Source #

If you get an HTTP error in the 400 range when you play back your DASH output, enable this setting and run your transcoding job again. When you enable this setting, the service writes precise segment durations in the DASH manifest. The segment duration information appears inside the SegmentTimeline element, inside SegmentTemplate at the Representation level. When you don't enable this setting, the service writes approximate segment durations in your DASH manifest.
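
Tying several of these settings together, the following sketch configures an on-demand DASH output group; per the notes above, the on-demand profile is paired with single-file segment control. It assumes the (&) and (?~) operators from the lens package, OverloadedStrings for the destination Text, and that the enum pattern synonyms follow the library's usual TypeName_VALUE naming; the S3 destination is a placeholder.

{-# LANGUAGE OverloadedStrings #-}

import qualified Amazonka.MediaConvert.Types as MC
import Control.Lens ((&), (?~))

-- 6-second segments in a single internally segmented file, written under a
-- placeholder S3 prefix.
dashGroup :: MC.DashIsoGroupSettings
dashGroup =
  MC.newDashIsoGroupSettings
    & MC.dashIsoGroupSettings_destination ?~ "s3://my-bucket/dash/film-name"
    & MC.dashIsoGroupSettings_mpdProfile ?~ MC.DashIsoMpdProfile_ON_DEMAND_PROFILE
    & MC.dashIsoGroupSettings_segmentControl ?~ MC.DashIsoSegmentControl_SINGLE_FILE
    & MC.dashIsoGroupSettings_segmentLength ?~ 6
    & MC.dashIsoGroupSettings_fragmentLength ?~ 2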

DashIsoImageBasedTrickPlaySettings

data DashIsoImageBasedTrickPlaySettings Source #

Tile and thumbnail settings applicable when imageBasedTrickPlay is ADVANCED

See: newDashIsoImageBasedTrickPlaySettings smart constructor.

Constructors

DashIsoImageBasedTrickPlaySettings' 

Fields

  • tileWidth :: Maybe Natural

    Number of thumbnails in each row of a tile image. Set a value between 1 and 512.

  • thumbnailHeight :: Maybe Natural

    Height of each thumbnail within each tile image, in pixels. Leave blank to maintain aspect ratio with thumbnail width. If following the aspect ratio would lead to a total tile height greater than 4096, then the job will be rejected. Must be divisible by 2.

  • intervalCadence :: Maybe DashIsoIntervalCadence

    The cadence MediaConvert follows for generating thumbnails. If set to FOLLOW_IFRAME, MediaConvert generates thumbnails for each IDR frame in the output (matching the GOP cadence). If set to FOLLOW_CUSTOM, MediaConvert generates thumbnails according to the interval you specify in thumbnailInterval.

  • thumbnailWidth :: Maybe Natural

    Width of each thumbnail within each tile image, in pixels. Default is 312. Must be divisible by 8.

  • thumbnailInterval :: Maybe Double

    Enter the interval, in seconds, that MediaConvert uses to generate thumbnails. If the interval you enter doesn't align with the output frame rate, MediaConvert automatically rounds the interval to align with the output frame rate. For example, if the output frame rate is 29.97 frames per second and you enter 5, MediaConvert uses a 150 frame interval to generate thumbnails.

  • tileHeight :: Maybe Natural

    Number of thumbnails in each column of a tile image. Set a value between 2 and 2048. Must be divisible by 2.

Instances

Instances details
Eq DashIsoImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlaySettings

Read DashIsoImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlaySettings

Show DashIsoImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlaySettings

Generic DashIsoImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlaySettings

Associated Types

type Rep DashIsoImageBasedTrickPlaySettings :: Type -> Type #

NFData DashIsoImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlaySettings

Hashable DashIsoImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlaySettings

ToJSON DashIsoImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlaySettings

FromJSON DashIsoImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlaySettings

type Rep DashIsoImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlaySettings

type Rep DashIsoImageBasedTrickPlaySettings = D1 ('MetaData "DashIsoImageBasedTrickPlaySettings" "Amazonka.MediaConvert.Types.DashIsoImageBasedTrickPlaySettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "DashIsoImageBasedTrickPlaySettings'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "tileWidth") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "thumbnailHeight") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "intervalCadence") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DashIsoIntervalCadence)))) :*: (S1 ('MetaSel ('Just "thumbnailWidth") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "thumbnailInterval") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: S1 ('MetaSel ('Just "tileHeight") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))))

newDashIsoImageBasedTrickPlaySettings :: DashIsoImageBasedTrickPlaySettings Source #

Create a value of DashIsoImageBasedTrickPlaySettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:tileWidth:DashIsoImageBasedTrickPlaySettings', dashIsoImageBasedTrickPlaySettings_tileWidth - Number of thumbnails in each row of a tile image. Set a value between 1 and 512.

$sel:thumbnailHeight:DashIsoImageBasedTrickPlaySettings', dashIsoImageBasedTrickPlaySettings_thumbnailHeight - Height of each thumbnail within each tile image, in pixels. Leave blank to maintain aspect ratio with thumbnail width. If following the aspect ratio would lead to a total tile height greater than 4096, then the job will be rejected. Must be divisible by 2.

$sel:intervalCadence:DashIsoImageBasedTrickPlaySettings', dashIsoImageBasedTrickPlaySettings_intervalCadence - The cadence MediaConvert follows for generating thumbnails. If set to FOLLOW_IFRAME, MediaConvert generates thumbnails for each IDR frame in the output (matching the GOP cadence). If set to FOLLOW_CUSTOM, MediaConvert generates thumbnails according to the interval you specify in thumbnailInterval.

$sel:thumbnailWidth:DashIsoImageBasedTrickPlaySettings', dashIsoImageBasedTrickPlaySettings_thumbnailWidth - Width of each thumbnail within each tile image, in pixels. Default is 312. Must be divisible by 8.

$sel:thumbnailInterval:DashIsoImageBasedTrickPlaySettings', dashIsoImageBasedTrickPlaySettings_thumbnailInterval - Enter the interval, in seconds, that MediaConvert uses to generate thumbnails. If the interval you enter doesn't align with the output frame rate, MediaConvert automatically rounds the interval to align with the output frame rate. For example, if the output frame rate is 29.97 frames per second and you enter 5, MediaConvert uses a 150 frame interval to generate thumbnails.

$sel:tileHeight:DashIsoImageBasedTrickPlaySettings', dashIsoImageBasedTrickPlaySettings_tileHeight - Number of thumbnails in each column of a tile image. Set a value between 2 and 2048. Must be divisible by 2.

dashIsoImageBasedTrickPlaySettings_tileWidth :: Lens' DashIsoImageBasedTrickPlaySettings (Maybe Natural) Source #

Number of thumbnails in each row of a tile image. Set a value between 1 and 512.

dashIsoImageBasedTrickPlaySettings_thumbnailHeight :: Lens' DashIsoImageBasedTrickPlaySettings (Maybe Natural) Source #

Height of each thumbnail within each tile image, in pixels. Leave blank to maintain aspect ratio with thumbnail width. If following the aspect ratio would lead to a total tile height greater than 4096, then the job will be rejected. Must be divisible by 2.

dashIsoImageBasedTrickPlaySettings_intervalCadence :: Lens' DashIsoImageBasedTrickPlaySettings (Maybe DashIsoIntervalCadence) Source #

The cadence MediaConvert follows for generating thumbnails. If set to FOLLOW_IFRAME, MediaConvert generates thumbnails for each IDR frame in the output (matching the GOP cadence). If set to FOLLOW_CUSTOM, MediaConvert generates thumbnails according to the interval you specify in thumbnailInterval.

dashIsoImageBasedTrickPlaySettings_thumbnailWidth :: Lens' DashIsoImageBasedTrickPlaySettings (Maybe Natural) Source #

Width of each thumbnail within each tile image, in pixels. Default is 312. Must be divisible by 8.

dashIsoImageBasedTrickPlaySettings_thumbnailInterval :: Lens' DashIsoImageBasedTrickPlaySettings (Maybe Double) Source #

Enter the interval, in seconds, that MediaConvert uses to generate thumbnails. If the interval you enter doesn't align with the output frame rate, MediaConvert automatically rounds the interval to align with the output frame rate. For example, if the output frame rate is 29.97 frames per second and you enter 5, MediaConvert uses a 150 frame interval to generate thumbnails.

dashIsoImageBasedTrickPlaySettings_tileHeight :: Lens' DashIsoImageBasedTrickPlaySettings (Maybe Natural) Source #

Number of thumbnails in each column of a tile image. Set a value between 2 and 2048. Must be divisible by 2.
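
As a sketch, the settings below request a thumbnail every 5 seconds, 312 pixels wide, laid out in 10-by-10 tile images; these would sit alongside imageBasedTrickPlay set to ADVANCED in the parent DashIsoGroupSettings. The (&) and (?~) operators come from the lens package, and the DashIsoIntervalCadence pattern synonym is assumed to follow the library's usual TypeName_VALUE naming.

import qualified Amazonka.MediaConvert.Types as MC
import Control.Lens ((&), (?~))

-- Custom cadence: one thumbnail every 5 seconds, tiled 10 across by 10 down.
trickPlayTiles :: MC.DashIsoImageBasedTrickPlaySettings
trickPlayTiles =
  MC.newDashIsoImageBasedTrickPlaySettings
    & MC.dashIsoImageBasedTrickPlaySettings_intervalCadence ?~ MC.DashIsoIntervalCadence_FOLLOW_CUSTOM
    & MC.dashIsoImageBasedTrickPlaySettings_thumbnailInterval ?~ 5
    & MC.dashIsoImageBasedTrickPlaySettings_thumbnailWidth ?~ 312
    & MC.dashIsoImageBasedTrickPlaySettings_tileWidth ?~ 10
    & MC.dashIsoImageBasedTrickPlaySettings_tileHeight ?~ 10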

Deinterlacer

data Deinterlacer Source #

Settings for deinterlacer

See: newDeinterlacer smart constructor.

Constructors

Deinterlacer' 

Fields

  • control :: Maybe DeinterlacerControl

    When set to NORMAL (default), the deinterlacer does not convert frames that are tagged in metadata as progressive. It will only convert those that are tagged as some other type. When set to FORCE_ALL_FRAMES, the deinterlacer converts every frame to progressive - even those that are already tagged as progressive. Turn Force mode on only if there is a good chance that the metadata has tagged frames as progressive when they are not progressive. Do not turn on otherwise; processing frames that are already progressive into progressive will probably result in lower quality video.

  • mode :: Maybe DeinterlacerMode

    Use Deinterlacer (DeinterlaceMode) to choose how the service will do deinterlacing. Default is Deinterlace. - Deinterlace converts interlaced to progressive. - Inverse telecine converts Hard Telecine 29.97i to progressive 23.976p. - Adaptive auto-detects and converts to progressive.

  • algorithm :: Maybe DeinterlaceAlgorithm

    Only applies when you set Deinterlacer (DeinterlaceMode) to Deinterlace (DEINTERLACE) or Adaptive (ADAPTIVE). Motion adaptive interpolate (INTERPOLATE) produces sharper pictures, while blend (BLEND) produces smoother motion. Use (INTERPOLATE_TICKER) OR (BLEND_TICKER) if your source file includes a ticker, such as a scrolling headline at the bottom of the frame.

Instances

Instances details
Eq Deinterlacer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Deinterlacer

Read Deinterlacer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Deinterlacer

Show Deinterlacer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Deinterlacer

Generic Deinterlacer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Deinterlacer

Associated Types

type Rep Deinterlacer :: Type -> Type #

NFData Deinterlacer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Deinterlacer

Methods

rnf :: Deinterlacer -> () #

Hashable Deinterlacer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Deinterlacer

ToJSON Deinterlacer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Deinterlacer

FromJSON Deinterlacer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Deinterlacer

type Rep Deinterlacer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Deinterlacer

type Rep Deinterlacer = D1 ('MetaData "Deinterlacer" "Amazonka.MediaConvert.Types.Deinterlacer" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "Deinterlacer'" 'PrefixI 'True) (S1 ('MetaSel ('Just "control") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DeinterlacerControl)) :*: (S1 ('MetaSel ('Just "mode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DeinterlacerMode)) :*: S1 ('MetaSel ('Just "algorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DeinterlaceAlgorithm)))))

newDeinterlacer :: Deinterlacer Source #

Create a value of Deinterlacer with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:control:Deinterlacer', deinterlacer_control - When set to NORMAL (default), the deinterlacer does not convert frames that are tagged in metadata as progressive. It will only convert those that are tagged as some other type. When set to FORCE_ALL_FRAMES, the deinterlacer converts every frame to progressive - even those that are already tagged as progressive. Turn Force mode on only if there is a good chance that the metadata has tagged frames as progressive when they are not progressive. Do not turn on otherwise; processing frames that are already progressive into progressive will probably result in lower quality video.

$sel:mode:Deinterlacer', deinterlacer_mode - Use Deinterlacer (DeinterlaceMode) to choose how the service will do deinterlacing. Default is Deinterlace. - Deinterlace converts interlaced to progressive. - Inverse telecine converts Hard Telecine 29.97i to progressive 23.976p. - Adaptive auto-detects and converts to progressive.

$sel:algorithm:Deinterlacer', deinterlacer_algorithm - Only applies when you set Deinterlacer (DeinterlaceMode) to Deinterlace (DEINTERLACE) or Adaptive (ADAPTIVE). Motion adaptive interpolate (INTERPOLATE) produces sharper pictures, while blend (BLEND) produces smoother motion. Use (INTERPOLATE_TICKER) or (BLEND_TICKER) if your source file includes a ticker, such as a scrolling headline at the bottom of the frame.

deinterlacer_control :: Lens' Deinterlacer (Maybe DeinterlacerControl) Source #

  • When set to NORMAL (default), the deinterlacer does not convert frames that are tagged in metadata as progressive. It will only convert those that are tagged as some other type. - When set to FORCE_ALL_FRAMES, the deinterlacer converts every frame to progressive - even those that are already tagged as progressive. Turn Force mode on only if there is a good chance that the metadata has tagged frames as progressive when they are not progressive. Do not turn on otherwise; processing frames that are already progressive into progressive will probably result in lower quality video.

deinterlacer_mode :: Lens' Deinterlacer (Maybe DeinterlacerMode) Source #

Use Deinterlacer (DeinterlaceMode) to choose how the service will do deinterlacing. Default is Deinterlace. - Deinterlace converts interlaced to progressive. - Inverse telecine converts Hard Telecine 29.97i to progressive 23.976p. - Adaptive auto-detects and converts to progressive.

deinterlacer_algorithm :: Lens' Deinterlacer (Maybe DeinterlaceAlgorithm) Source #

Only applies when you set Deinterlacer (DeinterlaceMode) to Deinterlace (DEINTERLACE) or Adaptive (ADAPTIVE). Motion adaptive interpolate (INTERPOLATE) produces sharper pictures, while blend (BLEND) produces smoother motion. Use (INTERPOLATE_TICKER) or (BLEND_TICKER) if your source file includes a ticker, such as a scrolling headline at the bottom of the frame.
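
For example, a minimal sketch of building a Deinterlacer with the smart constructor and lenses above. The (&) and (?~) operators come from the lens package (any van Laarhoven-compatible optics library works), and the enum pattern names below are assumed from this package's usual TypeName_VALUE convention:

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Force interpolation-based deinterlacing on every frame, including
-- frames already tagged as progressive. The pattern synonym names are
-- assumptions based on the package's naming convention.
forcedDeinterlacer :: Deinterlacer
forcedDeinterlacer =
  newDeinterlacer
    & deinterlacer_mode ?~ DeinterlacerMode_DEINTERLACE
    & deinterlacer_algorithm ?~ DeinterlaceAlgorithm_INTERPOLATE
    & deinterlacer_control ?~ DeinterlacerControl_FORCE_ALL_FRAMES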

DestinationSettings

data DestinationSettings Source #

Settings associated with the destination. These settings vary based on the type of destination.

See: newDestinationSettings smart constructor.

Constructors

DestinationSettings' 

Fields

Instances

Instances details
Eq DestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DestinationSettings

Read DestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DestinationSettings

Show DestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DestinationSettings

Generic DestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DestinationSettings

Associated Types

type Rep DestinationSettings :: Type -> Type #

NFData DestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DestinationSettings

Methods

rnf :: DestinationSettings -> () #

Hashable DestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DestinationSettings

ToJSON DestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DestinationSettings

FromJSON DestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DestinationSettings

type Rep DestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DestinationSettings

type Rep DestinationSettings = D1 ('MetaData "DestinationSettings" "Amazonka.MediaConvert.Types.DestinationSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "DestinationSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "s3Settings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe S3DestinationSettings))))

newDestinationSettings :: DestinationSettings Source #

Create a value of DestinationSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:s3Settings:DestinationSettings', destinationSettings_s3Settings - Settings associated with S3 destination
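
As a sketch, you can point a DestinationSettings at S3 and serialize it with the ToJSON instance above. The newS3DestinationSettings constructor is assumed to follow the same all-fields-optional shape as the other smart constructors in this module:

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))
import qualified Data.Aeson as Aeson

-- Route the output to S3 using default S3 settings (assumed constructor).
s3Destination :: DestinationSettings
s3Destination =
  newDestinationSettings
    & destinationSettings_s3Settings ?~ newS3DestinationSettings

-- Render the JSON wire representation via the ToJSON instance.
s3DestinationJson :: Aeson.Value
s3DestinationJson = Aeson.toJSON s3Destination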

DolbyVision

data DolbyVision Source #

With AWS Elemental MediaConvert, you can create profile 5 Dolby Vision outputs from MXF and IMF sources that contain mastering information as frame-interleaved Dolby Vision metadata.

See: newDolbyVision smart constructor.

Constructors

DolbyVision' 

Fields

  • profile :: Maybe DolbyVisionProfile

    In the current MediaConvert implementation, the Dolby Vision profile is always 5 (PROFILE_5). Therefore, all of your inputs must contain Dolby Vision frame interleaved data.

  • l6Mode :: Maybe DolbyVisionLevel6Mode

    Use Dolby Vision Mode to choose how the service will handle Dolby Vision MaxCLL and MaxFALL properties.

  • l6Metadata :: Maybe DolbyVisionLevel6Metadata

    Use these settings when you set DolbyVisionLevel6Mode to SPECIFY to override the MaxCLL and MaxFALL values in your input with new values.

Instances

Instances details
Eq DolbyVision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVision

Read DolbyVision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVision

Show DolbyVision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVision

Generic DolbyVision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVision

Associated Types

type Rep DolbyVision :: Type -> Type #

NFData DolbyVision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVision

Methods

rnf :: DolbyVision -> () #

Hashable DolbyVision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVision

ToJSON DolbyVision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVision

FromJSON DolbyVision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVision

type Rep DolbyVision Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVision

type Rep DolbyVision = D1 ('MetaData "DolbyVision" "Amazonka.MediaConvert.Types.DolbyVision" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "DolbyVision'" 'PrefixI 'True) (S1 ('MetaSel ('Just "profile") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DolbyVisionProfile)) :*: (S1 ('MetaSel ('Just "l6Mode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DolbyVisionLevel6Mode)) :*: S1 ('MetaSel ('Just "l6Metadata") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DolbyVisionLevel6Metadata)))))

newDolbyVision :: DolbyVision Source #

Create a value of DolbyVision with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:profile:DolbyVision', dolbyVision_profile - In the current MediaConvert implementation, the Dolby Vision profile is always 5 (PROFILE_5). Therefore, all of your inputs must contain Dolby Vision frame interleaved data.

$sel:l6Mode:DolbyVision', dolbyVision_l6Mode - Use Dolby Vision Mode to choose how the service will handle Dolby Vision MaxCLL and MaxFALL properties.

$sel:l6Metadata:DolbyVision', dolbyVision_l6Metadata - Use these settings when you set DolbyVisionLevel6Mode to SPECIFY to override the MaxCLL and MaxFALL values in your input with new values.

dolbyVision_profile :: Lens' DolbyVision (Maybe DolbyVisionProfile) Source #

In the current MediaConvert implementation, the Dolby Vision profile is always 5 (PROFILE_5). Therefore, all of your inputs must contain Dolby Vision frame interleaved data.

dolbyVision_l6Mode :: Lens' DolbyVision (Maybe DolbyVisionLevel6Mode) Source #

Use Dolby Vision Mode to choose how the service will handle Dolby Vision MaxCLL and MaxFALL properties.

dolbyVision_l6Metadata :: Lens' DolbyVision (Maybe DolbyVisionLevel6Metadata) Source #

Use these settings when you set DolbyVisionLevel6Mode to SPECIFY to override the MaxCLL and MaxFALL values in your input with new values.
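
A minimal sketch, assuming the package's usual TypeName_VALUE pattern names for the enum newtypes; see the DolbyVisionLevel6Metadata sketch below for overriding MaxCLL and MaxFALL:

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Profile 5 is the only profile MediaConvert currently supports;
-- DolbyVisionProfile_PROFILE_5 is an assumed pattern synonym name.
profile5 :: DolbyVision
profile5 =
  newDolbyVision
    & dolbyVision_profile ?~ DolbyVisionProfile_PROFILE_5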

DolbyVisionLevel6Metadata

data DolbyVisionLevel6Metadata Source #

Use these settings when you set DolbyVisionLevel6Mode to SPECIFY to override the MaxCLL and MaxFALL values in your input with new values.

See: newDolbyVisionLevel6Metadata smart constructor.

Constructors

DolbyVisionLevel6Metadata' 

Fields

  • maxFall :: Maybe Natural

    Maximum Frame-Average Light Level. Static HDR metadata that corresponds to the highest frame-average brightness in the entire stream. Measured in nits.

  • maxCll :: Maybe Natural

    Maximum Content Light Level. Static HDR metadata that corresponds to the brightest pixel in the entire stream. Measured in nits.

Instances

Instances details
Eq DolbyVisionLevel6Metadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Metadata

Read DolbyVisionLevel6Metadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Metadata

Show DolbyVisionLevel6Metadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Metadata

Generic DolbyVisionLevel6Metadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Metadata

Associated Types

type Rep DolbyVisionLevel6Metadata :: Type -> Type #

NFData DolbyVisionLevel6Metadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Metadata

Hashable DolbyVisionLevel6Metadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Metadata

ToJSON DolbyVisionLevel6Metadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Metadata

FromJSON DolbyVisionLevel6Metadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Metadata

type Rep DolbyVisionLevel6Metadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DolbyVisionLevel6Metadata

type Rep DolbyVisionLevel6Metadata = D1 ('MetaData "DolbyVisionLevel6Metadata" "Amazonka.MediaConvert.Types.DolbyVisionLevel6Metadata" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "DolbyVisionLevel6Metadata'" 'PrefixI 'True) (S1 ('MetaSel ('Just "maxFall") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "maxCll") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))

newDolbyVisionLevel6Metadata :: DolbyVisionLevel6Metadata Source #

Create a value of DolbyVisionLevel6Metadata with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:maxFall:DolbyVisionLevel6Metadata', dolbyVisionLevel6Metadata_maxFall - Maximum Frame-Average Light Level. Static HDR metadata that corresponds to the highest frame-average brightness in the entire stream. Measured in nits.

$sel:maxCll:DolbyVisionLevel6Metadata', dolbyVisionLevel6Metadata_maxCll - Maximum Content Light Level. Static HDR metadata that corresponds to the brightest pixel in the entire stream. Measured in nits.

dolbyVisionLevel6Metadata_maxFall :: Lens' DolbyVisionLevel6Metadata (Maybe Natural) Source #

Maximum Frame-Average Light Level. Static HDR metadata that corresponds to the highest frame-average brightness in the entire stream. Measured in nits.

dolbyVisionLevel6Metadata_maxCll :: Lens' DolbyVisionLevel6Metadata (Maybe Natural) Source #

Maximum Content Light Level. Static HDR metadata that corresponds to the brightest pixel in the entire stream. Measured in nits.
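
A sketch of overriding MaxCLL and MaxFALL and wiring the metadata into a DolbyVision value. The nit values are illustrative placeholders, and DolbyVisionLevel6Mode_SPECIFY is an assumed pattern synonym name:

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Illustrative MaxCLL/MaxFALL overrides, in nits (not recommendations).
l6Override :: DolbyVisionLevel6Metadata
l6Override =
  newDolbyVisionLevel6Metadata
    & dolbyVisionLevel6Metadata_maxCll ?~ 1000
    & dolbyVisionLevel6Metadata_maxFall ?~ 400

-- Attach the override; SPECIFY tells the service to use these values
-- instead of the values in the input.
dolbyVisionWithOverride :: DolbyVision
dolbyVisionWithOverride =
  newDolbyVision
    & dolbyVision_l6Mode ?~ DolbyVisionLevel6Mode_SPECIFY
    & dolbyVision_l6Metadata ?~ l6Override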

DvbNitSettings

data DvbNitSettings Source #

Use these settings to insert a DVB Network Information Table (NIT) in the transport stream of this output. When you work directly in your JSON job specification, include this object only when your job has a transport stream output and the container settings contain the object M2tsSettings.

See: newDvbNitSettings smart constructor.

Constructors

DvbNitSettings' 

Fields

  • networkId :: Maybe Natural

    The numeric value placed in the Network Information Table (NIT).

  • networkName :: Maybe Text

    The network name text placed in the network_name_descriptor inside the Network Information Table. Maximum length is 256 characters.

  • nitInterval :: Maybe Natural

    The number of milliseconds between instances of this table in the output transport stream.

Instances

Instances details
Eq DvbNitSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbNitSettings

Read DvbNitSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbNitSettings

Show DvbNitSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbNitSettings

Generic DvbNitSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbNitSettings

Associated Types

type Rep DvbNitSettings :: Type -> Type #

NFData DvbNitSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbNitSettings

Methods

rnf :: DvbNitSettings -> () #

Hashable DvbNitSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbNitSettings

ToJSON DvbNitSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbNitSettings

FromJSON DvbNitSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbNitSettings

type Rep DvbNitSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbNitSettings

type Rep DvbNitSettings = D1 ('MetaData "DvbNitSettings" "Amazonka.MediaConvert.Types.DvbNitSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "DvbNitSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "networkId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "networkName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "nitInterval") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))

newDvbNitSettings :: DvbNitSettings Source #

Create a value of DvbNitSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:networkId:DvbNitSettings', dvbNitSettings_networkId - The numeric value placed in the Network Information Table (NIT).

$sel:networkName:DvbNitSettings', dvbNitSettings_networkName - The network name text placed in the network_name_descriptor inside the Network Information Table. Maximum length is 256 characters.

$sel:nitInterval:DvbNitSettings', dvbNitSettings_nitInterval - The number of milliseconds between instances of this table in the output transport stream.

dvbNitSettings_networkId :: Lens' DvbNitSettings (Maybe Natural) Source #

The numeric value placed in the Network Information Table (NIT).

dvbNitSettings_networkName :: Lens' DvbNitSettings (Maybe Text) Source #

The network name text placed in the network_name_descriptor inside the Network Information Table. Maximum length is 256 characters.

dvbNitSettings_nitInterval :: Lens' DvbNitSettings (Maybe Natural) Source #

The number of milliseconds between instances of this table in the output transport stream.
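
For example, a minimal sketch of a NIT inserted every 500 milliseconds; the values are placeholders:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Insert a NIT for network 1 every 500 milliseconds (placeholder values).
exampleNit :: DvbNitSettings
exampleNit =
  newDvbNitSettings
    & dvbNitSettings_networkId ?~ 1
    & dvbNitSettings_networkName ?~ "Example Network"
    & dvbNitSettings_nitInterval ?~ 500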

DvbSdtSettings

data DvbSdtSettings Source #

Use these settings to insert a DVB Service Description Table (SDT) in the transport stream of this output. When you work directly in your JSON job specification, include this object only when your job has a transport stream output and the container settings contain the object M2tsSettings.

See: newDvbSdtSettings smart constructor.

Constructors

DvbSdtSettings' 

Fields

  • sdtInterval :: Maybe Natural

    The number of milliseconds between instances of this table in the output transport stream.

  • serviceProviderName :: Maybe Text

    The service provider name placed in the service_descriptor in the Service Description Table. Maximum length is 256 characters.

  • outputSdt :: Maybe OutputSdt

    Selects the method of inserting SDT information into the output stream. "Follow input SDT" copies SDT information from the input stream to the output stream. "Follow input SDT if present" copies SDT information from the input stream to the output stream if SDT information is present in the input; otherwise it falls back on the user-defined values. "SDT Manually" means you enter the SDT information yourself. "No SDT" means the output stream will not contain SDT information.

  • serviceName :: Maybe Text

    The service name placed in the service_descriptor in the Service Description Table. Maximum length is 256 characters.

Instances

Instances details
Eq DvbSdtSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSdtSettings

Read DvbSdtSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSdtSettings

Show DvbSdtSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSdtSettings

Generic DvbSdtSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSdtSettings

Associated Types

type Rep DvbSdtSettings :: Type -> Type #

NFData DvbSdtSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSdtSettings

Methods

rnf :: DvbSdtSettings -> () #

Hashable DvbSdtSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSdtSettings

ToJSON DvbSdtSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSdtSettings

FromJSON DvbSdtSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSdtSettings

type Rep DvbSdtSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSdtSettings

type Rep DvbSdtSettings = D1 ('MetaData "DvbSdtSettings" "Amazonka.MediaConvert.Types.DvbSdtSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "DvbSdtSettings'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "sdtInterval") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "serviceProviderName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "outputSdt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe OutputSdt)) :*: S1 ('MetaSel ('Just "serviceName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))))

newDvbSdtSettings :: DvbSdtSettings Source #

Create a value of DvbSdtSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:sdtInterval:DvbSdtSettings', dvbSdtSettings_sdtInterval - The number of milliseconds between instances of this table in the output transport stream.

$sel:serviceProviderName:DvbSdtSettings', dvbSdtSettings_serviceProviderName - The service provider name placed in the service_descriptor in the Service Description Table. Maximum length is 256 characters.

$sel:outputSdt:DvbSdtSettings', dvbSdtSettings_outputSdt - Selects the method of inserting SDT information into the output stream. "Follow input SDT" copies SDT information from the input stream to the output stream. "Follow input SDT if present" copies SDT information from the input stream to the output stream if SDT information is present in the input; otherwise it falls back on the user-defined values. "SDT Manually" means you enter the SDT information yourself. "No SDT" means the output stream will not contain SDT information.

$sel:serviceName:DvbSdtSettings', dvbSdtSettings_serviceName - The service name placed in the service_descriptor in the Service Description Table. Maximum length is 256 characters.

dvbSdtSettings_sdtInterval :: Lens' DvbSdtSettings (Maybe Natural) Source #

The number of milliseconds between instances of this table in the output transport stream.

dvbSdtSettings_serviceProviderName :: Lens' DvbSdtSettings (Maybe Text) Source #

The service provider name placed in the service_descriptor in the Service Description Table. Maximum length is 256 characters.

dvbSdtSettings_outputSdt :: Lens' DvbSdtSettings (Maybe OutputSdt) Source #

Selects the method of inserting SDT information into the output stream. "Follow input SDT" copies SDT information from the input stream to the output stream. "Follow input SDT if present" copies SDT information from the input stream to the output stream if SDT information is present in the input; otherwise it falls back on the user-defined values. "SDT Manually" means you enter the SDT information yourself. "No SDT" means the output stream will not contain SDT information.

dvbSdtSettings_serviceName :: Lens' DvbSdtSettings (Maybe Text) Source #

The service name placed in the service_descriptor in the Service Description Table. Maximum length is 256 characters.
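
A minimal sketch with placeholder values; OutputSdt_SDT_FOLLOW is an assumed pattern synonym name for the "Follow input SDT" option:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Insert an SDT every 500 milliseconds with placeholder service names.
-- OutputSdt_SDT_FOLLOW is an assumed pattern synonym name.
exampleSdt :: DvbSdtSettings
exampleSdt =
  newDvbSdtSettings
    & dvbSdtSettings_sdtInterval ?~ 500
    & dvbSdtSettings_serviceName ?~ "Example Service"
    & dvbSdtSettings_serviceProviderName ?~ "Example Provider"
    & dvbSdtSettings_outputSdt ?~ OutputSdt_SDT_FOLLOW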

DvbSubDestinationSettings

data DvbSubDestinationSettings Source #

Settings related to DVB-Sub captions. Set up DVB-Sub captions in the same output as your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/dvb-sub-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to DVB_SUB.

See: newDvbSubDestinationSettings smart constructor.

Constructors

DvbSubDestinationSettings' 

Fields

  • ddsHandling :: Maybe DvbddsHandling

    Specify how MediaConvert handles the display definition segment (DDS). Keep the default, None (NONE), to exclude the DDS from this set of captions. Choose No display window (NO_DISPLAY_WINDOW) to have MediaConvert include the DDS but not include display window data. In this case, MediaConvert writes that information to the page composition segment (PCS) instead. Choose Specify (SPECIFIED) to have MediaConvert set up the display window based on the values that you specify in related job settings. For video resolutions that are 576 pixels or smaller in height, MediaConvert doesn't include the DDS, regardless of the value you choose for DDS handling (ddsHandling). In this case, it doesn't write the display window data to the PCS either. Related settings: Use the settings DDS x-coordinate (ddsXCoordinate) and DDS y-coordinate (ddsYCoordinate) to specify the offset between the top left corner of the display window and the top left corner of the video frame. All burn-in and DVB-Sub font settings must match.

  • backgroundOpacity :: Maybe Natural

    Specify the opacity of the background rectangle. Enter a value from 0 to 255, where 0 is transparent and 255 is opaque. If Style passthrough (StylePassthrough) is set to enabled, leave blank to pass through the background style information in your input captions to your output captions. If Style passthrough is set to disabled, leave blank to use a value of 0 and remove all backgrounds from your output captions. Within your job settings, all of your DVB-Sub settings must be identical.

  • fallbackFont :: Maybe DvbSubSubtitleFallbackFont

    Specify the font that you want the service to use for your burn-in captions when your input captions specify a font that MediaConvert doesn't support. When you set Fallback font (FallbackFont) to best match (BEST_MATCH), or leave blank, MediaConvert uses a supported font that most closely matches the font that your input captions specify. When there are multiple unsupported fonts in your input captions, MediaConvert matches each font with the supported font that matches best. When you explicitly choose a replacement font, MediaConvert uses that font to replace all unsupported fonts from your input.

  • height :: Maybe Natural

    Specify the height, in pixels, of this set of DVB-Sub captions. The default value is 576 pixels. Related setting: When you use this setting, you must set DDS handling (ddsHandling) to a value other than None (NONE). All burn-in and DVB-Sub font settings must match.

  • fontOpacity :: Maybe Natural

    Specify the opacity of the burned-in captions. 255 is opaque; 0 is transparent. Within your job settings, all of your DVB-Sub settings must be identical.

  • shadowYOffset :: Maybe Int

    Specify the vertical offset of the shadow relative to the captions in pixels. A value of -2 would result in a shadow offset 2 pixels above the text. Leave Shadow y-offset (ShadowYOffset) blank and set Style passthrough (StylePassthrough) to enabled to use the shadow y-offset data from your input captions, if present. Within your job settings, all of your DVB-Sub settings must be identical.

  • fontResolution :: Maybe Natural

    Specify the Font resolution (FontResolution) in DPI (dots per inch). Within your job settings, all of your DVB-Sub settings must be identical.

  • yPosition :: Maybe Natural

    Specify the vertical position (YPosition) of the captions, relative to the top of the output in pixels. A value of 10 would result in the captions starting 10 pixels from the top of the output. If no explicit y_position is provided, the caption will be positioned towards the bottom of the output. Within your job settings, all of your DVB-Sub settings must be identical.

  • ddsYCoordinate :: Maybe Natural

    Use this setting, along with DDS x-coordinate (ddsXCoordinate), to specify the upper left corner of the display definition segment (DDS) display window. With this setting, specify the distance, in pixels, between the top of the frame and the top of the DDS display window. Keep the default value, 0, to have MediaConvert automatically choose this offset. Related setting: When you use this setting, you must set DDS handling (ddsHandling) to a value other than None (NONE). MediaConvert uses these values to determine whether to write page position data to the DDS or to the page composition segment (PCS). All burn-in and DVB-Sub font settings must match.

  • backgroundColor :: Maybe DvbSubtitleBackgroundColor

    Specify the color of the rectangle behind the captions. Leave background color (BackgroundColor) blank and set Style passthrough (StylePassthrough) to enabled to use the background color data from your input captions, if present.

  • shadowXOffset :: Maybe Int

    Specify the horizontal offset of the shadow, relative to the captions in pixels. A value of -2 would result in a shadow offset 2 pixels to the left. Within your job settings, all of your DVB-Sub settings must be identical.

  • fontSize :: Maybe Natural

    Specify the Font size (FontSize) in pixels. Must be a positive integer. Set to 0, or leave blank, for automatic font size. Within your job settings, all of your DVB-Sub settings must be identical.

  • width :: Maybe Natural

    Specify the width, in pixels, of this set of DVB-Sub captions. The default value is 720 pixels. Related setting: When you use this setting, you must set DDS handling (ddsHandling) to a value other than None (NONE). All burn-in and DVB-Sub font settings must match.

  • ddsXCoordinate :: Maybe Natural

    Use this setting, along with DDS y-coordinate (ddsYCoordinate), to specify the upper left corner of the display definition segment (DDS) display window. With this setting, specify the distance, in pixels, between the left side of the frame and the left side of the DDS display window. Keep the default value, 0, to have MediaConvert automatically choose this offset. Related setting: When you use this setting, you must set DDS handling (ddsHandling) to a value other than None (NONE). MediaConvert uses these values to determine whether to write page position data to the DDS or to the page composition segment (PCS). All burn-in and DVB-Sub font settings must match.

  • xPosition :: Maybe Natural

    Specify the horizontal position (XPosition) of the captions, relative to the left side of the output, in pixels. A value of 10 would result in the captions starting 10 pixels from the left of the output. If no explicit x_position is provided, the horizontal caption position will be determined by the alignment parameter. Within your job settings, all of your DVB-Sub settings must be identical.

  • teletextSpacing :: Maybe DvbSubtitleTeletextSpacing

    Specify whether the Text spacing (TextSpacing) in your captions is set by the captions grid, or varies depending on letter width. Choose fixed grid (FIXED_GRID) to conform to the spacing specified in the captions file more accurately. Choose proportional (PROPORTIONAL) to make the text easier to read for closed captions. Within your job settings, all of your DVB-Sub settings must be identical.

  • fontScript :: Maybe FontScript

    Set Font script (FontScript) to Automatically determined (AUTOMATIC), or leave blank, to automatically determine the font script in your input captions. Otherwise, set to Simplified Chinese (HANS) or Traditional Chinese (HANT) if your input font script uses Simplified or Traditional Chinese. Within your job settings, all of your DVB-Sub settings must be identical.

  • alignment :: Maybe DvbSubtitleAlignment

    Specify the alignment of your captions. If no explicit x_position is provided, setting alignment to centered will place the captions at the bottom center of the output. Similarly, setting a left alignment will align captions to the bottom left of the output. If x and y positions are given in conjunction with the alignment parameter, the font will be justified (either left or centered) relative to those coordinates. Within your job settings, all of your DVB-Sub settings must be identical.

  • shadowOpacity :: Maybe Natural

    Specify the opacity of the shadow. Enter a value from 0 to 255, where 0 is transparent and 255 is opaque. If Style passthrough (StylePassthrough) is set to Enabled, leave Shadow opacity (ShadowOpacity) blank to pass through the shadow style information in your input captions to your output captions. If Style passthrough is set to disabled, leave blank to use a value of 0 and remove all shadows from your output captions. Within your job settings, all of your DVB-Sub settings must be identical.

  • applyFontColor :: Maybe DvbSubtitleApplyFontColor

    Ignore this setting unless Style Passthrough (StylePassthrough) is set to Enabled and Font color (FontColor) is set to Black, Yellow, Red, Green, Blue, or Hex. Use Apply font color (ApplyFontColor) for additional font color controls. When you choose White text only (WHITE_TEXT_ONLY), or leave blank, your font color setting only applies to white text in your input captions. For example, if your font color setting is Yellow, and your input captions have red and white text, your output captions will have red and yellow text. When you choose ALL_TEXT, your font color setting applies to all of your output captions text.

  • stylePassthrough :: Maybe DvbSubtitleStylePassthrough

    Set Style passthrough (StylePassthrough) to ENABLED to use the available style, color, and position information from your input captions. MediaConvert uses default settings for any missing style and position information in your input captions. Set Style passthrough to DISABLED, or leave blank, to ignore the style and position information from your input captions and use default settings: white text with black outlining, bottom-center positioning, and automatic sizing. Whether you set Style passthrough to enabled or not, you can also choose to manually override any of the individual style and position settings.

  • outlineColor :: Maybe DvbSubtitleOutlineColor

    Specify font outline color. Leave Outline color (OutlineColor) blank and set Style passthrough (StylePassthrough) to enabled to use the font outline color data from your input captions, if present. Within your job settings, all of your DVB-Sub settings must be identical.

  • outlineSize :: Maybe Natural

    Specify the Outline size (OutlineSize) of the caption text, in pixels. Leave Outline size blank and set Style passthrough (StylePassthrough) to enabled to use the outline size data from your input captions, if present. Within your job settings, all of your DVB-Sub settings must be identical.

  • shadowColor :: Maybe DvbSubtitleShadowColor

    Specify the color of the shadow cast by the captions. Leave Shadow color (ShadowColor) blank and set Style passthrough (StylePassthrough) to enabled to use the shadow color data from your input captions, if present. Within your job settings, all of your DVB-Sub settings must be identical.

  • hexFontColor :: Maybe Text

    Ignore this setting unless your Font color is set to Hex. Enter either six or eight hexadecimal digits, representing red, green, and blue, with two optional extra digits for alpha. For example, a value of 1122AABB is a red value of 0x11, a green value of 0x22, a blue value of 0xAA, and an alpha value of 0xBB.

  • fontColor :: Maybe DvbSubtitleFontColor

    Specify the color of the captions text. Leave Font color (FontColor) blank and set Style passthrough (StylePassthrough) to enabled to use the font color data from your input captions, if present. Within your job settings, all of your DVB-Sub settings must be identical.

  • subtitlingType :: Maybe DvbSubtitlingType

    Specify whether your DVB subtitles are standard or for the hearing impaired. Choose hearing impaired if your subtitles include audio descriptions and dialogue. Choose standard if your subtitles include only dialogue.

Instances

Instances details
Eq DvbSubDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubDestinationSettings

Read DvbSubDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubDestinationSettings

Show DvbSubDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubDestinationSettings

Generic DvbSubDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubDestinationSettings

Associated Types

type Rep DvbSubDestinationSettings :: Type -> Type #

NFData DvbSubDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubDestinationSettings

Hashable DvbSubDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubDestinationSettings

ToJSON DvbSubDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubDestinationSettings

FromJSON DvbSubDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubDestinationSettings

type Rep DvbSubDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubDestinationSettings

type Rep DvbSubDestinationSettings = D1 ('MetaData "DvbSubDestinationSettings" "Amazonka.MediaConvert.Types.DvbSubDestinationSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "DvbSubDestinationSettings'" 'PrefixI 'True) ((((S1 ('MetaSel ('Just "ddsHandling") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DvbddsHandling)) :*: (S1 ('MetaSel ('Just "backgroundOpacity") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "fallbackFont") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DvbSubSubtitleFallbackFont)))) :*: (S1 ('MetaSel ('Just "height") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "fontOpacity") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "shadowYOffset") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int))))) :*: ((S1 ('MetaSel ('Just "fontResolution") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "yPosition") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "ddsYCoordinate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))) :*: ((S1 ('MetaSel ('Just "backgroundColor") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DvbSubtitleBackgroundColor)) :*: S1 ('MetaSel ('Just "shadowXOffset") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int))) :*: (S1 ('MetaSel ('Just "fontSize") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "width") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))) :*: (((S1 ('MetaSel ('Just "ddsXCoordinate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "xPosition") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "teletextSpacing") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DvbSubtitleTeletextSpacing)))) :*: ((S1 ('MetaSel ('Just "fontScript") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe FontScript)) :*: S1 ('MetaSel ('Just "alignment") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DvbSubtitleAlignment))) :*: (S1 ('MetaSel ('Just "shadowOpacity") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "applyFontColor") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DvbSubtitleApplyFontColor))))) :*: ((S1 ('MetaSel ('Just "stylePassthrough") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DvbSubtitleStylePassthrough)) :*: (S1 ('MetaSel ('Just "outlineColor") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DvbSubtitleOutlineColor)) :*: S1 ('MetaSel ('Just "outlineSize") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))) :*: ((S1 ('MetaSel ('Just "shadowColor") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DvbSubtitleShadowColor)) :*: S1 ('MetaSel ('Just "hexFontColor") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "fontColor") 'NoSourceUnpackedness 'NoSourceStrictness 
'DecidedStrict) (Rec0 (Maybe DvbSubtitleFontColor)) :*: S1 ('MetaSel ('Just "subtitlingType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DvbSubtitlingType))))))))

newDvbSubDestinationSettings :: DvbSubDestinationSettings Source #

Create a value of DvbSubDestinationSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:ddsHandling:DvbSubDestinationSettings', dvbSubDestinationSettings_ddsHandling - Specify how MediaConvert handles the display definition segment (DDS). Keep the default, None (NONE), to exclude the DDS from this set of captions. Choose No display window (NO_DISPLAY_WINDOW) to have MediaConvert include the DDS but not include display window data. In this case, MediaConvert writes that information to the page composition segment (PCS) instead. Choose Specify (SPECIFIED) to have MediaConvert set up the display window based on the values that you specify in related job settings. For video resolutions that are 576 pixels or smaller in height, MediaConvert doesn't include the DDS, regardless of the value you choose for DDS handling (ddsHandling). In this case, it doesn't write the display window data to the PCS either. Related settings: Use the settings DDS x-coordinate (ddsXCoordinate) and DDS y-coordinate (ddsYCoordinate) to specify the offset between the top left corner of the display window and the top left corner of the video frame. All burn-in and DVB-Sub font settings must match.

$sel:backgroundOpacity:DvbSubDestinationSettings', dvbSubDestinationSettings_backgroundOpacity - Specify the opacity of the background rectangle. Enter a value from 0 to 255, where 0 is transparent and 255 is opaque. If Style passthrough (StylePassthrough) is set to enabled, leave blank to pass through the background style information in your input captions to your output captions. If Style passthrough is set to disabled, leave blank to use a value of 0 and remove all backgrounds from your output captions. Within your job settings, all of your DVB-Sub settings must be identical.

$sel:fallbackFont:DvbSubDestinationSettings', dvbSubDestinationSettings_fallbackFont - Specify the font that you want the service to use for your burn-in captions when your input captions specify a font that MediaConvert doesn't support. When you set Fallback font (FallbackFont) to best match (BEST_MATCH), or leave blank, MediaConvert uses a supported font that most closely matches the font that your input captions specify. When there are multiple unsupported fonts in your input captions, MediaConvert matches each font with the supported font that matches best. When you explicitly choose a replacement font, MediaConvert uses that font to replace all unsupported fonts from your input.

$sel:height:DvbSubDestinationSettings', dvbSubDestinationSettings_height - Specify the height, in pixels, of this set of DVB-Sub captions. The default value is 576 pixels. Related setting: When you use this setting, you must set DDS handling (ddsHandling) to a value other than None (NONE). All burn-in and DVB-Sub font settings must match.

$sel:fontOpacity:DvbSubDestinationSettings', dvbSubDestinationSettings_fontOpacity - Specify the opacity of the burned-in captions. 255 is opaque; 0 is transparent. Within your job settings, all of your DVB-Sub settings must be identical.

$sel:shadowYOffset:DvbSubDestinationSettings', dvbSubDestinationSettings_shadowYOffset - Specify the vertical offset of the shadow relative to the captions in pixels. A value of -2 would result in a shadow offset 2 pixels above the text. Leave Shadow y-offset (ShadowYOffset) blank and set Style passthrough (StylePassthrough) to enabled to use the shadow y-offset data from your input captions, if present. Within your job settings, all of your DVB-Sub settings must be identical.

$sel:fontResolution:DvbSubDestinationSettings', dvbSubDestinationSettings_fontResolution - Specify the Font resolution (FontResolution) in DPI (dots per inch). Within your job settings, all of your DVB-Sub settings must be identical.

$sel:yPosition:DvbSubDestinationSettings', dvbSubDestinationSettings_yPosition - Specify the vertical position (YPosition) of the captions, relative to the top of the output in pixels. A value of 10 would result in the captions starting 10 pixels from the top of the output. If no explicit y_position is provided, the caption will be positioned towards the bottom of the output. Within your job settings, all of your DVB-Sub settings must be identical.

$sel:ddsYCoordinate:DvbSubDestinationSettings', dvbSubDestinationSettings_ddsYCoordinate - Use this setting, along with DDS x-coordinate (ddsXCoordinate), to specify the upper left corner of the display definition segment (DDS) display window. With this setting, specify the distance, in pixels, between the top of the frame and the top of the DDS display window. Keep the default value, 0, to have MediaConvert automatically choose this offset. Related setting: When you use this setting, you must set DDS handling (ddsHandling) to a value other than None (NONE). MediaConvert uses these values to determine whether to write page position data to the DDS or to the page composition segment (PCS). All burn-in and DVB-Sub font settings must match.

$sel:backgroundColor:DvbSubDestinationSettings', dvbSubDestinationSettings_backgroundColor - Specify the color of the rectangle behind the captions. Leave background color (BackgroundColor) blank and set Style passthrough (StylePassthrough) to enabled to use the background color data from your input captions, if present.

$sel:shadowXOffset:DvbSubDestinationSettings', dvbSubDestinationSettings_shadowXOffset - Specify the horizontal offset of the shadow, relative to the captions in pixels. A value of -2 would result in a shadow offset 2 pixels to the left. Within your job settings, all of your DVB-Sub settings must be identical.

$sel:fontSize:DvbSubDestinationSettings', dvbSubDestinationSettings_fontSize - Specify the Font size (FontSize) in pixels. Must be a positive integer. Set to 0, or leave blank, for automatic font size. Within your job settings, all of your DVB-Sub settings must be identical.

$sel:width:DvbSubDestinationSettings', dvbSubDestinationSettings_width - Specify the width, in pixels, of this set of DVB-Sub captions. The default value is 720 pixels. Related setting: When you use this setting, you must set DDS handling (ddsHandling) to a value other than None (NONE). All burn-in and DVB-Sub font settings must match.

$sel:ddsXCoordinate:DvbSubDestinationSettings', dvbSubDestinationSettings_ddsXCoordinate - Use this setting, along with DDS y-coordinate (ddsYCoordinate), to specify the upper left corner of the display definition segment (DDS) display window. With this setting, specify the distance, in pixels, between the left side of the frame and the left side of the DDS display window. Keep the default value, 0, to have MediaConvert automatically choose this offset. Related setting: When you use this setting, you must set DDS handling (ddsHandling) to a value other than None (NONE). MediaConvert uses these values to determine whether to write page position data to the DDS or to the page composition segment (PCS). All burn-in and DVB-Sub font settings must match.

$sel:xPosition:DvbSubDestinationSettings', dvbSubDestinationSettings_xPosition - Specify the horizontal position (XPosition) of the captions, relative to the left side of the output, in pixels. A value of 10 would result in the captions starting 10 pixels from the left of the output. If no explicit x_position is provided, the horizontal caption position will be determined by the alignment parameter. Within your job settings, all of your DVB-Sub settings must be identical.

$sel:teletextSpacing:DvbSubDestinationSettings', dvbSubDestinationSettings_teletextSpacing - Specify whether the Text spacing (TextSpacing) in your captions is set by the captions grid, or varies depending on letter width. Choose fixed grid (FIXED_GRID) to conform to the spacing specified in the captions file more accurately. Choose proportional (PROPORTIONAL) to make the text easier to read for closed captions. Within your job settings, all of your DVB-Sub settings must be identical.

$sel:fontScript:DvbSubDestinationSettings', dvbSubDestinationSettings_fontScript - Set Font script (FontScript) to Automatically determined (AUTOMATIC), or leave blank, to automatically determine the font script in your input captions. Otherwise, set to Simplified Chinese (HANS) or Traditional Chinese (HANT) if your input font script uses Simplified or Traditional Chinese. Within your job settings, all of your DVB-Sub settings must be identical.

$sel:alignment:DvbSubDestinationSettings', dvbSubDestinationSettings_alignment - Specify the alignment of your captions. If no explicit x_position is provided, setting alignment to centered will place the captions at the bottom center of the output. Similarly, setting a left alignment will align captions to the bottom left of the output. If x and y positions are given in conjunction with the alignment parameter, the font will be justified (either left or centered) relative to those coordinates. Within your job settings, all of your DVB-Sub settings must be identical.

$sel:shadowOpacity:DvbSubDestinationSettings', dvbSubDestinationSettings_shadowOpacity - Specify the opacity of the shadow. Enter a value from 0 to 255, where 0 is transparent and 255 is opaque. If Style passthrough (StylePassthrough) is set to Enabled, leave Shadow opacity (ShadowOpacity) blank to pass through the shadow style information in your input captions to your output captions. If Style passthrough is set to disabled, leave blank to use a value of 0 and remove all shadows from your output captions. Within your job settings, all of your DVB-Sub settings must be identical.

$sel:applyFontColor:DvbSubDestinationSettings', dvbSubDestinationSettings_applyFontColor - Ignore this setting unless Style Passthrough (StylePassthrough) is set to Enabled and Font color (FontColor) is set to Black, Yellow, Red, Green, Blue, or Hex. Use Apply font color (ApplyFontColor) for additional font color controls. When you choose White text only (WHITE_TEXT_ONLY), or leave blank, your font color setting only applies to white text in your input captions. For example, if your font color setting is Yellow, and your input captions have red and white text, your output captions will have red and yellow text. When you choose ALL_TEXT, your font color setting applies to all of your output captions text.

$sel:stylePassthrough:DvbSubDestinationSettings', dvbSubDestinationSettings_stylePassthrough - Set Style passthrough (StylePassthrough) to ENABLED to use the available style, color, and position information from your input captions. MediaConvert uses default settings for any missing style and position information in your input captions. Set Style passthrough to DISABLED, or leave blank, to ignore the style and position information from your input captions and use default settings: white text with black outlining, bottom-center positioning, and automatic sizing. Whether you set Style passthrough to enabled or not, you can also choose to manually override any of the individual style and position settings.

$sel:outlineColor:DvbSubDestinationSettings', dvbSubDestinationSettings_outlineColor - Specify font outline color. Leave Outline color (OutlineColor) blank and set Style passthrough (StylePassthrough) to enabled to use the font outline color data from your input captions, if present. Within your job settings, all of your DVB-Sub settings must be identical.

$sel:outlineSize:DvbSubDestinationSettings', dvbSubDestinationSettings_outlineSize - Specify the Outline size (OutlineSize) of the caption text, in pixels. Leave Outline size blank and set Style passthrough (StylePassthrough) to enabled to use the outline size data from your input captions, if present. Within your job settings, all of your DVB-Sub settings must be identical.

$sel:shadowColor:DvbSubDestinationSettings', dvbSubDestinationSettings_shadowColor - Specify the color of the shadow cast by the captions. Leave Shadow color (ShadowColor) blank and set Style passthrough (StylePassthrough) to enabled to use the shadow color data from your input captions, if present. Within your job settings, all of your DVB-Sub settings must be identical.

$sel:hexFontColor:DvbSubDestinationSettings', dvbSubDestinationSettings_hexFontColor - Ignore this setting unless your Font color is set to Hex. Enter either six or eight hexadecimal digits, representing red, green, and blue, with two optional extra digits for alpha. For example, a value of 1122AABB is a red value of 0x11, a green value of 0x22, a blue value of 0xAA, and an alpha value of 0xBB.

$sel:fontColor:DvbSubDestinationSettings', dvbSubDestinationSettings_fontColor - Specify the color of the captions text. Leave Font color (FontColor) blank and set Style passthrough (StylePassthrough) to enabled to use the font color data from your input captions, if present. Within your job settings, all of your DVB-Sub settings must be identical.

$sel:subtitlingType:DvbSubDestinationSettings', dvbSubDestinationSettings_subtitlingType - Specify whether your DVB subtitles are standard or for the hearing impaired. Choose hearing impaired if your subtitles include audio descriptions and dialogue. Choose standard if your subtitles include only dialogue.
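
A sketch that sets a small subset of these fields. Because all of your DVB-Sub settings within a job must be identical, you would typically build one value like this and reuse it for every DVB-Sub output. DvbddsHandling_SPECIFIED is an assumed pattern synonym name:

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- 720x576 DVB-Sub captions with fully opaque text and a transparent
-- background rectangle. Setting width/height requires DDS handling
-- other than NONE; DvbddsHandling_SPECIFIED is an assumed pattern name.
exampleDvbSub :: DvbSubDestinationSettings
exampleDvbSub =
  newDvbSubDestinationSettings
    & dvbSubDestinationSettings_ddsHandling ?~ DvbddsHandling_SPECIFIED
    & dvbSubDestinationSettings_width ?~ 720
    & dvbSubDestinationSettings_height ?~ 576
    & dvbSubDestinationSettings_fontOpacity ?~ 255
    & dvbSubDestinationSettings_backgroundOpacity ?~ 0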

dvbSubDestinationSettings_ddsHandling :: Lens' DvbSubDestinationSettings (Maybe DvbddsHandling) Source #

Specify how MediaConvert handles the display definition segment (DDS). Keep the default, None (NONE), to exclude the DDS from this set of captions. Choose No display window (NO_DISPLAY_WINDOW) to have MediaConvert include the DDS but not include display window data. In this case, MediaConvert writes that information to the page composition segment (PCS) instead. Choose Specify (SPECIFIED) to have MediaConvert set up the display window based on the values that you specify in related job settings. For video resolutions that are 576 pixels or smaller in height, MediaConvert doesn't include the DDS, regardless of the value you choose for DDS handling (ddsHandling). In this case, it doesn't write the display window data to the PCS either. Related settings: Use the settings DDS x-coordinate (ddsXCoordinate) and DDS y-coordinate (ddsYCoordinate) to specify the offset between the top left corner of the display window and the top left corner of the video frame. All burn-in and DVB-Sub font settings must match.

dvbSubDestinationSettings_backgroundOpacity :: Lens' DvbSubDestinationSettings (Maybe Natural) Source #

Specify the opacity of the background rectangle. Enter a value from 0 to 255, where 0 is transparent and 255 is opaque. If Style passthrough (StylePassthrough) is set to enabled, leave blank to pass through the background style information in your input captions to your output captions. If Style passthrough is set to disabled, leave blank to use a value of 0 and remove all backgrounds from your output captions. Within your job settings, all of your DVB-Sub settings must be identical.

dvbSubDestinationSettings_fallbackFont :: Lens' DvbSubDestinationSettings (Maybe DvbSubSubtitleFallbackFont) Source #

Specify the font that you want the service to use for your burn-in captions when your input captions specify a font that MediaConvert doesn't support. When you set Fallback font (FallbackFont) to best match (BEST_MATCH), or leave blank, MediaConvert uses a supported font that most closely matches the font that your input captions specify. When there are multiple unsupported fonts in your input captions, MediaConvert matches each font with the supported font that matches best. When you explicitly choose a replacement font, MediaConvert uses that font to replace all unsupported fonts from your input.

dvbSubDestinationSettings_height :: Lens' DvbSubDestinationSettings (Maybe Natural) Source #

Specify the height, in pixels, of this set of DVB-Sub captions. The default value is 576 pixels. Related setting: When you use this setting, you must set DDS handling (ddsHandling) to a value other than None (NONE). All burn-in and DVB-Sub font settings must match.

dvbSubDestinationSettings_fontOpacity :: Lens' DvbSubDestinationSettings (Maybe Natural) Source #

Specify the opacity of the burned-in captions. 255 is opaque; 0 is transparent. Within your job settings, all of your DVB-Sub settings must be identical.

dvbSubDestinationSettings_shadowYOffset :: Lens' DvbSubDestinationSettings (Maybe Int) Source #

Specify the vertical offset of the shadow relative to the captions in pixels. A value of -2 would result in a shadow offset 2 pixels above the text. Leave Shadow y-offset (ShadowYOffset) blank and set Style passthrough (StylePassthrough) to enabled to use the shadow y-offset data from your input captions, if present. Within your job settings, all of your DVB-Sub settings must be identical.

dvbSubDestinationSettings_fontResolution :: Lens' DvbSubDestinationSettings (Maybe Natural) Source #

Specify the Font resolution (FontResolution) in DPI (dots per inch). Within your job settings, all of your DVB-Sub settings must be identical.

dvbSubDestinationSettings_yPosition :: Lens' DvbSubDestinationSettings (Maybe Natural) Source #

Specify the vertical position (YPosition) of the captions, relative to the top of the output in pixels. A value of 10 would result in the captions starting 10 pixels from the top of the output. If no explicit y_position is provided, the caption will be positioned towards the bottom of the output. Within your job settings, all of your DVB-Sub settings must be identical.

dvbSubDestinationSettings_ddsYCoordinate :: Lens' DvbSubDestinationSettings (Maybe Natural) Source #

Use this setting, along with DDS x-coordinate (ddsXCoordinate), to specify the upper left corner of the display definition segment (DDS) display window. With this setting, specify the distance, in pixels, between the top of the frame and the top of the DDS display window. Keep the default value, 0, to have MediaConvert automatically choose this offset. Related setting: When you use this setting, you must set DDS handling (ddsHandling) to a value other than None (NONE). MediaConvert uses these values to determine whether to write page position data to the DDS or to the page composition segment (PCS). All burn-in and DVB-Sub font settings must match.

dvbSubDestinationSettings_backgroundColor :: Lens' DvbSubDestinationSettings (Maybe DvbSubtitleBackgroundColor) Source #

Specify the color of the rectangle behind the captions. Leave background color (BackgroundColor) blank and set Style passthrough (StylePassthrough) to enabled to use the background color data from your input captions, if present.

dvbSubDestinationSettings_shadowXOffset :: Lens' DvbSubDestinationSettings (Maybe Int) Source #

Specify the horizontal offset of the shadow, relative to the captions in pixels. A value of -2 would result in a shadow offset 2 pixels to the left. Within your job settings, all of your DVB-Sub settings must be identical.

dvbSubDestinationSettings_fontSize :: Lens' DvbSubDestinationSettings (Maybe Natural) Source #

Specify the Font size (FontSize) in pixels. Must be a positive integer. Set to 0, or leave blank, for automatic font size. Within your job settings, all of your DVB-Sub settings must be identical.

dvbSubDestinationSettings_width :: Lens' DvbSubDestinationSettings (Maybe Natural) Source #

Specify the width, in pixels, of this set of DVB-Sub captions. The default value is 720 pixels. Related setting: When you use this setting, you must set DDS handling (ddsHandling) to a value other than None (NONE). All burn-in and DVB-Sub font settings must match.

dvbSubDestinationSettings_ddsXCoordinate :: Lens' DvbSubDestinationSettings (Maybe Natural) Source #

Use this setting, along with DDS y-coordinate (ddsYCoordinate), to specify the upper left corner of the display definition segment (DDS) display window. With this setting, specify the distance, in pixels, between the left side of the frame and the left side of the DDS display window. Keep the default value, 0, to have MediaConvert automatically choose this offset. Related setting: When you use this setting, you must set DDS handling (ddsHandling) to a value other than None (NONE). MediaConvert uses these values to determine whether to write page position data to the DDS or to the page composition segment (PCS). All burn-in and DVB-Sub font settings must match.

dvbSubDestinationSettings_xPosition :: Lens' DvbSubDestinationSettings (Maybe Natural) Source #

Specify the horizontal position (XPosition) of the captions, relative to the left side of the output, in pixels. A value of 10 would result in the captions starting 10 pixels from the left of the output. If no explicit x_position is provided, the horizontal caption position will be determined by the alignment parameter. Within your job settings, all of your DVB-Sub settings must be identical.

dvbSubDestinationSettings_teletextSpacing :: Lens' DvbSubDestinationSettings (Maybe DvbSubtitleTeletextSpacing) Source #

Specify whether the Text spacing (TextSpacing) in your captions is set by the captions grid, or varies depending on letter width. Choose fixed grid (FIXED_GRID) to conform to the spacing specified in the captions file more accurately. Choose proportional (PROPORTIONAL) to make the text easier to read for closed captions. Within your job settings, all of your DVB-Sub settings must be identical.

dvbSubDestinationSettings_fontScript :: Lens' DvbSubDestinationSettings (Maybe FontScript) Source #

Set Font script (FontScript) to Automatically determined (AUTOMATIC), or leave blank, to automatically determine the font script in your input captions. Otherwise, set to Simplified Chinese (HANS) or Traditional Chinese (HANT) if your input font script uses Simplified or Traditional Chinese. Within your job settings, all of your DVB-Sub settings must be identical.

dvbSubDestinationSettings_alignment :: Lens' DvbSubDestinationSettings (Maybe DvbSubtitleAlignment) Source #

Specify the alignment of your captions. If no explicit x_position is provided, setting alignment to centered will place the captions at the bottom center of the output. Similarly, setting a left alignment will align captions to the bottom left of the output. If x and y positions are given in conjunction with the alignment parameter, the font will be justified (either left or centered) relative to those coordinates. Within your job settings, all of your DVB-Sub settings must be identical.

dvbSubDestinationSettings_shadowOpacity :: Lens' DvbSubDestinationSettings (Maybe Natural) Source #

Specify the opacity of the shadow. Enter a value from 0 to 255, where 0 is transparent and 255 is opaque. If Style passthrough (StylePassthrough) is set to Enabled, leave Shadow opacity (ShadowOpacity) blank to pass through the shadow style information in your input captions to your output captions. If Style passthrough is set to disabled, leave blank to use a value of 0 and remove all shadows from your output captions. Within your job settings, all of your DVB-Sub settings must be identical.

dvbSubDestinationSettings_applyFontColor :: Lens' DvbSubDestinationSettings (Maybe DvbSubtitleApplyFontColor) Source #

Ignore this setting unless Style Passthrough (StylePassthrough) is set to Enabled and Font color (FontColor) is set to Black, Yellow, Red, Green, Blue, or Hex. Use Apply font color (ApplyFontColor) for additional font color controls. When you choose White text only (WHITE_TEXT_ONLY), or leave blank, your font color setting only applies to white text in your input captions. For example, if your font color setting is Yellow, and your input captions have red and white text, your output captions will have red and yellow text. When you choose ALL_TEXT, your font color setting applies to all of your output captions text.

dvbSubDestinationSettings_stylePassthrough :: Lens' DvbSubDestinationSettings (Maybe DvbSubtitleStylePassthrough) Source #

Set Style passthrough (StylePassthrough) to ENABLED to use the available style, color, and position information from your input captions. MediaConvert uses default settings for any missing style and position information in your input captions. Set Style passthrough to DISABLED, or leave blank, to ignore the style and position information from your input captions and use default settings: white text with black outlining, bottom-center positioning, and automatic sizing. Whether you set Style passthrough to enabled or not, you can also choose to manually override any of the individual style and position settings.

dvbSubDestinationSettings_outlineColor :: Lens' DvbSubDestinationSettings (Maybe DvbSubtitleOutlineColor) Source #

Specify font outline color. Leave Outline color (OutlineColor) blank and set Style passthrough (StylePassthrough) to enabled to use the font outline color data from your input captions, if present. Within your job settings, all of your DVB-Sub settings must be identical.

dvbSubDestinationSettings_outlineSize :: Lens' DvbSubDestinationSettings (Maybe Natural) Source #

Specify the Outline size (OutlineSize) of the caption text, in pixels. Leave Outline size blank and set Style passthrough (StylePassthrough) to enabled to use the outline size data from your input captions, if present. Within your job settings, all of your DVB-Sub settings must be identical.

dvbSubDestinationSettings_shadowColor :: Lens' DvbSubDestinationSettings (Maybe DvbSubtitleShadowColor) Source #

Specify the color of the shadow cast by the captions. Leave Shadow color (ShadowColor) blank and set Style passthrough (StylePassthrough) to enabled to use the shadow color data from your input captions, if present. Within your job settings, all of your DVB-Sub settings must be identical.

dvbSubDestinationSettings_hexFontColor :: Lens' DvbSubDestinationSettings (Maybe Text) Source #

Ignore this setting unless your Font color is set to Hex. Enter either six or eight hexadecimal digits, representing red, green, and blue, with two optional extra digits for alpha. For example, a value of 1122AABB is a red value of 0x11, a green value of 0x22, a blue value of 0xAA, and an alpha value of 0xBB.

dvbSubDestinationSettings_fontColor :: Lens' DvbSubDestinationSettings (Maybe DvbSubtitleFontColor) Source #

Specify the color of the captions text. Leave Font color (FontColor) blank and set Style passthrough (StylePassthrough) to enabled to use the font color data from your input captions, if present. Within your job settings, all of your DVB-Sub settings must be identical.

dvbSubDestinationSettings_subtitlingType :: Lens' DvbSubDestinationSettings (Maybe DvbSubtitlingType) Source #

Specify whether your DVB subtitles are standard or for the hearing impaired. Choose hearing impaired if your subtitles include audio descriptions and dialogue. Choose standard if your subtitles include only dialogue.
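
To make the lens-based workflow for these DVB-Sub style settings concrete, here is a minimal sketch that overrides a few of the numeric fields documented above. It assumes the newDvbSubDestinationSettings smart constructor from earlier in this module and the (&) and (?~) operators from the lens package (any van Laarhoven-compatible lens library, such as generic-lens, works the same way); the specific values are purely illustrative.

  {-# LANGUAGE OverloadedStrings #-}

  import Amazonka.MediaConvert.Types
  import Control.Lens ((&), (?~))

  -- A minimal sketch, assuming the newDvbSubDestinationSettings smart
  -- constructor documented earlier in this module. Remember that, within a
  -- job, all of your DVB-Sub settings must be identical.
  exampleDvbSubStyle :: DvbSubDestinationSettings
  exampleDvbSubStyle =
    newDvbSubDestinationSettings
      & dvbSubDestinationSettings_fontOpacity ?~ 255          -- fully opaque text
      & dvbSubDestinationSettings_fontSize ?~ 0                -- 0 means automatic font size
      & dvbSubDestinationSettings_shadowOpacity ?~ 0           -- transparent (no) shadow
      & dvbSubDestinationSettings_xPosition ?~ 10              -- 10 pixels from the left edge
      & dvbSubDestinationSettings_yPosition ?~ 10              -- 10 pixels from the top edge
      & dvbSubDestinationSettings_hexFontColor ?~ "1122AABB"   -- only read when Font color is Hex

Fields left unset remain Nothing, so MediaConvert falls back to its defaults or, with Style passthrough enabled, to the style carried in your input captions.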

DvbSubSourceSettings

data DvbSubSourceSettings Source #

DVB Sub Source Settings

See: newDvbSubSourceSettings smart constructor.

Constructors

DvbSubSourceSettings' 

Fields

  • pid :: Maybe Natural

    When using DVB-Sub with Burn-In or SMPTE-TT, use this PID for the source content. Unused for DVB-Sub passthrough. All DVB-Sub content is passed through, regardless of selectors.

Instances

Instances details
Eq DvbSubSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSourceSettings

Read DvbSubSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSourceSettings

Show DvbSubSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSourceSettings

Generic DvbSubSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSourceSettings

Associated Types

type Rep DvbSubSourceSettings :: Type -> Type #

NFData DvbSubSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSourceSettings

Methods

rnf :: DvbSubSourceSettings -> () #

Hashable DvbSubSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSourceSettings

ToJSON DvbSubSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSourceSettings

FromJSON DvbSubSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSourceSettings

type Rep DvbSubSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbSubSourceSettings

type Rep DvbSubSourceSettings = D1 ('MetaData "DvbSubSourceSettings" "Amazonka.MediaConvert.Types.DvbSubSourceSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "DvbSubSourceSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "pid") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))

newDvbSubSourceSettings :: DvbSubSourceSettings Source #

Create a value of DvbSubSourceSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:pid:DvbSubSourceSettings', dvbSubSourceSettings_pid - When using DVB-Sub with Burn-In or SMPTE-TT, use this PID for the source content. Unused for DVB-Sub passthrough. All DVB-Sub content is passed through, regardless of selectors.

dvbSubSourceSettings_pid :: Lens' DvbSubSourceSettings (Maybe Natural) Source #

When using DVB-Sub with Burn-In or SMPTE-TT, use this PID for the source content. Unused for DVB-Sub passthrough. All DVB-Sub content is passed through, regardless of selectors.
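
As a brief illustration of the constructor-plus-lens pattern described above, the sketch below selects an illustrative source PID; it assumes the (&) and (?~) operators from the lens package.

  import Amazonka.MediaConvert.Types
  import Control.Lens ((&), (?~))

  -- Select DVB-Sub content on an illustrative PID (32 here) for Burn-In or
  -- SMPTE-TT conversion; for DVB-Sub passthrough the PID is not used.
  exampleDvbSubSource :: DvbSubSourceSettings
  exampleDvbSubSource = newDvbSubSourceSettings & dvbSubSourceSettings_pid ?~ 32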

DvbTdtSettings

data DvbTdtSettings Source #

Use these settings to insert a DVB Time and Date Table (TDT) in the transport stream of this output. When you work directly in your JSON job specification, include this object only when your job has a transport stream output and the container settings contain the object M2tsSettings.

See: newDvbTdtSettings smart constructor.

Constructors

DvbTdtSettings' 

Fields

  • tdtInterval :: Maybe Natural

    The number of milliseconds between instances of this table in the output transport stream.

Instances

Instances details
Eq DvbTdtSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbTdtSettings

Read DvbTdtSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbTdtSettings

Show DvbTdtSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbTdtSettings

Generic DvbTdtSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbTdtSettings

Associated Types

type Rep DvbTdtSettings :: Type -> Type #

NFData DvbTdtSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbTdtSettings

Methods

rnf :: DvbTdtSettings -> () #

Hashable DvbTdtSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbTdtSettings

ToJSON DvbTdtSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbTdtSettings

FromJSON DvbTdtSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbTdtSettings

type Rep DvbTdtSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.DvbTdtSettings

type Rep DvbTdtSettings = D1 ('MetaData "DvbTdtSettings" "Amazonka.MediaConvert.Types.DvbTdtSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "DvbTdtSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "tdtInterval") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))

newDvbTdtSettings :: DvbTdtSettings Source #

Create a value of DvbTdtSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:tdtInterval:DvbTdtSettings', dvbTdtSettings_tdtInterval - The number of milliseconds between instances of this table in the output transport stream.

dvbTdtSettings_tdtInterval :: Lens' DvbTdtSettings (Maybe Natural) Source #

The number of milliseconds between instances of this table in the output transport stream.
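
The same pattern applies here. The sketch below inserts a TDT every three seconds; the interval value (3000, in milliseconds) is purely illustrative, and the (&) and (?~) operators are assumed to come from the lens package.

  import Amazonka.MediaConvert.Types
  import Control.Lens ((&), (?~))

  -- Emit a DVB Time and Date Table every 3000 ms in the transport stream output.
  exampleTdt :: DvbTdtSettings
  exampleTdt = newDvbTdtSettings & dvbTdtSettings_tdtInterval ?~ 3000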

Eac3AtmosSettings

data Eac3AtmosSettings Source #

Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value EAC3_ATMOS.

See: newEac3AtmosSettings smart constructor.

Constructors

Eac3AtmosSettings' 

Fields

  • stereoDownmix :: Maybe Eac3AtmosStereoDownmix

    Choose how the service does stereo downmixing. Default value: Not indicated (ATMOS_STORAGE_DDP_DMIXMOD_NOT_INDICATED) Related setting: To have MediaConvert use this value, keep the default value, Custom (SPECIFIED) for the setting Downmix control (DownmixControl). Otherwise, MediaConvert ignores Stereo downmix (StereoDownmix).

  • loRoCenterMixLevel :: Maybe Double

    Specify a value for the following Dolby Atmos setting: Left only/Right only center mix (Lo/Ro center). MediaConvert uses this value for downmixing. Default value: -3 dB (ATMOS_STORAGE_DDP_MIXLEV_MINUS_3_DB). Valid values: 3.0, 1.5, 0.0, -1.5, -3.0, -4.5, and -6.0. Related setting: How the service uses this value depends on the value that you choose for Stereo downmix (Eac3AtmosStereoDownmix). Related setting: To have MediaConvert use this value, keep the default value, Custom (SPECIFIED) for the setting Downmix control (DownmixControl). Otherwise, MediaConvert ignores Left only/Right only center (LoRoCenterMixLevel).

  • ltRtCenterMixLevel :: Maybe Double

    Specify a value for the following Dolby Atmos setting: Left total/Right total center mix (Lt/Rt center). MediaConvert uses this value for downmixing. Default value: -3 dB (ATMOS_STORAGE_DDP_MIXLEV_MINUS_3_DB) Valid values: 3.0, 1.5, 0.0, -1.5, -3.0, -4.5, and -6.0. Related setting: How the service uses this value depends on the value that you choose for Stereo downmix (Eac3AtmosStereoDownmix). Related setting: To have MediaConvert use this value, keep the default value, Custom (SPECIFIED) for the setting Downmix control (DownmixControl). Otherwise, MediaConvert ignores Left total/Right total center (LtRtCenterMixLevel).

  • dynamicRangeCompressionLine :: Maybe Eac3AtmosDynamicRangeCompressionLine

    Choose the Dolby dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby stream for the line operating mode. Default value: Film light (ATMOS_STORAGE_DDP_COMPR_FILM_LIGHT) Related setting: To have MediaConvert use the value you specify here, keep the default value, Custom (SPECIFIED) for the setting Dynamic range control (DynamicRangeControl). Otherwise, MediaConvert ignores Dynamic range compression line (DynamicRangeCompressionLine). For information about the Dolby DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.

  • ltRtSurroundMixLevel :: Maybe Double

    Specify a value for the following Dolby Atmos setting: Left total/Right total surround mix (Lt/Rt surround). MediaConvert uses this value for downmixing. Default value: -3 dB (ATMOS_STORAGE_DDP_MIXLEV_MINUS_3_DB) Valid values: -1.5, -3.0, -4.5, -6.0, and -60. The value -60 mutes the channel. Related setting: How the service uses this value depends on the value that you choose for Stereo downmix (Eac3AtmosStereoDownmix). Related setting: To have MediaConvert use this value, keep the default value, Custom (SPECIFIED) for the setting Downmix control (DownmixControl). Otherwise, the service ignores Left total/Right total surround (LtRtSurroundMixLevel).

  • loRoSurroundMixLevel :: Maybe Double

    Specify a value for the following Dolby Atmos setting: Left only/Right only surround mix (Lo/Ro surround). MediaConvert uses this value for downmixing. Default value: -3 dB (ATMOS_STORAGE_DDP_MIXLEV_MINUS_3_DB). Valid values: -1.5, -3.0, -4.5, -6.0, and -60. The value -60 mutes the channel. Related setting: How the service uses this value depends on the value that you choose for Stereo downmix (Eac3AtmosStereoDownmix). Related setting: To have MediaConvert use this value, keep the default value, Custom (SPECIFIED) for the setting Downmix control (DownmixControl). Otherwise, MediaConvert ignores Left only/Right only surround (LoRoSurroundMixLevel).

  • dynamicRangeControl :: Maybe Eac3AtmosDynamicRangeControl

    Specify whether MediaConvert should use any dynamic range control metadata from your input file. Keep the default value, Custom (SPECIFIED), to provide dynamic range control values in your job settings. Choose Follow source (INITIALIZE_FROM_SOURCE) to use the metadata from your input. Related settings--Use these settings to specify your dynamic range control values: Dynamic range compression line (DynamicRangeCompressionLine) and Dynamic range compression RF (DynamicRangeCompressionRf). When you keep the value Custom (SPECIFIED) for Dynamic range control (DynamicRangeControl) and you don't specify values for the related settings, MediaConvert uses default values for those settings.

  • bitstreamMode :: Maybe Eac3AtmosBitstreamMode

    Specify the bitstream mode for the E-AC-3 stream that the encoder emits. For more information about the EAC3 bitstream mode, see ATSC A/52-2012 (Annex E).

  • dynamicRangeCompressionRf :: Maybe Eac3AtmosDynamicRangeCompressionRf

    Choose the Dolby dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby stream for the RF operating mode. Default value: Film light (ATMOS_STORAGE_DDP_COMPR_FILM_LIGHT) Related setting: To have MediaConvert use the value you specify here, keep the default value, Custom (SPECIFIED) for the setting Dynamic range control (DynamicRangeControl). Otherwise, MediaConvert ignores Dynamic range compression RF (DynamicRangeCompressionRf). For information about the Dolby DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.

  • codingMode :: Maybe Eac3AtmosCodingMode

    The coding mode for Dolby Digital Plus JOC (Atmos).

  • sampleRate :: Maybe Natural

    This value is always 48000. It represents the sample rate in Hz.

  • speechThreshold :: Maybe Natural

    Specify the percentage of audio content, from 0% to 100%, that must be speech in order for the encoder to use the measured speech loudness as the overall program loudness. Default value: 15%

  • bitrate :: Maybe Natural

    Specify the average bitrate for this output in bits per second. Valid values: 384k, 448k, 576k, 640k, 768k, 1024k Default value: 448k Note that MediaConvert supports 384k only with channel-based immersive (CBI) 7.1.4 and 5.1.4 inputs. For CBI 9.1.6 and other input types, MediaConvert automatically increases your output bitrate to 448k.

  • dialogueIntelligence :: Maybe Eac3AtmosDialogueIntelligence

    Enable Dolby Dialogue Intelligence to adjust loudness based on dialogue analysis.

  • meteringMode :: Maybe Eac3AtmosMeteringMode

    Choose how the service meters the loudness of your audio.

  • surroundExMode :: Maybe Eac3AtmosSurroundExMode

    Specify whether your input audio has an additional center rear surround channel matrix encoded into your left and right surround channels.

  • downmixControl :: Maybe Eac3AtmosDownmixControl

    Specify whether MediaConvert should use any downmix metadata from your input file. Keep the default value, Custom (SPECIFIED) to provide downmix values in your job settings. Choose Follow source (INITIALIZE_FROM_SOURCE) to use the metadata from your input. Related settings--Use these settings to specify your downmix values: Left only/Right only surround (LoRoSurroundMixLevel), Left total/Right total surround (LtRtSurroundMixLevel), Left total/Right total center (LtRtCenterMixLevel), Left only/Right only center (LoRoCenterMixLevel), and Stereo downmix (StereoDownmix). When you keep Custom (SPECIFIED) for Downmix control (DownmixControl) and you don't specify values for the related settings, MediaConvert uses default values for those settings.

Instances

Instances details
Eq Eac3AtmosSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSettings

Read Eac3AtmosSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSettings

Show Eac3AtmosSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSettings

Generic Eac3AtmosSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSettings

Associated Types

type Rep Eac3AtmosSettings :: Type -> Type #

NFData Eac3AtmosSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSettings

Methods

rnf :: Eac3AtmosSettings -> () #

Hashable Eac3AtmosSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSettings

ToJSON Eac3AtmosSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSettings

FromJSON Eac3AtmosSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSettings

type Rep Eac3AtmosSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3AtmosSettings

type Rep Eac3AtmosSettings = D1 ('MetaData "Eac3AtmosSettings" "Amazonka.MediaConvert.Types.Eac3AtmosSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "Eac3AtmosSettings'" 'PrefixI 'True) ((((S1 ('MetaSel ('Just "stereoDownmix") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3AtmosStereoDownmix)) :*: S1 ('MetaSel ('Just "loRoCenterMixLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double))) :*: (S1 ('MetaSel ('Just "ltRtCenterMixLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: S1 ('MetaSel ('Just "dynamicRangeCompressionLine") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3AtmosDynamicRangeCompressionLine)))) :*: ((S1 ('MetaSel ('Just "ltRtSurroundMixLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: S1 ('MetaSel ('Just "loRoSurroundMixLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double))) :*: (S1 ('MetaSel ('Just "dynamicRangeControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3AtmosDynamicRangeControl)) :*: S1 ('MetaSel ('Just "bitstreamMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3AtmosBitstreamMode))))) :*: (((S1 ('MetaSel ('Just "dynamicRangeCompressionRf") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3AtmosDynamicRangeCompressionRf)) :*: S1 ('MetaSel ('Just "codingMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3AtmosCodingMode))) :*: (S1 ('MetaSel ('Just "sampleRate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "speechThreshold") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))) :*: ((S1 ('MetaSel ('Just "bitrate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "dialogueIntelligence") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3AtmosDialogueIntelligence))) :*: (S1 ('MetaSel ('Just "meteringMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3AtmosMeteringMode)) :*: (S1 ('MetaSel ('Just "surroundExMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3AtmosSurroundExMode)) :*: S1 ('MetaSel ('Just "downmixControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3AtmosDownmixControl))))))))

newEac3AtmosSettings :: Eac3AtmosSettings Source #

Create a value of Eac3AtmosSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:stereoDownmix:Eac3AtmosSettings', eac3AtmosSettings_stereoDownmix - Choose how the service does stereo downmixing. Default value: Not indicated (ATMOS_STORAGE_DDP_DMIXMOD_NOT_INDICATED) Related setting: To have MediaConvert use this value, keep the default value, Custom (SPECIFIED) for the setting Downmix control (DownmixControl). Otherwise, MediaConvert ignores Stereo downmix (StereoDownmix).

$sel:loRoCenterMixLevel:Eac3AtmosSettings', eac3AtmosSettings_loRoCenterMixLevel - Specify a value for the following Dolby Atmos setting: Left only/Right only center mix (Lo/Ro center). MediaConvert uses this value for downmixing. Default value: -3 dB (ATMOS_STORAGE_DDP_MIXLEV_MINUS_3_DB). Valid values: 3.0, 1.5, 0.0, -1.5, -3.0, -4.5, and -6.0. Related setting: How the service uses this value depends on the value that you choose for Stereo downmix (Eac3AtmosStereoDownmix). Related setting: To have MediaConvert use this value, keep the default value, Custom (SPECIFIED) for the setting Downmix control (DownmixControl). Otherwise, MediaConvert ignores Left only/Right only center (LoRoCenterMixLevel).

$sel:ltRtCenterMixLevel:Eac3AtmosSettings', eac3AtmosSettings_ltRtCenterMixLevel - Specify a value for the following Dolby Atmos setting: Left total/Right total center mix (Lt/Rt center). MediaConvert uses this value for downmixing. Default value: -3 dB (ATMOS_STORAGE_DDP_MIXLEV_MINUS_3_DB) Valid values: 3.0, 1.5, 0.0, -1.5, -3.0, -4.5, and -6.0. Related setting: How the service uses this value depends on the value that you choose for Stereo downmix (Eac3AtmosStereoDownmix). Related setting: To have MediaConvert use this value, keep the default value, Custom (SPECIFIED) for the setting Downmix control (DownmixControl). Otherwise, MediaConvert ignores Left total/Right total center (LtRtCenterMixLevel).

$sel:dynamicRangeCompressionLine:Eac3AtmosSettings', eac3AtmosSettings_dynamicRangeCompressionLine - Choose the Dolby dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby stream for the line operating mode. Default value: Film light (ATMOS_STORAGE_DDP_COMPR_FILM_LIGHT) Related setting: To have MediaConvert use the value you specify here, keep the default value, Custom (SPECIFIED) for the setting Dynamic range control (DynamicRangeControl). Otherwise, MediaConvert ignores Dynamic range compression line (DynamicRangeCompressionLine). For information about the Dolby DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.

$sel:ltRtSurroundMixLevel:Eac3AtmosSettings', eac3AtmosSettings_ltRtSurroundMixLevel - Specify a value for the following Dolby Atmos setting: Left total/Right total surround mix (Lt/Rt surround). MediaConvert uses this value for downmixing. Default value: -3 dB (ATMOS_STORAGE_DDP_MIXLEV_MINUS_3_DB) Valid values: -1.5, -3.0, -4.5, -6.0, and -60. The value -60 mutes the channel. Related setting: How the service uses this value depends on the value that you choose for Stereo downmix (Eac3AtmosStereoDownmix). Related setting: To have MediaConvert use this value, keep the default value, Custom (SPECIFIED) for the setting Downmix control (DownmixControl). Otherwise, the service ignores Left total/Right total surround (LtRtSurroundMixLevel).

$sel:loRoSurroundMixLevel:Eac3AtmosSettings', eac3AtmosSettings_loRoSurroundMixLevel - Specify a value for the following Dolby Atmos setting: Left only/Right only surround mix (Lo/Ro surround). MediaConvert uses this value for downmixing. Default value: -3 dB (ATMOS_STORAGE_DDP_MIXLEV_MINUS_3_DB). Valid values: -1.5, -3.0, -4.5, -6.0, and -60. The value -60 mutes the channel. Related setting: How the service uses this value depends on the value that you choose for Stereo downmix (Eac3AtmosStereoDownmix). Related setting: To have MediaConvert use this value, keep the default value, Custom (SPECIFIED) for the setting Downmix control (DownmixControl). Otherwise, MediaConvert ignores Left only/Right only surround (LoRoSurroundMixLevel).

$sel:dynamicRangeControl:Eac3AtmosSettings', eac3AtmosSettings_dynamicRangeControl - Specify whether MediaConvert should use any dynamic range control metadata from your input file. Keep the default value, Custom (SPECIFIED), to provide dynamic range control values in your job settings. Choose Follow source (INITIALIZE_FROM_SOURCE) to use the metadata from your input. Related settings--Use these settings to specify your dynamic range control values: Dynamic range compression line (DynamicRangeCompressionLine) and Dynamic range compression RF (DynamicRangeCompressionRf). When you keep the value Custom (SPECIFIED) for Dynamic range control (DynamicRangeControl) and you don't specify values for the related settings, MediaConvert uses default values for those settings.

$sel:bitstreamMode:Eac3AtmosSettings', eac3AtmosSettings_bitstreamMode - Specify the bitstream mode for the E-AC-3 stream that the encoder emits. For more information about the EAC3 bitstream mode, see ATSC A/52-2012 (Annex E).

$sel:dynamicRangeCompressionRf:Eac3AtmosSettings', eac3AtmosSettings_dynamicRangeCompressionRf - Choose the Dolby dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby stream for the RF operating mode. Default value: Film light (ATMOS_STORAGE_DDP_COMPR_FILM_LIGHT) Related setting: To have MediaConvert use the value you specify here, keep the default value, Custom (SPECIFIED) for the setting Dynamic range control (DynamicRangeControl). Otherwise, MediaConvert ignores Dynamic range compression RF (DynamicRangeCompressionRf). For information about the Dolby DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.

$sel:codingMode:Eac3AtmosSettings', eac3AtmosSettings_codingMode - The coding mode for Dolby Digital Plus JOC (Atmos).

$sel:sampleRate:Eac3AtmosSettings', eac3AtmosSettings_sampleRate - This value is always 48000. It represents the sample rate in Hz.

$sel:speechThreshold:Eac3AtmosSettings', eac3AtmosSettings_speechThreshold - Specify the percentage of audio content, from 0% to 100%, that must be speech in order for the encoder to use the measured speech loudness as the overall program loudness. Default value: 15%

$sel:bitrate:Eac3AtmosSettings', eac3AtmosSettings_bitrate - Specify the average bitrate for this output in bits per second. Valid values: 384k, 448k, 576k, 640k, 768k, 1024k Default value: 448k Note that MediaConvert supports 384k only with channel-based immersive (CBI) 7.1.4 and 5.1.4 inputs. For CBI 9.1.6 and other input types, MediaConvert automatically increases your output bitrate to 448k.

$sel:dialogueIntelligence:Eac3AtmosSettings', eac3AtmosSettings_dialogueIntelligence - Enable Dolby Dialogue Intelligence to adjust loudness based on dialogue analysis.

$sel:meteringMode:Eac3AtmosSettings', eac3AtmosSettings_meteringMode - Choose how the service meters the loudness of your audio.

$sel:surroundExMode:Eac3AtmosSettings', eac3AtmosSettings_surroundExMode - Specify whether your input audio has an additional center rear surround channel matrix encoded into your left and right surround channels.

$sel:downmixControl:Eac3AtmosSettings', eac3AtmosSettings_downmixControl - Specify whether MediaConvert should use any downmix metadata from your input file. Keep the default value, Custom (SPECIFIED) to provide downmix values in your job settings. Choose Follow source (INITIALIZE_FROM_SOURCE) to use the metadata from your input. Related settings--Use these settings to specify your downmix values: Left only/Right only surround (LoRoSurroundMixLevel), Left total/Right total surround (LtRtSurroundMixLevel), Left total/Right total center (LtRtCenterMixLevel), Left only/Right only center (LoRoCenterMixLevel), and Stereo downmix (StereoDownmix). When you keep Custom (SPECIFIED) for Downmix control (DownmixControl) and you don't specify values for the related settings, MediaConvert uses default values for those settings.

eac3AtmosSettings_stereoDownmix :: Lens' Eac3AtmosSettings (Maybe Eac3AtmosStereoDownmix) Source #

Choose how the service does stereo downmixing. Default value: Not indicated (ATMOS_STORAGE_DDP_DMIXMOD_NOT_INDICATED) Related setting: To have MediaConvert use this value, keep the default value, Custom (SPECIFIED) for the setting Downmix control (DownmixControl). Otherwise, MediaConvert ignores Stereo downmix (StereoDownmix).

eac3AtmosSettings_loRoCenterMixLevel :: Lens' Eac3AtmosSettings (Maybe Double) Source #

Specify a value for the following Dolby Atmos setting: Left only/Right only center mix (Lo/Ro center). MediaConvert uses this value for downmixing. Default value: -3 dB (ATMOS_STORAGE_DDP_MIXLEV_MINUS_3_DB). Valid values: 3.0, 1.5, 0.0, -1.5, -3.0, -4.5, and -6.0. Related setting: How the service uses this value depends on the value that you choose for Stereo downmix (Eac3AtmosStereoDownmix). Related setting: To have MediaConvert use this value, keep the default value, Custom (SPECIFIED) for the setting Downmix control (DownmixControl). Otherwise, MediaConvert ignores Left only/Right only center (LoRoCenterMixLevel).

eac3AtmosSettings_ltRtCenterMixLevel :: Lens' Eac3AtmosSettings (Maybe Double) Source #

Specify a value for the following Dolby Atmos setting: Left total/Right total center mix (Lt/Rt center). MediaConvert uses this value for downmixing. Default value: -3 dB (ATMOS_STORAGE_DDP_MIXLEV_MINUS_3_DB) Valid values: 3.0, 1.5, 0.0, -1.5, -3.0, -4.5, and -6.0. Related setting: How the service uses this value depends on the value that you choose for Stereo downmix (Eac3AtmosStereoDownmix). Related setting: To have MediaConvert use this value, keep the default value, Custom (SPECIFIED) for the setting Downmix control (DownmixControl). Otherwise, MediaConvert ignores Left total/Right total center (LtRtCenterMixLevel).

eac3AtmosSettings_dynamicRangeCompressionLine :: Lens' Eac3AtmosSettings (Maybe Eac3AtmosDynamicRangeCompressionLine) Source #

Choose the Dolby dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby stream for the line operating mode. Default value: Film light (ATMOS_STORAGE_DDP_COMPR_FILM_LIGHT) Related setting: To have MediaConvert use the value you specify here, keep the default value, Custom (SPECIFIED) for the setting Dynamic range control (DynamicRangeControl). Otherwise, MediaConvert ignores Dynamic range compression line (DynamicRangeCompressionLine). For information about the Dolby DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.

eac3AtmosSettings_ltRtSurroundMixLevel :: Lens' Eac3AtmosSettings (Maybe Double) Source #

Specify a value for the following Dolby Atmos setting: Left total/Right total surround mix (Lt/Rt surround). MediaConvert uses this value for downmixing. Default value: -3 dB (ATMOS_STORAGE_DDP_MIXLEV_MINUS_3_DB) Valid values: -1.5, -3.0, -4.5, -6.0, and -60. The value -60 mutes the channel. Related setting: How the service uses this value depends on the value that you choose for Stereo downmix (Eac3AtmosStereoDownmix). Related setting: To have MediaConvert use this value, keep the default value, Custom (SPECIFIED) for the setting Downmix control (DownmixControl). Otherwise, the service ignores Left total/Right total surround (LtRtSurroundMixLevel).

eac3AtmosSettings_loRoSurroundMixLevel :: Lens' Eac3AtmosSettings (Maybe Double) Source #

Specify a value for the following Dolby Atmos setting: Left only/Right only surround mix (Lo/Ro surround). MediaConvert uses this value for downmixing. Default value: -3 dB (ATMOS_STORAGE_DDP_MIXLEV_MINUS_3_DB). Valid values: -1.5, -3.0, -4.5, -6.0, and -60. The value -60 mutes the channel. Related setting: How the service uses this value depends on the value that you choose for Stereo downmix (Eac3AtmosStereoDownmix). Related setting: To have MediaConvert use this value, keep the default value, Custom (SPECIFIED) for the setting Downmix control (DownmixControl). Otherwise, MediaConvert ignores Left only/Right only surround (LoRoSurroundMixLevel).

eac3AtmosSettings_dynamicRangeControl :: Lens' Eac3AtmosSettings (Maybe Eac3AtmosDynamicRangeControl) Source #

Specify whether MediaConvert should use any dynamic range control metadata from your input file. Keep the default value, Custom (SPECIFIED), to provide dynamic range control values in your job settings. Choose Follow source (INITIALIZE_FROM_SOURCE) to use the metadata from your input. Related settings--Use these settings to specify your dynamic range control values: Dynamic range compression line (DynamicRangeCompressionLine) and Dynamic range compression RF (DynamicRangeCompressionRf). When you keep the value Custom (SPECIFIED) for Dynamic range control (DynamicRangeControl) and you don't specify values for the related settings, MediaConvert uses default values for those settings.

eac3AtmosSettings_bitstreamMode :: Lens' Eac3AtmosSettings (Maybe Eac3AtmosBitstreamMode) Source #

Specify the bitstream mode for the E-AC-3 stream that the encoder emits. For more information about the EAC3 bitstream mode, see ATSC A/52-2012 (Annex E).

eac3AtmosSettings_dynamicRangeCompressionRf :: Lens' Eac3AtmosSettings (Maybe Eac3AtmosDynamicRangeCompressionRf) Source #

Choose the Dolby dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby stream for the RF operating mode. Default value: Film light (ATMOS_STORAGE_DDP_COMPR_FILM_LIGHT) Related setting: To have MediaConvert use the value you specify here, keep the default value, Custom (SPECIFIED) for the setting Dynamic range control (DynamicRangeControl). Otherwise, MediaConvert ignores Dynamic range compression RF (DynamicRangeCompressionRf). For information about the Dolby DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.

eac3AtmosSettings_codingMode :: Lens' Eac3AtmosSettings (Maybe Eac3AtmosCodingMode) Source #

The coding mode for Dolby Digital Plus JOC (Atmos).

eac3AtmosSettings_sampleRate :: Lens' Eac3AtmosSettings (Maybe Natural) Source #

This value is always 48000. It represents the sample rate in Hz.

eac3AtmosSettings_speechThreshold :: Lens' Eac3AtmosSettings (Maybe Natural) Source #

Specify the percentage of audio content, from 0% to 100%, that must be speech in order for the encoder to use the measured speech loudness as the overall program loudness. Default value: 15%

eac3AtmosSettings_bitrate :: Lens' Eac3AtmosSettings (Maybe Natural) Source #

Specify the average bitrate for this output in bits per second. Valid values: 384k, 448k, 576k, 640k, 768k, 1024k Default value: 448k Note that MediaConvert supports 384k only with channel-based immersive (CBI) 7.1.4 and 5.1.4 inputs. For CBI 9.1.6 and other input types, MediaConvert automatically increases your output bitrate to 448k.

eac3AtmosSettings_dialogueIntelligence :: Lens' Eac3AtmosSettings (Maybe Eac3AtmosDialogueIntelligence) Source #

Enable Dolby Dialogue Intelligence to adjust loudness based on dialogue analysis.

eac3AtmosSettings_meteringMode :: Lens' Eac3AtmosSettings (Maybe Eac3AtmosMeteringMode) Source #

Choose how the service meters the loudness of your audio.

eac3AtmosSettings_surroundExMode :: Lens' Eac3AtmosSettings (Maybe Eac3AtmosSurroundExMode) Source #

Specify whether your input audio has an additional center rear surround channel matrix encoded into your left and right surround channels.

eac3AtmosSettings_downmixControl :: Lens' Eac3AtmosSettings (Maybe Eac3AtmosDownmixControl) Source #

Specify whether MediaConvert should use any downmix metadata from your input file. Keep the default value, Custom (SPECIFIED) to provide downmix values in your job settings. Choose Follow source (INITIALIZE_FROM_SOURCE) to use the metadata from your input. Related settings--Use these settings to specify your downmix values: Left only/Right only surround (LoRoSurroundMixLevel), Left total/Right total surround (LtRtSurroundMixLevel), Left total/Right total center (LtRtCenterMixLevel), Left only/Right only center (LoRoCenterMixLevel), and Stereo downmix (StereoDownmix). When you keep Custom (SPECIFIED) for Downmix control (DownmixControl) and you don't specify values for the related settings, MediaConvert uses default values for those settings.
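
To tie the lenses above together, here is a minimal sketch of an Eac3AtmosSettings value that overrides only numeric fields. The values shown simply mirror the defaults documented above and are illustrative; enum-valued settings such as Downmix control are left untouched, and the (&) and (?~) operators are assumed to come from the lens package.

  import Amazonka.MediaConvert.Types
  import Control.Lens ((&), (?~))

  -- A minimal sketch: start from newEac3AtmosSettings and override a few
  -- numeric fields with the lenses above. The values mirror the documented
  -- defaults and are illustrative only.
  exampleAtmos :: Eac3AtmosSettings
  exampleAtmos =
    newEac3AtmosSettings
      & eac3AtmosSettings_bitrate ?~ 448000             -- 448k bits per second (documented default)
      & eac3AtmosSettings_sampleRate ?~ 48000           -- always 48000 Hz
      & eac3AtmosSettings_speechThreshold ?~ 15         -- percent; documented default
      & eac3AtmosSettings_loRoCenterMixLevel ?~ (-3.0)  -- -3 dB; documented default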

Eac3Settings

data Eac3Settings Source #

Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value EAC3.

See: newEac3Settings smart constructor.

Constructors

Eac3Settings' 

Fields

  • stereoDownmix :: Maybe Eac3StereoDownmix

    Choose how the service does stereo downmixing. This setting only applies if you keep the default value of 3/2 - L, R, C, Ls, Rs (CODING_MODE_3_2) for the setting Coding mode (Eac3CodingMode). If you choose a different value for Coding mode, the service ignores Stereo downmix (Eac3StereoDownmix).

  • loRoCenterMixLevel :: Maybe Double

    Specify a value for the following Dolby Digital Plus setting: Left only/Right only center mix (Lo/Ro center). MediaConvert uses this value for downmixing. How the service uses this value depends on the value that you choose for Stereo downmix (Eac3StereoDownmix). Valid values: 3.0, 1.5, 0.0, -1.5, -3.0, -4.5, -6.0, and -60. The value -60 mutes the channel. This setting applies only if you keep the default value of 3/2 - L, R, C, Ls, Rs (CODING_MODE_3_2) for the setting Coding mode (Eac3CodingMode). If you choose a different value for Coding mode, the service ignores Left only/Right only center (loRoCenterMixLevel).

  • ltRtCenterMixLevel :: Maybe Double

    Specify a value for the following Dolby Digital Plus setting: Left total/Right total center mix (Lt/Rt center). MediaConvert uses this value for downmixing. How the service uses this value depends on the value that you choose for Stereo downmix (Eac3StereoDownmix). Valid values: 3.0, 1.5, 0.0, -1.5, -3.0, -4.5, -6.0, and -60. The value -60 mutes the channel. This setting applies only if you keep the default value of 3/2 - L, R, C, Ls, Rs (CODING_MODE_3_2) for the setting Coding mode (Eac3CodingMode). If you choose a different value for Coding mode, the service ignores Left total/Right total center (ltRtCenterMixLevel).

  • lfeFilter :: Maybe Eac3LfeFilter

    Applies a 120Hz lowpass filter to the LFE channel prior to encoding. Only valid with 3_2_LFE coding mode.

  • dynamicRangeCompressionLine :: Maybe Eac3DynamicRangeCompressionLine

    Choose the Dolby Digital dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby Digital stream for the line operating mode. Related setting: When you use this setting, MediaConvert ignores any value you provide for Dynamic range compression profile (DynamicRangeCompressionProfile). For information about the Dolby Digital DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.

  • ltRtSurroundMixLevel :: Maybe Double

    Specify a value for the following Dolby Digital Plus setting: Left total/Right total surround mix (Lt/Rt surround). MediaConvert uses this value for downmixing. How the service uses this value depends on the value that you choose for Stereo downmix (Eac3StereoDownmix). Valid values: -1.5, -3.0, -4.5, -6.0, and -60. The value -60 mutes the channel. This setting applies only if you keep the default value of 3/2 - L, R, C, Ls, Rs (CODING_MODE_3_2) for the setting Coding mode (Eac3CodingMode). If you choose a different value for Coding mode, the service ignores Left total/Right total surround (ltRtSurroundMixLevel).

  • metadataControl :: Maybe Eac3MetadataControl

    When set to FOLLOW_INPUT, encoder metadata will be sourced from the DD, DD+, or DolbyE decoder that supplied this audio data. If audio was not supplied from one of these streams, then the static metadata settings will be used.

  • loRoSurroundMixLevel :: Maybe Double

    Specify a value for the following Dolby Digital Plus setting: Left only/Right only surround mix (Lo/Ro surround). MediaConvert uses this value for downmixing. How the service uses this value depends on the value that you choose for Stereo downmix (Eac3StereoDownmix). Valid values: -1.5, -3.0, -4.5, -6.0, and -60. The value -60 mutes the channel. This setting applies only if you keep the default value of 3/2 - L, R, C, Ls, Rs (CODING_MODE_3_2) for the setting Coding mode (Eac3CodingMode). If you choose a different value for Coding mode, the service ignores Left only/Right only surround (loRoSurroundMixLevel).

  • surroundMode :: Maybe Eac3SurroundMode

    When encoding 2/0 audio, sets whether Dolby Surround is matrix encoded into the two channels.

  • attenuationControl :: Maybe Eac3AttenuationControl

    If set to ATTENUATE_3_DB, applies a 3 dB attenuation to the surround channels. Only used for 3/2 coding mode.

  • passthroughControl :: Maybe Eac3PassthroughControl

    When set to WHEN_POSSIBLE, input DD+ audio will be passed through if it is present on the input. This detection is dynamic over the life of the transcode. Inputs that alternate between DD+ and non-DD+ content will have a consistent DD+ output as the system alternates between passthrough and encoding.

  • bitstreamMode :: Maybe Eac3BitstreamMode

    Specify the bitstream mode for the E-AC-3 stream that the encoder emits. For more information about the EAC3 bitstream mode, see ATSC A/52-2012 (Annex E).

  • lfeControl :: Maybe Eac3LfeControl

    When encoding 3/2 audio, controls whether the LFE channel is enabled.

  • dynamicRangeCompressionRf :: Maybe Eac3DynamicRangeCompressionRf

    Choose the Dolby Digital dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby Digital stream for the RF operating mode. Related setting: When you use this setting, MediaConvert ignores any value you provide for Dynamic range compression profile (DynamicRangeCompressionProfile). For information about the Dolby Digital DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.

  • codingMode :: Maybe Eac3CodingMode

    Dolby Digital Plus coding mode. Determines number of channels.

  • sampleRate :: Maybe Natural

    This value is always 48000. It represents the sample rate in Hz.

  • dcFilter :: Maybe Eac3DcFilter

    Activates a DC highpass filter for all input channels.

  • bitrate :: Maybe Natural

    Specify the average bitrate in bits per second. Valid bitrates depend on the coding mode.

  • phaseControl :: Maybe Eac3PhaseControl

    Controls the amount of phase-shift applied to the surround channels. Only used for 3/2 coding mode.

  • surroundExMode :: Maybe Eac3SurroundExMode

    When encoding 3/2 audio, sets whether an extra center back surround channel is matrix encoded into the left and right surround channels.

  • dialnorm :: Maybe Natural

    Sets the dialnorm for the output. If blank and input audio is Dolby Digital Plus, dialnorm will be passed through.

Instances

Instances details
Eq Eac3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3Settings

Read Eac3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3Settings

Show Eac3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3Settings

Generic Eac3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3Settings

Associated Types

type Rep Eac3Settings :: Type -> Type #

NFData Eac3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3Settings

Methods

rnf :: Eac3Settings -> () #

Hashable Eac3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3Settings

ToJSON Eac3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3Settings

FromJSON Eac3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3Settings

type Rep Eac3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Eac3Settings

type Rep Eac3Settings = D1 ('MetaData "Eac3Settings" "Amazonka.MediaConvert.Types.Eac3Settings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "Eac3Settings'" 'PrefixI 'True) ((((S1 ('MetaSel ('Just "stereoDownmix") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3StereoDownmix)) :*: S1 ('MetaSel ('Just "loRoCenterMixLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double))) :*: (S1 ('MetaSel ('Just "ltRtCenterMixLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: (S1 ('MetaSel ('Just "lfeFilter") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3LfeFilter)) :*: S1 ('MetaSel ('Just "dynamicRangeCompressionLine") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3DynamicRangeCompressionLine))))) :*: ((S1 ('MetaSel ('Just "ltRtSurroundMixLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: S1 ('MetaSel ('Just "metadataControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3MetadataControl))) :*: (S1 ('MetaSel ('Just "loRoSurroundMixLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: (S1 ('MetaSel ('Just "surroundMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3SurroundMode)) :*: S1 ('MetaSel ('Just "attenuationControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3AttenuationControl)))))) :*: (((S1 ('MetaSel ('Just "passthroughControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3PassthroughControl)) :*: S1 ('MetaSel ('Just "bitstreamMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3BitstreamMode))) :*: (S1 ('MetaSel ('Just "lfeControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3LfeControl)) :*: (S1 ('MetaSel ('Just "dynamicRangeCompressionRf") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3DynamicRangeCompressionRf)) :*: S1 ('MetaSel ('Just "codingMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3CodingMode))))) :*: ((S1 ('MetaSel ('Just "sampleRate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "dcFilter") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3DcFilter)) :*: S1 ('MetaSel ('Just "bitrate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))) :*: (S1 ('MetaSel ('Just "phaseControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3PhaseControl)) :*: (S1 ('MetaSel ('Just "surroundExMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Eac3SurroundExMode)) :*: S1 ('MetaSel ('Just "dialnorm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))))))

newEac3Settings :: Eac3Settings Source #

Create a value of Eac3Settings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:stereoDownmix:Eac3Settings', eac3Settings_stereoDownmix - Choose how the service does stereo downmixing. This setting only applies if you keep the default value of 3/2 - L, R, C, Ls, Rs (CODING_MODE_3_2) for the setting Coding mode (Eac3CodingMode). If you choose a different value for Coding mode, the service ignores Stereo downmix (Eac3StereoDownmix).

$sel:loRoCenterMixLevel:Eac3Settings', eac3Settings_loRoCenterMixLevel - Specify a value for the following Dolby Digital Plus setting: Left only/Right only center mix (Lo/Ro center). MediaConvert uses this value for downmixing. How the service uses this value depends on the value that you choose for Stereo downmix (Eac3StereoDownmix). Valid values: 3.0, 1.5, 0.0, -1.5, -3.0, -4.5, -6.0, and -60. The value -60 mutes the channel. This setting applies only if you keep the default value of 3/2 - L, R, C, Ls, Rs (CODING_MODE_3_2) for the setting Coding mode (Eac3CodingMode). If you choose a different value for Coding mode, the service ignores Left only/Right only center (loRoCenterMixLevel).

$sel:ltRtCenterMixLevel:Eac3Settings', eac3Settings_ltRtCenterMixLevel - Specify a value for the following Dolby Digital Plus setting: Left total/Right total center mix (Lt/Rt center). MediaConvert uses this value for downmixing. How the service uses this value depends on the value that you choose for Stereo downmix (Eac3StereoDownmix). Valid values: 3.0, 1.5, 0.0, -1.5, -3.0, -4.5, -6.0, and -60. The value -60 mutes the channel. This setting applies only if you keep the default value of 3/2 - L, R, C, Ls, Rs (CODING_MODE_3_2) for the setting Coding mode (Eac3CodingMode). If you choose a different value for Coding mode, the service ignores Left total/Right total center (ltRtCenterMixLevel).

$sel:lfeFilter:Eac3Settings', eac3Settings_lfeFilter - Applies a 120Hz lowpass filter to the LFE channel prior to encoding. Only valid with 3_2_LFE coding mode.

$sel:dynamicRangeCompressionLine:Eac3Settings', eac3Settings_dynamicRangeCompressionLine - Choose the Dolby Digital dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby Digital stream for the line operating mode. Related setting: When you use this setting, MediaConvert ignores any value you provide for Dynamic range compression profile (DynamicRangeCompressionProfile). For information about the Dolby Digital DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.

$sel:ltRtSurroundMixLevel:Eac3Settings', eac3Settings_ltRtSurroundMixLevel - Specify a value for the following Dolby Digital Plus setting: Left total/Right total surround mix (Lt/Rt surround). MediaConvert uses this value for downmixing. How the service uses this value depends on the value that you choose for Stereo downmix (Eac3StereoDownmix). Valid values: -1.5, -3.0, -4.5, -6.0, and -60. The value -60 mutes the channel. This setting applies only if you keep the default value of 3/2 - L, R, C, Ls, Rs (CODING_MODE_3_2) for the setting Coding mode (Eac3CodingMode). If you choose a different value for Coding mode, the service ignores Left total/Right total surround (ltRtSurroundMixLevel).

$sel:metadataControl:Eac3Settings', eac3Settings_metadataControl - When set to FOLLOW_INPUT, encoder metadata will be sourced from the DD, DD+, or DolbyE decoder that supplied this audio data. If audio was not supplied from one of these streams, then the static metadata settings will be used.

$sel:loRoSurroundMixLevel:Eac3Settings', eac3Settings_loRoSurroundMixLevel - Specify a value for the following Dolby Digital Plus setting: Left only/Right only surround mix (Lo/Ro surround). MediaConvert uses this value for downmixing. How the service uses this value depends on the value that you choose for Stereo downmix (Eac3StereoDownmix). Valid values: -1.5, -3.0, -4.5, -6.0, and -60. The value -60 mutes the channel. This setting applies only if you keep the default value of 3/2 - L, R, C, Ls, Rs (CODING_MODE_3_2) for the setting Coding mode (Eac3CodingMode). If you choose a different value for Coding mode, the service ignores Left only/Right only surround (loRoSurroundMixLevel).

$sel:surroundMode:Eac3Settings', eac3Settings_surroundMode - When encoding 2/0 audio, sets whether Dolby Surround is matrix encoded into the two channels.

$sel:attenuationControl:Eac3Settings', eac3Settings_attenuationControl - If set to ATTENUATE_3_DB, applies a 3 dB attenuation to the surround channels. Only used for 3/2 coding mode.

$sel:passthroughControl:Eac3Settings', eac3Settings_passthroughControl - When set to WHEN_POSSIBLE, input DD+ audio will be passed through if it is present on the input. This detection is dynamic over the life of the transcode. Inputs that alternate between DD+ and non-DD+ content will have a consistent DD+ output as the system alternates between passthrough and encoding.

$sel:bitstreamMode:Eac3Settings', eac3Settings_bitstreamMode - Specify the bitstream mode for the E-AC-3 stream that the encoder emits. For more information about the EAC3 bitstream mode, see ATSC A/52-2012 (Annex E).

$sel:lfeControl:Eac3Settings', eac3Settings_lfeControl - When encoding 3/2 audio, controls whether the LFE channel is enabled.

$sel:dynamicRangeCompressionRf:Eac3Settings', eac3Settings_dynamicRangeCompressionRf - Choose the Dolby Digital dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby Digital stream for the RF operating mode. Related setting: When you use this setting, MediaConvert ignores any value you provide for Dynamic range compression profile (DynamicRangeCompressionProfile). For information about the Dolby Digital DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.

$sel:codingMode:Eac3Settings', eac3Settings_codingMode - Dolby Digital Plus coding mode. Determines number of channels.

$sel:sampleRate:Eac3Settings', eac3Settings_sampleRate - This value is always 48000. It represents the sample rate in Hz.

$sel:dcFilter:Eac3Settings', eac3Settings_dcFilter - Activates a DC highpass filter for all input channels.

$sel:bitrate:Eac3Settings', eac3Settings_bitrate - Specify the average bitrate in bits per second. Valid bitrates depend on the coding mode.

$sel:phaseControl:Eac3Settings', eac3Settings_phaseControl - Controls the amount of phase-shift applied to the surround channels. Only used for 3/2 coding mode.

$sel:surroundExMode:Eac3Settings', eac3Settings_surroundExMode - When encoding 3/2 audio, sets whether an extra center back surround channel is matrix encoded into the left and right surround channels.

$sel:dialnorm:Eac3Settings', eac3Settings_dialnorm - Sets the dialnorm for the output. If blank and input audio is Dolby Digital Plus, dialnorm will be passed through.

eac3Settings_stereoDownmix :: Lens' Eac3Settings (Maybe Eac3StereoDownmix) Source #

Choose how the service does stereo downmixing. This setting only applies if you keep the default value of 3/2 - L, R, C, Ls, Rs (CODING_MODE_3_2) for the setting Coding mode (Eac3CodingMode). If you choose a different value for Coding mode, the service ignores Stereo downmix (Eac3StereoDownmix).

eac3Settings_loRoCenterMixLevel :: Lens' Eac3Settings (Maybe Double) Source #

Specify a value for the following Dolby Digital Plus setting: Left only/Right only center mix (Lo/Ro center). MediaConvert uses this value for downmixing. How the service uses this value depends on the value that you choose for Stereo downmix (Eac3StereoDownmix). Valid values: 3.0, 1.5, 0.0, -1.5, -3.0, -4.5, -6.0, and -60. The value -60 mutes the channel. This setting applies only if you keep the default value of 3/2 - L, R, C, Ls, Rs (CODING_MODE_3_2) for the setting Coding mode (Eac3CodingMode). If you choose a different value for Coding mode, the service ignores Left only/Right only center (loRoCenterMixLevel).

eac3Settings_ltRtCenterMixLevel :: Lens' Eac3Settings (Maybe Double) Source #

Specify a value for the following Dolby Digital Plus setting: Left total/Right total center mix (Lt/Rt center). MediaConvert uses this value for downmixing. How the service uses this value depends on the value that you choose for Stereo downmix (Eac3StereoDownmix). Valid values: 3.0, 1.5, 0.0, -1.5, -3.0, -4.5, -6.0, and -60. The value -60 mutes the channel. This setting applies only if you keep the default value of 3/2 - L, R, C, Ls, Rs (CODING_MODE_3_2) for the setting Coding mode (Eac3CodingMode). If you choose a different value for Coding mode, the service ignores Left total/Right total center (ltRtCenterMixLevel).

eac3Settings_lfeFilter :: Lens' Eac3Settings (Maybe Eac3LfeFilter) Source #

Applies a 120Hz lowpass filter to the LFE channel prior to encoding. Only valid with 3_2_LFE coding mode.

eac3Settings_dynamicRangeCompressionLine :: Lens' Eac3Settings (Maybe Eac3DynamicRangeCompressionLine) Source #

Choose the Dolby Digital dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby Digital stream for the line operating mode. Related setting: When you use this setting, MediaConvert ignores any value you provide for Dynamic range compression profile (DynamicRangeCompressionProfile). For information about the Dolby Digital DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.

eac3Settings_ltRtSurroundMixLevel :: Lens' Eac3Settings (Maybe Double) Source #

Specify a value for the following Dolby Digital Plus setting: Left total/Right total surround mix (Lt/Rt surround). MediaConvert uses this value for downmixing. How the service uses this value depends on the value that you choose for Stereo downmix (Eac3StereoDownmix). Valid values: -1.5, -3.0, -4.5, -6.0, and -60. The value -60 mutes the channel. This setting applies only if you keep the default value of 3/2 - L, R, C, Ls, Rs (CODING_MODE_3_2) for the setting Coding mode (Eac3CodingMode). If you choose a different value for Coding mode, the service ignores Left total/Right total surround (ltRtSurroundMixLevel).

eac3Settings_metadataControl :: Lens' Eac3Settings (Maybe Eac3MetadataControl) Source #

When set to FOLLOW_INPUT, encoder metadata will be sourced from the DD, DD+, or DolbyE decoder that supplied this audio data. If audio was not supplied from one of these streams, then the static metadata settings will be used.

eac3Settings_loRoSurroundMixLevel :: Lens' Eac3Settings (Maybe Double) Source #

Specify a value for the following Dolby Digital Plus setting: Left only/Right only surround mix (Lo/Ro surround). MediaConvert uses this value for downmixing. How the service uses this value depends on the value that you choose for Stereo downmix (Eac3StereoDownmix). Valid values: -1.5, -3.0, -4.5, -6.0, and -60. The value -60 mutes the channel. This setting applies only if you keep the default value of 3/2 - L, R, C, Ls, Rs (CODING_MODE_3_2) for the setting Coding mode (Eac3CodingMode). If you choose a different value for Coding mode, the service ignores Left only/Right only surround (loRoSurroundMixLevel).

eac3Settings_surroundMode :: Lens' Eac3Settings (Maybe Eac3SurroundMode) Source #

When encoding 2/0 audio, sets whether Dolby Surround is matrix encoded into the two channels.

eac3Settings_attenuationControl :: Lens' Eac3Settings (Maybe Eac3AttenuationControl) Source #

If set to ATTENUATE_3_DB, applies a 3 dB attenuation to the surround channels. Only used for 3/2 coding mode.

eac3Settings_passthroughControl :: Lens' Eac3Settings (Maybe Eac3PassthroughControl) Source #

When set to WHEN_POSSIBLE, input DD+ audio will be passed through if it is present on the input. This detection is dynamic over the life of the transcode. Inputs that alternate between DD+ and non-DD+ content will have a consistent DD+ output as the system alternates between passthrough and encoding.

eac3Settings_bitstreamMode :: Lens' Eac3Settings (Maybe Eac3BitstreamMode) Source #

Specify the bitstream mode for the E-AC-3 stream that the encoder emits. For more information about the EAC3 bitstream mode, see ATSC A/52-2012 (Annex E).

eac3Settings_lfeControl :: Lens' Eac3Settings (Maybe Eac3LfeControl) Source #

When encoding 3/2 audio, controls whether the LFE channel is enabled.

eac3Settings_dynamicRangeCompressionRf :: Lens' Eac3Settings (Maybe Eac3DynamicRangeCompressionRf) Source #

Choose the Dolby Digital dynamic range control (DRC) profile that MediaConvert uses when encoding the metadata in the Dolby Digital stream for the RF operating mode. Related setting: When you use this setting, MediaConvert ignores any value you provide for Dynamic range compression profile (DynamicRangeCompressionProfile). For information about the Dolby Digital DRC operating modes and profiles, see the Dynamic Range Control chapter of the Dolby Metadata Guide at https://developer.dolby.com/globalassets/professional/documents/dolby-metadata-guide.pdf.

eac3Settings_codingMode :: Lens' Eac3Settings (Maybe Eac3CodingMode) Source #

Dolby Digital Plus coding mode. Determines number of channels.

eac3Settings_sampleRate :: Lens' Eac3Settings (Maybe Natural) Source #

This value is always 48000. It represents the sample rate in Hz.

eac3Settings_dcFilter :: Lens' Eac3Settings (Maybe Eac3DcFilter) Source #

Activates a DC highpass filter for all input channels.

eac3Settings_bitrate :: Lens' Eac3Settings (Maybe Natural) Source #

Specify the average bitrate in bits per second. Valid bitrates depend on the coding mode.

eac3Settings_phaseControl :: Lens' Eac3Settings (Maybe Eac3PhaseControl) Source #

Controls the amount of phase-shift applied to the surround channels. Only used for 3/2 coding mode.

eac3Settings_surroundExMode :: Lens' Eac3Settings (Maybe Eac3SurroundExMode) Source #

When encoding 3/2 audio, sets whether an extra center back surround channel is matrix encoded into the left and right surround channels.

eac3Settings_dialnorm :: Lens' Eac3Settings (Maybe Natural) Source #

Sets the dialnorm for the output. If blank and input audio is Dolby Digital Plus, dialnorm will be passed through.
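
A minimal construction sketch (not part of the generated reference): start from newEac3Settings and override a few of the optional fields through the lenses above. It assumes the (&) and (?~) setters from Control.Lens; the equivalent generic-lens or optics combinators work as well, as noted above. The concrete numbers are illustrative only, not recommendations.

import Control.Lens ((&), (?~))

import Amazonka.MediaConvert.Types

-- Start from the all-defaults value and override only what we need.
exampleEac3 :: Eac3Settings
exampleEac3 =
  newEac3Settings
    & eac3Settings_bitrate ?~ 384000            -- average bitrate, bits per second
    & eac3Settings_dialnorm ?~ 27               -- omit to pass dialnorm through from DD+ input
    & eac3Settings_loRoCenterMixLevel ?~ (-3.0) -- one of the documented mix levels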

EmbeddedDestinationSettings

data EmbeddedDestinationSettings Source #

Settings related to CEA/EIA-608 and CEA/EIA-708 (also called embedded or ancillary) captions. Set up embedded captions in the same output as your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/embedded-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to EMBEDDED, EMBEDDED_PLUS_SCTE20, or SCTE20_PLUS_EMBEDDED.

See: newEmbeddedDestinationSettings smart constructor.

Constructors

EmbeddedDestinationSettings' 

Fields

  • destination608ChannelNumber :: Maybe Natural

    Ignore this setting unless your input captions are SCC format and your output captions are embedded in the video stream. Specify a CC number for each captions channel in this output. If you have two channels, choose CC numbers that aren't in the same field. For example, choose 1 and 3. For more information, see https://docs.aws.amazon.com/console/mediaconvert/dual-scc-to-embedded.

  • destination708ServiceNumber :: Maybe Natural

    Ignore this setting unless your input captions are SCC format and you want both 608 and 708 captions embedded in your output stream. Optionally, specify the 708 service number for each output captions channel. Choose a different number for each channel. To use this setting, also set Force 608 to 708 upconvert (Convert608To708) to Upconvert (UPCONVERT) in your input captions selector settings. If you choose to upconvert but don't specify a 708 service number, MediaConvert uses the number that you specify for CC channel number (destination608ChannelNumber) for the 708 service number. For more information, see https://docs.aws.amazon.com/console/mediaconvert/dual-scc-to-embedded.

Instances

Instances details
Eq EmbeddedDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedDestinationSettings

Read EmbeddedDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedDestinationSettings

Show EmbeddedDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedDestinationSettings

Generic EmbeddedDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedDestinationSettings

Associated Types

type Rep EmbeddedDestinationSettings :: Type -> Type #

NFData EmbeddedDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedDestinationSettings

Hashable EmbeddedDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedDestinationSettings

ToJSON EmbeddedDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedDestinationSettings

FromJSON EmbeddedDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedDestinationSettings

type Rep EmbeddedDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedDestinationSettings

type Rep EmbeddedDestinationSettings = D1 ('MetaData "EmbeddedDestinationSettings" "Amazonka.MediaConvert.Types.EmbeddedDestinationSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "EmbeddedDestinationSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "destination608ChannelNumber") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "destination708ServiceNumber") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))

newEmbeddedDestinationSettings :: EmbeddedDestinationSettings Source #

Create a value of EmbeddedDestinationSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:destination608ChannelNumber:EmbeddedDestinationSettings', embeddedDestinationSettings_destination608ChannelNumber - Ignore this setting unless your input captions are SCC format and your output captions are embedded in the video stream. Specify a CC number for each captions channel in this output. If you have two channels, choose CC numbers that aren't in the same field. For example, choose 1 and 3. For more information, see https://docs.aws.amazon.com/console/mediaconvert/dual-scc-to-embedded.

$sel:destination708ServiceNumber:EmbeddedDestinationSettings', embeddedDestinationSettings_destination708ServiceNumber - Ignore this setting unless your input captions are SCC format and you want both 608 and 708 captions embedded in your output stream. Optionally, specify the 708 service number for each output captions channel. Choose a different number for each channel. To use this setting, also set Force 608 to 708 upconvert (Convert608To708) to Upconvert (UPCONVERT) in your input captions selector settings. If you choose to upconvert but don't specify a 708 service number, MediaConvert uses the number that you specify for CC channel number (destination608ChannelNumber) for the 708 service number. For more information, see https://docs.aws.amazon.com/console/mediaconvert/dual-scc-to-embedded.

embeddedDestinationSettings_destination608ChannelNumber :: Lens' EmbeddedDestinationSettings (Maybe Natural) Source #

Ignore this setting unless your input captions are SCC format and your output captions are embedded in the video stream. Specify a CC number for each captions channel in this output. If you have two channels, choose CC numbers that aren't in the same field. For example, choose 1 and 3. For more information, see https://docs.aws.amazon.com/console/mediaconvert/dual-scc-to-embedded.

embeddedDestinationSettings_destination708ServiceNumber :: Lens' EmbeddedDestinationSettings (Maybe Natural) Source #

Ignore this setting unless your input captions are SCC format and you want both 608 and 708 captions embedded in your output stream. Optionally, specify the 708 service number for each output captions channel. Choose a different number for each channel. To use this setting, also set Force 608 to 708 upconvert (Convert608To708) to Upconvert (UPCONVERT) in your input captions selector settings. If you choose to upconvert but don't specify a 708 service number, MediaConvert uses the number that you specify for CC channel number (destination608ChannelNumber) for the 708 service number. For more information, see https://docs.aws.amazon.com/console/mediaconvert/dual-scc-to-embedded.
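
A construction sketch, again assuming the (&) and (?~) setters from Control.Lens (generic-lens or optics work the same way). The channel and service numbers are illustrative values taken from the field descriptions above.

import Control.Lens ((&), (?~))

import Amazonka.MediaConvert.Types

-- Embed the captions as CC number 1, with 708 service number 1.
exampleEmbeddedDest :: EmbeddedDestinationSettings
exampleEmbeddedDest =
  newEmbeddedDestinationSettings
    & embeddedDestinationSettings_destination608ChannelNumber ?~ 1
    & embeddedDestinationSettings_destination708ServiceNumber ?~ 1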

EmbeddedSourceSettings

data EmbeddedSourceSettings Source #

Settings for the embedded captions source.

See: newEmbeddedSourceSettings smart constructor.

Constructors

EmbeddedSourceSettings' 

Fields

  • convert608To708 :: Maybe EmbeddedConvert608To708

    Specify whether this set of input captions appears in your outputs in both 608 and 708 format. If you choose Upconvert (UPCONVERT), MediaConvert includes the captions data in two ways: it passes the 608 data through using the 608 compatibility bytes fields of the 708 wrapper, and it also translates the 608 data into 708.

  • terminateCaptions :: Maybe EmbeddedTerminateCaptions

    By default, the service terminates any unterminated captions at the end of each input. If you want the caption to continue onto your next input, disable this setting.

  • source608TrackNumber :: Maybe Natural

    Specifies the video track index used for extracting captions. The system only supports one input video track, so this should always be set to '1'.

  • source608ChannelNumber :: Maybe Natural

    Specifies the 608/708 channel number within the video track from which to extract captions. Unused for passthrough.

Instances

Instances details
Eq EmbeddedSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedSourceSettings

Read EmbeddedSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedSourceSettings

Show EmbeddedSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedSourceSettings

Generic EmbeddedSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedSourceSettings

Associated Types

type Rep EmbeddedSourceSettings :: Type -> Type #

NFData EmbeddedSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedSourceSettings

Methods

rnf :: EmbeddedSourceSettings -> () #

Hashable EmbeddedSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedSourceSettings

ToJSON EmbeddedSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedSourceSettings

FromJSON EmbeddedSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedSourceSettings

type Rep EmbeddedSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EmbeddedSourceSettings

type Rep EmbeddedSourceSettings = D1 ('MetaData "EmbeddedSourceSettings" "Amazonka.MediaConvert.Types.EmbeddedSourceSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "EmbeddedSourceSettings'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "convert608To708") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe EmbeddedConvert608To708)) :*: S1 ('MetaSel ('Just "terminateCaptions") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe EmbeddedTerminateCaptions))) :*: (S1 ('MetaSel ('Just "source608TrackNumber") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "source608ChannelNumber") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))

newEmbeddedSourceSettings :: EmbeddedSourceSettings Source #

Create a value of EmbeddedSourceSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:convert608To708:EmbeddedSourceSettings', embeddedSourceSettings_convert608To708 - Specify whether this set of input captions appears in your outputs in both 608 and 708 format. If you choose Upconvert (UPCONVERT), MediaConvert includes the captions data in two ways: it passes the 608 data through using the 608 compatibility bytes fields of the 708 wrapper, and it also translates the 608 data into 708.

$sel:terminateCaptions:EmbeddedSourceSettings', embeddedSourceSettings_terminateCaptions - By default, the service terminates any unterminated captions at the end of each input. If you want the caption to continue onto your next input, disable this setting.

$sel:source608TrackNumber:EmbeddedSourceSettings', embeddedSourceSettings_source608TrackNumber - Specifies the video track index used for extracting captions. The system only supports one input video track, so this should always be set to '1'.

$sel:source608ChannelNumber:EmbeddedSourceSettings', embeddedSourceSettings_source608ChannelNumber - Specifies the 608/708 channel number within the video track from which to extract captions. Unused for passthrough.

embeddedSourceSettings_convert608To708 :: Lens' EmbeddedSourceSettings (Maybe EmbeddedConvert608To708) Source #

Specify whether this set of input captions appears in your outputs in both 608 and 708 format. If you choose Upconvert (UPCONVERT), MediaConvert includes the captions data in two ways: it passes the 608 data through using the 608 compatibility bytes fields of the 708 wrapper, and it also translates the 608 data into 708.

embeddedSourceSettings_terminateCaptions :: Lens' EmbeddedSourceSettings (Maybe EmbeddedTerminateCaptions) Source #

By default, the service terminates any unterminated captions at the end of each input. If you want the caption to continue onto your next input, disable this setting.

embeddedSourceSettings_source608TrackNumber :: Lens' EmbeddedSourceSettings (Maybe Natural) Source #

Specifies the video track index used for extracting captions. The system only supports one input video track, so this should always be set to '1'.

embeddedSourceSettings_source608ChannelNumber :: Lens' EmbeddedSourceSettings (Maybe Natural) Source #

Specifies the 608/708 channel number within the video track from which to extract captions. Unused for passthrough.
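
A construction sketch along the same lines, assuming Control.Lens setters; the track and channel numbers come from the field descriptions above.

import Control.Lens ((&), (?~))

import Amazonka.MediaConvert.Types

-- Extract CC1 from the (single) input video track.
exampleEmbeddedSource :: EmbeddedSourceSettings
exampleEmbeddedSource =
  newEmbeddedSourceSettings
    & embeddedSourceSettings_source608TrackNumber ?~ 1   -- only one video track is supported
    & embeddedSourceSettings_source608ChannelNumber ?~ 1 -- caption channel to extract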

Endpoint

data Endpoint Source #

Describes an account-specific API endpoint.

See: newEndpoint smart constructor.

Constructors

Endpoint' 

Fields

  • url :: Maybe Text

    URL of endpoint

Instances

Instances details
Eq Endpoint Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Endpoint

Read Endpoint Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Endpoint

Show Endpoint Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Endpoint

Generic Endpoint Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Endpoint

Associated Types

type Rep Endpoint :: Type -> Type #

Methods

from :: Endpoint -> Rep Endpoint x #

to :: Rep Endpoint x -> Endpoint #

NFData Endpoint Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Endpoint

Methods

rnf :: Endpoint -> () #

Hashable Endpoint Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Endpoint

Methods

hashWithSalt :: Int -> Endpoint -> Int #

hash :: Endpoint -> Int #

FromJSON Endpoint Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Endpoint

type Rep Endpoint Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Endpoint

type Rep Endpoint = D1 ('MetaData "Endpoint" "Amazonka.MediaConvert.Types.Endpoint" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "Endpoint'" 'PrefixI 'True) (S1 ('MetaSel ('Just "url") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))

newEndpoint :: Endpoint Source #

Create a value of Endpoint with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:url:Endpoint', endpoint_url - URL of endpoint

endpoint_url :: Lens' Endpoint (Maybe Text) Source #

URL of endpoint

EsamManifestConfirmConditionNotification

data EsamManifestConfirmConditionNotification Source #

ESAM ManifestConfirmConditionNotification defined by OC-SP-ESAM-API-I03-131025.

See: newEsamManifestConfirmConditionNotification smart constructor.

Constructors

EsamManifestConfirmConditionNotification' 

Fields

  • mccXml :: Maybe Text

    Provide your ESAM ManifestConfirmConditionNotification XML document inside your JSON job settings. Form the XML document as per OC-SP-ESAM-API-I03-131025. The transcoder will use the Manifest Conditioning instructions in the message that you supply.

Instances

Instances details
Eq EsamManifestConfirmConditionNotification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamManifestConfirmConditionNotification

Read EsamManifestConfirmConditionNotification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamManifestConfirmConditionNotification

Show EsamManifestConfirmConditionNotification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamManifestConfirmConditionNotification

Generic EsamManifestConfirmConditionNotification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamManifestConfirmConditionNotification

NFData EsamManifestConfirmConditionNotification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamManifestConfirmConditionNotification

Hashable EsamManifestConfirmConditionNotification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamManifestConfirmConditionNotification

ToJSON EsamManifestConfirmConditionNotification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamManifestConfirmConditionNotification

FromJSON EsamManifestConfirmConditionNotification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamManifestConfirmConditionNotification

type Rep EsamManifestConfirmConditionNotification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamManifestConfirmConditionNotification

type Rep EsamManifestConfirmConditionNotification = D1 ('MetaData "EsamManifestConfirmConditionNotification" "Amazonka.MediaConvert.Types.EsamManifestConfirmConditionNotification" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "EsamManifestConfirmConditionNotification'" 'PrefixI 'True) (S1 ('MetaSel ('Just "mccXml") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))

newEsamManifestConfirmConditionNotification :: EsamManifestConfirmConditionNotification Source #

Create a value of EsamManifestConfirmConditionNotification with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:mccXml:EsamManifestConfirmConditionNotification', esamManifestConfirmConditionNotification_mccXml - Provide your ESAM ManifestConfirmConditionNotification XML document inside your JSON job settings. Form the XML document as per OC-SP-ESAM-API-I03-131025. The transcoder will use the Manifest Conditioning instructions in the message that you supply.

esamManifestConfirmConditionNotification_mccXml :: Lens' EsamManifestConfirmConditionNotification (Maybe Text) Source #

Provide your ESAM ManifestConfirmConditionNotification XML document inside your JSON job settings. Form the XML document as per OC-SP-ESAM-API-I03-131025. The transcoder will use the Manifest Conditioning instructions in the message that you supply.
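
A construction sketch, assuming Control.Lens setters and the OverloadedStrings extension for the Text field; the XML string below is only a placeholder for a real OC-SP-ESAM-API-I03-131025 document.

{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))

import Amazonka.MediaConvert.Types

-- Wrap an ESAM MCC XML document for inclusion in the JSON job settings.
exampleMcc :: EsamManifestConfirmConditionNotification
exampleMcc =
  newEsamManifestConfirmConditionNotification
    & esamManifestConfirmConditionNotification_mccXml
        ?~ "<!-- your ManifestConfirmConditionNotification XML here -->"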

EsamSettings

data EsamSettings Source #

Settings for Event Signaling And Messaging (ESAM). If you don't do ad insertion, you can ignore these settings.

See: newEsamSettings smart constructor.

Constructors

EsamSettings' 

Fields

  • manifestConfirmConditionNotification :: Maybe EsamManifestConfirmConditionNotification

    Specifies an ESAM ManifestConfirmConditionNotification XML as per OC-SP-ESAM-API-I03-131025. The transcoder uses the manifest conditioning instructions that you provide in the setting MCC XML (mccXml).

  • responseSignalPreroll :: Maybe Natural

    Specifies the stream distance, in milliseconds, between the SCTE 35 messages that the transcoder places and the splice points that they refer to. If the time between the start of the asset and the SCTE-35 message is less than this value, then the transcoder places the SCTE-35 marker at the beginning of the stream.

  • signalProcessingNotification :: Maybe EsamSignalProcessingNotification

    Specifies an ESAM SignalProcessingNotification XML as per OC-SP-ESAM-API-I03-131025. The transcoder uses the signal processing instructions that you provide in the setting SCC XML (sccXml).

Instances

Instances details
Eq EsamSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamSettings

Read EsamSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamSettings

Show EsamSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamSettings

Generic EsamSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamSettings

Associated Types

type Rep EsamSettings :: Type -> Type #

NFData EsamSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamSettings

Methods

rnf :: EsamSettings -> () #

Hashable EsamSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamSettings

ToJSON EsamSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamSettings

FromJSON EsamSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamSettings

type Rep EsamSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamSettings

type Rep EsamSettings = D1 ('MetaData "EsamSettings" "Amazonka.MediaConvert.Types.EsamSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "EsamSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "manifestConfirmConditionNotification") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe EsamManifestConfirmConditionNotification)) :*: (S1 ('MetaSel ('Just "responseSignalPreroll") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "signalProcessingNotification") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe EsamSignalProcessingNotification)))))

newEsamSettings :: EsamSettings Source #

Create a value of EsamSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:manifestConfirmConditionNotification:EsamSettings', esamSettings_manifestConfirmConditionNotification - Specifies an ESAM ManifestConfirmConditionNotification XML as per OC-SP-ESAM-API-I03-131025. The transcoder uses the manifest conditioning instructions that you provide in the setting MCC XML (mccXml).

$sel:responseSignalPreroll:EsamSettings', esamSettings_responseSignalPreroll - Specifies the stream distance, in milliseconds, between the SCTE 35 messages that the transcoder places and the splice points that they refer to. If the time between the start of the asset and the SCTE-35 message is less than this value, then the transcoder places the SCTE-35 marker at the beginning of the stream.

$sel:signalProcessingNotification:EsamSettings', esamSettings_signalProcessingNotification - Specifies an ESAM SignalProcessingNotification XML as per OC-SP-ESAM-API-I03-131025. The transcoder uses the signal processing instructions that you provide in the setting SCC XML (sccXml).

esamSettings_manifestConfirmConditionNotification :: Lens' EsamSettings (Maybe EsamManifestConfirmConditionNotification) Source #

Specifies an ESAM ManifestConfirmConditionNotification XML as per OC-SP-ESAM-API-I03-131025. The transcoder uses the manifest conditioning instructions that you provide in the setting MCC XML (mccXml).

esamSettings_responseSignalPreroll :: Lens' EsamSettings (Maybe Natural) Source #

Specifies the stream distance, in milliseconds, between the SCTE 35 messages that the transcoder places and the splice points that they refer to. If the time between the start of the asset and the SCTE-35 message is less than this value, then the transcoder places the SCTE-35 marker at the beginning of the stream.

esamSettings_signalProcessingNotification :: Lens' EsamSettings (Maybe EsamSignalProcessingNotification) Source #

Specifies an ESAM SignalProcessingNotification XML as per OC-SP-ESAM-API-I03-131025. The transcoder uses the signal processing instructions that you provide in the setting SCC XML (sccXml).
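
A sketch showing how these pieces nest, under the same Control.Lens and OverloadedStrings assumptions; the XML placeholder and the 4000 ms preroll are illustrative.

{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))

import Amazonka.MediaConvert.Types

-- Nest a manifest-confirm-condition notification inside EsamSettings and
-- request a 4-second SCTE-35 preroll.
exampleEsam :: EsamSettings
exampleEsam =
  newEsamSettings
    & esamSettings_manifestConfirmConditionNotification
        ?~ ( newEsamManifestConfirmConditionNotification
               & esamManifestConfirmConditionNotification_mccXml ?~ "<!-- MCC XML -->"
           )
    & esamSettings_responseSignalPreroll ?~ 4000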

EsamSignalProcessingNotification

data EsamSignalProcessingNotification Source #

ESAM SignalProcessingNotification data defined by OC-SP-ESAM-API-I03-131025.

See: newEsamSignalProcessingNotification smart constructor.

Constructors

EsamSignalProcessingNotification' 

Fields

  • sccXml :: Maybe Text

    Provide your ESAM SignalProcessingNotification XML document inside your JSON job settings. Form the XML document as per OC-SP-ESAM-API-I03-131025. The transcoder will use the signal processing instructions in the message that you supply. For your MPEG2-TS file outputs, if you want the service to place SCTE-35 markers at the insertion points you specify in the XML document, you must also enable SCTE-35 ESAM (scte35Esam). Note that you can either specify an ESAM XML document or enable SCTE-35 passthrough. You can't do both.

Instances

Instances details
Eq EsamSignalProcessingNotification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamSignalProcessingNotification

Read EsamSignalProcessingNotification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamSignalProcessingNotification

Show EsamSignalProcessingNotification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamSignalProcessingNotification

Generic EsamSignalProcessingNotification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamSignalProcessingNotification

Associated Types

type Rep EsamSignalProcessingNotification :: Type -> Type #

NFData EsamSignalProcessingNotification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamSignalProcessingNotification

Hashable EsamSignalProcessingNotification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamSignalProcessingNotification

ToJSON EsamSignalProcessingNotification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamSignalProcessingNotification

FromJSON EsamSignalProcessingNotification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamSignalProcessingNotification

type Rep EsamSignalProcessingNotification Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.EsamSignalProcessingNotification

type Rep EsamSignalProcessingNotification = D1 ('MetaData "EsamSignalProcessingNotification" "Amazonka.MediaConvert.Types.EsamSignalProcessingNotification" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "EsamSignalProcessingNotification'" 'PrefixI 'True) (S1 ('MetaSel ('Just "sccXml") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))

newEsamSignalProcessingNotification :: EsamSignalProcessingNotification Source #

Create a value of EsamSignalProcessingNotification with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:sccXml:EsamSignalProcessingNotification', esamSignalProcessingNotification_sccXml - Provide your ESAM SignalProcessingNotification XML document inside your JSON job settings. Form the XML document as per OC-SP-ESAM-API-I03-131025. The transcoder will use the signal processing instructions in the message that you supply. For your MPEG2-TS file outputs, if you want the service to place SCTE-35 markers at the insertion points you specify in the XML document, you must also enable SCTE-35 ESAM (scte35Esam). Note that you can either specify an ESAM XML document or enable SCTE-35 passthrough. You can't do both.

esamSignalProcessingNotification_sccXml :: Lens' EsamSignalProcessingNotification (Maybe Text) Source #

Provide your ESAM SignalProcessingNotification XML document inside your JSON job settings. Form the XML document as per OC-SP-ESAM-API-I03-131025. The transcoder will use the signal processing instructions in the message that you supply. For your MPEG2-TS file outputs, if you want the service to place SCTE-35 markers at the insertion points you specify in the XML document, you must also enable SCTE-35 ESAM (scte35Esam). Note that you can either specify an ESAM XML document or enable SCTE-35 passthrough. You can't do both.

ExtendedDataServices

data ExtendedDataServices Source #

If your source content has EIA-608 Line 21 Data Services, enable this feature to specify what MediaConvert does with the Extended Data Services (XDS) packets. You can choose to pass through XDS packets, or remove them from the output. For more information about XDS, see EIA-608 Line Data Services, section 9.5.1.5 05h Content Advisory.

See: newExtendedDataServices smart constructor.

Constructors

ExtendedDataServices' 

Fields

  • vchipAction :: Maybe VchipAction

    The action to take on content advisory XDS packets. If you select PASSTHROUGH, packets will not be changed. If you select STRIP, any packets will be removed in output captions.

  • copyProtectionAction :: Maybe CopyProtectionAction

    The action to take on copy and redistribution control XDS packets. If you select PASSTHROUGH, packets will not be changed. If you select STRIP, any packets will be removed in output captions.

Instances

Instances details
Eq ExtendedDataServices Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ExtendedDataServices

Read ExtendedDataServices Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ExtendedDataServices

Show ExtendedDataServices Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ExtendedDataServices

Generic ExtendedDataServices Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ExtendedDataServices

Associated Types

type Rep ExtendedDataServices :: Type -> Type #

NFData ExtendedDataServices Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ExtendedDataServices

Methods

rnf :: ExtendedDataServices -> () #

Hashable ExtendedDataServices Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ExtendedDataServices

ToJSON ExtendedDataServices Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ExtendedDataServices

FromJSON ExtendedDataServices Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ExtendedDataServices

type Rep ExtendedDataServices Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ExtendedDataServices

type Rep ExtendedDataServices = D1 ('MetaData "ExtendedDataServices" "Amazonka.MediaConvert.Types.ExtendedDataServices" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "ExtendedDataServices'" 'PrefixI 'True) (S1 ('MetaSel ('Just "vchipAction") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe VchipAction)) :*: S1 ('MetaSel ('Just "copyProtectionAction") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CopyProtectionAction))))

newExtendedDataServices :: ExtendedDataServices Source #

Create a value of ExtendedDataServices with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:vchipAction:ExtendedDataServices', extendedDataServices_vchipAction - The action to take on content advisory XDS packets. If you select PASSTHROUGH, packets will not be changed. If you select STRIP, any packets will be removed in output captions.

$sel:copyProtectionAction:ExtendedDataServices', extendedDataServices_copyProtectionAction - The action to take on copy and redistribution control XDS packets. If you select PASSTHROUGH, packets will not be changed. If you select STRIP, any packets will be removed in output captions.

extendedDataServices_vchipAction :: Lens' ExtendedDataServices (Maybe VchipAction) Source #

The action to take on content advisory XDS packets. If you select PASSTHROUGH, packets will not be changed. If you select STRIP, any packets will be removed in output captions.

extendedDataServices_copyProtectionAction :: Lens' ExtendedDataServices (Maybe CopyProtectionAction) Source #

The action to take on copy and redistribution control XDS packets. If you select PASSTHROUGH, packets will not be changed. If you select STRIP, any packets will be removed in output captions.

F4vSettings

data F4vSettings Source #

Settings for F4v container

See: newF4vSettings smart constructor.

Constructors

F4vSettings' 

Fields

  • moovPlacement :: Maybe F4vMoovPlacement

    If set to PROGRESSIVE_DOWNLOAD, the MOOV atom is relocated to the beginning of the archive as required for progressive downloading. Otherwise it is placed normally at the end.

Instances

Instances details
Eq F4vSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vSettings

Read F4vSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vSettings

Show F4vSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vSettings

Generic F4vSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vSettings

Associated Types

type Rep F4vSettings :: Type -> Type #

NFData F4vSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vSettings

Methods

rnf :: F4vSettings -> () #

Hashable F4vSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vSettings

ToJSON F4vSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vSettings

FromJSON F4vSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vSettings

type Rep F4vSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.F4vSettings

type Rep F4vSettings = D1 ('MetaData "F4vSettings" "Amazonka.MediaConvert.Types.F4vSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "F4vSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "moovPlacement") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe F4vMoovPlacement))))

newF4vSettings :: F4vSettings Source #

Create a value of F4vSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:moovPlacement:F4vSettings', f4vSettings_moovPlacement - If set to PROGRESSIVE_DOWNLOAD, the MOOV atom is relocated to the beginning of the archive as required for progressive downloading. Otherwise it is placed normally at the end.

f4vSettings_moovPlacement :: Lens' F4vSettings (Maybe F4vMoovPlacement) Source #

If set to PROGRESSIVE_DOWNLOAD, the MOOV atom is relocated to the beginning of the archive as required for progressive downloading. Otherwise it is placed normally at the end.

FileGroupSettings

data FileGroupSettings Source #

Settings related to your File output group. MediaConvert uses this group of settings to generate a single standalone file, rather than a streaming package. When you work directly in your JSON job specification, include this object and any required children when you set Type, under OutputGroupSettings, to FILE_GROUP_SETTINGS.

See: newFileGroupSettings smart constructor.

Constructors

FileGroupSettings' 

Fields

  • destination :: Maybe Text

    Use Destination (Destination) to specify the S3 output location and the output filename base. Destination accepts format identifiers. If you do not specify the base filename in the URI, the service will use the filename of the input file. If your job has multiple inputs, the service uses the filename of the first input file.

  • destinationSettings :: Maybe DestinationSettings

    Settings associated with the destination. These settings vary based on the type of destination.

Instances

Instances details
Eq FileGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileGroupSettings

Read FileGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileGroupSettings

Show FileGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileGroupSettings

Generic FileGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileGroupSettings

Associated Types

type Rep FileGroupSettings :: Type -> Type #

NFData FileGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileGroupSettings

Methods

rnf :: FileGroupSettings -> () #

Hashable FileGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileGroupSettings

ToJSON FileGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileGroupSettings

FromJSON FileGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileGroupSettings

type Rep FileGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileGroupSettings

type Rep FileGroupSettings = D1 ('MetaData "FileGroupSettings" "Amazonka.MediaConvert.Types.FileGroupSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "FileGroupSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "destination") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "destinationSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DestinationSettings))))

newFileGroupSettings :: FileGroupSettings Source #

Create a value of FileGroupSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:destination:FileGroupSettings', fileGroupSettings_destination - Use Destination (Destination) to specify the S3 output location and the output filename base. Destination accepts format identifiers. If you do not specify the base filename in the URI, the service will use the filename of the input file. If your job has multiple inputs, the service uses the filename of the first input file.

$sel:destinationSettings:FileGroupSettings', fileGroupSettings_destinationSettings - Settings associated with the destination. These settings vary based on the type of destination.

fileGroupSettings_destination :: Lens' FileGroupSettings (Maybe Text) Source #

Use Destination (Destination) to specify the S3 output location and the output filename base. Destination accepts format identifiers. If you do not specify the base filename in the URI, the service will use the filename of the input file. If your job has multiple inputs, the service uses the filename of the first input file.

fileGroupSettings_destinationSettings :: Lens' FileGroupSettings (Maybe DestinationSettings) Source #

Settings associated with the destination. These settings vary based on the type of destination.
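
A construction sketch, assuming Control.Lens setters and OverloadedStrings; the bucket name and prefix are placeholders.

{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))

import Amazonka.MediaConvert.Types

-- Write the standalone file output under an S3 prefix. MediaConvert derives
-- the base filename from the first input when the URI does not include one.
exampleFileGroup :: FileGroupSettings
exampleFileGroup =
  newFileGroupSettings
    & fileGroupSettings_destination ?~ "s3://DOC-EXAMPLE-BUCKET/outputs/"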

FileSourceSettings

data FileSourceSettings Source #

If your input captions are SCC, SMI, SRT, STL, TTML, WebVTT, or IMSC 1.1 in an xml file, specify the URI of the input caption source file. If your caption source is IMSC in an IMF package, use TrackSourceSettings instead of FileSourceSettings.

See: newFileSourceSettings smart constructor.

Constructors

FileSourceSettings' 

Fields

  • framerate :: Maybe CaptionSourceFramerate

    Ignore this setting unless your input captions format is SCC. To have the service compensate for differing frame rates between your input captions and input video, specify the frame rate of the captions file. Specify this value as a fraction. When you work directly in your JSON job specification, use the settings framerateNumerator and framerateDenominator. For example, you might specify 24 / 1 for 24 fps, 25 / 1 for 25 fps, 24000 / 1001 for 23.976 fps, or 30000 / 1001 for 29.97 fps.

  • convert608To708 :: Maybe FileSourceConvert608To708

    Specify whether this set of input captions appears in your outputs in both 608 and 708 format. If you choose Upconvert (UPCONVERT), MediaConvert includes the captions data in two ways: it passes the 608 data through using the 608 compatibility bytes fields of the 708 wrapper, and it also translates the 608 data into 708.

  • timeDelta :: Maybe Int

    Optional. Use this setting when you need to adjust the sync between your sidecar captions and your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/time-delta-use-cases.html. Enter a positive or negative number to modify the times in the captions file. For example, type 15 to add 15 seconds to all the times in the captions file. Type -5 to subtract 5 seconds from the times in the captions file. You can optionally specify your time delta in milliseconds instead of seconds. When you do so, set the related setting, Time delta units (TimeDeltaUnits) to Milliseconds (MILLISECONDS). Note that, when you specify a time delta for timecode-based caption sources, such as SCC and STL, and your time delta isn't a multiple of the input frame rate, MediaConvert snaps the captions to the nearest frame. For example, when your input video frame rate is 25 fps and you specify 1010ms for time delta, MediaConvert delays your captions by 1000 ms.

  • timeDeltaUnits :: Maybe FileSourceTimeDeltaUnits

    When you use the setting Time delta (TimeDelta) to adjust the sync between your sidecar captions and your video, use this setting to specify the units for the delta that you specify. When you don't specify a value for Time delta units (TimeDeltaUnits), MediaConvert uses seconds by default.

  • sourceFile :: Maybe Text

    External caption file used for loading captions. Accepted file extensions are 'scc', 'ttml', 'dfxp', 'stl', 'srt', 'xml', 'smi', 'webvtt', and 'vtt'.

Instances

Instances details
Eq FileSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceSettings

Read FileSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceSettings

Show FileSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceSettings

Generic FileSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceSettings

Associated Types

type Rep FileSourceSettings :: Type -> Type #

NFData FileSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceSettings

Methods

rnf :: FileSourceSettings -> () #

Hashable FileSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceSettings

ToJSON FileSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceSettings

FromJSON FileSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceSettings

type Rep FileSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FileSourceSettings

type Rep FileSourceSettings = D1 ('MetaData "FileSourceSettings" "Amazonka.MediaConvert.Types.FileSourceSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "FileSourceSettings'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "framerate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CaptionSourceFramerate)) :*: S1 ('MetaSel ('Just "convert608To708") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe FileSourceConvert608To708))) :*: (S1 ('MetaSel ('Just "timeDelta") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)) :*: (S1 ('MetaSel ('Just "timeDeltaUnits") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe FileSourceTimeDeltaUnits)) :*: S1 ('MetaSel ('Just "sourceFile") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))))

newFileSourceSettings :: FileSourceSettings Source #

Create a value of FileSourceSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:framerate:FileSourceSettings', fileSourceSettings_framerate - Ignore this setting unless your input captions format is SCC. To have the service compensate for differing frame rates between your input captions and input video, specify the frame rate of the captions file. Specify this value as a fraction. When you work directly in your JSON job specification, use the settings framerateNumerator and framerateDenominator. For example, you might specify 24 / 1 for 24 fps, 25 / 1 for 25 fps, 24000 / 1001 for 23.976 fps, or 30000 / 1001 for 29.97 fps.

$sel:convert608To708:FileSourceSettings', fileSourceSettings_convert608To708 - Specify whether this set of input captions appears in your outputs in both 608 and 708 format. If you choose Upconvert (UPCONVERT), MediaConvert includes the captions data in two ways: it passes the 608 data through using the 608 compatibility bytes fields of the 708 wrapper, and it also translates the 608 data into 708.

$sel:timeDelta:FileSourceSettings', fileSourceSettings_timeDelta - Optional. Use this setting when you need to adjust the sync between your sidecar captions and your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/time-delta-use-cases.html. Enter a positive or negative number to modify the times in the captions file. For example, type 15 to add 15 seconds to all the times in the captions file. Type -5 to subtract 5 seconds from the times in the captions file. You can optionally specify your time delta in milliseconds instead of seconds. When you do so, set the related setting, Time delta units (TimeDeltaUnits) to Milliseconds (MILLISECONDS). Note that, when you specify a time delta for timecode-based caption sources, such as SCC and STL, and your time delta isn't a multiple of the input frame rate, MediaConvert snaps the captions to the nearest frame. For example, when your input video frame rate is 25 fps and you specify 1010ms for time delta, MediaConvert delays your captions by 1000 ms.

$sel:timeDeltaUnits:FileSourceSettings', fileSourceSettings_timeDeltaUnits - When you use the setting Time delta (TimeDelta) to adjust the sync between your sidecar captions and your video, use this setting to specify the units for the delta that you specify. When you don't specify a value for Time delta units (TimeDeltaUnits), MediaConvert uses seconds by default.

$sel:sourceFile:FileSourceSettings', fileSourceSettings_sourceFile - External caption file used for loading captions. Accepted file extensions are 'scc', 'ttml', 'dfxp', 'stl', 'srt', 'xml', 'smi', 'webvtt', and 'vtt'.

fileSourceSettings_framerate :: Lens' FileSourceSettings (Maybe CaptionSourceFramerate) Source #

Ignore this setting unless your input captions format is SCC. To have the service compensate for differing frame rates between your input captions and input video, specify the frame rate of the captions file. Specify this value as a fraction. When you work directly in your JSON job specification, use the settings framerateNumerator and framerateDenominator. For example, you might specify 24 / 1 for 24 fps, 25 / 1 for 25 fps, 24000 / 1001 for 23.976 fps, or 30000 / 1001 for 29.97 fps.

fileSourceSettings_convert608To708 :: Lens' FileSourceSettings (Maybe FileSourceConvert608To708) Source #

Specify whether this set of input captions appears in your outputs in both 608 and 708 format. If you choose Upconvert (UPCONVERT), MediaConvert includes the captions data in two ways: it passes the 608 data through using the 608 compatibility bytes fields of the 708 wrapper, and it also translates the 608 data into 708.

fileSourceSettings_timeDelta :: Lens' FileSourceSettings (Maybe Int) Source #

Optional. Use this setting when you need to adjust the sync between your sidecar captions and your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/time-delta-use-cases.html. Enter a positive or negative number to modify the times in the captions file. For example, type 15 to add 15 seconds to all the times in the captions file. Type -5 to subtract 5 seconds from the times in the captions file. You can optionally specify your time delta in milliseconds instead of seconds. When you do so, set the related setting, Time delta units (TimeDeltaUnits) to Milliseconds (MILLISECONDS). Note that, when you specify a time delta for timecode-based caption sources, such as SCC and STL, and your time delta isn't a multiple of the input frame rate, MediaConvert snaps the captions to the nearest frame. For example, when your input video frame rate is 25 fps and you specify 1010ms for time delta, MediaConvert delays your captions by 1000 ms.

fileSourceSettings_timeDeltaUnits :: Lens' FileSourceSettings (Maybe FileSourceTimeDeltaUnits) Source #

When you use the setting Time delta (TimeDelta) to adjust the sync between your sidecar captions and your video, use this setting to specify the units for the delta that you specify. When you don't specify a value for Time delta units (TimeDeltaUnits), MediaConvert uses seconds by default.

fileSourceSettings_sourceFile :: Lens' FileSourceSettings (Maybe Text) Source #

External caption file used for loading captions. Accepted file extensions are 'scc', 'ttml', 'dfxp', 'stl', 'srt', 'xml', 'smi', 'webvtt', and 'vtt'.
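
The sketch below is a minimal, illustrative example of the construction pattern described above: start from the newFileSourceSettings smart constructor and override optional fields with the generated lenses. The (&) and (?~) operators come from the lens package (generic-lens or optics work just as well); the S3 URI, the 15-second time delta, and the FileSourceConvert608To708_UPCONVERT pattern-synonym name are assumptions for illustration, not values taken from this documentation.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Minimal sketch: an SCC sidecar caption source, upconverted to 608+708,
-- with all captions delayed by 15 seconds. Values are illustrative; the
-- UPCONVERT pattern-synonym name is assumed from the generated enum type.
sccCaptionSource :: FileSourceSettings
sccCaptionSource =
  newFileSourceSettings
    & fileSourceSettings_sourceFile ?~ "s3://example-bucket/captions/episode01.scc"
    & fileSourceSettings_convert608To708 ?~ FileSourceConvert608To708_UPCONVERT
    & fileSourceSettings_timeDelta ?~ 15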

FrameCaptureSettings

data FrameCaptureSettings Source #

Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value FRAME_CAPTURE.

See: newFrameCaptureSettings smart constructor.

Constructors

FrameCaptureSettings' 

Fields

  • quality :: Maybe Natural

    JPEG Quality - a higher value equals higher quality.

  • framerateDenominator :: Maybe Natural

    Frame capture will encode the first frame of the output stream, then one frame every framerateDenominator/framerateNumerator seconds. For example, settings of framerateNumerator = 1 and framerateDenominator = 3 (a rate of 1/3 frame per second) will capture the first frame, then 1 frame every 3s. Files will be named as filename.n.jpg where n is the 0-based sequence number of each capture.

  • maxCaptures :: Maybe Natural

    Maximum number of captures (encoded jpg output files).

  • framerateNumerator :: Maybe Natural

    Frame capture will encode the first frame of the output stream, then one frame every framerateDenominator/framerateNumerator seconds. For example, settings of framerateNumerator = 1 and framerateDenominator = 3 (a rate of 1/3 frame per second) will capture the first frame, then 1 frame every 3s. Files will be named as filename.NNNNNNN.jpg where N is the 0-based frame sequence number zero padded to 7 decimal places.

Instances

Instances details
Eq FrameCaptureSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FrameCaptureSettings

Read FrameCaptureSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FrameCaptureSettings

Show FrameCaptureSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FrameCaptureSettings

Generic FrameCaptureSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FrameCaptureSettings

Associated Types

type Rep FrameCaptureSettings :: Type -> Type #

NFData FrameCaptureSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FrameCaptureSettings

Methods

rnf :: FrameCaptureSettings -> () #

Hashable FrameCaptureSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FrameCaptureSettings

ToJSON FrameCaptureSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FrameCaptureSettings

FromJSON FrameCaptureSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FrameCaptureSettings

type Rep FrameCaptureSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.FrameCaptureSettings

type Rep FrameCaptureSettings = D1 ('MetaData "FrameCaptureSettings" "Amazonka.MediaConvert.Types.FrameCaptureSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "FrameCaptureSettings'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "quality") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "framerateDenominator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "maxCaptures") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "framerateNumerator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))

newFrameCaptureSettings :: FrameCaptureSettings Source #

Create a value of FrameCaptureSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:quality:FrameCaptureSettings', frameCaptureSettings_quality - JPEG Quality - a higher value equals higher quality.

$sel:framerateDenominator:FrameCaptureSettings', frameCaptureSettings_framerateDenominator - Frame capture will encode the first frame of the output stream, then one frame every framerateDenominator/framerateNumerator seconds. For example, settings of framerateNumerator = 1 and framerateDenominator = 3 (a rate of 1/3 frame per second) will capture the first frame, then 1 frame every 3s. Files will be named as filename.n.jpg where n is the 0-based sequence number of each capture.

$sel:maxCaptures:FrameCaptureSettings', frameCaptureSettings_maxCaptures - Maximum number of captures (encoded jpg output files).

$sel:framerateNumerator:FrameCaptureSettings', frameCaptureSettings_framerateNumerator - Frame capture will encode the first frame of the output stream, then one frame every framerateDenominator/framerateNumerator seconds. For example, settings of framerateNumerator = 1 and framerateDenominator = 3 (a rate of 1/3 frame per second) will capture the first frame, then 1 frame every 3s. Files will be named as filename.NNNNNNN.jpg where N is the 0-based frame sequence number zero padded to 7 decimal places.

frameCaptureSettings_quality :: Lens' FrameCaptureSettings (Maybe Natural) Source #

JPEG Quality - a higher value equals higher quality.

frameCaptureSettings_framerateDenominator :: Lens' FrameCaptureSettings (Maybe Natural) Source #

Frame capture will encode the first frame of the output stream, then one frame every framerateDenominator/framerateNumerator seconds. For example, settings of framerateNumerator = 1 and framerateDenominator = 3 (a rate of 1/3 frame per second) will capture the first frame, then 1 frame every 3s. Files will be named as filename.n.jpg where n is the 0-based sequence number of each capture.

frameCaptureSettings_maxCaptures :: Lens' FrameCaptureSettings (Maybe Natural) Source #

Maximum number of captures (encoded jpg output files).

frameCaptureSettings_framerateNumerator :: Lens' FrameCaptureSettings (Maybe Natural) Source #

Frame capture will encode the first frame of the output stream, then one frame every framerateDenominator/framerateNumerator seconds. For example, settings of framerateNumerator = 1 and framerateDenominator = 3 (a rate of 1/3 frame per second) will capture the first frame, then 1 frame every 3s. Files will be named as filename.NNNNNNN.jpg where N is the 0-based frame sequence number zero padded to 7 decimal places.
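
To make the capture-cadence arithmetic above concrete, the minimal sketch below requests the first frame plus one frame every framerateDenominator/framerateNumerator = 3/1 = 3 seconds, capped at 10 JPEG files of quality 80. All numbers are illustrative; only the lenses documented above are used, with (&) and (?~) from the lens package.

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Minimal sketch: capture the first frame, then one frame every 3 seconds
-- (framerateNumerator = 1, framerateDenominator = 3), for at most 10 files.
-- The quality value and the cap are illustrative.
thumbnailCapture :: FrameCaptureSettings
thumbnailCapture =
  newFrameCaptureSettings
    & frameCaptureSettings_framerateNumerator ?~ 1
    & frameCaptureSettings_framerateDenominator ?~ 3
    & frameCaptureSettings_maxCaptures ?~ 10
    & frameCaptureSettings_quality ?~ 80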

H264QvbrSettings

data H264QvbrSettings Source #

Settings for quality-defined variable bitrate encoding with the H.264 codec. Use these settings only when you set QVBR for Rate control mode (RateControlMode).

See: newH264QvbrSettings smart constructor.

Constructors

H264QvbrSettings' 

Fields

  • qvbrQualityLevelFineTune :: Maybe Double

    Optional. Specify a value here to set the QVBR quality to a level that is between whole numbers. For example, if you want your QVBR quality level to be 7.33, set qvbrQualityLevel to 7 and set qvbrQualityLevelFineTune to .33. MediaConvert rounds your QVBR quality level to the nearest third of a whole number. For example, if you set qvbrQualityLevel to 7 and you set qvbrQualityLevelFineTune to .25, your actual QVBR quality level is 7.33.

  • maxAverageBitrate :: Maybe Natural

    Use this setting only when Rate control mode is QVBR and Quality tuning level is Multi-pass HQ. For Max average bitrate values suited to the complexity of your input video, the service limits the average bitrate of the video part of this output to the value that you choose. That is, the total size of the video element is less than or equal to the value you set multiplied by the number of seconds of encoded output.

  • qvbrQualityLevel :: Maybe Natural

    Use this setting only when you set Rate control mode (RateControlMode) to QVBR. Specify the target quality level for this output. MediaConvert determines the right number of bits to use for each part of the video to maintain the video quality that you specify. When you keep the default value, AUTO, MediaConvert picks a quality level for you, based on characteristics of your input video. If you prefer to specify a quality level, specify a number from 1 through 10. Use higher numbers for greater quality. Level 10 results in nearly lossless compression. The quality level for most broadcast-quality transcodes is between 6 and 9. Optionally, to specify a value between whole numbers, also provide a value for the setting qvbrQualityLevelFineTune. For example, if you want your QVBR quality level to be 7.33, set qvbrQualityLevel to 7 and set qvbrQualityLevelFineTune to .33.

Instances

Instances details
Eq H264QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QvbrSettings

Read H264QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QvbrSettings

Show H264QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QvbrSettings

Generic H264QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QvbrSettings

Associated Types

type Rep H264QvbrSettings :: Type -> Type #

NFData H264QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QvbrSettings

Methods

rnf :: H264QvbrSettings -> () #

Hashable H264QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QvbrSettings

ToJSON H264QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QvbrSettings

FromJSON H264QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QvbrSettings

type Rep H264QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264QvbrSettings

type Rep H264QvbrSettings = D1 ('MetaData "H264QvbrSettings" "Amazonka.MediaConvert.Types.H264QvbrSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "H264QvbrSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "qvbrQualityLevelFineTune") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: (S1 ('MetaSel ('Just "maxAverageBitrate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "qvbrQualityLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))

newH264QvbrSettings :: H264QvbrSettings Source #

Create a value of H264QvbrSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:qvbrQualityLevelFineTune:H264QvbrSettings', h264QvbrSettings_qvbrQualityLevelFineTune - Optional. Specify a value here to set the QVBR quality to a level that is between whole numbers. For example, if you want your QVBR quality level to be 7.33, set qvbrQualityLevel to 7 and set qvbrQualityLevelFineTune to .33. MediaConvert rounds your QVBR quality level to the nearest third of a whole number. For example, if you set qvbrQualityLevel to 7 and you set qvbrQualityLevelFineTune to .25, your actual QVBR quality level is 7.33.

$sel:maxAverageBitrate:H264QvbrSettings', h264QvbrSettings_maxAverageBitrate - Use this setting only when Rate control mode is QVBR and Quality tuning level is Multi-pass HQ. For Max average bitrate values suited to the complexity of your input video, the service limits the average bitrate of the video part of this output to the value that you choose. That is, the total size of the video element is less than or equal to the value you set multiplied by the number of seconds of encoded output.

$sel:qvbrQualityLevel:H264QvbrSettings', h264QvbrSettings_qvbrQualityLevel - Use this setting only when you set Rate control mode (RateControlMode) to QVBR. Specify the target quality level for this output. MediaConvert determines the right number of bits to use for each part of the video to maintain the video quality that you specify. When you keep the default value, AUTO, MediaConvert picks a quality level for you, based on characteristics of your input video. If you prefer to specify a quality level, specify a number from 1 through 10. Use higher numbers for greater quality. Level 10 results in nearly lossless compression. The quality level for most broadcast-quality transcodes is between 6 and 9. Optionally, to specify a value between whole numbers, also provide a value for the setting qvbrQualityLevelFineTune. For example, if you want your QVBR quality level to be 7.33, set qvbrQualityLevel to 7 and set qvbrQualityLevelFineTune to .33.

h264QvbrSettings_qvbrQualityLevelFineTune :: Lens' H264QvbrSettings (Maybe Double) Source #

Optional. Specify a value here to set the QVBR quality to a level that is between whole numbers. For example, if you want your QVBR quality level to be 7.33, set qvbrQualityLevel to 7 and set qvbrQualityLevelFineTune to .33. MediaConvert rounds your QVBR quality level to the nearest third of a whole number. For example, if you set qvbrQualityLevel to 7 and you set qvbrQualityLevelFineTune to .25, your actual QVBR quality level is 7.33.

h264QvbrSettings_maxAverageBitrate :: Lens' H264QvbrSettings (Maybe Natural) Source #

Use this setting only when Rate control mode is QVBR and Quality tuning level is Multi-pass HQ. For Max average bitrate values suited to the complexity of your input video, the service limits the average bitrate of the video part of this output to the value that you choose. That is, the total size of the video element is less than or equal to the value you set multiplied by the number of seconds of encoded output.

h264QvbrSettings_qvbrQualityLevel :: Lens' H264QvbrSettings (Maybe Natural) Source #

Use this setting only when you set Rate control mode (RateControlMode) to QVBR. Specify the target quality level for this output. MediaConvert determines the right number of bits to use for each part of the video to maintain the video quality that you specify. When you keep the default value, AUTO, MediaConvert picks a quality level for you, based on characteristics of your input video. If you prefer to specify a quality level, specify a number from 1 through 10. Use higher numbers for greater quality. Level 10 results in nearly lossless compression. The quality level for most broadcast-quality transcodes is between 6 and 9. Optionally, to specify a value between whole numbers, also provide a value for the setting qvbrQualityLevelFineTune. For example, if you want your QVBR quality level to be 7.33, set qvbrQualityLevel to 7 and set qvbrQualityLevelFineTune to .33.
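
The minimal sketch below targets the QVBR quality level 7.33 described above (qvbrQualityLevel 7 plus qvbrQualityLevelFineTune 0.33, which already lies on a third of a whole number) and caps the average bitrate at 5 Mb/s. The numbers are illustrative; this value only takes effect when the enclosing H264Settings uses QVBR rate control.

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Minimal sketch: QVBR quality 7 + 0.33 = 7.33, average bitrate capped at
-- 5,000,000 bits per second. Values are illustrative.
qvbrQuality7_33 :: H264QvbrSettings
qvbrQuality7_33 =
  newH264QvbrSettings
    & h264QvbrSettings_qvbrQualityLevel ?~ 7
    & h264QvbrSettings_qvbrQualityLevelFineTune ?~ 0.33
    & h264QvbrSettings_maxAverageBitrate ?~ 5000000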

H264Settings

data H264Settings Source #

Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value H_264.

See: newH264Settings smart constructor.

Constructors

H264Settings' 

Fields

  • unregisteredSeiTimecode :: Maybe H264UnregisteredSeiTimecode

    Inserts timecode for each frame as 4 bytes of an unregistered SEI message.

  • qualityTuningLevel :: Maybe H264QualityTuningLevel

    Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.

  • temporalAdaptiveQuantization :: Maybe H264TemporalAdaptiveQuantization

    Only use this setting when you change the default value, AUTO, for the setting H264AdaptiveQuantization. When you keep all defaults, excluding H264AdaptiveQuantization and all other adaptive quantization from your JSON job specification, MediaConvert automatically applies the best types of quantization for your video content. When you set H264AdaptiveQuantization to a value other than AUTO, the default value for H264TemporalAdaptiveQuantization is Enabled (ENABLED). Keep this default value to adjust quantization within each frame based on temporal variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas of the frame that aren't moving and uses more bits on complex objects with sharp edges that move a lot. For example, this feature improves the readability of text tickers on newscasts and scoreboards on sports matches. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen that doesn't have moving objects with sharp edges, such as sports athletes' faces, you might choose to set H264TemporalAdaptiveQuantization to Disabled (DISABLED). Related setting: When you enable temporal quantization, adjust the strength of the filter with the setting Adaptive quantization (adaptiveQuantization). To manually enable or disable H264TemporalAdaptiveQuantization, you must set Adaptive quantization (H264AdaptiveQuantization) to a value other than AUTO.

  • sceneChangeDetect :: Maybe H264SceneChangeDetect

    Enable this setting to insert I-frames at scene changes that the service automatically detects. This improves video quality and is enabled by default. If this output uses QVBR, choose Transition detection (TRANSITION_DETECTION) for further video quality improvement. For more information about QVBR, see https://docs.aws.amazon.com/console/mediaconvert/cbr-vbr-qvbr.

  • hrdBufferInitialFillPercentage :: Maybe Natural

    Percentage of the buffer that should initially be filled (HRD buffer model).

  • slowPal :: Maybe H264SlowPal

    Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.

  • parNumerator :: Maybe Natural

    Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parNumerator is 40.

  • gopSize :: Maybe Double

    GOP Length (keyframe interval) in frames or seconds. Must be greater than zero.

  • numberBFramesBetweenReferenceFrames :: Maybe Natural

    Specify the number of B-frames that MediaConvert puts between reference frames in this output. Valid values are whole numbers from 0 through 7. When you don't specify a value, MediaConvert defaults to 2.

  • gopSizeUnits :: Maybe H264GopSizeUnits

    Indicates if the GOP Size in H264 is specified in frames or seconds. If seconds, the system will convert the GOP Size into a frame count at run time.

  • hrdBufferSize :: Maybe Natural

    Size of buffer (HRD buffer model) in bits. For example, enter five megabits as 5000000.

  • slices :: Maybe Natural

    Number of slices per picture. Must be less than or equal to the number of macroblock rows for progressive pictures, and less than or equal to half the number of macroblock rows for interlaced pictures.

  • rateControlMode :: Maybe H264RateControlMode

    Use this setting to specify whether this output has a variable bitrate (VBR), constant bitrate (CBR) or quality-defined variable bitrate (QVBR).

  • numberReferenceFrames :: Maybe Natural

    Number of reference frames to use. The encoder may use more than requested if using B-frames and/or interlaced encoding.

  • telecine :: Maybe H264Telecine

    When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard or soft telecine to create a smoother picture. Hard telecine (HARD) produces a 29.97i output. Soft telecine (SOFT) produces a 23.976 output that signals to the video player device to do the conversion during playback. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 without doing anything with the field polarity to create a smoother picture.

  • dynamicSubGop :: Maybe H264DynamicSubGop

    Choose Adaptive to improve subjective video quality for high-motion content. This will cause the service to use fewer B-frames (which infer information based on other frames) for high-motion portions of the video and more B-frames for low-motion portions. The maximum number of B-frames is limited by the value you provide for the setting B frames between reference frames (numberBFramesBetweenReferenceFrames).

  • minIInterval :: Maybe Natural

    Enforces separation between repeated (cadence) I-frames and I-frames inserted by Scene Change Detection. If a scene change I-frame is within I-interval frames of a cadence I-frame, the GOP is shrunk and/or stretched to the scene change I-frame. GOP stretch requires enabling lookahead as well as setting I-interval. The normal cadence resumes for the next GOP. This setting is only used when Scene Change Detect is enabled. Note: Maximum GOP stretch = GOP size + Min-I-interval - 1

  • interlaceMode :: Maybe H264InterlaceMode

    Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE), to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first, depending on which of the Follow options you choose.

  • parControl :: Maybe H264ParControl

    Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.

  • repeatPps :: Maybe H264RepeatPps

    Places a PPS header on each encoded picture, even if repeated.

  • scanTypeConversionMode :: Maybe H264ScanTypeConversionMode

    Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).

  • flickerAdaptiveQuantization :: Maybe H264FlickerAdaptiveQuantization

    Only use this setting when you change the default value, AUTO, for the setting H264AdaptiveQuantization. When you keep all defaults, excluding H264AdaptiveQuantization and all other adaptive quantization from your JSON job specification, MediaConvert automatically applies the best types of quantization for your video content. When you set H264AdaptiveQuantization to a value other than AUTO, the default value for H264FlickerAdaptiveQuantization is Disabled (DISABLED). Change this value to Enabled (ENABLED) to reduce I-frame pop. I-frame pop appears as a visual flicker that can arise when the encoder saves bits by copying some macroblocks many times from frame to frame, and then refreshes them at the I-frame. When you enable this setting, the encoder updates these macroblocks slightly more often to smooth out the flicker. To manually enable or disable H264FlickerAdaptiveQuantization, you must set Adaptive quantization (H264AdaptiveQuantization) to a value other than AUTO.

  • qvbrSettings :: Maybe H264QvbrSettings

    Settings for quality-defined variable bitrate encoding with the H.264 codec. Use these settings only when you set QVBR for Rate control mode (RateControlMode).

  • softness :: Maybe Natural

    Ignore this setting unless you need to comply with a specification that requires a specific value. If you don't have a specification requirement, we recommend that you adjust the softness of your output by using a lower value for the setting Sharpness (sharpness) or by enabling a noise reducer filter (noiseReducerFilter). The Softness (softness) setting specifies the quantization matrices that the encoder uses. Keep the default value, 0, for flat quantization. Choose the value 1 or 16 to use the default JVT softening quantization matrices from the H.264 specification. Choose a value from 17 to 128 to use planar interpolation. Increasing values from 17 to 128 result in increasing reduction of high-frequency data. The value 128 results in the softest video.

  • codecProfile :: Maybe H264CodecProfile

    H.264 Profile. High 4:2:2 and 10-bit profiles are only available with the AVC-I License.

  • bitrate :: Maybe Natural

    Specify the average bitrate in bits per second. Required for VBR and CBR. For MS Smooth outputs, bitrates must be unique when rounded down to the nearest multiple of 1000.

  • framerateDenominator :: Maybe Natural

    When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

  • framerateConversionAlgorithm :: Maybe H264FramerateConversionAlgorithm

    Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

  • codecLevel :: Maybe H264CodecLevel

    Specify an H.264 level that is consistent with your output video settings. If you aren't sure what level to specify, choose Auto (AUTO).

  • entropyEncoding :: Maybe H264EntropyEncoding

    Entropy encoding mode. Use CABAC (must be in Main or High profile) or CAVLC.

  • framerateControl :: Maybe H264FramerateControl

    If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

  • adaptiveQuantization :: Maybe H264AdaptiveQuantization

    Keep the default value, Auto (AUTO), for this setting to have MediaConvert automatically apply the best types of quantization for your video content. When you want to apply your quantization settings manually, you must set H264AdaptiveQuantization to a value other than Auto (AUTO). Use this setting to specify the strength of any adaptive quantization filters that you enable. If you don't want MediaConvert to do any adaptive quantization in this transcode, set Adaptive quantization (H264AdaptiveQuantization) to Off (OFF). Related settings: The value that you choose here applies to the following settings: H264FlickerAdaptiveQuantization, H264SpatialAdaptiveQuantization, and H264TemporalAdaptiveQuantization.

  • framerateNumerator :: Maybe Natural

    When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

  • gopBReference :: Maybe H264GopBReference

    If enabled, use reference B frames for GOP structures that have B frames > 1.

  • maxBitrate :: Maybe Natural

    Maximum bitrate in bits/second. For example, enter five megabits per second as 5000000. Required when Rate control mode is QVBR.

  • syntax :: Maybe H264Syntax

    Produces a bitstream compliant with SMPTE RP-2027.

  • fieldEncoding :: Maybe H264FieldEncoding

    The video encoding method for your MPEG-4 AVC output. Keep the default value, PAFF, to have MediaConvert use PAFF encoding for interlaced outputs. Choose Force field (FORCE_FIELD) to disable PAFF encoding and create separate interlaced fields. Choose MBAFF to disable PAFF and have MediaConvert use MBAFF encoding for interlaced outputs.

  • gopClosedCadence :: Maybe Natural

    Frequency of closed GOPs. In streaming applications, it is recommended that this be set to 1 so a decoder joining mid-stream will receive an IDR frame as quickly as possible. Setting this value to 0 will break output segmenting.

  • parDenominator :: Maybe Natural

    Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parDenominator is 33.

  • spatialAdaptiveQuantization :: Maybe H264SpatialAdaptiveQuantization

    Only use this setting when you change the default value, Auto (AUTO), for the setting H264AdaptiveQuantization. When you keep all defaults, excluding H264AdaptiveQuantization and all other adaptive quantization from your JSON job specification, MediaConvert automatically applies the best types of quantization for your video content. When you set H264AdaptiveQuantization to a value other than AUTO, the default value for H264SpatialAdaptiveQuantization is Enabled (ENABLED). Keep this default value to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to set H264SpatialAdaptiveQuantization to Disabled (DISABLED). Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (H264AdaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher. To manually enable or disable H264SpatialAdaptiveQuantization, you must set Adaptive quantization (H264AdaptiveQuantization) to a value other than AUTO.

Instances

Instances details
Eq H264Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Settings

Read H264Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Settings

Show H264Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Settings

Generic H264Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Settings

Associated Types

type Rep H264Settings :: Type -> Type #

NFData H264Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Settings

Methods

rnf :: H264Settings -> () #

Hashable H264Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Settings

ToJSON H264Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Settings

FromJSON H264Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Settings

type Rep H264Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H264Settings

type Rep H264Settings = D1 ('MetaData "H264Settings" "Amazonka.MediaConvert.Types.H264Settings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "H264Settings'" 'PrefixI 'True) (((((S1 ('MetaSel ('Just "unregisteredSeiTimecode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264UnregisteredSeiTimecode)) :*: S1 ('MetaSel ('Just "qualityTuningLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264QualityTuningLevel))) :*: (S1 ('MetaSel ('Just "temporalAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264TemporalAdaptiveQuantization)) :*: (S1 ('MetaSel ('Just "sceneChangeDetect") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264SceneChangeDetect)) :*: S1 ('MetaSel ('Just "hrdBufferInitialFillPercentage") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))) :*: ((S1 ('MetaSel ('Just "slowPal") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264SlowPal)) :*: S1 ('MetaSel ('Just "parNumerator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "gopSize") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: (S1 ('MetaSel ('Just "numberBFramesBetweenReferenceFrames") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "gopSizeUnits") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264GopSizeUnits)))))) :*: (((S1 ('MetaSel ('Just "hrdBufferSize") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "slices") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "rateControlMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264RateControlMode)) :*: (S1 ('MetaSel ('Just "numberReferenceFrames") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "telecine") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264Telecine))))) :*: ((S1 ('MetaSel ('Just "dynamicSubGop") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264DynamicSubGop)) :*: S1 ('MetaSel ('Just "minIInterval") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "interlaceMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264InterlaceMode)) :*: (S1 ('MetaSel ('Just "parControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264ParControl)) :*: S1 ('MetaSel ('Just "repeatPps") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264RepeatPps))))))) :*: ((((S1 ('MetaSel ('Just "scanTypeConversionMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264ScanTypeConversionMode)) :*: S1 ('MetaSel ('Just "flickerAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264FlickerAdaptiveQuantization))) :*: (S1 ('MetaSel ('Just "qvbrSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264QvbrSettings)) :*: (S1 ('MetaSel ('Just "softness") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "codecProfile") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe 
H264CodecProfile))))) :*: ((S1 ('MetaSel ('Just "bitrate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "framerateDenominator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "framerateConversionAlgorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264FramerateConversionAlgorithm)) :*: (S1 ('MetaSel ('Just "codecLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264CodecLevel)) :*: S1 ('MetaSel ('Just "entropyEncoding") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264EntropyEncoding)))))) :*: (((S1 ('MetaSel ('Just "framerateControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264FramerateControl)) :*: S1 ('MetaSel ('Just "adaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264AdaptiveQuantization))) :*: (S1 ('MetaSel ('Just "framerateNumerator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "gopBReference") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264GopBReference)) :*: S1 ('MetaSel ('Just "maxBitrate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))) :*: ((S1 ('MetaSel ('Just "syntax") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264Syntax)) :*: S1 ('MetaSel ('Just "fieldEncoding") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264FieldEncoding))) :*: (S1 ('MetaSel ('Just "gopClosedCadence") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "parDenominator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "spatialAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264SpatialAdaptiveQuantization)))))))))

newH264Settings :: H264Settings Source #

Create a value of H264Settings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:unregisteredSeiTimecode:H264Settings', h264Settings_unregisteredSeiTimecode - Inserts timecode for each frame as 4 bytes of an unregistered SEI message.

$sel:qualityTuningLevel:H264Settings', h264Settings_qualityTuningLevel - Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.

$sel:temporalAdaptiveQuantization:H264Settings', h264Settings_temporalAdaptiveQuantization - Only use this setting when you change the default value, AUTO, for the setting H264AdaptiveQuantization. When you keep all defaults, excluding H264AdaptiveQuantization and all other adaptive quantization from your JSON job specification, MediaConvert automatically applies the best types of quantization for your video content. When you set H264AdaptiveQuantization to a value other than AUTO, the default value for H264TemporalAdaptiveQuantization is Enabled (ENABLED). Keep this default value to adjust quantization within each frame based on temporal variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas of the frame that aren't moving and uses more bits on complex objects with sharp edges that move a lot. For example, this feature improves the readability of text tickers on newscasts and scoreboards on sports matches. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen that doesn't have moving objects with sharp edges, such as sports athletes' faces, you might choose to set H264TemporalAdaptiveQuantization to Disabled (DISABLED). Related setting: When you enable temporal quantization, adjust the strength of the filter with the setting Adaptive quantization (adaptiveQuantization). To manually enable or disable H264TemporalAdaptiveQuantization, you must set Adaptive quantization (H264AdaptiveQuantization) to a value other than AUTO.

$sel:sceneChangeDetect:H264Settings', h264Settings_sceneChangeDetect - Enable this setting to insert I-frames at scene changes that the service automatically detects. This improves video quality and is enabled by default. If this output uses QVBR, choose Transition detection (TRANSITION_DETECTION) for further video quality improvement. For more information about QVBR, see https://docs.aws.amazon.com/console/mediaconvert/cbr-vbr-qvbr.

$sel:hrdBufferInitialFillPercentage:H264Settings', h264Settings_hrdBufferInitialFillPercentage - Percentage of the buffer that should initially be filled (HRD buffer model).

$sel:slowPal:H264Settings', h264Settings_slowPal - Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.

$sel:parNumerator:H264Settings', h264Settings_parNumerator - Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parNumerator is 40.

$sel:gopSize:H264Settings', h264Settings_gopSize - GOP Length (keyframe interval) in frames or seconds. Must be greater than zero.

$sel:numberBFramesBetweenReferenceFrames:H264Settings', h264Settings_numberBFramesBetweenReferenceFrames - Specify the number of B-frames that MediaConvert puts between reference frames in this output. Valid values are whole numbers from 0 through 7. When you don't specify a value, MediaConvert defaults to 2.

$sel:gopSizeUnits:H264Settings', h264Settings_gopSizeUnits - Indicates if the GOP Size in H264 is specified in frames or seconds. If seconds, the system will convert the GOP Size into a frame count at run time.

$sel:hrdBufferSize:H264Settings', h264Settings_hrdBufferSize - Size of buffer (HRD buffer model) in bits. For example, enter five megabits as 5000000.

$sel:slices:H264Settings', h264Settings_slices - Number of slices per picture. Must be less than or equal to the number of macroblock rows for progressive pictures, and less than or equal to half the number of macroblock rows for interlaced pictures.

$sel:rateControlMode:H264Settings', h264Settings_rateControlMode - Use this setting to specify whether this output has a variable bitrate (VBR), constant bitrate (CBR) or quality-defined variable bitrate (QVBR).

$sel:numberReferenceFrames:H264Settings', h264Settings_numberReferenceFrames - Number of reference frames to use. The encoder may use more than requested if using B-frames and/or interlaced encoding.

$sel:telecine:H264Settings', h264Settings_telecine - When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard or soft telecine to create a smoother picture. Hard telecine (HARD) produces a 29.97i output. Soft telecine (SOFT) produces a 23.976 output that signals to the video player device to do the conversion during playback. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 without doing anything with the field polarity to create a smoother picture.

$sel:dynamicSubGop:H264Settings', h264Settings_dynamicSubGop - Choose Adaptive to improve subjective video quality for high-motion content. This will cause the service to use fewer B-frames (which infer information based on other frames) for high-motion portions of the video and more B-frames for low-motion portions. The maximum number of B-frames is limited by the value you provide for the setting B frames between reference frames (numberBFramesBetweenReferenceFrames).

$sel:minIInterval:H264Settings', h264Settings_minIInterval - Enforces separation between repeated (cadence) I-frames and I-frames inserted by Scene Change Detection. If a scene change I-frame is within I-interval frames of a cadence I-frame, the GOP is shrunk and/or stretched to the scene change I-frame. GOP stretch requires enabling lookahead as well as setting I-interval. The normal cadence resumes for the next GOP. This setting is only used when Scene Change Detect is enabled. Note: Maximum GOP stretch = GOP size + Min-I-interval - 1

$sel:interlaceMode:H264Settings', h264Settings_interlaceMode - Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE), to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first, depending on which of the Follow options you choose.

$sel:parControl:H264Settings', h264Settings_parControl - Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.

$sel:repeatPps:H264Settings', h264Settings_repeatPps - Places a PPS header on each encoded picture, even if repeated.

$sel:scanTypeConversionMode:H264Settings', h264Settings_scanTypeConversionMode - Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).

$sel:flickerAdaptiveQuantization:H264Settings', h264Settings_flickerAdaptiveQuantization - Only use this setting when you change the default value, AUTO, for the setting H264AdaptiveQuantization. When you keep all defaults, excluding H264AdaptiveQuantization and all other adaptive quantization from your JSON job specification, MediaConvert automatically applies the best types of quantization for your video content. When you set H264AdaptiveQuantization to a value other than AUTO, the default value for H264FlickerAdaptiveQuantization is Disabled (DISABLED). Change this value to Enabled (ENABLED) to reduce I-frame pop. I-frame pop appears as a visual flicker that can arise when the encoder saves bits by copying some macroblocks many times from frame to frame, and then refreshes them at the I-frame. When you enable this setting, the encoder updates these macroblocks slightly more often to smooth out the flicker. To manually enable or disable H264FlickerAdaptiveQuantization, you must set Adaptive quantization (H264AdaptiveQuantization) to a value other than AUTO.

$sel:qvbrSettings:H264Settings', h264Settings_qvbrSettings - Settings for quality-defined variable bitrate encoding with the H.264 codec. Use these settings only when you set QVBR for Rate control mode (RateControlMode).

$sel:softness:H264Settings', h264Settings_softness - Ignore this setting unless you need to comply with a specification that requires a specific value. If you don't have a specification requirement, we recommend that you adjust the softness of your output by using a lower value for the setting Sharpness (sharpness) or by enabling a noise reducer filter (noiseReducerFilter). The Softness (softness) setting specifies the quantization matrices that the encoder uses. Keep the default value, 0, for flat quantization. Choose the value 1 or 16 to use the default JVT softening quantization matrices from the H.264 specification. Choose a value from 17 to 128 to use planar interpolation. Increasing values from 17 to 128 result in increasing reduction of high-frequency data. The value 128 results in the softest video.

$sel:codecProfile:H264Settings', h264Settings_codecProfile - H.264 Profile. High 4:2:2 and 10-bit profiles are only available with the AVC-I License.

$sel:bitrate:H264Settings', h264Settings_bitrate - Specify the average bitrate in bits per second. Required for VBR and CBR. For MS Smooth outputs, bitrates must be unique when rounded down to the nearest multiple of 1000.

$sel:framerateDenominator:H264Settings', h264Settings_framerateDenominator - When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

$sel:framerateConversionAlgorithm:H264Settings', h264Settings_framerateConversionAlgorithm - Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

$sel:codecLevel:H264Settings', h264Settings_codecLevel - Specify an H.264 level that is consistent with your output video settings. If you aren't sure what level to specify, choose Auto (AUTO).

$sel:entropyEncoding:H264Settings', h264Settings_entropyEncoding - Entropy encoding mode. Use CABAC (must be in Main or High profile) or CAVLC.

$sel:framerateControl:H264Settings', h264Settings_framerateControl - If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

$sel:adaptiveQuantization:H264Settings', h264Settings_adaptiveQuantization - Keep the default value, Auto (AUTO), for this setting to have MediaConvert automatically apply the best types of quantization for your video content. When you want to apply your quantization settings manually, you must set H264AdaptiveQuantization to a value other than Auto (AUTO). Use this setting to specify the strength of any adaptive quantization filters that you enable. If you don't want MediaConvert to do any adaptive quantization in this transcode, set Adaptive quantization (H264AdaptiveQuantization) to Off (OFF). Related settings: The value that you choose here applies to the following settings: H264FlickerAdaptiveQuantization, H264SpatialAdaptiveQuantization, and H264TemporalAdaptiveQuantization.

$sel:framerateNumerator:H264Settings', h264Settings_framerateNumerator - When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

$sel:gopBReference:H264Settings', h264Settings_gopBReference - If enabled, use reference B-frames for GOP structures that have B-frames > 1.

$sel:maxBitrate:H264Settings', h264Settings_maxBitrate - Maximum bitrate in bits/second. For example, enter five megabits per second as 5000000. Required when Rate control mode is QVBR.

$sel:syntax:H264Settings', h264Settings_syntax - Produces a bitstream compliant with SMPTE RP-2027.

$sel:fieldEncoding:H264Settings', h264Settings_fieldEncoding - The video encoding method for your MPEG-4 AVC output. Keep the default value, PAFF, to have MediaConvert use PAFF encoding for interlaced outputs. Choose Force field (FORCE_FIELD) to disable PAFF encoding and create separate interlaced fields. Choose MBAFF to disable PAFF and have MediaConvert use MBAFF encoding for interlaced outputs.

$sel:gopClosedCadence:H264Settings', h264Settings_gopClosedCadence - Frequency of closed GOPs. In streaming applications, it is recommended that this be set to 1 so a decoder joining mid-stream will receive an IDR frame as quickly as possible. Setting this value to 0 will break output segmenting.

$sel:parDenominator:H264Settings', h264Settings_parDenominator - Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parDenominator is 33.

$sel:spatialAdaptiveQuantization:H264Settings', h264Settings_spatialAdaptiveQuantization - Only use this setting when you change the default value, Auto (AUTO), for the setting H264AdaptiveQuantization. When you keep all defaults, excluding H264AdaptiveQuantization and all other adaptive quantization from your JSON job specification, MediaConvert automatically applies the best types of quantization for your video content. When you set H264AdaptiveQuantization to a value other than AUTO, the default value for H264SpatialAdaptiveQuantization is Enabled (ENABLED). Keep this default value to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to set H264SpatialAdaptiveQuantization to Disabled (DISABLED). Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (H264AdaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher. To manually enable or disable H264SpatialAdaptiveQuantization, you must set Adaptive quantization (H264AdaptiveQuantization) to a value other than AUTO.

h264Settings_unregisteredSeiTimecode :: Lens' H264Settings (Maybe H264UnregisteredSeiTimecode) Source #

Inserts timecode for each frame as 4 bytes of an unregistered SEI message.

h264Settings_qualityTuningLevel :: Lens' H264Settings (Maybe H264QualityTuningLevel) Source #

Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.

h264Settings_temporalAdaptiveQuantization :: Lens' H264Settings (Maybe H264TemporalAdaptiveQuantization) Source #

Only use this setting when you change the default value, AUTO, for the setting H264AdaptiveQuantization. When you keep all defaults, excluding H264AdaptiveQuantization and all other adaptive quantization from your JSON job specification, MediaConvert automatically applies the best types of quantization for your video content. When you set H264AdaptiveQuantization to a value other than AUTO, the default value for H264TemporalAdaptiveQuantization is Enabled (ENABLED). Keep this default value to adjust quantization within each frame based on temporal variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas of the frame that aren't moving and uses more bits on complex objects with sharp edges that move a lot. For example, this feature improves the readability of text tickers on newscasts and scoreboards on sports matches. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen that doesn't have moving objects with sharp edges, such as sports athletes' faces, you might choose to set H264TemporalAdaptiveQuantization to Disabled (DISABLED). Related setting: When you enable temporal quantization, adjust the strength of the filter with the setting Adaptive quantization (adaptiveQuantization). To manually enable or disable H264TemporalAdaptiveQuantization, you must set Adaptive quantization (H264AdaptiveQuantization) to a value other than AUTO.

h264Settings_sceneChangeDetect :: Lens' H264Settings (Maybe H264SceneChangeDetect) Source #

Enable this setting to insert I-frames at scene changes that the service automatically detects. This improves video quality and is enabled by default. If this output uses QVBR, choose Transition detection (TRANSITION_DETECTION) for further video quality improvement. For more information about QVBR, see https://docs.aws.amazon.com/console/mediaconvert/cbr-vbr-qvbr.

h264Settings_hrdBufferInitialFillPercentage :: Lens' H264Settings (Maybe Natural) Source #

Percentage of the buffer that should initially be filled (HRD buffer model).

h264Settings_slowPal :: Lens' H264Settings (Maybe H264SlowPal) Source #

Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.
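
As a hedged sketch of those required settings, the snippet below applies them with the lenses documented on this page, starting from the newH264Settings smart constructor (the H264Settings analogue of newH265Settings below). The (&) and (?~) operators come from the lens package, and the enum pattern synonym names (H264SlowPal_ENABLED, H264FramerateControl_SPECIFIED) are assumed to follow the TypeName_VALUE convention of the generated enum types; verify them on the corresponding type pages.

    import Amazonka.MediaConvert.Types
    import Control.Lens ((&), (?~))

    -- Sketch only: enable slow PAL and pin the output frame rate to 25/1,
    -- as the required settings above describe. The pattern synonym names
    -- used here are assumptions, not taken from this page.
    slowPalH264 :: H264Settings
    slowPalH264 =
      newH264Settings
        & h264Settings_slowPal ?~ H264SlowPal_ENABLED
        & h264Settings_framerateControl ?~ H264FramerateControl_SPECIFIED
        & h264Settings_framerateNumerator ?~ 25
        & h264Settings_framerateDenominator ?~ 1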

h264Settings_parNumerator :: Lens' H264Settings (Maybe Natural) Source #

Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parNumerator is 40.

h264Settings_gopSize :: Lens' H264Settings (Maybe Double) Source #

GOP Length (keyframe interval) in frames or seconds. Must be greater than zero.

h264Settings_numberBFramesBetweenReferenceFrames :: Lens' H264Settings (Maybe Natural) Source #

Specify the number of B-frames that MediaConvert puts between reference frames in this output. Valid values are whole numbers from 0 through 7. When you don't specify a value, MediaConvert defaults to 2.

h264Settings_gopSizeUnits :: Lens' H264Settings (Maybe H264GopSizeUnits) Source #

Indicates if the GOP Size in H264 is specified in frames or seconds. If seconds, the system will convert the GOP Size into a frame count at run time.

h264Settings_hrdBufferSize :: Lens' H264Settings (Maybe Natural) Source #

Size of buffer (HRD buffer model) in bits. For example, enter five megabits as 5000000.

h264Settings_slices :: Lens' H264Settings (Maybe Natural) Source #

Number of slices per picture. Must be less than or equal to the number of macroblock rows for progressive pictures, and less than or equal to half the number of macroblock rows for interlaced pictures.

h264Settings_rateControlMode :: Lens' H264Settings (Maybe H264RateControlMode) Source #

Use this setting to specify whether this output has a variable bitrate (VBR), constant bitrate (CBR) or quality-defined variable bitrate (QVBR).

h264Settings_numberReferenceFrames :: Lens' H264Settings (Maybe Natural) Source #

Number of reference frames to use. The encoder may use more than requested if using B-frames and/or interlaced encoding.

h264Settings_telecine :: Lens' H264Settings (Maybe H264Telecine) Source #

When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard or soft telecine to create a smoother picture. Hard telecine (HARD) produces a 29.97i output. Soft telecine (SOFT) produces a 23.976 output that signals the video player device to do the conversion during playback. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 without doing anything with the field polarity to create a smoother picture.

h264Settings_dynamicSubGop :: Lens' H264Settings (Maybe H264DynamicSubGop) Source #

Choose Adaptive to improve subjective video quality for high-motion content. This will cause the service to use fewer B-frames (which infer information based on other frames) for high-motion portions of the video and more B-frames for low-motion portions. The maximum number of B-frames is limited by the value you provide for the setting B frames between reference frames (numberBFramesBetweenReferenceFrames).

h264Settings_minIInterval :: Lens' H264Settings (Maybe Natural) Source #

Enforces separation between repeated (cadence) I-frames and I-frames inserted by Scene Change Detection. If a scene change I-frame is within I-interval frames of a cadence I-frame, the GOP is shrunk and/or stretched to the scene change I-frame. GOP stretch requires enabling lookahead as well as setting I-interval. The normal cadence resumes for the next GOP. This setting is only used when Scene Change Detect is enabled. Note: Maximum GOP stretch = GOP size + Min-I-interval - 1

h264Settings_interlaceMode :: Lens' H264Settings (Maybe H264InterlaceMode) Source #

Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE), to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first polarity, depending on which of the Follow options you choose.

h264Settings_parControl :: Lens' H264Settings (Maybe H264ParControl) Source #

Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.

h264Settings_repeatPps :: Lens' H264Settings (Maybe H264RepeatPps) Source #

Places a PPS header on each encoded picture, even if repeated.

h264Settings_scanTypeConversionMode :: Lens' H264Settings (Maybe H264ScanTypeConversionMode) Source #

Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).
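
As a sketch of that combination, the following sets optimized interlacing together with the telecine and interlace-mode values it requires, using the lenses documented above. The pattern synonym names are assumptions in the TypeName_VALUE style (for example H264ScanTypeConversionMode_INTERLACED_OPTIMIZE); check the enum type pages before relying on them.

    import Amazonka.MediaConvert.Types
    import Control.Lens ((&), (?~))

    -- Sketch only: optimized interlacing needs telecine NONE or SOFT and an
    -- interlace mode other than PROGRESSIVE. Pattern synonym names assumed.
    optimizedInterlacingH264 :: H264Settings
    optimizedInterlacingH264 =
      newH264Settings
        & h264Settings_scanTypeConversionMode ?~ H264ScanTypeConversionMode_INTERLACED_OPTIMIZE
        & h264Settings_interlaceMode ?~ H264InterlaceMode_TOP_FIELD
        & h264Settings_telecine ?~ H264Telecine_NONE  -- NONE is the default; set explicitly here for clarity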

h264Settings_flickerAdaptiveQuantization :: Lens' H264Settings (Maybe H264FlickerAdaptiveQuantization) Source #

Only use this setting when you change the default value, AUTO, for the setting H264AdaptiveQuantization. When you keep all defaults, excluding H264AdaptiveQuantization and all other adaptive quantization from your JSON job specification, MediaConvert automatically applies the best types of quantization for your video content. When you set H264AdaptiveQuantization to a value other than AUTO, the default value for H264FlickerAdaptiveQuantization is Disabled (DISABLED). Change this value to Enabled (ENABLED) to reduce I-frame pop. I-frame pop appears as a visual flicker that can arise when the encoder saves bits by copying some macroblocks many times from frame to frame, and then refreshes them at the I-frame. When you enable this setting, the encoder updates these macroblocks slightly more often to smooth out the flicker. To manually enable or disable H264FlickerAdaptiveQuantization, you must set Adaptive quantization (H264AdaptiveQuantization) to a value other than AUTO.

h264Settings_qvbrSettings :: Lens' H264Settings (Maybe H264QvbrSettings) Source #

Settings for quality-defined variable bitrate encoding with the H.264 codec. Use these settings only when you set QVBR for Rate control mode (RateControlMode).

h264Settings_softness :: Lens' H264Settings (Maybe Natural) Source #

Ignore this setting unless you need to comply with a specification that requires a specific value. If you don't have a specification requirement, we recommend that you adjust the softness of your output by using a lower value for the setting Sharpness (sharpness) or by enabling a noise reducer filter (noiseReducerFilter). The Softness (softness) setting specifies the quantization matrices that the encoder uses. Keep the default value, 0, for flat quantization. Choose the value 1 or 16 to use the default JVT softening quantization matrices from the H.264 specification. Choose a value from 17 to 128 to use planar interpolation. Increasing values from 17 to 128 result in increasing reduction of high-frequency data. The value 128 results in the softest video.

h264Settings_codecProfile :: Lens' H264Settings (Maybe H264CodecProfile) Source #

H.264 Profile. High 4:2:2 and 10-bit profiles are only available with the AVC-I License.

h264Settings_bitrate :: Lens' H264Settings (Maybe Natural) Source #

Specify the average bitrate in bits per second. Required for VBR and CBR. For MS Smooth outputs, bitrates must be unique when rounded down to the nearest multiple of 1000.

h264Settings_framerateDenominator :: Lens' H264Settings (Maybe Natural) Source #

When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

h264Settings_framerateConversionAlgorithm :: Lens' H264Settings (Maybe H264FramerateConversionAlgorithm) Source #

Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

h264Settings_codecLevel :: Lens' H264Settings (Maybe H264CodecLevel) Source #

Specify an H.264 level that is consistent with your output video settings. If you aren't sure what level to specify, choose Auto (AUTO).

h264Settings_entropyEncoding :: Lens' H264Settings (Maybe H264EntropyEncoding) Source #

Entropy encoding mode. Use CABAC (must be in Main or High profile) or CAVLC.

h264Settings_framerateControl :: Lens' H264Settings (Maybe H264FramerateControl) Source #

If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

h264Settings_adaptiveQuantization :: Lens' H264Settings (Maybe H264AdaptiveQuantization) Source #

Keep the default value, Auto (AUTO), for this setting to have MediaConvert automatically apply the best types of quantization for your video content. When you want to apply your quantization settings manually, you must set H264AdaptiveQuantization to a value other than Auto (AUTO). Use this setting to specify the strength of any adaptive quantization filters that you enable. If you don't want MediaConvert to do any adaptive quantization in this transcode, set Adaptive quantization (H264AdaptiveQuantization) to Off (OFF). Related settings: The value that you choose here applies to the following settings: H264FlickerAdaptiveQuantization, H264SpatialAdaptiveQuantization, and H264TemporalAdaptiveQuantization.

h264Settings_framerateNumerator :: Lens' H264Settings (Maybe Natural) Source #

When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

h264Settings_gopBReference :: Lens' H264Settings (Maybe H264GopBReference) Source #

If enabled, use reference B-frames for GOP structures that have B-frames > 1.

h264Settings_maxBitrate :: Lens' H264Settings (Maybe Natural) Source #

Maximum bitrate in bits/second. For example, enter five megabits per second as 5000000. Required when Rate control mode is QVBR.

h264Settings_syntax :: Lens' H264Settings (Maybe H264Syntax) Source #

Produces a bitstream compliant with SMPTE RP-2027.

h264Settings_fieldEncoding :: Lens' H264Settings (Maybe H264FieldEncoding) Source #

The video encoding method for your MPEG-4 AVC output. Keep the default value, PAFF, to have MediaConvert use PAFF encoding for interlaced outputs. Choose Force field (FORCE_FIELD) to disable PAFF encoding and create separate interlaced fields. Choose MBAFF to disable PAFF and have MediaConvert use MBAFF encoding for interlaced outputs.

h264Settings_gopClosedCadence :: Lens' H264Settings (Maybe Natural) Source #

Frequency of closed GOPs. In streaming applications, it is recommended that this be set to 1 so a decoder joining mid-stream will receive an IDR frame as quickly as possible. Setting this value to 0 will break output segmenting.

h264Settings_parDenominator :: Lens' H264Settings (Maybe Natural) Source #

Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parDenominator is 33.

h264Settings_spatialAdaptiveQuantization :: Lens' H264Settings (Maybe H264SpatialAdaptiveQuantization) Source #

Only use this setting when you change the default value, Auto (AUTO), for the setting H264AdaptiveQuantization. When you keep all defaults, excluding H264AdaptiveQuantization and all other adaptive quantization from your JSON job specification, MediaConvert automatically applies the best types of quantization for your video content. When you set H264AdaptiveQuantization to a value other than AUTO, the default value for H264SpatialAdaptiveQuantization is Enabled (ENABLED). Keep this default value to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to set H264SpatialAdaptiveQuantization to Disabled (DISABLED). Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (H264AdaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher. To manually enable or disable H264SpatialAdaptiveQuantization, you must set Adaptive quantization (H264AdaptiveQuantization) to a value other than AUTO.
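
Putting a few of the rate-control settings above together, here is a hedged sketch of a QVBR H.264 configuration: QVBR requires a maximum bitrate, and the scene change detection doc above suggests TRANSITION_DETECTION when QVBR is in use. The enum pattern synonym names (H264RateControlMode_QVBR, H264SceneChangeDetect_TRANSITION_DETECTION) are assumed, not taken from this page.

    import Amazonka.MediaConvert.Types
    import Control.Lens ((&), (?~))

    -- Sketch only: a QVBR H.264 output capped at 5 Mb/s, with transition
    -- detection for scene changes. Pattern synonym names are assumed.
    qvbrH264 :: H264Settings
    qvbrH264 =
      newH264Settings
        & h264Settings_rateControlMode ?~ H264RateControlMode_QVBR
        & h264Settings_maxBitrate ?~ 5000000
        & h264Settings_sceneChangeDetect ?~ H264SceneChangeDetect_TRANSITION_DETECTION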

H265QvbrSettings

data H265QvbrSettings Source #

Settings for quality-defined variable bitrate encoding with the H.265 codec. Use these settings only when you set QVBR for Rate control mode (RateControlMode).

See: newH265QvbrSettings smart constructor.

Constructors

H265QvbrSettings' 

Fields

  • qvbrQualityLevelFineTune :: Maybe Double

    Optional. Specify a value here to set the QVBR quality to a level that is between whole numbers. For example, if you want your QVBR quality level to be 7.33, set qvbrQualityLevel to 7 and set qvbrQualityLevelFineTune to .33. MediaConvert rounds your QVBR quality level to the nearest third of a whole number. For example, if you set qvbrQualityLevel to 7 and you set qvbrQualityLevelFineTune to .25, your actual QVBR quality level is 7.33.

  • maxAverageBitrate :: Maybe Natural

    Use this setting only when Rate control mode is QVBR and Quality tuning level is Multi-pass HQ. For Max average bitrate values suited to the complexity of your input video, the service limits the average bitrate of the video part of this output to the value that you choose. That is, the total size of the video element is less than or equal to the value you set multiplied by the number of seconds of encoded output.

  • qvbrQualityLevel :: Maybe Natural

    Use this setting only when you set Rate control mode (RateControlMode) to QVBR. Specify the target quality level for this output. MediaConvert determines the right number of bits to use for each part of the video to maintain the video quality that you specify. When you keep the default value, AUTO, MediaConvert picks a quality level for you, based on characteristics of your input video. If you prefer to specify a quality level, specify a number from 1 through 10. Use higher numbers for greater quality. Level 10 results in nearly lossless compression. The quality level for most broadcast-quality transcodes is between 6 and 9. Optionally, to specify a value between whole numbers, also provide a value for the setting qvbrQualityLevelFineTune. For example, if you want your QVBR quality level to be 7.33, set qvbrQualityLevel to 7 and set qvbrQualityLevelFineTune to .33.

Instances

Instances details
Eq H265QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QvbrSettings

Read H265QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QvbrSettings

Show H265QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QvbrSettings

Generic H265QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QvbrSettings

Associated Types

type Rep H265QvbrSettings :: Type -> Type #

NFData H265QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QvbrSettings

Methods

rnf :: H265QvbrSettings -> () #

Hashable H265QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QvbrSettings

ToJSON H265QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QvbrSettings

FromJSON H265QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QvbrSettings

type Rep H265QvbrSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265QvbrSettings

type Rep H265QvbrSettings = D1 ('MetaData "H265QvbrSettings" "Amazonka.MediaConvert.Types.H265QvbrSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "H265QvbrSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "qvbrQualityLevelFineTune") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: (S1 ('MetaSel ('Just "maxAverageBitrate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "qvbrQualityLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))

newH265QvbrSettings :: H265QvbrSettings Source #

Create a value of H265QvbrSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:qvbrQualityLevelFineTune:H265QvbrSettings', h265QvbrSettings_qvbrQualityLevelFineTune - Optional. Specify a value here to set the QVBR quality to a level that is between whole numbers. For example, if you want your QVBR quality level to be 7.33, set qvbrQualityLevel to 7 and set qvbrQualityLevelFineTune to .33. MediaConvert rounds your QVBR quality level to the nearest third of a whole number. For example, if you set qvbrQualityLevel to 7 and you set qvbrQualityLevelFineTune to .25, your actual QVBR quality level is 7.33.

$sel:maxAverageBitrate:H265QvbrSettings', h265QvbrSettings_maxAverageBitrate - Use this setting only when Rate control mode is QVBR and Quality tuning level is Multi-pass HQ. For Max average bitrate values suited to the complexity of your input video, the service limits the average bitrate of the video part of this output to the value that you choose. That is, the total size of the video element is less than or equal to the value you set multiplied by the number of seconds of encoded output.

$sel:qvbrQualityLevel:H265QvbrSettings', h265QvbrSettings_qvbrQualityLevel - Use this setting only when you set Rate control mode (RateControlMode) to QVBR. Specify the target quality level for this output. MediaConvert determines the right number of bits to use for each part of the video to maintain the video quality that you specify. When you keep the default value, AUTO, MediaConvert picks a quality level for you, based on characteristics of your input video. If you prefer to specify a quality level, specify a number from 1 through 10. Use higher numbers for greater quality. Level 10 results in nearly lossless compression. The quality level for most broadcast-quality transcodes is between 6 and 9. Optionally, to specify a value between whole numbers, also provide a value for the setting qvbrQualityLevelFineTune. For example, if you want your QVBR quality level to be 7.33, set qvbrQualityLevel to 7 and set qvbrQualityLevelFineTune to .33.

h265QvbrSettings_qvbrQualityLevelFineTune :: Lens' H265QvbrSettings (Maybe Double) Source #

Optional. Specify a value here to set the QVBR quality to a level that is between whole numbers. For example, if you want your QVBR quality level to be 7.33, set qvbrQualityLevel to 7 and set qvbrQualityLevelFineTune to .33. MediaConvert rounds your QVBR quality level to the nearest third of a whole number. For example, if you set qvbrQualityLevel to 7 and you set qvbrQualityLevelFineTune to .25, your actual QVBR quality level is 7.33.

h265QvbrSettings_maxAverageBitrate :: Lens' H265QvbrSettings (Maybe Natural) Source #

Use this setting only when Rate control mode is QVBR and Quality tuning level is Multi-pass HQ. For Max average bitrate values suited to the complexity of your input video, the service limits the average bitrate of the video part of this output to the value that you choose. That is, the total size of the video element is less than or equal to the value you set multiplied by the number of seconds of encoded output.

h265QvbrSettings_qvbrQualityLevel :: Lens' H265QvbrSettings (Maybe Natural) Source #

Use this setting only when you set Rate control mode (RateControlMode) to QVBR. Specify the target quality level for this output. MediaConvert determines the right number of bits to use for each part of the video to maintain the video quality that you specify. When you keep the default value, AUTO, MediaConvert picks a quality level for you, based on characteristics of your input video. If you prefer to specify a quality level, specify a number from 1 through 10. Use higher numbers for greater quality. Level 10 results in nearly lossless compression. The quality level for most broadcast-quality transcodes is between 6 and 9. Optionally, to specify a value between whole numbers, also provide a value for the setting qvbrQualityLevelFineTune. For example, if you want your QVBR quality level to be 7.33, set qvbrQualityLevel to 7 and set qvbrQualityLevelFineTune to .33.
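
To make the fine-tune arithmetic concrete: a target quality of 7.33 is expressed as qvbrQualityLevel 7 plus qvbrQualityLevelFineTune 0.33 (and, per the rounding rule above, a fine-tune of 0.25 would also land on 7.33). A minimal sketch using the smart constructor and lenses documented in this section, with the (&) and (?~) operators assumed to come from the lens package:

    import Amazonka.MediaConvert.Types
    import Control.Lens ((&), (?~))

    -- Target QVBR quality level 7.33 = 7 + 0.33, per the docs above.
    qvbr733 :: H265QvbrSettings
    qvbr733 =
      newH265QvbrSettings
        & h265QvbrSettings_qvbrQualityLevel ?~ 7
        & h265QvbrSettings_qvbrQualityLevelFineTune ?~ 0.33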

H265Settings

data H265Settings Source #

Settings for the H.265 codec.

See: newH265Settings smart constructor.

Constructors

H265Settings' 

Fields

  • unregisteredSeiTimecode :: Maybe H265UnregisteredSeiTimecode

    Inserts timecode for each frame as 4 bytes of an unregistered SEI message.

  • qualityTuningLevel :: Maybe H265QualityTuningLevel

    Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.

  • temporalAdaptiveQuantization :: Maybe H265TemporalAdaptiveQuantization

    Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on temporal variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas of the frame that aren't moving and uses more bits on complex objects with sharp edges that move a lot. For example, this feature improves the readability of text tickers on newscasts and scoreboards on sports matches. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen that doesn't have moving objects with sharp edges, such as sports athletes' faces, you might choose to disable this feature. Related setting: When you enable temporal quantization, adjust the strength of the filter with the setting Adaptive quantization (adaptiveQuantization).

  • sceneChangeDetect :: Maybe H265SceneChangeDetect

    Enable this setting to insert I-frames at scene changes that the service automatically detects. This improves video quality and is enabled by default. If this output uses QVBR, choose Transition detection (TRANSITION_DETECTION) for further video quality improvement. For more information about QVBR, see https://docs.aws.amazon.com/console/mediaconvert/cbr-vbr-qvbr.

  • hrdBufferInitialFillPercentage :: Maybe Natural

    Percentage of the buffer that should initially be filled (HRD buffer model).

  • tiles :: Maybe H265Tiles

    Enable use of tiles, allowing horizontal as well as vertical subdivision of the encoded pictures.

  • slowPal :: Maybe H265SlowPal

    Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.

  • temporalIds :: Maybe H265TemporalIds

    Enables temporal layer identifiers in the encoded bitstream. Up to 3 layers are supported depending on GOP structure: I- and P-frames form one layer, reference B-frames can form a second layer and non-reference b-frames can form a third layer. Decoders can optionally decode only the lower temporal layers to generate a lower frame rate output. For example, given a bitstream with temporal IDs and with b-frames = 1 (i.e. IbPbPb display order), a decoder could decode all the frames for full frame rate output or only the I and P frames (lowest temporal layer) for a half frame rate output.

  • parNumerator :: Maybe Natural

    Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parNumerator is 40.

  • gopSize :: Maybe Double

    GOP Length (keyframe interval) in frames or seconds. Must be greater than zero.

  • numberBFramesBetweenReferenceFrames :: Maybe Natural

    Specify the number of B-frames that MediaConvert puts between reference frames in this output. Valid values are whole numbers from 0 through 7. When you don't specify a value, MediaConvert defaults to 2.

  • gopSizeUnits :: Maybe H265GopSizeUnits

    Indicates if the GOP Size in H265 is specified in frames or seconds. If seconds, the system will convert the GOP Size into a frame count at run time.

  • hrdBufferSize :: Maybe Natural

    Size of buffer (HRD buffer model) in bits. For example, enter five megabits as 5000000.

  • slices :: Maybe Natural

    Number of slices per picture. Must be less than or equal to the number of macroblock rows for progressive pictures, and less than or equal to half the number of macroblock rows for interlaced pictures.

  • alternateTransferFunctionSei :: Maybe H265AlternateTransferFunctionSei

    Enables Alternate Transfer Function SEI message for outputs using Hybrid Log Gamma (HLG) Electro-Optical Transfer Function (EOTF).

  • rateControlMode :: Maybe H265RateControlMode

    Use this setting to specify whether this output has a variable bitrate (VBR), constant bitrate (CBR) or quality-defined variable bitrate (QVBR).

  • numberReferenceFrames :: Maybe Natural

    Number of reference frames to use. The encoder may use more than requested if using B-frames and/or interlaced encoding.

  • telecine :: Maybe H265Telecine

    This field applies only if the Streams > Advanced > Framerate (framerate) field is set to 29.970. This field works with the Streams > Advanced > Preprocessors > Deinterlacer field (deinterlace_mode) and the Streams > Advanced > Interlaced Mode field (interlace_mode) to identify the scan type for the output: Progressive, Interlaced, Hard Telecine or Soft Telecine. - Hard: produces 29.97i output from 23.976 input. - Soft: produces 23.976; the player converts this output to 29.97i.

  • dynamicSubGop :: Maybe H265DynamicSubGop

    Choose Adaptive to improve subjective video quality for high-motion content. This will cause the service to use fewer B-frames (which infer information based on other frames) for high-motion portions of the video and more B-frames for low-motion portions. The maximum number of B-frames is limited by the value you provide for the setting B frames between reference frames (numberBFramesBetweenReferenceFrames).

  • minIInterval :: Maybe Natural

    Enforces separation between repeated (cadence) I-frames and I-frames inserted by Scene Change Detection. If a scene change I-frame is within I-interval frames of a cadence I-frame, the GOP is shrunk and/or stretched to the scene change I-frame. GOP stretch requires enabling lookahead as well as setting I-interval. The normal cadence resumes for the next GOP. This setting is only used when Scene Change Detect is enabled. Note: Maximum GOP stretch = GOP size + Min-I-interval - 1

  • interlaceMode :: Maybe H265InterlaceMode

    Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE), to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first polarity, depending on which of the Follow options you choose.

  • parControl :: Maybe H265ParControl

    Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.

  • scanTypeConversionMode :: Maybe H265ScanTypeConversionMode

    Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).

  • flickerAdaptiveQuantization :: Maybe H265FlickerAdaptiveQuantization

    Enable this setting to have the encoder reduce I-frame pop. I-frame pop appears as a visual flicker that can arise when the encoder saves bits by copying some macroblocks many times from frame to frame, and then refreshes them at the I-frame. When you enable this setting, the encoder updates these macroblocks slightly more often to smooth out the flicker. This setting is disabled by default. Related setting: In addition to enabling this setting, you must also set adaptiveQuantization to a value other than Off (OFF).

  • qvbrSettings :: Maybe H265QvbrSettings

    Settings for quality-defined variable bitrate encoding with the H.265 codec. Use these settings only when you set QVBR for Rate control mode (RateControlMode).

  • sampleAdaptiveOffsetFilterMode :: Maybe H265SampleAdaptiveOffsetFilterMode

    Specify Sample Adaptive Offset (SAO) filter strength. Adaptive mode dynamically selects the best strength based on content.

  • codecProfile :: Maybe H265CodecProfile

    Represents the Profile and Tier, per the HEVC (H.265) specification. Selections are grouped as [Profile] / [Tier], so "Main/High" represents Main Profile with High Tier. 4:2:2 profiles are only available with the HEVC 4:2:2 License.

  • bitrate :: Maybe Natural

    Specify the average bitrate in bits per second. Required for VBR and CBR. For MS Smooth outputs, bitrates must be unique when rounded down to the nearest multiple of 1000.

  • framerateDenominator :: Maybe Natural

    When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

  • framerateConversionAlgorithm :: Maybe H265FramerateConversionAlgorithm

    Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

  • codecLevel :: Maybe H265CodecLevel

    H.265 Level.

  • framerateControl :: Maybe H265FramerateControl

    If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

  • writeMp4PackagingType :: Maybe H265WriteMp4PackagingType

    If the location of parameter set NAL units doesn't matter in your workflow, ignore this setting. Use this setting only with CMAF or DASH outputs, or with standalone file outputs in an MPEG-4 container (MP4 outputs). Choose HVC1 to mark your output as HVC1. This makes your output compliant with the following specification: ISO IECJTC1 SC29 N13798 Text ISO/IEC FDIS 14496-15 3rd Edition. For these outputs, the service stores parameter set NAL units in the sample headers but not in the samples directly. For MP4 outputs, when you choose HVC1, your output video might not work properly with some downstream systems and video players. The service defaults to marking your output as HEV1. For these outputs, the service writes parameter set NAL units directly into the samples.

  • adaptiveQuantization :: Maybe H265AdaptiveQuantization

    Specify the strength of any adaptive quantization filters that you enable. The value that you choose here applies to the following settings: Flicker adaptive quantization (flickerAdaptiveQuantization), Spatial adaptive quantization (spatialAdaptiveQuantization), and Temporal adaptive quantization (temporalAdaptiveQuantization).

  • framerateNumerator :: Maybe Natural

    When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

  • gopBReference :: Maybe H265GopBReference

    If enabled, use reference B-frames for GOP structures that have B-frames > 1.

  • maxBitrate :: Maybe Natural

    Maximum bitrate in bits/second. For example, enter five megabits per second as 5000000. Required when Rate control mode is QVBR.

  • gopClosedCadence :: Maybe Natural

    Frequency of closed GOPs. In streaming applications, it is recommended that this be set to 1 so a decoder joining mid-stream will receive an IDR frame as quickly as possible. Setting this value to 0 will break output segmenting.

  • parDenominator :: Maybe Natural

    Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parDenominator is 33.

  • spatialAdaptiveQuantization :: Maybe H265SpatialAdaptiveQuantization

    Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (adaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.

Instances

Instances details
Eq H265Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Settings

Read H265Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Settings

Show H265Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Settings

Generic H265Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Settings

Associated Types

type Rep H265Settings :: Type -> Type #

NFData H265Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Settings

Methods

rnf :: H265Settings -> () #

Hashable H265Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Settings

ToJSON H265Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Settings

FromJSON H265Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Settings

type Rep H265Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.H265Settings

type Rep H265Settings = D1 ('MetaData "H265Settings" "Amazonka.MediaConvert.Types.H265Settings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "H265Settings'" 'PrefixI 'True) (((((S1 ('MetaSel ('Just "unregisteredSeiTimecode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265UnregisteredSeiTimecode)) :*: S1 ('MetaSel ('Just "qualityTuningLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265QualityTuningLevel))) :*: (S1 ('MetaSel ('Just "temporalAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265TemporalAdaptiveQuantization)) :*: (S1 ('MetaSel ('Just "sceneChangeDetect") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265SceneChangeDetect)) :*: S1 ('MetaSel ('Just "hrdBufferInitialFillPercentage") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))) :*: ((S1 ('MetaSel ('Just "tiles") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265Tiles)) :*: S1 ('MetaSel ('Just "slowPal") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265SlowPal))) :*: (S1 ('MetaSel ('Just "temporalIds") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265TemporalIds)) :*: (S1 ('MetaSel ('Just "parNumerator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "gopSize") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)))))) :*: (((S1 ('MetaSel ('Just "numberBFramesBetweenReferenceFrames") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "gopSizeUnits") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265GopSizeUnits))) :*: (S1 ('MetaSel ('Just "hrdBufferSize") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "slices") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "alternateTransferFunctionSei") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265AlternateTransferFunctionSei))))) :*: ((S1 ('MetaSel ('Just "rateControlMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265RateControlMode)) :*: S1 ('MetaSel ('Just "numberReferenceFrames") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "telecine") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265Telecine)) :*: (S1 ('MetaSel ('Just "dynamicSubGop") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265DynamicSubGop)) :*: S1 ('MetaSel ('Just "minIInterval") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))))) :*: ((((S1 ('MetaSel ('Just "interlaceMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265InterlaceMode)) :*: S1 ('MetaSel ('Just "parControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265ParControl))) :*: (S1 ('MetaSel ('Just "scanTypeConversionMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265ScanTypeConversionMode)) :*: (S1 ('MetaSel ('Just "flickerAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265FlickerAdaptiveQuantization)) :*: S1 ('MetaSel ('Just "qvbrSettings") 'NoSourceUnpackedness 'NoSourceStrictness 
'DecidedStrict) (Rec0 (Maybe H265QvbrSettings))))) :*: ((S1 ('MetaSel ('Just "sampleAdaptiveOffsetFilterMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265SampleAdaptiveOffsetFilterMode)) :*: S1 ('MetaSel ('Just "codecProfile") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265CodecProfile))) :*: (S1 ('MetaSel ('Just "bitrate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "framerateDenominator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "framerateConversionAlgorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265FramerateConversionAlgorithm)))))) :*: (((S1 ('MetaSel ('Just "codecLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265CodecLevel)) :*: S1 ('MetaSel ('Just "framerateControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265FramerateControl))) :*: (S1 ('MetaSel ('Just "writeMp4PackagingType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265WriteMp4PackagingType)) :*: (S1 ('MetaSel ('Just "adaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265AdaptiveQuantization)) :*: S1 ('MetaSel ('Just "framerateNumerator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))) :*: ((S1 ('MetaSel ('Just "gopBReference") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265GopBReference)) :*: S1 ('MetaSel ('Just "maxBitrate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "gopClosedCadence") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "parDenominator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "spatialAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265SpatialAdaptiveQuantization)))))))))

newH265Settings :: H265Settings Source #

Create a value of H265Settings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.
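
For orientation, the sketch below shows one way to build an H265Settings value with lens-style updates, using only the smart constructor and lenses documented on this page; all numeric values are illustrative, not recommendations.

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- A minimal sketch: start from the empty smart constructor and override a
-- few optional numeric fields. The values here are illustrative only.
exampleH265 :: H265Settings
exampleH265 =
  newH265Settings
    & h265Settings_bitrate ?~ 5000000     -- average bitrate, in bits per second
    & h265Settings_maxBitrate ?~ 8000000  -- peak bitrate, in bits per second
    & h265Settings_gopSize ?~ 90          -- GOP length, in frames or seconds (see gopSizeUnits)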

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:unregisteredSeiTimecode:H265Settings', h265Settings_unregisteredSeiTimecode - Inserts timecode for each frame as 4 bytes of an unregistered SEI message.

$sel:qualityTuningLevel:H265Settings', h265Settings_qualityTuningLevel - Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.

$sel:temporalAdaptiveQuantization:H265Settings', h265Settings_temporalAdaptiveQuantization - Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on temporal variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas of the frame that aren't moving and uses more bits on complex objects with sharp edges that move a lot. For example, this feature improves the readability of text tickers on newscasts and scoreboards on sports matches. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen that doesn't have moving objects with sharp edges, such as sports athletes' faces, you might choose to disable this feature. Related setting: When you enable temporal quantization, adjust the strength of the filter with the setting Adaptive quantization (adaptiveQuantization).

$sel:sceneChangeDetect:H265Settings', h265Settings_sceneChangeDetect - Enable this setting to insert I-frames at scene changes that the service automatically detects. This improves video quality and is enabled by default. If this output uses QVBR, choose Transition detection (TRANSITION_DETECTION) for further video quality improvement. For more information about QVBR, see https://docs.aws.amazon.com/console/mediaconvert/cbr-vbr-qvbr.

$sel:hrdBufferInitialFillPercentage:H265Settings', h265Settings_hrdBufferInitialFillPercentage - Percentage of the buffer that should initially be filled (HRD buffer model).

$sel:tiles:H265Settings', h265Settings_tiles - Enable use of tiles, allowing horizontal as well as vertical subdivision of the encoded pictures.

$sel:slowPal:H265Settings', h265Settings_slowPal - Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.

$sel:temporalIds:H265Settings', h265Settings_temporalIds - Enables temporal layer identifiers in the encoded bitstream. Up to 3 layers are supported depending on GOP structure: I- and P-frames form one layer, reference B-frames can form a second layer and non-reference b-frames can form a third layer. Decoders can optionally decode only the lower temporal layers to generate a lower frame rate output. For example, given a bitstream with temporal IDs and with b-frames = 1 (i.e. IbPbPb display order), a decoder could decode all the frames for full frame rate output or only the I and P frames (lowest temporal layer) for a half frame rate output.

$sel:parNumerator:H265Settings', h265Settings_parNumerator - Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parNumerator is 40.

$sel:gopSize:H265Settings', h265Settings_gopSize - GOP Length (keyframe interval) in frames or seconds. Must be greater than zero.

$sel:numberBFramesBetweenReferenceFrames:H265Settings', h265Settings_numberBFramesBetweenReferenceFrames - Specify the number of B-frames that MediaConvert puts between reference frames in this output. Valid values are whole numbers from 0 through 7. When you don't specify a value, MediaConvert defaults to 2.

$sel:gopSizeUnits:H265Settings', h265Settings_gopSizeUnits - Indicates if the GOP Size in H265 is specified in frames or seconds. If seconds, the system will convert the GOP Size into a frame count at run time.

$sel:hrdBufferSize:H265Settings', h265Settings_hrdBufferSize - Size of buffer (HRD buffer model) in bits. For example, enter five megabits as 5000000.

$sel:slices:H265Settings', h265Settings_slices - Number of slices per picture. Must be less than or equal to the number of macroblock rows for progressive pictures, and less than or equal to half the number of macroblock rows for interlaced pictures.

$sel:alternateTransferFunctionSei:H265Settings', h265Settings_alternateTransferFunctionSei - Enables Alternate Transfer Function SEI message for outputs using Hybrid Log Gamma (HLG) Electro-Optical Transfer Function (EOTF).

$sel:rateControlMode:H265Settings', h265Settings_rateControlMode - Use this setting to specify whether this output has a variable bitrate (VBR), constant bitrate (CBR) or quality-defined variable bitrate (QVBR).

$sel:numberReferenceFrames:H265Settings', h265Settings_numberReferenceFrames - Number of reference frames to use. The encoder may use more than requested if using B-frames and/or interlaced encoding.

$sel:telecine:H265Settings', h265Settings_telecine - This field applies only if the Streams > Advanced > Framerate (framerate) field is set to 29.970. This field works with the Streams > Advanced > Preprocessors > Deinterlacer field (deinterlace_mode) and the Streams > Advanced > Interlaced Mode field (interlace_mode) to identify the scan type for the output: Progressive, Interlaced, Hard Telecine or Soft Telecine. - Hard: produces 29.97i output from 23.976 input. - Soft: produces 23.976; the player converts this output to 29.97i.

$sel:dynamicSubGop:H265Settings', h265Settings_dynamicSubGop - Choose Adaptive to improve subjective video quality for high-motion content. This will cause the service to use fewer B-frames (which infer information based on other frames) for high-motion portions of the video and more B-frames for low-motion portions. The maximum number of B-frames is limited by the value you provide for the setting B frames between reference frames (numberBFramesBetweenReferenceFrames).

$sel:minIInterval:H265Settings', h265Settings_minIInterval - Enforces separation between repeated (cadence) I-frames and I-frames inserted by Scene Change Detection. If a scene change I-frame is within I-interval frames of a cadence I-frame, the GOP is shrunk and/or stretched to the scene change I-frame. GOP stretch requires enabling lookahead as well as setting I-interval. The normal cadence resumes for the next GOP. This setting is only used when Scene Change Detect is enabled. Note: Maximum GOP stretch = GOP size + Min-I-interval - 1

$sel:interlaceMode:H265Settings', h265Settings_interlaceMode - Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE), to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first polarity, depending on which of the Follow options you choose.

$sel:parControl:H265Settings', h265Settings_parControl - Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.

$sel:scanTypeConversionMode:H265Settings', h265Settings_scanTypeConversionMode - Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).

$sel:flickerAdaptiveQuantization:H265Settings', h265Settings_flickerAdaptiveQuantization - Enable this setting to have the encoder reduce I-frame pop. I-frame pop appears as a visual flicker that can arise when the encoder saves bits by copying some macroblocks many times from frame to frame, and then refreshes them at the I-frame. When you enable this setting, the encoder updates these macroblocks slightly more often to smooth out the flicker. This setting is disabled by default. Related setting: In addition to enabling this setting, you must also set adaptiveQuantization to a value other than Off (OFF).

$sel:qvbrSettings:H265Settings', h265Settings_qvbrSettings - Settings for quality-defined variable bitrate encoding with the H.265 codec. Use these settings only when you set QVBR for Rate control mode (RateControlMode).

$sel:sampleAdaptiveOffsetFilterMode:H265Settings', h265Settings_sampleAdaptiveOffsetFilterMode - Specify the Sample Adaptive Offset (SAO) filter strength. Adaptive mode dynamically selects the best strength based on the content.

$sel:codecProfile:H265Settings', h265Settings_codecProfile - Represents the Profile and Tier, per the HEVC (H.265) specification. Selections are grouped as [Profile] / [Tier], so "Main/High" represents Main Profile with High Tier. 4:2:2 profiles are only available with the HEVC 4:2:2 License.

$sel:bitrate:H265Settings', h265Settings_bitrate - Specify the average bitrate in bits per second. Required for VBR and CBR. For MS Smooth outputs, bitrates must be unique when rounded down to the nearest multiple of 1000.

$sel:framerateDenominator:H265Settings', h265Settings_framerateDenominator - When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

$sel:framerateConversionAlgorithm:H265Settings', h265Settings_framerateConversionAlgorithm - Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

$sel:codecLevel:H265Settings', h265Settings_codecLevel - H.265 Level.

$sel:framerateControl:H265Settings', h265Settings_framerateControl - If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

$sel:writeMp4PackagingType:H265Settings', h265Settings_writeMp4PackagingType - If the location of parameter set NAL units doesn't matter in your workflow, ignore this setting. Use this setting only with CMAF or DASH outputs, or with standalone file outputs in an MPEG-4 container (MP4 outputs). Choose HVC1 to mark your output as HVC1. This makes your output compliant with the following specification: ISO IECJTC1 SC29 N13798 Text ISO/IEC FDIS 14496-15 3rd Edition. For these outputs, the service stores parameter set NAL units in the sample headers but not in the samples directly. For MP4 outputs, when you choose HVC1, your output video might not work properly with some downstream systems and video players. The service defaults to marking your output as HEV1. For these outputs, the service writes parameter set NAL units directly into the samples.

$sel:adaptiveQuantization:H265Settings', h265Settings_adaptiveQuantization - Specify the strength of any adaptive quantization filters that you enable. The value that you choose here applies to the following settings: Flicker adaptive quantization (flickerAdaptiveQuantization), Spatial adaptive quantization (spatialAdaptiveQuantization), and Temporal adaptive quantization (temporalAdaptiveQuantization).

$sel:framerateNumerator:H265Settings', h265Settings_framerateNumerator - When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

$sel:gopBReference:H265Settings', h265Settings_gopBReference - If enabled, use reference B-frames for GOP structures that have B-frames > 1.

$sel:maxBitrate:H265Settings', h265Settings_maxBitrate - Maximum bitrate in bits/second. For example, enter five megabits per second as 5000000. Required when Rate control mode is QVBR.

$sel:gopClosedCadence:H265Settings', h265Settings_gopClosedCadence - Frequency of closed GOPs. In streaming applications, it is recommended that this be set to 1 so a decoder joining mid-stream will receive an IDR frame as quickly as possible. Setting this value to 0 will break output segmenting.

$sel:parDenominator:H265Settings', h265Settings_parDenominator - Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parDenominator is 33.

$sel:spatialAdaptiveQuantization:H265Settings', h265Settings_spatialAdaptiveQuantization - Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (adaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.

h265Settings_unregisteredSeiTimecode :: Lens' H265Settings (Maybe H265UnregisteredSeiTimecode) Source #

Inserts timecode for each frame as 4 bytes of an unregistered SEI message.

h265Settings_qualityTuningLevel :: Lens' H265Settings (Maybe H265QualityTuningLevel) Source #

Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.

h265Settings_temporalAdaptiveQuantization :: Lens' H265Settings (Maybe H265TemporalAdaptiveQuantization) Source #

Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on temporal variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas of the frame that aren't moving and uses more bits on complex objects with sharp edges that move a lot. For example, this feature improves the readability of text tickers on newscasts and scoreboards on sports matches. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen that doesn't have moving objects with sharp edges, such as sports athletes' faces, you might choose to disable this feature. Related setting: When you enable temporal quantization, adjust the strength of the filter with the setting Adaptive quantization (adaptiveQuantization).

h265Settings_sceneChangeDetect :: Lens' H265Settings (Maybe H265SceneChangeDetect) Source #

Enable this setting to insert I-frames at scene changes that the service automatically detects. This improves video quality and is enabled by default. If this output uses QVBR, choose Transition detection (TRANSITION_DETECTION) for further video quality improvement. For more information about QVBR, see https://docs.aws.amazon.com/console/mediaconvert/cbr-vbr-qvbr.

h265Settings_hrdBufferInitialFillPercentage :: Lens' H265Settings (Maybe Natural) Source #

Percentage of the buffer that should initially be filled (HRD buffer model).

h265Settings_tiles :: Lens' H265Settings (Maybe H265Tiles) Source #

Enable use of tiles, allowing horizontal as well as vertical subdivision of the encoded pictures.

h265Settings_slowPal :: Lens' H265Settings (Maybe H265SlowPal) Source #

Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.
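
A hedged sketch of the required combination described above. The enum pattern-synonym names used here (H265SlowPal_ENABLED, H265FramerateControl_SPECIFIED) are assumptions based on the library's usual naming convention for these newtype enums; only the lenses are documented on this page.

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- Assumption: the two pattern synonyms below follow the library's
-- EnumType_VALUE naming convention; they are not taken from this page.
slowPal25 :: H265Settings -> H265Settings
slowPal25 s =
  s & h265Settings_slowPal ?~ H265SlowPal_ENABLED
    & h265Settings_framerateControl ?~ H265FramerateControl_SPECIFIED
    & h265Settings_framerateNumerator ?~ 25
    & h265Settings_framerateDenominator ?~ 1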

h265Settings_temporalIds :: Lens' H265Settings (Maybe H265TemporalIds) Source #

Enables temporal layer identifiers in the encoded bitstream. Up to 3 layers are supported depending on GOP structure: I- and P-frames form one layer, reference B-frames can form a second layer and non-reference b-frames can form a third layer. Decoders can optionally decode only the lower temporal layers to generate a lower frame rate output. For example, given a bitstream with temporal IDs and with b-frames = 1 (i.e. IbPbPb display order), a decoder could decode all the frames for full frame rate output or only the I and P frames (lowest temporal layer) for a half frame rate output.

h265Settings_parNumerator :: Lens' H265Settings (Maybe Natural) Source #

Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parNumerator is 40.
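
Sticking with the D1/DV NTSC widescreen example above, a minimal sketch that sets the 40:33 ratio through the two PAR lenses; parControl would additionally need to be set to SPECIFIED, which is omitted here because that enum value's pattern-synonym name is not shown on this page.

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- 40:33 pixel aspect ratio for D1/DV NTSC widescreen (illustrative).
-- parControl must additionally be set to SPECIFIED (not shown here).
widescreenPar :: H265Settings -> H265Settings
widescreenPar s =
  s & h265Settings_parNumerator ?~ 40
    & h265Settings_parDenominator ?~ 33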

h265Settings_gopSize :: Lens' H265Settings (Maybe Double) Source #

GOP Length (keyframe interval) in frames or seconds. Must be greater than zero.

h265Settings_numberBFramesBetweenReferenceFrames :: Lens' H265Settings (Maybe Natural) Source #

Specify the number of B-frames that MediaConvert puts between reference frames in this output. Valid values are whole numbers from 0 through 7. When you don't specify a value, MediaConvert defaults to 2.

h265Settings_gopSizeUnits :: Lens' H265Settings (Maybe H265GopSizeUnits) Source #

Indicates if the GOP Size in H265 is specified in frames or seconds. If seconds, the system will convert the GOP Size into a frame count at run time.

h265Settings_hrdBufferSize :: Lens' H265Settings (Maybe Natural) Source #

Size of buffer (HRD buffer model) in bits. For example, enter five megabits as 5000000.
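
A small sketch combining this field with the related initial-fill percentage, reusing the five-megabit example from the description; both values are illustrative.

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- A 5 Mbit HRD buffer that starts 90 percent full (illustrative values).
exampleHrd :: H265Settings -> H265Settings
exampleHrd s =
  s & h265Settings_hrdBufferSize ?~ 5000000
    & h265Settings_hrdBufferInitialFillPercentage ?~ 90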

h265Settings_slices :: Lens' H265Settings (Maybe Natural) Source #

Number of slices per picture. Must be less than or equal to the number of macroblock rows for progressive pictures, and less than or equal to half the number of macroblock rows for interlaced pictures.

h265Settings_alternateTransferFunctionSei :: Lens' H265Settings (Maybe H265AlternateTransferFunctionSei) Source #

Enables Alternate Transfer Function SEI message for outputs using Hybrid Log Gamma (HLG) Electro-Optical Transfer Function (EOTF).

h265Settings_rateControlMode :: Lens' H265Settings (Maybe H265RateControlMode) Source #

Use this setting to specify whether this output has a variable bitrate (VBR), constant bitrate (CBR) or quality-defined variable bitrate (QVBR).

h265Settings_numberReferenceFrames :: Lens' H265Settings (Maybe Natural) Source #

Number of reference frames to use. The encoder may use more than requested if using B-frames and/or interlaced encoding.

h265Settings_telecine :: Lens' H265Settings (Maybe H265Telecine) Source #

This field applies only if the Streams > Advanced > Framerate (framerate) field is set to 29.970. This field works with the Streams > Advanced > Preprocessors > Deinterlacer field (deinterlace_mode) and the Streams > Advanced > Interlaced Mode field (interlace_mode) to identify the scan type for the output: Progressive, Interlaced, Hard Telecine or Soft Telecine. - Hard: produces 29.97i output from 23.976 input. - Soft: produces 23.976; the player converts this output to 29.97i.

h265Settings_dynamicSubGop :: Lens' H265Settings (Maybe H265DynamicSubGop) Source #

Choose Adaptive to improve subjective video quality for high-motion content. This will cause the service to use fewer B-frames (which infer information based on other frames) for high-motion portions of the video and more B-frames for low-motion portions. The maximum number of B-frames is limited by the value you provide for the setting B frames between reference frames (numberBFramesBetweenReferenceFrames).

h265Settings_minIInterval :: Lens' H265Settings (Maybe Natural) Source #

Enforces separation between repeated (cadence) I-frames and I-frames inserted by Scene Change Detection. If a scene change I-frame is within I-interval frames of a cadence I-frame, the GOP is shrunk and/or stretched to the scene change I-frame. GOP stretch requires enabling lookahead as well as setting I-interval. The normal cadence resumes for the next GOP. This setting is only used when Scene Change Detect is enabled. Note: Maximum GOP stretch = GOP size + Min-I-interval - 1

h265Settings_interlaceMode :: Lens' H265Settings (Maybe H265InterlaceMode) Source #

Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE), to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first polarity, depending on which of the Follow options you choose.

h265Settings_parControl :: Lens' H265Settings (Maybe H265ParControl) Source #

Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.

h265Settings_scanTypeConversionMode :: Lens' H265Settings (Maybe H265ScanTypeConversionMode) Source #

Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).

h265Settings_flickerAdaptiveQuantization :: Lens' H265Settings (Maybe H265FlickerAdaptiveQuantization) Source #

Enable this setting to have the encoder reduce I-frame pop. I-frame pop appears as a visual flicker that can arise when the encoder saves bits by copying some macroblocks many times from frame to frame, and then refreshes them at the I-frame. When you enable this setting, the encoder updates these macroblocks slightly more often to smooth out the flicker. This setting is disabled by default. Related setting: In addition to enabling this setting, you must also set adaptiveQuantization to a value other than Off (OFF).

h265Settings_qvbrSettings :: Lens' H265Settings (Maybe H265QvbrSettings) Source #

Settings for quality-defined variable bitrate encoding with the H.265 codec. Use these settings only when you set QVBR for Rate control mode (RateControlMode).

h265Settings_sampleAdaptiveOffsetFilterMode :: Lens' H265Settings (Maybe H265SampleAdaptiveOffsetFilterMode) Source #

Specify the Sample Adaptive Offset (SAO) filter strength. Adaptive mode dynamically selects the best strength based on the content.

h265Settings_codecProfile :: Lens' H265Settings (Maybe H265CodecProfile) Source #

Represents the Profile and Tier, per the HEVC (H.265) specification. Selections are grouped as [Profile] / [Tier], so "Main/High" represents Main Profile with High Tier. 4:2:2 profiles are only available with the HEVC 4:2:2 License.

h265Settings_bitrate :: Lens' H265Settings (Maybe Natural) Source #

Specify the average bitrate in bits per second. Required for VBR and CBR. For MS Smooth outputs, bitrates must be unique when rounded down to the nearest multiple of 1000.

h265Settings_framerateDenominator :: Lens' H265Settings (Maybe Natural) Source #

When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.
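
Following the 23.976 fps example above, a sketch that expresses the frame rate as the fraction 24000/1001. For the service to use these values, framerateControl must also be set to SPECIFIED; that enum value's pattern-synonym name is not shown on this page, so it is left out of the sketch.

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- 23.976 fps expressed as 24000/1001 (illustrative).
filmRate :: H265Settings -> H265Settings
filmRate s =
  s & h265Settings_framerateNumerator ?~ 24000
    & h265Settings_framerateDenominator ?~ 1001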

h265Settings_framerateConversionAlgorithm :: Lens' H265Settings (Maybe H265FramerateConversionAlgorithm) Source #

Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

h265Settings_framerateControl :: Lens' H265Settings (Maybe H265FramerateControl) Source #

If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

h265Settings_writeMp4PackagingType :: Lens' H265Settings (Maybe H265WriteMp4PackagingType) Source #

If the location of parameter set NAL units doesn't matter in your workflow, ignore this setting. Use this setting only with CMAF or DASH outputs, or with standalone file outputs in an MPEG-4 container (MP4 outputs). Choose HVC1 to mark your output as HVC1. This makes your output compliant with the following specification: ISO IECJTC1 SC29 N13798 Text ISO/IEC FDIS 14496-15 3rd Edition. For these outputs, the service stores parameter set NAL units in the sample headers but not in the samples directly. For MP4 outputs, when you choose HVC1, your output video might not work properly with some downstream systems and video players. The service defaults to marking your output as HEV1. For these outputs, the service writes parameter set NAL units directly into the samples.

h265Settings_adaptiveQuantization :: Lens' H265Settings (Maybe H265AdaptiveQuantization) Source #

Specify the strength of any adaptive quantization filters that you enable. The value that you choose here applies to the following settings: Flicker adaptive quantization (flickerAdaptiveQuantization), Spatial adaptive quantization (spatialAdaptiveQuantization), and Temporal adaptive quantization (temporalAdaptiveQuantization).

h265Settings_framerateNumerator :: Lens' H265Settings (Maybe Natural) Source #

When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

h265Settings_gopBReference :: Lens' H265Settings (Maybe H265GopBReference) Source #

If enabled, use reference B-frames for GOP structures that have B-frames > 1.

h265Settings_maxBitrate :: Lens' H265Settings (Maybe Natural) Source #

Maximum bitrate in bits/second. For example, enter five megabits per second as 5000000. Required when Rate control mode is QVBR.

h265Settings_gopClosedCadence :: Lens' H265Settings (Maybe Natural) Source #

Frequency of closed GOPs. In streaming applications, it is recommended that this be set to 1 so a decoder joining mid-stream will receive an IDR frame as quickly as possible. Setting this value to 0 will break output segmenting.

h265Settings_parDenominator :: Lens' H265Settings (Maybe Natural) Source #

Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parDenominator is 33.

h265Settings_spatialAdaptiveQuantization :: Lens' H265Settings (Maybe H265SpatialAdaptiveQuantization) Source #

Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (adaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.

Hdr10Metadata

data Hdr10Metadata Source #

Use these settings to specify static color calibration metadata, as defined by SMPTE ST 2086. These values don't affect the pixel values that are encoded in the video stream. They are intended to help the downstream video player display content in a way that reflects the intentions of the content creator.

See: newHdr10Metadata smart constructor.

Constructors

Hdr10Metadata' 

Fields

  • redPrimaryX :: Maybe Natural

    HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

  • bluePrimaryX :: Maybe Natural

    HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

  • maxFrameAverageLightLevel :: Maybe Natural

    Maximum average light level of any frame in the coded video sequence, in units of candelas per square meter. This setting doesn't have a default value; you must specify a value that is suitable for the content.

  • whitePointY :: Maybe Natural

    HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

  • maxContentLightLevel :: Maybe Natural

    Maximum light level among all samples in the coded video sequence, in units of candelas per square meter. This setting doesn't have a default value; you must specify a value that is suitable for the content.

  • whitePointX :: Maybe Natural

    HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

  • bluePrimaryY :: Maybe Natural

    HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

  • greenPrimaryY :: Maybe Natural

    HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

  • greenPrimaryX :: Maybe Natural

    HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

  • minLuminance :: Maybe Natural

    Nominal minimum mastering display luminance in units of 0.0001 candelas per square meter.

  • redPrimaryY :: Maybe Natural

    HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

  • maxLuminance :: Maybe Natural

    Nominal maximum mastering display luminance in units of 0.0001 candelas per square meter.

Instances

Instances details
Eq Hdr10Metadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Hdr10Metadata

Read Hdr10Metadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Hdr10Metadata

Show Hdr10Metadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Hdr10Metadata

Generic Hdr10Metadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Hdr10Metadata

Associated Types

type Rep Hdr10Metadata :: Type -> Type #

NFData Hdr10Metadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Hdr10Metadata

Methods

rnf :: Hdr10Metadata -> () #

Hashable Hdr10Metadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Hdr10Metadata

ToJSON Hdr10Metadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Hdr10Metadata

FromJSON Hdr10Metadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Hdr10Metadata

type Rep Hdr10Metadata Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Hdr10Metadata

type Rep Hdr10Metadata = D1 ('MetaData "Hdr10Metadata" "Amazonka.MediaConvert.Types.Hdr10Metadata" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "Hdr10Metadata'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "redPrimaryX") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "bluePrimaryX") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "maxFrameAverageLightLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))) :*: (S1 ('MetaSel ('Just "whitePointY") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "maxContentLightLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "whitePointX") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))) :*: ((S1 ('MetaSel ('Just "bluePrimaryY") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "greenPrimaryY") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "greenPrimaryX") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))) :*: (S1 ('MetaSel ('Just "minLuminance") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "redPrimaryY") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "maxLuminance") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))))

newHdr10Metadata :: Hdr10Metadata Source #

Create a value of Hdr10Metadata with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.
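
A minimal construction sketch follows. The helper and every chromaticity and luminance number below (Rec. 2020 primaries, a D65 white point, a 1000-nit peak) are illustrative assumptions chosen to show the 0.00002-increment and 0.0001 cd/m^2 encodings; as the field documentation notes, real values must come from a color grader.

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types
import Numeric.Natural (Natural)

-- Convert a CIE1931 chromaticity coordinate to the integer encoding these
-- fields use: one increment = 0.00002, range 0 to 50,000.
toCie :: Double -> Natural
toCie x = round (x / 0.00002)

-- Illustrative Rec. 2020 primaries with a D65 white point; not production data.
exampleHdr10 :: Hdr10Metadata
exampleHdr10 =
  newHdr10Metadata
    & hdr10Metadata_redPrimaryX   ?~ toCie 0.708
    & hdr10Metadata_redPrimaryY   ?~ toCie 0.292
    & hdr10Metadata_greenPrimaryX ?~ toCie 0.170
    & hdr10Metadata_greenPrimaryY ?~ toCie 0.797
    & hdr10Metadata_bluePrimaryX  ?~ toCie 0.131
    & hdr10Metadata_bluePrimaryY  ?~ toCie 0.046
    & hdr10Metadata_whitePointX   ?~ toCie 0.3127
    & hdr10Metadata_whitePointY   ?~ toCie 0.3290
    & hdr10Metadata_maxLuminance  ?~ 10000000  -- 1000 cd/m^2, in 0.0001 cd/m^2 units
    & hdr10Metadata_minLuminance  ?~ 50        -- 0.005 cd/m^2, in 0.0001 cd/m^2 units
    & hdr10Metadata_maxContentLightLevel      ?~ 1000
    & hdr10Metadata_maxFrameAverageLightLevel ?~ 400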

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:redPrimaryX:Hdr10Metadata', hdr10Metadata_redPrimaryX - HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

$sel:bluePrimaryX:Hdr10Metadata', hdr10Metadata_bluePrimaryX - HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

$sel:maxFrameAverageLightLevel:Hdr10Metadata', hdr10Metadata_maxFrameAverageLightLevel - Maximum average light level of any frame in the coded video sequence, in units of candelas per square meter. This setting doesn't have a default value; you must specify a value that is suitable for the content.

$sel:whitePointY:Hdr10Metadata', hdr10Metadata_whitePointY - HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

$sel:maxContentLightLevel:Hdr10Metadata', hdr10Metadata_maxContentLightLevel - Maximum light level among all samples in the coded video sequence, in units of candelas per square meter. This setting doesn't have a default value; you must specify a value that is suitable for the content.

$sel:whitePointX:Hdr10Metadata', hdr10Metadata_whitePointX - HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

$sel:bluePrimaryY:Hdr10Metadata', hdr10Metadata_bluePrimaryY - HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

$sel:greenPrimaryY:Hdr10Metadata', hdr10Metadata_greenPrimaryY - HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

$sel:greenPrimaryX:Hdr10Metadata', hdr10Metadata_greenPrimaryX - HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

$sel:minLuminance:Hdr10Metadata', hdr10Metadata_minLuminance - Nominal minimum mastering display luminance in units of 0.0001 candelas per square meter.

$sel:redPrimaryY:Hdr10Metadata', hdr10Metadata_redPrimaryY - HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

$sel:maxLuminance:Hdr10Metadata', hdr10Metadata_maxLuminance - Nominal maximum mastering display luminance in units of 0.0001 candelas per square meter.

hdr10Metadata_redPrimaryX :: Lens' Hdr10Metadata (Maybe Natural) Source #

HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

hdr10Metadata_bluePrimaryX :: Lens' Hdr10Metadata (Maybe Natural) Source #

HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

hdr10Metadata_maxFrameAverageLightLevel :: Lens' Hdr10Metadata (Maybe Natural) Source #

Maximum average light level of any frame in the coded video sequence, in units of candelas per square meter. This setting doesn't have a default value; you must specify a value that is suitable for the content.

hdr10Metadata_whitePointY :: Lens' Hdr10Metadata (Maybe Natural) Source #

HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

hdr10Metadata_maxContentLightLevel :: Lens' Hdr10Metadata (Maybe Natural) Source #

Maximum light level among all samples in the coded video sequence, in units of candelas per square meter. This setting doesn't have a default value; you must specify a value that is suitable for the content.

hdr10Metadata_whitePointX :: Lens' Hdr10Metadata (Maybe Natural) Source #

HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

hdr10Metadata_bluePrimaryY :: Lens' Hdr10Metadata (Maybe Natural) Source #

HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

hdr10Metadata_greenPrimaryY :: Lens' Hdr10Metadata (Maybe Natural) Source #

HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

hdr10Metadata_greenPrimaryX :: Lens' Hdr10Metadata (Maybe Natural) Source #

HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

hdr10Metadata_minLuminance :: Lens' Hdr10Metadata (Maybe Natural) Source #

Nominal minimum mastering display luminance in units of 0.0001 candelas per square meter.

hdr10Metadata_redPrimaryY :: Lens' Hdr10Metadata (Maybe Natural) Source #

HDR Master Display Information must be provided by a color grader, using color grading tools. Range is 0 to 50,000, each increment represents 0.00002 in CIE1931 color coordinate. Note that this setting is not for color correction.

hdr10Metadata_maxLuminance :: Lens' Hdr10Metadata (Maybe Natural) Source #

Nominal maximum mastering display luminance in units of 0.0001 candelas per square meter.

Hdr10Plus

data Hdr10Plus Source #

Setting for HDR10+ metadata insertion

See: newHdr10Plus smart constructor.

Constructors

Hdr10Plus' 

Fields

  • masteringMonitorNits :: Maybe Natural

    Specify the HDR10+ mastering display normalized peak luminance, in nits. This is the normalized actual peak luminance of the mastering display, as defined by ST 2094-40.

  • targetMonitorNits :: Maybe Natural

    Specify the HDR10+ target display nominal peak luminance, in nits. This is the nominal maximum luminance of the target display as defined by ST 2094-40.

Instances

Instances details
Eq Hdr10Plus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Hdr10Plus

Read Hdr10Plus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Hdr10Plus

Show Hdr10Plus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Hdr10Plus

Generic Hdr10Plus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Hdr10Plus

Associated Types

type Rep Hdr10Plus :: Type -> Type #

NFData Hdr10Plus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Hdr10Plus

Methods

rnf :: Hdr10Plus -> () #

Hashable Hdr10Plus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Hdr10Plus

ToJSON Hdr10Plus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Hdr10Plus

FromJSON Hdr10Plus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Hdr10Plus

type Rep Hdr10Plus Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Hdr10Plus

type Rep Hdr10Plus = D1 ('MetaData "Hdr10Plus" "Amazonka.MediaConvert.Types.Hdr10Plus" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "Hdr10Plus'" 'PrefixI 'True) (S1 ('MetaSel ('Just "masteringMonitorNits") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "targetMonitorNits") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))

newHdr10Plus :: Hdr10Plus Source #

Create a value of Hdr10Plus with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.
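
A minimal sketch; both luminance values are chosen purely for illustration.

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- Illustrative: a 1000-nit mastering display and a 1000-nit target display.
exampleHdr10Plus :: Hdr10Plus
exampleHdr10Plus =
  newHdr10Plus
    & hdr10Plus_masteringMonitorNits ?~ 1000
    & hdr10Plus_targetMonitorNits ?~ 1000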

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:masteringMonitorNits:Hdr10Plus', hdr10Plus_masteringMonitorNits - Specify the HDR10+ mastering display normalized peak luminance, in nits. This is the normalized actual peak luminance of the mastering display, as defined by ST 2094-40.

$sel:targetMonitorNits:Hdr10Plus', hdr10Plus_targetMonitorNits - Specify the HDR10+ target display nominal peak luminance, in nits. This is the nominal maximum luminance of the target display as defined by ST 2094-40.

hdr10Plus_masteringMonitorNits :: Lens' Hdr10Plus (Maybe Natural) Source #

Specify the HDR10+ mastering display normalized peak luminance, in nits. This is the normalized actual peak luminance of the mastering display, as defined by ST 2094-40.

hdr10Plus_targetMonitorNits :: Lens' Hdr10Plus (Maybe Natural) Source #

Specify the HDR10+ target display nominal peak luminance, in nits. This is the nominal maximum luminance of the target display as defined by ST 2094-40.

HlsAdditionalManifest

data HlsAdditionalManifest Source #

Specify the details for each additional HLS manifest that you want the service to generate for this output group. Each manifest can reference a different subset of outputs in the group.

See: newHlsAdditionalManifest smart constructor.

Constructors

HlsAdditionalManifest' 

Fields

  • manifestNameModifier :: Maybe Text

    Specify a name modifier that the service adds to the name of this manifest to make it different from the file names of the other main manifests in the output group. For example, say that the default main manifest for your HLS group is film-name.m3u8. If you enter "-no-premium" for this setting, then the file name the service generates for this top-level manifest is film-name-no-premium.m3u8. For HLS output groups, specify a manifestNameModifier that is different from the nameModifier of the output. The service uses the output name modifier to create unique names for the individual variant manifests.

  • selectedOutputs :: Maybe [Text]

    Specify the outputs that you want this additional top-level manifest to reference.

Instances

Instances details
Eq HlsAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdditionalManifest

Read HlsAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdditionalManifest

Show HlsAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdditionalManifest

Generic HlsAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdditionalManifest

Associated Types

type Rep HlsAdditionalManifest :: Type -> Type #

NFData HlsAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdditionalManifest

Methods

rnf :: HlsAdditionalManifest -> () #

Hashable HlsAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdditionalManifest

ToJSON HlsAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdditionalManifest

FromJSON HlsAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdditionalManifest

type Rep HlsAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsAdditionalManifest

type Rep HlsAdditionalManifest = D1 ('MetaData "HlsAdditionalManifest" "Amazonka.MediaConvert.Types.HlsAdditionalManifest" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "HlsAdditionalManifest'" 'PrefixI 'True) (S1 ('MetaSel ('Just "manifestNameModifier") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "selectedOutputs") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [Text]))))

newHlsAdditionalManifest :: HlsAdditionalManifest Source #

Create a value of HlsAdditionalManifest with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.
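
A minimal sketch using the "-no-premium" modifier from the field description below; the entries in selectedOutputs are hypothetical placeholders for outputs defined elsewhere in the job.

{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- "-no-premium" follows the example in the field description; the output
-- names are hypothetical placeholders.
extraManifest :: HlsAdditionalManifest
extraManifest =
  newHlsAdditionalManifest
    & hlsAdditionalManifest_manifestNameModifier ?~ "-no-premium"
    & hlsAdditionalManifest_selectedOutputs ?~ ["sd-output", "audio-only"]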

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:manifestNameModifier:HlsAdditionalManifest', hlsAdditionalManifest_manifestNameModifier - Specify a name modifier that the service adds to the name of this manifest to make it different from the file names of the other main manifests in the output group. For example, say that the default main manifest for your HLS group is film-name.m3u8. If you enter "-no-premium" for this setting, then the file name the service generates for this top-level manifest is film-name-no-premium.m3u8. For HLS output groups, specify a manifestNameModifier that is different from the nameModifier of the output. The service uses the output name modifier to create unique names for the individual variant manifests.

$sel:selectedOutputs:HlsAdditionalManifest', hlsAdditionalManifest_selectedOutputs - Specify the outputs that you want this additional top-level manifest to reference.

hlsAdditionalManifest_manifestNameModifier :: Lens' HlsAdditionalManifest (Maybe Text) Source #

Specify a name modifier that the service adds to the name of this manifest to make it different from the file names of the other main manifests in the output group. For example, say that the default main manifest for your HLS group is film-name.m3u8. If you enter "-no-premium" for this setting, then the file name the service generates for this top-level manifest is film-name-no-premium.m3u8. For HLS output groups, specify a manifestNameModifier that is different from the nameModifier of the output. The service uses the output name modifier to create unique names for the individual variant manifests.

hlsAdditionalManifest_selectedOutputs :: Lens' HlsAdditionalManifest (Maybe [Text]) Source #

Specify the outputs that you want this additional top-level manifest to reference.
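 
As a quick illustration of the smart-constructor-plus-lens workflow described above, here is a minimal sketch that builds an additional top-level manifest with the "-no-premium" name modifier. It assumes the lens package for the (&) and (?~) operators (generic-lens or optics work equally well), and the selected output name modifiers are placeholders.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Additional top-level manifest that references only the non-premium outputs.
-- "out-low" and "out-med" are placeholder output name modifiers.
noPremiumManifest :: HlsAdditionalManifest
noPremiumManifest =
  newHlsAdditionalManifest
    & hlsAdditionalManifest_manifestNameModifier ?~ "-no-premium"
    & hlsAdditionalManifest_selectedOutputs ?~ ["out-low", "out-med"]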

HlsCaptionLanguageMapping

data HlsCaptionLanguageMapping Source #

Caption Language Mapping

See: newHlsCaptionLanguageMapping smart constructor.

Constructors

HlsCaptionLanguageMapping' 

Fields

  • customLanguageCode :: Maybe Text

    Specify the language for this captions channel, using the ISO 639-2 or ISO 639-3 three-letter language code

  • languageCode :: Maybe LanguageCode

    Specify the language, using the ISO 639-2 three-letter code listed at https://www.loc.gov/standards/iso639-2/php/code_list.php.

  • languageDescription :: Maybe Text

    Caption language description.

  • captionChannel :: Maybe Int

    Caption channel.

Instances

Instances details
Eq HlsCaptionLanguageMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageMapping

Read HlsCaptionLanguageMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageMapping

Show HlsCaptionLanguageMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageMapping

Generic HlsCaptionLanguageMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageMapping

Associated Types

type Rep HlsCaptionLanguageMapping :: Type -> Type #

NFData HlsCaptionLanguageMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageMapping

Hashable HlsCaptionLanguageMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageMapping

ToJSON HlsCaptionLanguageMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageMapping

FromJSON HlsCaptionLanguageMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageMapping

type Rep HlsCaptionLanguageMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsCaptionLanguageMapping

type Rep HlsCaptionLanguageMapping = D1 ('MetaData "HlsCaptionLanguageMapping" "Amazonka.MediaConvert.Types.HlsCaptionLanguageMapping" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "HlsCaptionLanguageMapping'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "customLanguageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "languageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe LanguageCode))) :*: (S1 ('MetaSel ('Just "languageDescription") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "captionChannel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)))))

newHlsCaptionLanguageMapping :: HlsCaptionLanguageMapping Source #

Create a value of HlsCaptionLanguageMapping with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:customLanguageCode:HlsCaptionLanguageMapping', hlsCaptionLanguageMapping_customLanguageCode - Specify the language for this captions channel, using the ISO 639-2 or ISO 639-3 three-letter language code

$sel:languageCode:HlsCaptionLanguageMapping', hlsCaptionLanguageMapping_languageCode - Specify the language, using the ISO 639-2 three-letter code listed at https://www.loc.gov/standards/iso639-2/php/code_list.php.

$sel:languageDescription:HlsCaptionLanguageMapping', hlsCaptionLanguageMapping_languageDescription - Caption language description.

$sel:captionChannel:HlsCaptionLanguageMapping', hlsCaptionLanguageMapping_captionChannel - Caption channel.

hlsCaptionLanguageMapping_customLanguageCode :: Lens' HlsCaptionLanguageMapping (Maybe Text) Source #

Specify the language for this captions channel, using the ISO 639-2 or ISO 639-3 three-letter language code

hlsCaptionLanguageMapping_languageCode :: Lens' HlsCaptionLanguageMapping (Maybe LanguageCode) Source #

Specify the language, using the ISO 639-2 three-letter code listed at https://www.loc.gov/standards/iso639-2/php/code_list.php.
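 
A minimal sketch of building a caption language mapping with the constructor and lenses above. It assumes the lens package for the (&) and (?~) operators; the language code and channel number are placeholder values.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Map caption channel 1 to English via the ISO 639-2 three-letter code.
englishCaptions :: HlsCaptionLanguageMapping
englishCaptions =
  newHlsCaptionLanguageMapping
    & hlsCaptionLanguageMapping_customLanguageCode ?~ "eng"
    & hlsCaptionLanguageMapping_captionChannel ?~ 1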

HlsEncryptionSettings

data HlsEncryptionSettings Source #

Settings for HLS encryption

See: newHlsEncryptionSettings smart constructor.

Constructors

HlsEncryptionSettings' 

Fields

  • offlineEncrypted :: Maybe HlsOfflineEncrypted

    Enable this setting to insert the EXT-X-SESSION-KEY element into the master playlist. This allows for offline Apple HLS FairPlay content protection.

  • encryptionMethod :: Maybe HlsEncryptionType

    Encrypts the segments with the given encryption scheme. Leave blank to disable. Selecting 'Disabled' in the web interface also disables encryption.

  • constantInitializationVector :: Maybe Text

    This is a 128-bit, 16-byte hex value represented by a 32-character text string. If this parameter is not set then the Initialization Vector will follow the segment number by default.

  • type' :: Maybe HlsKeyProviderType

    Specify whether your DRM encryption key is static or from a key provider that follows the SPEKE standard. For more information about SPEKE, see https://docs.aws.amazon.com/speke/latest/documentation/what-is-speke.html.

  • staticKeyProvider :: Maybe StaticKeyProvider

    Use these settings to set up encryption with a static key provider.

  • spekeKeyProvider :: Maybe SpekeKeyProvider

    If your output group type is HLS, DASH, or Microsoft Smooth, use these settings when doing DRM encryption with a SPEKE-compliant key provider. If your output group type is CMAF, use the SpekeKeyProviderCmaf settings instead.

  • initializationVectorInManifest :: Maybe HlsInitializationVectorInManifest

    The Initialization Vector is a 128-bit number used in conjunction with the key for encrypting blocks. If set to INCLUDE, Initialization Vector is listed in the manifest. Otherwise Initialization Vector is not in the manifest.

Instances

Instances details
Eq HlsEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionSettings

Read HlsEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionSettings

Show HlsEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionSettings

Generic HlsEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionSettings

Associated Types

type Rep HlsEncryptionSettings :: Type -> Type #

NFData HlsEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionSettings

Methods

rnf :: HlsEncryptionSettings -> () #

Hashable HlsEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionSettings

ToJSON HlsEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionSettings

FromJSON HlsEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionSettings

type Rep HlsEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsEncryptionSettings

type Rep HlsEncryptionSettings = D1 ('MetaData "HlsEncryptionSettings" "Amazonka.MediaConvert.Types.HlsEncryptionSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "HlsEncryptionSettings'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "offlineEncrypted") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsOfflineEncrypted)) :*: (S1 ('MetaSel ('Just "encryptionMethod") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsEncryptionType)) :*: S1 ('MetaSel ('Just "constantInitializationVector") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))) :*: ((S1 ('MetaSel ('Just "type'") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsKeyProviderType)) :*: S1 ('MetaSel ('Just "staticKeyProvider") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe StaticKeyProvider))) :*: (S1 ('MetaSel ('Just "spekeKeyProvider") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe SpekeKeyProvider)) :*: S1 ('MetaSel ('Just "initializationVectorInManifest") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsInitializationVectorInManifest))))))

newHlsEncryptionSettings :: HlsEncryptionSettings Source #

Create a value of HlsEncryptionSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:offlineEncrypted:HlsEncryptionSettings', hlsEncryptionSettings_offlineEncrypted - Enable this setting to insert the EXT-X-SESSION-KEY element into the master playlist. This allows for offline Apple HLS FairPlay content protection.

$sel:encryptionMethod:HlsEncryptionSettings', hlsEncryptionSettings_encryptionMethod - Encrypts the segments with the given encryption scheme. Leave blank to disable. Selecting 'Disabled' in the web interface also disables encryption.

$sel:constantInitializationVector:HlsEncryptionSettings', hlsEncryptionSettings_constantInitializationVector - This is a 128-bit, 16-byte hex value represented by a 32-character text string. If this parameter is not set then the Initialization Vector will follow the segment number by default.

$sel:type':HlsEncryptionSettings', hlsEncryptionSettings_type - Specify whether your DRM encryption key is static or from a key provider that follows the SPEKE standard. For more information about SPEKE, see https://docs.aws.amazon.com/speke/latest/documentation/what-is-speke.html.

$sel:staticKeyProvider:HlsEncryptionSettings', hlsEncryptionSettings_staticKeyProvider - Use these settings to set up encryption with a static key provider.

$sel:spekeKeyProvider:HlsEncryptionSettings', hlsEncryptionSettings_spekeKeyProvider - If your output group type is HLS, DASH, or Microsoft Smooth, use these settings when doing DRM encryption with a SPEKE-compliant key provider. If your output group type is CMAF, use the SpekeKeyProviderCmaf settings instead.

$sel:initializationVectorInManifest:HlsEncryptionSettings', hlsEncryptionSettings_initializationVectorInManifest - The Initialization Vector is a 128-bit number used in conjunction with the key for encrypting blocks. If set to INCLUDE, Initialization Vector is listed in the manifest. Otherwise Initialization Vector is not in the manifest.

hlsEncryptionSettings_offlineEncrypted :: Lens' HlsEncryptionSettings (Maybe HlsOfflineEncrypted) Source #

Enable this setting to insert the EXT-X-SESSION-KEY element into the master playlist. This allows for offline Apple HLS FairPlay content protection.

hlsEncryptionSettings_encryptionMethod :: Lens' HlsEncryptionSettings (Maybe HlsEncryptionType) Source #

Encrypts the segments with the given encryption scheme. Leave blank to disable. Selecting 'Disabled' in the web interface also disables encryption.

hlsEncryptionSettings_constantInitializationVector :: Lens' HlsEncryptionSettings (Maybe Text) Source #

This is a 128-bit, 16-byte hex value represented by a 32-character text string. If this parameter is not set then the Initialization Vector will follow the segment number by default.

hlsEncryptionSettings_type :: Lens' HlsEncryptionSettings (Maybe HlsKeyProviderType) Source #

Specify whether your DRM encryption key is static or from a key provider that follows the SPEKE standard. For more information about SPEKE, see https://docs.aws.amazon.com/speke/latest/documentation/what-is-speke.html.

hlsEncryptionSettings_staticKeyProvider :: Lens' HlsEncryptionSettings (Maybe StaticKeyProvider) Source #

Use these settings to set up encryption with a static key provider.

hlsEncryptionSettings_spekeKeyProvider :: Lens' HlsEncryptionSettings (Maybe SpekeKeyProvider) Source #

If your output group type is HLS, DASH, or Microsoft Smooth, use these settings when doing DRM encryption with a SPEKE-compliant key provider. If your output group type is CMAF, use the SpekeKeyProviderCmaf settings instead.

hlsEncryptionSettings_initializationVectorInManifest :: Lens' HlsEncryptionSettings (Maybe HlsInitializationVectorInManifest) Source #

The Initialization Vector is a 128-bit number used in conjunction with the key for encrypting blocks. If set to INCLUDE, Initialization Vector is listed in the manifest. Otherwise Initialization Vector is not in the manifest.
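 
The sketch below shows the same pattern for HlsEncryptionSettings, setting only a constant initialization vector. The 32-character hex string is a placeholder; in a real job you would also set the encryption method, key provider type, and a static or SPEKE key provider through the corresponding lenses. The (&) and (?~) operators are assumed to come from the lens package.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Encryption settings with a fixed (constant) initialization vector.
-- The hex value below is a placeholder, not a real key or IV.
fixedIvEncryption :: HlsEncryptionSettings
fixedIvEncryption =
  newHlsEncryptionSettings
    & hlsEncryptionSettings_constantInitializationVector
        ?~ "00000000000000000000000000000000"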

HlsGroupSettings

data HlsGroupSettings Source #

Settings related to your HLS output package. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/outputs-file-ABR.html. When you work directly in your JSON job specification, include this object and any required children when you set Type, under OutputGroupSettings, to HLS_GROUP_SETTINGS.

See: newHlsGroupSettings smart constructor.

Constructors

HlsGroupSettings' 

Fields

  • directoryStructure :: Maybe HlsDirectoryStructure

    Indicates whether segments should be placed in subdirectories.

  • segmentControl :: Maybe HlsSegmentControl

    When set to SINGLE_FILE, the service emits the program as a single media resource (.ts) file and uses #EXT-X-BYTERANGE tags to index segments for playback.

  • destination :: Maybe Text

    Use Destination (Destination) to specify the S3 output location and the output filename base. Destination accepts format identifiers. If you do not specify the base filename in the URI, the service will use the filename of the input file. If your job has multiple inputs, the service uses the filename of the first input file.

  • timedMetadataId3Period :: Maybe Int

    Timed Metadata interval in seconds.

  • targetDurationCompatibilityMode :: Maybe HlsTargetDurationCompatibilityMode

    When set to LEGACY, the segment target duration is always rounded up to the nearest integer value above its current value in seconds. When set to SPEC_COMPLIANT, the segment target duration is rounded up to the nearest integer value if fraction seconds are greater than or equal to 0.5 (>= 0.5) and rounded down if less than 0.5 (< 0.5). You may need to use LEGACY if your client needs to ensure that the target duration is always longer than the actual duration of the segment. Some older players may experience interrupted playback when the actual duration of a track in a segment is longer than the target duration.

  • imageBasedTrickPlay :: Maybe HlsImageBasedTrickPlay

    Specify whether MediaConvert generates images for trick play. Keep the default value, None (NONE), to not generate any images. Choose Thumbnail (THUMBNAIL) to generate tiled thumbnails. Choose Thumbnail and full frame (THUMBNAIL_AND_FULLFRAME) to generate tiled thumbnails and full-resolution images of single frames. MediaConvert creates a child manifest for each set of images that you generate and adds corresponding entries to the parent manifest. A common application for these images is Roku trick mode. The thumbnails and full-frame images that MediaConvert creates with this feature are compatible with this Roku specification: https://developer.roku.com/docs/developer-program/media-playback/trick-mode/hls-and-dash.md

  • additionalManifests :: Maybe [HlsAdditionalManifest]

    By default, the service creates one top-level .m3u8 HLS manifest for each HLS output group in your job. This default manifest references every output in the output group. To create additional top-level manifests that reference a subset of the outputs in the output group, specify a list of them here.

  • minSegmentLength :: Maybe Natural

    When set, Minimum Segment Size is enforced by looking ahead and back within the specified range for a nearby avail and extending the segment size if needed.

  • programDateTime :: Maybe HlsProgramDateTime

    Includes or excludes EXT-X-PROGRAM-DATE-TIME tag in .m3u8 manifest files. The value is calculated as follows: either the program date and time are initialized using the input timecode source, or the time is initialized using the input timecode source and the date is initialized using the timestamp_offset.

  • segmentLengthControl :: Maybe HlsSegmentLengthControl

    Specify how you want MediaConvert to determine the segment length. Choose Exact (EXACT) to have the encoder use the exact length that you specify with the setting Segment length (SegmentLength). This might result in extra I-frames. Choose Multiple of GOP (GOP_MULTIPLE) to have the encoder round up the segment lengths to match the next GOP boundary.

  • imageBasedTrickPlaySettings :: Maybe HlsImageBasedTrickPlaySettings

    Tile and thumbnail settings applicable when imageBasedTrickPlay is ADVANCED

  • programDateTimePeriod :: Maybe Natural

    Period of insertion of EXT-X-PROGRAM-DATE-TIME entry, in seconds.

  • codecSpecification :: Maybe HlsCodecSpecification

    Specification to use (RFC-6381 or the default RFC-4281) during m3u8 playlist generation.

  • captionLanguageMappings :: Maybe [HlsCaptionLanguageMapping]

    Language to be used on Caption outputs

  • baseUrl :: Maybe Text

    A partial URI prefix that will be prepended to each output in the media .m3u8 file. Can be used if base manifest is delivered from a different URL than the main .m3u8 file.

  • destinationSettings :: Maybe DestinationSettings

    Settings associated with the destination. Will vary based on the type of destination

  • minFinalSegmentLength :: Maybe Double

    Keep this setting at the default value of 0, unless you are troubleshooting a problem with how devices play back the end of your video asset. If you know that player devices are hanging on the final segment of your video because the length of your final segment is too short, use this setting to specify a minimum final segment length, in seconds. Choose a value that is greater than or equal to 1 and less than your segment length. When you specify a value for this setting, the encoder will combine any final segment that is shorter than the length that you specify with the previous segment. For example, your segment length is 3 seconds and your final segment is .5 seconds without a minimum final segment length; when you set the minimum final segment length to 1, your final segment is 3.5 seconds.

  • adMarkers :: Maybe [HlsAdMarkers]

    Choose one or more ad marker types to decorate your Apple HLS manifest. This setting does not determine whether SCTE-35 markers appear in the outputs themselves.

  • encryption :: Maybe HlsEncryptionSettings

    DRM settings.

  • segmentLength :: Maybe Natural

    Specify the length, in whole seconds, of each segment. When you don't specify a value, MediaConvert defaults to 10. Related settings: Use Segment length control (SegmentLengthControl) to specify whether the encoder enforces this value strictly. Use Segment control (HlsSegmentControl) to specify whether MediaConvert creates separate segment files or one content file that has metadata to mark the segment boundaries.

  • timedMetadataId3Frame :: Maybe HlsTimedMetadataId3Frame

    Indicates ID3 frame that has the timecode.

  • outputSelection :: Maybe HlsOutputSelection

    Indicates whether the .m3u8 manifest file should be generated for this HLS output group.

  • captionLanguageSetting :: Maybe HlsCaptionLanguageSetting

    Applies only to 608 Embedded output captions. Insert: Include CLOSED-CAPTIONS lines in the manifest. Specify at least one language in the CC1 Language Code field. One CLOSED-CAPTION line is added for each Language Code you specify. Make sure to specify the languages in the order in which they appear in the original source (if the source is embedded format) or the order of the caption selectors (if the source is other than embedded). Otherwise, languages in the manifest will not match up properly with the output captions. None: Include CLOSED-CAPTIONS=NONE line in the manifest. Omit: Omit any CLOSED-CAPTIONS line from the manifest.

  • segmentsPerSubdirectory :: Maybe Natural

    Number of segments to write to a subdirectory before starting a new one. directoryStructure must be SINGLE_DIRECTORY for this setting to have an effect.

  • manifestDurationFormat :: Maybe HlsManifestDurationFormat

    Indicates whether the output manifest should use floating point values for segment duration.

  • audioOnlyHeader :: Maybe HlsAudioOnlyHeader

    Ignore this setting unless you are using FairPlay DRM with Verimatrix and you encounter playback issues. Keep the default value, Include (INCLUDE), to output audio-only headers. Choose Exclude (EXCLUDE) to remove the audio-only headers from your audio segments.

  • clientCache :: Maybe HlsClientCache

    Disable this setting only when your workflow requires the #EXT-X-ALLOW-CACHE:no tag. Otherwise, keep the default value Enabled (ENABLED) and control caching in your video distribution setup. For example, use the Cache-Control HTTP header.

  • timestampDeltaMilliseconds :: Maybe Int

    Provides an extra millisecond delta offset to fine tune the timestamps.

  • streamInfResolution :: Maybe HlsStreamInfResolution

    Include or exclude RESOLUTION attribute for video in EXT-X-STREAM-INF tag of variant manifest.

  • manifestCompression :: Maybe HlsManifestCompression

    When set to GZIP, compresses HLS playlist.

Instances

Instances details
Eq HlsGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsGroupSettings

Read HlsGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsGroupSettings

Show HlsGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsGroupSettings

Generic HlsGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsGroupSettings

Associated Types

type Rep HlsGroupSettings :: Type -> Type #

NFData HlsGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsGroupSettings

Methods

rnf :: HlsGroupSettings -> () #

Hashable HlsGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsGroupSettings

ToJSON HlsGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsGroupSettings

FromJSON HlsGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsGroupSettings

type Rep HlsGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsGroupSettings

type Rep HlsGroupSettings = D1 ('MetaData "HlsGroupSettings" "Amazonka.MediaConvert.Types.HlsGroupSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "HlsGroupSettings'" 'PrefixI 'True) ((((S1 ('MetaSel ('Just "directoryStructure") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsDirectoryStructure)) :*: (S1 ('MetaSel ('Just "segmentControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsSegmentControl)) :*: S1 ('MetaSel ('Just "destination") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))) :*: ((S1 ('MetaSel ('Just "timedMetadataId3Period") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)) :*: S1 ('MetaSel ('Just "targetDurationCompatibilityMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsTargetDurationCompatibilityMode))) :*: (S1 ('MetaSel ('Just "imageBasedTrickPlay") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsImageBasedTrickPlay)) :*: S1 ('MetaSel ('Just "additionalManifests") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [HlsAdditionalManifest]))))) :*: (((S1 ('MetaSel ('Just "minSegmentLength") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "programDateTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsProgramDateTime))) :*: (S1 ('MetaSel ('Just "segmentLengthControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsSegmentLengthControl)) :*: S1 ('MetaSel ('Just "imageBasedTrickPlaySettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsImageBasedTrickPlaySettings)))) :*: ((S1 ('MetaSel ('Just "programDateTimePeriod") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "codecSpecification") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsCodecSpecification))) :*: (S1 ('MetaSel ('Just "captionLanguageMappings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [HlsCaptionLanguageMapping])) :*: S1 ('MetaSel ('Just "baseUrl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))))) :*: (((S1 ('MetaSel ('Just "destinationSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DestinationSettings)) :*: (S1 ('MetaSel ('Just "minFinalSegmentLength") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: S1 ('MetaSel ('Just "adMarkers") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [HlsAdMarkers])))) :*: ((S1 ('MetaSel ('Just "encryption") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsEncryptionSettings)) :*: S1 ('MetaSel ('Just "segmentLength") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "timedMetadataId3Frame") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsTimedMetadataId3Frame)) :*: S1 ('MetaSel ('Just "outputSelection") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsOutputSelection))))) :*: (((S1 ('MetaSel ('Just "captionLanguageSetting") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsCaptionLanguageSetting)) :*: S1 ('MetaSel ('Just "segmentsPerSubdirectory") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) 
:*: (S1 ('MetaSel ('Just "manifestDurationFormat") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsManifestDurationFormat)) :*: S1 ('MetaSel ('Just "audioOnlyHeader") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsAudioOnlyHeader)))) :*: ((S1 ('MetaSel ('Just "clientCache") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsClientCache)) :*: S1 ('MetaSel ('Just "timestampDeltaMilliseconds") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int))) :*: (S1 ('MetaSel ('Just "streamInfResolution") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsStreamInfResolution)) :*: S1 ('MetaSel ('Just "manifestCompression") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsManifestCompression))))))))

newHlsGroupSettings :: HlsGroupSettings Source #

Create a value of HlsGroupSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:directoryStructure:HlsGroupSettings', hlsGroupSettings_directoryStructure - Indicates whether segments should be placed in subdirectories.

$sel:segmentControl:HlsGroupSettings', hlsGroupSettings_segmentControl - When set to SINGLE_FILE, the service emits the program as a single media resource (.ts) file and uses #EXT-X-BYTERANGE tags to index segments for playback.

$sel:destination:HlsGroupSettings', hlsGroupSettings_destination - Use Destination (Destination) to specify the S3 output location and the output filename base. Destination accepts format identifiers. If you do not specify the base filename in the URI, the service will use the filename of the input file. If your job has multiple inputs, the service uses the filename of the first input file.

$sel:timedMetadataId3Period:HlsGroupSettings', hlsGroupSettings_timedMetadataId3Period - Timed Metadata interval in seconds.

$sel:targetDurationCompatibilityMode:HlsGroupSettings', hlsGroupSettings_targetDurationCompatibilityMode - When set to LEGACY, the segment target duration is always rounded up to the nearest integer value above its current value in seconds. When set to SPEC_COMPLIANT, the segment target duration is rounded up to the nearest integer value if fraction seconds are greater than or equal to 0.5 (>= 0.5) and rounded down if less than 0.5 (< 0.5). You may need to use LEGACY if your client needs to ensure that the target duration is always longer than the actual duration of the segment. Some older players may experience interrupted playback when the actual duration of a track in a segment is longer than the target duration.

$sel:imageBasedTrickPlay:HlsGroupSettings', hlsGroupSettings_imageBasedTrickPlay - Specify whether MediaConvert generates images for trick play. Keep the default value, None (NONE), to not generate any images. Choose Thumbnail (THUMBNAIL) to generate tiled thumbnails. Choose Thumbnail and full frame (THUMBNAIL_AND_FULLFRAME) to generate tiled thumbnails and full-resolution images of single frames. MediaConvert creates a child manifest for each set of images that you generate and adds corresponding entries to the parent manifest. A common application for these images is Roku trick mode. The thumbnails and full-frame images that MediaConvert creates with this feature are compatible with this Roku specification: https://developer.roku.com/docs/developer-program/media-playback/trick-mode/hls-and-dash.md

$sel:additionalManifests:HlsGroupSettings', hlsGroupSettings_additionalManifests - By default, the service creates one top-level .m3u8 HLS manifest for each HLS output group in your job. This default manifest references every output in the output group. To create additional top-level manifests that reference a subset of the outputs in the output group, specify a list of them here.

$sel:minSegmentLength:HlsGroupSettings', hlsGroupSettings_minSegmentLength - When set, Minimum Segment Size is enforced by looking ahead and back within the specified range for a nearby avail and extending the segment size if needed.

$sel:programDateTime:HlsGroupSettings', hlsGroupSettings_programDateTime - Includes or excludes EXT-X-PROGRAM-DATE-TIME tag in .m3u8 manifest files. The value is calculated as follows: either the program date and time are initialized using the input timecode source, or the time is initialized using the input timecode source and the date is initialized using the timestamp_offset.

$sel:segmentLengthControl:HlsGroupSettings', hlsGroupSettings_segmentLengthControl - Specify how you want MediaConvert to determine the segment length. Choose Exact (EXACT) to have the encoder use the exact length that you specify with the setting Segment length (SegmentLength). This might result in extra I-frames. Choose Multiple of GOP (GOP_MULTIPLE) to have the encoder round up the segment lengths to match the next GOP boundary.

$sel:imageBasedTrickPlaySettings:HlsGroupSettings', hlsGroupSettings_imageBasedTrickPlaySettings - Tile and thumbnail settings applicable when imageBasedTrickPlay is ADVANCED

$sel:programDateTimePeriod:HlsGroupSettings', hlsGroupSettings_programDateTimePeriod - Period of insertion of EXT-X-PROGRAM-DATE-TIME entry, in seconds.

$sel:codecSpecification:HlsGroupSettings', hlsGroupSettings_codecSpecification - Specification to use (RFC-6381 or the default RFC-4281) during m3u8 playlist generation.

$sel:captionLanguageMappings:HlsGroupSettings', hlsGroupSettings_captionLanguageMappings - Language to be used on Caption outputs

$sel:baseUrl:HlsGroupSettings', hlsGroupSettings_baseUrl - A partial URI prefix that will be prepended to each output in the media .m3u8 file. Can be used if base manifest is delivered from a different URL than the main .m3u8 file.

$sel:destinationSettings:HlsGroupSettings', hlsGroupSettings_destinationSettings - Settings associated with the destination. Will vary based on the type of destination

$sel:minFinalSegmentLength:HlsGroupSettings', hlsGroupSettings_minFinalSegmentLength - Keep this setting at the default value of 0, unless you are troubleshooting a problem with how devices play back the end of your video asset. If you know that player devices are hanging on the final segment of your video because the length of your final segment is too short, use this setting to specify a minimum final segment length, in seconds. Choose a value that is greater than or equal to 1 and less than your segment length. When you specify a value for this setting, the encoder will combine any final segment that is shorter than the length that you specify with the previous segment. For example, your segment length is 3 seconds and your final segment is .5 seconds without a minimum final segment length; when you set the minimum final segment length to 1, your final segment is 3.5 seconds.

$sel:adMarkers:HlsGroupSettings', hlsGroupSettings_adMarkers - Choose one or more ad marker types to decorate your Apple HLS manifest. This setting does not determine whether SCTE-35 markers appear in the outputs themselves.

$sel:encryption:HlsGroupSettings', hlsGroupSettings_encryption - DRM settings.

$sel:segmentLength:HlsGroupSettings', hlsGroupSettings_segmentLength - Specify the length, in whole seconds, of each segment. When you don't specify a value, MediaConvert defaults to 10. Related settings: Use Segment length control (SegmentLengthControl) to specify whether the encoder enforces this value strictly. Use Segment control (HlsSegmentControl) to specify whether MediaConvert creates separate segment files or one content file that has metadata to mark the segment boundaries.

$sel:timedMetadataId3Frame:HlsGroupSettings', hlsGroupSettings_timedMetadataId3Frame - Indicates ID3 frame that has the timecode.

$sel:outputSelection:HlsGroupSettings', hlsGroupSettings_outputSelection - Indicates whether the .m3u8 manifest file should be generated for this HLS output group.

$sel:captionLanguageSetting:HlsGroupSettings', hlsGroupSettings_captionLanguageSetting - Applies only to 608 Embedded output captions. Insert: Include CLOSED-CAPTIONS lines in the manifest. Specify at least one language in the CC1 Language Code field. One CLOSED-CAPTION line is added for each Language Code you specify. Make sure to specify the languages in the order in which they appear in the original source (if the source is embedded format) or the order of the caption selectors (if the source is other than embedded). Otherwise, languages in the manifest will not match up properly with the output captions. None: Include CLOSED-CAPTIONS=NONE line in the manifest. Omit: Omit any CLOSED-CAPTIONS line from the manifest.

$sel:segmentsPerSubdirectory:HlsGroupSettings', hlsGroupSettings_segmentsPerSubdirectory - Number of segments to write to a subdirectory before starting a new one. directoryStructure must be SINGLE_DIRECTORY for this setting to have an effect.

$sel:manifestDurationFormat:HlsGroupSettings', hlsGroupSettings_manifestDurationFormat - Indicates whether the output manifest should use floating point values for segment duration.

$sel:audioOnlyHeader:HlsGroupSettings', hlsGroupSettings_audioOnlyHeader - Ignore this setting unless you are using FairPlay DRM with Verimatrix and you encounter playback issues. Keep the default value, Include (INCLUDE), to output audio-only headers. Choose Exclude (EXCLUDE) to remove the audio-only headers from your audio segments.

$sel:clientCache:HlsGroupSettings', hlsGroupSettings_clientCache - Disable this setting only when your workflow requires the #EXT-X-ALLOW-CACHE:no tag. Otherwise, keep the default value Enabled (ENABLED) and control caching in your video distribution setup. For example, use the Cache-Control HTTP header.

$sel:timestampDeltaMilliseconds:HlsGroupSettings', hlsGroupSettings_timestampDeltaMilliseconds - Provides an extra millisecond delta offset to fine tune the timestamps.

$sel:streamInfResolution:HlsGroupSettings', hlsGroupSettings_streamInfResolution - Include or exclude RESOLUTION attribute for video in EXT-X-STREAM-INF tag of variant manifest.

$sel:manifestCompression:HlsGroupSettings', hlsGroupSettings_manifestCompression - When set to GZIP, compresses HLS playlist.

hlsGroupSettings_directoryStructure :: Lens' HlsGroupSettings (Maybe HlsDirectoryStructure) Source #

Indicates whether segments should be placed in subdirectories.

hlsGroupSettings_segmentControl :: Lens' HlsGroupSettings (Maybe HlsSegmentControl) Source #

When set to SINGLE_FILE, the service emits the program as a single media resource (.ts) file and uses #EXT-X-BYTERANGE tags to index segments for playback.

hlsGroupSettings_destination :: Lens' HlsGroupSettings (Maybe Text) Source #

Use Destination (Destination) to specify the S3 output location and the output filename base. Destination accepts format identifiers. If you do not specify the base filename in the URI, the service will use the filename of the input file. If your job has multiple inputs, the service uses the filename of the first input file.

hlsGroupSettings_targetDurationCompatibilityMode :: Lens' HlsGroupSettings (Maybe HlsTargetDurationCompatibilityMode) Source #

When set to LEGACY, the segment target duration is always rounded up to the nearest integer value above its current value in seconds. When set to SPEC_COMPLIANT, the segment target duration is rounded up to the nearest integer value if fraction seconds are greater than or equal to 0.5 (>= 0.5) and rounded down if less than 0.5 (< 0.5). You may need to use LEGACY if your client needs to ensure that the target duration is always longer than the actual duration of the segment. Some older players may experience interrupted playback when the actual duration of a track in a segment is longer than the target duration.

hlsGroupSettings_imageBasedTrickPlay :: Lens' HlsGroupSettings (Maybe HlsImageBasedTrickPlay) Source #

Specify whether MediaConvert generates images for trick play. Keep the default value, None (NONE), to not generate any images. Choose Thumbnail (THUMBNAIL) to generate tiled thumbnails. Choose Thumbnail and full frame (THUMBNAIL_AND_FULLFRAME) to generate tiled thumbnails and full-resolution images of single frames. MediaConvert creates a child manifest for each set of images that you generate and adds corresponding entries to the parent manifest. A common application for these images is Roku trick mode. The thumbnails and full-frame images that MediaConvert creates with this feature are compatible with this Roku specification: https://developer.roku.com/docs/developer-program/media-playback/trick-mode/hls-and-dash.md

hlsGroupSettings_additionalManifests :: Lens' HlsGroupSettings (Maybe [HlsAdditionalManifest]) Source #

By default, the service creates one top-level .m3u8 HLS manifest for each HLS output group in your job. This default manifest references every output in the output group. To create additional top-level manifests that reference a subset of the outputs in the output group, specify a list of them here.

hlsGroupSettings_minSegmentLength :: Lens' HlsGroupSettings (Maybe Natural) Source #

When set, Minimum Segment Size is enforced by looking ahead and back within the specified range for a nearby avail and extending the segment size if needed.

hlsGroupSettings_programDateTime :: Lens' HlsGroupSettings (Maybe HlsProgramDateTime) Source #

Includes or excludes EXT-X-PROGRAM-DATE-TIME tag in .m3u8 manifest files. The value is calculated as follows: either the program date and time are initialized using the input timecode source, or the time is initialized using the input timecode source and the date is initialized using the timestamp_offset.

hlsGroupSettings_segmentLengthControl :: Lens' HlsGroupSettings (Maybe HlsSegmentLengthControl) Source #

Specify how you want MediaConvert to determine the segment length. Choose Exact (EXACT) to have the encoder use the exact length that you specify with the setting Segment length (SegmentLength). This might result in extra I-frames. Choose Multiple of GOP (GOP_MULTIPLE) to have the encoder round up the segment lengths to match the next GOP boundary.

hlsGroupSettings_imageBasedTrickPlaySettings :: Lens' HlsGroupSettings (Maybe HlsImageBasedTrickPlaySettings) Source #

Tile and thumbnail settings applicable when imageBasedTrickPlay is ADVANCED

hlsGroupSettings_programDateTimePeriod :: Lens' HlsGroupSettings (Maybe Natural) Source #

Period of insertion of EXT-X-PROGRAM-DATE-TIME entry, in seconds.

hlsGroupSettings_codecSpecification :: Lens' HlsGroupSettings (Maybe HlsCodecSpecification) Source #

Specification to use (RFC-6381 or the default RFC-4281) during m3u8 playlist generation.

hlsGroupSettings_baseUrl :: Lens' HlsGroupSettings (Maybe Text) Source #

A partial URI prefix that will be prepended to each output in the media .m3u8 file. Can be used if base manifest is delivered from a different URL than the main .m3u8 file.

hlsGroupSettings_destinationSettings :: Lens' HlsGroupSettings (Maybe DestinationSettings) Source #

Settings associated with the destination. Will vary based on the type of destination

hlsGroupSettings_minFinalSegmentLength :: Lens' HlsGroupSettings (Maybe Double) Source #

Keep this setting at the default value of 0, unless you are troubleshooting a problem with how devices play back the end of your video asset. If you know that player devices are hanging on the final segment of your video because the length of your final segment is too short, use this setting to specify a minimum final segment length, in seconds. Choose a value that is greater than or equal to 1 and less than your segment length. When you specify a value for this setting, the encoder will combine any final segment that is shorter than the length that you specify with the previous segment. For example, your segment length is 3 seconds and your final segment is .5 seconds without a minimum final segment length; when you set the minimum final segment length to 1, your final segment is 3.5 seconds.

hlsGroupSettings_adMarkers :: Lens' HlsGroupSettings (Maybe [HlsAdMarkers]) Source #

Choose one or more ad marker types to decorate your Apple HLS manifest. This setting does not determine whether SCTE-35 markers appear in the outputs themselves.

hlsGroupSettings_segmentLength :: Lens' HlsGroupSettings (Maybe Natural) Source #

Specify the length, in whole seconds, of each segment. When you don't specify a value, MediaConvert defaults to 10. Related settings: Use Segment length control (SegmentLengthControl) to specify whether the encoder enforces this value strictly. Use Segment control (HlsSegmentControl) to specify whether MediaConvert creates separate segment files or one content file that has metadata to mark the segment boundaries.

hlsGroupSettings_outputSelection :: Lens' HlsGroupSettings (Maybe HlsOutputSelection) Source #

Indicates whether the .m3u8 manifest file should be generated for this HLS output group.

hlsGroupSettings_captionLanguageSetting :: Lens' HlsGroupSettings (Maybe HlsCaptionLanguageSetting) Source #

Applies only to 608 Embedded output captions. Insert: Include CLOSED-CAPTIONS lines in the manifest. Specify at least one language in the CC1 Language Code field. One CLOSED-CAPTION line is added for each Language Code you specify. Make sure to specify the languages in the order in which they appear in the original source (if the source is embedded format) or the order of the caption selectors (if the source is other than embedded). Otherwise, languages in the manifest will not match up properly with the output captions. None: Include CLOSED-CAPTIONS=NONE line in the manifest. Omit: Omit any CLOSED-CAPTIONS line from the manifest.

hlsGroupSettings_segmentsPerSubdirectory :: Lens' HlsGroupSettings (Maybe Natural) Source #

Number of segments to write to a subdirectory before starting a new one. directoryStructure must be SINGLE_DIRECTORY for this setting to have an effect.

hlsGroupSettings_manifestDurationFormat :: Lens' HlsGroupSettings (Maybe HlsManifestDurationFormat) Source #

Indicates whether the output manifest should use floating point values for segment duration.

hlsGroupSettings_audioOnlyHeader :: Lens' HlsGroupSettings (Maybe HlsAudioOnlyHeader) Source #

Ignore this setting unless you are using FairPlay DRM with Verimatrix and you encounter playback issues. Keep the default value, Include (INCLUDE), to output audio-only headers. Choose Exclude (EXCLUDE) to remove the audio-only headers from your audio segments.

hlsGroupSettings_clientCache :: Lens' HlsGroupSettings (Maybe HlsClientCache) Source #

Disable this setting only when your workflow requires the #EXT-X-ALLOW-CACHE:no tag. Otherwise, keep the default value Enabled (ENABLED) and control caching in your video distribution setup. For example, use the Cache-Control HTTP header.

hlsGroupSettings_timestampDeltaMilliseconds :: Lens' HlsGroupSettings (Maybe Int) Source #

Provides an extra millisecond delta offset to fine tune the timestamps.

hlsGroupSettings_streamInfResolution :: Lens' HlsGroupSettings (Maybe HlsStreamInfResolution) Source #

Include or exclude RESOLUTION attribute for video in EXT-X-STREAM-INF tag of variant manifest.
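 
Putting the pieces together, the following sketch configures an HLS output group with 6-second segments, a minimum final segment length, and one additional top-level manifest. It assumes the lens package for the (&) and (?~) operators; the S3 destination and name modifier are placeholders.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- HLS output group writing 6-second segments to a placeholder S3 prefix and
-- folding any trailing segment shorter than 2 seconds into the previous one.
myHlsGroup :: HlsGroupSettings
myHlsGroup =
  newHlsGroupSettings
    & hlsGroupSettings_destination ?~ "s3://my-bucket/hls/film-name"
    & hlsGroupSettings_segmentLength ?~ 6
    & hlsGroupSettings_minFinalSegmentLength ?~ 2
    & hlsGroupSettings_additionalManifests
        ?~ [ newHlsAdditionalManifest
               & hlsAdditionalManifest_manifestNameModifier ?~ "-no-premium"
           ]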

HlsImageBasedTrickPlaySettings

data HlsImageBasedTrickPlaySettings Source #

Tile and thumbnail settings applicable when imageBasedTrickPlay is ADVANCED

See: newHlsImageBasedTrickPlaySettings smart constructor.

Constructors

HlsImageBasedTrickPlaySettings' 

Fields

  • tileWidth :: Maybe Natural

    Number of thumbnails in each row of a tile image. Set a value between 1 and 512.

  • thumbnailHeight :: Maybe Natural

    Height of each thumbnail within each tile image, in pixels. Leave blank to maintain aspect ratio with thumbnail width. If following the aspect ratio would lead to a total tile height greater than 4096, then the job will be rejected. Must be divisible by 2.

  • intervalCadence :: Maybe HlsIntervalCadence

    The cadence MediaConvert follows for generating thumbnails. If set to FOLLOW_IFRAME, MediaConvert generates thumbnails for each IDR frame in the output (matching the GOP cadence). If set to FOLLOW_CUSTOM, MediaConvert generates thumbnails according to the interval you specify in thumbnailInterval.

  • thumbnailWidth :: Maybe Natural

    Width of each thumbnail within each tile image, in pixels. Default is 312. Must be divisible by 8.

  • thumbnailInterval :: Maybe Double

    Enter the interval, in seconds, that MediaConvert uses to generate thumbnails. If the interval you enter doesn't align with the output frame rate, MediaConvert automatically rounds the interval to align with the output frame rate. For example, if the output frame rate is 29.97 frames per second and you enter 5, MediaConvert uses a 150 frame interval to generate thumbnails.

  • tileHeight :: Maybe Natural

    Number of thumbnails in each column of a tile image. Set a value between 2 and 2048. Must be divisible by 2.

Instances

Instances details
Eq HlsImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlaySettings

Read HlsImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlaySettings

Show HlsImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlaySettings

Generic HlsImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlaySettings

Associated Types

type Rep HlsImageBasedTrickPlaySettings :: Type -> Type #

NFData HlsImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlaySettings

Hashable HlsImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlaySettings

ToJSON HlsImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlaySettings

FromJSON HlsImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlaySettings

type Rep HlsImageBasedTrickPlaySettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsImageBasedTrickPlaySettings

type Rep HlsImageBasedTrickPlaySettings = D1 ('MetaData "HlsImageBasedTrickPlaySettings" "Amazonka.MediaConvert.Types.HlsImageBasedTrickPlaySettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "HlsImageBasedTrickPlaySettings'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "tileWidth") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "thumbnailHeight") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "intervalCadence") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsIntervalCadence)))) :*: (S1 ('MetaSel ('Just "thumbnailWidth") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "thumbnailInterval") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: S1 ('MetaSel ('Just "tileHeight") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))))

newHlsImageBasedTrickPlaySettings :: HlsImageBasedTrickPlaySettings Source #

Create a value of HlsImageBasedTrickPlaySettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:tileWidth:HlsImageBasedTrickPlaySettings', hlsImageBasedTrickPlaySettings_tileWidth - Number of thumbnails in each row of a tile image. Set a value between 1 and 512.

$sel:thumbnailHeight:HlsImageBasedTrickPlaySettings', hlsImageBasedTrickPlaySettings_thumbnailHeight - Height of each thumbnail within each tile image, in pixels. Leave blank to maintain aspect ratio with thumbnail width. If following the aspect ratio would lead to a total tile height greater than 4096, then the job will be rejected. Must be divisible by 2.

$sel:intervalCadence:HlsImageBasedTrickPlaySettings', hlsImageBasedTrickPlaySettings_intervalCadence - The cadence MediaConvert follows for generating thumbnails. If set to FOLLOW_IFRAME, MediaConvert generates thumbnails for each IDR frame in the output (matching the GOP cadence). If set to FOLLOW_CUSTOM, MediaConvert generates thumbnails according to the interval you specify in thumbnailInterval.

$sel:thumbnailWidth:HlsImageBasedTrickPlaySettings', hlsImageBasedTrickPlaySettings_thumbnailWidth - Width of each thumbnail within each tile image, in pixels. Default is 312. Must be divisible by 8.

$sel:thumbnailInterval:HlsImageBasedTrickPlaySettings', hlsImageBasedTrickPlaySettings_thumbnailInterval - Enter the interval, in seconds, that MediaConvert uses to generate thumbnails. If the interval you enter doesn't align with the output frame rate, MediaConvert automatically rounds the interval to align with the output frame rate. For example, if the output frame rate is 29.97 frames per second and you enter 5, MediaConvert uses a 150 frame interval to generate thumbnails.

$sel:tileHeight:HlsImageBasedTrickPlaySettings', hlsImageBasedTrickPlaySettings_tileHeight - Number of thumbnails in each column of a tile image. Set a value between 2 and 2048. Must be divisible by 2.

hlsImageBasedTrickPlaySettings_tileWidth :: Lens' HlsImageBasedTrickPlaySettings (Maybe Natural) Source #

Number of thumbnails in each row of a tile image. Set a value between 1 and 512.

hlsImageBasedTrickPlaySettings_thumbnailHeight :: Lens' HlsImageBasedTrickPlaySettings (Maybe Natural) Source #

Height of each thumbnail within each tile image, in pixels. Leave blank to maintain aspect ratio with thumbnail width. If following the aspect ratio would lead to a total tile height greater than 4096, then the job will be rejected. Must be divisible by 2.

hlsImageBasedTrickPlaySettings_intervalCadence :: Lens' HlsImageBasedTrickPlaySettings (Maybe HlsIntervalCadence) Source #

The cadence MediaConvert follows for generating thumbnails. If set to FOLLOW_IFRAME, MediaConvert generates thumbnails for each IDR frame in the output (matching the GOP cadence). If set to FOLLOW_CUSTOM, MediaConvert generates thumbnails according to the interval you specify in thumbnailInterval.

hlsImageBasedTrickPlaySettings_thumbnailWidth :: Lens' HlsImageBasedTrickPlaySettings (Maybe Natural) Source #

Width of each thumbnail within each tile image, in pixels. Default is 312. Must be divisible by 8.

hlsImageBasedTrickPlaySettings_thumbnailInterval :: Lens' HlsImageBasedTrickPlaySettings (Maybe Double) Source #

Enter the interval, in seconds, that MediaConvert uses to generate thumbnails. If the interval you enter doesn't align with the output frame rate, MediaConvert automatically rounds the interval to align with the output frame rate. For example, if the output frame rate is 29.97 frames per second and you enter 5, MediaConvert uses a 150 frame interval to generate thumbnails.

hlsImageBasedTrickPlaySettings_tileHeight :: Lens' HlsImageBasedTrickPlaySettings (Maybe Natural) Source #

Number of thumbnails in each column of a tile image. Set a value between 2 and 2048. Must be divisible by 2.
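 
The sketch below fills in the advanced trick-play settings with values that respect the documented constraints (thumbnail width divisible by 8, heights divisible by 2). The tile and thumbnail dimensions are illustrative, and the (&) and (?~) operators are assumed to come from the lens package. Note that for thumbnailInterval to take effect, intervalCadence would additionally need to be set to the FOLLOW_CUSTOM value of HlsIntervalCadence.

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Tiled thumbnails: 8 x 4 tiles of 312 x 176 px thumbnails, one thumbnail
-- every 5 seconds. All dimensions here are illustrative placeholders.
trickPlayTiles :: HlsImageBasedTrickPlaySettings
trickPlayTiles =
  newHlsImageBasedTrickPlaySettings
    & hlsImageBasedTrickPlaySettings_tileWidth ?~ 8
    & hlsImageBasedTrickPlaySettings_tileHeight ?~ 4
    & hlsImageBasedTrickPlaySettings_thumbnailWidth ?~ 312
    & hlsImageBasedTrickPlaySettings_thumbnailHeight ?~ 176
    & hlsImageBasedTrickPlaySettings_thumbnailInterval ?~ 5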

HlsRenditionGroupSettings

data HlsRenditionGroupSettings Source #

Settings specific to audio sources in an HLS alternate rendition group. Specify the properties (renditionGroupId, renditionName, or renditionLanguageCode) to identify the unique audio track among the alternative rendition groups present in the HLS manifest. If no unique track is found, or multiple tracks match the properties provided, the job fails. If no properties in hlsRenditionGroupSettings are specified, the default audio track within the video segment is chosen. If there is no audio within the video segment, the alternative audio with DEFAULT=YES is chosen instead.

See: newHlsRenditionGroupSettings smart constructor.

Constructors

HlsRenditionGroupSettings' 

Fields

  • renditionName :: Maybe Text

    Optional. Specify media name

  • renditionGroupId :: Maybe Text

    Optional. Specify alternative group ID

  • renditionLanguageCode :: Maybe LanguageCode

    Optional. Specify ISO 639-2 or ISO 639-3 code in the language property

Instances

Instances details
Eq HlsRenditionGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsRenditionGroupSettings

Read HlsRenditionGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsRenditionGroupSettings

Show HlsRenditionGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsRenditionGroupSettings

Generic HlsRenditionGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsRenditionGroupSettings

Associated Types

type Rep HlsRenditionGroupSettings :: Type -> Type #

NFData HlsRenditionGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsRenditionGroupSettings

Hashable HlsRenditionGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsRenditionGroupSettings

ToJSON HlsRenditionGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsRenditionGroupSettings

FromJSON HlsRenditionGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsRenditionGroupSettings

type Rep HlsRenditionGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsRenditionGroupSettings

type Rep HlsRenditionGroupSettings = D1 ('MetaData "HlsRenditionGroupSettings" "Amazonka.MediaConvert.Types.HlsRenditionGroupSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "HlsRenditionGroupSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "renditionName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "renditionGroupId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "renditionLanguageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe LanguageCode)))))

newHlsRenditionGroupSettings :: HlsRenditionGroupSettings Source #

Create a value of HlsRenditionGroupSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:renditionName:HlsRenditionGroupSettings', hlsRenditionGroupSettings_renditionName - Optional. Specify media name

$sel:renditionGroupId:HlsRenditionGroupSettings', hlsRenditionGroupSettings_renditionGroupId - Optional. Specify alternative group ID

$sel:renditionLanguageCode:HlsRenditionGroupSettings', hlsRenditionGroupSettings_renditionLanguageCode - Optional. Specify ISO 639-2 or ISO 639-3 code in the language property

hlsRenditionGroupSettings_renditionLanguageCode :: Lens' HlsRenditionGroupSettings (Maybe LanguageCode) Source #

Optional. Specify ISO 639-2 or ISO 639-3 code in the language property
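 
For completeness, here is a minimal sketch that selects an audio rendition by group ID and media name. The group ID and name are placeholders and must match entries in the source HLS manifest; the (&) and (?~) operators are assumed to come from the lens package.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Identify a specific alternate audio rendition in the source manifest.
-- "audio-aac" and "English" are placeholder values.
audioRenditionSelector :: HlsRenditionGroupSettings
audioRenditionSelector =
  newHlsRenditionGroupSettings
    & hlsRenditionGroupSettings_renditionGroupId ?~ "audio-aac"
    & hlsRenditionGroupSettings_renditionName ?~ "English"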

HlsSettings

data HlsSettings Source #

Settings for HLS output groups

See: newHlsSettings smart constructor.

Constructors

HlsSettings' 

Fields

  • descriptiveVideoServiceFlag :: Maybe HlsDescriptiveVideoServiceFlag

    Specify whether to flag this audio track as descriptive video service (DVS) in your HLS parent manifest. When you choose Flag (FLAG), MediaConvert includes the parameter CHARACTERISTICS="public.accessibility.describes-video" in the EXT-X-MEDIA entry for this track. When you keep the default choice, Don't flag (DONT_FLAG), MediaConvert leaves this parameter out. The DVS flag can help with accessibility on Apple devices. For more information, see the Apple documentation.

  • audioRenditionSets :: Maybe Text

    List all the audio groups that are used with the video output stream. Input all the audio GROUP-IDs that are associated with the video, separated by ','.

  • iFrameOnlyManifest :: Maybe HlsIFrameOnlyManifest

    Choose Include (INCLUDE) to have MediaConvert generate a child manifest that lists only the I-frames for this rendition, in addition to your regular manifest for this rendition. You might use this manifest as part of a workflow that creates preview functions for your video. MediaConvert adds both the I-frame only child manifest and the regular child manifest to the parent manifest. When you don't need the I-frame only child manifest, keep the default value Exclude (EXCLUDE).

  • audioGroupId :: Maybe Text

    Specifies the group to which the audio rendition belongs.

  • segmentModifier :: Maybe Text

    Use this setting to add an identifying string to the filename of each segment. The service adds this string between the name modifier and segment index number. You can use format identifiers in the string. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/using-variables-in-your-job-settings.html

  • audioOnlyContainer :: Maybe HlsAudioOnlyContainer

    Use this setting only in audio-only outputs. Choose MPEG-2 Transport Stream (M2TS) to create a file in an MPEG2-TS container. Keep the default value Automatic (AUTOMATIC) to create an audio-only file in a raw container. Regardless of the value that you specify here, if this output has video, the service will place the output into an MPEG2-TS container.

  • audioTrackType :: Maybe HlsAudioTrackType

    Four types of audio-only tracks are supported: * Audio-Only Variant Stream - The client can play back this audio-only stream instead of video in low-bandwidth scenarios. Represented as an EXT-X-STREAM-INF in the HLS manifest. * Alternate Audio, Auto Select, Default - Alternate rendition that the client should try to play back by default. Represented as an EXT-X-MEDIA in the HLS manifest with DEFAULT=YES, AUTOSELECT=YES. * Alternate Audio, Auto Select, Not Default - Alternate rendition that the client may try to play back by default. Represented as an EXT-X-MEDIA in the HLS manifest with DEFAULT=NO, AUTOSELECT=YES. * Alternate Audio, not Auto Select - Alternate rendition that the client will not try to play back by default. Represented as an EXT-X-MEDIA in the HLS manifest with DEFAULT=NO, AUTOSELECT=NO.

Instances

Instances details
Eq HlsSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSettings

Read HlsSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSettings

Show HlsSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSettings

Generic HlsSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSettings

Associated Types

type Rep HlsSettings :: Type -> Type #

NFData HlsSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSettings

Methods

rnf :: HlsSettings -> () #

Hashable HlsSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSettings

ToJSON HlsSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSettings

FromJSON HlsSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSettings

type Rep HlsSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HlsSettings

type Rep HlsSettings = D1 ('MetaData "HlsSettings" "Amazonka.MediaConvert.Types.HlsSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "HlsSettings'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "descriptiveVideoServiceFlag") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsDescriptiveVideoServiceFlag)) :*: (S1 ('MetaSel ('Just "audioRenditionSets") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "iFrameOnlyManifest") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsIFrameOnlyManifest)))) :*: ((S1 ('MetaSel ('Just "audioGroupId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "segmentModifier") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "audioOnlyContainer") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsAudioOnlyContainer)) :*: S1 ('MetaSel ('Just "audioTrackType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsAudioTrackType))))))

newHlsSettings :: HlsSettings Source #

Create a value of HlsSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:descriptiveVideoServiceFlag:HlsSettings', hlsSettings_descriptiveVideoServiceFlag - Specify whether to flag this audio track as descriptive video service (DVS) in your HLS parent manifest. When you choose Flag (FLAG), MediaConvert includes the parameter CHARACTERISTICS="public.accessibility.describes-video" in the EXT-X-MEDIA entry for this track. When you keep the default choice, Don't flag (DONT_FLAG), MediaConvert leaves this parameter out. The DVS flag can help with accessibility on Apple devices. For more information, see the Apple documentation.

$sel:audioRenditionSets:HlsSettings', hlsSettings_audioRenditionSets - List all the audio groups that are used with the video output stream. Input all the audio GROUP-IDs that are associated with the video, separated by ','.

$sel:iFrameOnlyManifest:HlsSettings', hlsSettings_iFrameOnlyManifest - Choose Include (INCLUDE) to have MediaConvert generate a child manifest that lists only the I-frames for this rendition, in addition to your regular manifest for this rendition. You might use this manifest as part of a workflow that creates preview functions for your video. MediaConvert adds both the I-frame only child manifest and the regular child manifest to the parent manifest. When you don't need the I-frame only child manifest, keep the default value Exclude (EXCLUDE).

$sel:audioGroupId:HlsSettings', hlsSettings_audioGroupId - Specifies the group to which the audio rendition belongs.

$sel:segmentModifier:HlsSettings', hlsSettings_segmentModifier - Use this setting to add an identifying string to the filename of each segment. The service adds this string between the name modifier and segment index number. You can use format identifiers in the string. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/using-variables-in-your-job-settings.html

$sel:audioOnlyContainer:HlsSettings', hlsSettings_audioOnlyContainer - Use this setting only in audio-only outputs. Choose MPEG-2 Transport Stream (M2TS) to create a file in an MPEG2-TS container. Keep the default value Automatic (AUTOMATIC) to create an audio-only file in a raw container. Regardless of the value that you specify here, if this output has video, the service will place the output into an MPEG2-TS container.

$sel:audioTrackType:HlsSettings', hlsSettings_audioTrackType - Four types of audio-only tracks are supported: * Audio-Only Variant Stream - The client can play back this audio-only stream instead of video in low-bandwidth scenarios. Represented as an EXT-X-STREAM-INF in the HLS manifest. * Alternate Audio, Auto Select, Default - Alternate rendition that the client should try to play back by default. Represented as an EXT-X-MEDIA in the HLS manifest with DEFAULT=YES, AUTOSELECT=YES. * Alternate Audio, Auto Select, Not Default - Alternate rendition that the client may try to play back by default. Represented as an EXT-X-MEDIA in the HLS manifest with DEFAULT=NO, AUTOSELECT=YES. * Alternate Audio, not Auto Select - Alternate rendition that the client will not try to play back by default. Represented as an EXT-X-MEDIA in the HLS manifest with DEFAULT=NO, AUTOSELECT=NO.

hlsSettings_descriptiveVideoServiceFlag :: Lens' HlsSettings (Maybe HlsDescriptiveVideoServiceFlag) Source #

Specify whether to flag this audio track as descriptive video service (DVS) in your HLS parent manifest. When you choose Flag (FLAG), MediaConvert includes the parameter CHARACTERISTICS="public.accessibility.describes-video" in the EXT-X-MEDIA entry for this track. When you keep the default choice, Don't flag (DONT_FLAG), MediaConvert leaves this parameter out. The DVS flag can help with accessibility on Apple devices. For more information, see the Apple documentation.

hlsSettings_audioRenditionSets :: Lens' HlsSettings (Maybe Text) Source #

List all the audio groups that are used with the video output stream. Input all the audio GROUP-IDs that are associated with the video, separated by ','.

hlsSettings_iFrameOnlyManifest :: Lens' HlsSettings (Maybe HlsIFrameOnlyManifest) Source #

Choose Include (INCLUDE) to have MediaConvert generate a child manifest that lists only the I-frames for this rendition, in addition to your regular manifest for this rendition. You might use this manifest as part of a workflow that creates preview functions for your video. MediaConvert adds both the I-frame only child manifest and the regular child manifest to the parent manifest. When you don't need the I-frame only child manifest, keep the default value Exclude (EXCLUDE).

hlsSettings_audioGroupId :: Lens' HlsSettings (Maybe Text) Source #

Specifies the group to which the audio rendition belongs.

hlsSettings_segmentModifier :: Lens' HlsSettings (Maybe Text) Source #

Use this setting to add an identifying string to the filename of each segment. The service adds this string between the name modifier and segment index number. You can use format identifiers in the string. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/using-variables-in-your-job-settings.html

hlsSettings_audioOnlyContainer :: Lens' HlsSettings (Maybe HlsAudioOnlyContainer) Source #

Use this setting only in audio-only outputs. Choose MPEG-2 Transport Stream (M2TS) to create a file in an MPEG2-TS container. Keep the default value Automatic (AUTOMATIC) to create an audio-only file in a raw container. Regardless of the value that you specify here, if this output has video, the service will place the output into an MPEG2-TS container.

hlsSettings_audioTrackType :: Lens' HlsSettings (Maybe HlsAudioTrackType) Source #

Four types of audio-only tracks are supported: * Audio-Only Variant Stream - The client can play back this audio-only stream instead of video in low-bandwidth scenarios. Represented as an EXT-X-STREAM-INF in the HLS manifest. * Alternate Audio, Auto Select, Default - Alternate rendition that the client should try to play back by default. Represented as an EXT-X-MEDIA in the HLS manifest with DEFAULT=YES, AUTOSELECT=YES. * Alternate Audio, Auto Select, Not Default - Alternate rendition that the client may try to play back by default. Represented as an EXT-X-MEDIA in the HLS manifest with DEFAULT=NO, AUTOSELECT=YES. * Alternate Audio, not Auto Select - Alternate rendition that the client will not try to play back by default. Represented as an EXT-X-MEDIA in the HLS manifest with DEFAULT=NO, AUTOSELECT=NO.
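 
As a sketch only: building HlsSettings for an alternate audio rendition with the (&) and (?~) operators from Control.Lens. The HlsAudioTrackType_ALTERNATE_AUDIO_AUTO_SELECT_DEFAULT pattern synonym is assumed to follow the library's generated naming, and the group ID is hypothetical; the matching video output would list the same group ID in hlsSettings_audioRenditionSets.

    {-# LANGUAGE OverloadedStrings #-}

    import Amazonka.MediaConvert.Types
    import Control.Lens ((&), (?~))

    -- HLS settings for an alternate audio rendition in the "audio-stereo" group,
    -- flagged as the default, auto-selected track.
    audioRenditionHls :: HlsSettings
    audioRenditionHls =
      newHlsSettings
        & hlsSettings_audioGroupId ?~ "audio-stereo"   -- hypothetical GROUP-ID
        & hlsSettings_segmentModifier ?~ "_audio"
        & hlsSettings_audioTrackType
            ?~ HlsAudioTrackType_ALTERNATE_AUDIO_AUTO_SELECT_DEFAULT -- assumed pattern synonym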

HopDestination

data HopDestination Source #

Optional. Configuration for a destination queue to which the job can hop once a customer-defined minimum wait time has passed.

See: newHopDestination smart constructor.

Constructors

HopDestination' 

Fields

  • priority :: Maybe Int

    Optional. When you set up a job to use queue hopping, you can specify a different relative priority for the job in the destination queue. If you don't specify, the relative priority will remain the same as in the previous queue.

  • queue :: Maybe Text

    Optional unless the job is submitted on the default queue. When you set up a job to use queue hopping, you can specify a destination queue. This queue cannot be the original queue to which the job is submitted. If the original queue isn't the default queue and you don't specify the destination queue, the job will move to the default queue.

  • waitMinutes :: Maybe Int

    Required for setting up a job to use queue hopping. Minimum wait time in minutes until the job can hop to the destination queue. Valid range is 1 to 1440 minutes, inclusive.

Instances

Instances details
Eq HopDestination Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HopDestination

Read HopDestination Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HopDestination

Show HopDestination Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HopDestination

Generic HopDestination Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HopDestination

Associated Types

type Rep HopDestination :: Type -> Type #

NFData HopDestination Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HopDestination

Methods

rnf :: HopDestination -> () #

Hashable HopDestination Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HopDestination

ToJSON HopDestination Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HopDestination

FromJSON HopDestination Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HopDestination

type Rep HopDestination Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.HopDestination

type Rep HopDestination = D1 ('MetaData "HopDestination" "Amazonka.MediaConvert.Types.HopDestination" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "HopDestination'" 'PrefixI 'True) (S1 ('MetaSel ('Just "priority") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)) :*: (S1 ('MetaSel ('Just "queue") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "waitMinutes") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)))))

newHopDestination :: HopDestination Source #

Create a value of HopDestination with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:priority:HopDestination', hopDestination_priority - Optional. When you set up a job to use queue hopping, you can specify a different relative priority for the job in the destination queue. If you don't specify, the relative priority will remain the same as in the previous queue.

$sel:queue:HopDestination', hopDestination_queue - Optional unless the job is submitted on the default queue. When you set up a job to use queue hopping, you can specify a destination queue. This queue cannot be the original queue to which the job is submitted. If the original queue isn't the default queue and you don't specify the destination queue, the job will move to the default queue.

$sel:waitMinutes:HopDestination', hopDestination_waitMinutes - Required for setting up a job to use queue hopping. Minimum wait time in minutes until the job can hop to the destination queue. Valid range is 1 to 1440 minutes, inclusive.

hopDestination_priority :: Lens' HopDestination (Maybe Int) Source #

Optional. When you set up a job to use queue hopping, you can specify a different relative priority for the job in the destination queue. If you don't specify, the relative priority will remain the same as in the previous queue.

hopDestination_queue :: Lens' HopDestination (Maybe Text) Source #

Optional unless the job is submitted on the default queue. When you set up a job to use queue hopping, you can specify a destination queue. This queue cannot be the original queue to which the job is submitted. If the original queue isn't the default queue and you don't specify the destination queue, the job will move to the default queue.

hopDestination_waitMinutes :: Lens' HopDestination (Maybe Int) Source #

Required for setting up a job to use queue hopping. Minimum wait time in minutes until the job can hop to the destination queue. Valid range is 1 to 1440 minutes, inclusive.
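 
A short sketch, again using (&) and (?~) from Control.Lens; the queue ARN and numbers are hypothetical. waitMinutes is the only field that queue hopping requires:

    {-# LANGUAGE OverloadedStrings #-}

    import Amazonka.MediaConvert.Types
    import Control.Lens ((&), (?~))

    -- Hop to a backup queue after 15 minutes, with a relative priority of 25 there.
    backupHop :: HopDestination
    backupHop =
      newHopDestination
        & hopDestination_waitMinutes ?~ 15
        & hopDestination_queue ?~ "arn:aws:mediaconvert:us-east-1:111122223333:queues/Backup" -- hypothetical ARN
        & hopDestination_priority ?~ 25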

Id3Insertion

data Id3Insertion Source #

To insert ID3 tags in your output, specify two values. Use ID3 tag (Id3) to specify the base 64 encoded string and use Timecode (TimeCode) to specify the time when the tag should be inserted. To insert multiple ID3 tags in your output, create multiple instances of ID3 insertion (Id3Insertion).

See: newId3Insertion smart constructor.

Constructors

Id3Insertion' 

Fields

  • id3 :: Maybe Text

    Use ID3 tag (Id3) to provide a tag value in base64-encoded format.

  • timecode :: Maybe Text

    Provide a Timecode (TimeCode) in HH:MM:SS:FF or HH:MM:SS;FF format.

Instances

Instances details
Eq Id3Insertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Id3Insertion

Read Id3Insertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Id3Insertion

Show Id3Insertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Id3Insertion

Generic Id3Insertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Id3Insertion

Associated Types

type Rep Id3Insertion :: Type -> Type #

NFData Id3Insertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Id3Insertion

Methods

rnf :: Id3Insertion -> () #

Hashable Id3Insertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Id3Insertion

ToJSON Id3Insertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Id3Insertion

FromJSON Id3Insertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Id3Insertion

type Rep Id3Insertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Id3Insertion

type Rep Id3Insertion = D1 ('MetaData "Id3Insertion" "Amazonka.MediaConvert.Types.Id3Insertion" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "Id3Insertion'" 'PrefixI 'True) (S1 ('MetaSel ('Just "id3") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "timecode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))

newId3Insertion :: Id3Insertion Source #

Create a value of Id3Insertion with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:id3:Id3Insertion', id3Insertion_id3 - Use ID3 tag (Id3) to provide a tag value in base64-encoded format.

$sel:timecode:Id3Insertion', id3Insertion_timecode - Provide a Timecode (TimeCode) in HH:MM:SS:FF or HH:MM:SS;FF format.

id3Insertion_id3 :: Lens' Id3Insertion (Maybe Text) Source #

Use ID3 tag (Id3) to provide a tag value in base64-encoded format.

id3Insertion_timecode :: Lens' Id3Insertion (Maybe Text) Source #

Provide a Timecode (TimeCode) in HH:MM:SS:FF or HH:MM:SS;FF format.
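 
For example, a single tag insertion sketched with (&) and (?~) from Control.Lens; the base64 payload here is a placeholder, not a meaningful ID3 frame:

    {-# LANGUAGE OverloadedStrings #-}

    import Amazonka.MediaConvert.Types
    import Control.Lens ((&), (?~))

    -- Insert one ID3 tag (value already base64-encoded) at the one-hour mark.
    adMarker :: Id3Insertion
    adMarker =
      newId3Insertion
        & id3Insertion_id3 ?~ "SGVsbG8="        -- placeholder base64 payload
        & id3Insertion_timecode ?~ "01:00:00:00"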

ImageInserter

data ImageInserter Source #

Use the image inserter feature to include a graphic overlay on your video. Enable or disable this feature for each input or output individually. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/graphic-overlay.html. This setting is disabled by default.

See: newImageInserter smart constructor.

Constructors

ImageInserter' 

Fields

Instances

Instances details
Eq ImageInserter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImageInserter

Read ImageInserter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImageInserter

Show ImageInserter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImageInserter

Generic ImageInserter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImageInserter

Associated Types

type Rep ImageInserter :: Type -> Type #

NFData ImageInserter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImageInserter

Methods

rnf :: ImageInserter -> () #

Hashable ImageInserter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImageInserter

ToJSON ImageInserter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImageInserter

FromJSON ImageInserter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImageInserter

type Rep ImageInserter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImageInserter

type Rep ImageInserter = D1 ('MetaData "ImageInserter" "Amazonka.MediaConvert.Types.ImageInserter" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "ImageInserter'" 'PrefixI 'True) (S1 ('MetaSel ('Just "insertableImages") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [InsertableImage]))))

newImageInserter :: ImageInserter Source #

Create a value of ImageInserter with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:insertableImages:ImageInserter', imageInserter_insertableImages - Specify the images that you want to overlay on your video. The images must be PNG or TGA files.

imageInserter_insertableImages :: Lens' ImageInserter (Maybe [InsertableImage]) Source #

Specify the images that you want to overlay on your video. The images must be PNG or TGA files.
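 
A sketch of wrapping overlays into an ImageInserter with (&) and (?~) from Control.Lens. Building each InsertableImage (file location, position, layer, timing) is assumed to follow the same smart-constructor and lens pattern used throughout this module:

    import Amazonka.MediaConvert.Types
    import Control.Lens ((&), (?~))

    -- Attach a list of already-built PNG or TGA overlays to an input or output.
    withOverlays :: [InsertableImage] -> ImageInserter
    withOverlays overlays =
      newImageInserter
        & imageInserter_insertableImages ?~ overlays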

ImscDestinationSettings

data ImscDestinationSettings Source #

Settings related to IMSC captions. IMSC is a sidecar format that holds captions in a file that is separate from the video container. Set up sidecar captions in the same output group, but different output from your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/ttml-and-webvtt-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to IMSC.

See: newImscDestinationSettings smart constructor.

Constructors

ImscDestinationSettings' 

Fields

  • stylePassthrough :: Maybe ImscStylePassthrough

    Keep this setting enabled to have MediaConvert use the font style and position information from the captions source in the output. This option is available only when your input captions are IMSC, SMPTE-TT, or TTML. Disable this setting for simplified output captions.

Instances

Instances details
Eq ImscDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscDestinationSettings

Read ImscDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscDestinationSettings

Show ImscDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscDestinationSettings

Generic ImscDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscDestinationSettings

Associated Types

type Rep ImscDestinationSettings :: Type -> Type #

NFData ImscDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscDestinationSettings

Methods

rnf :: ImscDestinationSettings -> () #

Hashable ImscDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscDestinationSettings

ToJSON ImscDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscDestinationSettings

FromJSON ImscDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscDestinationSettings

type Rep ImscDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ImscDestinationSettings

type Rep ImscDestinationSettings = D1 ('MetaData "ImscDestinationSettings" "Amazonka.MediaConvert.Types.ImscDestinationSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "ImscDestinationSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "stylePassthrough") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ImscStylePassthrough))))

newImscDestinationSettings :: ImscDestinationSettings Source #

Create a value of ImscDestinationSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:stylePassthrough:ImscDestinationSettings', imscDestinationSettings_stylePassthrough - Keep this setting enabled to have MediaConvert use the font style and position information from the captions source in the output. This option is available only when your input captions are IMSC, SMPTE-TT, or TTML. Disable this setting for simplified output captions.

imscDestinationSettings_stylePassthrough :: Lens' ImscDestinationSettings (Maybe ImscStylePassthrough) Source #

Keep this setting enabled to have MediaConvert use the font style and position information from the captions source in the output. This option is available only when your input captions are IMSC, SMPTE-TT, or TTML. Disable this setting for simplified output captions.
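 
A minimal sketch with (&) and (?~) from Control.Lens, assuming the generated ImscStylePassthrough_ENABLED pattern synonym; it keeps the source styling in the IMSC sidecar output:

    import Amazonka.MediaConvert.Types
    import Control.Lens ((&), (?~))

    -- Pass the source captions' font style and position through to the output.
    styledImsc :: ImscDestinationSettings
    styledImsc =
      newImscDestinationSettings
        & imscDestinationSettings_stylePassthrough ?~ ImscStylePassthrough_ENABLED -- assumed pattern synonym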

Input

data Input Source #

Use inputs to define the source files used in your transcoding job. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/specify-input-settings.html. You can use multiple video inputs to do input stitching. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/assembling-multiple-inputs-and-input-clips.html

See: newInput smart constructor.

Constructors

Input' 

Fields

  • videoSelector :: Maybe VideoSelector

    Input video selectors contain the video settings for the input. Each of your inputs can have up to one video selector.

  • supplementalImps :: Maybe [Text]

    Provide a list of any necessary supplemental IMPs. You need supplemental IMPs if the CPL that you're using for your input is in an incomplete IMP. Specify either the supplemental IMP directories with a trailing slash or the ASSETMAP.xml files. For example ["s3://bucket/ov/", "s3://bucket/vf2/ASSETMAP.xml"]. You don't need to specify the IMP that contains your input CPL, because the service automatically detects it.

  • programNumber :: Maybe Natural

    Use Program (programNumber) to select a specific program from within a multi-program transport stream. Note that Quad 4K is not currently supported. Default is the first program within the transport stream. If the program you specify doesn't exist, the transcoding service will use this default.

  • audioSelectorGroups :: Maybe (HashMap Text AudioSelectorGroup)

    Use audio selector groups to combine multiple sidecar audio inputs so that you can assign them to a single output audio tab (AudioDescription). Note that, if you're working with embedded audio, it's simpler to assign multiple input tracks into a single audio selector rather than use an audio selector group.

  • timecodeSource :: Maybe InputTimecodeSource

    Use this Timecode source setting, located under the input settings (InputTimecodeSource), to specify how the service counts input video frames. This input frame count affects only the behavior of features that apply to a single input at a time, such as input clipping and synchronizing some captions formats. Choose Embedded (EMBEDDED) to use the timecodes in your input video. Choose Start at zero (ZEROBASED) to start the first frame at zero. Choose Specified start (SPECIFIEDSTART) to start the first frame at the timecode that you specify in the setting Start timecode (timecodeStart). If you don't specify a value for Timecode source, the service will use Embedded by default. For more information about timecodes, see https://docs.aws.amazon.com/console/mediaconvert/timecode.

  • audioSelectors :: Maybe (HashMap Text AudioSelector)

    Use Audio selectors (AudioSelectors) to specify a track or set of tracks from the input that you will use in your outputs. You can use multiple Audio selectors per input.

  • decryptionSettings :: Maybe InputDecryptionSettings

    Settings for decrypting any input files that you encrypt before you upload them to Amazon S3. MediaConvert can decrypt files only when you use AWS Key Management Service (KMS) to encrypt the data key that you use to encrypt your content.

  • deblockFilter :: Maybe InputDeblockFilter

    Enable Deblock (InputDeblockFilter) to produce smoother motion in the output. Default is disabled. Only manually controllable for MPEG2 and uncompressed video inputs.

  • inputClippings :: Maybe [InputClipping]

    (InputClippings) contains sets of start and end times that together specify a portion of the input to be used in the outputs. If you provide only a start time, the clip will be the entire input from that point to the end. If you provide only an end time, it will be the entire input up to that point. When you specify more than one input clip, the transcoding service creates the job outputs by stringing the clips together in the order you specify them.

  • crop :: Maybe Rectangle

    Use Cropping selection (crop) to specify the video area that the service will include in the output video frame. If you specify a value here, it will override any value that you specify in the output setting Cropping selection (crop).

  • denoiseFilter :: Maybe InputDenoiseFilter

    Enable Denoise (InputDenoiseFilter) to filter noise from the input. Default is disabled. Only applicable to MPEG2, H.264, H.265, and uncompressed video inputs.

  • imageInserter :: Maybe ImageInserter

    Enable the image inserter feature to include a graphic overlay on your video. Enable or disable this feature for each input individually. This setting is disabled by default.

  • filterStrength :: Maybe Int

    Use Filter strength (FilterStrength) to adjust the magnitude of the input filter settings (Deblock and Denoise). The range is -5 to 5. Default is 0.

  • psiControl :: Maybe InputPsiControl

    Set PSI control (InputPsiControl) for transport stream inputs to specify which data the demux process scans. * Ignore PSI - Scan all PIDs for audio and video. * Use PSI - Scan only PSI data.

  • captionSelectors :: Maybe (HashMap Text CaptionSelector)

    Use captions selectors to specify the captions data from your input that you use in your outputs. You can use up to 20 captions selectors per input.

  • fileInput :: Maybe Text

    Specify the source file for your transcoding job. You can use multiple inputs in a single job. The service concatenates these inputs, in the order that you specify them in the job, to create the outputs. If your input format is IMF, specify your input by providing the path to your CPL. For example, "s3://bucket/vf/cpl.xml". If the CPL is in an incomplete IMP, make sure to use *Supplemental IMPs* (SupplementalImps) to specify any supplemental IMPs that contain assets referenced by the CPL.

  • timecodeStart :: Maybe Text

    Specify the timecode that you want the service to use for this input's initial frame. To use this setting, you must set the Timecode source setting, located under the input settings (InputTimecodeSource), to Specified start (SPECIFIEDSTART). For more information about timecodes, see https://docs.aws.amazon.com/console/mediaconvert/timecode.

  • inputScanType :: Maybe InputScanType

    When you have a progressive segmented frame (PsF) input, use this setting to flag the input as PsF. MediaConvert doesn't automatically detect PsF. Therefore, flagging your input as PsF results in better preservation of video quality when you do deinterlacing and frame rate conversion. If you don't specify, the default value is Auto (AUTO). Auto is the correct setting for all inputs that are not PsF. Don't set this value to PsF when your input is interlaced. Doing so creates horizontal interlacing artifacts.

  • position :: Maybe Rectangle

    Use Selection placement (position) to define the video area in your output frame. The area outside of the rectangle that you specify here is black. If you specify a value here, it will override any value that you specify in the output setting Selection placement (position). If you specify a value here, this will override any AFD values in your input, even if you set Respond to AFD (RespondToAfd) to Respond (RESPOND). If you specify a value here, this will ignore anything that you specify for the setting Scaling Behavior (scalingBehavior).

  • filterEnable :: Maybe InputFilterEnable

    Specify how the transcoding service applies the denoise and deblock filters. You must also enable the filters separately, with Denoise (InputDenoiseFilter) and Deblock (InputDeblockFilter). * Auto - The transcoding service determines whether to apply filtering, depending on input type and quality. * Disable - The input is not filtered. This is true even if you use the API to enable them in (InputDeblockFilter) and (InputDenoiseFilter). * Force - The input is filtered regardless of input type.

Instances

Instances details
Eq Input Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Input

Methods

(==) :: Input -> Input -> Bool #

(/=) :: Input -> Input -> Bool #

Read Input Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Input

Show Input Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Input

Methods

showsPrec :: Int -> Input -> ShowS #

show :: Input -> String #

showList :: [Input] -> ShowS #

Generic Input Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Input

Associated Types

type Rep Input :: Type -> Type #

Methods

from :: Input -> Rep Input x #

to :: Rep Input x -> Input #

NFData Input Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Input

Methods

rnf :: Input -> () #

Hashable Input Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Input

Methods

hashWithSalt :: Int -> Input -> Int #

hash :: Input -> Int #

ToJSON Input Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Input

FromJSON Input Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Input

type Rep Input Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Input

type Rep Input = D1 ('MetaData "Input" "Amazonka.MediaConvert.Types.Input" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "Input'" 'PrefixI 'True) ((((S1 ('MetaSel ('Just "videoSelector") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe VideoSelector)) :*: S1 ('MetaSel ('Just "supplementalImps") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [Text]))) :*: (S1 ('MetaSel ('Just "programNumber") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "audioSelectorGroups") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe (HashMap Text AudioSelectorGroup))) :*: S1 ('MetaSel ('Just "timecodeSource") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe InputTimecodeSource))))) :*: ((S1 ('MetaSel ('Just "audioSelectors") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe (HashMap Text AudioSelector))) :*: S1 ('MetaSel ('Just "decryptionSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe InputDecryptionSettings))) :*: (S1 ('MetaSel ('Just "deblockFilter") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe InputDeblockFilter)) :*: (S1 ('MetaSel ('Just "inputClippings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [InputClipping])) :*: S1 ('MetaSel ('Just "crop") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Rectangle)))))) :*: (((S1 ('MetaSel ('Just "denoiseFilter") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe InputDenoiseFilter)) :*: S1 ('MetaSel ('Just "imageInserter") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ImageInserter))) :*: (S1 ('MetaSel ('Just "filterStrength") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)) :*: (S1 ('MetaSel ('Just "psiControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe InputPsiControl)) :*: S1 ('MetaSel ('Just "captionSelectors") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe (HashMap Text CaptionSelector)))))) :*: ((S1 ('MetaSel ('Just "fileInput") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "timecodeStart") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "inputScanType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe InputScanType)) :*: (S1 ('MetaSel ('Just "position") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Rectangle)) :*: S1 ('MetaSel ('Just "filterEnable") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe InputFilterEnable))))))))

newInput :: Input Source #

Create a value of Input with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:videoSelector:Input', input_videoSelector - Input video selectors contain the video settings for the input. Each of your inputs can have up to one video selector.

$sel:supplementalImps:Input', input_supplementalImps - Provide a list of any necessary supplemental IMPs. You need supplemental IMPs if the CPL that you're using for your input is in an incomplete IMP. Specify either the supplemental IMP directories with a trailing slash or the ASSETMAP.xml files. For example ["s3://bucket/ov/", "s3://bucket/vf2/ASSETMAP.xml"]. You don't need to specify the IMP that contains your input CPL, because the service automatically detects it.

$sel:programNumber:Input', input_programNumber - Use Program (programNumber) to select a specific program from within a multi-program transport stream. Note that Quad 4K is not currently supported. Default is the first program within the transport stream. If the program you specify doesn't exist, the transcoding service will use this default.

$sel:audioSelectorGroups:Input', input_audioSelectorGroups - Use audio selector groups to combine multiple sidecar audio inputs so that you can assign them to a single output audio tab (AudioDescription). Note that, if you're working with embedded audio, it's simpler to assign multiple input tracks into a single audio selector rather than use an audio selector group.

$sel:timecodeSource:Input', input_timecodeSource - Use this Timecode source setting, located under the input settings (InputTimecodeSource), to specify how the service counts input video frames. This input frame count affects only the behavior of features that apply to a single input at a time, such as input clipping and synchronizing some captions formats. Choose Embedded (EMBEDDED) to use the timecodes in your input video. Choose Start at zero (ZEROBASED) to start the first frame at zero. Choose Specified start (SPECIFIEDSTART) to start the first frame at the timecode that you specify in the setting Start timecode (timecodeStart). If you don't specify a value for Timecode source, the service will use Embedded by default. For more information about timecodes, see https://docs.aws.amazon.com/console/mediaconvert/timecode.

$sel:audioSelectors:Input', input_audioSelectors - Use Audio selectors (AudioSelectors) to specify a track or set of tracks from the input that you will use in your outputs. You can use multiple Audio selectors per input.

$sel:decryptionSettings:Input', input_decryptionSettings - Settings for decrypting any input files that you encrypt before you upload them to Amazon S3. MediaConvert can decrypt files only when you use AWS Key Management Service (KMS) to encrypt the data key that you use to encrypt your content.

$sel:deblockFilter:Input', input_deblockFilter - Enable Deblock (InputDeblockFilter) to produce smoother motion in the output. Default is disabled. Only manually controllable for MPEG2 and uncompressed video inputs.

$sel:inputClippings:Input', input_inputClippings - (InputClippings) contains sets of start and end times that together specify a portion of the input to be used in the outputs. If you provide only a start time, the clip will be the entire input from that point to the end. If you provide only an end time, it will be the entire input up to that point. When you specify more than one input clip, the transcoding service creates the job outputs by stringing the clips together in the order you specify them.

$sel:crop:Input', input_crop - Use Cropping selection (crop) to specify the video area that the service will include in the output video frame. If you specify a value here, it will override any value that you specify in the output setting Cropping selection (crop).

$sel:denoiseFilter:Input', input_denoiseFilter - Enable Denoise (InputDenoiseFilter) to filter noise from the input. Default is disabled. Only applicable to MPEG2, H.264, H.265, and uncompressed video inputs.

$sel:imageInserter:Input', input_imageInserter - Enable the image inserter feature to include a graphic overlay on your video. Enable or disable this feature for each input individually. This setting is disabled by default.

$sel:filterStrength:Input', input_filterStrength - Use Filter strength (FilterStrength) to adjust the magnitude of the input filter settings (Deblock and Denoise). The range is -5 to 5. Default is 0.

$sel:psiControl:Input', input_psiControl - Set PSI control (InputPsiControl) for transport stream inputs to specify which data the demux process scans. * Ignore PSI - Scan all PIDs for audio and video. * Use PSI - Scan only PSI data.

$sel:captionSelectors:Input', input_captionSelectors - Use captions selectors to specify the captions data from your input that you use in your outputs. You can use up to 20 captions selectors per input.

$sel:fileInput:Input', input_fileInput - Specify the source file for your transcoding job. You can use multiple inputs in a single job. The service concatenates these inputs, in the order that you specify them in the job, to create the outputs. If your input format is IMF, specify your input by providing the path to your CPL. For example, "s3://bucket/vf/cpl.xml". If the CPL is in an incomplete IMP, make sure to use *Supplemental IMPs* (SupplementalImps) to specify any supplemental IMPs that contain assets referenced by the CPL.

$sel:timecodeStart:Input', input_timecodeStart - Specify the timecode that you want the service to use for this input's initial frame. To use this setting, you must set the Timecode source setting, located under the input settings (InputTimecodeSource), to Specified start (SPECIFIEDSTART). For more information about timecodes, see https://docs.aws.amazon.com/console/mediaconvert/timecode.

$sel:inputScanType:Input', input_inputScanType - When you have a progressive segmented frame (PsF) input, use this setting to flag the input as PsF. MediaConvert doesn't automatically detect PsF. Therefore, flagging your input as PsF results in better preservation of video quality when you do deinterlacing and frame rate conversion. If you don't specify, the default value is Auto (AUTO). Auto is the correct setting for all inputs that are not PsF. Don't set this value to PsF when your input is interlaced. Doing so creates horizontal interlacing artifacts.

$sel:position:Input', input_position - Use Selection placement (position) to define the video area in your output frame. The area outside of the rectangle that you specify here is black. If you specify a value here, it will override any value that you specify in the output setting Selection placement (position). If you specify a value here, this will override any AFD values in your input, even if you set Respond to AFD (RespondToAfd) to Respond (RESPOND). If you specify a value here, this will ignore anything that you specify for the setting Scaling Behavior (scalingBehavior).

$sel:filterEnable:Input', input_filterEnable - Specify how the transcoding service applies the denoise and deblock filters. You must also enable the filters separately, with Denoise (InputDenoiseFilter) and Deblock (InputDeblockFilter). * Auto - The transcoding service determines whether to apply filtering, depending on input type and quality. * Disable - The input is not filtered. This is true even if you use the API to enable them in (InputDeblockFilter) and (InputDenoiseFilter). * Force - The input is filtered regardless of input type.

input_videoSelector :: Lens' Input (Maybe VideoSelector) Source #

Input video selectors contain the video settings for the input. Each of your inputs can have up to one video selector.

input_supplementalImps :: Lens' Input (Maybe [Text]) Source #

Provide a list of any necessary supplemental IMPs. You need supplemental IMPs if the CPL that you're using for your input is in an incomplete IMP. Specify either the supplemental IMP directories with a trailing slash or the ASSETMAP.xml files. For example ["s3://bucket/ov/", "s3://bucket/vf2/ASSETMAP.xml"]. You don't need to specify the IMP that contains your input CPL, because the service automatically detects it.

input_programNumber :: Lens' Input (Maybe Natural) Source #

Use Program (programNumber) to select a specific program from within a multi-program transport stream. Note that Quad 4K is not currently supported. Default is the first program within the transport stream. If the program you specify doesn't exist, the transcoding service will use this default.

input_audioSelectorGroups :: Lens' Input (Maybe (HashMap Text AudioSelectorGroup)) Source #

Use audio selector groups to combine multiple sidecar audio inputs so that you can assign them to a single output audio tab (AudioDescription). Note that, if you're working with embedded audio, it's simpler to assign multiple input tracks into a single audio selector rather than use an audio selector group.

input_timecodeSource :: Lens' Input (Maybe InputTimecodeSource) Source #

Use this Timecode source setting, located under the input settings (InputTimecodeSource), to specify how the service counts input video frames. This input frame count affects only the behavior of features that apply to a single input at a time, such as input clipping and synchronizing some captions formats. Choose Embedded (EMBEDDED) to use the timecodes in your input video. Choose Start at zero (ZEROBASED) to start the first frame at zero. Choose Specified start (SPECIFIEDSTART) to start the first frame at the timecode that you specify in the setting Start timecode (timecodeStart). If you don't specify a value for Timecode source, the service will use Embedded by default. For more information about timecodes, see https://docs.aws.amazon.com/console/mediaconvert/timecode.

input_audioSelectors :: Lens' Input (Maybe (HashMap Text AudioSelector)) Source #

Use Audio selectors (AudioSelectors) to specify a track or set of tracks from the input that you will use in your outputs. You can use multiple Audio selectors per input.

input_decryptionSettings :: Lens' Input (Maybe InputDecryptionSettings) Source #

Settings for decrypting any input files that you encrypt before you upload them to Amazon S3. MediaConvert can decrypt files only when you use AWS Key Management Service (KMS) to encrypt the data key that you use to encrypt your content.

input_deblockFilter :: Lens' Input (Maybe InputDeblockFilter) Source #

Enable Deblock (InputDeblockFilter) to produce smoother motion in the output. Default is disabled. Only manually controllable for MPEG2 and uncompressed video inputs.

input_inputClippings :: Lens' Input (Maybe [InputClipping]) Source #

(InputClippings) contains sets of start and end times that together specify a portion of the input to be used in the outputs. If you provide only a start time, the clip will be the entire input from that point to the end. If you provide only an end time, it will be the entire input up to that point. When you specify more than one input clip, the transcoding service creates the job outputs by stringing the clips together in the order you specify them.

input_crop :: Lens' Input (Maybe Rectangle) Source #

Use Cropping selection (crop) to specify the video area that the service will include in the output video frame. If you specify a value here, it will override any value that you specify in the output setting Cropping selection (crop).

input_denoiseFilter :: Lens' Input (Maybe InputDenoiseFilter) Source #

Enable Denoise (InputDenoiseFilter) to filter noise from the input. Default is disabled. Only applicable to MPEG2, H.264, H.265, and uncompressed video inputs.

input_imageInserter :: Lens' Input (Maybe ImageInserter) Source #

Enable the image inserter feature to include a graphic overlay on your video. Enable or disable this feature for each input individually. This setting is disabled by default.

input_filterStrength :: Lens' Input (Maybe Int) Source #

Use Filter strength (FilterStrength) to adjust the magnitude of the input filter settings (Deblock and Denoise). The range is -5 to 5. Default is 0.

input_psiControl :: Lens' Input (Maybe InputPsiControl) Source #

Set PSI control (InputPsiControl) for transport stream inputs to specify which data the demux process scans. * Ignore PSI - Scan all PIDs for audio and video. * Use PSI - Scan only PSI data.

input_captionSelectors :: Lens' Input (Maybe (HashMap Text CaptionSelector)) Source #

Use captions selectors to specify the captions data from your input that you use in your outputs. You can use up to 20 captions selectors per input.

input_fileInput :: Lens' Input (Maybe Text) Source #

Specify the source file for your transcoding job. You can use multiple inputs in a single job. The service concatenates these inputs, in the order that you specify them in the job, to create the outputs. If your input format is IMF, specify your input by providing the path to your CPL. For example, "s3://bucket/vf/cpl.xml". If the CPL is in an incomplete IMP, make sure to use *Supplemental IMPs* (SupplementalImps) to specify any supplemental IMPs that contain assets referenced by the CPL.

input_timecodeStart :: Lens' Input (Maybe Text) Source #

Specify the timecode that you want the service to use for this input's initial frame. To use this setting, you must set the Timecode source setting, located under the input settings (InputTimecodeSource), to Specified start (SPECIFIEDSTART). For more information about timecodes, see https://docs.aws.amazon.com/console/mediaconvert/timecode.

input_inputScanType :: Lens' Input (Maybe InputScanType) Source #

When you have a progressive segmented frame (PsF) input, use this setting to flag the input as PsF. MediaConvert doesn't automatically detect PsF. Therefore, flagging your input as PsF results in better preservation of video quality when you do deinterlacing and frame rate conversion. If you don't specify, the default value is Auto (AUTO). Auto is the correct setting for all inputs that are not PsF. Don't set this value to PsF when your input is interlaced. Doing so creates horizontal interlacing artifacts.

input_position :: Lens' Input (Maybe Rectangle) Source #

Use Selection placement (position) to define the video area in your output frame. The area outside of the rectangle that you specify here is black. If you specify a value here, it will override any value that you specify in the output setting Selection placement (position). If you specify a value here, this will override any AFD values in your input, even if you set Respond to AFD (RespondToAfd) to Respond (RESPOND). If you specify a value here, this will ignore anything that you specify for the setting Scaling Behavior (scalingBehavior).

input_filterEnable :: Lens' Input (Maybe InputFilterEnable) Source #

Specify how the transcoding service applies the denoise and deblock filters. You must also enable the filters separately, with Denoise (InputDenoiseFilter) and Deblock (InputDeblockFilter). * Auto - The transcoding service determines whether to apply filtering, depending on input type and quality. * Disable - The input is not filtered. This is true even if you use the API to enable them in (InputDeblockFilter) and (InputDenoiseFilter). * Force - The input is filtered regardless of input type.
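 
A sketch of a single-file input, using (&) and (?~) from Control.Lens. The S3 URI is hypothetical, and InputTimecodeSource_ZEROBASED is assumed to be the generated pattern synonym for the ZEROBASED timecode source; with it, the clip below keeps the first minute of the file (see InputClipping, next):

    {-# LANGUAGE OverloadedStrings #-}

    import Amazonka.MediaConvert.Types
    import Control.Lens ((&), (?~))

    -- One MP4 input, counting frames from zero and clipped to its first minute.
    firstMinuteInput :: Input
    firstMinuteInput =
      newInput
        & input_fileInput ?~ "s3://DOC-EXAMPLE-BUCKET/source.mp4"  -- hypothetical location
        & input_timecodeSource ?~ InputTimecodeSource_ZEROBASED    -- assumed pattern synonym
        & input_inputClippings
            ?~ [ newInputClipping
                   & inputClipping_startTimecode ?~ "00:00:00:00"
                   & inputClipping_endTimecode ?~ "00:01:00:00"
               ]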

InputClipping

data InputClipping Source #

To transcode only portions of your input, include one input clip for each part of your input that you want in your output. All input clips that you specify will be included in every output of the job. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/assembling-multiple-inputs-and-input-clips.html.

See: newInputClipping smart constructor.

Constructors

InputClipping' 

Fields

  • endTimecode :: Maybe Text

    Set End timecode (EndTimecode) to the end of the portion of the input you are clipping. The frame corresponding to the End timecode value is included in the clip. Start timecode or End timecode may be left blank, but not both. Use the format HH:MM:SS:FF or HH:MM:SS;FF, where HH is the hour, MM is the minute, SS is the second, and FF is the frame number. When choosing this value, take into account your setting for timecode source under input settings (InputTimecodeSource). For example, if you have embedded timecodes that start at 01:00:00:00 and you want your clip to end six minutes into the video, use 01:06:00:00.

  • startTimecode :: Maybe Text

    Set Start timecode (StartTimecode) to the beginning of the portion of the input you are clipping. The frame corresponding to the Start timecode value is included in the clip. Start timecode or End timecode may be left blank, but not both. Use the format HH:MM:SS:FF or HH:MM:SS;FF, where HH is the hour, MM is the minute, SS is the second, and FF is the frame number. When choosing this value, take into account your setting for Input timecode source. For example, if you have embedded timecodes that start at 01:00:00:00 and you want your clip to begin five minutes into the video, use 01:05:00:00.

Instances

Instances details
Eq InputClipping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputClipping

Read InputClipping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputClipping

Show InputClipping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputClipping

Generic InputClipping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputClipping

Associated Types

type Rep InputClipping :: Type -> Type #

NFData InputClipping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputClipping

Methods

rnf :: InputClipping -> () #

Hashable InputClipping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputClipping

ToJSON InputClipping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputClipping

FromJSON InputClipping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputClipping

type Rep InputClipping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputClipping

type Rep InputClipping = D1 ('MetaData "InputClipping" "Amazonka.MediaConvert.Types.InputClipping" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "InputClipping'" 'PrefixI 'True) (S1 ('MetaSel ('Just "endTimecode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "startTimecode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))

newInputClipping :: InputClipping Source #

Create a value of InputClipping with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:endTimecode:InputClipping', inputClipping_endTimecode - Set End timecode (EndTimecode) to the end of the portion of the input you are clipping. The frame corresponding to the End timecode value is included in the clip. Start timecode or End timecode may be left blank, but not both. Use the format HH:MM:SS:FF or HH:MM:SS;FF, where HH is the hour, MM is the minute, SS is the second, and FF is the frame number. When choosing this value, take into account your setting for timecode source under input settings (InputTimecodeSource). For example, if you have embedded timecodes that start at 01:00:00:00 and you want your clip to end six minutes into the video, use 01:06:00:00.

$sel:startTimecode:InputClipping', inputClipping_startTimecode - Set Start timecode (StartTimecode) to the beginning of the portion of the input you are clipping. The frame corresponding to the Start timecode value is included in the clip. Start timecode or End timecode may be left blank, but not both. Use the format HH:MM:SS:FF or HH:MM:SS;FF, where HH is the hour, MM is the minute, SS is the second, and FF is the frame number. When choosing this value, take into account your setting for Input timecode source. For example, if you have embedded timecodes that start at 01:00:00:00 and you want your clip to begin five minutes into the video, use 01:05:00:00.

inputClipping_endTimecode :: Lens' InputClipping (Maybe Text) Source #

Set End timecode (EndTimecode) to the end of the portion of the input you are clipping. The frame corresponding to the End timecode value is included in the clip. Start timecode or End timecode may be left blank, but not both. Use the format HH:MM:SS:FF or HH:MM:SS;FF, where HH is the hour, MM is the minute, SS is the second, and FF is the frame number. When choosing this value, take into account your setting for timecode source under input settings (InputTimecodeSource). For example, if you have embedded timecodes that start at 01:00:00:00 and you want your clip to end six minutes into the video, use 01:06:00:00.

inputClipping_startTimecode :: Lens' InputClipping (Maybe Text) Source #

Set Start timecode (StartTimecode) to the beginning of the portion of the input you are clipping. The frame corresponding to the Start timecode value is included in the clip. Start timecode or End timecode may be left blank, but not both. Use the format HH:MM:SS:FF or HH:MM:SS;FF, where HH is the hour, MM is the minute, SS is the second, and FF is the frame number. When choosing this value, take into account your setting for Input timecode source. For example, if you have embedded timecodes that start at 01:00:00:00 and you want your clip to begin five minutes into the video, use 01:05:00:00.
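
As a usage illustration only (a minimal sketch that is not part of the generated documentation), the snippet below builds an InputClipping for the five-to-six-minute example above. It assumes Control.Lens (or an equivalent lens library) for the (&) and (?~) operators and OverloadedStrings for the Text literals:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Clip an input whose embedded timecodes start at 01:00:00:00 so that
-- the clip begins five minutes in and ends six minutes in.  Either
-- field may be omitted, but not both.
clip :: InputClipping
clip =
  newInputClipping
    & inputClipping_startTimecode ?~ "01:05:00:00"
    & inputClipping_endTimecode ?~ "01:06:00:00"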

InputDecryptionSettings

data InputDecryptionSettings Source #

Settings for decrypting any input files that you encrypt before you upload them to Amazon S3. MediaConvert can decrypt files only when you use AWS Key Management Service (KMS) to encrypt the data key that you use to encrypt your content.

See: newInputDecryptionSettings smart constructor.

Constructors

InputDecryptionSettings' 

Fields

  • encryptedDecryptionKey :: Maybe Text

    Warning! Don't provide your encryption key in plaintext. Your job settings could be intercepted, making your encrypted content vulnerable. Specify the encrypted version of the data key that you used to encrypt your content. The data key must be encrypted by AWS Key Management Service (KMS). The key can be 128, 192, or 256 bits.

  • kmsKeyRegion :: Maybe Text

    Specify the AWS Region for AWS Key Management Service (KMS) that you used to encrypt your data key, if that Region is different from the one you are using for AWS Elemental MediaConvert.

  • decryptionMode :: Maybe DecryptionMode

    Specify the encryption mode that you used to encrypt your input files.

  • initializationVector :: Maybe Text

    Specify the initialization vector that you used when you encrypted your content before uploading it to Amazon S3. You can use a 16-byte initialization vector with any encryption mode. Or, you can use a 12-byte initialization vector with GCM or CTR. MediaConvert accepts only initialization vectors that are base64-encoded.

Instances

Instances details
Eq InputDecryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDecryptionSettings

Read InputDecryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDecryptionSettings

Show InputDecryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDecryptionSettings

Generic InputDecryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDecryptionSettings

Associated Types

type Rep InputDecryptionSettings :: Type -> Type #

NFData InputDecryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDecryptionSettings

Methods

rnf :: InputDecryptionSettings -> () #

Hashable InputDecryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDecryptionSettings

ToJSON InputDecryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDecryptionSettings

FromJSON InputDecryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDecryptionSettings

type Rep InputDecryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputDecryptionSettings

type Rep InputDecryptionSettings = D1 ('MetaData "InputDecryptionSettings" "Amazonka.MediaConvert.Types.InputDecryptionSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "InputDecryptionSettings'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "encryptedDecryptionKey") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "kmsKeyRegion") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "decryptionMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DecryptionMode)) :*: S1 ('MetaSel ('Just "initializationVector") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))))

newInputDecryptionSettings :: InputDecryptionSettings Source #

Create a value of InputDecryptionSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:encryptedDecryptionKey:InputDecryptionSettings', inputDecryptionSettings_encryptedDecryptionKey - Warning! Don't provide your encryption key in plaintext. Your job settings could be intercepted, making your encrypted content vulnerable. Specify the encrypted version of the data key that you used to encrypt your content. The data key must be encrypted by AWS Key Management Service (KMS). The key can be 128, 192, or 256 bits.

$sel:kmsKeyRegion:InputDecryptionSettings', inputDecryptionSettings_kmsKeyRegion - Specify the AWS Region for AWS Key Management Service (KMS) that you used to encrypt your data key, if that Region is different from the one you are using for AWS Elemental MediaConvert.

$sel:decryptionMode:InputDecryptionSettings', inputDecryptionSettings_decryptionMode - Specify the encryption mode that you used to encrypt your input files.

$sel:initializationVector:InputDecryptionSettings', inputDecryptionSettings_initializationVector - Specify the initialization vector that you used when you encrypted your content before uploading it to Amazon S3. You can use a 16-byte initialization vector with any encryption mode. Or, you can use a 12-byte initialization vector with GCM or CTR. MediaConvert accepts only initialization vectors that are base64-encoded.

inputDecryptionSettings_encryptedDecryptionKey :: Lens' InputDecryptionSettings (Maybe Text) Source #

Warning! Don't provide your encryption key in plaintext. Your job settings could be intercepted, making your encrypted content vulnerable. Specify the encrypted version of the data key that you used to encrypt your content. The data key must be encrypted by AWS Key Management Service (KMS). The key can be 128, 192, or 256 bits.

inputDecryptionSettings_kmsKeyRegion :: Lens' InputDecryptionSettings (Maybe Text) Source #

Specify the AWS Region for AWS Key Management Service (KMS) that you used to encrypt your data key, if that Region is different from the one you are using for AWS Elemental MediaConvert.

inputDecryptionSettings_decryptionMode :: Lens' InputDecryptionSettings (Maybe DecryptionMode) Source #

Specify the encryption mode that you used to encrypt your input files.

inputDecryptionSettings_initializationVector :: Lens' InputDecryptionSettings (Maybe Text) Source #

Specify the initialization vector that you used when you encrypted your content before uploading it to Amazon S3. You can use a 16-byte initialization vector with any encryption mode. Or, you can use a 12-byte initialization vector with GCM or CTR. MediaConvert accepts only initialization vectors that are base64-encoded.
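
As an illustrative sketch only (not part of the generated documentation), the following shows how these fields fit together. The key, Region, and initialization-vector values are placeholders, and Control.Lens plus OverloadedStrings are assumed:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Decryption settings for an input that was encrypted before upload.
-- All three values below are placeholders for illustration.
decryptionSettings :: InputDecryptionSettings
decryptionSettings =
  newInputDecryptionSettings
    -- The data key, already encrypted with AWS KMS and base64-encoded.
    & inputDecryptionSettings_encryptedDecryptionKey ?~ "AQICAHh...encrypted-data-key..."
    -- Region of the KMS key, if different from the MediaConvert Region.
    & inputDecryptionSettings_kmsKeyRegion ?~ "us-west-2"
    -- Base64-encoded 16-byte initialization vector.
    & inputDecryptionSettings_initializationVector ?~ "MTIzNDU2Nzg5MDEyMzQ1Ng=="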

InputTemplate

data InputTemplate Source #

Specifies a video input in a template.

See: newInputTemplate smart constructor.

Constructors

InputTemplate' 

Fields

  • videoSelector :: Maybe VideoSelector

    Input video selectors contain the video settings for the input. Each of your inputs can have up to one video selector.

  • programNumber :: Maybe Natural

    Use Program (programNumber) to select a specific program from within a multi-program transport stream. Note that Quad 4K is not currently supported. Default is the first program within the transport stream. If the program you specify doesn't exist, the transcoding service will use this default.

  • audioSelectorGroups :: Maybe (HashMap Text AudioSelectorGroup)

    Use audio selector groups to combine multiple sidecar audio inputs so that you can assign them to a single output audio tab (AudioDescription). Note that, if you're working with embedded audio, it's simpler to assign multiple input tracks into a single audio selector rather than use an audio selector group.

  • timecodeSource :: Maybe InputTimecodeSource

    Use this Timecode source setting, located under the input settings (InputTimecodeSource), to specify how the service counts input video frames. This input frame count affects only the behavior of features that apply to a single input at a time, such as input clipping and synchronizing some captions formats. Choose Embedded (EMBEDDED) to use the timecodes in your input video. Choose Start at zero (ZEROBASED) to start the first frame at zero. Choose Specified start (SPECIFIEDSTART) to start the first frame at the timecode that you specify in the setting Start timecode (timecodeStart). If you don't specify a value for Timecode source, the service will use Embedded by default. For more information about timecodes, see https://docs.aws.amazon.com/console/mediaconvert/timecode.

  • audioSelectors :: Maybe (HashMap Text AudioSelector)

    Use Audio selectors (AudioSelectors) to specify a track or set of tracks from the input that you will use in your outputs. You can use multiple Audio selectors per input.

  • deblockFilter :: Maybe InputDeblockFilter

    Enable Deblock (InputDeblockFilter) to produce smoother motion in the output. Default is disabled. Only manually controllable for MPEG2 and uncompressed video inputs.

  • inputClippings :: Maybe [InputClipping]

    (InputClippings) contains sets of start and end times that together specify a portion of the input to be used in the outputs. If you provide only a start time, the clip will be the entire input from that point to the end. If you provide only an end time, it will be the entire input up to that point. When you specify more than one input clip, the transcoding service creates the job outputs by stringing the clips together in the order you specify them.

  • crop :: Maybe Rectangle

    Use Cropping selection (crop) to specify the video area that the service will include in the output video frame. If you specify a value here, it will override any value that you specify in the output setting Cropping selection (crop).

  • denoiseFilter :: Maybe InputDenoiseFilter

    Enable Denoise (InputDenoiseFilter) to filter noise from the input. Default is disabled. Only applicable to MPEG2, H.264, H.265, and uncompressed video inputs.

  • imageInserter :: Maybe ImageInserter

    Enable the image inserter feature to include a graphic overlay on your video. Enable or disable this feature for each input individually. This setting is disabled by default.

  • filterStrength :: Maybe Int

    Use Filter strength (FilterStrength) to adjust the magnitude of the input filter settings (Deblock and Denoise). The range is -5 to 5. Default is 0.

  • psiControl :: Maybe InputPsiControl

    Set PSI control (InputPsiControl) for transport stream inputs to specify which data the demux process scans. * Ignore PSI - Scan all PIDs for audio and video. * Use PSI - Scan only PSI data.

  • captionSelectors :: Maybe (HashMap Text CaptionSelector)

    Use captions selectors to specify the captions data from your input that you use in your outputs. You can use up to 20 captions selectors per input.

  • timecodeStart :: Maybe Text

    Specify the timecode that you want the service to use for this input's initial frame. To use this setting, you must set the Timecode source setting, located under the input settings (InputTimecodeSource), to Specified start (SPECIFIEDSTART). For more information about timecodes, see https://docs.aws.amazon.com/console/mediaconvert/timecode.

  • inputScanType :: Maybe InputScanType

    When you have a progressive segmented frame (PsF) input, use this setting to flag the input as PsF. MediaConvert doesn't automatically detect PsF. Therefore, flagging your input as PsF results in better preservation of video quality when you do deinterlacing and frame rate conversion. If you don't specify, the default value is Auto (AUTO). Auto is the correct setting for all inputs that are not PsF. Don't set this value to PsF when your input is interlaced. Doing so creates horizontal interlacing artifacts.

  • position :: Maybe Rectangle

    Use Selection placement (position) to define the video area in your output frame. The area outside of the rectangle that you specify here is black. If you specify a value here, it will override any value that you specify in the output setting Selection placement (position). If you specify a value here, this will override any AFD values in your input, even if you set Respond to AFD (RespondToAfd) to Respond (RESPOND). If you specify a value here, this will ignore anything that you specify for the setting Scaling Behavior (scalingBehavior).

  • filterEnable :: Maybe InputFilterEnable

    Specify how the transcoding service applies the denoise and deblock filters. You must also enable the filters separately, with Denoise (InputDenoiseFilter) and Deblock (InputDeblockFilter). * Auto - The transcoding service determines whether to apply filtering, depending on input type and quality. * Disable - The input is not filtered. This is true even if you use the API to enable them in (InputDeblockFilter) and (InputDenoiseFilter). * Force - The input is filtered regardless of input type.

Instances

Instances details
Eq InputTemplate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTemplate

Read InputTemplate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTemplate

Show InputTemplate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTemplate

Generic InputTemplate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTemplate

Associated Types

type Rep InputTemplate :: Type -> Type #

NFData InputTemplate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTemplate

Methods

rnf :: InputTemplate -> () #

Hashable InputTemplate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTemplate

ToJSON InputTemplate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTemplate

FromJSON InputTemplate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTemplate

type Rep InputTemplate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InputTemplate

type Rep InputTemplate = D1 ('MetaData "InputTemplate" "Amazonka.MediaConvert.Types.InputTemplate" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "InputTemplate'" 'PrefixI 'True) ((((S1 ('MetaSel ('Just "videoSelector") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe VideoSelector)) :*: S1 ('MetaSel ('Just "programNumber") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "audioSelectorGroups") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe (HashMap Text AudioSelectorGroup))) :*: S1 ('MetaSel ('Just "timecodeSource") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe InputTimecodeSource)))) :*: ((S1 ('MetaSel ('Just "audioSelectors") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe (HashMap Text AudioSelector))) :*: S1 ('MetaSel ('Just "deblockFilter") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe InputDeblockFilter))) :*: (S1 ('MetaSel ('Just "inputClippings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [InputClipping])) :*: S1 ('MetaSel ('Just "crop") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Rectangle))))) :*: (((S1 ('MetaSel ('Just "denoiseFilter") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe InputDenoiseFilter)) :*: S1 ('MetaSel ('Just "imageInserter") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ImageInserter))) :*: (S1 ('MetaSel ('Just "filterStrength") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)) :*: S1 ('MetaSel ('Just "psiControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe InputPsiControl)))) :*: ((S1 ('MetaSel ('Just "captionSelectors") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe (HashMap Text CaptionSelector))) :*: S1 ('MetaSel ('Just "timecodeStart") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "inputScanType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe InputScanType)) :*: (S1 ('MetaSel ('Just "position") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Rectangle)) :*: S1 ('MetaSel ('Just "filterEnable") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe InputFilterEnable))))))))

newInputTemplate :: InputTemplate Source #

Create a value of InputTemplate with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:videoSelector:InputTemplate', inputTemplate_videoSelector - Input video selectors contain the video settings for the input. Each of your inputs can have up to one video selector.

$sel:programNumber:InputTemplate', inputTemplate_programNumber - Use Program (programNumber) to select a specific program from within a multi-program transport stream. Note that Quad 4K is not currently supported. Default is the first program within the transport stream. If the program you specify doesn't exist, the transcoding service will use this default.

$sel:audioSelectorGroups:InputTemplate', inputTemplate_audioSelectorGroups - Use audio selector groups to combine multiple sidecar audio inputs so that you can assign them to a single output audio tab (AudioDescription). Note that, if you're working with embedded audio, it's simpler to assign multiple input tracks into a single audio selector rather than use an audio selector group.

$sel:timecodeSource:InputTemplate', inputTemplate_timecodeSource - Use this Timecode source setting, located under the input settings (InputTimecodeSource), to specify how the service counts input video frames. This input frame count affects only the behavior of features that apply to a single input at a time, such as input clipping and synchronizing some captions formats. Choose Embedded (EMBEDDED) to use the timecodes in your input video. Choose Start at zero (ZEROBASED) to start the first frame at zero. Choose Specified start (SPECIFIEDSTART) to start the first frame at the timecode that you specify in the setting Start timecode (timecodeStart). If you don't specify a value for Timecode source, the service will use Embedded by default. For more information about timecodes, see https://docs.aws.amazon.com/console/mediaconvert/timecode.

$sel:audioSelectors:InputTemplate', inputTemplate_audioSelectors - Use Audio selectors (AudioSelectors) to specify a track or set of tracks from the input that you will use in your outputs. You can use multiple Audio selectors per input.

$sel:deblockFilter:InputTemplate', inputTemplate_deblockFilter - Enable Deblock (InputDeblockFilter) to produce smoother motion in the output. Default is disabled. Only manually controllable for MPEG2 and uncompressed video inputs.

$sel:inputClippings:InputTemplate', inputTemplate_inputClippings - (InputClippings) contains sets of start and end times that together specify a portion of the input to be used in the outputs. If you provide only a start time, the clip will be the entire input from that point to the end. If you provide only an end time, it will be the entire input up to that point. When you specify more than one input clip, the transcoding service creates the job outputs by stringing the clips together in the order you specify them.

$sel:crop:InputTemplate', inputTemplate_crop - Use Cropping selection (crop) to specify the video area that the service will include in the output video frame. If you specify a value here, it will override any value that you specify in the output setting Cropping selection (crop).

$sel:denoiseFilter:InputTemplate', inputTemplate_denoiseFilter - Enable Denoise (InputDenoiseFilter) to filter noise from the input. Default is disabled. Only applicable to MPEG2, H.264, H.265, and uncompressed video inputs.

$sel:imageInserter:InputTemplate', inputTemplate_imageInserter - Enable the image inserter feature to include a graphic overlay on your video. Enable or disable this feature for each input individually. This setting is disabled by default.

$sel:filterStrength:InputTemplate', inputTemplate_filterStrength - Use Filter strength (FilterStrength) to adjust the magnitude of the input filter settings (Deblock and Denoise). The range is -5 to 5. Default is 0.

$sel:psiControl:InputTemplate', inputTemplate_psiControl - Set PSI control (InputPsiControl) for transport stream inputs to specify which data the demux process scans. * Ignore PSI - Scan all PIDs for audio and video. * Use PSI - Scan only PSI data.

$sel:captionSelectors:InputTemplate', inputTemplate_captionSelectors - Use captions selectors to specify the captions data from your input that you use in your outputs. You can use up to 20 captions selectors per input.

$sel:timecodeStart:InputTemplate', inputTemplate_timecodeStart - Specify the timecode that you want the service to use for this input's initial frame. To use this setting, you must set the Timecode source setting, located under the input settings (InputTimecodeSource), to Specified start (SPECIFIEDSTART). For more information about timecodes, see https://docs.aws.amazon.com/console/mediaconvert/timecode.

$sel:inputScanType:InputTemplate', inputTemplate_inputScanType - When you have a progressive segmented frame (PsF) input, use this setting to flag the input as PsF. MediaConvert doesn't automatically detect PsF. Therefore, flagging your input as PsF results in better preservation of video quality when you do deinterlacing and frame rate conversion. If you don't specify, the default value is Auto (AUTO). Auto is the correct setting for all inputs that are not PsF. Don't set this value to PsF when your input is interlaced. Doing so creates horizontal interlacing artifacts.

$sel:position:InputTemplate', inputTemplate_position - Use Selection placement (position) to define the video area in your output frame. The area outside of the rectangle that you specify here is black. If you specify a value here, it will override any value that you specify in the output setting Selection placement (position). If you specify a value here, this will override any AFD values in your input, even if you set Respond to AFD (RespondToAfd) to Respond (RESPOND). If you specify a value here, this will ignore anything that you specify for the setting Scaling Behavior (scalingBehavior).

$sel:filterEnable:InputTemplate', inputTemplate_filterEnable - Specify how the transcoding service applies the denoise and deblock filters. You must also enable the filters separately, with Denoise (InputDenoiseFilter) and Deblock (InputDeblockFilter). * Auto - The transcoding service determines whether to apply filtering, depending on input type and quality. * Disable - The input is not filtered. This is true even if you use the API to enable them in (InputDeblockFilter) and (InputDenoiseFilter). * Force - The input is filtered regardless of input type.

inputTemplate_videoSelector :: Lens' InputTemplate (Maybe VideoSelector) Source #

Input video selectors contain the video settings for the input. Each of your inputs can have up to one video selector.

inputTemplate_programNumber :: Lens' InputTemplate (Maybe Natural) Source #

Use Program (programNumber) to select a specific program from within a multi-program transport stream. Note that Quad 4K is not currently supported. Default is the first program within the transport stream. If the program you specify doesn't exist, the transcoding service will use this default.

inputTemplate_audioSelectorGroups :: Lens' InputTemplate (Maybe (HashMap Text AudioSelectorGroup)) Source #

Use audio selector groups to combine multiple sidecar audio inputs so that you can assign them to a single output audio tab (AudioDescription). Note that, if you're working with embedded audio, it's simpler to assign multiple input tracks into a single audio selector rather than use an audio selector group.

inputTemplate_timecodeSource :: Lens' InputTemplate (Maybe InputTimecodeSource) Source #

Use this Timecode source setting, located under the input settings (InputTimecodeSource), to specify how the service counts input video frames. This input frame count affects only the behavior of features that apply to a single input at a time, such as input clipping and synchronizing some captions formats. Choose Embedded (EMBEDDED) to use the timecodes in your input video. Choose Start at zero (ZEROBASED) to start the first frame at zero. Choose Specified start (SPECIFIEDSTART) to start the first frame at the timecode that you specify in the setting Start timecode (timecodeStart). If you don't specify a value for Timecode source, the service will use Embedded by default. For more information about timecodes, see https://docs.aws.amazon.com/console/mediaconvert/timecode.

inputTemplate_audioSelectors :: Lens' InputTemplate (Maybe (HashMap Text AudioSelector)) Source #

Use Audio selectors (AudioSelectors) to specify a track or set of tracks from the input that you will use in your outputs. You can use multiple Audio selectors per input.

inputTemplate_deblockFilter :: Lens' InputTemplate (Maybe InputDeblockFilter) Source #

Enable Deblock (InputDeblockFilter) to produce smoother motion in the output. Default is disabled. Only manually controllable for MPEG2 and uncompressed video inputs.

inputTemplate_inputClippings :: Lens' InputTemplate (Maybe [InputClipping]) Source #

(InputClippings) contains sets of start and end times that together specify a portion of the input to be used in the outputs. If you provide only a start time, the clip will be the entire input from that point to the end. If you provide only an end time, it will be the entire input up to that point. When you specify more than one input clip, the transcoding service creates the job outputs by stringing the clips together in the order you specify them.

inputTemplate_crop :: Lens' InputTemplate (Maybe Rectangle) Source #

Use Cropping selection (crop) to specify the video area that the service will include in the output video frame. If you specify a value here, it will override any value that you specify in the output setting Cropping selection (crop).

inputTemplate_denoiseFilter :: Lens' InputTemplate (Maybe InputDenoiseFilter) Source #

Enable Denoise (InputDenoiseFilter) to filter noise from the input. Default is disabled. Only applicable to MPEG2, H.264, H.265, and uncompressed video inputs.

inputTemplate_imageInserter :: Lens' InputTemplate (Maybe ImageInserter) Source #

Enable the image inserter feature to include a graphic overlay on your video. Enable or disable this feature for each input individually. This setting is disabled by default.

inputTemplate_filterStrength :: Lens' InputTemplate (Maybe Int) Source #

Use Filter strength (FilterStrength) to adjust the magnitude of the input filter settings (Deblock and Denoise). The range is -5 to 5. Default is 0.

inputTemplate_psiControl :: Lens' InputTemplate (Maybe InputPsiControl) Source #

Set PSI control (InputPsiControl) for transport stream inputs to specify which data the demux process scans. * Ignore PSI - Scan all PIDs for audio and video. * Use PSI - Scan only PSI data.

inputTemplate_captionSelectors :: Lens' InputTemplate (Maybe (HashMap Text CaptionSelector)) Source #

Use captions selectors to specify the captions data from your input that you use in your outputs. You can use up to 20 captions selectors per input.

inputTemplate_timecodeStart :: Lens' InputTemplate (Maybe Text) Source #

Specify the timecode that you want the service to use for this input's initial frame. To use this setting, you must set the Timecode source setting, located under the input settings (InputTimecodeSource), to Specified start (SPECIFIEDSTART). For more information about timecodes, see https://docs.aws.amazon.com/console/mediaconvert/timecode.

inputTemplate_inputScanType :: Lens' InputTemplate (Maybe InputScanType) Source #

When you have a progressive segmented frame (PsF) input, use this setting to flag the input as PsF. MediaConvert doesn't automatically detect PsF. Therefore, flagging your input as PsF results in better preservation of video quality when you do deinterlacing and frame rate conversion. If you don't specify, the default value is Auto (AUTO). Auto is the correct setting for all inputs that are not PsF. Don't set this value to PsF when your input is interlaced. Doing so creates horizontal interlacing artifacts.

inputTemplate_position :: Lens' InputTemplate (Maybe Rectangle) Source #

Use Selection placement (position) to define the video area in your output frame. The area outside of the rectangle that you specify here is black. If you specify a value here, it will override any value that you specify in the output setting Selection placement (position). If you specify a value here, this will override any AFD values in your input, even if you set Respond to AFD (RespondToAfd) to Respond (RESPOND). If you specify a value here, this will ignore anything that you specify for the setting Scaling Behavior (scalingBehavior).

inputTemplate_filterEnable :: Lens' InputTemplate (Maybe InputFilterEnable) Source #

Specify how the transcoding service applies the denoise and deblock filters. You must also enable the filters separately, with Denoise (InputDenoiseFilter) and Deblock (InputDeblockFilter). * Auto - The transcoding service determines whether to apply filtering, depending on input type and quality. * Disable - The input is not filtered. This is true even if you use the API to enable them in (InputDeblockFilter) and (InputDenoiseFilter). * Force - The input is filtered regardless of input type.
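
To show how a few of these settings combine, here is a minimal, illustrative sketch (not part of the generated documentation; Control.Lens and OverloadedStrings are assumed, and the values are arbitrary):

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- A template input that selects the second program of a transport
-- stream, clips one minute of it, and applies mild filtering.
templateInput :: InputTemplate
templateInput =
  newInputTemplate
    & inputTemplate_programNumber ?~ 2
    & inputTemplate_inputClippings
        ?~ [ newInputClipping
               & inputClipping_startTimecode ?~ "00:01:00:00"
               & inputClipping_endTimecode ?~ "00:02:00:00"
           ]
    -- Filter strength ranges from -5 to 5; 0 is the default.
    & inputTemplate_filterStrength ?~ 2

How the clipping timecodes are interpreted depends on the Timecode source setting described above.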

InsertableImage

data InsertableImage Source #

These settings apply to a specific graphic overlay. You can include multiple overlays in your job.

See: newInsertableImage smart constructor.

Constructors

InsertableImage' 

Fields

  • imageX :: Maybe Natural

    Specify the distance, in pixels, between the inserted image and the left edge of the video frame. Required for any image overlay that you specify.

  • height :: Maybe Natural

    Specify the height of the inserted image in pixels. If you specify a value that's larger than the video resolution height, the service will crop your overlaid image to fit. To use the native height of the image, keep this setting blank.

  • startTime :: Maybe Text

    Specify the timecode of the frame that you want the overlay to first appear on. This must be in timecode (HH:MM:SS:FF or HH:MM:SS;FF) format. Remember to take into account your timecode source settings.

  • fadeOut :: Maybe Natural

    Specify the length of time, in milliseconds, between the end of the time that you have specified for the image overlay Duration and when the overlaid image has faded to total transparency. If you don't specify a value for Fade-out, the image will disappear abruptly at the end of the inserted image duration.

  • width :: Maybe Natural

    Specify the width of the inserted image in pixels. If you specify a value that's larger than the video resolution width, the service will crop your overlaid image to fit. To use the native width of the image, keep this setting blank.

  • opacity :: Maybe Natural

    Use Opacity (Opacity) to specify how much of the underlying video shows through the inserted image. 0 is transparent and 100 is fully opaque. Default is 50.

  • layer :: Maybe Natural

    Specify how overlapping inserted images appear. Images with higher values for Layer appear on top of images with lower values for Layer.

  • duration :: Maybe Natural

    Specify the time, in milliseconds, for the image to remain on the output video. This duration includes fade-in time but not fade-out time.

  • imageY :: Maybe Natural

    Specify the distance, in pixels, between the overlaid image and the top edge of the video frame. Required for any image overlay that you specify.

  • imageInserterInput :: Maybe Text

    Specify the HTTP, HTTPS, or Amazon S3 location of the image that you want to overlay on the video. Use a PNG or TGA file.

  • fadeIn :: Maybe Natural

    Specify the length of time, in milliseconds, between the Start time that you specify for the image insertion and the time that the image appears at full opacity. Full opacity is the level that you specify for the opacity setting. If you don't specify a value for Fade-in, the image will appear abruptly at the overlay start time.

Instances

Instances details
Eq InsertableImage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InsertableImage

Read InsertableImage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InsertableImage

Show InsertableImage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InsertableImage

Generic InsertableImage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InsertableImage

Associated Types

type Rep InsertableImage :: Type -> Type #

NFData InsertableImage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InsertableImage

Methods

rnf :: InsertableImage -> () #

Hashable InsertableImage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InsertableImage

ToJSON InsertableImage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InsertableImage

FromJSON InsertableImage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InsertableImage

type Rep InsertableImage Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.InsertableImage

type Rep InsertableImage = D1 ('MetaData "InsertableImage" "Amazonka.MediaConvert.Types.InsertableImage" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "InsertableImage'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "imageX") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "height") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "startTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "fadeOut") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "width") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))) :*: ((S1 ('MetaSel ('Just "opacity") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "layer") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "duration") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))) :*: (S1 ('MetaSel ('Just "imageY") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "imageInserterInput") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "fadeIn") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))))

newInsertableImage :: InsertableImage Source #

Create a value of InsertableImage with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:imageX:InsertableImage', insertableImage_imageX - Specify the distance, in pixels, between the inserted image and the left edge of the video frame. Required for any image overlay that you specify.

$sel:height:InsertableImage', insertableImage_height - Specify the height of the inserted image in pixels. If you specify a value that's larger than the video resolution height, the service will crop your overlaid image to fit. To use the native height of the image, keep this setting blank.

$sel:startTime:InsertableImage', insertableImage_startTime - Specify the timecode of the frame that you want the overlay to first appear on. This must be in timecode (HH:MM:SS:FF or HH:MM:SS;FF) format. Remember to take into account your timecode source settings.

$sel:fadeOut:InsertableImage', insertableImage_fadeOut - Specify the length of time, in milliseconds, between the end of the time that you have specified for the image overlay Duration and when the overlaid image has faded to total transparency. If you don't specify a value for Fade-out, the image will disappear abruptly at the end of the inserted image duration.

$sel:width:InsertableImage', insertableImage_width - Specify the width of the inserted image in pixels. If you specify a value that's larger than the video resolution width, the service will crop your overlaid image to fit. To use the native width of the image, keep this setting blank.

$sel:opacity:InsertableImage', insertableImage_opacity - Use Opacity (Opacity) to specify how much of the underlying video shows through the inserted image. 0 is transparent and 100 is fully opaque. Default is 50.

$sel:layer:InsertableImage', insertableImage_layer - Specify how overlapping inserted images appear. Images with higher values for Layer appear on top of images with lower values for Layer.

$sel:duration:InsertableImage', insertableImage_duration - Specify the time, in milliseconds, for the image to remain on the output video. This duration includes fade-in time but not fade-out time.

$sel:imageY:InsertableImage', insertableImage_imageY - Specify the distance, in pixels, between the overlaid image and the top edge of the video frame. Required for any image overlay that you specify.

$sel:imageInserterInput:InsertableImage', insertableImage_imageInserterInput - Specify the HTTP, HTTPS, or Amazon S3 location of the image that you want to overlay on the video. Use a PNG or TGA file.

$sel:fadeIn:InsertableImage', insertableImage_fadeIn - Specify the length of time, in milliseconds, between the Start time that you specify for the image insertion and the time that the image appears at full opacity. Full opacity is the level that you specify for the opacity setting. If you don't specify a value for Fade-in, the image will appear abruptly at the overlay start time.

insertableImage_imageX :: Lens' InsertableImage (Maybe Natural) Source #

Specify the distance, in pixels, between the inserted image and the left edge of the video frame. Required for any image overlay that you specify.

insertableImage_height :: Lens' InsertableImage (Maybe Natural) Source #

Specify the height of the inserted image in pixels. If you specify a value that's larger than the video resolution height, the service will crop your overlaid image to fit. To use the native height of the image, keep this setting blank.

insertableImage_startTime :: Lens' InsertableImage (Maybe Text) Source #

Specify the timecode of the frame that you want the overlay to first appear on. This must be in timecode (HH:MM:SS:FF or HH:MM:SS;FF) format. Remember to take into account your timecode source settings.

insertableImage_fadeOut :: Lens' InsertableImage (Maybe Natural) Source #

Specify the length of time, in milliseconds, between the end of the time that you have specified for the image overlay Duration and when the overlaid image has faded to total transparency. If you don't specify a value for Fade-out, the image will disappear abruptly at the end of the inserted image duration.

insertableImage_width :: Lens' InsertableImage (Maybe Natural) Source #

Specify the width of the inserted image in pixels. If you specify a value that's larger than the video resolution width, the service will crop your overlaid image to fit. To use the native width of the image, keep this setting blank.

insertableImage_opacity :: Lens' InsertableImage (Maybe Natural) Source #

Use Opacity (Opacity) to specify how much of the underlying video shows through the inserted image. 0 is transparent and 100 is fully opaque. Default is 50.

insertableImage_layer :: Lens' InsertableImage (Maybe Natural) Source #

Specify how overlapping inserted images appear. Images with higher values for Layer appear on top of images with lower values for Layer.

insertableImage_duration :: Lens' InsertableImage (Maybe Natural) Source #

Specify the time, in milliseconds, for the image to remain on the output video. This duration includes fade-in time but not fade-out time.

insertableImage_imageY :: Lens' InsertableImage (Maybe Natural) Source #

Specify the distance, in pixels, between the overlaid image and the top edge of the video frame. Required for any image overlay that you specify.

insertableImage_imageInserterInput :: Lens' InsertableImage (Maybe Text) Source #

Specify the HTTP, HTTPS, or Amazon S3 location of the image that you want to overlay on the video. Use a PNG or TGA file.

insertableImage_fadeIn :: Lens' InsertableImage (Maybe Natural) Source #

Specify the length of time, in milliseconds, between the Start time that you specify for the image insertion and the time that the image appears at full opacity. Full opacity is the level that you specify for the opacity setting. If you don't specify a value for Fade-in, the image will appear abruptly at the overlay start time.
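
As a usage sketch only (not part of the generated documentation; the S3 URI and all numeric values are placeholders, and Control.Lens plus OverloadedStrings are assumed), an overlay that appears five seconds in, stays for ten seconds, and fades in and out might look like this:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- A PNG logo overlaid 25 pixels from the top-left corner of the frame.
logoOverlay :: InsertableImage
logoOverlay =
  newInsertableImage
    & insertableImage_imageInserterInput ?~ "s3://DOC-EXAMPLE-BUCKET/logo.png"
    & insertableImage_imageX ?~ 25
    & insertableImage_imageY ?~ 25
    -- Start time is a timecode; duration and fades are in milliseconds.
    & insertableImage_startTime ?~ "00:00:05:00"
    & insertableImage_duration ?~ 10000
    & insertableImage_fadeIn ?~ 500
    & insertableImage_fadeOut ?~ 500
    -- 0 is fully transparent, 100 is fully opaque; 50 is the default.
    & insertableImage_opacity ?~ 80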

Job

data Job Source #

Each job converts an input file into an output file or files. For more information, see the User Guide at https://docs.aws.amazon.com/mediaconvert/latest/ug/what-is.html

See: newJob smart constructor.

Constructors

Job' 

Fields

  • status :: Maybe JobStatus

    A job's status can be SUBMITTED, PROGRESSING, COMPLETE, CANCELED, or ERROR.

  • jobTemplate :: Maybe Text

    The job template that the job is created from, if it is created from a job template.

  • accelerationSettings :: Maybe AccelerationSettings

    Accelerated transcoding can significantly speed up jobs with long, visually complex content.

  • priority :: Maybe Int

    Relative priority on the job.

  • statusUpdateInterval :: Maybe StatusUpdateInterval

    Specify how often MediaConvert sends STATUS_UPDATE events to Amazon CloudWatch Events. Set the interval, in seconds, between status updates. MediaConvert sends an update at this interval from the time the service begins processing your job to the time it completes the transcode or encounters an error.

  • arn :: Maybe Text

    An identifier for this resource that is unique within all of AWS.

  • createdAt :: Maybe POSIX

    The time, in Unix epoch format in seconds, when the job was created.

  • hopDestinations :: Maybe [HopDestination]

    Optional list of hop destinations.

  • retryCount :: Maybe Int

    The number of times that the service automatically attempted to process your job after encountering an error.

  • simulateReservedQueue :: Maybe SimulateReservedQueue

    Enable this setting when you run a test job to estimate how many reserved transcoding slots (RTS) you need. When this is enabled, MediaConvert runs your job from an on-demand queue with similar performance to what you will see with one RTS in a reserved queue. This setting is disabled by default.

  • currentPhase :: Maybe JobPhase

    A job's phase can be PROBING, TRANSCODING, or UPLOADING.

  • queue :: Maybe Text

    When you create a job, you can specify a queue to send it to. If you don't specify, the job will go to the default queue. For more about queues, see the User Guide topic at https://docs.aws.amazon.com/mediaconvert/latest/ug/what-is.html

  • userMetadata :: Maybe (HashMap Text Text)

    User-defined metadata that you want to associate with a MediaConvert job. You specify metadata in key/value pairs.

  • billingTagsSource :: Maybe BillingTagsSource

    The tag type that AWS Billing and Cost Management will use to sort your AWS Elemental MediaConvert costs on any billing report that you set up.

  • outputGroupDetails :: Maybe [OutputGroupDetail]

    List of output group details

  • errorCode :: Maybe Int

    Error code for the job

  • queueTransitions :: Maybe [QueueTransition]

    The job's queue hopping history.

  • id :: Maybe Text

    A portion of the job's ARN, unique within your AWS Elemental MediaConvert resources

  • jobPercentComplete :: Maybe Int

    An estimate of how far your job has progressed. This estimate is shown as a percentage of the total time from when your job leaves its queue to when your output files appear in your output Amazon S3 bucket. AWS Elemental MediaConvert provides jobPercentComplete in CloudWatch STATUS_UPDATE events and in the response to GetJob and ListJobs requests. The jobPercentComplete estimate is reliable for the following input containers: Quicktime, Transport Stream, MP4, and MXF. For some jobs, the service can't provide information about job progress. In those cases, jobPercentComplete returns a null value.

  • timing :: Maybe Timing

    Information about when jobs are submitted, started, and finished is specified in Unix epoch format in seconds.

  • messages :: Maybe JobMessages

    Provides messages from the service about jobs that you have already successfully submitted.

  • errorMessage :: Maybe Text

    Error message for the job

  • accelerationStatus :: Maybe AccelerationStatus

    Describes whether the current job is running with accelerated transcoding. For jobs that have Acceleration (AccelerationMode) set to DISABLED, AccelerationStatus is always NOT_APPLICABLE. For jobs that have Acceleration (AccelerationMode) set to ENABLED or PREFERRED, AccelerationStatus is one of the other states. AccelerationStatus is IN_PROGRESS initially, while the service determines whether the input files and job settings are compatible with accelerated transcoding. If they are, AccelerationStatus is ACCELERATED. If your input files and job settings aren't compatible with accelerated transcoding, the service either fails your job or runs it without accelerated transcoding, depending on how you set Acceleration (AccelerationMode). When the service runs your job without accelerated transcoding, AccelerationStatus is NOT_ACCELERATED.

  • role' :: Text

    The IAM role you use for creating this job. For details about permissions, see the User Guide topic at https://docs.aws.amazon.com/mediaconvert/latest/ug/iam-role.html

  • settings :: JobSettings

    JobSettings contains all the transcode settings for a job.

Instances

Instances details
Eq Job Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Job

Methods

(==) :: Job -> Job -> Bool #

(/=) :: Job -> Job -> Bool #

Read Job Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Job

Show Job Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Job

Methods

showsPrec :: Int -> Job -> ShowS #

show :: Job -> String #

showList :: [Job] -> ShowS #

Generic Job Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Job

Associated Types

type Rep Job :: Type -> Type #

Methods

from :: Job -> Rep Job x #

to :: Rep Job x -> Job #

NFData Job Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Job

Methods

rnf :: Job -> () #

Hashable Job Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Job

Methods

hashWithSalt :: Int -> Job -> Int #

hash :: Job -> Int #

FromJSON Job Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Job

type Rep Job Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Job

type Rep Job = D1 ('MetaData "Job" "Amazonka.MediaConvert.Types.Job" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "Job'" 'PrefixI 'True) ((((S1 ('MetaSel ('Just "status") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe JobStatus)) :*: (S1 ('MetaSel ('Just "jobTemplate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "accelerationSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AccelerationSettings)))) :*: (S1 ('MetaSel ('Just "priority") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)) :*: (S1 ('MetaSel ('Just "statusUpdateInterval") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe StatusUpdateInterval)) :*: S1 ('MetaSel ('Just "arn") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))) :*: ((S1 ('MetaSel ('Just "createdAt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: (S1 ('MetaSel ('Just "hopDestinations") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [HopDestination])) :*: S1 ('MetaSel ('Just "retryCount") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)))) :*: (S1 ('MetaSel ('Just "simulateReservedQueue") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe SimulateReservedQueue)) :*: (S1 ('MetaSel ('Just "currentPhase") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe JobPhase)) :*: S1 ('MetaSel ('Just "queue") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))))) :*: (((S1 ('MetaSel ('Just "userMetadata") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe (HashMap Text Text))) :*: (S1 ('MetaSel ('Just "billingTagsSource") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe BillingTagsSource)) :*: S1 ('MetaSel ('Just "outputGroupDetails") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [OutputGroupDetail])))) :*: (S1 ('MetaSel ('Just "errorCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)) :*: (S1 ('MetaSel ('Just "queueTransitions") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [QueueTransition])) :*: S1 ('MetaSel ('Just "id") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))) :*: ((S1 ('MetaSel ('Just "jobPercentComplete") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)) :*: (S1 ('MetaSel ('Just "timing") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Timing)) :*: S1 ('MetaSel ('Just "messages") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe JobMessages)))) :*: ((S1 ('MetaSel ('Just "errorMessage") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "accelerationStatus") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AccelerationStatus))) :*: (S1 ('MetaSel ('Just "role'") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Text) :*: S1 ('MetaSel ('Just "settings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 JobSettings)))))))

newJob Source #

Create a value of Job with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:status:Job', job_status - A job's status can be SUBMITTED, PROGRESSING, COMPLETE, CANCELED, or ERROR.

$sel:jobTemplate:Job', job_jobTemplate - The job template that the job is created from, if it is created from a job template.

$sel:accelerationSettings:Job', job_accelerationSettings - Accelerated transcoding can significantly speed up jobs with long, visually complex content.

$sel:priority:Job', job_priority - Relative priority on the job.

$sel:statusUpdateInterval:Job', job_statusUpdateInterval - Specify how often MediaConvert sends STATUS_UPDATE events to Amazon CloudWatch Events. Set the interval, in seconds, between status updates. MediaConvert sends an update at this interval from the time the service begins processing your job to the time it completes the transcode or encounters an error.

$sel:arn:Job', job_arn - An identifier for this resource that is unique within all of AWS.

$sel:createdAt:Job', job_createdAt - The time, in Unix epoch format in seconds, when the job was created.

$sel:hopDestinations:Job', job_hopDestinations - Optional list of hop destinations.

$sel:retryCount:Job', job_retryCount - The number of times that the service automatically attempted to process your job after encountering an error.

$sel:simulateReservedQueue:Job', job_simulateReservedQueue - Enable this setting when you run a test job to estimate how many reserved transcoding slots (RTS) you need. When this is enabled, MediaConvert runs your job from an on-demand queue with similar performance to what you will see with one RTS in a reserved queue. This setting is disabled by default.

$sel:currentPhase:Job', job_currentPhase - A job's phase can be PROBING, TRANSCODING, or UPLOADING.

$sel:queue:Job', job_queue - When you create a job, you can specify a queue to send it to. If you don't specify, the job will go to the default queue. For more about queues, see the User Guide topic at https://docs.aws.amazon.com/mediaconvert/latest/ug/what-is.html

$sel:userMetadata:Job', job_userMetadata - User-defined metadata that you want to associate with a MediaConvert job. You specify metadata in key/value pairs.

$sel:billingTagsSource:Job', job_billingTagsSource - The tag type that AWS Billing and Cost Management will use to sort your AWS Elemental MediaConvert costs on any billing report that you set up.

$sel:outputGroupDetails:Job', job_outputGroupDetails - List of output group details

$sel:errorCode:Job', job_errorCode - Error code for the job

$sel:queueTransitions:Job', job_queueTransitions - The job's queue hopping history.

$sel:id:Job', job_id - A portion of the job's ARN, unique within your AWS Elemental MediaConvert resources

$sel:jobPercentComplete:Job', job_jobPercentComplete - An estimate of how far your job has progressed. This estimate is shown as a percentage of the total time from when your job leaves its queue to when your output files appear in your output Amazon S3 bucket. AWS Elemental MediaConvert provides jobPercentComplete in CloudWatch STATUS_UPDATE events and in the response to GetJob and ListJobs requests. The jobPercentComplete estimate is reliable for the following input containers: Quicktime, Transport Stream, MP4, and MXF. For some jobs, the service can't provide information about job progress. In those cases, jobPercentComplete returns a null value.

$sel:timing:Job', job_timing - Information about when jobs are submitted, started, and finished is specified in Unix epoch format in seconds.

$sel:messages:Job', job_messages - Provides messages from the service about jobs that you have already successfully submitted.

$sel:errorMessage:Job', job_errorMessage - Error message for the job

$sel:accelerationStatus:Job', job_accelerationStatus - Describes whether the current job is running with accelerated transcoding. For jobs that have Acceleration (AccelerationMode) set to DISABLED, AccelerationStatus is always NOT_APPLICABLE. For jobs that have Acceleration (AccelerationMode) set to ENABLED or PREFERRED, AccelerationStatus is one of the other states. AccelerationStatus is IN_PROGRESS initially, while the service determines whether the input files and job settings are compatible with accelerated transcoding. If they are, AccelerationStatus is ACCELERATED. If your input files and job settings aren't compatible with accelerated transcoding, the service either fails your job or runs it without accelerated transcoding, depending on how you set Acceleration (AccelerationMode). When the service runs your job without accelerated transcoding, AccelerationStatus is NOT_ACCELERATED.

$sel:role':Job', job_role - The IAM role you use for creating this job. For details about permissions, see the User Guide topic at https://docs.aws.amazon.com/mediaconvert/latest/ug/iam-role.html

$sel:settings:Job', job_settings - JobSettings contains all the transcode settings for a job.

job_status :: Lens' Job (Maybe JobStatus) Source #

A job's status can be SUBMITTED, PROGRESSING, COMPLETE, CANCELED, or ERROR.

job_jobTemplate :: Lens' Job (Maybe Text) Source #

The job template that the job is created from, if it is created from a job template.

job_accelerationSettings :: Lens' Job (Maybe AccelerationSettings) Source #

Accelerated transcoding can significantly speed up jobs with long, visually complex content.

job_priority :: Lens' Job (Maybe Int) Source #

Relative priority on the job.

job_statusUpdateInterval :: Lens' Job (Maybe StatusUpdateInterval) Source #

Specify how often MediaConvert sends STATUS_UPDATE events to Amazon CloudWatch Events. Set the interval, in seconds, between status updates. MediaConvert sends an update at this interval from the time the service begins processing your job to the time it completes the transcode or encounters an error.

job_arn :: Lens' Job (Maybe Text) Source #

An identifier for this resource that is unique within all of AWS.

job_createdAt :: Lens' Job (Maybe UTCTime) Source #

The time, in Unix epoch format in seconds, when the job got created.

job_hopDestinations :: Lens' Job (Maybe [HopDestination]) Source #

Optional list of hop destinations.

job_retryCount :: Lens' Job (Maybe Int) Source #

The number of times that the service automatically attempted to process your job after encountering an error.

job_simulateReservedQueue :: Lens' Job (Maybe SimulateReservedQueue) Source #

Enable this setting when you run a test job to estimate how many reserved transcoding slots (RTS) you need. When this is enabled, MediaConvert runs your job from an on-demand queue with similar performance to what you will see with one RTS in a reserved queue. This setting is disabled by default.

job_currentPhase :: Lens' Job (Maybe JobPhase) Source #

A job's phase can be PROBING, TRANSCODING, or UPLOADING.

job_queue :: Lens' Job (Maybe Text) Source #

When you create a job, you can specify a queue to send it to. If you don't specify, the job will go to the default queue. For more about queues, see the User Guide topic at https://docs.aws.amazon.com/mediaconvert/latest/ug/what-is.html

job_userMetadata :: Lens' Job (Maybe (HashMap Text Text)) Source #

User-defined metadata that you want to associate with a MediaConvert job. You specify metadata in key/value pairs.

job_billingTagsSource :: Lens' Job (Maybe BillingTagsSource) Source #

The tag type that AWS Billing and Cost Management will use to sort your AWS Elemental MediaConvert costs on any billing report that you set up.

job_outputGroupDetails :: Lens' Job (Maybe [OutputGroupDetail]) Source #

List of output group details

job_errorCode :: Lens' Job (Maybe Int) Source #

Error code for the job

job_queueTransitions :: Lens' Job (Maybe [QueueTransition]) Source #

The job's queue hopping history.

job_id :: Lens' Job (Maybe Text) Source #

A portion of the job's ARN, unique within your AWS Elemental MediaConvert resources

job_jobPercentComplete :: Lens' Job (Maybe Int) Source #

An estimate of how far your job has progressed. This estimate is shown as a percentage of the total time from when your job leaves its queue to when your output files appear in your output Amazon S3 bucket. AWS Elemental MediaConvert provides jobPercentComplete in CloudWatch STATUS_UPDATE events and in the response to GetJob and ListJobs requests. The jobPercentComplete estimate is reliable for the following input containers: Quicktime, Transport Stream, MP4, and MXF. For some jobs, the service can't provide information about job progress. In those cases, jobPercentComplete returns a null value.

job_timing :: Lens' Job (Maybe Timing) Source #

Information about when jobs are submitted, started, and finished is specified in Unix epoch format in seconds.

job_messages :: Lens' Job (Maybe JobMessages) Source #

Provides messages from the service about jobs that you have already successfully submitted.

job_errorMessage :: Lens' Job (Maybe Text) Source #

Error message of Job

job_accelerationStatus :: Lens' Job (Maybe AccelerationStatus) Source #

Describes whether the current job is running with accelerated transcoding. For jobs that have Acceleration (AccelerationMode) set to DISABLED, AccelerationStatus is always NOT_APPLICABLE. For jobs that have Acceleration (AccelerationMode) set to ENABLED or PREFERRED, AccelerationStatus is one of the other states. AccelerationStatus is IN_PROGRESS initially, while the service determines whether the input files and job settings are compatible with accelerated transcoding. If they are, AccelerationStatus is ACCELERATED. If your input files and job settings aren't compatible with accelerated transcoding, the service either fails your job or runs it without accelerated transcoding, depending on how you set Acceleration (AccelerationMode). When the service runs your job without accelerated transcoding, AccelerationStatus is NOT_ACCELERATED.

job_role :: Lens' Job Text Source #

The IAM role you use for creating this job. For details about permissions, see the User Guide topic at https://docs.aws.amazon.com/mediaconvert/latest/ug/iam-role.html

job_settings :: Lens' Job JobSettings Source #

JobSettings contains all the transcode settings for a job.
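
As a quick illustration of these read-only lenses, here is a minimal sketch that summarises a Job value, for example one returned by an earlier GetJob or ListJobs call. The helper name describeJobProgress is illustrative only and not part of the library; it uses the view operator (^.) from the lens package, with which these Lens' values are compatible.

import Amazonka.MediaConvert.Types
import Control.Lens ((^.))

-- Summarise a Job's status and progress using the read-only lenses above.
-- The Job value is assumed to come from a previous API response.
describeJobProgress :: Job -> String
describeJobProgress job =
  maybe "status unknown" show (job ^. job_status)
    <> ", "
    <> maybe "progress unavailable" (\pct -> show pct <> "% complete")
             (job ^. job_jobPercentComplete)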

JobMessages

data JobMessages Source #

Provides messages from the service about jobs that you have already successfully submitted.

See: newJobMessages smart constructor.

Constructors

JobMessages' 

Fields

  • warning :: Maybe [Text]

    List of messages that warn about conditions that might cause your job not to run or to fail.

  • info :: Maybe [Text]

    List of messages that are informational only and don't indicate a problem with your job.

Instances

Instances details
Eq JobMessages Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobMessages

Read JobMessages Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobMessages

Show JobMessages Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobMessages

Generic JobMessages Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobMessages

Associated Types

type Rep JobMessages :: Type -> Type #

NFData JobMessages Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobMessages

Methods

rnf :: JobMessages -> () #

Hashable JobMessages Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobMessages

FromJSON JobMessages Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobMessages

type Rep JobMessages Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobMessages

type Rep JobMessages = D1 ('MetaData "JobMessages" "Amazonka.MediaConvert.Types.JobMessages" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "JobMessages'" 'PrefixI 'True) (S1 ('MetaSel ('Just "warning") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [Text])) :*: S1 ('MetaSel ('Just "info") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [Text]))))

newJobMessages :: JobMessages Source #

Create a value of JobMessages with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:warning:JobMessages', jobMessages_warning - List of messages that warn about conditions that might cause your job not to run or to fail.

$sel:info:JobMessages', jobMessages_info - List of messages that are informational only and don't indicate a problem with your job.

jobMessages_warning :: Lens' JobMessages (Maybe [Text]) Source #

List of messages that warn about conditions that might cause your job not to run or to fail.

jobMessages_info :: Lens' JobMessages (Maybe [Text]) Source #

List of messages that are informational only and don't indicate a problem with your job.
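
For example, a minimal sketch that prints the warning and informational messages using the two lenses above; the helper name printJobMessages is illustrative only and not part of the library.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
import Control.Lens ((^.))
import Data.Maybe (fromMaybe)
import qualified Data.Text.IO as TIO

-- Print each warning and informational message on its own line.
printJobMessages :: JobMessages -> IO ()
printJobMessages msgs = do
  mapM_ (TIO.putStrLn . ("warning: " <>)) (fromMaybe [] (msgs ^. jobMessages_warning))
  mapM_ (TIO.putStrLn . ("info: " <>)) (fromMaybe [] (msgs ^. jobMessages_info))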

JobSettings

data JobSettings Source #

JobSettings contains all the transcode settings for a job.

See: newJobSettings smart constructor.

Constructors

JobSettings' 

Fields

  • nielsenNonLinearWatermark :: Maybe NielsenNonLinearWatermarkSettings

    Ignore these settings unless you are using Nielsen non-linear watermarking. Specify the values that MediaConvert uses to generate and place Nielsen watermarks in your output audio. In addition to specifying these values, you also need to set up your cloud TIC server. These settings apply to every output in your job. The MediaConvert implementation is currently compatible with the following Nielsen versions: Nielsen Watermark SDK Version 5.2.1, Nielsen NLM Watermark Engine Version 1.2.7, and Nielsen Watermark Authenticator [SID_TIC] Version [5.0.0].

  • esam :: Maybe EsamSettings

    Settings for Event Signaling And Messaging (ESAM). If you don't do ad insertion, you can ignore these settings.

  • inputs :: Maybe [Input]

    Use Inputs (inputs) to define the source files used in the transcode job. There can be multiple inputs in a job. These inputs are concatenated together to create the output.

  • timedMetadataInsertion :: Maybe TimedMetadataInsertion

    Enable Timed metadata insertion (TimedMetadataInsertion) to include ID3 tags in any HLS outputs. To include timed metadata, you must enable it here, enable it in each output container, and specify tags and timecodes in ID3 insertion (Id3Insertion) objects.

  • nielsenConfiguration :: Maybe NielsenConfiguration

    Settings for your Nielsen configuration. If you don't do Nielsen measurement and analytics, ignore these settings. When you enable Nielsen configuration (nielsenConfiguration), MediaConvert enables PCM to ID3 tagging for all outputs in the job. To enable Nielsen configuration programmatically, include an instance of nielsenConfiguration in your JSON job specification. Even if you don't include any children of nielsenConfiguration, you still enable the setting.

  • availBlanking :: Maybe AvailBlanking

    Settings for ad avail blanking. Video can be blanked or overlaid with an image, and audio muted during SCTE-35 triggered ad avails.

  • extendedDataServices :: Maybe ExtendedDataServices

    If your source content has EIA-608 Line 21 Data Services, enable this feature to specify what MediaConvert does with the Extended Data Services (XDS) packets. You can choose to pass through XDS packets, or remove them from the output. For more information about XDS, see EIA-608 Line Data Services, section 9.5.1.5 05h Content Advisory.

  • motionImageInserter :: Maybe MotionImageInserter

    Overlay motion graphics on top of your video. The motion graphics that you specify here appear on all outputs in all output groups. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/motion-graphic-overlay.html.

  • timecodeConfig :: Maybe TimecodeConfig

    These settings control how the service handles timecodes throughout the job. These settings don't affect input clipping.

  • outputGroups :: Maybe [OutputGroup]

    (OutputGroups) contains one group of settings for each set of outputs that share a common package type. All unpackaged files (MPEG-4, MPEG-2 TS, Quicktime, MXF, and no container) are grouped in a single output group as well. Required in (OutputGroups) is a group of settings that apply to the whole group. This required object depends on the value you set for (Type) under (OutputGroups)>(OutputGroupSettings). Type, settings object pairs are as follows. * FILE_GROUP_SETTINGS, FileGroupSettings * HLS_GROUP_SETTINGS, HlsGroupSettings * DASH_ISO_GROUP_SETTINGS, DashIsoGroupSettings * MS_SMOOTH_GROUP_SETTINGS, MsSmoothGroupSettings * CMAF_GROUP_SETTINGS, CmafGroupSettings

  • adAvailOffset :: Maybe Int

    When specified, this offset (in milliseconds) is added to the input Ad Avail PTS time.

  • kantarWatermark :: Maybe KantarWatermarkSettings

    Use these settings only when you use Kantar watermarking. Specify the values that MediaConvert uses to generate and place Kantar watermarks in your output audio. These settings apply to every output in your job. In addition to specifying these values, you also need to store your Kantar credentials in AWS Secrets Manager. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/kantar-watermarking.html.

Instances

Instances details
Eq JobSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobSettings

Read JobSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobSettings

Show JobSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobSettings

Generic JobSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobSettings

Associated Types

type Rep JobSettings :: Type -> Type #

NFData JobSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobSettings

Methods

rnf :: JobSettings -> () #

Hashable JobSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobSettings

ToJSON JobSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobSettings

FromJSON JobSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobSettings

type Rep JobSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobSettings

type Rep JobSettings = D1 ('MetaData "JobSettings" "Amazonka.MediaConvert.Types.JobSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "JobSettings'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "nielsenNonLinearWatermark") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe NielsenNonLinearWatermarkSettings)) :*: (S1 ('MetaSel ('Just "esam") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe EsamSettings)) :*: S1 ('MetaSel ('Just "inputs") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [Input])))) :*: (S1 ('MetaSel ('Just "timedMetadataInsertion") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe TimedMetadataInsertion)) :*: (S1 ('MetaSel ('Just "nielsenConfiguration") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe NielsenConfiguration)) :*: S1 ('MetaSel ('Just "availBlanking") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AvailBlanking))))) :*: ((S1 ('MetaSel ('Just "extendedDataServices") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ExtendedDataServices)) :*: (S1 ('MetaSel ('Just "motionImageInserter") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MotionImageInserter)) :*: S1 ('MetaSel ('Just "timecodeConfig") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe TimecodeConfig)))) :*: (S1 ('MetaSel ('Just "outputGroups") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [OutputGroup])) :*: (S1 ('MetaSel ('Just "adAvailOffset") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)) :*: S1 ('MetaSel ('Just "kantarWatermark") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe KantarWatermarkSettings)))))))

newJobSettings :: JobSettings Source #

Create a value of JobSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:nielsenNonLinearWatermark:JobSettings', jobSettings_nielsenNonLinearWatermark - Ignore these settings unless you are using Nielsen non-linear watermarking. Specify the values that MediaConvert uses to generate and place Nielsen watermarks in your output audio. In addition to specifying these values, you also need to set up your cloud TIC server. These settings apply to every output in your job. The MediaConvert implementation is currently compatible with the following Nielsen versions: Nielsen Watermark SDK Version 5.2.1, Nielsen NLM Watermark Engine Version 1.2.7, and Nielsen Watermark Authenticator [SID_TIC] Version [5.0.0].

$sel:esam:JobSettings', jobSettings_esam - Settings for Event Signaling And Messaging (ESAM). If you don't do ad insertion, you can ignore these settings.

$sel:inputs:JobSettings', jobSettings_inputs - Use Inputs (inputs) to define the source files used in the transcode job. There can be multiple inputs in a job. These inputs are concatenated together to create the output.

$sel:timedMetadataInsertion:JobSettings', jobSettings_timedMetadataInsertion - Enable Timed metadata insertion (TimedMetadataInsertion) to include ID3 tags in any HLS outputs. To include timed metadata, you must enable it here, enable it in each output container, and specify tags and timecodes in ID3 insertion (Id3Insertion) objects.

$sel:nielsenConfiguration:JobSettings', jobSettings_nielsenConfiguration - Settings for your Nielsen configuration. If you don't do Nielsen measurement and analytics, ignore these settings. When you enable Nielsen configuration (nielsenConfiguration), MediaConvert enables PCM to ID3 tagging for all outputs in the job. To enable Nielsen configuration programmatically, include an instance of nielsenConfiguration in your JSON job specification. Even if you don't include any children of nielsenConfiguration, you still enable the setting.

$sel:availBlanking:JobSettings', jobSettings_availBlanking - Settings for ad avail blanking. Video can be blanked or overlaid with an image, and audio muted during SCTE-35 triggered ad avails.

$sel:extendedDataServices:JobSettings', jobSettings_extendedDataServices - If your source content has EIA-608 Line 21 Data Services, enable this feature to specify what MediaConvert does with the Extended Data Services (XDS) packets. You can choose to pass through XDS packets, or remove them from the output. For more information about XDS, see EIA-608 Line Data Services, section 9.5.1.5 05h Content Advisory.

$sel:motionImageInserter:JobSettings', jobSettings_motionImageInserter - Overlay motion graphics on top of your video. The motion graphics that you specify here appear on all outputs in all output groups. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/motion-graphic-overlay.html.

$sel:timecodeConfig:JobSettings', jobSettings_timecodeConfig - These settings control how the service handles timecodes throughout the job. These settings don't affect input clipping.

$sel:outputGroups:JobSettings', jobSettings_outputGroups - (OutputGroups) contains one group of settings for each set of outputs that share a common package type. All unpackaged files (MPEG-4, MPEG-2 TS, Quicktime, MXF, and no container) are grouped in a single output group as well. Required in (OutputGroups) is a group of settings that apply to the whole group. This required object depends on the value you set for (Type) under (OutputGroups)>(OutputGroupSettings). Type, settings object pairs are as follows. * FILE_GROUP_SETTINGS, FileGroupSettings * HLS_GROUP_SETTINGS, HlsGroupSettings * DASH_ISO_GROUP_SETTINGS, DashIsoGroupSettings * MS_SMOOTH_GROUP_SETTINGS, MsSmoothGroupSettings * CMAF_GROUP_SETTINGS, CmafGroupSettings

$sel:adAvailOffset:JobSettings', jobSettings_adAvailOffset - When specified, this offset (in milliseconds) is added to the input Ad Avail PTS time.

$sel:kantarWatermark:JobSettings', jobSettings_kantarWatermark - Use these settings only when you use Kantar watermarking. Specify the values that MediaConvert uses to generate and place Kantar watermarks in your output audio. These settings apply to every output in your job. In addition to specifying these values, you also need to store your Kantar credentials in AWS Secrets Manager. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/kantar-watermarking.html.

jobSettings_nielsenNonLinearWatermark :: Lens' JobSettings (Maybe NielsenNonLinearWatermarkSettings) Source #

Ignore these settings unless you are using Nielsen non-linear watermarking. Specify the values that MediaConvert uses to generate and place Nielsen watermarks in your output audio. In addition to specifying these values, you also need to set up your cloud TIC server. These settings apply to every output in your job. The MediaConvert implementation is currently compatible with the following Nielsen versions: Nielsen Watermark SDK Version 5.2.1, Nielsen NLM Watermark Engine Version 1.2.7, and Nielsen Watermark Authenticator [SID_TIC] Version [5.0.0].

jobSettings_esam :: Lens' JobSettings (Maybe EsamSettings) Source #

Settings for Event Signaling And Messaging (ESAM). If you don't do ad insertion, you can ignore these settings.

jobSettings_inputs :: Lens' JobSettings (Maybe [Input]) Source #

Use Inputs (inputs) to define the source files used in the transcode job. There can be multiple inputs in a job. These inputs are concatenated together to create the output.

jobSettings_timedMetadataInsertion :: Lens' JobSettings (Maybe TimedMetadataInsertion) Source #

Enable Timed metadata insertion (TimedMetadataInsertion) to include ID3 tags in any HLS outputs. To include timed metadata, you must enable it here, enable it in each output container, and specify tags and timecodes in ID3 insertion (Id3Insertion) objects.

jobSettings_nielsenConfiguration :: Lens' JobSettings (Maybe NielsenConfiguration) Source #

Settings for your Nielsen configuration. If you don't do Nielsen measurement and analytics, ignore these settings. When you enable Nielsen configuration (nielsenConfiguration), MediaConvert enables PCM to ID3 tagging for all outputs in the job. To enable Nielsen configuration programmatically, include an instance of nielsenConfiguration in your JSON job specification. Even if you don't include any children of nielsenConfiguration, you still enable the setting.

jobSettings_availBlanking :: Lens' JobSettings (Maybe AvailBlanking) Source #

Settings for ad avail blanking. Video can be blanked or overlaid with an image, and audio muted during SCTE-35 triggered ad avails.

jobSettings_extendedDataServices :: Lens' JobSettings (Maybe ExtendedDataServices) Source #

If your source content has EIA-608 Line 21 Data Services, enable this feature to specify what MediaConvert does with the Extended Data Services (XDS) packets. You can choose to pass through XDS packets, or remove them from the output. For more information about XDS, see EIA-608 Line Data Services, section 9.5.1.5 05h Content Advisory.

jobSettings_motionImageInserter :: Lens' JobSettings (Maybe MotionImageInserter) Source #

Overlay motion graphics on top of your video. The motion graphics that you specify here appear on all outputs in all output groups. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/motion-graphic-overlay.html.

jobSettings_timecodeConfig :: Lens' JobSettings (Maybe TimecodeConfig) Source #

These settings control how the service handles timecodes throughout the job. These settings don't affect input clipping.

jobSettings_outputGroups :: Lens' JobSettings (Maybe [OutputGroup]) Source #

(OutputGroups) contains one group of settings for each set of outputs that share a common package type. All unpackaged files (MPEG-4, MPEG-2 TS, Quicktime, MXF, and no container) are grouped in a single output group as well. Required in (OutputGroups) is a group of settings that apply to the whole group. This required object depends on the value you set for (Type) under (OutputGroups)>(OutputGroupSettings). Type, settings object pairs are as follows. * FILE_GROUP_SETTINGS, FileGroupSettings * HLS_GROUP_SETTINGS, HlsGroupSettings * DASH_ISO_GROUP_SETTINGS, DashIsoGroupSettings * MS_SMOOTH_GROUP_SETTINGS, MsSmoothGroupSettings * CMAF_GROUP_SETTINGS, CmafGroupSettings

jobSettings_adAvailOffset :: Lens' JobSettings (Maybe Int) Source #

When specified, this offset (in milliseconds) is added to the input Ad Avail PTS time.

jobSettings_kantarWatermark :: Lens' JobSettings (Maybe KantarWatermarkSettings) Source #

Use these settings only when you use Kantar watermarking. Specify the values that MediaConvert uses to generate and place Kantar watermarks in your output audio. These settings apply to every output in your job. In addition to specifying these values, you also need to store your Kantar credentials in AWS Secrets Manager. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/kantar-watermarking.html.
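
To show how the smart constructor and setters compose, here is a minimal sketch that builds a JobSettings value with newJobSettings and two of the lenses above, using (&) and (?~) from the lens package. The values are placeholders; a real job also needs populated inputs, output groups, and codec settings, which are omitted here.

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Start from newJobSettings (all fields Nothing) and set two optional fields.
exampleJobSettings :: JobSettings
exampleJobSettings =
  newJobSettings
    & jobSettings_adAvailOffset ?~ 0
    & jobSettings_inputs ?~ []  -- populate with one Input value per source file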

JobTemplate

data JobTemplate Source #

A job template is a pre-made set of encoding instructions that you can use to quickly create a job.

See: newJobTemplate smart constructor.

Constructors

JobTemplate' 

Fields

  • accelerationSettings :: Maybe AccelerationSettings

    Accelerated transcoding can significantly speed up jobs with long, visually complex content.

  • lastUpdated :: Maybe POSIX

    The timestamp in epoch seconds when the Job template was last updated.

  • priority :: Maybe Int

    Relative priority on the job.

  • statusUpdateInterval :: Maybe StatusUpdateInterval

    Specify how often MediaConvert sends STATUS_UPDATE events to Amazon CloudWatch Events. Set the interval, in seconds, between status updates. MediaConvert sends an update at this interval from the time the service begins processing your job to the time it completes the transcode or encounters an error.

  • arn :: Maybe Text

    An identifier for this resource that is unique within all of AWS.

  • createdAt :: Maybe POSIX

    The timestamp in epoch seconds for Job template creation.

  • category :: Maybe Text

    An optional category you create to organize your job templates.

  • hopDestinations :: Maybe [HopDestination]

    Optional list of hop destinations.

  • queue :: Maybe Text

    Optional. The queue that jobs created from this template are assigned to. If you don't specify this, jobs will go to the default queue.

  • type' :: Maybe Type

    A job template can be of two types: system or custom. System or built-in job templates can't be modified or deleted by the user.

  • description :: Maybe Text

    An optional description you create for each job template.

  • settings :: JobTemplateSettings

    JobTemplateSettings contains all the transcode settings saved in the template that will be applied to jobs created from it.

  • name :: Text

    A name you create for each job template. Each name must be unique within your account.

Instances

Instances details
Eq JobTemplate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplate

Read JobTemplate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplate

Show JobTemplate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplate

Generic JobTemplate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplate

Associated Types

type Rep JobTemplate :: Type -> Type #

NFData JobTemplate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplate

Methods

rnf :: JobTemplate -> () #

Hashable JobTemplate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplate

FromJSON JobTemplate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplate

type Rep JobTemplate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplate

type Rep JobTemplate = D1 ('MetaData "JobTemplate" "Amazonka.MediaConvert.Types.JobTemplate" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "JobTemplate'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "accelerationSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AccelerationSettings)) :*: (S1 ('MetaSel ('Just "lastUpdated") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: S1 ('MetaSel ('Just "priority") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)))) :*: (S1 ('MetaSel ('Just "statusUpdateInterval") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe StatusUpdateInterval)) :*: (S1 ('MetaSel ('Just "arn") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "createdAt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX))))) :*: ((S1 ('MetaSel ('Just "category") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "hopDestinations") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [HopDestination])) :*: S1 ('MetaSel ('Just "queue") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))) :*: ((S1 ('MetaSel ('Just "type'") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Type)) :*: S1 ('MetaSel ('Just "description") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "settings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 JobTemplateSettings) :*: S1 ('MetaSel ('Just "name") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Text))))))

newJobTemplate Source #

Create a value of JobTemplate with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:accelerationSettings:JobTemplate', jobTemplate_accelerationSettings - Accelerated transcoding can significantly speed up jobs with long, visually complex content.

$sel:lastUpdated:JobTemplate', jobTemplate_lastUpdated - The timestamp in epoch seconds when the Job template was last updated.

$sel:priority:JobTemplate', jobTemplate_priority - Relative priority on the job.

$sel:statusUpdateInterval:JobTemplate', jobTemplate_statusUpdateInterval - Specify how often MediaConvert sends STATUS_UPDATE events to Amazon CloudWatch Events. Set the interval, in seconds, between status updates. MediaConvert sends an update at this interval from the time the service begins processing your job to the time it completes the transcode or encounters an error.

$sel:arn:JobTemplate', jobTemplate_arn - An identifier for this resource that is unique within all of AWS.

$sel:createdAt:JobTemplate', jobTemplate_createdAt - The timestamp in epoch seconds for Job template creation.

$sel:category:JobTemplate', jobTemplate_category - An optional category you create to organize your job templates.

$sel:hopDestinations:JobTemplate', jobTemplate_hopDestinations - Optional list of hop destinations.

$sel:queue:JobTemplate', jobTemplate_queue - Optional. The queue that jobs created from this template are assigned to. If you don't specify this, jobs will go to the default queue.

$sel:type':JobTemplate', jobTemplate_type - A job template can be of two types: system or custom. System or built-in job templates can't be modified or deleted by the user.

$sel:description:JobTemplate', jobTemplate_description - An optional description you create for each job template.

$sel:settings:JobTemplate', jobTemplate_settings - JobTemplateSettings contains all the transcode settings saved in the template that will be applied to jobs created from it.

$sel:name:JobTemplate', jobTemplate_name - A name you create for each job template. Each name must be unique within your account.

jobTemplate_accelerationSettings :: Lens' JobTemplate (Maybe AccelerationSettings) Source #

Accelerated transcoding can significantly speed up jobs with long, visually complex content.

jobTemplate_lastUpdated :: Lens' JobTemplate (Maybe UTCTime) Source #

The timestamp in epoch seconds when the Job template was last updated.

jobTemplate_priority :: Lens' JobTemplate (Maybe Int) Source #

Relative priority on the job.

jobTemplate_statusUpdateInterval :: Lens' JobTemplate (Maybe StatusUpdateInterval) Source #

Specify how often MediaConvert sends STATUS_UPDATE events to Amazon CloudWatch Events. Set the interval, in seconds, between status updates. MediaConvert sends an update at this interval from the time the service begins processing your job to the time it completes the transcode or encounters an error.

jobTemplate_arn :: Lens' JobTemplate (Maybe Text) Source #

An identifier for this resource that is unique within all of AWS.

jobTemplate_createdAt :: Lens' JobTemplate (Maybe UTCTime) Source #

The timestamp in epoch seconds for Job template creation.

jobTemplate_category :: Lens' JobTemplate (Maybe Text) Source #

An optional category you create to organize your job templates.

jobTemplate_hopDestinations :: Lens' JobTemplate (Maybe [HopDestination]) Source #

Optional list of hop destinations.

jobTemplate_queue :: Lens' JobTemplate (Maybe Text) Source #

Optional. The queue that jobs created from this template are assigned to. If you don't specify this, jobs will go to the default queue.

jobTemplate_type :: Lens' JobTemplate (Maybe Type) Source #

A job template can be of two types: system or custom. System or built-in job templates can't be modified or deleted by the user.

jobTemplate_description :: Lens' JobTemplate (Maybe Text) Source #

An optional description you create for each job template.

jobTemplate_settings :: Lens' JobTemplate JobTemplateSettings Source #

JobTemplateSettings contains all the transcode settings saved in the template that will be applied to jobs created from it.

jobTemplate_name :: Lens' JobTemplate Text Source #

A name you create for each job template. Each name must be unique within your account.
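
As a small read-only example, the sketch below combines a few of these fields into a one-line summary of a JobTemplate, for example one returned by GetJobTemplate or ListJobTemplates; templateSummary is a hypothetical helper, not part of the library.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
import Control.Lens ((^.))
import Data.Maybe (fromMaybe)
import Data.Text (Text)

-- Combine the template's required name with its optional category.
templateSummary :: JobTemplate -> Text
templateSummary tpl =
  tpl ^. jobTemplate_name
    <> " (category: "
    <> fromMaybe "none" (tpl ^. jobTemplate_category)
    <> ")"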

JobTemplateSettings

data JobTemplateSettings Source #

JobTemplateSettings contains all the transcode settings saved in the template that will be applied to jobs created from it.

See: newJobTemplateSettings smart constructor.

Constructors

JobTemplateSettings' 

Fields

  • nielsenNonLinearWatermark :: Maybe NielsenNonLinearWatermarkSettings

    Ignore these settings unless you are using Nielsen non-linear watermarking. Specify the values that MediaConvert uses to generate and place Nielsen watermarks in your output audio. In addition to specifying these values, you also need to set up your cloud TIC server. These settings apply to every output in your job. The MediaConvert implementation is currently compatible with the following Nielsen versions: Nielsen Watermark SDK Version 5.2.1, Nielsen NLM Watermark Engine Version 1.2.7, and Nielsen Watermark Authenticator [SID_TIC] Version [5.0.0].

  • esam :: Maybe EsamSettings

    Settings for Event Signaling And Messaging (ESAM). If you don't do ad insertion, you can ignore these settings.

  • inputs :: Maybe [InputTemplate]

    Use Inputs (inputs) to define the source file used in the transcode job. There can only be one input in a job template. Using the API, you can include multiple inputs when referencing a job template.

  • timedMetadataInsertion :: Maybe TimedMetadataInsertion

    Enable Timed metadata insertion (TimedMetadataInsertion) to include ID3 tags in any HLS outputs. To include timed metadata, you must enable it here, enable it in each output container, and specify tags and timecodes in ID3 insertion (Id3Insertion) objects.

  • nielsenConfiguration :: Maybe NielsenConfiguration

    Settings for your Nielsen configuration. If you don't do Nielsen measurement and analytics, ignore these settings. When you enable Nielsen configuration (nielsenConfiguration), MediaConvert enables PCM to ID3 tagging for all outputs in the job. To enable Nielsen configuration programmatically, include an instance of nielsenConfiguration in your JSON job specification. Even if you don't include any children of nielsenConfiguration, you still enable the setting.

  • availBlanking :: Maybe AvailBlanking

    Settings for ad avail blanking. Video can be blanked or overlaid with an image, and audio muted during SCTE-35 triggered ad avails.

  • extendedDataServices :: Maybe ExtendedDataServices

    If your source content has EIA-608 Line 21 Data Services, enable this feature to specify what MediaConvert does with the Extended Data Services (XDS) packets. You can choose to pass through XDS packets, or remove them from the output. For more information about XDS, see EIA-608 Line Data Services, section 9.5.1.5 05h Content Advisory.

  • motionImageInserter :: Maybe MotionImageInserter

    Overlay motion graphics on top of your video. The motion graphics that you specify here appear on all outputs in all output groups. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/motion-graphic-overlay.html.

  • timecodeConfig :: Maybe TimecodeConfig

    These settings control how the service handles timecodes throughout the job. These settings don't affect input clipping.

  • outputGroups :: Maybe [OutputGroup]

    (OutputGroups) contains one group of settings for each set of outputs that share a common package type. All unpackaged files (MPEG-4, MPEG-2 TS, Quicktime, MXF, and no container) are grouped in a single output group as well. Required in (OutputGroups) is a group of settings that apply to the whole group. This required object depends on the value you set for (Type) under (OutputGroups)>(OutputGroupSettings). Type, settings object pairs are as follows. * FILE_GROUP_SETTINGS, FileGroupSettings * HLS_GROUP_SETTINGS, HlsGroupSettings * DASH_ISO_GROUP_SETTINGS, DashIsoGroupSettings * MS_SMOOTH_GROUP_SETTINGS, MsSmoothGroupSettings * CMAF_GROUP_SETTINGS, CmafGroupSettings

  • adAvailOffset :: Maybe Int

    When specified, this offset (in milliseconds) is added to the input Ad Avail PTS time.

  • kantarWatermark :: Maybe KantarWatermarkSettings

    Use these settings only when you use Kantar watermarking. Specify the values that MediaConvert uses to generate and place Kantar watermarks in your output audio. These settings apply to every output in your job. In addition to specifying these values, you also need to store your Kantar credentials in AWS Secrets Manager. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/kantar-watermarking.html.

Instances

Instances details
Eq JobTemplateSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateSettings

Read JobTemplateSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateSettings

Show JobTemplateSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateSettings

Generic JobTemplateSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateSettings

Associated Types

type Rep JobTemplateSettings :: Type -> Type #

NFData JobTemplateSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateSettings

Methods

rnf :: JobTemplateSettings -> () #

Hashable JobTemplateSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateSettings

ToJSON JobTemplateSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateSettings

FromJSON JobTemplateSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateSettings

type Rep JobTemplateSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.JobTemplateSettings

type Rep JobTemplateSettings = D1 ('MetaData "JobTemplateSettings" "Amazonka.MediaConvert.Types.JobTemplateSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "JobTemplateSettings'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "nielsenNonLinearWatermark") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe NielsenNonLinearWatermarkSettings)) :*: (S1 ('MetaSel ('Just "esam") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe EsamSettings)) :*: S1 ('MetaSel ('Just "inputs") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [InputTemplate])))) :*: (S1 ('MetaSel ('Just "timedMetadataInsertion") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe TimedMetadataInsertion)) :*: (S1 ('MetaSel ('Just "nielsenConfiguration") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe NielsenConfiguration)) :*: S1 ('MetaSel ('Just "availBlanking") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AvailBlanking))))) :*: ((S1 ('MetaSel ('Just "extendedDataServices") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ExtendedDataServices)) :*: (S1 ('MetaSel ('Just "motionImageInserter") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MotionImageInserter)) :*: S1 ('MetaSel ('Just "timecodeConfig") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe TimecodeConfig)))) :*: (S1 ('MetaSel ('Just "outputGroups") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [OutputGroup])) :*: (S1 ('MetaSel ('Just "adAvailOffset") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)) :*: S1 ('MetaSel ('Just "kantarWatermark") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe KantarWatermarkSettings)))))))

newJobTemplateSettings :: JobTemplateSettings Source #

Create a value of JobTemplateSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:nielsenNonLinearWatermark:JobTemplateSettings', jobTemplateSettings_nielsenNonLinearWatermark - Ignore these settings unless you are using Nielsen non-linear watermarking. Specify the values that MediaConvert uses to generate and place Nielsen watermarks in your output audio. In addition to specifying these values, you also need to set up your cloud TIC server. These settings apply to every output in your job. The MediaConvert implementation is currently compatible with the following Nielsen versions: Nielsen Watermark SDK Version 5.2.1, Nielsen NLM Watermark Engine Version 1.2.7, and Nielsen Watermark Authenticator [SID_TIC] Version [5.0.0].

$sel:esam:JobTemplateSettings', jobTemplateSettings_esam - Settings for Event Signaling And Messaging (ESAM). If you don't do ad insertion, you can ignore these settings.

$sel:inputs:JobTemplateSettings', jobTemplateSettings_inputs - Use Inputs (inputs) to define the source file used in the transcode job. There can only be one input in a job template. Using the API, you can include multiple inputs when referencing a job template.

$sel:timedMetadataInsertion:JobTemplateSettings', jobTemplateSettings_timedMetadataInsertion - Enable Timed metadata insertion (TimedMetadataInsertion) to include ID3 tags in any HLS outputs. To include timed metadata, you must enable it here, enable it in each output container, and specify tags and timecodes in ID3 insertion (Id3Insertion) objects.

$sel:nielsenConfiguration:JobTemplateSettings', jobTemplateSettings_nielsenConfiguration - Settings for your Nielsen configuration. If you don't do Nielsen measurement and analytics, ignore these settings. When you enable Nielsen configuration (nielsenConfiguration), MediaConvert enables PCM to ID3 tagging for all outputs in the job. To enable Nielsen configuration programmatically, include an instance of nielsenConfiguration in your JSON job specification. Even if you don't include any children of nielsenConfiguration, you still enable the setting.

$sel:availBlanking:JobTemplateSettings', jobTemplateSettings_availBlanking - Settings for ad avail blanking. Video can be blanked or overlaid with an image, and audio muted during SCTE-35 triggered ad avails.

$sel:extendedDataServices:JobTemplateSettings', jobTemplateSettings_extendedDataServices - If your source content has EIA-608 Line 21 Data Services, enable this feature to specify what MediaConvert does with the Extended Data Services (XDS) packets. You can choose to pass through XDS packets, or remove them from the output. For more information about XDS, see EIA-608 Line Data Services, section 9.5.1.5 05h Content Advisory.

$sel:motionImageInserter:JobTemplateSettings', jobTemplateSettings_motionImageInserter - Overlay motion graphics on top of your video. The motion graphics that you specify here appear on all outputs in all output groups. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/motion-graphic-overlay.html.

$sel:timecodeConfig:JobTemplateSettings', jobTemplateSettings_timecodeConfig - These settings control how the service handles timecodes throughout the job. These settings don't affect input clipping.

$sel:outputGroups:JobTemplateSettings', jobTemplateSettings_outputGroups - (OutputGroups) contains one group of settings for each set of outputs that share a common package type. All unpackaged files (MPEG-4, MPEG-2 TS, Quicktime, MXF, and no container) are grouped in a single output group as well. Required in (OutputGroups) is a group of settings that apply to the whole group. This required object depends on the value you set for (Type) under (OutputGroups)>(OutputGroupSettings). Type, settings object pairs are as follows. * FILE_GROUP_SETTINGS, FileGroupSettings * HLS_GROUP_SETTINGS, HlsGroupSettings * DASH_ISO_GROUP_SETTINGS, DashIsoGroupSettings * MS_SMOOTH_GROUP_SETTINGS, MsSmoothGroupSettings * CMAF_GROUP_SETTINGS, CmafGroupSettings

$sel:adAvailOffset:JobTemplateSettings', jobTemplateSettings_adAvailOffset - When specified, this offset (in milliseconds) is added to the input Ad Avail PTS time.

$sel:kantarWatermark:JobTemplateSettings', jobTemplateSettings_kantarWatermark - Use these settings only when you use Kantar watermarking. Specify the values that MediaConvert uses to generate and place Kantar watermarks in your output audio. These settings apply to every output in your job. In addition to specifying these values, you also need to store your Kantar credentials in AWS Secrets Manager. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/kantar-watermarking.html.

jobTemplateSettings_nielsenNonLinearWatermark :: Lens' JobTemplateSettings (Maybe NielsenNonLinearWatermarkSettings) Source #

Ignore these settings unless you are using Nielsen non-linear watermarking. Specify the values that MediaConvert uses to generate and place Nielsen watermarks in your output audio. In addition to specifying these values, you also need to set up your cloud TIC server. These settings apply to every output in your job. The MediaConvert implementation is currently compatible with the following Nielsen versions: Nielsen Watermark SDK Version 5.2.1, Nielsen NLM Watermark Engine Version 1.2.7, and Nielsen Watermark Authenticator [SID_TIC] Version [5.0.0].

jobTemplateSettings_esam :: Lens' JobTemplateSettings (Maybe EsamSettings) Source #

Settings for Event Signaling And Messaging (ESAM). If you don't do ad insertion, you can ignore these settings.

jobTemplateSettings_inputs :: Lens' JobTemplateSettings (Maybe [InputTemplate]) Source #

Use Inputs (inputs) to define the source file used in the transcode job. There can only be one input in a job template. Using the API, you can include multiple inputs when referencing a job template.

jobTemplateSettings_timedMetadataInsertion :: Lens' JobTemplateSettings (Maybe TimedMetadataInsertion) Source #

Enable Timed metadata insertion (TimedMetadataInsertion) to include ID3 tags in any HLS outputs. To include timed metadata, you must enable it here, enable it in each output container, and specify tags and timecodes in ID3 insertion (Id3Insertion) objects.

jobTemplateSettings_nielsenConfiguration :: Lens' JobTemplateSettings (Maybe NielsenConfiguration) Source #

Settings for your Nielsen configuration. If you don't do Nielsen measurement and analytics, ignore these settings. When you enable Nielsen configuration (nielsenConfiguration), MediaConvert enables PCM to ID3 tagging for all outputs in the job. To enable Nielsen configuration programmatically, include an instance of nielsenConfiguration in your JSON job specification. Even if you don't include any children of nielsenConfiguration, you still enable the setting.

jobTemplateSettings_availBlanking :: Lens' JobTemplateSettings (Maybe AvailBlanking) Source #

Settings for ad avail blanking. Video can be blanked or overlaid with an image, and audio muted during SCTE-35 triggered ad avails.

jobTemplateSettings_extendedDataServices :: Lens' JobTemplateSettings (Maybe ExtendedDataServices) Source #

If your source content has EIA-608 Line 21 Data Services, enable this feature to specify what MediaConvert does with the Extended Data Services (XDS) packets. You can choose to pass through XDS packets, or remove them from the output. For more information about XDS, see EIA-608 Line Data Services, section 9.5.1.5 05h Content Advisory.

jobTemplateSettings_motionImageInserter :: Lens' JobTemplateSettings (Maybe MotionImageInserter) Source #

Overlay motion graphics on top of your video. The motion graphics that you specify here appear on all outputs in all output groups. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/motion-graphic-overlay.html.

jobTemplateSettings_timecodeConfig :: Lens' JobTemplateSettings (Maybe TimecodeConfig) Source #

These settings control how the service handles timecodes throughout the job. These settings don't affect input clipping.

jobTemplateSettings_outputGroups :: Lens' JobTemplateSettings (Maybe [OutputGroup]) Source #

(OutputGroups) contains one group of settings for each set of outputs that share a common package type. All unpackaged files (MPEG-4, MPEG-2 TS, Quicktime, MXF, and no container) are grouped in a single output group as well. Required in (OutputGroups) is a group of settings that apply to the whole group. This required object depends on the value you set for (Type) under (OutputGroups)>(OutputGroupSettings). Type, settings object pairs are as follows. * FILE_GROUP_SETTINGS, FileGroupSettings * HLS_GROUP_SETTINGS, HlsGroupSettings * DASH_ISO_GROUP_SETTINGS, DashIsoGroupSettings * MS_SMOOTH_GROUP_SETTINGS, MsSmoothGroupSettings * CMAF_GROUP_SETTINGS, CmafGroupSettings

jobTemplateSettings_adAvailOffset :: Lens' JobTemplateSettings (Maybe Int) Source #

When specified, this offset (in milliseconds) is added to the input Ad Avail PTS time.

jobTemplateSettings_kantarWatermark :: Lens' JobTemplateSettings (Maybe KantarWatermarkSettings) Source #

Use these settings only when you use Kantar watermarking. Specify the values that MediaConvert uses to generate and place Kantar watermarks in your output audio. These settings apply to every output in your job. In addition to specifying these values, you also need to store your Kantar credentials in AWS Secrets Manager. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/kantar-watermarking.html.
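
The same lens patterns apply to JobTemplateSettings. As a minimal read-only sketch, the hypothetical helper below checks whether a template's settings define any output groups.

import Amazonka.MediaConvert.Types
import Control.Lens ((^.))
import Data.Maybe (fromMaybe)

-- True when the template settings contain at least one output group.
hasOutputGroups :: JobTemplateSettings -> Bool
hasOutputGroups settings =
  not (null (fromMaybe [] (settings ^. jobTemplateSettings_outputGroups)))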

KantarWatermarkSettings

data KantarWatermarkSettings Source #

Use these settings only when you use Kantar watermarking. Specify the values that MediaConvert uses to generate and place Kantar watermarks in your output audio. These settings apply to every output in your job. In addition to specifying these values, you also need to store your Kantar credentials in AWS Secrets Manager. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/kantar-watermarking.html.

See: newKantarWatermarkSettings smart constructor.

Constructors

KantarWatermarkSettings' 

Fields

  • metadata7 :: Maybe Text

    Additional metadata that MediaConvert sends to Kantar. Maximum length is 50 characters.

  • kantarServerUrl :: Maybe Text

    Provide the HTTPS endpoint to the Kantar server. You should get this endpoint from Kantar.

  • kantarLicenseId :: Maybe Natural

    Provide your Kantar license ID number. You should get this number from Kantar.

  • logDestination :: Maybe Text

    Optional. Specify the Amazon S3 bucket where you want MediaConvert to store your Kantar watermark XML logs. When you don't specify a bucket, MediaConvert doesn't save these logs. Note that your MediaConvert service role must provide access to this location. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/iam-role.html

  • fileOffset :: Maybe Double

    Optional. Specify an offset, in whole seconds, between the start of your output and the beginning of the watermarking. When you don't specify an offset, Kantar defaults to zero.

  • metadata6 :: Maybe Text

    Additional metadata that MediaConvert sends to Kantar. Maximum length is 50 characters.

  • metadata3 :: Maybe Text

    You can optionally use this field to specify the first timestamp that Kantar embeds during watermarking. Kantar suggests that you be very cautious when using this Kantar feature, and that you use it only on channels that are managed specifically for use with this feature by your Audience Measurement Operator. For more information about this feature, contact Kantar technical support.

  • credentialsSecretName :: Maybe Text

    Provide the name of the AWS Secrets Manager secret where your Kantar credentials are stored. Note that your MediaConvert service role must provide access to this secret. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/granting-permissions-for-mediaconvert-to-access-secrets-manager-secret.html. For instructions on creating a secret, see https://docs.aws.amazon.com/secretsmanager/latest/userguide/tutorials_basic.html, in the AWS Secrets Manager User Guide.

  • channelName :: Maybe Text

    Provide an audio channel name from your Kantar audio license.

  • contentReference :: Maybe Text

    Specify a unique identifier for Kantar to use for this piece of content.

  • metadata8 :: Maybe Text

    Additional metadata that MediaConvert sends to Kantar. Maximum length is 50 characters.

  • metadata5 :: Maybe Text

    Additional metadata that MediaConvert sends to Kantar. Maximum length is 50 characters.

  • metadata4 :: Maybe Text

    Additional metadata that MediaConvert sends to Kantar. Maximum length is 50 characters.

Instances

Instances details
Eq KantarWatermarkSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.KantarWatermarkSettings

Read KantarWatermarkSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.KantarWatermarkSettings

Show KantarWatermarkSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.KantarWatermarkSettings

Generic KantarWatermarkSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.KantarWatermarkSettings

Associated Types

type Rep KantarWatermarkSettings :: Type -> Type #

NFData KantarWatermarkSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.KantarWatermarkSettings

Methods

rnf :: KantarWatermarkSettings -> () #

Hashable KantarWatermarkSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.KantarWatermarkSettings

ToJSON KantarWatermarkSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.KantarWatermarkSettings

FromJSON KantarWatermarkSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.KantarWatermarkSettings

type Rep KantarWatermarkSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.KantarWatermarkSettings

type Rep KantarWatermarkSettings = D1 ('MetaData "KantarWatermarkSettings" "Amazonka.MediaConvert.Types.KantarWatermarkSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "KantarWatermarkSettings'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "metadata7") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "kantarServerUrl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "kantarLicenseId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))) :*: (S1 ('MetaSel ('Just "logDestination") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "fileOffset") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: S1 ('MetaSel ('Just "metadata6") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))) :*: ((S1 ('MetaSel ('Just "metadata3") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "credentialsSecretName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "channelName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))) :*: ((S1 ('MetaSel ('Just "contentReference") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "metadata8") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "metadata5") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "metadata4") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))))))

newKantarWatermarkSettings :: KantarWatermarkSettings Source #

Create a value of KantarWatermarkSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:metadata7:KantarWatermarkSettings', kantarWatermarkSettings_metadata7 - Additional metadata that MediaConvert sends to Kantar. Maximum length is 50 characters.

$sel:kantarServerUrl:KantarWatermarkSettings', kantarWatermarkSettings_kantarServerUrl - Provide the HTTPS endpoint to the Kantar server. You should get this endpoint from Kantar.

$sel:kantarLicenseId:KantarWatermarkSettings', kantarWatermarkSettings_kantarLicenseId - Provide your Kantar license ID number. You should get this number from Kantar.

$sel:logDestination:KantarWatermarkSettings', kantarWatermarkSettings_logDestination - Optional. Specify the Amazon S3 bucket where you want MediaConvert to store your Kantar watermark XML logs. When you don't specify a bucket, MediaConvert doesn't save these logs. Note that your MediaConvert service role must provide access to this location. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/iam-role.html

$sel:fileOffset:KantarWatermarkSettings', kantarWatermarkSettings_fileOffset - Optional. Specify an offset, in whole seconds, from the start of your output and the beginning of the watermarking. When you don't specify an offset, Kantar defaults to zero.

$sel:metadata6:KantarWatermarkSettings', kantarWatermarkSettings_metadata6 - Additional metadata that MediaConvert sends to Kantar. Maximum length is 50 characters.

$sel:metadata3:KantarWatermarkSettings', kantarWatermarkSettings_metadata3 - You can optionally use this field to specify the first timestamp that Kantar embeds during watermarking. Kantar suggests that you be very cautious when using this Kantar feature, and that you use it only on channels that are managed specifically for use with this feature by your Audience Measurement Operator. For more information about this feature, contact Kantar technical support.

$sel:credentialsSecretName:KantarWatermarkSettings', kantarWatermarkSettings_credentialsSecretName - Provide the name of the AWS Secrets Manager secret where your Kantar credentials are stored. Note that your MediaConvert service role must provide access to this secret. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/granting-permissions-for-mediaconvert-to-access-secrets-manager-secret.html. For instructions on creating a secret, see https://docs.aws.amazon.com/secretsmanager/latest/userguide/tutorials_basic.html, in the AWS Secrets Manager User Guide.

$sel:channelName:KantarWatermarkSettings', kantarWatermarkSettings_channelName - Provide an audio channel name from your Kantar audio license.

$sel:contentReference:KantarWatermarkSettings', kantarWatermarkSettings_contentReference - Specify a unique identifier for Kantar to use for this piece of content.

$sel:metadata8:KantarWatermarkSettings', kantarWatermarkSettings_metadata8 - Additional metadata that MediaConvert sends to Kantar. Maximum length is 50 characters.

$sel:metadata5:KantarWatermarkSettings', kantarWatermarkSettings_metadata5 - Additional metadata that MediaConvert sends to Kantar. Maximum length is 50 characters.

$sel:metadata4:KantarWatermarkSettings', kantarWatermarkSettings_metadata4 - Additional metadata that MediaConvert sends to Kantar. Maximum length is 50 characters.

kantarWatermarkSettings_metadata7 :: Lens' KantarWatermarkSettings (Maybe Text) Source #

Additional metadata that MediaConvert sends to Kantar. Maximum length is 50 characters.

kantarWatermarkSettings_kantarServerUrl :: Lens' KantarWatermarkSettings (Maybe Text) Source #

Provide the HTTPS endpoint to the Kantar server. You should get this endpoint from Kantar.

kantarWatermarkSettings_kantarLicenseId :: Lens' KantarWatermarkSettings (Maybe Natural) Source #

Provide your Kantar license ID number. You should get this number from Kantar.

kantarWatermarkSettings_logDestination :: Lens' KantarWatermarkSettings (Maybe Text) Source #

Optional. Specify the Amazon S3 bucket where you want MediaConvert to store your Kantar watermark XML logs. When you don't specify a bucket, MediaConvert doesn't save these logs. Note that your MediaConvert service role must provide access to this location. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/iam-role.html

kantarWatermarkSettings_fileOffset :: Lens' KantarWatermarkSettings (Maybe Double) Source #

Optional. Specify an offset, in whole seconds, from the start of your output and the beginning of the watermarking. When you don't specify an offset, Kantar defaults to zero.

kantarWatermarkSettings_metadata6 :: Lens' KantarWatermarkSettings (Maybe Text) Source #

Additional metadata that MediaConvert sends to Kantar. Maximum length is 50 characters.

kantarWatermarkSettings_metadata3 :: Lens' KantarWatermarkSettings (Maybe Text) Source #

You can optionally use this field to specify the first timestamp that Kantar embeds during watermarking. Kantar suggests that you be very cautious when using this Kantar feature, and that you use it only on channels that are managed specifically for use with this feature by your Audience Measurement Operator. For more information about this feature, contact Kantar technical support.

kantarWatermarkSettings_credentialsSecretName :: Lens' KantarWatermarkSettings (Maybe Text) Source #

Provide the name of the AWS Secrets Manager secret where your Kantar credentials are stored. Note that your MediaConvert service role must provide access to this secret. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/granting-permissions-for-mediaconvert-to-access-secrets-manager-secret.html. For instructions on creating a secret, see https://docs.aws.amazon.com/secretsmanager/latest/userguide/tutorials_basic.html, in the AWS Secrets Manager User Guide.

kantarWatermarkSettings_channelName :: Lens' KantarWatermarkSettings (Maybe Text) Source #

Provide an audio channel name from your Kantar audio license.

kantarWatermarkSettings_contentReference :: Lens' KantarWatermarkSettings (Maybe Text) Source #

Specify a unique identifier for Kantar to use for this piece of content.

kantarWatermarkSettings_metadata8 :: Lens' KantarWatermarkSettings (Maybe Text) Source #

Additional metadata that MediaConvert sends to Kantar. Maximum length is 50 characters.

kantarWatermarkSettings_metadata5 :: Lens' KantarWatermarkSettings (Maybe Text) Source #

Additional metadata that MediaConvert sends to Kantar. Maximum length is 50 characters.

kantarWatermarkSettings_metadata4 :: Lens' KantarWatermarkSettings (Maybe Text) Source #

Additional metadata that MediaConvert sends to Kantar. Maximum length is 50 characters.
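
As an illustrative sketch (not part of the generated reference): you can build a KantarWatermarkSettings value from the smart constructor and fill optional fields with the lenses above. This assumes the (&) and (?~) combinators from Control.Lens (any lens-compatible library works); the endpoint, license ID, secret name, and channel name are placeholders, not real values.

  {-# LANGUAGE OverloadedStrings #-}

  import Amazonka.MediaConvert.Types
  import Control.Lens ((&), (?~))

  -- Build KantarWatermarkSettings from the smart constructor and set a few
  -- optional fields via the generated lenses. All values are placeholders.
  exampleKantarSettings :: KantarWatermarkSettings
  exampleKantarSettings =
    newKantarWatermarkSettings
      & kantarWatermarkSettings_kantarServerUrl ?~ "https://example.kantar.endpoint" -- placeholder endpoint
      & kantarWatermarkSettings_kantarLicenseId ?~ 123456                            -- placeholder license ID
      & kantarWatermarkSettings_credentialsSecretName ?~ "example-kantar-secret"     -- placeholder secret name
      & kantarWatermarkSettings_channelName ?~ "ExampleChannel"                      -- placeholder channel name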

M2tsScte35Esam

data M2tsScte35Esam Source #

Settings for SCTE-35 signals from ESAM. Include this in your job settings to put SCTE-35 markers in your HLS and transport stream outputs at the insertion points that you specify in an ESAM XML document. Provide the document in the setting SCC XML (sccXml).

See: newM2tsScte35Esam smart constructor.

Constructors

M2tsScte35Esam' 

Fields

  • scte35EsamPid :: Maybe Natural

    Packet Identifier (PID) of the SCTE-35 stream in the transport stream generated by ESAM.

Instances

Instances details
Eq M2tsScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Esam

Read M2tsScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Esam

Show M2tsScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Esam

Generic M2tsScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Esam

Associated Types

type Rep M2tsScte35Esam :: Type -> Type #

NFData M2tsScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Esam

Methods

rnf :: M2tsScte35Esam -> () #

Hashable M2tsScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Esam

ToJSON M2tsScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Esam

FromJSON M2tsScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Esam

type Rep M2tsScte35Esam Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsScte35Esam

type Rep M2tsScte35Esam = D1 ('MetaData "M2tsScte35Esam" "Amazonka.MediaConvert.Types.M2tsScte35Esam" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "M2tsScte35Esam'" 'PrefixI 'True) (S1 ('MetaSel ('Just "scte35EsamPid") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))

newM2tsScte35Esam :: M2tsScte35Esam Source #

Create a value of M2tsScte35Esam with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:scte35EsamPid:M2tsScte35Esam', m2tsScte35Esam_scte35EsamPid - Packet Identifier (PID) of the SCTE-35 stream in the transport stream generated by ESAM.

m2tsScte35Esam_scte35EsamPid :: Lens' M2tsScte35Esam (Maybe Natural) Source #

Packet Identifier (PID) of the SCTE-35 stream in the transport stream generated by ESAM.
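
A minimal sketch of setting this PID through the lens above, again assuming the (&) and (?~) combinators from Control.Lens; the PID value 500 is purely illustrative.

  import Amazonka.MediaConvert.Types
  import Control.Lens ((&), (?~))

  -- Set the PID of the ESAM-generated SCTE-35 stream; 500 is illustrative.
  exampleScte35Esam :: M2tsScte35Esam
  exampleScte35Esam =
    newM2tsScte35Esam
      & m2tsScte35Esam_scte35EsamPid ?~ 500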

M2tsSettings

data M2tsSettings Source #

MPEG-2 TS container settings. These apply to outputs in a File output group when the output's container (ContainerType) is MPEG-2 Transport Stream (M2TS). In these assets, data is organized by the program map table (PMT). Each transport stream program contains subsets of data, including audio, video, and metadata. Each of these subsets of data has a numerical label called a packet identifier (PID). Each transport stream program corresponds to one MediaConvert output. The PMT lists the types of data in a program along with their PID. Downstream systems and players use the program map table to look up the PID for each type of data they access and then use the PIDs to locate specific data within the asset.

See: newM2tsSettings smart constructor.

Constructors

M2tsSettings' 

Fields

  • pmtPid :: Maybe Natural

    Specify the packet identifier (PID) for the program map table (PMT) itself. Default is 480.

  • videoPid :: Maybe Natural

    Specify the packet identifier (PID) of the elementary video stream in the transport stream.

  • bufferModel :: Maybe M2tsBufferModel

    Controls what buffer model to use for accurate interleaving. If set to MULTIPLEX, use multiplex buffer model. If set to NONE, this can lead to lower latency, but low-memory devices may not be able to play back the stream without interruptions.

  • programNumber :: Maybe Natural

    Use Program number (programNumber) to specify the program number used in the program map table (PMT) for this output. Default is 1. Program numbers and program map tables are parts of MPEG-2 transport stream containers, used for organizing data.

  • scte35Pid :: Maybe Natural

    Specify the packet identifier (PID) of the SCTE-35 stream in the transport stream.

  • minEbpInterval :: Maybe Natural

    When set, enforces that Encoder Boundary Points do not come within the specified time interval of each other by looking ahead at input video. If another EBP is going to come in within the specified time interval, the current EBP is not emitted, and the segment is "stretched" to the next marker. The lookahead value does not add latency to the system. The Live Event must be configured elsewhere to create sufficient latency to make the lookahead accurate.

  • transportStreamId :: Maybe Natural

    Specify the ID for the transport stream itself in the program map table for this output. Transport stream IDs and program map tables are parts of MPEG-2 transport stream containers, used for organizing data.

  • maxPcrInterval :: Maybe Natural

    Specify the maximum time, in milliseconds, between Program Clock References (PCRs) inserted into the transport stream.

  • fragmentTime :: Maybe Double

    The length, in seconds, of each fragment. Only used with EBP markers.

  • privateMetadataPid :: Maybe Natural

    Specify the packet identifier (PID) of the private metadata stream. Default is 503.

  • scte35Esam :: Maybe M2tsScte35Esam

    Include this in your job settings to put SCTE-35 markers in your HLS and transport stream outputs at the insertion points that you specify in an ESAM XML document. Provide the document in the setting SCC XML (sccXml).

  • audioDuration :: Maybe M2tsAudioDuration

    Specify this setting only when your output will be consumed by a downstream repackaging workflow that is sensitive to very small duration differences between video and audio. For this situation, choose Match video duration (MATCH_VIDEO_DURATION). In all other cases, keep the default value, Default codec duration (DEFAULT_CODEC_DURATION). When you choose Match video duration, MediaConvert pads the output audio streams with silence or trims them to ensure that the total duration of each audio stream is at least as long as the total duration of the video stream. After padding or trimming, the audio stream duration is no more than one frame longer than the video stream. MediaConvert applies audio padding or trimming only to the end of the last segment of the output. For unsegmented outputs, MediaConvert adds padding only to the end of the file. When you keep the default value, any minor discrepancies between audio and video duration will depend on your output audio codec.

  • pmtInterval :: Maybe Natural

    Specify the number of milliseconds between instances of the program map table (PMT) in the output transport stream.

  • dvbSdtSettings :: Maybe DvbSdtSettings

    Use these settings to insert a DVB Service Description Table (SDT) in the transport stream of this output. When you work directly in your JSON job specification, include this object only when your job has a transport stream output and the container settings contain the object M2tsSettings.

  • nullPacketBitrate :: Maybe Double

    Value in bits per second of extra null packets to insert into the transport stream. This can be used if a downstream encryption system requires periodic null packets.

  • audioBufferModel :: Maybe M2tsAudioBufferModel

    Selects between the DVB and ATSC buffer models for Dolby Digital audio.

  • timedMetadataPid :: Maybe Natural

    Specify the packet identifier (PID) for timed metadata in this output. Default is 502.

  • audioFramesPerPes :: Maybe Natural

    The number of audio frames to insert for each PES packet.

  • pcrPid :: Maybe Natural

    Specify the packet identifier (PID) for the program clock reference (PCR) in this output. If you do not specify a value, the service will use the value for Video PID (VideoPid).

  • segmentationMarkers :: Maybe M2tsSegmentationMarkers

    Inserts segmentation markers at each segmentation_time period. rai_segstart sets the Random Access Indicator bit in the adaptation field. rai_adapt sets the RAI bit and adds the current timecode in the private data bytes. psi_segstart inserts PAT and PMT tables at the start of segments. ebp adds Encoder Boundary Point information to the adaptation field as per OpenCable specification OC-SP-EBP-I01-130118. ebp_legacy adds Encoder Boundary Point information to the adaptation field using a legacy proprietary format.

  • dvbSubPids :: Maybe [Natural]

    Specify the packet identifiers (PIDs) for DVB subtitle data included in this output. Specify multiple PIDs as a JSON array. Default is the range 460-479.

  • scte35Source :: Maybe M2tsScte35Source

    For SCTE-35 markers from your input-- Choose Passthrough (PASSTHROUGH) if you want SCTE-35 markers that appear in your input to also appear in this output. Choose None (NONE) if you don't want SCTE-35 markers in this output. For SCTE-35 markers from an ESAM XML document-- Choose None (NONE). Also provide the ESAM XML as a string in the setting Signal processing notification XML (sccXml). Also enable ESAM SCTE-35 (include the property scte35Esam).

  • patInterval :: Maybe Natural

    The number of milliseconds between instances of this table in the output transport stream.

  • forceTsVideoEbpOrder :: Maybe M2tsForceTsVideoEbpOrder

    Keep the default value (DEFAULT) unless you know that your audio EBP markers are incorrectly appearing before your video EBP markers. To correct this problem, set this value to Force (FORCE).

  • esRateInPes :: Maybe M2tsEsRateInPes

    Controls whether to include the ES Rate field in the PES header.

  • bitrate :: Maybe Natural

    Specify the output bitrate of the transport stream in bits per second. Setting to 0 lets the muxer automatically determine the appropriate bitrate. Other common values are 3750000, 7500000, and 15000000.

  • audioPids :: Maybe [Natural]

    Specify the packet identifiers (PIDs) for any elementary audio streams you include in this output. Specify multiple PIDs as a JSON array. Default is the range 482-492.

  • dvbTeletextPid :: Maybe Natural

    Specify the packet identifier (PID) for DVB teletext data you include in this output. Default is 499.

  • nielsenId3 :: Maybe M2tsNielsenId3

    If INSERT, Nielsen inaudible tones for media tracking will be detected in the input audio and an equivalent ID3 tag will be inserted in the output.

  • dataPTSControl :: Maybe M2tsDataPtsControl

    If you select ALIGN_TO_VIDEO, MediaConvert writes captions and data packets with Presentation Timestamp (PTS) values greater than or equal to the first video packet PTS (MediaConvert drops captions and data packets with lesser PTS values). Keep the default value (AUTO) to allow all PTS values.

  • segmentationTime :: Maybe Double

    Specify the length, in seconds, of each segment. Required unless markers is set to _none_.

  • ebpAudioInterval :: Maybe M2tsEbpAudioInterval

    When set to VIDEO_AND_FIXED_INTERVALS, audio EBP markers will be added to partitions 3 and 4. The interval between these additional markers will be fixed, and will be slightly shorter than the video EBP marker interval. When set to VIDEO_INTERVAL, these additional markers will not be inserted. Only applicable when EBP segmentation markers are selected (segmentationMarkers is EBP or EBP_LEGACY).

  • dvbNitSettings :: Maybe DvbNitSettings

    Use these settings to insert a DVB Network Information Table (NIT) in the transport stream of this output. When you work directly in your JSON job specification, include this object only when your job has a transport stream output and the container settings contain the object M2tsSettings.

  • pcrControl :: Maybe M2tsPcrControl

    When set to PCR_EVERY_PES_PACKET, a Program Clock Reference value is inserted for every Packetized Elementary Stream (PES) header. This is effective only when the PCR PID is the same as the video or audio elementary stream.

  • ebpPlacement :: Maybe M2tsEbpPlacement

    Selects which PIDs to place EBP markers on. They can either be placed only on the video PID, or on both the video PID and all audio PIDs. Only applicable when EBP segmentation markers are selected (segmentationMarkers is EBP or EBP_LEGACY).

  • rateMode :: Maybe M2tsRateMode

    When set to CBR, inserts null packets into transport stream to fill specified bitrate. When set to VBR, the bitrate setting acts as the maximum bitrate, but the output will not be padded up to that bitrate.

  • segmentationStyle :: Maybe M2tsSegmentationStyle

    The segmentation style parameter controls how segmentation markers are inserted into the transport stream. With avails, it is possible that segments may be truncated, which can influence where future segmentation markers are inserted. When a segmentation style of "reset_cadence" is selected and a segment is truncated due to an avail, we will reset the segmentation cadence. This means the subsequent segment will have a duration of $segmentation_time seconds. When a segmentation style of "maintain_cadence" is selected and a segment is truncated due to an avail, we will not reset the segmentation cadence. This means the subsequent segment will likely be truncated as well. However, all segments after that will have a duration of $segmentation_time seconds. Note that EBP lookahead is a slight exception to this rule.

  • dvbTdtSettings :: Maybe DvbTdtSettings

    Use these settings to insert a DVB Time and Date Table (TDT) in the transport stream of this output. When you work directly in your JSON job specification, include this object only when your job has a transport stream output and the container settings contain the object M2tsSettings.

Instances

Instances details
Eq M2tsSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSettings

Read M2tsSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSettings

Show M2tsSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSettings

Generic M2tsSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSettings

Associated Types

type Rep M2tsSettings :: Type -> Type #

NFData M2tsSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSettings

Methods

rnf :: M2tsSettings -> () #

Hashable M2tsSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSettings

ToJSON M2tsSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSettings

FromJSON M2tsSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSettings

type Rep M2tsSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M2tsSettings

type Rep M2tsSettings = D1 ('MetaData "M2tsSettings" "Amazonka.MediaConvert.Types.M2tsSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "M2tsSettings'" 'PrefixI 'True) (((((S1 ('MetaSel ('Just "pmtPid") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "videoPid") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "bufferModel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe M2tsBufferModel)) :*: S1 ('MetaSel ('Just "programNumber") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))) :*: ((S1 ('MetaSel ('Just "scte35Pid") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "minEbpInterval") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "transportStreamId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "maxPcrInterval") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "fragmentTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)))))) :*: (((S1 ('MetaSel ('Just "privateMetadataPid") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "scte35Esam") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe M2tsScte35Esam))) :*: (S1 ('MetaSel ('Just "audioDuration") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe M2tsAudioDuration)) :*: (S1 ('MetaSel ('Just "pmtInterval") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "dvbSdtSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DvbSdtSettings))))) :*: ((S1 ('MetaSel ('Just "nullPacketBitrate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: S1 ('MetaSel ('Just "audioBufferModel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe M2tsAudioBufferModel))) :*: (S1 ('MetaSel ('Just "timedMetadataPid") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "audioFramesPerPes") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "pcrPid") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))))) :*: ((((S1 ('MetaSel ('Just "segmentationMarkers") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe M2tsSegmentationMarkers)) :*: S1 ('MetaSel ('Just "dvbSubPids") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [Natural]))) :*: (S1 ('MetaSel ('Just "scte35Source") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe M2tsScte35Source)) :*: S1 ('MetaSel ('Just "patInterval") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))) :*: ((S1 ('MetaSel ('Just "forceTsVideoEbpOrder") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe M2tsForceTsVideoEbpOrder)) :*: S1 ('MetaSel ('Just "esRateInPes") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe M2tsEsRateInPes))) :*: (S1 ('MetaSel ('Just "bitrate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just 
"audioPids") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [Natural])) :*: S1 ('MetaSel ('Just "dvbTeletextPid") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))) :*: (((S1 ('MetaSel ('Just "nielsenId3") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe M2tsNielsenId3)) :*: S1 ('MetaSel ('Just "dataPTSControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe M2tsDataPtsControl))) :*: (S1 ('MetaSel ('Just "segmentationTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: (S1 ('MetaSel ('Just "ebpAudioInterval") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe M2tsEbpAudioInterval)) :*: S1 ('MetaSel ('Just "dvbNitSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DvbNitSettings))))) :*: ((S1 ('MetaSel ('Just "pcrControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe M2tsPcrControl)) :*: S1 ('MetaSel ('Just "ebpPlacement") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe M2tsEbpPlacement))) :*: (S1 ('MetaSel ('Just "rateMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe M2tsRateMode)) :*: (S1 ('MetaSel ('Just "segmentationStyle") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe M2tsSegmentationStyle)) :*: S1 ('MetaSel ('Just "dvbTdtSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DvbTdtSettings)))))))))

newM2tsSettings :: M2tsSettings Source #

Create a value of M2tsSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:pmtPid:M2tsSettings', m2tsSettings_pmtPid - Specify the packet identifier (PID) for the program map table (PMT) itself. Default is 480.

$sel:videoPid:M2tsSettings', m2tsSettings_videoPid - Specify the packet identifier (PID) of the elementary video stream in the transport stream.

$sel:bufferModel:M2tsSettings', m2tsSettings_bufferModel - Controls what buffer model to use for accurate interleaving. If set to MULTIPLEX, use multiplex buffer model. If set to NONE, this can lead to lower latency, but low-memory devices may not be able to play back the stream without interruptions.

$sel:programNumber:M2tsSettings', m2tsSettings_programNumber - Use Program number (programNumber) to specify the program number used in the program map table (PMT) for this output. Default is 1. Program numbers and program map tables are parts of MPEG-2 transport stream containers, used for organizing data.

$sel:scte35Pid:M2tsSettings', m2tsSettings_scte35Pid - Specify the packet identifier (PID) of the SCTE-35 stream in the transport stream.

$sel:minEbpInterval:M2tsSettings', m2tsSettings_minEbpInterval - When set, enforces that Encoder Boundary Points do not come within the specified time interval of each other by looking ahead at input video. If another EBP is going to come in within the specified time interval, the current EBP is not emitted, and the segment is "stretched" to the next marker. The lookahead value does not add latency to the system. The Live Event must be configured elsewhere to create sufficient latency to make the lookahead accurate.

$sel:transportStreamId:M2tsSettings', m2tsSettings_transportStreamId - Specify the ID for the transport stream itself in the program map table for this output. Transport stream IDs and program map tables are parts of MPEG-2 transport stream containers, used for organizing data.

$sel:maxPcrInterval:M2tsSettings', m2tsSettings_maxPcrInterval - Specify the maximum time, in milliseconds, between Program Clock References (PCRs) inserted into the transport stream.

$sel:fragmentTime:M2tsSettings', m2tsSettings_fragmentTime - The length, in seconds, of each fragment. Only used with EBP markers.

$sel:privateMetadataPid:M2tsSettings', m2tsSettings_privateMetadataPid - Specify the packet identifier (PID) of the private metadata stream. Default is 503.

$sel:scte35Esam:M2tsSettings', m2tsSettings_scte35Esam - Include this in your job settings to put SCTE-35 markers in your HLS and transport stream outputs at the insertion points that you specify in an ESAM XML document. Provide the document in the setting SCC XML (sccXml).

$sel:audioDuration:M2tsSettings', m2tsSettings_audioDuration - Specify this setting only when your output will be consumed by a downstream repackaging workflow that is sensitive to very small duration differences between video and audio. For this situation, choose Match video duration (MATCH_VIDEO_DURATION). In all other cases, keep the default value, Default codec duration (DEFAULT_CODEC_DURATION). When you choose Match video duration, MediaConvert pads the output audio streams with silence or trims them to ensure that the total duration of each audio stream is at least as long as the total duration of the video stream. After padding or trimming, the audio stream duration is no more than one frame longer than the video stream. MediaConvert applies audio padding or trimming only to the end of the last segment of the output. For unsegmented outputs, MediaConvert adds padding only to the end of the file. When you keep the default value, any minor discrepancies between audio and video duration will depend on your output audio codec.

$sel:pmtInterval:M2tsSettings', m2tsSettings_pmtInterval - Specify the number of milliseconds between instances of the program map table (PMT) in the output transport stream.

$sel:dvbSdtSettings:M2tsSettings', m2tsSettings_dvbSdtSettings - Use these settings to insert a DVB Service Description Table (SDT) in the transport stream of this output. When you work directly in your JSON job specification, include this object only when your job has a transport stream output and the container settings contain the object M2tsSettings.

$sel:nullPacketBitrate:M2tsSettings', m2tsSettings_nullPacketBitrate - Value in bits per second of extra null packets to insert into the transport stream. This can be used if a downstream encryption system requires periodic null packets.

$sel:audioBufferModel:M2tsSettings', m2tsSettings_audioBufferModel - Selects between the DVB and ATSC buffer models for Dolby Digital audio.

$sel:timedMetadataPid:M2tsSettings', m2tsSettings_timedMetadataPid - Specify the packet identifier (PID) for timed metadata in this output. Default is 502.

$sel:audioFramesPerPes:M2tsSettings', m2tsSettings_audioFramesPerPes - The number of audio frames to insert for each PES packet.

$sel:pcrPid:M2tsSettings', m2tsSettings_pcrPid - Specify the packet identifier (PID) for the program clock reference (PCR) in this output. If you do not specify a value, the service will use the value for Video PID (VideoPid).

$sel:segmentationMarkers:M2tsSettings', m2tsSettings_segmentationMarkers - Inserts segmentation markers at each segmentation_time period. rai_segstart sets the Random Access Indicator bit in the adaptation field. rai_adapt sets the RAI bit and adds the current timecode in the private data bytes. psi_segstart inserts PAT and PMT tables at the start of segments. ebp adds Encoder Boundary Point information to the adaptation field as per OpenCable specification OC-SP-EBP-I01-130118. ebp_legacy adds Encoder Boundary Point information to the adaptation field using a legacy proprietary format.

$sel:dvbSubPids:M2tsSettings', m2tsSettings_dvbSubPids - Specify the packet identifiers (PIDs) for DVB subtitle data included in this output. Specify multiple PIDs as a JSON array. Default is the range 460-479.

$sel:scte35Source:M2tsSettings', m2tsSettings_scte35Source - For SCTE-35 markers from your input-- Choose Passthrough (PASSTHROUGH) if you want SCTE-35 markers that appear in your input to also appear in this output. Choose None (NONE) if you don't want SCTE-35 markers in this output. For SCTE-35 markers from an ESAM XML document-- Choose None (NONE). Also provide the ESAM XML as a string in the setting Signal processing notification XML (sccXml). Also enable ESAM SCTE-35 (include the property scte35Esam).

$sel:patInterval:M2tsSettings', m2tsSettings_patInterval - The number of milliseconds between instances of this table in the output transport stream.

$sel:forceTsVideoEbpOrder:M2tsSettings', m2tsSettings_forceTsVideoEbpOrder - Keep the default value (DEFAULT) unless you know that your audio EBP markers are incorrectly appearing before your video EBP markers. To correct this problem, set this value to Force (FORCE).

$sel:esRateInPes:M2tsSettings', m2tsSettings_esRateInPes - Controls whether to include the ES Rate field in the PES header.

$sel:bitrate:M2tsSettings', m2tsSettings_bitrate - Specify the output bitrate of the transport stream in bits per second. Setting to 0 lets the muxer automatically determine the appropriate bitrate. Other common values are 3750000, 7500000, and 15000000.

$sel:audioPids:M2tsSettings', m2tsSettings_audioPids - Specify the packet identifiers (PIDs) for any elementary audio streams you include in this output. Specify multiple PIDs as a JSON array. Default is the range 482-492.

$sel:dvbTeletextPid:M2tsSettings', m2tsSettings_dvbTeletextPid - Specify the packet identifier (PID) for DVB teletext data you include in this output. Default is 499.

$sel:nielsenId3:M2tsSettings', m2tsSettings_nielsenId3 - If INSERT, Nielsen inaudible tones for media tracking will be detected in the input audio and an equivalent ID3 tag will be inserted in the output.

$sel:dataPTSControl:M2tsSettings', m2tsSettings_dataPTSControl - If you select ALIGN_TO_VIDEO, MediaConvert writes captions and data packets with Presentation Timestamp (PTS) values greater than or equal to the first video packet PTS (MediaConvert drops captions and data packets with lesser PTS values). Keep the default value (AUTO) to allow all PTS values.

$sel:segmentationTime:M2tsSettings', m2tsSettings_segmentationTime - Specify the length, in seconds, of each segment. Required unless markers is set to _none_.

$sel:ebpAudioInterval:M2tsSettings', m2tsSettings_ebpAudioInterval - When set to VIDEO_AND_FIXED_INTERVALS, audio EBP markers will be added to partitions 3 and 4. The interval between these additional markers will be fixed, and will be slightly shorter than the video EBP marker interval. When set to VIDEO_INTERVAL, these additional markers will not be inserted. Only applicable when EBP segmentation markers are selected (segmentationMarkers is EBP or EBP_LEGACY).

$sel:dvbNitSettings:M2tsSettings', m2tsSettings_dvbNitSettings - Use these settings to insert a DVB Network Information Table (NIT) in the transport stream of this output. When you work directly in your JSON job specification, include this object only when your job has a transport stream output and the container settings contain the object M2tsSettings.

$sel:pcrControl:M2tsSettings', m2tsSettings_pcrControl - When set to PCR_EVERY_PES_PACKET, a Program Clock Reference value is inserted for every Packetized Elementary Stream (PES) header. This is effective only when the PCR PID is the same as the video or audio elementary stream.

$sel:ebpPlacement:M2tsSettings', m2tsSettings_ebpPlacement - Selects which PIDs to place EBP markers on. They can either be placed only on the video PID, or on both the video PID and all audio PIDs. Only applicable when EBP segmentation markers are selected (segmentationMarkers is EBP or EBP_LEGACY).

$sel:rateMode:M2tsSettings', m2tsSettings_rateMode - When set to CBR, inserts null packets into transport stream to fill specified bitrate. When set to VBR, the bitrate setting acts as the maximum bitrate, but the output will not be padded up to that bitrate.

$sel:segmentationStyle:M2tsSettings', m2tsSettings_segmentationStyle - The segmentation style parameter controls how segmentation markers are inserted into the transport stream. With avails, it is possible that segments may be truncated, which can influence where future segmentation markers are inserted. When a segmentation style of "reset_cadence" is selected and a segment is truncated due to an avail, we will reset the segmentation cadence. This means the subsequent segment will have a duration of $segmentation_time seconds. When a segmentation style of "maintain_cadence" is selected and a segment is truncated due to an avail, we will not reset the segmentation cadence. This means the subsequent segment will likely be truncated as well. However, all segments after that will have a duration of $segmentation_time seconds. Note that EBP lookahead is a slight exception to this rule.

$sel:dvbTdtSettings:M2tsSettings', m2tsSettings_dvbTdtSettings - Use these settings to insert a DVB Time and Date Table (TDT) in the transport stream of this output. When you work directly in your JSON job specification, include this object only when your job has a transport stream output and the container settings contain the object M2tsSettings.

m2tsSettings_pmtPid :: Lens' M2tsSettings (Maybe Natural) Source #

Specify the packet identifier (PID) for the program map table (PMT) itself. Default is 480.

m2tsSettings_videoPid :: Lens' M2tsSettings (Maybe Natural) Source #

Specify the packet identifier (PID) of the elementary video stream in the transport stream.

m2tsSettings_bufferModel :: Lens' M2tsSettings (Maybe M2tsBufferModel) Source #

Controls what buffer model to use for accurate interleaving. If set to MULTIPLEX, use multiplex buffer model. If set to NONE, this can lead to lower latency, but low-memory devices may not be able to play back the stream without interruptions.

m2tsSettings_programNumber :: Lens' M2tsSettings (Maybe Natural) Source #

Use Program number (programNumber) to specify the program number used in the program map table (PMT) for this output. Default is 1. Program numbers and program map tables are parts of MPEG-2 transport stream containers, used for organizing data.

m2tsSettings_scte35Pid :: Lens' M2tsSettings (Maybe Natural) Source #

Specify the packet identifier (PID) of the SCTE-35 stream in the transport stream.

m2tsSettings_minEbpInterval :: Lens' M2tsSettings (Maybe Natural) Source #

When set, enforces that Encoder Boundary Points do not come within the specified time interval of each other by looking ahead at input video. If another EBP is going to come in within the specified time interval, the current EBP is not emitted, and the segment is "stretched" to the next marker. The lookahead value does not add latency to the system. The Live Event must be configured elsewhere to create sufficient latency to make the lookahead accurate.

m2tsSettings_transportStreamId :: Lens' M2tsSettings (Maybe Natural) Source #

Specify the ID for the transport stream itself in the program map table for this output. Transport stream IDs and program map tables are parts of MPEG-2 transport stream containers, used for organizing data.

m2tsSettings_maxPcrInterval :: Lens' M2tsSettings (Maybe Natural) Source #

Specify the maximum time, in milliseconds, between Program Clock References (PCRs) inserted into the transport stream.

m2tsSettings_fragmentTime :: Lens' M2tsSettings (Maybe Double) Source #

The length, in seconds, of each fragment. Only used with EBP markers.

m2tsSettings_privateMetadataPid :: Lens' M2tsSettings (Maybe Natural) Source #

Specify the packet identifier (PID) of the private metadata stream. Default is 503.

m2tsSettings_scte35Esam :: Lens' M2tsSettings (Maybe M2tsScte35Esam) Source #

Include this in your job settings to put SCTE-35 markers in your HLS and transport stream outputs at the insertion points that you specify in an ESAM XML document. Provide the document in the setting SCC XML (sccXml).

m2tsSettings_audioDuration :: Lens' M2tsSettings (Maybe M2tsAudioDuration) Source #

Specify this setting only when your output will be consumed by a downstream repackaging workflow that is sensitive to very small duration differences between video and audio. For this situation, choose Match video duration (MATCH_VIDEO_DURATION). In all other cases, keep the default value, Default codec duration (DEFAULT_CODEC_DURATION). When you choose Match video duration, MediaConvert pads the output audio streams with silence or trims them to ensure that the total duration of each audio stream is at least as long as the total duration of the video stream. After padding or trimming, the audio stream duration is no more than one frame longer than the video stream. MediaConvert applies audio padding or trimming only to the end of the last segment of the output. For unsegmented outputs, MediaConvert adds padding only to the end of the file. When you keep the default value, any minor discrepancies between audio and video duration will depend on your output audio codec.

m2tsSettings_pmtInterval :: Lens' M2tsSettings (Maybe Natural) Source #

Specify the number of milliseconds between instances of the program map table (PMT) in the output transport stream.

m2tsSettings_dvbSdtSettings :: Lens' M2tsSettings (Maybe DvbSdtSettings) Source #

Use these settings to insert a DVB Service Description Table (SDT) in the transport stream of this output. When you work directly in your JSON job specification, include this object only when your job has a transport stream output and the container settings contain the object M2tsSettings.

m2tsSettings_nullPacketBitrate :: Lens' M2tsSettings (Maybe Double) Source #

Value in bits per second of extra null packets to insert into the transport stream. This can be used if a downstream encryption system requires periodic null packets.

m2tsSettings_audioBufferModel :: Lens' M2tsSettings (Maybe M2tsAudioBufferModel) Source #

Selects between the DVB and ATSC buffer models for Dolby Digital audio.

m2tsSettings_timedMetadataPid :: Lens' M2tsSettings (Maybe Natural) Source #

Specify the packet identifier (PID) for timed metadata in this output. Default is 502.

m2tsSettings_audioFramesPerPes :: Lens' M2tsSettings (Maybe Natural) Source #

The number of audio frames to insert for each PES packet.

m2tsSettings_pcrPid :: Lens' M2tsSettings (Maybe Natural) Source #

Specify the packet identifier (PID) for the program clock reference (PCR) in this output. If you do not specify a value, the service will use the value for Video PID (VideoPid).

m2tsSettings_segmentationMarkers :: Lens' M2tsSettings (Maybe M2tsSegmentationMarkers) Source #

Inserts segmentation markers at each segmentation_time period. rai_segstart sets the Random Access Indicator bit in the adaptation field. rai_adapt sets the RAI bit and adds the current timecode in the private data bytes. psi_segstart inserts PAT and PMT tables at the start of segments. ebp adds Encoder Boundary Point information to the adaptation field as per OpenCable specification OC-SP-EBP-I01-130118. ebp_legacy adds Encoder Boundary Point information to the adaptation field using a legacy proprietary format.

m2tsSettings_dvbSubPids :: Lens' M2tsSettings (Maybe [Natural]) Source #

Specify the packet identifiers (PIDs) for DVB subtitle data included in this output. Specify multiple PIDs as a JSON array. Default is the range 460-479.

m2tsSettings_scte35Source :: Lens' M2tsSettings (Maybe M2tsScte35Source) Source #

For SCTE-35 markers from your input-- Choose Passthrough (PASSTHROUGH) if you want SCTE-35 markers that appear in your input to also appear in this output. Choose None (NONE) if you don't want SCTE-35 markers in this output. For SCTE-35 markers from an ESAM XML document-- Choose None (NONE). Also provide the ESAM XML as a string in the setting Signal processing notification XML (sccXml). Also enable ESAM SCTE-35 (include the property scte35Esam).

m2tsSettings_patInterval :: Lens' M2tsSettings (Maybe Natural) Source #

The number of milliseconds between instances of this table in the output transport stream.

m2tsSettings_forceTsVideoEbpOrder :: Lens' M2tsSettings (Maybe M2tsForceTsVideoEbpOrder) Source #

Keep the default value (DEFAULT) unless you know that your audio EBP markers are incorrectly appearing before your video EBP markers. To correct this problem, set this value to Force (FORCE).

m2tsSettings_esRateInPes :: Lens' M2tsSettings (Maybe M2tsEsRateInPes) Source #

Controls whether to include the ES Rate field in the PES header.

m2tsSettings_bitrate :: Lens' M2tsSettings (Maybe Natural) Source #

Specify the output bitrate of the transport stream in bits per second. Setting to 0 lets the muxer automatically determine the appropriate bitrate. Other common values are 3750000, 7500000, and 15000000.

m2tsSettings_audioPids :: Lens' M2tsSettings (Maybe [Natural]) Source #

Specify the packet identifiers (PIDs) for any elementary audio streams you include in this output. Specify multiple PIDs as a JSON array. Default is the range 482-492.

m2tsSettings_dvbTeletextPid :: Lens' M2tsSettings (Maybe Natural) Source #

Specify the packet identifier (PID) for DVB teletext data you include in this output. Default is 499.

m2tsSettings_nielsenId3 :: Lens' M2tsSettings (Maybe M2tsNielsenId3) Source #

If INSERT, Nielsen inaudible tones for media tracking will be detected in the input audio and an equivalent ID3 tag will be inserted in the output.

m2tsSettings_dataPTSControl :: Lens' M2tsSettings (Maybe M2tsDataPtsControl) Source #

If you select ALIGN_TO_VIDEO, MediaConvert writes captions and data packets with Presentation Timestamp (PTS) values greater than or equal to the first video packet PTS (MediaConvert drops captions and data packets with lesser PTS values). Keep the default value (AUTO) to allow all PTS values.

m2tsSettings_segmentationTime :: Lens' M2tsSettings (Maybe Double) Source #

Specify the length, in seconds, of each segment. Required unless markers is set to _none_.

m2tsSettings_ebpAudioInterval :: Lens' M2tsSettings (Maybe M2tsEbpAudioInterval) Source #

When set to VIDEO_AND_FIXED_INTERVALS, audio EBP markers will be added to partitions 3 and 4. The interval between these additional markers will be fixed, and will be slightly shorter than the video EBP marker interval. When set to VIDEO_INTERVAL, these additional markers will not be inserted. Only applicable when EBP segmentation markers are selected (segmentationMarkers is EBP or EBP_LEGACY).

m2tsSettings_dvbNitSettings :: Lens' M2tsSettings (Maybe DvbNitSettings) Source #

Use these settings to insert a DVB Network Information Table (NIT) in the transport stream of this output. When you work directly in your JSON job specification, include this object only when your job has a transport stream output and the container settings contain the object M2tsSettings.

m2tsSettings_pcrControl :: Lens' M2tsSettings (Maybe M2tsPcrControl) Source #

When set to PCR_EVERY_PES_PACKET, a Program Clock Reference value is inserted for every Packetized Elementary Stream (PES) header. This is effective only when the PCR PID is the same as the video or audio elementary stream.

m2tsSettings_ebpPlacement :: Lens' M2tsSettings (Maybe M2tsEbpPlacement) Source #

Selects which PIDs to place EBP markers on. They can either be placed only on the video PID, or on both the video PID and all audio PIDs. Only applicable when EBP segmentation markers are selected (segmentationMarkers is EBP or EBP_LEGACY).

m2tsSettings_rateMode :: Lens' M2tsSettings (Maybe M2tsRateMode) Source #

When set to CBR, inserts null packets into transport stream to fill specified bitrate. When set to VBR, the bitrate setting acts as the maximum bitrate, but the output will not be padded up to that bitrate.

m2tsSettings_segmentationStyle :: Lens' M2tsSettings (Maybe M2tsSegmentationStyle) Source #

The segmentation style parameter controls how segmentation markers are inserted into the transport stream. With avails, it is possible that segments may be truncated, which can influence where future segmentation markers are inserted. When a segmentation style of "reset_cadence" is selected and a segment is truncated due to an avail, we will reset the segmentation cadence. This means the subsequent segment will have a duration of $segmentation_time seconds. When a segmentation style of "maintain_cadence" is selected and a segment is truncated due to an avail, we will not reset the segmentation cadence. This means the subsequent segment will likely be truncated as well. However, all segments after that will have a duration of $segmentation_time seconds. Note that EBP lookahead is a slight exception to this rule.

m2tsSettings_dvbTdtSettings :: Lens' M2tsSettings (Maybe DvbTdtSettings) Source #

Use these settings to insert a DVB Time and Date Table (TDT) in the transport stream of this output. When you work directly in your JSON job specification, include this object only when your job has a transport stream output and the container settings contain the object M2tsSettings.
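
A sketch of composing M2tsSettings with a handful of the lenses above, including a nested M2tsScte35Esam; it assumes the (&) and (?~) combinators from Control.Lens, and every PID and bitrate shown is illustrative rather than a recommendation.

  import Amazonka.MediaConvert.Types
  import Control.Lens ((&), (?~))

  -- Compose M2tsSettings from the smart constructor; all numeric values
  -- below are illustrative placeholders.
  exampleM2tsSettings :: M2tsSettings
  exampleM2tsSettings =
    newM2tsSettings
      & m2tsSettings_pmtPid ?~ 480
      & m2tsSettings_videoPid ?~ 481
      & m2tsSettings_audioPids ?~ [482, 483]
      & m2tsSettings_bitrate ?~ 3750000
      & m2tsSettings_scte35Esam ?~ (newM2tsScte35Esam & m2tsScte35Esam_scte35EsamPid ?~ 500)

Because M2tsSettings has ToJSON and FromJSON instances, a value built this way round-trips through the JSON job-settings representation via Data.Aeson.encode and Data.Aeson.decode.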

M3u8Settings

data M3u8Settings Source #

These settings relate to the MPEG-2 transport stream (MPEG2-TS) container for the MPEG2-TS segments in your HLS outputs.

See: newM3u8Settings smart constructor.

Constructors

M3u8Settings' 

Fields

  • pmtPid :: Maybe Natural

    Packet Identifier (PID) for the Program Map Table (PMT) in the transport stream.

  • videoPid :: Maybe Natural

    Packet Identifier (PID) of the elementary video stream in the transport stream.

  • programNumber :: Maybe Natural

    The value of the program number field in the Program Map Table.

  • scte35Pid :: Maybe Natural

    Packet Identifier (PID) of the SCTE-35 stream in the transport stream.

  • transportStreamId :: Maybe Natural

    The value of the transport stream ID field in the Program Map Table.

  • maxPcrInterval :: Maybe Natural

    Specify the maximum time, in milliseconds, between Program Clock References (PCRs) inserted into the transport stream.

  • privateMetadataPid :: Maybe Natural

    Packet Identifier (PID) of the private metadata stream in the transport stream.

  • audioDuration :: Maybe M3u8AudioDuration

    Specify this setting only when your output will be consumed by a downstream repackaging workflow that is sensitive to very small duration differences between video and audio. For this situation, choose Match video duration (MATCH_VIDEO_DURATION). In all other cases, keep the default value, Default codec duration (DEFAULT_CODEC_DURATION). When you choose Match video duration, MediaConvert pads the output audio streams with silence or trims them to ensure that the total duration of each audio stream is at least as long as the total duration of the video stream. After padding or trimming, the audio stream duration is no more than one frame longer than the video stream. MediaConvert applies audio padding or trimming only to the end of the last segment of the output. For unsegmented outputs, MediaConvert adds padding only to the end of the file. When you keep the default value, any minor discrepancies between audio and video duration will depend on your output audio codec.

  • pmtInterval :: Maybe Natural

    The number of milliseconds between instances of this table in the output transport stream.

  • timedMetadataPid :: Maybe Natural

    Packet Identifier (PID) of the timed metadata stream in the transport stream.

  • audioFramesPerPes :: Maybe Natural

    The number of audio frames to insert for each PES packet.

  • pcrPid :: Maybe Natural

    Packet Identifier (PID) of the Program Clock Reference (PCR) in the transport stream. When no value is given, the encoder will assign the same value as the Video PID.

  • timedMetadata :: Maybe TimedMetadata

    Applies only to HLS outputs. Use this setting to specify whether the service inserts the ID3 timed metadata from the input in this output.

  • scte35Source :: Maybe M3u8Scte35Source

    For SCTE-35 markers from your input-- Choose Passthrough (PASSTHROUGH) if you want SCTE-35 markers that appear in your input to also appear in this output. Choose None (NONE) if you don't want SCTE-35 markers in this output. For SCTE-35 markers from an ESAM XML document-- Choose None (NONE) if you don't want manifest conditioning. Choose Passthrough (PASSTHROUGH) and choose Ad markers (adMarkers) if you do want manifest conditioning. In both cases, also provide the ESAM XML as a string in the setting Signal processing notification XML (sccXml).

  • patInterval :: Maybe Natural

    The number of milliseconds between instances of this table in the output transport stream.

  • audioPids :: Maybe [Natural]

    Packet Identifier (PID) of the elementary audio stream(s) in the transport stream. Multiple values are accepted, and can be entered in ranges and/or by comma separation.

  • nielsenId3 :: Maybe M3u8NielsenId3

    If INSERT, Nielsen inaudible tones for media tracking will be detected in the input audio and an equivalent ID3 tag will be inserted in the output.

  • dataPTSControl :: Maybe M3u8DataPtsControl

    If you select ALIGN_TO_VIDEO, MediaConvert writes captions and data packets with Presentation Timestamp (PTS) values greater than or equal to the first video packet PTS (MediaConvert drops captions and data packets with lesser PTS values). Keep the default value (AUTO) to allow all PTS values.

  • pcrControl :: Maybe M3u8PcrControl

    When set to PCR_EVERY_PES_PACKET, a Program Clock Reference value is inserted for every Packetized Elementary Stream (PES) header. This parameter is effective only when the PCR PID is the same as the video or audio elementary stream.

Instances

Instances details
Eq M3u8Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Settings

Read M3u8Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Settings

Show M3u8Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Settings

Generic M3u8Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Settings

Associated Types

type Rep M3u8Settings :: Type -> Type #

NFData M3u8Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Settings

Methods

rnf :: M3u8Settings -> () #

Hashable M3u8Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Settings

ToJSON M3u8Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Settings

FromJSON M3u8Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Settings

type Rep M3u8Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.M3u8Settings

type Rep M3u8Settings = D1 ('MetaData "M3u8Settings" "Amazonka.MediaConvert.Types.M3u8Settings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "M3u8Settings'" 'PrefixI 'True) ((((S1 ('MetaSel ('Just "pmtPid") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "videoPid") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "programNumber") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "scte35Pid") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))) :*: ((S1 ('MetaSel ('Just "transportStreamId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "maxPcrInterval") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "privateMetadataPid") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "audioDuration") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe M3u8AudioDuration)) :*: S1 ('MetaSel ('Just "pmtInterval") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))) :*: (((S1 ('MetaSel ('Just "timedMetadataPid") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "audioFramesPerPes") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "pcrPid") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "timedMetadata") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe TimedMetadata)) :*: S1 ('MetaSel ('Just "scte35Source") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe M3u8Scte35Source))))) :*: ((S1 ('MetaSel ('Just "patInterval") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "audioPids") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [Natural]))) :*: (S1 ('MetaSel ('Just "nielsenId3") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe M3u8NielsenId3)) :*: (S1 ('MetaSel ('Just "dataPTSControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe M3u8DataPtsControl)) :*: S1 ('MetaSel ('Just "pcrControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe M3u8PcrControl))))))))

newM3u8Settings :: M3u8Settings Source #

Create a value of M3u8Settings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:pmtPid:M3u8Settings', m3u8Settings_pmtPid - Packet Identifier (PID) for the Program Map Table (PMT) in the transport stream.

$sel:videoPid:M3u8Settings', m3u8Settings_videoPid - Packet Identifier (PID) of the elementary video stream in the transport stream.

$sel:programNumber:M3u8Settings', m3u8Settings_programNumber - The value of the program number field in the Program Map Table.

$sel:scte35Pid:M3u8Settings', m3u8Settings_scte35Pid - Packet Identifier (PID) of the SCTE-35 stream in the transport stream.

$sel:transportStreamId:M3u8Settings', m3u8Settings_transportStreamId - The value of the transport stream ID field in the Program Map Table.

$sel:maxPcrInterval:M3u8Settings', m3u8Settings_maxPcrInterval - Specify the maximum time, in milliseconds, between Program Clock References (PCRs) inserted into the transport stream.

$sel:privateMetadataPid:M3u8Settings', m3u8Settings_privateMetadataPid - Packet Identifier (PID) of the private metadata stream in the transport stream.

$sel:audioDuration:M3u8Settings', m3u8Settings_audioDuration - Specify this setting only when your output will be consumed by a downstream repackaging workflow that is sensitive to very small duration differences between video and audio. For this situation, choose Match video duration (MATCH_VIDEO_DURATION). In all other cases, keep the default value, Default codec duration (DEFAULT_CODEC_DURATION). When you choose Match video duration, MediaConvert pads the output audio streams with silence or trims them to ensure that the total duration of each audio stream is at least as long as the total duration of the video stream. After padding or trimming, the audio stream duration is no more than one frame longer than the video stream. MediaConvert applies audio padding or trimming only to the end of the last segment of the output. For unsegmented outputs, MediaConvert adds padding only to the end of the file. When you keep the default value, any minor discrepancies between audio and video duration will depend on your output audio codec.

$sel:pmtInterval:M3u8Settings', m3u8Settings_pmtInterval - The number of milliseconds between instances of this table in the output transport stream.

$sel:timedMetadataPid:M3u8Settings', m3u8Settings_timedMetadataPid - Packet Identifier (PID) of the timed metadata stream in the transport stream.

$sel:audioFramesPerPes:M3u8Settings', m3u8Settings_audioFramesPerPes - The number of audio frames to insert for each PES packet.

$sel:pcrPid:M3u8Settings', m3u8Settings_pcrPid - Packet Identifier (PID) of the Program Clock Reference (PCR) in the transport stream. When no value is given, the encoder will assign the same value as the Video PID.

$sel:timedMetadata:M3u8Settings', m3u8Settings_timedMetadata - Applies only to HLS outputs. Use this setting to specify whether the service inserts the ID3 timed metadata from the input in this output.

$sel:scte35Source:M3u8Settings', m3u8Settings_scte35Source - For SCTE-35 markers from your input-- Choose Passthrough (PASSTHROUGH) if you want SCTE-35 markers that appear in your input to also appear in this output. Choose None (NONE) if you don't want SCTE-35 markers in this output. For SCTE-35 markers from an ESAM XML document-- Choose None (NONE) if you don't want manifest conditioning. Choose Passthrough (PASSTHROUGH) and choose Ad markers (adMarkers) if you do want manifest conditioning. In both cases, also provide the ESAM XML as a string in the setting Signal processing notification XML (sccXml).

$sel:patInterval:M3u8Settings', m3u8Settings_patInterval - The number of milliseconds between instances of this table in the output transport stream.

$sel:audioPids:M3u8Settings', m3u8Settings_audioPids - Packet Identifier (PID) of the elementary audio stream(s) in the transport stream. Multiple values are accepted, and can be entered in ranges and/or by comma separation.

$sel:nielsenId3:M3u8Settings', m3u8Settings_nielsenId3 - If INSERT, Nielsen inaudible tones for media tracking will be detected in the input audio and an equivalent ID3 tag will be inserted in the output.

$sel:dataPTSControl:M3u8Settings', m3u8Settings_dataPTSControl - If you select ALIGN_TO_VIDEO, MediaConvert writes captions and data packets with Presentation Timestamp (PTS) values greater than or equal to the first video packet PTS (MediaConvert drops captions and data packets with lesser PTS values). Keep the default value (AUTO) to allow all PTS values.

$sel:pcrControl:M3u8Settings', m3u8Settings_pcrControl - When set to PCR_EVERY_PES_PACKET a Program Clock Reference value is inserted for every Packetized Elementary Stream (PES) header. This parameter is effective only when the PCR PID is the same as the video or audio elementary stream.

m3u8Settings_pmtPid :: Lens' M3u8Settings (Maybe Natural) Source #

Packet Identifier (PID) for the Program Map Table (PMT) in the transport stream.

m3u8Settings_videoPid :: Lens' M3u8Settings (Maybe Natural) Source #

Packet Identifier (PID) of the elementary video stream in the transport stream.

m3u8Settings_programNumber :: Lens' M3u8Settings (Maybe Natural) Source #

The value of the program number field in the Program Map Table.

m3u8Settings_scte35Pid :: Lens' M3u8Settings (Maybe Natural) Source #

Packet Identifier (PID) of the SCTE-35 stream in the transport stream.

m3u8Settings_transportStreamId :: Lens' M3u8Settings (Maybe Natural) Source #

The value of the transport stream ID field in the Program Map Table.

m3u8Settings_maxPcrInterval :: Lens' M3u8Settings (Maybe Natural) Source #

Specify the maximum time, in milliseconds, between Program Clock References (PCRs) inserted into the transport stream.

m3u8Settings_privateMetadataPid :: Lens' M3u8Settings (Maybe Natural) Source #

Packet Identifier (PID) of the private metadata stream in the transport stream.

m3u8Settings_audioDuration :: Lens' M3u8Settings (Maybe M3u8AudioDuration) Source #

Specify this setting only when your output will be consumed by a downstream repackaging workflow that is sensitive to very small duration differences between video and audio. For this situation, choose Match video duration (MATCH_VIDEO_DURATION). In all other cases, keep the default value, Default codec duration (DEFAULT_CODEC_DURATION). When you choose Match video duration, MediaConvert pads the output audio streams with silence or trims them to ensure that the total duration of each audio stream is at least as long as the total duration of the video stream. After padding or trimming, the audio stream duration is no more than one frame longer than the video stream. MediaConvert applies audio padding or trimming only to the end of the last segment of the output. For unsegmented outputs, MediaConvert adds padding only to the end of the file. When you keep the default value, any minor discrepancies between audio and video duration will depend on your output audio codec.

m3u8Settings_pmtInterval :: Lens' M3u8Settings (Maybe Natural) Source #

The number of milliseconds between instances of this table in the output transport stream.

m3u8Settings_timedMetadataPid :: Lens' M3u8Settings (Maybe Natural) Source #

Packet Identifier (PID) of the timed metadata stream in the transport stream.

m3u8Settings_audioFramesPerPes :: Lens' M3u8Settings (Maybe Natural) Source #

The number of audio frames to insert for each PES packet.

m3u8Settings_pcrPid :: Lens' M3u8Settings (Maybe Natural) Source #

Packet Identifier (PID) of the Program Clock Reference (PCR) in the transport stream. When no value is given, the encoder will assign the same value as the Video PID.

m3u8Settings_timedMetadata :: Lens' M3u8Settings (Maybe TimedMetadata) Source #

Applies only to HLS outputs. Use this setting to specify whether the service inserts the ID3 timed metadata from the input in this output.

m3u8Settings_scte35Source :: Lens' M3u8Settings (Maybe M3u8Scte35Source) Source #

For SCTE-35 markers from your input-- Choose Passthrough (PASSTHROUGH) if you want SCTE-35 markers that appear in your input to also appear in this output. Choose None (NONE) if you don't want SCTE-35 markers in this output. For SCTE-35 markers from an ESAM XML document-- Choose None (NONE) if you don't want manifest conditioning. Choose Passthrough (PASSTHROUGH) and choose Ad markers (adMarkers) if you do want manifest conditioning. In both cases, also provide the ESAM XML as a string in the setting Signal processing notification XML (sccXml).

m3u8Settings_patInterval :: Lens' M3u8Settings (Maybe Natural) Source #

The number of milliseconds between instances of this table in the output transport stream.

m3u8Settings_audioPids :: Lens' M3u8Settings (Maybe [Natural]) Source #

Packet Identifier (PID) of the elementary audio stream(s) in the transport stream. Multiple values are accepted, and can be entered in ranges and/or by comma separation.

m3u8Settings_nielsenId3 :: Lens' M3u8Settings (Maybe M3u8NielsenId3) Source #

If INSERT, Nielsen inaudible tones for media tracking will be detected in the input audio and an equivalent ID3 tag will be inserted in the output.

m3u8Settings_dataPTSControl :: Lens' M3u8Settings (Maybe M3u8DataPtsControl) Source #

If you select ALIGN_TO_VIDEO, MediaConvert writes captions and data packets with Presentation Timestamp (PTS) values greater than or equal to the first video packet PTS (MediaConvert drops captions and data packets with lesser PTS values). Keep the default value (AUTO) to allow all PTS values.

m3u8Settings_pcrControl :: Lens' M3u8Settings (Maybe M3u8PcrControl) Source #

When set to PCR_EVERY_PES_PACKET a Program Clock Reference value is inserted for every Packetized Elementary Stream (PES) header. This parameter is effective only when the PCR PID is the same as the video or audio elementary stream.
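
As a usage sketch (not part of the generated API, and assuming the lens package is available for the (&) and (?~) operators), you can start from newM3u8Settings and override individual fields with the lenses above; everything you leave alone stays Nothing:

  import Amazonka.MediaConvert.Types
  import Control.Lens ((&), (?~))

  -- Emit the PAT and PMT once per second and pack four audio frames per PES packet.
  myM3u8Settings :: M3u8Settings
  myM3u8Settings =
    newM3u8Settings
      & m3u8Settings_pmtInterval ?~ 1000
      & m3u8Settings_patInterval ?~ 1000
      & m3u8Settings_audioFramesPerPes ?~ 4
      & m3u8Settings_programNumber ?~ 1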

MotionImageInserter

data MotionImageInserter Source #

Overlay motion graphics on top of your video. The motion graphics that you specify here appear on all outputs in all output groups. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/motion-graphic-overlay.html.

See: newMotionImageInserter smart constructor.

Constructors

MotionImageInserter' 

Fields

  • framerate :: Maybe MotionImageInsertionFramerate

    If your motion graphic asset is a .mov file, keep this setting unspecified. If your motion graphic asset is a series of .png files, specify the frame rate of the overlay in frames per second, as a fraction. For example, specify 24 fps as 24/1. Make sure that the number of images in your series matches the frame rate and your intended overlay duration. For example, if you want a 30-second overlay at 30 fps, you should have 900 .png images. This overlay frame rate doesn't need to match the frame rate of the underlying video.

  • startTime :: Maybe Text

    Specify when the motion overlay begins. Use timecode format (HH:MM:SS:FF or HH:MM:SS;FF). Make sure that the timecode you provide here takes into account how you have set up your timecode configuration under both job settings and input settings. The simplest way to do that is to set both to start at 0. If you need to set up your job to follow timecodes embedded in your source that don't start at zero, make sure that you specify a start time that is after the first embedded timecode. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/setting-up-timecode.html. Find job-wide and input timecode configuration settings in your JSON job settings specification at settings>timecodeConfig>source and settings>inputs>timecodeSource.

  • offset :: Maybe MotionImageInsertionOffset

    Use Offset to specify the placement of your motion graphic overlay on the video frame. Specify in pixels, from the upper-left corner of the frame. If you don't specify an offset, the service scales your overlay to the full size of the frame. Otherwise, the service inserts the overlay at its native resolution and scales the size up or down with any video scaling.

  • input :: Maybe Text

    Specify the .mov file or series of .png files that you want to overlay on your video. For .png files, provide the file name of the first file in the series. Make sure that the names of the .png files end with sequential numbers that specify the order that they are played in. For example, overlay_000.png, overlay_001.png, overlay_002.png, and so on. The sequence must start at zero, and each image file name must have the same number of digits. Pad your initial file names with enough zeros to complete the sequence. For example, if the first image is overlay_0.png, there can be only 10 images in the sequence, with the last image being overlay_9.png. But if the first image is overlay_00.png, there can be 100 images in the sequence.

  • insertionMode :: Maybe MotionImageInsertionMode

    Choose the type of motion graphic asset that you are providing for your overlay. You can choose either a .mov file or a series of .png files.

  • playback :: Maybe MotionImagePlayback

    Specify whether your motion graphic overlay repeats on a loop or plays only once.

Instances

Instances details
Eq MotionImageInserter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInserter

Read MotionImageInserter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInserter

Show MotionImageInserter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInserter

Generic MotionImageInserter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInserter

Associated Types

type Rep MotionImageInserter :: Type -> Type #

NFData MotionImageInserter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInserter

Methods

rnf :: MotionImageInserter -> () #

Hashable MotionImageInserter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInserter

ToJSON MotionImageInserter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInserter

FromJSON MotionImageInserter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInserter

type Rep MotionImageInserter Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInserter

newMotionImageInserter :: MotionImageInserter Source #

Create a value of MotionImageInserter with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:framerate:MotionImageInserter', motionImageInserter_framerate - If your motion graphic asset is a .mov file, keep this setting unspecified. If your motion graphic asset is a series of .png files, specify the frame rate of the overlay in frames per second, as a fraction. For example, specify 24 fps as 24/1. Make sure that the number of images in your series matches the frame rate and your intended overlay duration. For example, if you want a 30-second overlay at 30 fps, you should have 900 .png images. This overlay frame rate doesn't need to match the frame rate of the underlying video.

$sel:startTime:MotionImageInserter', motionImageInserter_startTime - Specify when the motion overlay begins. Use timecode format (HH:MM:SS:FF or HH:MM:SS;FF). Make sure that the timecode you provide here takes into account how you have set up your timecode configuration under both job settings and input settings. The simplest way to do that is to set both to start at 0. If you need to set up your job to follow timecodes embedded in your source that don't start at zero, make sure that you specify a start time that is after the first embedded timecode. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/setting-up-timecode.html. Find job-wide and input timecode configuration settings in your JSON job settings specification at settings>timecodeConfig>source and settings>inputs>timecodeSource.

$sel:offset:MotionImageInserter', motionImageInserter_offset - Use Offset to specify the placement of your motion graphic overlay on the video frame. Specify in pixels, from the upper-left corner of the frame. If you don't specify an offset, the service scales your overlay to the full size of the frame. Otherwise, the service inserts the overlay at its native resolution and scales the size up or down with any video scaling.

$sel:input:MotionImageInserter', motionImageInserter_input - Specify the .mov file or series of .png files that you want to overlay on your video. For .png files, provide the file name of the first file in the series. Make sure that the names of the .png files end with sequential numbers that specify the order that they are played in. For example, overlay_000.png, overlay_001.png, overlay_002.png, and so on. The sequence must start at zero, and each image file name must have the same number of digits. Pad your initial file names with enough zeros to complete the sequence. For example, if the first image is overlay_0.png, there can be only 10 images in the sequence, with the last image being overlay_9.png. But if the first image is overlay_00.png, there can be 100 images in the sequence.

$sel:insertionMode:MotionImageInserter', motionImageInserter_insertionMode - Choose the type of motion graphic asset that you are providing for your overlay. You can choose either a .mov file or a series of .png files.

$sel:playback:MotionImageInserter', motionImageInserter_playback - Specify whether your motion graphic overlay repeats on a loop or plays only once.

motionImageInserter_framerate :: Lens' MotionImageInserter (Maybe MotionImageInsertionFramerate) Source #

If your motion graphic asset is a .mov file, keep this setting unspecified. If your motion graphic asset is a series of .png files, specify the frame rate of the overlay in frames per second, as a fraction. For example, specify 24 fps as 24/1. Make sure that the number of images in your series matches the frame rate and your intended overlay duration. For example, if you want a 30-second overlay at 30 fps, you should have 900 .png images. This overlay frame rate doesn't need to match the frame rate of the underlying video.

motionImageInserter_startTime :: Lens' MotionImageInserter (Maybe Text) Source #

Specify when the motion overlay begins. Use timecode format (HH:MM:SS:FF or HH:MM:SS;FF). Make sure that the timecode you provide here takes into account how you have set up your timecode configuration under both job settings and input settings. The simplest way to do that is to set both to start at 0. If you need to set up your job to follow timecodes embedded in your source that don't start at zero, make sure that you specify a start time that is after the first embedded timecode. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/setting-up-timecode.html. Find job-wide and input timecode configuration settings in your JSON job settings specification at settings>timecodeConfig>source and settings>inputs>timecodeSource.

motionImageInserter_offset :: Lens' MotionImageInserter (Maybe MotionImageInsertionOffset) Source #

Use Offset to specify the placement of your motion graphic overlay on the video frame. Specify in pixels, from the upper-left corner of the frame. If you don't specify an offset, the service scales your overlay to the full size of the frame. Otherwise, the service inserts the overlay at its native resolution and scales the size up or down with any video scaling.

motionImageInserter_input :: Lens' MotionImageInserter (Maybe Text) Source #

Specify the .mov file or series of .png files that you want to overlay on your video. For .png files, provide the file name of the first file in the series. Make sure that the names of the .png files end with sequential numbers that specify the order that they are played in. For example, overlay_000.png, overlay_001.png, overlay_002.png, and so on. The sequence must start at zero, and each image file name must have the same number of digits. Pad your initial file names with enough zeros to complete the sequence. For example, if the first image is overlay_0.png, there can be only 10 images in the sequence, with the last image being overlay_9.png. But if the first image is overlay_00.png, there can be 100 images in the sequence.

motionImageInserter_insertionMode :: Lens' MotionImageInserter (Maybe MotionImageInsertionMode) Source #

Choose the type of motion graphic asset that you are providing for your overlay. You can choose either a .mov file or a series of .png files.

motionImageInserter_playback :: Lens' MotionImageInserter (Maybe MotionImagePlayback) Source #

Specify whether your motion graphic overlay repeats on a loop or plays only once.
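
As a usage sketch (not part of the generated API), the value below overlays a 24 fps series of .png files starting five seconds into the output. It assumes OverloadedStrings for the Text fields and the lens package's (&) and (?~) operators; the S3 path is a placeholder. MotionImageInsertionFramerate and MotionImageInsertionOffset are documented in the following sections.

  {-# LANGUAGE OverloadedStrings #-}
  import Amazonka.MediaConvert.Types
  import Control.Lens ((&), (?~))

  pngOverlay :: MotionImageInserter
  pngOverlay =
    newMotionImageInserter
      & motionImageInserter_input ?~ "s3://DOC-EXAMPLE-BUCKET/overlays/overlay_000.png"  -- placeholder path
      & motionImageInserter_startTime ?~ "00:00:05:00"
      & motionImageInserter_framerate ?~
          ( newMotionImageInsertionFramerate
              & motionImageInsertionFramerate_framerateNumerator ?~ 24
              & motionImageInsertionFramerate_framerateDenominator ?~ 1 )
      & motionImageInserter_offset ?~
          ( newMotionImageInsertionOffset
              & motionImageInsertionOffset_imageX ?~ 100
              & motionImageInsertionOffset_imageY ?~ 50 )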

MotionImageInsertionFramerate

data MotionImageInsertionFramerate Source #

For motion overlays that don't have a built-in frame rate, specify the frame rate of the overlay in frames per second, as a fraction. For example, specify 24 fps as 24/1. The overlay frame rate doesn't need to match the frame rate of the underlying video.

See: newMotionImageInsertionFramerate smart constructor.

Constructors

MotionImageInsertionFramerate' 

Fields

  • framerateDenominator :: Maybe Natural

    The bottom of the fraction that expresses your overlay frame rate. For example, if your frame rate is 24 fps, set this value to 1.

  • framerateNumerator :: Maybe Natural

    The top of the fraction that expresses your overlay frame rate. For example, if your frame rate is 24 fps, set this value to 24.

Instances

Instances details
Eq MotionImageInsertionFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionFramerate

Read MotionImageInsertionFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionFramerate

Show MotionImageInsertionFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionFramerate

Generic MotionImageInsertionFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionFramerate

Associated Types

type Rep MotionImageInsertionFramerate :: Type -> Type #

NFData MotionImageInsertionFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionFramerate

Hashable MotionImageInsertionFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionFramerate

ToJSON MotionImageInsertionFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionFramerate

FromJSON MotionImageInsertionFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionFramerate

type Rep MotionImageInsertionFramerate Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionFramerate

type Rep MotionImageInsertionFramerate = D1 ('MetaData "MotionImageInsertionFramerate" "Amazonka.MediaConvert.Types.MotionImageInsertionFramerate" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "MotionImageInsertionFramerate'" 'PrefixI 'True) (S1 ('MetaSel ('Just "framerateDenominator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "framerateNumerator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))

newMotionImageInsertionFramerate :: MotionImageInsertionFramerate Source #

Create a value of MotionImageInsertionFramerate with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:framerateDenominator:MotionImageInsertionFramerate', motionImageInsertionFramerate_framerateDenominator - The bottom of the fraction that expresses your overlay frame rate. For example, if your frame rate is 24 fps, set this value to 1.

$sel:framerateNumerator:MotionImageInsertionFramerate', motionImageInsertionFramerate_framerateNumerator - The top of the fraction that expresses your overlay frame rate. For example, if your frame rate is 24 fps, set this value to 24.

motionImageInsertionFramerate_framerateDenominator :: Lens' MotionImageInsertionFramerate (Maybe Natural) Source #

The bottom of the fraction that expresses your overlay frame rate. For example, if your frame rate is 24 fps, set this value to 1.

motionImageInsertionFramerate_framerateNumerator :: Lens' MotionImageInsertionFramerate (Maybe Natural) Source #

The top of the fraction that expresses your overlay frame rate. For example, if your frame rate is 24 fps, set this value to 24.
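
For example, a 24 fps overlay is expressed as numerator 24 over denominator 1. A minimal sketch (not part of the generated API; assumes the lens and aeson packages are available) that builds the value and serialises it with the ToJSON instance:

  import Amazonka.MediaConvert.Types
  import Control.Lens ((&), (?~))
  import Data.Aeson (encode)
  import qualified Data.ByteString.Lazy.Char8 as BL

  overlayFps :: MotionImageInsertionFramerate
  overlayFps =
    newMotionImageInsertionFramerate
      & motionImageInsertionFramerate_framerateNumerator ?~ 24
      & motionImageInsertionFramerate_framerateDenominator ?~ 1

  main :: IO ()
  main = BL.putStrLn (encode overlayFps)  -- serialise via the generated ToJSON instance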

MotionImageInsertionOffset

data MotionImageInsertionOffset Source #

Specify the offset between the upper-left corner of the video frame and the top-left corner of the overlay.

See: newMotionImageInsertionOffset smart constructor.

Constructors

MotionImageInsertionOffset' 

Fields

  • imageX :: Maybe Natural

    Set the distance, in pixels, between the overlay and the left edge of the video frame.

  • imageY :: Maybe Natural

    Set the distance, in pixels, between the overlay and the top edge of the video frame.

Instances

Instances details
Eq MotionImageInsertionOffset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionOffset

Read MotionImageInsertionOffset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionOffset

Show MotionImageInsertionOffset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionOffset

Generic MotionImageInsertionOffset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionOffset

Associated Types

type Rep MotionImageInsertionOffset :: Type -> Type #

NFData MotionImageInsertionOffset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionOffset

Hashable MotionImageInsertionOffset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionOffset

ToJSON MotionImageInsertionOffset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionOffset

FromJSON MotionImageInsertionOffset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionOffset

type Rep MotionImageInsertionOffset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MotionImageInsertionOffset

type Rep MotionImageInsertionOffset = D1 ('MetaData "MotionImageInsertionOffset" "Amazonka.MediaConvert.Types.MotionImageInsertionOffset" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "MotionImageInsertionOffset'" 'PrefixI 'True) (S1 ('MetaSel ('Just "imageX") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "imageY") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))

newMotionImageInsertionOffset :: MotionImageInsertionOffset Source #

Create a value of MotionImageInsertionOffset with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:imageX:MotionImageInsertionOffset', motionImageInsertionOffset_imageX - Set the distance, in pixels, between the overlay and the left edge of the video frame.

$sel:imageY:MotionImageInsertionOffset', motionImageInsertionOffset_imageY - Set the distance, in pixels, between the overlay and the top edge of the video frame.

motionImageInsertionOffset_imageX :: Lens' MotionImageInsertionOffset (Maybe Natural) Source #

Set the distance, in pixels, between the overlay and the left edge of the video frame.

motionImageInsertionOffset_imageY :: Lens' MotionImageInsertionOffset (Maybe Natural) Source #

Set the distance, in pixels, between the overlay and the top edge of the video frame.
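
A brief sketch (not part of the generated API; assumes the lens operators): set both coordinates to pin the overlay at its native resolution, or leave the whole offset unset to let the service scale the overlay to the full frame, as described above.

  import Amazonka.MediaConvert.Types
  import Control.Lens ((&), (?~), (^.))
  import Numeric.Natural (Natural)

  -- Place the overlay 100 px from the left edge and 50 px from the top edge.
  overlayOffset :: MotionImageInsertionOffset
  overlayOffset =
    newMotionImageInsertionOffset
      & motionImageInsertionOffset_imageX ?~ 100
      & motionImageInsertionOffset_imageY ?~ 50

  -- Reading a field back with (^.) returns the Maybe-wrapped value, here Just 100.
  leftOffset :: Maybe Natural
  leftOffset = overlayOffset ^. motionImageInsertionOffset_imageX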

MovSettings

data MovSettings Source #

These settings relate to your QuickTime MOV output container.

See: newMovSettings smart constructor.

Constructors

MovSettings' 

Fields

  • reference :: Maybe MovReference

    Always keep the default value (SELF_CONTAINED) for this setting.

  • cslgAtom :: Maybe MovCslgAtom

    When enabled, file composition times will start at zero, composition times in the 'ctts' (composition time to sample) box for B-frames will be negative, and a 'cslg' (composition shift least greatest) box will be included per 14496-1 amendment 1. This improves compatibility with Apple players and tools.

  • mpeg2FourCCControl :: Maybe MovMpeg2FourCCControl

    When set to XDCAM, writes MPEG2 video streams into the QuickTime file using XDCAM fourcc codes. This increases compatibility with Apple editors and players, but may decrease compatibility with other players. Only applicable when the video codec is MPEG2.

  • paddingControl :: Maybe MovPaddingControl

    To make this output compatible with Omneon, keep the default value, OMNEON. Unless you need Omneon compatibility, set this value to NONE. When you keep the default value, OMNEON, MediaConvert increases the length of the edit list atom. This might cause file rejections when a recipient of the output file doesn't expect this extra padding.

  • clapAtom :: Maybe MovClapAtom

    When enabled, include 'clap' atom if appropriate for the video output settings.

Instances

Instances details
Eq MovSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovSettings

Read MovSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovSettings

Show MovSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovSettings

Generic MovSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovSettings

Associated Types

type Rep MovSettings :: Type -> Type #

NFData MovSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovSettings

Methods

rnf :: MovSettings -> () #

Hashable MovSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovSettings

ToJSON MovSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovSettings

FromJSON MovSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovSettings

type Rep MovSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MovSettings

type Rep MovSettings = D1 ('MetaData "MovSettings" "Amazonka.MediaConvert.Types.MovSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "MovSettings'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "reference") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MovReference)) :*: S1 ('MetaSel ('Just "cslgAtom") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MovCslgAtom))) :*: (S1 ('MetaSel ('Just "mpeg2FourCCControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MovMpeg2FourCCControl)) :*: (S1 ('MetaSel ('Just "paddingControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MovPaddingControl)) :*: S1 ('MetaSel ('Just "clapAtom") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MovClapAtom))))))

newMovSettings :: MovSettings Source #

Create a value of MovSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:reference:MovSettings', movSettings_reference - Always keep the default value (SELF_CONTAINED) for this setting.

$sel:cslgAtom:MovSettings', movSettings_cslgAtom - When enabled, file composition times will start at zero, composition times in the 'ctts' (composition time to sample) box for B-frames will be negative, and a 'cslg' (composition shift least greatest) box will be included per 14496-1 amendment 1. This improves compatibility with Apple players and tools.

$sel:mpeg2FourCCControl:MovSettings', movSettings_mpeg2FourCCControl - When set to XDCAM, writes MPEG2 video streams into the QuickTime file using XDCAM fourcc codes. This increases compatibility with Apple editors and players, but may decrease compatibility with other players. Only applicable when the video codec is MPEG2.

$sel:paddingControl:MovSettings', movSettings_paddingControl - To make this output compatible with Omneon, keep the default value, OMNEON. Unless you need Omneon compatibility, set this value to NONE. When you keep the default value, OMNEON, MediaConvert increases the length of the edit list atom. This might cause file rejections when a recipient of the output file doesn't expect this extra padding.

$sel:clapAtom:MovSettings', movSettings_clapAtom - When enabled, include 'clap' atom if appropriate for the video output settings.

movSettings_reference :: Lens' MovSettings (Maybe MovReference) Source #

Always keep the default value (SELF_CONTAINED) for this setting.

movSettings_cslgAtom :: Lens' MovSettings (Maybe MovCslgAtom) Source #

When enabled, file composition times will start at zero, composition times in the 'ctts' (composition time to sample) box for B-frames will be negative, and a 'cslg' (composition shift least greatest) box will be included per 14496-1 amendment 1. This improves compatibility with Apple players and tools.

movSettings_mpeg2FourCCControl :: Lens' MovSettings (Maybe MovMpeg2FourCCControl) Source #

When set to XDCAM, writes MPEG2 video streams into the QuickTime file using XDCAM fourcc codes. This increases compatibility with Apple editors and players, but may decrease compatibility with other players. Only applicable when the video codec is MPEG2.

movSettings_paddingControl :: Lens' MovSettings (Maybe MovPaddingControl) Source #

To make this output compatible with Omneon, keep the default value, OMNEON. Unless you need Omneon compatibility, set this value to NONE. When you keep the default value, OMNEON, MediaConvert increases the length of the edit list atom. This might cause file rejections when a recipient of the output file doesn't expect this extra padding.

movSettings_clapAtom :: Lens' MovSettings (Maybe MovClapAtom) Source #

When enabled, include 'clap' atom if appropriate for the video output settings.
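
As a sketch only (not part of the generated API): assuming the MovPaddingControl and MovMpeg2FourCCControl enums expose pattern synonyms in the TypeName_VALUE form used elsewhere in this package, and that the lens operators are available, an XDCAM-friendly MOV container without Omneon padding could be described as:

  import Amazonka.MediaConvert.Types
  import Control.Lens ((&), (?~))

  -- The constructor names below are assumed; check the MovPaddingControl and
  -- MovMpeg2FourCCControl pages of this package for the exact pattern synonyms.
  xdcamMov :: MovSettings
  xdcamMov =
    newMovSettings
      & movSettings_paddingControl ?~ MovPaddingControl_NONE
      & movSettings_mpeg2FourCCControl ?~ MovMpeg2FourCCControl_XDCAM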

Mp2Settings

data Mp2Settings Source #

Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value MP2.

See: newMp2Settings smart constructor.

Constructors

Mp2Settings' 

Fields

  • channels :: Maybe Natural

    Set Channels to specify the number of channels in this output audio track. Choosing Mono in the console will give you 1 output channel; choosing Stereo will give you 2. In the API, valid values are 1 and 2.

  • sampleRate :: Maybe Natural

    Sample rate in Hz.

  • bitrate :: Maybe Natural

    Specify the average bitrate in bits per second.

Instances

Instances details
Eq Mp2Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp2Settings

Read Mp2Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp2Settings

Show Mp2Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp2Settings

Generic Mp2Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp2Settings

Associated Types

type Rep Mp2Settings :: Type -> Type #

NFData Mp2Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp2Settings

Methods

rnf :: Mp2Settings -> () #

Hashable Mp2Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp2Settings

ToJSON Mp2Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp2Settings

FromJSON Mp2Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp2Settings

type Rep Mp2Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp2Settings

type Rep Mp2Settings = D1 ('MetaData "Mp2Settings" "Amazonka.MediaConvert.Types.Mp2Settings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "Mp2Settings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "channels") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "sampleRate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "bitrate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))

newMp2Settings :: Mp2Settings Source #

Create a value of Mp2Settings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:channels:Mp2Settings', mp2Settings_channels - Set Channels to specify the number of channels in this output audio track. Choosing Mono in the console will give you 1 output channel; choosing Stereo will give you 2. In the API, valid values are 1 and 2.

$sel:sampleRate:Mp2Settings', mp2Settings_sampleRate - Sample rate in Hz.

$sel:bitrate:Mp2Settings', mp2Settings_bitrate - Specify the average bitrate in bits per second.

mp2Settings_channels :: Lens' Mp2Settings (Maybe Natural) Source #

Set Channels to specify the number of channels in this output audio track. Choosing Mono in the console will give you 1 output channel; choosing Stereo will give you 2. In the API, valid values are 1 and 2.

mp2Settings_sampleRate :: Lens' Mp2Settings (Maybe Natural) Source #

Sample rate in Hz.

mp2Settings_bitrate :: Lens' Mp2Settings (Maybe Natural) Source #

Specify the average bitrate in bits per second.
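
As a usage sketch (not part of the generated API; assumes the lens operators), a stereo 48 kHz MP2 track at 192 kbit/s:

  import Amazonka.MediaConvert.Types
  import Control.Lens ((&), (?~))

  stereoMp2 :: Mp2Settings
  stereoMp2 =
    newMp2Settings
      & mp2Settings_channels ?~ 2
      & mp2Settings_sampleRate ?~ 48000  -- Hz
      & mp2Settings_bitrate ?~ 192000    -- bits per second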

Mp3Settings

data Mp3Settings Source #

Required when you set Codec, under AudioDescriptions>CodecSettings, to the value MP3.

See: newMp3Settings smart constructor.

Constructors

Mp3Settings' 

Fields

  • channels :: Maybe Natural

    Specify the number of channels in this output audio track. Choosing Mono on the console gives you 1 output channel; choosing Stereo gives you 2. In the API, valid values are 1 and 2.

  • rateControlMode :: Maybe Mp3RateControlMode

    Specify whether the service encodes this MP3 audio output with a constant bitrate (CBR) or a variable bitrate (VBR).

  • sampleRate :: Maybe Natural

    Sample rate in Hz.

  • bitrate :: Maybe Natural

    Specify the average bitrate in bits per second.

  • vbrQuality :: Maybe Natural

    Required when you set Bitrate control mode (rateControlMode) to VBR. Specify the audio quality of this MP3 output from 0 (highest quality) to 9 (lowest quality).

Instances

Instances details
Eq Mp3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3Settings

Read Mp3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3Settings

Show Mp3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3Settings

Generic Mp3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3Settings

Associated Types

type Rep Mp3Settings :: Type -> Type #

NFData Mp3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3Settings

Methods

rnf :: Mp3Settings -> () #

Hashable Mp3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3Settings

ToJSON Mp3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3Settings

FromJSON Mp3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3Settings

type Rep Mp3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp3Settings

type Rep Mp3Settings = D1 ('MetaData "Mp3Settings" "Amazonka.MediaConvert.Types.Mp3Settings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "Mp3Settings'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "channels") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "rateControlMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mp3RateControlMode))) :*: (S1 ('MetaSel ('Just "sampleRate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "bitrate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "vbrQuality") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))))

newMp3Settings :: Mp3Settings Source #

Create a value of Mp3Settings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:channels:Mp3Settings', mp3Settings_channels - Specify the number of channels in this output audio track. Choosing Mono on the console gives you 1 output channel; choosing Stereo gives you 2. In the API, valid values are 1 and 2.

$sel:rateControlMode:Mp3Settings', mp3Settings_rateControlMode - Specify whether the service encodes this MP3 audio output with a constant bitrate (CBR) or a variable bitrate (VBR).

$sel:sampleRate:Mp3Settings', mp3Settings_sampleRate - Sample rate in Hz.

$sel:bitrate:Mp3Settings', mp3Settings_bitrate - Specify the average bitrate in bits per second.

$sel:vbrQuality:Mp3Settings', mp3Settings_vbrQuality - Required when you set Bitrate control mode (rateControlMode) to VBR. Specify the audio quality of this MP3 output from 0 (highest quality) to 9 (lowest quality).

mp3Settings_channels :: Lens' Mp3Settings (Maybe Natural) Source #

Specify the number of channels in this output audio track. Choosing Mono on the console gives you 1 output channel; choosing Stereo gives you 2. In the API, valid values are 1 and 2.

mp3Settings_rateControlMode :: Lens' Mp3Settings (Maybe Mp3RateControlMode) Source #

Specify whether the service encodes this MP3 audio output with a constant bitrate (CBR) or a variable bitrate (VBR).

mp3Settings_sampleRate :: Lens' Mp3Settings (Maybe Natural) Source #

Sample rate in Hz.

mp3Settings_bitrate :: Lens' Mp3Settings (Maybe Natural) Source #

Specify the average bitrate in bits per second.

mp3Settings_vbrQuality :: Lens' Mp3Settings (Maybe Natural) Source #

Required when you set Bitrate control mode (rateControlMode) to VBR. Specify the audio quality of this MP3 output from 0 (highest quality) to 9 (lowest quality).
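
As a usage sketch (not part of the generated API; assumes the lens operators, and that Mp3RateControlMode exposes a pattern synonym in the TypeName_VALUE form used elsewhere in this package), a variable-bitrate stereo MP3 at quality level 2:

  import Amazonka.MediaConvert.Types
  import Control.Lens ((&), (?~))

  -- Mp3RateControlMode_VBR is the assumed constructor name; see the
  -- Mp3RateControlMode documentation for the exact pattern synonym.
  vbrMp3 :: Mp3Settings
  vbrMp3 =
    newMp3Settings
      & mp3Settings_channels ?~ 2
      & mp3Settings_sampleRate ?~ 44100
      & mp3Settings_rateControlMode ?~ Mp3RateControlMode_VBR
      & mp3Settings_vbrQuality ?~ 2  -- 0 = highest quality, 9 = lowest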

Mp4Settings

data Mp4Settings Source #

These settings relate to your MP4 output container. You can create audio only outputs with this container. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/supported-codecs-containers-audio-only.html#output-codecs-and-containers-supported-for-audio-only.

See: newMp4Settings smart constructor.

Constructors

Mp4Settings' 

Fields

  • moovPlacement :: Maybe Mp4MoovPlacement

    If set to PROGRESSIVE_DOWNLOAD, the MOOV atom is relocated to the beginning of the archive as required for progressive downloading. Otherwise it is placed normally at the end.

  • cttsVersion :: Maybe Natural

    Ignore this setting unless compliance to the CTTS box version specification matters in your workflow. Specify a value of 1 to set your CTTS box version to 1 and make your output compliant with the specification. When you specify a value of 1, you must also set CSLG atom (cslgAtom) to the value INCLUDE. Keep the default value 0 to set your CTTS box version to 0. This can provide backward compatibility for some players and packagers.

  • freeSpaceBox :: Maybe Mp4FreeSpaceBox

    Inserts a free-space box immediately after the moov box.

  • audioDuration :: Maybe CmfcAudioDuration

    Specify this setting only when your output will be consumed by a downstream repackaging workflow that is sensitive to very small duration differences between video and audio. For this situation, choose Match video duration (MATCH_VIDEO_DURATION). In all other cases, keep the default value, Default codec duration (DEFAULT_CODEC_DURATION). When you choose Match video duration, MediaConvert pads the output audio streams with silence or trims them to ensure that the total duration of each audio stream is at least as long as the total duration of the video stream. After padding or trimming, the audio stream duration is no more than one frame longer than the video stream. MediaConvert applies audio padding or trimming only to the end of the last segment of the output. For unsegmented outputs, MediaConvert adds padding only to the end of the file. When you keep the default value, any minor discrepancies between audio and video duration will depend on your output audio codec.

  • mp4MajorBrand :: Maybe Text

    Overrides the "Major Brand" field in the output file. Usually not necessary to specify.

  • cslgAtom :: Maybe Mp4CslgAtom

    When enabled, file composition times will start at zero, composition times in the 'ctts' (composition time to sample) box for B-frames will be negative, and a 'cslg' (composition shift least greatest) box will be included per 14496-1 amendment 1. This improves compatibility with Apple players and tools.

Instances

Instances details
Eq Mp4Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4Settings

Read Mp4Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4Settings

Show Mp4Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4Settings

Generic Mp4Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4Settings

Associated Types

type Rep Mp4Settings :: Type -> Type #

NFData Mp4Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4Settings

Methods

rnf :: Mp4Settings -> () #

Hashable Mp4Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4Settings

ToJSON Mp4Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4Settings

FromJSON Mp4Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4Settings

type Rep Mp4Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mp4Settings

type Rep Mp4Settings = D1 ('MetaData "Mp4Settings" "Amazonka.MediaConvert.Types.Mp4Settings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "Mp4Settings'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "moovPlacement") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mp4MoovPlacement)) :*: (S1 ('MetaSel ('Just "cttsVersion") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "freeSpaceBox") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mp4FreeSpaceBox)))) :*: (S1 ('MetaSel ('Just "audioDuration") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CmfcAudioDuration)) :*: (S1 ('MetaSel ('Just "mp4MajorBrand") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "cslgAtom") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mp4CslgAtom))))))

newMp4Settings :: Mp4Settings Source #

Create a value of Mp4Settings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:moovPlacement:Mp4Settings', mp4Settings_moovPlacement - If set to PROGRESSIVE_DOWNLOAD, the MOOV atom is relocated to the beginning of the archive as required for progressive downloading. Otherwise it is placed normally at the end.

$sel:cttsVersion:Mp4Settings', mp4Settings_cttsVersion - Ignore this setting unless compliance to the CTTS box version specification matters in your workflow. Specify a value of 1 to set your CTTS box version to 1 and make your output compliant with the specification. When you specify a value of 1, you must also set CSLG atom (cslgAtom) to the value INCLUDE. Keep the default value 0 to set your CTTS box version to 0. This can provide backward compatibility for some players and packagers.

$sel:freeSpaceBox:Mp4Settings', mp4Settings_freeSpaceBox - Inserts a free-space box immediately after the moov box.

$sel:audioDuration:Mp4Settings', mp4Settings_audioDuration - Specify this setting only when your output will be consumed by a downstream repackaging workflow that is sensitive to very small duration differences between video and audio. For this situation, choose Match video duration (MATCH_VIDEO_DURATION). In all other cases, keep the default value, Default codec duration (DEFAULT_CODEC_DURATION). When you choose Match video duration, MediaConvert pads the output audio streams with silence or trims them to ensure that the total duration of each audio stream is at least as long as the total duration of the video stream. After padding or trimming, the audio stream duration is no more than one frame longer than the video stream. MediaConvert applies audio padding or trimming only to the end of the last segment of the output. For unsegmented outputs, MediaConvert adds padding only to the end of the file. When you keep the default value, any minor discrepancies between audio and video duration will depend on your output audio codec.

$sel:mp4MajorBrand:Mp4Settings', mp4Settings_mp4MajorBrand - Overrides the "Major Brand" field in the output file. Usually not necessary to specify.

$sel:cslgAtom:Mp4Settings', mp4Settings_cslgAtom - When enabled, file composition times will start at zero, composition times in the 'ctts' (composition time to sample) box for B-frames will be negative, and a 'cslg' (composition shift least greatest) box will be included per 14496-1 amendment 1. This improves compatibility with Apple players and tools.

mp4Settings_moovPlacement :: Lens' Mp4Settings (Maybe Mp4MoovPlacement) Source #

If set to PROGRESSIVE_DOWNLOAD, the MOOV atom is relocated to the beginning of the archive as required for progressive downloading. Otherwise it is placed normally at the end.

mp4Settings_cttsVersion :: Lens' Mp4Settings (Maybe Natural) Source #

Ignore this setting unless compliance to the CTTS box version specification matters in your workflow. Specify a value of 1 to set your CTTS box version to 1 and make your output compliant with the specification. When you specify a value of 1, you must also set CSLG atom (cslgAtom) to the value INCLUDE. Keep the default value 0 to set your CTTS box version to 0. This can provide backward compatibility for some players and packagers.

mp4Settings_freeSpaceBox :: Lens' Mp4Settings (Maybe Mp4FreeSpaceBox) Source #

Inserts a free-space box immediately after the moov box.

mp4Settings_audioDuration :: Lens' Mp4Settings (Maybe CmfcAudioDuration) Source #

Specify this setting only when your output will be consumed by a downstream repackaging workflow that is sensitive to very small duration differences between video and audio. For this situation, choose Match video duration (MATCH_VIDEO_DURATION). In all other cases, keep the default value, Default codec duration (DEFAULT_CODEC_DURATION). When you choose Match video duration, MediaConvert pads the output audio streams with silence or trims them to ensure that the total duration of each audio stream is at least as long as the total duration of the video stream. After padding or trimming, the audio stream duration is no more than one frame longer than the video stream. MediaConvert applies audio padding or trimming only to the end of the last segment of the output. For unsegmented outputs, MediaConvert adds padding only to the end of the file. When you keep the default value, any minor discrepancies between audio and video duration will depend on your output audio codec.

mp4Settings_mp4MajorBrand :: Lens' Mp4Settings (Maybe Text) Source #

Overrides the "Major Brand" field in the output file. Usually not necessary to specify.

mp4Settings_cslgAtom :: Lens' Mp4Settings (Maybe Mp4CslgAtom) Source #

When enabled, file composition times will start at zero, composition times in the 'ctts' (composition time to sample) box for B-frames will be negative, and a 'cslg' (composition shift least greatest) box will be included per 14496-1 amendment 1. This improves compatibility with Apple players and tools.
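
As a sketch only (not part of the generated API; assumes the lens operators and that Mp4MoovPlacement and Mp4CslgAtom expose pattern synonyms in the TypeName_VALUE form used elsewhere in this package), a progressive-download MP4 with CTTS box version 1, which, per the note above, also requires the CSLG atom:

  import Amazonka.MediaConvert.Types
  import Control.Lens ((&), (?~))

  -- Constructor names below are assumed; see the Mp4MoovPlacement and
  -- Mp4CslgAtom documentation for the exact pattern synonyms.
  progressiveMp4 :: Mp4Settings
  progressiveMp4 =
    newMp4Settings
      & mp4Settings_moovPlacement ?~ Mp4MoovPlacement_PROGRESSIVE_DOWNLOAD
      & mp4Settings_cttsVersion ?~ 1
      & mp4Settings_cslgAtom ?~ Mp4CslgAtom_INCLUDE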

MpdSettings

data MpdSettings Source #

These settings relate to the fragmented MP4 container for the segments in your DASH outputs.

See: newMpdSettings smart constructor.

Constructors

MpdSettings' 

Fields

  • scte35Esam :: Maybe MpdScte35Esam

    Use this setting only when you specify SCTE-35 markers from ESAM. Choose INSERT to put SCTE-35 markers in this output at the insertion points that you specify in an ESAM XML document. Provide the document in the setting SCC XML (sccXml).

  • audioDuration :: Maybe MpdAudioDuration

    Specify this setting only when your output will be consumed by a downstream repackaging workflow that is sensitive to very small duration differences between video and audio. For this situation, choose Match video duration (MATCH_VIDEO_DURATION). In all other cases, keep the default value, Default codec duration (DEFAULT_CODEC_DURATION). When you choose Match video duration, MediaConvert pads the output audio streams with silence or trims them to ensure that the total duration of each audio stream is at least as long as the total duration of the video stream. After padding or trimming, the audio stream duration is no more than one frame longer than the video stream. MediaConvert applies audio padding or trimming only to the end of the last segment of the output. For unsegmented outputs, MediaConvert adds padding only to the end of the file. When you keep the default value, any minor discrepancies between audio and video duration will depend on your output audio codec.

  • scte35Source :: Maybe MpdScte35Source

    Ignore this setting unless you have SCTE-35 markers in your input video file. Choose Passthrough (PASSTHROUGH) if you want SCTE-35 markers that appear in your input to also appear in this output. Choose None (NONE) if you don't want those SCTE-35 markers in this output.

  • accessibilityCaptionHints :: Maybe MpdAccessibilityCaptionHints

    Optional. Choose Include (INCLUDE) to have MediaConvert mark up your DASH manifest with elements for embedded 608 captions. This markup isn't generally required, but some video players require it to discover and play embedded 608 captions. Keep the default value, Exclude (EXCLUDE), to leave these elements out. When you enable this setting, MediaConvert includes the corresponding accessibility elements in your manifest.

  • captionContainerType :: Maybe MpdCaptionContainerType

    Use this setting only in DASH output groups that include sidecar TTML or IMSC captions. You specify sidecar captions in a separate output from your audio and video. Choose Raw (RAW) for captions in a single XML file in a raw container. Choose Fragmented MPEG-4 (FRAGMENTED_MP4) for captions in XML format contained within fragmented MP4 files. This set of fragmented MP4 files is separate from your video and audio fragmented MP4 files.

Instances

Instances details
Eq MpdSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdSettings

Read MpdSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdSettings

Show MpdSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdSettings

Generic MpdSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdSettings

Associated Types

type Rep MpdSettings :: Type -> Type #

NFData MpdSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdSettings

Methods

rnf :: MpdSettings -> () #

Hashable MpdSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdSettings

ToJSON MpdSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdSettings

FromJSON MpdSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdSettings

type Rep MpdSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MpdSettings

type Rep MpdSettings = D1 ('MetaData "MpdSettings" "Amazonka.MediaConvert.Types.MpdSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "MpdSettings'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "scte35Esam") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MpdScte35Esam)) :*: S1 ('MetaSel ('Just "audioDuration") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MpdAudioDuration))) :*: (S1 ('MetaSel ('Just "scte35Source") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MpdScte35Source)) :*: (S1 ('MetaSel ('Just "accessibilityCaptionHints") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MpdAccessibilityCaptionHints)) :*: S1 ('MetaSel ('Just "captionContainerType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MpdCaptionContainerType))))))

newMpdSettings :: MpdSettings Source #

Create a value of MpdSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:scte35Esam:MpdSettings', mpdSettings_scte35Esam - Use this setting only when you specify SCTE-35 markers from ESAM. Choose INSERT to put SCTE-35 markers in this output at the insertion points that you specify in an ESAM XML document. Provide the document in the setting SCC XML (sccXml).

$sel:audioDuration:MpdSettings', mpdSettings_audioDuration - Specify this setting only when your output will be consumed by a downstream repackaging workflow that is sensitive to very small duration differences between video and audio. For this situation, choose Match video duration (MATCH_VIDEO_DURATION). In all other cases, keep the default value, Default codec duration (DEFAULT_CODEC_DURATION). When you choose Match video duration, MediaConvert pads the output audio streams with silence or trims them to ensure that the total duration of each audio stream is at least as long as the total duration of the video stream. After padding or trimming, the audio stream duration is no more than one frame longer than the video stream. MediaConvert applies audio padding or trimming only to the end of the last segment of the output. For unsegmented outputs, MediaConvert adds padding only to the end of the file. When you keep the default value, any minor discrepancies between audio and video duration will depend on your output audio codec.

$sel:scte35Source:MpdSettings', mpdSettings_scte35Source - Ignore this setting unless you have SCTE-35 markers in your input video file. Choose Passthrough (PASSTHROUGH) if you want SCTE-35 markers that appear in your input to also appear in this output. Choose None (NONE) if you don't want those SCTE-35 markers in this output.

$sel:accessibilityCaptionHints:MpdSettings', mpdSettings_accessibilityCaptionHints - Optional. Choose Include (INCLUDE) to have MediaConvert mark up your DASH manifest with elements for embedded 608 captions. This markup isn't generally required, but some video players require it to discover and play embedded 608 captions. Keep the default value, Exclude (EXCLUDE), to leave these elements out. When you enable this setting, MediaConvert adds the corresponding accessibility elements to your manifest.

$sel:captionContainerType:MpdSettings', mpdSettings_captionContainerType - Use this setting only in DASH output groups that include sidecar TTML or IMSC captions. You specify sidecar captions in a separate output from your audio and video. Choose Raw (RAW) for captions in a single XML file in a raw container. Choose Fragmented MPEG-4 (FRAGMENTED_MP4) for captions in XML format contained within fragmented MP4 files. This set of fragmented MP4 files is separate from your video and audio fragmented MP4 files.

mpdSettings_scte35Esam :: Lens' MpdSettings (Maybe MpdScte35Esam) Source #

Use this setting only when you specify SCTE-35 markers from ESAM. Choose INSERT to put SCTE-35 markers in this output at the insertion points that you specify in an ESAM XML document. Provide the document in the setting SCC XML (sccXml).

mpdSettings_audioDuration :: Lens' MpdSettings (Maybe MpdAudioDuration) Source #

Specify this setting only when your output will be consumed by a downstream repackaging workflow that is sensitive to very small duration differences between video and audio. For this situation, choose Match video duration (MATCH_VIDEO_DURATION). In all other cases, keep the default value, Default codec duration (DEFAULT_CODEC_DURATION). When you choose Match video duration, MediaConvert pads the output audio streams with silence or trims them to ensure that the total duration of each audio stream is at least as long as the total duration of the video stream. After padding or trimming, the audio stream duration is no more than one frame longer than the video stream. MediaConvert applies audio padding or trimming only to the end of the last segment of the output. For unsegmented outputs, MediaConvert adds padding only to the end of the file. When you keep the default value, any minor discrepancies between audio and video duration will depend on your output audio codec.

mpdSettings_scte35Source :: Lens' MpdSettings (Maybe MpdScte35Source) Source #

Ignore this setting unless you have SCTE-35 markers in your input video file. Choose Passthrough (PASSTHROUGH) if you want SCTE-35 markers that appear in your input to also appear in this output. Choose None (NONE) if you don't want those SCTE-35 markers in this output.

mpdSettings_accessibilityCaptionHints :: Lens' MpdSettings (Maybe MpdAccessibilityCaptionHints) Source #

Optional. Choose Include (INCLUDE) to have MediaConvert mark up your DASH manifest with elements for embedded 608 captions. This markup isn't generally required, but some video players require it to discover and play embedded 608 captions. Keep the default value, Exclude (EXCLUDE), to leave these elements out. When you enable this setting, MediaConvert adds the corresponding accessibility elements to your manifest.

mpdSettings_captionContainerType :: Lens' MpdSettings (Maybe MpdCaptionContainerType) Source #

Use this setting only in DASH output groups that include sidecar TTML or IMSC captions. You specify sidecar captions in a separate output from your audio and video. Choose Raw (RAW) for captions in a single XML file in a raw container. Choose Fragmented MPEG-4 (FRAGMENTED_MP4) for captions in XML format contained within fragmented MP4 files. This set of fragmented MP4 files is separate from your video and audio fragmented MP4 files.
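
As a usage illustration, here is a minimal sketch of building an MpdSettings value from the smart constructor and the lenses above. It assumes the (&) and (?~) combinators from the lens package and that the generated pattern synonyms follow the TypeName_VALUE naming convention (for example MpdScte35Source_PASSTHROUGH); check the generated Types modules for the exact names.

    import Amazonka.MediaConvert.Types
    import Control.Lens ((&), (?~))

    -- Sketch only: start from newMpdSettings (all optional fields omitted)
    -- and set a few fields through the provided lenses. The pattern synonym
    -- names below are assumed, not verified against this exact release.
    exampleMpdSettings :: MpdSettings
    exampleMpdSettings =
      newMpdSettings
        & mpdSettings_scte35Source ?~ MpdScte35Source_PASSTHROUGH
        & mpdSettings_captionContainerType ?~ MpdCaptionContainerType_FRAGMENTED_MP4

Because MpdSettings has a ToJSON instance, a value built this way can be embedded in a larger job specification and serialized with Data.Aeson.encode.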

Mpeg2Settings

data Mpeg2Settings Source #

Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value MPEG2.

See: newMpeg2Settings smart constructor.

Constructors

Mpeg2Settings' 

Fields

  • qualityTuningLevel :: Maybe Mpeg2QualityTuningLevel

    Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.

  • temporalAdaptiveQuantization :: Maybe Mpeg2TemporalAdaptiveQuantization

    Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on temporal variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas of the frame that aren't moving and uses more bits on complex objects with sharp edges that move a lot. For example, this feature improves the readability of text tickers on newscasts and scoreboards on sports matches. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen that doesn't have moving objects with sharp edges, such as sports athletes' faces, you might choose to disable this feature. Related setting: When you enable temporal quantization, adjust the strength of the filter with the setting Adaptive quantization (adaptiveQuantization).

  • sceneChangeDetect :: Maybe Mpeg2SceneChangeDetect

    Enable this setting to insert I-frames at scene changes that the service automatically detects. This improves video quality and is enabled by default.

  • hrdBufferInitialFillPercentage :: Maybe Natural

    Percentage of the buffer that should initially be filled (HRD buffer model).

  • slowPal :: Maybe Mpeg2SlowPal

    Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.

  • parNumerator :: Maybe Natural

    Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parNumerator is 40.

  • gopSize :: Maybe Double

    Specify the interval between keyframes, in seconds or frames, for this output. Default: 12. Related settings: When you specify the GOP size in seconds, set GOP mode control (GopSizeUnits) to Specified, seconds (SECONDS). The default value for GOP mode control (GopSizeUnits) is Frames (FRAMES).

  • numberBFramesBetweenReferenceFrames :: Maybe Natural

    Specify the number of B-frames that MediaConvert puts between reference frames in this output. Valid values are whole numbers from 0 through 7. When you don't specify a value, MediaConvert defaults to 2.

  • gopSizeUnits :: Maybe Mpeg2GopSizeUnits

    Specify the units for GOP size (GopSize). If you don't specify a value here, by default the encoder measures GOP size in frames.

  • hrdBufferSize :: Maybe Natural

    Size of buffer (HRD buffer model) in bits. For example, enter five megabits as 5000000.

  • rateControlMode :: Maybe Mpeg2RateControlMode

    Use Rate control mode (Mpeg2RateControlMode) to specify whether the bitrate is variable (vbr) or constant (cbr).

  • telecine :: Maybe Mpeg2Telecine

    When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard or soft telecine to create a smoother picture. Hard telecine (HARD) produces a 29.97i output. Soft telecine (SOFT) produces a 23.976 output that signals to the video player device to do the conversion during playback. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 without doing anything with the field polarity to create a smoother picture.

  • intraDcPrecision :: Maybe Mpeg2IntraDcPrecision

    Use Intra DC precision (Mpeg2IntraDcPrecision) to set quantization precision for intra-block DC coefficients. If you choose the value auto, the service will automatically select the precision based on the per-frame compression ratio.

  • dynamicSubGop :: Maybe Mpeg2DynamicSubGop

    Choose Adaptive to improve subjective video quality for high-motion content. This will cause the service to use fewer B-frames (which infer information based on other frames) for high-motion portions of the video and more B-frames for low-motion portions. The maximum number of B-frames is limited by the value you provide for the setting B frames between reference frames (numberBFramesBetweenReferenceFrames).

  • minIInterval :: Maybe Natural

    Enforces separation between repeated (cadence) I-frames and I-frames inserted by Scene Change Detection. If a scene change I-frame is within I-interval frames of a cadence I-frame, the GOP is shrunk and/or stretched to the scene change I-frame. GOP stretch requires enabling lookahead as well as setting I-interval. The normal cadence resumes for the next GOP. This setting is only used when Scene Change Detect is enabled. Note: Maximum GOP stretch = GOP size + Min-I-interval - 1

  • interlaceMode :: Maybe Mpeg2InterlaceMode

    Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE), to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first polarity, depending on which of the Follow options you choose.

  • parControl :: Maybe Mpeg2ParControl

    Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.

  • scanTypeConversionMode :: Maybe Mpeg2ScanTypeConversionMode

    Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).

  • softness :: Maybe Natural

    Ignore this setting unless you need to comply with a specification that requires a specific value. If you don't have a specification requirement, we recommend that you adjust the softness of your output by using a lower value for the setting Sharpness (sharpness) or by enabling a noise reducer filter (noiseReducerFilter). The Softness (softness) setting specifies the quantization matrices that the encoder uses. Keep the default value, 0, to use the AWS Elemental default matrices. Choose a value from 17 to 128 to use planar interpolation. Increasing values from 17 to 128 result in increasing reduction of high-frequency data. The value 128 results in the softest video.

  • codecProfile :: Maybe Mpeg2CodecProfile

    Use Profile (Mpeg2CodecProfile) to set the MPEG-2 profile for the video output.

  • bitrate :: Maybe Natural

    Specify the average bitrate in bits per second. Required for VBR and CBR. For MS Smooth outputs, bitrates must be unique when rounded down to the nearest multiple of 1000.

  • framerateDenominator :: Maybe Natural

    When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

  • framerateConversionAlgorithm :: Maybe Mpeg2FramerateConversionAlgorithm

    Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

  • codecLevel :: Maybe Mpeg2CodecLevel

    Use Level (Mpeg2CodecLevel) to set the MPEG-2 level for the video output.

  • framerateControl :: Maybe Mpeg2FramerateControl

    If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

  • adaptiveQuantization :: Maybe Mpeg2AdaptiveQuantization

    Specify the strength of any adaptive quantization filters that you enable. The value that you choose here applies to the following settings: Spatial adaptive quantization (spatialAdaptiveQuantization), and Temporal adaptive quantization (temporalAdaptiveQuantization).

  • framerateNumerator :: Maybe Natural

    When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

  • maxBitrate :: Maybe Natural

    Maximum bitrate in bits/second. For example, enter five megabits per second as 5000000.

  • syntax :: Maybe Mpeg2Syntax

    Specify whether this output's video uses the D10 syntax. Keep the default value to not use the syntax. Related settings: When you choose D10 (D_10) for your MXF profile (profile), you must also set this value to D10 (D_10).

  • gopClosedCadence :: Maybe Natural

    Frequency of closed GOPs. In streaming applications, it is recommended that this be set to 1 so a decoder joining mid-stream will receive an IDR frame as quickly as possible. Setting this value to 0 will break output segmenting.

  • parDenominator :: Maybe Natural

    Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parDenominator is 33.

  • spatialAdaptiveQuantization :: Maybe Mpeg2SpatialAdaptiveQuantization

    Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (adaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.

Instances

Instances details
Eq Mpeg2Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Settings

Read Mpeg2Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Settings

Show Mpeg2Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Settings

Generic Mpeg2Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Settings

Associated Types

type Rep Mpeg2Settings :: Type -> Type #

NFData Mpeg2Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Settings

Methods

rnf :: Mpeg2Settings -> () #

Hashable Mpeg2Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Settings

ToJSON Mpeg2Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Settings

FromJSON Mpeg2Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Settings

type Rep Mpeg2Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Mpeg2Settings

type Rep Mpeg2Settings = D1 ('MetaData "Mpeg2Settings" "Amazonka.MediaConvert.Types.Mpeg2Settings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "Mpeg2Settings'" 'PrefixI 'True) (((((S1 ('MetaSel ('Just "qualityTuningLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mpeg2QualityTuningLevel)) :*: S1 ('MetaSel ('Just "temporalAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mpeg2TemporalAdaptiveQuantization))) :*: (S1 ('MetaSel ('Just "sceneChangeDetect") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mpeg2SceneChangeDetect)) :*: S1 ('MetaSel ('Just "hrdBufferInitialFillPercentage") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))) :*: ((S1 ('MetaSel ('Just "slowPal") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mpeg2SlowPal)) :*: S1 ('MetaSel ('Just "parNumerator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "gopSize") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: S1 ('MetaSel ('Just "numberBFramesBetweenReferenceFrames") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))) :*: (((S1 ('MetaSel ('Just "gopSizeUnits") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mpeg2GopSizeUnits)) :*: S1 ('MetaSel ('Just "hrdBufferSize") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "rateControlMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mpeg2RateControlMode)) :*: S1 ('MetaSel ('Just "telecine") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mpeg2Telecine)))) :*: ((S1 ('MetaSel ('Just "intraDcPrecision") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mpeg2IntraDcPrecision)) :*: S1 ('MetaSel ('Just "dynamicSubGop") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mpeg2DynamicSubGop))) :*: (S1 ('MetaSel ('Just "minIInterval") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "interlaceMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mpeg2InterlaceMode)))))) :*: ((((S1 ('MetaSel ('Just "parControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mpeg2ParControl)) :*: S1 ('MetaSel ('Just "scanTypeConversionMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mpeg2ScanTypeConversionMode))) :*: (S1 ('MetaSel ('Just "softness") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "codecProfile") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mpeg2CodecProfile)))) :*: ((S1 ('MetaSel ('Just "bitrate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "framerateDenominator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "framerateConversionAlgorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mpeg2FramerateConversionAlgorithm)) :*: S1 ('MetaSel ('Just "codecLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mpeg2CodecLevel))))) :*: (((S1 ('MetaSel ('Just "framerateControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) 
(Rec0 (Maybe Mpeg2FramerateControl)) :*: S1 ('MetaSel ('Just "adaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mpeg2AdaptiveQuantization))) :*: (S1 ('MetaSel ('Just "framerateNumerator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "maxBitrate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))) :*: ((S1 ('MetaSel ('Just "syntax") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mpeg2Syntax)) :*: S1 ('MetaSel ('Just "gopClosedCadence") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "parDenominator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "spatialAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mpeg2SpatialAdaptiveQuantization))))))))

newMpeg2Settings :: Mpeg2Settings Source #

Create a value of Mpeg2Settings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:qualityTuningLevel:Mpeg2Settings', mpeg2Settings_qualityTuningLevel - Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.

$sel:temporalAdaptiveQuantization:Mpeg2Settings', mpeg2Settings_temporalAdaptiveQuantization - Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on temporal variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas of the frame that aren't moving and uses more bits on complex objects with sharp edges that move a lot. For example, this feature improves the readability of text tickers on newscasts and scoreboards on sports matches. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen that doesn't have moving objects with sharp edges, such as sports athletes' faces, you might choose to disable this feature. Related setting: When you enable temporal quantization, adjust the strength of the filter with the setting Adaptive quantization (adaptiveQuantization).

$sel:sceneChangeDetect:Mpeg2Settings', mpeg2Settings_sceneChangeDetect - Enable this setting to insert I-frames at scene changes that the service automatically detects. This improves video quality and is enabled by default.

$sel:hrdBufferInitialFillPercentage:Mpeg2Settings', mpeg2Settings_hrdBufferInitialFillPercentage - Percentage of the buffer that should initially be filled (HRD buffer model).

$sel:slowPal:Mpeg2Settings', mpeg2Settings_slowPal - Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.

$sel:parNumerator:Mpeg2Settings', mpeg2Settings_parNumerator - Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parNumerator is 40.

$sel:gopSize:Mpeg2Settings', mpeg2Settings_gopSize - Specify the interval between keyframes, in seconds or frames, for this output. Default: 12. Related settings: When you specify the GOP size in seconds, set GOP mode control (GopSizeUnits) to Specified, seconds (SECONDS). The default value for GOP mode control (GopSizeUnits) is Frames (FRAMES).

$sel:numberBFramesBetweenReferenceFrames:Mpeg2Settings', mpeg2Settings_numberBFramesBetweenReferenceFrames - Specify the number of B-frames that MediaConvert puts between reference frames in this output. Valid values are whole numbers from 0 through 7. When you don't specify a value, MediaConvert defaults to 2.

$sel:gopSizeUnits:Mpeg2Settings', mpeg2Settings_gopSizeUnits - Specify the units for GOP size (GopSize). If you don't specify a value here, by default the encoder measures GOP size in frames.

$sel:hrdBufferSize:Mpeg2Settings', mpeg2Settings_hrdBufferSize - Size of buffer (HRD buffer model) in bits. For example, enter five megabits as 5000000.

$sel:rateControlMode:Mpeg2Settings', mpeg2Settings_rateControlMode - Use Rate control mode (Mpeg2RateControlMode) to specify whether the bitrate is variable (vbr) or constant (cbr).

$sel:telecine:Mpeg2Settings', mpeg2Settings_telecine - When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard or soft telecine to create a smoother picture. Hard telecine (HARD) produces a 29.97i output. Soft telecine (SOFT) produces a 23.976 output that signals to the video player device to do the conversion during playback. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 without doing anything with the field polarity to create a smoother picture.

$sel:intraDcPrecision:Mpeg2Settings', mpeg2Settings_intraDcPrecision - Use Intra DC precision (Mpeg2IntraDcPrecision) to set quantization precision for intra-block DC coefficients. If you choose the value auto, the service will automatically select the precision based on the per-frame compression ratio.

$sel:dynamicSubGop:Mpeg2Settings', mpeg2Settings_dynamicSubGop - Choose Adaptive to improve subjective video quality for high-motion content. This will cause the service to use fewer B-frames (which infer information based on other frames) for high-motion portions of the video and more B-frames for low-motion portions. The maximum number of B-frames is limited by the value you provide for the setting B frames between reference frames (numberBFramesBetweenReferenceFrames).

$sel:minIInterval:Mpeg2Settings', mpeg2Settings_minIInterval - Enforces separation between repeated (cadence) I-frames and I-frames inserted by Scene Change Detection. If a scene change I-frame is within I-interval frames of a cadence I-frame, the GOP is shrunk and/or stretched to the scene change I-frame. GOP stretch requires enabling lookahead as well as setting I-interval. The normal cadence resumes for the next GOP. This setting is only used when Scene Change Detect is enabled. Note: Maximum GOP stretch = GOP size + Min-I-interval - 1

$sel:interlaceMode:Mpeg2Settings', mpeg2Settings_interlaceMode - Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE), to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first polarity, depending on which of the Follow options you choose.

$sel:parControl:Mpeg2Settings', mpeg2Settings_parControl - Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.

$sel:scanTypeConversionMode:Mpeg2Settings', mpeg2Settings_scanTypeConversionMode - Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).

$sel:softness:Mpeg2Settings', mpeg2Settings_softness - Ignore this setting unless you need to comply with a specification that requires a specific value. If you don't have a specification requirement, we recommend that you adjust the softness of your output by using a lower value for the setting Sharpness (sharpness) or by enabling a noise reducer filter (noiseReducerFilter). The Softness (softness) setting specifies the quantization matrices that the encoder uses. Keep the default value, 0, to use the AWS Elemental default matrices. Choose a value from 17 to 128 to use planar interpolation. Increasing values from 17 to 128 result in increasing reduction of high-frequency data. The value 128 results in the softest video.

$sel:codecProfile:Mpeg2Settings', mpeg2Settings_codecProfile - Use Profile (Mpeg2CodecProfile) to set the MPEG-2 profile for the video output.

$sel:bitrate:Mpeg2Settings', mpeg2Settings_bitrate - Specify the average bitrate in bits per second. Required for VBR and CBR. For MS Smooth outputs, bitrates must be unique when rounded down to the nearest multiple of 1000.

$sel:framerateDenominator:Mpeg2Settings', mpeg2Settings_framerateDenominator - When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

$sel:framerateConversionAlgorithm:Mpeg2Settings', mpeg2Settings_framerateConversionAlgorithm - Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

$sel:codecLevel:Mpeg2Settings', mpeg2Settings_codecLevel - Use Level (Mpeg2CodecLevel) to set the MPEG-2 level for the video output.

$sel:framerateControl:Mpeg2Settings', mpeg2Settings_framerateControl - If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

$sel:adaptiveQuantization:Mpeg2Settings', mpeg2Settings_adaptiveQuantization - Specify the strength of any adaptive quantization filters that you enable. The value that you choose here applies to the following settings: Spatial adaptive quantization (spatialAdaptiveQuantization), and Temporal adaptive quantization (temporalAdaptiveQuantization).

$sel:framerateNumerator:Mpeg2Settings', mpeg2Settings_framerateNumerator - When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

$sel:maxBitrate:Mpeg2Settings', mpeg2Settings_maxBitrate - Maximum bitrate in bits/second. For example, enter five megabits per second as 5000000.

$sel:syntax:Mpeg2Settings', mpeg2Settings_syntax - Specify whether this output's video uses the D10 syntax. Keep the default value to not use the syntax. Related settings: When you choose D10 (D_10) for your MXF profile (profile), you must also set this value to D10 (D_10).

$sel:gopClosedCadence:Mpeg2Settings', mpeg2Settings_gopClosedCadence - Frequency of closed GOPs. In streaming applications, it is recommended that this be set to 1 so a decoder joining mid-stream will receive an IDR frame as quickly as possible. Setting this value to 0 will break output segmenting.

$sel:parDenominator:Mpeg2Settings', mpeg2Settings_parDenominator - Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parDenominator is 33.

$sel:spatialAdaptiveQuantization:Mpeg2Settings', mpeg2Settings_spatialAdaptiveQuantization - Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (adaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.

mpeg2Settings_qualityTuningLevel :: Lens' Mpeg2Settings (Maybe Mpeg2QualityTuningLevel) Source #

Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.

mpeg2Settings_temporalAdaptiveQuantization :: Lens' Mpeg2Settings (Maybe Mpeg2TemporalAdaptiveQuantization) Source #

Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on temporal variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas of the frame that aren't moving and uses more bits on complex objects with sharp edges that move a lot. For example, this feature improves the readability of text tickers on newscasts and scoreboards on sports matches. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen that doesn't have moving objects with sharp edges, such as sports athletes' faces, you might choose to disable this feature. Related setting: When you enable temporal quantization, adjust the strength of the filter with the setting Adaptive quantization (adaptiveQuantization).

mpeg2Settings_sceneChangeDetect :: Lens' Mpeg2Settings (Maybe Mpeg2SceneChangeDetect) Source #

Enable this setting to insert I-frames at scene changes that the service automatically detects. This improves video quality and is enabled by default.

mpeg2Settings_hrdBufferInitialFillPercentage :: Lens' Mpeg2Settings (Maybe Natural) Source #

Percentage of the buffer that should initially be filled (HRD buffer model).

mpeg2Settings_slowPal :: Lens' Mpeg2Settings (Maybe Mpeg2SlowPal) Source #

Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.

mpeg2Settings_parNumerator :: Lens' Mpeg2Settings (Maybe Natural) Source #

Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parNumerator is 40.

mpeg2Settings_gopSize :: Lens' Mpeg2Settings (Maybe Double) Source #

Specify the interval between keyframes, in seconds or frames, for this output. Default: 12. Related settings: When you specify the GOP size in seconds, set GOP mode control (GopSizeUnits) to Specified, seconds (SECONDS). The default value for GOP mode control (GopSizeUnits) is Frames (FRAMES).

mpeg2Settings_numberBFramesBetweenReferenceFrames :: Lens' Mpeg2Settings (Maybe Natural) Source #

Specify the number of B-frames that MediaConvert puts between reference frames in this output. Valid values are whole numbers from 0 through 7. When you don't specify a value, MediaConvert defaults to 2.

mpeg2Settings_gopSizeUnits :: Lens' Mpeg2Settings (Maybe Mpeg2GopSizeUnits) Source #

Specify the units for GOP size (GopSize). If you don't specify a value here, by default the encoder measures GOP size in frames.

mpeg2Settings_hrdBufferSize :: Lens' Mpeg2Settings (Maybe Natural) Source #

Size of buffer (HRD buffer model) in bits. For example, enter five megabits as 5000000.

mpeg2Settings_rateControlMode :: Lens' Mpeg2Settings (Maybe Mpeg2RateControlMode) Source #

Use Rate control mode (Mpeg2RateControlMode) to specify whether the bitrate is variable (vbr) or constant (cbr).

mpeg2Settings_telecine :: Lens' Mpeg2Settings (Maybe Mpeg2Telecine) Source #

When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard or soft telecine to create a smoother picture. Hard telecine (HARD) produces a 29.97i output. Soft telecine (SOFT) produces a 23.976 output that signals to the video player device to do the conversion during playback. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 without doing anything with the field polarity to create a smoother picture.

mpeg2Settings_intraDcPrecision :: Lens' Mpeg2Settings (Maybe Mpeg2IntraDcPrecision) Source #

Use Intra DC precision (Mpeg2IntraDcPrecision) to set quantization precision for intra-block DC coefficients. If you choose the value auto, the service will automatically select the precision based on the per-frame compression ratio.

mpeg2Settings_dynamicSubGop :: Lens' Mpeg2Settings (Maybe Mpeg2DynamicSubGop) Source #

Choose Adaptive to improve subjective video quality for high-motion content. This will cause the service to use fewer B-frames (which infer information based on other frames) for high-motion portions of the video and more B-frames for low-motion portions. The maximum number of B-frames is limited by the value you provide for the setting B frames between reference frames (numberBFramesBetweenReferenceFrames).

mpeg2Settings_minIInterval :: Lens' Mpeg2Settings (Maybe Natural) Source #

Enforces separation between repeated (cadence) I-frames and I-frames inserted by Scene Change Detection. If a scene change I-frame is within I-interval frames of a cadence I-frame, the GOP is shrunk and/or stretched to the scene change I-frame. GOP stretch requires enabling lookahead as well as setting I-interval. The normal cadence resumes for the next GOP. This setting is only used when Scene Change Detect is enabled. Note: Maximum GOP stretch = GOP size + Min-I-interval - 1

mpeg2Settings_interlaceMode :: Lens' Mpeg2Settings (Maybe Mpeg2InterlaceMode) Source #

Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE), to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first polarity, depending on which of the Follow options you choose.

mpeg2Settings_parControl :: Lens' Mpeg2Settings (Maybe Mpeg2ParControl) Source #

Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.

mpeg2Settings_scanTypeConversionMode :: Lens' Mpeg2Settings (Maybe Mpeg2ScanTypeConversionMode) Source #

Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).

mpeg2Settings_softness :: Lens' Mpeg2Settings (Maybe Natural) Source #

Ignore this setting unless you need to comply with a specification that requires a specific value. If you don't have a specification requirement, we recommend that you adjust the softness of your output by using a lower value for the setting Sharpness (sharpness) or by enabling a noise reducer filter (noiseReducerFilter). The Softness (softness) setting specifies the quantization matrices that the encoder uses. Keep the default value, 0, to use the AWS Elemental default matrices. Choose a value from 17 to 128 to use planar interpolation. Increasing values from 17 to 128 result in increasing reduction of high-frequency data. The value 128 results in the softest video.

mpeg2Settings_codecProfile :: Lens' Mpeg2Settings (Maybe Mpeg2CodecProfile) Source #

Use Profile (Mpeg2CodecProfile) to set the MPEG-2 profile for the video output.

mpeg2Settings_bitrate :: Lens' Mpeg2Settings (Maybe Natural) Source #

Specify the average bitrate in bits per second. Required for VBR and CBR. For MS Smooth outputs, bitrates must be unique when rounded down to the nearest multiple of 1000.

mpeg2Settings_framerateDenominator :: Lens' Mpeg2Settings (Maybe Natural) Source #

When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

mpeg2Settings_framerateConversionAlgorithm :: Lens' Mpeg2Settings (Maybe Mpeg2FramerateConversionAlgorithm) Source #

Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

mpeg2Settings_codecLevel :: Lens' Mpeg2Settings (Maybe Mpeg2CodecLevel) Source #

Use Level (Mpeg2CodecLevel) to set the MPEG-2 level for the video output.

mpeg2Settings_framerateControl :: Lens' Mpeg2Settings (Maybe Mpeg2FramerateControl) Source #

If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

mpeg2Settings_adaptiveQuantization :: Lens' Mpeg2Settings (Maybe Mpeg2AdaptiveQuantization) Source #

Specify the strength of any adaptive quantization filters that you enable. The value that you choose here applies to the following settings: Spatial adaptive quantization (spatialAdaptiveQuantization), and Temporal adaptive quantization (temporalAdaptiveQuantization).

mpeg2Settings_framerateNumerator :: Lens' Mpeg2Settings (Maybe Natural) Source #

When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

mpeg2Settings_maxBitrate :: Lens' Mpeg2Settings (Maybe Natural) Source #

Maximum bitrate in bits/second. For example, enter five megabits per second as 5000000.

mpeg2Settings_syntax :: Lens' Mpeg2Settings (Maybe Mpeg2Syntax) Source #

Specify whether this output's video uses the D10 syntax. Keep the default value to not use the syntax. Related settings: When you choose D10 (D_10) for your MXF profile (profile), you must also set this value to D10 (D_10).

mpeg2Settings_gopClosedCadence :: Lens' Mpeg2Settings (Maybe Natural) Source #

Frequency of closed GOPs. In streaming applications, it is recommended that this be set to 1 so a decoder joining mid-stream will receive an IDR frame as quickly as possible. Setting this value to 0 will break output segmenting.

mpeg2Settings_parDenominator :: Lens' Mpeg2Settings (Maybe Natural) Source #

Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parDenominator is 33.

mpeg2Settings_spatialAdaptiveQuantization :: Lens' Mpeg2Settings (Maybe Mpeg2SpatialAdaptiveQuantization) Source #

Keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (adaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.
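
For illustration, the sketch below builds an Mpeg2Settings value that requests a 23.976 fps output by giving the frame rate as the fraction 24000/1001, together with CBR rate control and an explicit average bitrate, mirroring the field descriptions above. It assumes the (&) and (?~) combinators from the lens package and pattern synonyms named in the TypeName_VALUE style (for example Mpeg2FramerateControl_SPECIFIED); verify the names against the generated modules.

    import Amazonka.MediaConvert.Types
    import Control.Lens ((&), (?~))

    -- Sketch only: a 23.976 fps CBR MPEG-2 configuration. Constructor
    -- (pattern synonym) names are assumed to follow the generated
    -- TypeName_VALUE convention.
    exampleMpeg2Settings :: Mpeg2Settings
    exampleMpeg2Settings =
      newMpeg2Settings
        & mpeg2Settings_framerateControl ?~ Mpeg2FramerateControl_SPECIFIED
        & mpeg2Settings_framerateNumerator ?~ 24000
        & mpeg2Settings_framerateDenominator ?~ 1001
        & mpeg2Settings_rateControlMode ?~ Mpeg2RateControlMode_CBR
        & mpeg2Settings_bitrate ?~ 5000000  -- average bitrate in bits per second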

MsSmoothAdditionalManifest

data MsSmoothAdditionalManifest Source #

Specify the details for each additional Microsoft Smooth Streaming manifest that you want the service to generate for this output group. Each manifest can reference a different subset of outputs in the group.

See: newMsSmoothAdditionalManifest smart constructor.

Constructors

MsSmoothAdditionalManifest' 

Fields

  • manifestNameModifier :: Maybe Text

    Specify a name modifier that the service adds to the name of this manifest to make it different from the file names of the other main manifests in the output group. For example, say that the default main manifest for your Microsoft Smooth group is film-name.ismv. If you enter "-no-premium" for this setting, then the file name the service generates for this top-level manifest is film-name-no-premium.ismv.

  • selectedOutputs :: Maybe [Text]

    Specify the outputs that you want this additional top-level manifest to reference.

Instances

Instances details
Eq MsSmoothAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAdditionalManifest

Read MsSmoothAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAdditionalManifest

Show MsSmoothAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAdditionalManifest

Generic MsSmoothAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAdditionalManifest

Associated Types

type Rep MsSmoothAdditionalManifest :: Type -> Type #

NFData MsSmoothAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAdditionalManifest

Hashable MsSmoothAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAdditionalManifest

ToJSON MsSmoothAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAdditionalManifest

FromJSON MsSmoothAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAdditionalManifest

type Rep MsSmoothAdditionalManifest Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothAdditionalManifest

type Rep MsSmoothAdditionalManifest = D1 ('MetaData "MsSmoothAdditionalManifest" "Amazonka.MediaConvert.Types.MsSmoothAdditionalManifest" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "MsSmoothAdditionalManifest'" 'PrefixI 'True) (S1 ('MetaSel ('Just "manifestNameModifier") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "selectedOutputs") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [Text]))))

newMsSmoothAdditionalManifest :: MsSmoothAdditionalManifest Source #

Create a value of MsSmoothAdditionalManifest with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:manifestNameModifier:MsSmoothAdditionalManifest', msSmoothAdditionalManifest_manifestNameModifier - Specify a name modifier that the service adds to the name of this manifest to make it different from the file names of the other main manifests in the output group. For example, say that the default main manifest for your Microsoft Smooth group is film-name.ismv. If you enter "-no-premium" for this setting, then the file name the service generates for this top-level manifest is film-name-no-premium.ismv.

$sel:selectedOutputs:MsSmoothAdditionalManifest', msSmoothAdditionalManifest_selectedOutputs - Specify the outputs that you want this additional top-level manifest to reference.

msSmoothAdditionalManifest_manifestNameModifier :: Lens' MsSmoothAdditionalManifest (Maybe Text) Source #

Specify a name modifier that the service adds to the name of this manifest to make it different from the file names of the other main manifests in the output group. For example, say that the default main manifest for your Microsoft Smooth group is film-name.ismv. If you enter "-no-premium" for this setting, then the file name the service generates for this top-level manifest is film-name-no-premium.ismv.

msSmoothAdditionalManifest_selectedOutputs :: Lens' MsSmoothAdditionalManifest (Maybe [Text]) Source #

Specify the outputs that you want this additional top-level manifest to reference.
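
A minimal sketch of building one of these additional manifests with the smart constructor and lenses above. The output names are placeholders, OverloadedStrings covers the Text literals, and the (&) and (?~) operators are assumed to come from the lens package.

{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- Sketch: a "-no-premium" manifest that references only two outputs.
-- The output names are placeholders for outputs defined elsewhere in the job.
noPremiumManifest :: MsSmoothAdditionalManifest
noPremiumManifest =
  newMsSmoothAdditionalManifest
    & msSmoothAdditionalManifest_manifestNameModifier ?~ "-no-premium"
    & msSmoothAdditionalManifest_selectedOutputs ?~ ["hd-1080", "hd-720"]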

MsSmoothEncryptionSettings

data MsSmoothEncryptionSettings Source #

If you are using DRM, set DRM System (MsSmoothEncryptionSettings) to specify the value SpekeKeyProvider.

See: newMsSmoothEncryptionSettings smart constructor.

Constructors

MsSmoothEncryptionSettings' 

Fields

  • spekeKeyProvider :: Maybe SpekeKeyProvider

    If your output group type is HLS, DASH, or Microsoft Smooth, use these settings when doing DRM encryption with a SPEKE-compliant key provider. If your output group type is CMAF, use the SpekeKeyProviderCmaf settings instead.

Instances

Instances details
Eq MsSmoothEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothEncryptionSettings

Read MsSmoothEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothEncryptionSettings

Show MsSmoothEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothEncryptionSettings

Generic MsSmoothEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothEncryptionSettings

Associated Types

type Rep MsSmoothEncryptionSettings :: Type -> Type #

NFData MsSmoothEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothEncryptionSettings

Hashable MsSmoothEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothEncryptionSettings

ToJSON MsSmoothEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothEncryptionSettings

FromJSON MsSmoothEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothEncryptionSettings

type Rep MsSmoothEncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothEncryptionSettings

type Rep MsSmoothEncryptionSettings = D1 ('MetaData "MsSmoothEncryptionSettings" "Amazonka.MediaConvert.Types.MsSmoothEncryptionSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "MsSmoothEncryptionSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "spekeKeyProvider") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe SpekeKeyProvider))))

newMsSmoothEncryptionSettings :: MsSmoothEncryptionSettings Source #

Create a value of MsSmoothEncryptionSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:spekeKeyProvider:MsSmoothEncryptionSettings', msSmoothEncryptionSettings_spekeKeyProvider - If your output group type is HLS, DASH, or Microsoft Smooth, use these settings when doing DRM encryption with a SPEKE-compliant key provider. If your output group type is CMAF, use the SpekeKeyProviderCmaf settings instead.

msSmoothEncryptionSettings_spekeKeyProvider :: Lens' MsSmoothEncryptionSettings (Maybe SpekeKeyProvider) Source #

If your output group type is HLS, DASH, or Microsoft Smooth, use these settings when doing DRM encryption with a SPEKE-compliant key provider. If your output group type is CMAF, use the SpekeKeyProviderCmaf settings instead.
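
A minimal sketch of attaching a SPEKE key provider to these settings. It assumes a newSpekeKeyProvider smart constructor exists by the same convention as the other types here; you would configure that value for your key provider before using it.

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- Sketch: wire a (still default-valued) SpekeKeyProvider into the Smooth
-- Streaming encryption settings. newSpekeKeyProvider is assumed to follow
-- the smart-constructor convention used throughout this module.
smoothDrm :: MsSmoothEncryptionSettings
smoothDrm =
  newMsSmoothEncryptionSettings
    & msSmoothEncryptionSettings_spekeKeyProvider ?~ newSpekeKeyProvider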

MsSmoothGroupSettings

data MsSmoothGroupSettings Source #

Settings related to your Microsoft Smooth Streaming output package. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/outputs-file-ABR.html. When you work directly in your JSON job specification, include this object and any required children when you set Type, under OutputGroupSettings, to MS_SMOOTH_GROUP_SETTINGS.

See: newMsSmoothGroupSettings smart constructor.

Constructors

MsSmoothGroupSettings' 

Fields

  • fragmentLength :: Maybe Natural

    Use Fragment length (FragmentLength) to specify the mp4 fragment sizes in seconds. Fragment length must be compatible with GOP size and frame rate.

  • manifestEncoding :: Maybe MsSmoothManifestEncoding

    Use Manifest encoding (MsSmoothManifestEncoding) to specify the encoding format for the server and client manifest. Valid options are utf8 and utf16.

  • destination :: Maybe Text

    Use Destination (Destination) to specify the S3 output location and the output filename base. Destination accepts format identifiers. If you do not specify the base filename in the URI, the service will use the filename of the input file. If your job has multiple inputs, the service uses the filename of the first input file.

  • audioDeduplication :: Maybe MsSmoothAudioDeduplication

    COMBINE_DUPLICATE_STREAMS combines identical audio encoding settings across a Microsoft Smooth output group into a single audio stream.

  • additionalManifests :: Maybe [MsSmoothAdditionalManifest]

    By default, the service creates one .ism Microsoft Smooth Streaming manifest for each Microsoft Smooth Streaming output group in your job. This default manifest references every output in the output group. To create additional manifests that reference a subset of the outputs in the output group, specify a list of them here.

  • fragmentLengthControl :: Maybe MsSmoothFragmentLengthControl

    Specify how you want MediaConvert to determine the fragment length. Choose Exact (EXACT) to have the encoder use the exact length that you specify with the setting Fragment length (FragmentLength). This might result in extra I-frames. Choose Multiple of GOP (GOP_MULTIPLE) to have the encoder round up the segment lengths to match the next GOP boundary.

  • destinationSettings :: Maybe DestinationSettings

    Settings associated with the destination. These vary based on the type of destination.

  • encryption :: Maybe MsSmoothEncryptionSettings

    If you are using DRM, set DRM System (MsSmoothEncryptionSettings) to specify the value SpekeKeyProvider.

Instances

Instances details
Eq MsSmoothGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothGroupSettings

Read MsSmoothGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothGroupSettings

Show MsSmoothGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothGroupSettings

Generic MsSmoothGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothGroupSettings

Associated Types

type Rep MsSmoothGroupSettings :: Type -> Type #

NFData MsSmoothGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothGroupSettings

Methods

rnf :: MsSmoothGroupSettings -> () #

Hashable MsSmoothGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothGroupSettings

ToJSON MsSmoothGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothGroupSettings

FromJSON MsSmoothGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothGroupSettings

type Rep MsSmoothGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MsSmoothGroupSettings

newMsSmoothGroupSettings :: MsSmoothGroupSettings Source #

Create a value of MsSmoothGroupSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:fragmentLength:MsSmoothGroupSettings', msSmoothGroupSettings_fragmentLength - Use Fragment length (FragmentLength) to specify the mp4 fragment sizes in seconds. Fragment length must be compatible with GOP size and frame rate.

$sel:manifestEncoding:MsSmoothGroupSettings', msSmoothGroupSettings_manifestEncoding - Use Manifest encoding (MsSmoothManifestEncoding) to specify the encoding format for the server and client manifest. Valid options are utf8 and utf16.

$sel:destination:MsSmoothGroupSettings', msSmoothGroupSettings_destination - Use Destination (Destination) to specify the S3 output location and the output filename base. Destination accepts format identifiers. If you do not specify the base filename in the URI, the service will use the filename of the input file. If your job has multiple inputs, the service uses the filename of the first input file.

$sel:audioDeduplication:MsSmoothGroupSettings', msSmoothGroupSettings_audioDeduplication - COMBINE_DUPLICATE_STREAMS combines identical audio encoding settings across a Microsoft Smooth output group into a single audio stream.

$sel:additionalManifests:MsSmoothGroupSettings', msSmoothGroupSettings_additionalManifests - By default, the service creates one .ism Microsoft Smooth Streaming manifest for each Microsoft Smooth Streaming output group in your job. This default manifest references every output in the output group. To create additional manifests that reference a subset of the outputs in the output group, specify a list of them here.

$sel:fragmentLengthControl:MsSmoothGroupSettings', msSmoothGroupSettings_fragmentLengthControl - Specify how you want MediaConvert to determine the fragment length. Choose Exact (EXACT) to have the encoder use the exact length that you specify with the setting Fragment length (FragmentLength). This might result in extra I-frames. Choose Multiple of GOP (GOP_MULTIPLE) to have the encoder round up the segment lengths to match the next GOP boundary.

$sel:destinationSettings:MsSmoothGroupSettings', msSmoothGroupSettings_destinationSettings - Settings associated with the destination. These vary based on the type of destination.

$sel:encryption:MsSmoothGroupSettings', msSmoothGroupSettings_encryption - If you are using DRM, set DRM System (MsSmoothEncryptionSettings) to specify the value SpekeKeyProvider.

msSmoothGroupSettings_fragmentLength :: Lens' MsSmoothGroupSettings (Maybe Natural) Source #

Use Fragment length (FragmentLength) to specify the mp4 fragment sizes in seconds. Fragment length must be compatible with GOP size and frame rate.

msSmoothGroupSettings_manifestEncoding :: Lens' MsSmoothGroupSettings (Maybe MsSmoothManifestEncoding) Source #

Use Manifest encoding (MsSmoothManifestEncoding) to specify the encoding format for the server and client manifest. Valid options are utf8 and utf16.

msSmoothGroupSettings_destination :: Lens' MsSmoothGroupSettings (Maybe Text) Source #

Use Destination (Destination) to specify the S3 output location and the output filename base. Destination accepts format identifiers. If you do not specify the base filename in the URI, the service will use the filename of the input file. If your job has multiple inputs, the service uses the filename of the first input file.

msSmoothGroupSettings_audioDeduplication :: Lens' MsSmoothGroupSettings (Maybe MsSmoothAudioDeduplication) Source #

COMBINE_DUPLICATE_STREAMS combines identical audio encoding settings across a Microsoft Smooth output group into a single audio stream.

msSmoothGroupSettings_additionalManifests :: Lens' MsSmoothGroupSettings (Maybe [MsSmoothAdditionalManifest]) Source #

By default, the service creates one .ism Microsoft Smooth Streaming manifest for each Microsoft Smooth Streaming output group in your job. This default manifest references every output in the output group. To create additional manifests that reference a subset of the outputs in the output group, specify a list of them here.

msSmoothGroupSettings_fragmentLengthControl :: Lens' MsSmoothGroupSettings (Maybe MsSmoothFragmentLengthControl) Source #

Specify how you want MediaConvert to determine the fragment length. Choose Exact (EXACT) to have the encoder use the exact length that you specify with the setting Fragment length (FragmentLength). This might result in extra I-frames. Choose Multiple of GOP (GOP_MULTIPLE) to have the encoder round up the segment lengths to match the next GOP boundary.

msSmoothGroupSettings_destinationSettings :: Lens' MsSmoothGroupSettings (Maybe DestinationSettings) Source #

Settings associated with the destination. These vary based on the type of destination.

msSmoothGroupSettings_encryption :: Lens' MsSmoothGroupSettings (Maybe MsSmoothEncryptionSettings) Source #

If you are using DRM, set DRM System (MsSmoothEncryptionSettings) to specify the value SpekeKeyProvider.
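
Putting the pieces together, a minimal sketch of a Smooth Streaming output group: an S3 destination, a two-second fragment length, and one additional manifest. The S3 URI and name modifier are placeholders, and the (&) and (?~) operators are assumed to come from the lens package.

{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- Sketch: a Microsoft Smooth output group. The S3 URI is a placeholder.
smoothGroup :: MsSmoothGroupSettings
smoothGroup =
  newMsSmoothGroupSettings
    & msSmoothGroupSettings_destination ?~ "s3://my-bucket/smooth/film-name"
    & msSmoothGroupSettings_fragmentLength ?~ 2
    & msSmoothGroupSettings_additionalManifests ?~
        [ newMsSmoothAdditionalManifest
            & msSmoothAdditionalManifest_manifestNameModifier ?~ "-no-premium"
        ]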

MxfSettings

data MxfSettings Source #

These settings relate to your MXF output container.

See: newMxfSettings smart constructor.

Constructors

MxfSettings' 

Fields

  • xavcProfileSettings :: Maybe MxfXavcProfileSettings

    Specify the XAVC profile settings for MXF outputs when you set your MXF profile to XAVC.

  • afdSignaling :: Maybe MxfAfdSignaling

    Optional. When you have AFD signaling set up in your output video stream, use this setting to choose whether to also include it in the MXF wrapper. Choose Don't copy (NO_COPY) to exclude AFD signaling from the MXF wrapper. Choose Copy from video stream (COPY_FROM_VIDEO) to copy the AFD values from the video stream for this output to the MXF wrapper. Regardless of which option you choose, the AFD values remain in the video stream. Related settings: To set up your output to include or exclude AFD values, see AfdSignaling, under VideoDescription. On the console, find AFD signaling under the output's video encoding settings.

  • profile :: Maybe MxfProfile

    Specify the MXF profile, also called shim, for this output. When you choose Auto, MediaConvert chooses a profile based on the video codec and resolution. For a list of codecs supported with each MXF profile, see https://docs.aws.amazon.com/mediaconvert/latest/ug/codecs-supported-with-each-mxf-profile.html. For more information about the automatic selection behavior, see https://docs.aws.amazon.com/mediaconvert/latest/ug/default-automatic-selection-of-mxf-profiles.html.

Instances

Instances details
Eq MxfSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfSettings

Read MxfSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfSettings

Show MxfSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfSettings

Generic MxfSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfSettings

Associated Types

type Rep MxfSettings :: Type -> Type #

NFData MxfSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfSettings

Methods

rnf :: MxfSettings -> () #

Hashable MxfSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfSettings

ToJSON MxfSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfSettings

FromJSON MxfSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfSettings

type Rep MxfSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfSettings

type Rep MxfSettings = D1 ('MetaData "MxfSettings" "Amazonka.MediaConvert.Types.MxfSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "MxfSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "xavcProfileSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MxfXavcProfileSettings)) :*: (S1 ('MetaSel ('Just "afdSignaling") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MxfAfdSignaling)) :*: S1 ('MetaSel ('Just "profile") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MxfProfile)))))

newMxfSettings :: MxfSettings Source #

Create a value of MxfSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:xavcProfileSettings:MxfSettings', mxfSettings_xavcProfileSettings - Specify the XAVC profile settings for MXF outputs when you set your MXF profile to XAVC.

$sel:afdSignaling:MxfSettings', mxfSettings_afdSignaling - Optional. When you have AFD signaling set up in your output video stream, use this setting to choose whether to also include it in the MXF wrapper. Choose Don't copy (NO_COPY) to exclude AFD signaling from the MXF wrapper. Choose Copy from video stream (COPY_FROM_VIDEO) to copy the AFD values from the video stream for this output to the MXF wrapper. Regardless of which option you choose, the AFD values remain in the video stream. Related settings: To set up your output to include or exclude AFD values, see AfdSignaling, under VideoDescription. On the console, find AFD signaling under the output's video encoding settings.

$sel:profile:MxfSettings', mxfSettings_profile - Specify the MXF profile, also called shim, for this output. When you choose Auto, MediaConvert chooses a profile based on the video codec and resolution. For a list of codecs supported with each MXF profile, see https://docs.aws.amazon.com/mediaconvert/latest/ug/codecs-supported-with-each-mxf-profile.html. For more information about the automatic selection behavior, see https://docs.aws.amazon.com/mediaconvert/latest/ug/default-automatic-selection-of-mxf-profiles.html.

mxfSettings_xavcProfileSettings :: Lens' MxfSettings (Maybe MxfXavcProfileSettings) Source #

Specify the XAVC profile settings for MXF outputs when you set your MXF profile to XAVC.

mxfSettings_afdSignaling :: Lens' MxfSettings (Maybe MxfAfdSignaling) Source #

Optional. When you have AFD signaling set up in your output video stream, use this setting to choose whether to also include it in the MXF wrapper. Choose Don't copy (NO_COPY) to exclude AFD signaling from the MXF wrapper. Choose Copy from video stream (COPY_FROM_VIDEO) to copy the AFD values from the video stream for this output to the MXF wrapper. Regardless of which option you choose, the AFD values remain in the video stream. Related settings: To set up your output to include or exclude AFD values, see AfdSignaling, under VideoDescription. On the console, find AFD signaling under the output's video encoding settings.

mxfSettings_profile :: Lens' MxfSettings (Maybe MxfProfile) Source #

Specify the MXF profile, also called shim, for this output. When you choose Auto, MediaConvert chooses a profile based on the video codec and resolution. For a list of codecs supported with each MXF profile, see https://docs.aws.amazon.com/mediaconvert/latest/ug/codecs-supported-with-each-mxf-profile.html. For more information about the automatic selection behavior, see https://docs.aws.amazon.com/mediaconvert/latest/ug/default-automatic-selection-of-mxf-profiles.html.

MxfXavcProfileSettings

data MxfXavcProfileSettings Source #

Specify the XAVC profile settings for MXF outputs when you set your MXF profile to XAVC.

See: newMxfXavcProfileSettings smart constructor.

Constructors

MxfXavcProfileSettings' 

Fields

  • maxAncDataSize :: Maybe Natural

    Specify a value for this setting only for outputs that you set up with one of these two XAVC profiles: XAVC HD Intra CBG (XAVC_HD_INTRA_CBG) or XAVC 4K Intra CBG (XAVC_4K_INTRA_CBG). Specify the amount of space in each frame that the service reserves for ancillary data, such as teletext captions. The default value for this setting is 1492 bytes per frame. This should be sufficient to prevent overflow unless you have multiple pages of teletext captions data. If you have a large amount of teletext data, specify a larger number.

  • durationMode :: Maybe MxfXavcDurationMode

    To create an output that complies with the XAVC file format guidelines for interoperability, choose Drop frames for compliance (DROP_FRAMES_FOR_COMPLIANCE). To include all frames from your input in this output, choose Allow any duration (ALLOW_ANY_DURATION). The number of frames that MediaConvert excludes when you set this to Drop frames for compliance depends on the output frame rate and duration.

Instances

Instances details
Eq MxfXavcProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcProfileSettings

Read MxfXavcProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcProfileSettings

Show MxfXavcProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcProfileSettings

Generic MxfXavcProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcProfileSettings

Associated Types

type Rep MxfXavcProfileSettings :: Type -> Type #

NFData MxfXavcProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcProfileSettings

Methods

rnf :: MxfXavcProfileSettings -> () #

Hashable MxfXavcProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcProfileSettings

ToJSON MxfXavcProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcProfileSettings

FromJSON MxfXavcProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcProfileSettings

type Rep MxfXavcProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.MxfXavcProfileSettings

type Rep MxfXavcProfileSettings = D1 ('MetaData "MxfXavcProfileSettings" "Amazonka.MediaConvert.Types.MxfXavcProfileSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "MxfXavcProfileSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "maxAncDataSize") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "durationMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MxfXavcDurationMode))))

newMxfXavcProfileSettings :: MxfXavcProfileSettings Source #

Create a value of MxfXavcProfileSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:maxAncDataSize:MxfXavcProfileSettings', mxfXavcProfileSettings_maxAncDataSize - Specify a value for this setting only for outputs that you set up with one of these two XAVC profiles: XAVC HD Intra CBG (XAVC_HD_INTRA_CBG) or XAVC 4K Intra CBG (XAVC_4K_INTRA_CBG). Specify the amount of space in each frame that the service reserves for ancillary data, such as teletext captions. The default value for this setting is 1492 bytes per frame. This should be sufficient to prevent overflow unless you have multiple pages of teletext captions data. If you have a large amount of teletext data, specify a larger number.

$sel:durationMode:MxfXavcProfileSettings', mxfXavcProfileSettings_durationMode - To create an output that complies with the XAVC file format guidelines for interoperability, choose Drop frames for compliance (DROP_FRAMES_FOR_COMPLIANCE). To include all frames from your input in this output, choose Allow any duration (ALLOW_ANY_DURATION). The number of frames that MediaConvert excludes when you set this to Drop frames for compliance depends on the output frame rate and duration.

mxfXavcProfileSettings_maxAncDataSize :: Lens' MxfXavcProfileSettings (Maybe Natural) Source #

Specify a value for this setting only for outputs that you set up with one of these two XAVC profiles: XAVC HD Intra CBG (XAVC_HD_INTRA_CBG) or XAVC 4K Intra CBG (XAVC_4K_INTRA_CBG). Specify the amount of space in each frame that the service reserves for ancillary data, such as teletext captions. The default value for this setting is 1492 bytes per frame. This should be sufficient to prevent overflow unless you have multiple pages of teletext captions data. If you have a large amount of teletext data, specify a larger number.

mxfXavcProfileSettings_durationMode :: Lens' MxfXavcProfileSettings (Maybe MxfXavcDurationMode) Source #

To create an output that complies with the XAVC file format guidelines for interoperability, choose Drop frames for compliance (DROP_FRAMES_FOR_COMPLIANCE). To include all frames from your input in this output, choose Allow any duration (ALLOW_ANY_DURATION). The number of frames that MediaConvert excludes when you set this to Drop frames for compliance depends on the output frame rate and duration.
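
A minimal sketch of nesting these XAVC profile settings into an MXF container configuration, reserving extra ancillary-data space for heavy teletext. The byte value is illustrative only, and the profile, AFD signaling, and duration-mode settings are left at their defaults because their constructor names aren't listed in this section.

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- Sketch: an MXF output whose XAVC profile settings reserve more space per
-- frame for ancillary data (for example, several pages of teletext captions).
-- 4096 is an illustrative value, not a recommendation.
mxfForXavc :: MxfSettings
mxfForXavc =
  newMxfSettings
    & mxfSettings_xavcProfileSettings ?~
        ( newMxfXavcProfileSettings
            & mxfXavcProfileSettings_maxAncDataSize ?~ 4096
        )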

NexGuardFileMarkerSettings

data NexGuardFileMarkerSettings Source #

For forensic video watermarking, MediaConvert supports Nagra NexGuard File Marker watermarking. MediaConvert supports both PreRelease Content (NGPR/G2) and OTT Streaming workflows.

See: newNexGuardFileMarkerSettings smart constructor.

Constructors

NexGuardFileMarkerSettings' 

Fields

  • strength :: Maybe WatermarkingStrength

    Optional. Ignore this setting unless Nagra support directs you to specify a value. When you don't specify a value here, the Nagra NexGuard library uses its default value.

  • payload :: Maybe Natural

    Specify the payload ID that you want associated with this output. Valid values vary depending on your Nagra NexGuard forensic watermarking workflow. Required when you include Nagra NexGuard File Marker watermarking (NexGuardWatermarkingSettings) in your job. For PreRelease Content (NGPR/G2), specify an integer from 1 through 4,194,303. You must generate a unique ID for each asset you watermark, and keep a record of which ID you have assigned to each asset. Neither Nagra nor MediaConvert keep track of the relationship between output files and your IDs. For OTT Streaming, create two adaptive bitrate (ABR) stacks for each asset. Do this by setting up two output groups. For one output group, set the value of Payload ID (payload) to 0 in every output. For the other output group, set Payload ID (payload) to 1 in every output.

  • preset :: Maybe Text

    Enter one of the watermarking preset strings that Nagra provides you. Required when you include Nagra NexGuard File Marker watermarking (NexGuardWatermarkingSettings) in your job.

  • license :: Maybe Text

    Use the base64 license string that Nagra provides you. Enter it directly in your JSON job specification or in the console. Required when you include Nagra NexGuard File Marker watermarking (NexGuardWatermarkingSettings) in your job.

Instances

Instances details
Eq NexGuardFileMarkerSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NexGuardFileMarkerSettings

Read NexGuardFileMarkerSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NexGuardFileMarkerSettings

Show NexGuardFileMarkerSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NexGuardFileMarkerSettings

Generic NexGuardFileMarkerSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NexGuardFileMarkerSettings

Associated Types

type Rep NexGuardFileMarkerSettings :: Type -> Type #

NFData NexGuardFileMarkerSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NexGuardFileMarkerSettings

Hashable NexGuardFileMarkerSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NexGuardFileMarkerSettings

ToJSON NexGuardFileMarkerSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NexGuardFileMarkerSettings

FromJSON NexGuardFileMarkerSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NexGuardFileMarkerSettings

type Rep NexGuardFileMarkerSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NexGuardFileMarkerSettings

type Rep NexGuardFileMarkerSettings = D1 ('MetaData "NexGuardFileMarkerSettings" "Amazonka.MediaConvert.Types.NexGuardFileMarkerSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "NexGuardFileMarkerSettings'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "strength") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe WatermarkingStrength)) :*: S1 ('MetaSel ('Just "payload") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "preset") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "license") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))))

newNexGuardFileMarkerSettings :: NexGuardFileMarkerSettings Source #

Create a value of NexGuardFileMarkerSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:strength:NexGuardFileMarkerSettings', nexGuardFileMarkerSettings_strength - Optional. Ignore this setting unless Nagra support directs you to specify a value. When you don't specify a value here, the Nagra NexGuard library uses its default value.

$sel:payload:NexGuardFileMarkerSettings', nexGuardFileMarkerSettings_payload - Specify the payload ID that you want associated with this output. Valid values vary depending on your Nagra NexGuard forensic watermarking workflow. Required when you include Nagra NexGuard File Marker watermarking (NexGuardWatermarkingSettings) in your job. For PreRelease Content (NGPR/G2), specify an integer from 1 through 4,194,303. You must generate a unique ID for each asset you watermark, and keep a record of which ID you have assigned to each asset. Neither Nagra nor MediaConvert keep track of the relationship between output files and your IDs. For OTT Streaming, create two adaptive bitrate (ABR) stacks for each asset. Do this by setting up two output groups. For one output group, set the value of Payload ID (payload) to 0 in every output. For the other output group, set Payload ID (payload) to 1 in every output.

$sel:preset:NexGuardFileMarkerSettings', nexGuardFileMarkerSettings_preset - Enter one of the watermarking preset strings that Nagra provides you. Required when you include Nagra NexGuard File Marker watermarking (NexGuardWatermarkingSettings) in your job.

$sel:license:NexGuardFileMarkerSettings', nexGuardFileMarkerSettings_license - Use the base64 license string that Nagra provides you. Enter it directly in your JSON job specification or in the console. Required when you include Nagra NexGuard File Marker watermarking (NexGuardWatermarkingSettings) in your job.

nexGuardFileMarkerSettings_strength :: Lens' NexGuardFileMarkerSettings (Maybe WatermarkingStrength) Source #

Optional. Ignore this setting unless Nagra support directs you to specify a value. When you don't specify a value here, the Nagra NexGuard library uses its default value.

nexGuardFileMarkerSettings_payload :: Lens' NexGuardFileMarkerSettings (Maybe Natural) Source #

Specify the payload ID that you want associated with this output. Valid values vary depending on your Nagra NexGuard forensic watermarking workflow. Required when you include Nagra NexGuard File Marker watermarking (NexGuardWatermarkingSettings) in your job. For PreRelease Content (NGPR/G2), specify an integer from 1 through 4,194,303. You must generate a unique ID for each asset you watermark, and keep a record of which ID you have assigned to each asset. Neither Nagra nor MediaConvert keep track of the relationship between output files and your IDs. For OTT Streaming, create two adaptive bitrate (ABR) stacks for each asset. Do this by setting up two output groups. For one output group, set the value of Payload ID (payload) to 0 in every output. For the other output group, set Payload ID (payload) to 1 in every output.

nexGuardFileMarkerSettings_preset :: Lens' NexGuardFileMarkerSettings (Maybe Text) Source #

Enter one of the watermarking preset strings that Nagra provides you. Required when you include Nagra NexGuard File Marker watermarking (NexGuardWatermarkingSettings) in your job.

nexGuardFileMarkerSettings_license :: Lens' NexGuardFileMarkerSettings (Maybe Text) Source #

Use the base64 license string that Nagra provides you. Enter it directly in your JSON job specification or in the console. Required when you include Nagra NexGuard File Marker watermarking (NexGuardWatermarkingSettings) in your job.
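
A minimal sketch for a PreRelease Content (NGPR/G2) workflow. The payload ID, preset string, and license string are placeholders for the ID you track yourself and the values Nagra provides; operators come from the lens package as in the earlier sketches.

{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- Sketch: NexGuard File Marker settings for one PreRelease (NGPR/G2) asset.
-- All three values below are placeholders.
nexGuardMarker :: NexGuardFileMarkerSettings
nexGuardMarker =
  newNexGuardFileMarkerSettings
    & nexGuardFileMarkerSettings_payload ?~ 1
    & nexGuardFileMarkerSettings_preset ?~ "preset-string-from-nagra"
    & nexGuardFileMarkerSettings_license ?~ "base64-license-from-nagra"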

NielsenConfiguration

data NielsenConfiguration Source #

Settings for your Nielsen configuration. If you don't do Nielsen measurement and analytics, ignore these settings. When you enable Nielsen configuration (nielsenConfiguration), MediaConvert enables PCM to ID3 tagging for all outputs in the job. To enable Nielsen configuration programmatically, include an instance of nielsenConfiguration in your JSON job specification. Even if you don't include any children of nielsenConfiguration, you still enable the setting.

See: newNielsenConfiguration smart constructor.

Constructors

NielsenConfiguration' 

Fields

  • breakoutCode :: Maybe Natural

    Nielsen has discontinued the use of breakout code functionality. If you must include this property, set the value to zero.

  • distributorId :: Maybe Text

    Use Distributor ID (DistributorID) to specify the distributor ID that is assigned to your organization by Nielsen.

Instances

Instances details
Eq NielsenConfiguration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenConfiguration

Read NielsenConfiguration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenConfiguration

Show NielsenConfiguration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenConfiguration

Generic NielsenConfiguration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenConfiguration

Associated Types

type Rep NielsenConfiguration :: Type -> Type #

NFData NielsenConfiguration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenConfiguration

Methods

rnf :: NielsenConfiguration -> () #

Hashable NielsenConfiguration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenConfiguration

ToJSON NielsenConfiguration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenConfiguration

FromJSON NielsenConfiguration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenConfiguration

type Rep NielsenConfiguration Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenConfiguration

type Rep NielsenConfiguration = D1 ('MetaData "NielsenConfiguration" "Amazonka.MediaConvert.Types.NielsenConfiguration" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "NielsenConfiguration'" 'PrefixI 'True) (S1 ('MetaSel ('Just "breakoutCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "distributorId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))

newNielsenConfiguration :: NielsenConfiguration Source #

Create a value of NielsenConfiguration with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:breakoutCode:NielsenConfiguration', nielsenConfiguration_breakoutCode - Nielsen has discontinued the use of breakout code functionality. If you must include this property, set the value to zero.

$sel:distributorId:NielsenConfiguration', nielsenConfiguration_distributorId - Use Distributor ID (DistributorID) to specify the distributor ID that is assigned to your organization by Nielsen.

nielsenConfiguration_breakoutCode :: Lens' NielsenConfiguration (Maybe Natural) Source #

Nielsen has discontinued the use of breakout code functionality. If you must include this property, set the value to zero.

nielsenConfiguration_distributorId :: Lens' NielsenConfiguration (Maybe Text) Source #

Use Distributor ID (DistributorID) to specify the distributor ID that is assigned to your organization by Nielsen.
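
A minimal sketch that enables the Nielsen configuration with a distributor ID; the ID is a placeholder for the one Nielsen assigns to your organization.

{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- Sketch: including nielsenConfiguration in a job enables PCM to ID3 tagging
-- for all outputs. The distributor ID below is a placeholder.
nielsenConfig :: NielsenConfiguration
nielsenConfig =
  newNielsenConfiguration
    & nielsenConfiguration_distributorId ?~ "1234567890"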

NielsenNonLinearWatermarkSettings

data NielsenNonLinearWatermarkSettings Source #

Ignore these settings unless you are using Nielsen non-linear watermarking. Specify the values that MediaConvert uses to generate and place Nielsen watermarks in your output audio. In addition to specifying these values, you also need to set up your cloud TIC server. These settings apply to every output in your job. The MediaConvert implementation is currently compatible with the following Nielsen versions: Nielsen Watermark SDK Version 5.2.1, Nielsen NLM Watermark Engine Version 1.2.7, Nielsen Watermark Authenticator [SID_TIC] Version [5.0.0].

See: newNielsenNonLinearWatermarkSettings smart constructor.

Constructors

NielsenNonLinearWatermarkSettings' 

Fields

  • episodeId :: Maybe Text

    Optional. If this asset uses an episode ID with Nielsen, provide it here.

  • activeWatermarkProcess :: Maybe NielsenActiveWatermarkProcessType

    Choose the type of Nielsen watermarks that you want in your outputs. When you choose NAES 2 and NW (NAES2_AND_NW), you must provide a value for the setting SID (sourceId). When you choose CBET (CBET), you must provide a value for the setting CSID (cbetSourceId). When you choose NAES 2, NW, and CBET (NAES2_AND_NW_AND_CBET), you must provide values for both of these settings.

  • sourceId :: Maybe Natural

    Use the SID that Nielsen provides to you. This source ID should be unique to your Nielsen account but common to all of your output assets. Required for all Nielsen non-linear watermarking.

  • cbetSourceId :: Maybe Text

    Use the CSID that Nielsen provides to you. This CBET source ID should be unique to your Nielsen account but common to all of your output assets that have CBET watermarking. Required when you choose a value for the setting Watermark types (ActiveWatermarkProcess) that includes CBET.

  • ticServerUrl :: Maybe Text

    Specify the endpoint for the TIC server that you have deployed and configured in the AWS Cloud. Required for all Nielsen non-linear watermarking. MediaConvert can't connect directly to a TIC server. Instead, you must use API Gateway to provide a RESTful interface between MediaConvert and a TIC server that you deploy in your AWS account. For more information on deploying a TIC server in your AWS account and the required API Gateway, contact Nielsen support.

  • metadataDestination :: Maybe Text

    Specify the Amazon S3 location where you want MediaConvert to save your Nielsen non-linear metadata .zip file. This Amazon S3 bucket must be in the same Region as the one where you do your MediaConvert transcoding. If you want to include an ADI file in this .zip file, use the setting ADI file (adiFilename) to specify it. MediaConvert delivers the Nielsen metadata .zip files only to your metadata destination Amazon S3 bucket. It doesn't deliver the .zip files to Nielsen. You are responsible for delivering the metadata .zip files to Nielsen.

  • assetName :: Maybe Text

    Use the asset name that you provide to Nielsen for this asset. Required for all Nielsen non-linear watermarking.

  • adiFilename :: Maybe Text

    Optional. Use this setting when you want the service to include an ADI file in the Nielsen metadata .zip file. To provide an ADI file, store it in Amazon S3 and provide a URL to it here. The URL should be in the following format: S3://bucket/path/ADI-file. For more information about the metadata .zip file, see the setting Metadata destination (metadataDestination).

  • assetId :: Maybe Text

    Use the asset ID that you provide to Nielsen to uniquely identify this asset. Required for all Nielsen non-linear watermarking.

  • uniqueTicPerAudioTrack :: Maybe NielsenUniqueTicPerAudioTrackType

    To create assets that have the same TIC values in each audio track, keep the default value Share TICs (SAME_TICS_PER_TRACK). To create assets that have unique TIC values for each audio track, choose Use unique TICs (RESERVE_UNIQUE_TICS_PER_TRACK).

  • sourceWatermarkStatus :: Maybe NielsenSourceWatermarkStatusType

    Required. Specify whether your source content already contains Nielsen non-linear watermarks. When you set this value to Watermarked (WATERMARKED), the service fails the job. Nielsen requires that you add non-linear watermarking to only clean content that doesn't already have non-linear Nielsen watermarks.

Instances

Instances details
Eq NielsenNonLinearWatermarkSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenNonLinearWatermarkSettings

Read NielsenNonLinearWatermarkSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenNonLinearWatermarkSettings

Show NielsenNonLinearWatermarkSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenNonLinearWatermarkSettings

Generic NielsenNonLinearWatermarkSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenNonLinearWatermarkSettings

Associated Types

type Rep NielsenNonLinearWatermarkSettings :: Type -> Type #

NFData NielsenNonLinearWatermarkSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenNonLinearWatermarkSettings

Hashable NielsenNonLinearWatermarkSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenNonLinearWatermarkSettings

ToJSON NielsenNonLinearWatermarkSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenNonLinearWatermarkSettings

FromJSON NielsenNonLinearWatermarkSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenNonLinearWatermarkSettings

type Rep NielsenNonLinearWatermarkSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NielsenNonLinearWatermarkSettings

type Rep NielsenNonLinearWatermarkSettings = D1 ('MetaData "NielsenNonLinearWatermarkSettings" "Amazonka.MediaConvert.Types.NielsenNonLinearWatermarkSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "NielsenNonLinearWatermarkSettings'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "episodeId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "activeWatermarkProcess") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe NielsenActiveWatermarkProcessType))) :*: (S1 ('MetaSel ('Just "sourceId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "cbetSourceId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "ticServerUrl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))) :*: ((S1 ('MetaSel ('Just "metadataDestination") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "assetName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "adiFilename") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))) :*: (S1 ('MetaSel ('Just "assetId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "uniqueTicPerAudioTrack") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe NielsenUniqueTicPerAudioTrackType)) :*: S1 ('MetaSel ('Just "sourceWatermarkStatus") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe NielsenSourceWatermarkStatusType)))))))

newNielsenNonLinearWatermarkSettings :: NielsenNonLinearWatermarkSettings Source #

Create a value of NielsenNonLinearWatermarkSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:episodeId:NielsenNonLinearWatermarkSettings', nielsenNonLinearWatermarkSettings_episodeId - Optional. If this asset uses an episode ID with Nielsen, provide it here.

$sel:activeWatermarkProcess:NielsenNonLinearWatermarkSettings', nielsenNonLinearWatermarkSettings_activeWatermarkProcess - Choose the type of Nielsen watermarks that you want in your outputs. When you choose NAES 2 and NW (NAES2_AND_NW), you must provide a value for the setting SID (sourceId). When you choose CBET (CBET), you must provide a value for the setting CSID (cbetSourceId). When you choose NAES 2, NW, and CBET (NAES2_AND_NW_AND_CBET), you must provide values for both of these settings.

$sel:sourceId:NielsenNonLinearWatermarkSettings', nielsenNonLinearWatermarkSettings_sourceId - Use the SID that Nielsen provides to you. This source ID should be unique to your Nielsen account but common to all of your output assets. Required for all Nielsen non-linear watermarking.

$sel:cbetSourceId:NielsenNonLinearWatermarkSettings', nielsenNonLinearWatermarkSettings_cbetSourceId - Use the CSID that Nielsen provides to you. This CBET source ID should be unique to your Nielsen account but common to all of your output assets that have CBET watermarking. Required when you choose a value for the setting Watermark types (ActiveWatermarkProcess) that includes CBET.

$sel:ticServerUrl:NielsenNonLinearWatermarkSettings', nielsenNonLinearWatermarkSettings_ticServerUrl - Specify the endpoint for the TIC server that you have deployed and configured in the AWS Cloud. Required for all Nielsen non-linear watermarking. MediaConvert can't connect directly to a TIC server. Instead, you must use API Gateway to provide a RESTful interface between MediaConvert and a TIC server that you deploy in your AWS account. For more information on deploying a TIC server in your AWS account and the required API Gateway, contact Nielsen support.

$sel:metadataDestination:NielsenNonLinearWatermarkSettings', nielsenNonLinearWatermarkSettings_metadataDestination - Specify the Amazon S3 location where you want MediaConvert to save your Nielsen non-linear metadata .zip file. This Amazon S3 bucket must be in the same Region as the one where you do your MediaConvert transcoding. If you want to include an ADI file in this .zip file, use the setting ADI file (adiFilename) to specify it. MediaConvert delivers the Nielsen metadata .zip files only to your metadata destination Amazon S3 bucket. It doesn't deliver the .zip files to Nielsen. You are responsible for delivering the metadata .zip files to Nielsen.

$sel:assetName:NielsenNonLinearWatermarkSettings', nielsenNonLinearWatermarkSettings_assetName - Use the asset name that you provide to Nielsen for this asset. Required for all Nielsen non-linear watermarking.

$sel:adiFilename:NielsenNonLinearWatermarkSettings', nielsenNonLinearWatermarkSettings_adiFilename - Optional. Use this setting when you want the service to include an ADI file in the Nielsen metadata .zip file. To provide an ADI file, store it in Amazon S3 and provide a URL to it here. The URL should be in the following format: S3://bucket/path/ADI-file. For more information about the metadata .zip file, see the setting Metadata destination (metadataDestination).

$sel:assetId:NielsenNonLinearWatermarkSettings', nielsenNonLinearWatermarkSettings_assetId - Use the asset ID that you provide to Nielsen to uniquely identify this asset. Required for all Nielsen non-linear watermarking.

$sel:uniqueTicPerAudioTrack:NielsenNonLinearWatermarkSettings', nielsenNonLinearWatermarkSettings_uniqueTicPerAudioTrack - To create assets that have the same TIC values in each audio track, keep the default value Share TICs (SAME_TICS_PER_TRACK). To create assets that have unique TIC values for each audio track, choose Use unique TICs (RESERVE_UNIQUE_TICS_PER_TRACK).

$sel:sourceWatermarkStatus:NielsenNonLinearWatermarkSettings', nielsenNonLinearWatermarkSettings_sourceWatermarkStatus - Required. Specify whether your source content already contains Nielsen non-linear watermarks. When you set this value to Watermarked (WATERMARKED), the service fails the job. Nielsen requires that you add non-linear watermarking to only clean content that doesn't already have non-linear Nielsen watermarks.

nielsenNonLinearWatermarkSettings_episodeId :: Lens' NielsenNonLinearWatermarkSettings (Maybe Text) Source #

Optional. If this asset uses an episode ID with Nielsen, provide it here.

nielsenNonLinearWatermarkSettings_activeWatermarkProcess :: Lens' NielsenNonLinearWatermarkSettings (Maybe NielsenActiveWatermarkProcessType) Source #

Choose the type of Nielsen watermarks that you want in your outputs. When you choose NAES 2 and NW (NAES2_AND_NW), you must provide a value for the setting SID (sourceId). When you choose CBET (CBET), you must provide a value for the setting CSID (cbetSourceId). When you choose NAES 2, NW, and CBET (NAES2_AND_NW_AND_CBET), you must provide values for both of these settings.

nielsenNonLinearWatermarkSettings_sourceId :: Lens' NielsenNonLinearWatermarkSettings (Maybe Natural) Source #

Use the SID that Nielsen provides to you. This source ID should be unique to your Nielsen account but common to all of your output assets. Required for all Nielsen non-linear watermarking.

nielsenNonLinearWatermarkSettings_cbetSourceId :: Lens' NielsenNonLinearWatermarkSettings (Maybe Text) Source #

Use the CSID that Nielsen provides to you. This CBET source ID should be unique to your Nielsen account but common to all of your output assets that have CBET watermarking. Required when you choose a value for the setting Watermark types (ActiveWatermarkProcess) that includes CBET.

nielsenNonLinearWatermarkSettings_ticServerUrl :: Lens' NielsenNonLinearWatermarkSettings (Maybe Text) Source #

Specify the endpoint for the TIC server that you have deployed and configured in the AWS Cloud. Required for all Nielsen non-linear watermarking. MediaConvert can't connect directly to a TIC server. Instead, you must use API Gateway to provide a RESTful interface between MediaConvert and a TIC server that you deploy in your AWS account. For more information on deploying a TIC server in your AWS account and the required API Gateway, contact Nielsen support.

nielsenNonLinearWatermarkSettings_metadataDestination :: Lens' NielsenNonLinearWatermarkSettings (Maybe Text) Source #

Specify the Amazon S3 location where you want MediaConvert to save your Nielsen non-linear metadata .zip file. This Amazon S3 bucket must be in the same Region as the one where you do your MediaConvert transcoding. If you want to include an ADI file in this .zip file, use the setting ADI file (adiFilename) to specify it. MediaConvert delivers the Nielsen metadata .zip files only to your metadata destination Amazon S3 bucket. It doesn't deliver the .zip files to Nielsen. You are responsible for delivering the metadata .zip files to Nielsen.

nielsenNonLinearWatermarkSettings_assetName :: Lens' NielsenNonLinearWatermarkSettings (Maybe Text) Source #

Use the asset name that you provide to Nielsen for this asset. Required for all Nielsen non-linear watermarking.

nielsenNonLinearWatermarkSettings_adiFilename :: Lens' NielsenNonLinearWatermarkSettings (Maybe Text) Source #

Optional. Use this setting when you want the service to include an ADI file in the Nielsen metadata .zip file. To provide an ADI file, store it in Amazon S3 and provide a URL to it here. The URL should be in the following format: S3://bucket/path/ADI-file. For more information about the metadata .zip file, see the setting Metadata destination (metadataDestination).

nielsenNonLinearWatermarkSettings_assetId :: Lens' NielsenNonLinearWatermarkSettings (Maybe Text) Source #

Use the asset ID that you provide to Nielsen to uniquely identify this asset. Required for all Nielsen non-linear watermarking.

nielsenNonLinearWatermarkSettings_uniqueTicPerAudioTrack :: Lens' NielsenNonLinearWatermarkSettings (Maybe NielsenUniqueTicPerAudioTrackType) Source #

To create assets that have the same TIC values in each audio track, keep the default value Share TICs (SAME_TICS_PER_TRACK). To create assets that have unique TIC values for each audio track, choose Use unique TICs (RESERVE_UNIQUE_TICS_PER_TRACK).

nielsenNonLinearWatermarkSettings_sourceWatermarkStatus :: Lens' NielsenNonLinearWatermarkSettings (Maybe NielsenSourceWatermarkStatusType) Source #

Required. Specify whether your source content already contains Nielsen non-linear watermarks. When you set this value to Watermarked (WATERMARKED), the service fails the job. Nielsen requires that you add non-linear watermarking to only clean content that doesn't already have non-linear Nielsen watermarks.
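
Taken together, the settings above describe a single Nielsen non-linear watermarking configuration. The fragment below is a minimal sketch of how such a value could be assembled with these lenses; it assumes the zero-argument smart constructor newNielsenNonLinearWatermarkSettings defined earlier on this page and the (&) and (?~) setters from the lens package, and every identifier (SID, CSID, TIC server URL, S3 destination, asset name and ID) is a placeholder rather than a real value.

    {-# LANGUAGE OverloadedStrings #-}

    import Amazonka.MediaConvert.Types
    import Control.Lens ((&), (?~))

    -- Placeholder identifiers only; substitute the values that Nielsen
    -- and your own AWS account provide.
    nielsenWatermarks :: NielsenNonLinearWatermarkSettings
    nielsenWatermarks =
      newNielsenNonLinearWatermarkSettings
        & nielsenNonLinearWatermarkSettings_sourceId            ?~ 1234
        & nielsenNonLinearWatermarkSettings_cbetSourceId        ?~ "ABCD"
        & nielsenNonLinearWatermarkSettings_ticServerUrl        ?~ "https://example.execute-api.us-east-1.amazonaws.com/tic"
        & nielsenNonLinearWatermarkSettings_metadataDestination ?~ "s3://example-bucket/nielsen-metadata/"
        & nielsenNonLinearWatermarkSettings_assetId             ?~ "asset-0001"
        & nielsenNonLinearWatermarkSettings_assetName           ?~ "ExampleAsset"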

NoiseReducer

data NoiseReducer Source #

Enable the Noise reducer (NoiseReducer) feature to remove noise from your video output if necessary. Enable or disable this feature for each output individually. This setting is disabled by default. When you enable Noise reducer (NoiseReducer), you must also select a value for Noise reducer filter (NoiseReducerFilter).

See: newNoiseReducer smart constructor.

Constructors

NoiseReducer' 

Fields

Instances

Instances details
Eq NoiseReducer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducer

Read NoiseReducer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducer

Show NoiseReducer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducer

Generic NoiseReducer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducer

Associated Types

type Rep NoiseReducer :: Type -> Type #

NFData NoiseReducer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducer

Methods

rnf :: NoiseReducer -> () #

Hashable NoiseReducer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducer

ToJSON NoiseReducer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducer

FromJSON NoiseReducer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducer

type Rep NoiseReducer Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducer

type Rep NoiseReducer = D1 ('MetaData "NoiseReducer" "Amazonka.MediaConvert.Types.NoiseReducer" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "NoiseReducer'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "temporalFilterSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe NoiseReducerTemporalFilterSettings)) :*: S1 ('MetaSel ('Just "spatialFilterSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe NoiseReducerSpatialFilterSettings))) :*: (S1 ('MetaSel ('Just "filterSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe NoiseReducerFilterSettings)) :*: S1 ('MetaSel ('Just "filter'") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe NoiseReducerFilter)))))

newNoiseReducer :: NoiseReducer Source #

Create a value of NoiseReducer with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:temporalFilterSettings:NoiseReducer', noiseReducer_temporalFilterSettings - Noise reducer filter settings for temporal filter.

$sel:spatialFilterSettings:NoiseReducer', noiseReducer_spatialFilterSettings - Noise reducer filter settings for spatial filter.

$sel:filterSettings:NoiseReducer', noiseReducer_filterSettings - Settings for a noise reducer filter

$sel:filter':NoiseReducer', noiseReducer_filter - Use Noise reducer filter (NoiseReducerFilter) to select one of the following spatial image filtering functions. To use this setting, you must also enable Noise reducer (NoiseReducer). * Bilateral preserves edges while reducing noise. * Mean (softest), Gaussian, Lanczos, and Sharpen (sharpest) do convolution filtering. * Conserve does min/max noise reduction. * Spatial does frequency-domain filtering based on JND principles. * Temporal optimizes video quality for complex motion.

noiseReducer_filter :: Lens' NoiseReducer (Maybe NoiseReducerFilter) Source #

Use Noise reducer filter (NoiseReducerFilter) to select one of the following spatial image filtering functions. To use this setting, you must also enable Noise reducer (NoiseReducer). * Bilateral preserves edges while reducing noise. * Mean (softest), Gaussian, Lanczos, and Sharpen (sharpest) do convolution filtering. * Conserve does min/max noise reduction. * Spatial does frequency-domain filtering based on JND principles. * Temporal optimizes video quality for complex motion.
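
As a rough sketch of how the pieces above fit together, the fragment below enables the noise reducer with one of the convolution filters and a generic filter strength. The (&) and (?~) setters are assumed to come from the lens package, and the enum pattern name NoiseReducerFilter_GAUSSIAN is an assumption based on the naming convention of the other enums in this module.

    import Amazonka.MediaConvert.Types
    import Control.Lens ((&), (?~))

    -- Gaussian is one of the convolution filters listed above; the pattern
    -- name NoiseReducerFilter_GAUSSIAN is assumed, not documented in this excerpt.
    gaussianNoiseReducer :: NoiseReducer
    gaussianNoiseReducer =
      newNoiseReducer
        & noiseReducer_filter ?~ NoiseReducerFilter_GAUSSIAN
        & noiseReducer_filterSettings
            ?~ (newNoiseReducerFilterSettings & noiseReducerFilterSettings_strength ?~ 3)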

NoiseReducerFilterSettings

data NoiseReducerFilterSettings Source #

Settings for a noise reducer filter

See: newNoiseReducerFilterSettings smart constructor.

Constructors

NoiseReducerFilterSettings' 

Fields

  • strength :: Maybe Natural

    Relative strength of noise reducing filter. Higher values produce stronger filtering.

Instances

Instances details
Eq NoiseReducerFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilterSettings

Read NoiseReducerFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilterSettings

Show NoiseReducerFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilterSettings

Generic NoiseReducerFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilterSettings

Associated Types

type Rep NoiseReducerFilterSettings :: Type -> Type #

NFData NoiseReducerFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilterSettings

Hashable NoiseReducerFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilterSettings

ToJSON NoiseReducerFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilterSettings

FromJSON NoiseReducerFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilterSettings

type Rep NoiseReducerFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerFilterSettings

type Rep NoiseReducerFilterSettings = D1 ('MetaData "NoiseReducerFilterSettings" "Amazonka.MediaConvert.Types.NoiseReducerFilterSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "NoiseReducerFilterSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "strength") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))

newNoiseReducerFilterSettings :: NoiseReducerFilterSettings Source #

Create a value of NoiseReducerFilterSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:strength:NoiseReducerFilterSettings', noiseReducerFilterSettings_strength - Relative strength of noise reducing filter. Higher values produce stronger filtering.

noiseReducerFilterSettings_strength :: Lens' NoiseReducerFilterSettings (Maybe Natural) Source #

Relative strength of noise reducing filter. Higher values produce stronger filtering.

NoiseReducerSpatialFilterSettings

data NoiseReducerSpatialFilterSettings Source #

Noise reducer filter settings for spatial filter.

See: newNoiseReducerSpatialFilterSettings smart constructor.

Constructors

NoiseReducerSpatialFilterSettings' 

Fields

  • strength :: Maybe Natural

    Relative strength of noise reducing filter. Higher values produce stronger filtering.

  • postFilterSharpenStrength :: Maybe Natural

    Specify strength of post noise reduction sharpening filter, with 0 disabling the filter and 3 enabling it at maximum strength.

  • speed :: Maybe Int

    The speed of the filter, from -2 (lower speed) to 3 (higher speed), with 0 being the nominal value.

Instances

Instances details
Eq NoiseReducerSpatialFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerSpatialFilterSettings

Read NoiseReducerSpatialFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerSpatialFilterSettings

Show NoiseReducerSpatialFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerSpatialFilterSettings

Generic NoiseReducerSpatialFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerSpatialFilterSettings

Associated Types

type Rep NoiseReducerSpatialFilterSettings :: Type -> Type #

NFData NoiseReducerSpatialFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerSpatialFilterSettings

Hashable NoiseReducerSpatialFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerSpatialFilterSettings

ToJSON NoiseReducerSpatialFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerSpatialFilterSettings

FromJSON NoiseReducerSpatialFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerSpatialFilterSettings

type Rep NoiseReducerSpatialFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerSpatialFilterSettings

type Rep NoiseReducerSpatialFilterSettings = D1 ('MetaData "NoiseReducerSpatialFilterSettings" "Amazonka.MediaConvert.Types.NoiseReducerSpatialFilterSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "NoiseReducerSpatialFilterSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "strength") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "postFilterSharpenStrength") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "speed") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)))))

newNoiseReducerSpatialFilterSettings :: NoiseReducerSpatialFilterSettings Source #

Create a value of NoiseReducerSpatialFilterSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:strength:NoiseReducerSpatialFilterSettings', noiseReducerSpatialFilterSettings_strength - Relative strength of noise reducing filter. Higher values produce stronger filtering.

$sel:postFilterSharpenStrength:NoiseReducerSpatialFilterSettings', noiseReducerSpatialFilterSettings_postFilterSharpenStrength - Specify strength of post noise reduction sharpening filter, with 0 disabling the filter and 3 enabling it at maximum strength.

$sel:speed:NoiseReducerSpatialFilterSettings', noiseReducerSpatialFilterSettings_speed - The speed of the filter, from -2 (lower speed) to 3 (higher speed), with 0 being the nominal value.

noiseReducerSpatialFilterSettings_strength :: Lens' NoiseReducerSpatialFilterSettings (Maybe Natural) Source #

Relative strength of noise reducing filter. Higher values produce stronger filtering.

noiseReducerSpatialFilterSettings_postFilterSharpenStrength :: Lens' NoiseReducerSpatialFilterSettings (Maybe Natural) Source #

Specify strength of post noise reduction sharpening filter, with 0 disabling the filter and 3 enabling it at maximum strength.

noiseReducerSpatialFilterSettings_speed :: Lens' NoiseReducerSpatialFilterSettings (Maybe Int) Source #

The speed of the filter, from -2 (lower speed) to 3 (higher speed), with 0 being the nominal value.
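
A minimal sketch of a spatial filter built from the three fields above; the strength value is an arbitrary placeholder, and the (&) and (?~) setters are assumed to come from the lens package.

    import Amazonka.MediaConvert.Types
    import Control.Lens ((&), (?~))

    spatialFilter :: NoiseReducerSpatialFilterSettings
    spatialFilter =
      newNoiseReducerSpatialFilterSettings
        & noiseReducerSpatialFilterSettings_strength                  ?~ 5  -- placeholder strength
        & noiseReducerSpatialFilterSettings_postFilterSharpenStrength ?~ 1  -- 0 disables, 3 is maximum
        & noiseReducerSpatialFilterSettings_speed                     ?~ 0  -- nominal speed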

NoiseReducerTemporalFilterSettings

data NoiseReducerTemporalFilterSettings Source #

Noise reducer filter settings for temporal filter.

See: newNoiseReducerTemporalFilterSettings smart constructor.

Constructors

NoiseReducerTemporalFilterSettings' 

Fields

  • postTemporalSharpening :: Maybe NoiseFilterPostTemporalSharpening

    Optional. When you set Noise reducer (noiseReducer) to Temporal (TEMPORAL), you can use this setting to apply sharpening. The default behavior, Auto (AUTO), allows the transcoder to determine whether to apply filtering, depending on input type and quality. When you set Noise reducer to Temporal, your output bandwidth is reduced. When Post temporal sharpening is also enabled, that bandwidth reduction is smaller.

  • aggressiveMode :: Maybe Natural

    Use Aggressive mode for content that has complex motion. Higher values produce stronger temporal filtering. This filters highly complex scenes more aggressively and creates better VQ for low bitrate outputs.

  • strength :: Maybe Natural

    Specify the strength of the noise reducing filter on this output. Higher values produce stronger filtering. We recommend the following value ranges, depending on the result that you want: * 0-2 for complexity reduction with minimal sharpness loss * 2-8 for complexity reduction with image preservation * 8-16 for a high level of complexity reduction

  • speed :: Maybe Int

    The speed of the filter (higher number is faster). Low setting reduces bit rate at the cost of transcode time, high setting improves transcode time at the cost of bit rate.

Instances

Instances details
Eq NoiseReducerTemporalFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerTemporalFilterSettings

Read NoiseReducerTemporalFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerTemporalFilterSettings

Show NoiseReducerTemporalFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerTemporalFilterSettings

Generic NoiseReducerTemporalFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerTemporalFilterSettings

Associated Types

type Rep NoiseReducerTemporalFilterSettings :: Type -> Type #

NFData NoiseReducerTemporalFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerTemporalFilterSettings

Hashable NoiseReducerTemporalFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerTemporalFilterSettings

ToJSON NoiseReducerTemporalFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerTemporalFilterSettings

FromJSON NoiseReducerTemporalFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerTemporalFilterSettings

type Rep NoiseReducerTemporalFilterSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.NoiseReducerTemporalFilterSettings

type Rep NoiseReducerTemporalFilterSettings = D1 ('MetaData "NoiseReducerTemporalFilterSettings" "Amazonka.MediaConvert.Types.NoiseReducerTemporalFilterSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "NoiseReducerTemporalFilterSettings'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "postTemporalSharpening") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe NoiseFilterPostTemporalSharpening)) :*: S1 ('MetaSel ('Just "aggressiveMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "strength") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "speed") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)))))

newNoiseReducerTemporalFilterSettings :: NoiseReducerTemporalFilterSettings Source #

Create a value of NoiseReducerTemporalFilterSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:postTemporalSharpening:NoiseReducerTemporalFilterSettings', noiseReducerTemporalFilterSettings_postTemporalSharpening - Optional. When you set Noise reducer (noiseReducer) to Temporal (TEMPORAL), you can use this setting to apply sharpening. The default behavior, Auto (AUTO), allows the transcoder to determine whether to apply filtering, depending on input type and quality. When you set Noise reducer to Temporal, your output bandwidth is reduced. When Post temporal sharpening is also enabled, that bandwidth reduction is smaller.

$sel:aggressiveMode:NoiseReducerTemporalFilterSettings', noiseReducerTemporalFilterSettings_aggressiveMode - Use Aggressive mode for content that has complex motion. Higher values produce stronger temporal filtering. This filters highly complex scenes more aggressively and creates better VQ for low bitrate outputs.

$sel:strength:NoiseReducerTemporalFilterSettings', noiseReducerTemporalFilterSettings_strength - Specify the strength of the noise reducing filter on this output. Higher values produce stronger filtering. We recommend the following value ranges, depending on the result that you want: * 0-2 for complexity reduction with minimal sharpness loss * 2-8 for complexity reduction with image preservation * 8-16 for a high level of complexity reduction

$sel:speed:NoiseReducerTemporalFilterSettings', noiseReducerTemporalFilterSettings_speed - The speed of the filter (higher number is faster). Low setting reduces bit rate at the cost of transcode time, high setting improves transcode time at the cost of bit rate.

noiseReducerTemporalFilterSettings_postTemporalSharpening :: Lens' NoiseReducerTemporalFilterSettings (Maybe NoiseFilterPostTemporalSharpening) Source #

Optional. When you set Noise reducer (noiseReducer) to Temporal (TEMPORAL), you can use this setting to apply sharpening. The default behavior, Auto (AUTO), allows the transcoder to determine whether to apply filtering, depending on input type and quality. When you set Noise reducer to Temporal, your output bandwidth is reduced. When Post temporal sharpening is also enabled, that bandwidth reduction is smaller.

noiseReducerTemporalFilterSettings_aggressiveMode :: Lens' NoiseReducerTemporalFilterSettings (Maybe Natural) Source #

Use Aggressive mode for content that has complex motion. Higher values produce stronger temporal filtering. This filters highly complex scenes more aggressively and creates better VQ for low bitrate outputs.

noiseReducerTemporalFilterSettings_strength :: Lens' NoiseReducerTemporalFilterSettings (Maybe Natural) Source #

Specify the strength of the noise reducing filter on this output. Higher values produce stronger filtering. We recommend the following value ranges, depending on the result that you want: * 0-2 for complexity reduction with minimal sharpness loss * 2-8 for complexity reduction with image preservation * 8-16 for a high level of complexity reduction

noiseReducerTemporalFilterSettings_speed :: Lens' NoiseReducerTemporalFilterSettings (Maybe Int) Source #

The speed of the filter (higher number is faster). Low setting reduces bit rate at the cost of transcode time, high setting improves transcode time at the cost of bit rate.
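
A small sketch that stays inside the documented guidance: a strength in the 2-8 band for complexity reduction with image preservation, nominal speed, and a modest placeholder value for Aggressive mode. The (&) and (?~) setters are assumed to come from the lens package.

    import Amazonka.MediaConvert.Types
    import Control.Lens ((&), (?~))

    temporalFilter :: NoiseReducerTemporalFilterSettings
    temporalFilter =
      newNoiseReducerTemporalFilterSettings
        & noiseReducerTemporalFilterSettings_strength       ?~ 4  -- 2-8: complexity reduction with image preservation
        & noiseReducerTemporalFilterSettings_aggressiveMode ?~ 1  -- placeholder; raise for complex motion
        & noiseReducerTemporalFilterSettings_speed          ?~ 0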

OpusSettings

data OpusSettings Source #

Required when you set Codec, under AudioDescriptions>CodecSettings, to the value OPUS.

See: newOpusSettings smart constructor.

Constructors

OpusSettings' 

Fields

  • channels :: Maybe Natural

    Specify the number of channels in this output audio track. Choosing Mono on the console gives you 1 output channel; choosing Stereo gives you 2. In the API, valid values are 1 and 2.

  • sampleRate :: Maybe Natural

    Optional. Sample rate in Hz. Valid values are 16000, 24000, and 48000. The default value is 48000.

  • bitrate :: Maybe Natural

    Optional. Specify the average bitrate in bits per second. Valid values are multiples of 8000, from 32000 through 192000. The default value is 96000, which we recommend for quality and bandwidth.

Instances

Instances details
Eq OpusSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OpusSettings

Read OpusSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OpusSettings

Show OpusSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OpusSettings

Generic OpusSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OpusSettings

Associated Types

type Rep OpusSettings :: Type -> Type #

NFData OpusSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OpusSettings

Methods

rnf :: OpusSettings -> () #

Hashable OpusSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OpusSettings

ToJSON OpusSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OpusSettings

FromJSON OpusSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OpusSettings

type Rep OpusSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OpusSettings

type Rep OpusSettings = D1 ('MetaData "OpusSettings" "Amazonka.MediaConvert.Types.OpusSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "OpusSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "channels") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "sampleRate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "bitrate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))

newOpusSettings :: OpusSettings Source #

Create a value of OpusSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:channels:OpusSettings', opusSettings_channels - Specify the number of channels in this output audio track. Choosing Mono on the console gives you 1 output channel; choosing Stereo gives you 2. In the API, valid values are 1 and 2.

$sel:sampleRate:OpusSettings', opusSettings_sampleRate - Optional. Sample rate in Hz. Valid values are 16000, 24000, and 48000. The default value is 48000.

$sel:bitrate:OpusSettings', opusSettings_bitrate - Optional. Specify the average bitrate in bits per second. Valid values are multiples of 8000, from 32000 through 192000. The default value is 96000, which we recommend for quality and bandwidth.

opusSettings_channels :: Lens' OpusSettings (Maybe Natural) Source #

Specify the number of channels in this output audio track. Choosing Mono on the console gives you 1 output channel; choosing Stereo gives you 2. In the API, valid values are 1 and 2.

opusSettings_sampleRate :: Lens' OpusSettings (Maybe Natural) Source #

Optional. Sample rate in Hz. Valid values are 16000, 24000, and 48000. The default value is 48000.

opusSettings_bitrate :: Lens' OpusSettings (Maybe Natural) Source #

Optional. Specify the average bitrate in bits per second. Valid values are multiples of 8000, from 32000 through 192000. The default value is 96000, which we recommend for quality and bandwidth.
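
For example, a stereo Opus track at the documented defaults could be built as follows; the (&) and (?~) setters are assumed to come from the lens package.

    import Amazonka.MediaConvert.Types
    import Control.Lens ((&), (?~))

    stereoOpus :: OpusSettings
    stereoOpus =
      newOpusSettings
        & opusSettings_channels   ?~ 2      -- stereo
        & opusSettings_sampleRate ?~ 48000  -- default sample rate, in Hz
        & opusSettings_bitrate    ?~ 96000  -- recommended default, in bits per second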

Output

data Output Source #

Each output in your job is a collection of settings that describes how you want MediaConvert to encode a single output file or stream. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/create-outputs.html.

See: newOutput smart constructor.

Constructors

Output' 

Fields

  • captionDescriptions :: Maybe [CaptionDescription]

    (CaptionDescriptions) contains groups of captions settings. For each output that has captions, include one instance of (CaptionDescriptions). (CaptionDescriptions) can contain multiple groups of captions settings.

  • extension :: Maybe Text

    Use Extension (Extension) to specify the file extension for outputs in File output groups. If you do not specify a value, the service will use default extensions by container type as follows * MPEG-2 transport stream, m2ts * QuickTime, mov * MXF container, mxf * MPEG-4 container, mp4 * WebM container, webm * No Container, the service will use codec extensions (e.g. AAC, H264, H265, AC3)

  • videoDescription :: Maybe VideoDescription

    VideoDescription contains a group of video encoding settings. The specific video settings depend on the video codec that you choose for the property codec. Include one instance of VideoDescription per output.

  • containerSettings :: Maybe ContainerSettings

    Container specific settings.

  • outputSettings :: Maybe OutputSettings

    Specific settings for this type of output.

  • preset :: Maybe Text

    Use Preset (Preset) to specify a preset for your transcoding settings. Provide the system or custom preset name. You can specify either Preset (Preset) or Container settings (ContainerSettings), but not both.

  • nameModifier :: Maybe Text

    Use Name modifier (NameModifier) to have the service add a string to the end of each output filename. You specify the base filename as part of your destination URI. When you create multiple outputs in the same output group, Name modifier (NameModifier) is required. Name modifier also accepts format identifiers. For DASH ISO outputs, if you use the format identifiers $Number$ or $Time$ in one output, you must use them in the same way in all outputs of the output group.

  • audioDescriptions :: Maybe [AudioDescription]

    (AudioDescriptions) contains groups of audio encoding settings organized by audio codec. Include one instance of (AudioDescriptions) per output. (AudioDescriptions) can contain multiple groups of encoding settings.

Instances

Instances details
Eq Output Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Output

Methods

(==) :: Output -> Output -> Bool #

(/=) :: Output -> Output -> Bool #

Read Output Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Output

Show Output Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Output

Generic Output Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Output

Associated Types

type Rep Output :: Type -> Type #

Methods

from :: Output -> Rep Output x #

to :: Rep Output x -> Output #

NFData Output Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Output

Methods

rnf :: Output -> () #

Hashable Output Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Output

Methods

hashWithSalt :: Int -> Output -> Int #

hash :: Output -> Int #

ToJSON Output Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Output

FromJSON Output Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Output

type Rep Output Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Output

newOutput :: Output Source #

Create a value of Output with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:captionDescriptions:Output', output_captionDescriptions - (CaptionDescriptions) contains groups of captions settings. For each output that has captions, include one instance of (CaptionDescriptions). (CaptionDescriptions) can contain multiple groups of captions settings.

$sel:extension:Output', output_extension - Use Extension (Extension) to specify the file extension for outputs in File output groups. If you do not specify a value, the service will use default extensions by container type as follows * MPEG-2 transport stream, m2ts * QuickTime, mov * MXF container, mxf * MPEG-4 container, mp4 * WebM container, webm * No Container, the service will use codec extensions (e.g. AAC, H264, H265, AC3)

$sel:videoDescription:Output', output_videoDescription - VideoDescription contains a group of video encoding settings. The specific video settings depend on the video codec that you choose for the property codec. Include one instance of VideoDescription per output.

$sel:containerSettings:Output', output_containerSettings - Container specific settings.

$sel:outputSettings:Output', output_outputSettings - Specific settings for this type of output.

$sel:preset:Output', output_preset - Use Preset (Preset) to specify a preset for your transcoding settings. Provide the system or custom preset name. You can specify either Preset (Preset) or Container settings (ContainerSettings), but not both.

$sel:nameModifier:Output', output_nameModifier - Use Name modifier (NameModifier) to have the service add a string to the end of each output filename. You specify the base filename as part of your destination URI. When you create multiple outputs in the same output group, Name modifier (NameModifier) is required. Name modifier also accepts format identifiers. For DASH ISO outputs, if you use the format identifiers $Number$ or $Time$ in one output, you must use them in the same way in all outputs of the output group.

$sel:audioDescriptions:Output', output_audioDescriptions - (AudioDescriptions) contains groups of audio encoding settings organized by audio codec. Include one instance of (AudioDescriptions) per output. (AudioDescriptions) can contain multiple groups of encoding settings.

output_captionDescriptions :: Lens' Output (Maybe [CaptionDescription]) Source #

(CaptionDescriptions) contains groups of captions settings. For each output that has captions, include one instance of (CaptionDescriptions). (CaptionDescriptions) can contain multiple groups of captions settings.

output_extension :: Lens' Output (Maybe Text) Source #

Use Extension (Extension) to specify the file extension for outputs in File output groups. If you do not specify a value, the service will use default extensions by container type as follows * MPEG-2 transport stream, m2ts * QuickTime, mov * MXF container, mxf * MPEG-4 container, mp4 * WebM container, webm * No Container, the service will use codec extensions (e.g. AAC, H264, H265, AC3)

output_videoDescription :: Lens' Output (Maybe VideoDescription) Source #

VideoDescription contains a group of video encoding settings. The specific video settings depend on the video codec that you choose for the property codec. Include one instance of VideoDescription per output.

output_outputSettings :: Lens' Output (Maybe OutputSettings) Source #

Specific settings for this type of output.

output_preset :: Lens' Output (Maybe Text) Source #

Use Preset (Preset) to specify a preset for your transcoding settings. Provide the system or custom preset name. You can specify either Preset (Preset) or Container settings (ContainerSettings), but not both.

output_nameModifier :: Lens' Output (Maybe Text) Source #

Use Name modifier (NameModifier) to have the service add a string to the end of each output filename. You specify the base filename as part of your destination URI. When you create multiple outputs in the same output group, Name modifier (NameModifier) is required. Name modifier also accepts format identifiers. For DASH ISO outputs, if you use the format identifiers $Number$ or $Time$ in one output, you must use them in the same way in all outputs of the output group.

output_audioDescriptions :: Lens' Output (Maybe [AudioDescription]) Source #

(AudioDescriptions) contains groups of audio encoding settings organized by audio codec. Include one instance of (AudioDescriptions) per output. (AudioDescriptions) can contain multiple groups of encoding settings.
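
As a sketch, an output that defers all encoding settings to a preset and tags its filename needs only two of the fields above. The preset name is a placeholder, and the (&) and (?~) setters are assumed to come from the lens package.

    {-# LANGUAGE OverloadedStrings #-}

    import Amazonka.MediaConvert.Types
    import Control.Lens ((&), (?~))

    presetOutput :: Output
    presetOutput =
      newOutput
        & output_preset       ?~ "my-720p-preset"  -- placeholder preset name
        & output_nameModifier ?~ "_720p"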

OutputChannelMapping

data OutputChannelMapping Source #

OutputChannel mapping settings.

See: newOutputChannelMapping smart constructor.

Constructors

OutputChannelMapping' 

Fields

  • inputChannelsFineTune :: Maybe [Double]

    Use this setting to specify your remix values when they have a decimal component, such as -10.312, 0.08, or 4.9. MediaConvert rounds your remixing values to the nearest thousandth.

  • inputChannels :: Maybe [Int]

    Use this setting to specify your remix values when they are integers, such as -10, 0, or 4.

Instances

Instances details
Eq OutputChannelMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputChannelMapping

Read OutputChannelMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputChannelMapping

Show OutputChannelMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputChannelMapping

Generic OutputChannelMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputChannelMapping

Associated Types

type Rep OutputChannelMapping :: Type -> Type #

NFData OutputChannelMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputChannelMapping

Methods

rnf :: OutputChannelMapping -> () #

Hashable OutputChannelMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputChannelMapping

ToJSON OutputChannelMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputChannelMapping

FromJSON OutputChannelMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputChannelMapping

type Rep OutputChannelMapping Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputChannelMapping

type Rep OutputChannelMapping = D1 ('MetaData "OutputChannelMapping" "Amazonka.MediaConvert.Types.OutputChannelMapping" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "OutputChannelMapping'" 'PrefixI 'True) (S1 ('MetaSel ('Just "inputChannelsFineTune") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [Double])) :*: S1 ('MetaSel ('Just "inputChannels") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [Int]))))

newOutputChannelMapping :: OutputChannelMapping Source #

Create a value of OutputChannelMapping with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:inputChannelsFineTune:OutputChannelMapping', outputChannelMapping_inputChannelsFineTune - Use this setting to specify your remix values when they have a decimal component, such as -10.312, 0.08, or 4.9. MediaConvert rounds your remixing values to the nearest thousandth.

$sel:inputChannels:OutputChannelMapping', outputChannelMapping_inputChannels - Use this setting to specify your remix values when they are integers, such as -10, 0, or 4.

outputChannelMapping_inputChannelsFineTune :: Lens' OutputChannelMapping (Maybe [Double]) Source #

Use this setting to specify your remix values when they have a decimal component, such as -10.312, 0.08, or 4.9. MediaConvert rounds your remixing values to the nearest thousandth.

outputChannelMapping_inputChannels :: Lens' OutputChannelMapping (Maybe [Int]) Source #

Use this setting to specify your remix values when they are integers, such as -10, 0, or 4.
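
For instance, a mono output channel that keeps the first input channel and effectively mutes the second can be expressed with whole-number gains via inputChannels; use inputChannelsFineTune instead when the gains have a decimal component. The gain values below are placeholders, and the (&) and (?~) setters are assumed to come from the lens package.

    import Amazonka.MediaConvert.Types
    import Control.Lens ((&), (?~))

    keepLeftOnly :: OutputChannelMapping
    keepLeftOnly =
      newOutputChannelMapping
        & outputChannelMapping_inputChannels ?~ [0, -60]  -- one gain per input channel (placeholder values)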

OutputDetail

data OutputDetail Source #

Details regarding output

See: newOutputDetail smart constructor.

Constructors

OutputDetail' 

Fields

Instances

Instances details
Eq OutputDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputDetail

Read OutputDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputDetail

Show OutputDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputDetail

Generic OutputDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputDetail

Associated Types

type Rep OutputDetail :: Type -> Type #

NFData OutputDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputDetail

Methods

rnf :: OutputDetail -> () #

Hashable OutputDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputDetail

FromJSON OutputDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputDetail

type Rep OutputDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputDetail

type Rep OutputDetail = D1 ('MetaData "OutputDetail" "Amazonka.MediaConvert.Types.OutputDetail" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "OutputDetail'" 'PrefixI 'True) (S1 ('MetaSel ('Just "videoDetails") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe VideoDetail)) :*: S1 ('MetaSel ('Just "durationInMs") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int))))

newOutputDetail :: OutputDetail Source #

Create a value of OutputDetail with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:videoDetails:OutputDetail', outputDetail_videoDetails - Contains details about the output's video stream

$sel:durationInMs:OutputDetail', outputDetail_durationInMs - Duration in milliseconds

outputDetail_videoDetails :: Lens' OutputDetail (Maybe VideoDetail) Source #

Contains details about the output's video stream

OutputGroup

data OutputGroup Source #

Group of outputs

See: newOutputGroup smart constructor.

Constructors

OutputGroup' 

Fields

Instances

Instances details
Eq OutputGroup Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroup

Read OutputGroup Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroup

Show OutputGroup Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroup

Generic OutputGroup Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroup

Associated Types

type Rep OutputGroup :: Type -> Type #

NFData OutputGroup Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroup

Methods

rnf :: OutputGroup -> () #

Hashable OutputGroup Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroup

ToJSON OutputGroup Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroup

FromJSON OutputGroup Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroup

type Rep OutputGroup Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroup

type Rep OutputGroup = D1 ('MetaData "OutputGroup" "Amazonka.MediaConvert.Types.OutputGroup" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "OutputGroup'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "outputGroupSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe OutputGroupSettings)) :*: S1 ('MetaSel ('Just "outputs") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [Output]))) :*: (S1 ('MetaSel ('Just "customName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "name") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "automatedEncodingSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AutomatedEncodingSettings))))))

newOutputGroup :: OutputGroup Source #

Create a value of OutputGroup with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:outputGroupSettings:OutputGroup', outputGroup_outputGroupSettings - Output Group settings, including type

$sel:outputs:OutputGroup', outputGroup_outputs - This object holds groups of encoding settings, one group of settings per output.

$sel:customName:OutputGroup', outputGroup_customName - Use Custom Group Name (CustomName) to specify a name for the output group. This value is displayed on the console and can make your job settings JSON more human-readable. It does not affect your outputs. Use up to twelve characters that are either letters, numbers, spaces, or underscores.

$sel:name:OutputGroup', outputGroup_name - Name of the output group

$sel:automatedEncodingSettings:OutputGroup', outputGroup_automatedEncodingSettings - Use automated encoding to have MediaConvert choose your encoding settings for you, based on characteristics of your input video.

outputGroup_outputs :: Lens' OutputGroup (Maybe [Output]) Source #

This object holds groups of encoding settings, one group of settings per output.

outputGroup_customName :: Lens' OutputGroup (Maybe Text) Source #

Use Custom Group Name (CustomName) to specify a name for the output group. This value is displayed on the console and can make your job settings JSON more human-readable. It does not affect your outputs. Use up to twelve characters that are either letters, numbers, spaces, or underscores.

outputGroup_name :: Lens' OutputGroup (Maybe Text) Source #

Name of the output group

outputGroup_automatedEncodingSettings :: Lens' OutputGroup (Maybe AutomatedEncodingSettings) Source #

Use automated encoding to have MediaConvert choose your encoding settings for you, based on characteristics of your input video.
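
Putting the pieces together, an output group is a named wrapper around outputs and group settings built elsewhere. The sketch below takes those as arguments rather than inventing them; the (&) and (?~) setters are assumed to come from the lens package.

    {-# LANGUAGE OverloadedStrings #-}

    import Amazonka.MediaConvert.Types
    import Control.Lens ((&), (?~))

    mkOutputGroup :: [Output] -> OutputGroupSettings -> OutputGroup
    mkOutputGroup outs settings =
      newOutputGroup
        & outputGroup_name                ?~ "File Group"
        & outputGroup_customName          ?~ "mezzanine"  -- up to twelve characters
        & outputGroup_outputs             ?~ outs
        & outputGroup_outputGroupSettings ?~ settings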

OutputGroupDetail

data OutputGroupDetail Source #

Contains details about the output groups specified in the job settings.

See: newOutputGroupDetail smart constructor.

Constructors

OutputGroupDetail' 

Fields

Instances

Instances details
Eq OutputGroupDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupDetail

Read OutputGroupDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupDetail

Show OutputGroupDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupDetail

Generic OutputGroupDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupDetail

Associated Types

type Rep OutputGroupDetail :: Type -> Type #

NFData OutputGroupDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupDetail

Methods

rnf :: OutputGroupDetail -> () #

Hashable OutputGroupDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupDetail

FromJSON OutputGroupDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupDetail

type Rep OutputGroupDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupDetail

type Rep OutputGroupDetail = D1 ('MetaData "OutputGroupDetail" "Amazonka.MediaConvert.Types.OutputGroupDetail" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "OutputGroupDetail'" 'PrefixI 'True) (S1 ('MetaSel ('Just "outputDetails") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [OutputDetail]))))

newOutputGroupDetail :: OutputGroupDetail Source #

Create a value of OutputGroupDetail with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:outputDetails:OutputGroupDetail', outputGroupDetail_outputDetails - Details about the output

OutputGroupSettings

data OutputGroupSettings Source #

Output Group settings, including type

See: newOutputGroupSettings smart constructor.

Constructors

OutputGroupSettings' 

Fields

  • fileGroupSettings :: Maybe FileGroupSettings

    Settings related to your File output group. MediaConvert uses this group of settings to generate a single standalone file, rather than a streaming package. When you work directly in your JSON job specification, include this object and any required children when you set Type, under OutputGroupSettings, to FILE_GROUP_SETTINGS.

  • cmafGroupSettings :: Maybe CmafGroupSettings

    Settings related to your CMAF output package. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/outputs-file-ABR.html. When you work directly in your JSON job specification, include this object and any required children when you set Type, under OutputGroupSettings, to CMAF_GROUP_SETTINGS.

  • msSmoothGroupSettings :: Maybe MsSmoothGroupSettings

    Settings related to your Microsoft Smooth Streaming output package. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/outputs-file-ABR.html. When you work directly in your JSON job specification, include this object and any required children when you set Type, under OutputGroupSettings, to MS_SMOOTH_GROUP_SETTINGS.

  • hlsGroupSettings :: Maybe HlsGroupSettings

    Settings related to your HLS output package. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/outputs-file-ABR.html. When you work directly in your JSON job specification, include this object and any required children when you set Type, under OutputGroupSettings, to HLS_GROUP_SETTINGS.

  • type' :: Maybe OutputGroupType

    Type of output group (File group, Apple HLS, DASH ISO, Microsoft Smooth Streaming, CMAF)

  • dashIsoGroupSettings :: Maybe DashIsoGroupSettings

    Settings related to your DASH output package. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/outputs-file-ABR.html. When you work directly in your JSON job specification, include this object and any required children when you set Type, under OutputGroupSettings, to DASH_ISO_GROUP_SETTINGS.

Instances

Instances details
Eq OutputGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupSettings

Read OutputGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupSettings

Show OutputGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupSettings

Generic OutputGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupSettings

Associated Types

type Rep OutputGroupSettings :: Type -> Type #

NFData OutputGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupSettings

Methods

rnf :: OutputGroupSettings -> () #

Hashable OutputGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupSettings

ToJSON OutputGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupSettings

FromJSON OutputGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupSettings

type Rep OutputGroupSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputGroupSettings

type Rep OutputGroupSettings = D1 ('MetaData "OutputGroupSettings" "Amazonka.MediaConvert.Types.OutputGroupSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "OutputGroupSettings'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "fileGroupSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe FileGroupSettings)) :*: (S1 ('MetaSel ('Just "cmafGroupSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CmafGroupSettings)) :*: S1 ('MetaSel ('Just "msSmoothGroupSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MsSmoothGroupSettings)))) :*: (S1 ('MetaSel ('Just "hlsGroupSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsGroupSettings)) :*: (S1 ('MetaSel ('Just "type'") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe OutputGroupType)) :*: S1 ('MetaSel ('Just "dashIsoGroupSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DashIsoGroupSettings))))))

newOutputGroupSettings :: OutputGroupSettings Source #

Create a value of OutputGroupSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:fileGroupSettings:OutputGroupSettings', outputGroupSettings_fileGroupSettings - Settings related to your File output group. MediaConvert uses this group of settings to generate a single standalone file, rather than a streaming package. When you work directly in your JSON job specification, include this object and any required children when you set Type, under OutputGroupSettings, to FILE_GROUP_SETTINGS.

$sel:cmafGroupSettings:OutputGroupSettings', outputGroupSettings_cmafGroupSettings - Settings related to your CMAF output package. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/outputs-file-ABR.html. When you work directly in your JSON job specification, include this object and any required children when you set Type, under OutputGroupSettings, to CMAF_GROUP_SETTINGS.

$sel:msSmoothGroupSettings:OutputGroupSettings', outputGroupSettings_msSmoothGroupSettings - Settings related to your Microsoft Smooth Streaming output package. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/outputs-file-ABR.html. When you work directly in your JSON job specification, include this object and any required children when you set Type, under OutputGroupSettings, to MS_SMOOTH_GROUP_SETTINGS.

$sel:hlsGroupSettings:OutputGroupSettings', outputGroupSettings_hlsGroupSettings - Settings related to your HLS output package. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/outputs-file-ABR.html. When you work directly in your JSON job specification, include this object and any required children when you set Type, under OutputGroupSettings, to HLS_GROUP_SETTINGS.

$sel:type':OutputGroupSettings', outputGroupSettings_type - Type of output group (File group, Apple HLS, DASH ISO, Microsoft Smooth Streaming, CMAF)

$sel:dashIsoGroupSettings:OutputGroupSettings', outputGroupSettings_dashIsoGroupSettings - Settings related to your DASH output package. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/outputs-file-ABR.html. When you work directly in your JSON job specification, include this object and any required children when you set Type, under OutputGroupSettings, to DASH_ISO_GROUP_SETTINGS.

outputGroupSettings_fileGroupSettings :: Lens' OutputGroupSettings (Maybe FileGroupSettings) Source #

Settings related to your File output group. MediaConvert uses this group of settings to generate a single standalone file, rather than a streaming package. When you work directly in your JSON job specification, include this object and any required children when you set Type, under OutputGroupSettings, to FILE_GROUP_SETTINGS.

outputGroupSettings_cmafGroupSettings :: Lens' OutputGroupSettings (Maybe CmafGroupSettings) Source #

Settings related to your CMAF output package. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/outputs-file-ABR.html. When you work directly in your JSON job specification, include this object and any required children when you set Type, under OutputGroupSettings, to CMAF_GROUP_SETTINGS.

outputGroupSettings_msSmoothGroupSettings :: Lens' OutputGroupSettings (Maybe MsSmoothGroupSettings) Source #

Settings related to your Microsoft Smooth Streaming output package. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/outputs-file-ABR.html. When you work directly in your JSON job specification, include this object and any required children when you set Type, under OutputGroupSettings, to MS_SMOOTH_GROUP_SETTINGS.

outputGroupSettings_hlsGroupSettings :: Lens' OutputGroupSettings (Maybe HlsGroupSettings) Source #

Settings related to your HLS output package. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/outputs-file-ABR.html. When you work directly in your JSON job specification, include this object and any required children when you set Type, under OutputGroupSettings, to HLS_GROUP_SETTINGS.

outputGroupSettings_type :: Lens' OutputGroupSettings (Maybe OutputGroupType) Source #

Type of output group (File group, Apple HLS, DASH ISO, Microsoft Smooth Streaming, CMAF)

outputGroupSettings_dashIsoGroupSettings :: Lens' OutputGroupSettings (Maybe DashIsoGroupSettings) Source #

Settings related to your DASH output package. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/outputs-file-ABR.html. When you work directly in your JSON job specification, include this object and any required children when you set Type, under OutputGroupSettings, to DASH_ISO_GROUP_SETTINGS.
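
A minimal sketch that selects the File group type and attaches the matching settings object. Both the pattern name OutputGroupType_FILE_GROUP_SETTINGS and the zero-argument newFileGroupSettings constructor are assumptions based on the naming conventions used elsewhere on this page; the (&) and (?~) setters are assumed to come from the lens package.

    import Amazonka.MediaConvert.Types
    import Control.Lens ((&), (?~))

    myFileGroup :: OutputGroupSettings
    myFileGroup =
      newOutputGroupSettings
        & outputGroupSettings_type              ?~ OutputGroupType_FILE_GROUP_SETTINGS  -- assumed pattern name
        & outputGroupSettings_fileGroupSettings ?~ newFileGroupSettings                 -- assumed constructor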

OutputSettings

data OutputSettings Source #

Specific settings for this type of output.

See: newOutputSettings smart constructor.

Constructors

OutputSettings' 

Fields

Instances

Instances details
Eq OutputSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSettings

Read OutputSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSettings

Show OutputSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSettings

Generic OutputSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSettings

Associated Types

type Rep OutputSettings :: Type -> Type #

NFData OutputSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSettings

Methods

rnf :: OutputSettings -> () #

Hashable OutputSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSettings

ToJSON OutputSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSettings

FromJSON OutputSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSettings

type Rep OutputSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.OutputSettings

type Rep OutputSettings = D1 ('MetaData "OutputSettings" "Amazonka.MediaConvert.Types.OutputSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "OutputSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "hlsSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe HlsSettings))))

newOutputSettings :: OutputSettings Source #

Create a value of OutputSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:hlsSettings:OutputSettings', outputSettings_hlsSettings - Settings for HLS output groups

PartnerWatermarking

data PartnerWatermarking Source #

If you work with a third party video watermarking partner, use the group of settings that correspond with your watermarking partner to include watermarks in your output.

See: newPartnerWatermarking smart constructor.

Constructors

PartnerWatermarking' 

Fields

Instances

Instances details
Eq PartnerWatermarking Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PartnerWatermarking

Read PartnerWatermarking Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PartnerWatermarking

Show PartnerWatermarking Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PartnerWatermarking

Generic PartnerWatermarking Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PartnerWatermarking

Associated Types

type Rep PartnerWatermarking :: Type -> Type #

NFData PartnerWatermarking Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PartnerWatermarking

Methods

rnf :: PartnerWatermarking -> () #

Hashable PartnerWatermarking Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PartnerWatermarking

ToJSON PartnerWatermarking Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PartnerWatermarking

FromJSON PartnerWatermarking Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PartnerWatermarking

type Rep PartnerWatermarking Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PartnerWatermarking

type Rep PartnerWatermarking = D1 ('MetaData "PartnerWatermarking" "Amazonka.MediaConvert.Types.PartnerWatermarking" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "PartnerWatermarking'" 'PrefixI 'True) (S1 ('MetaSel ('Just "nexguardFileMarkerSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe NexGuardFileMarkerSettings))))

newPartnerWatermarking :: PartnerWatermarking Source #

Create a value of PartnerWatermarking with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:nexguardFileMarkerSettings:PartnerWatermarking', partnerWatermarking_nexguardFileMarkerSettings - For forensic video watermarking, MediaConvert supports Nagra NexGuard File Marker watermarking. MediaConvert supports both PreRelease Content (NGPR/G2) and OTT Streaming workflows.

partnerWatermarking_nexguardFileMarkerSettings :: Lens' PartnerWatermarking (Maybe NexGuardFileMarkerSettings) Source #

For forensic video watermarking, MediaConvert supports Nagra NexGuard File Marker watermarking. MediaConvert supports both PreRelease Content (NGPR/G2) and OTT Streaming workflows.
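
A minimal sketch of how the constructor and lens above fit together; it assumes a NexGuardFileMarkerSettings value named nexguardCfg is configured elsewhere:

  import Amazonka.MediaConvert.Types
    (NexGuardFileMarkerSettings, PartnerWatermarking,
     newPartnerWatermarking, partnerWatermarking_nexguardFileMarkerSettings)
  import Control.Lens ((&), (?~))

  -- Attach NexGuard File Marker settings to a fresh PartnerWatermarking value.
  mkPartnerWatermarking :: NexGuardFileMarkerSettings -> PartnerWatermarking
  mkPartnerWatermarking nexguardCfg =
    newPartnerWatermarking
      & partnerWatermarking_nexguardFileMarkerSettings ?~ nexguardCfg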

Policy

data Policy Source #

A policy configures behavior that you allow or disallow for your account. For information about MediaConvert policies, see the user guide at http://docs.aws.amazon.com/mediaconvert/latest/ug/what-is.html

See: newPolicy smart constructor.

Constructors

Policy' 

Fields

Instances

Instances details
Eq Policy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Policy

Methods

(==) :: Policy -> Policy -> Bool #

(/=) :: Policy -> Policy -> Bool #

Read Policy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Policy

Show Policy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Policy

Generic Policy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Policy

Associated Types

type Rep Policy :: Type -> Type #

Methods

from :: Policy -> Rep Policy x #

to :: Rep Policy x -> Policy #

NFData Policy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Policy

Methods

rnf :: Policy -> () #

Hashable Policy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Policy

Methods

hashWithSalt :: Int -> Policy -> Int #

hash :: Policy -> Int #

ToJSON Policy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Policy

FromJSON Policy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Policy

type Rep Policy Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Policy

type Rep Policy = D1 ('MetaData "Policy" "Amazonka.MediaConvert.Types.Policy" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "Policy'" 'PrefixI 'True) (S1 ('MetaSel ('Just "s3Inputs") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe InputPolicy)) :*: (S1 ('MetaSel ('Just "httpInputs") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe InputPolicy)) :*: S1 ('MetaSel ('Just "httpsInputs") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe InputPolicy)))))

newPolicy :: Policy Source #

Create a value of Policy with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:s3Inputs:Policy', policy_s3Inputs - Allow or disallow jobs that specify Amazon S3 inputs.

$sel:httpInputs:Policy', policy_httpInputs - Allow or disallow jobs that specify HTTP inputs.

$sel:httpsInputs:Policy', policy_httpsInputs - Allow or disallow jobs that specify HTTPS inputs.

policy_s3Inputs :: Lens' Policy (Maybe InputPolicy) Source #

Allow or disallow jobs that specify Amazon S3 inputs.

policy_httpInputs :: Lens' Policy (Maybe InputPolicy) Source #

Allow or disallow jobs that specify HTTP inputs.

policy_httpsInputs :: Lens' Policy (Maybe InputPolicy) Source #

Allow or disallow jobs that specify HTTPS inputs.
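
As a sketch of how these lenses are typically combined, the value below allows S3 and HTTPS inputs while disallowing plain HTTP inputs. The pattern names InputPolicy_ALLOWED and InputPolicy_DISALLOWED follow this package's usual naming convention for enum wrappers and are an assumption here:

  import Amazonka.MediaConvert.Types
    (InputPolicy (..), Policy, newPolicy,
     policy_httpInputs, policy_httpsInputs, policy_s3Inputs)
  import Control.Lens ((&), (?~))

  -- Allow S3 and HTTPS job inputs, disallow plain HTTP inputs.
  restrictivePolicy :: Policy
  restrictivePolicy =
    newPolicy
      & policy_s3Inputs    ?~ InputPolicy_ALLOWED
      & policy_httpsInputs ?~ InputPolicy_ALLOWED
      & policy_httpInputs  ?~ InputPolicy_DISALLOWED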

Preset

data Preset Source #

A preset is a collection of preconfigured media conversion settings that you want MediaConvert to apply to the output during the conversion process.

See: newPreset smart constructor.

Constructors

Preset' 

Fields

  • lastUpdated :: Maybe POSIX

    The timestamp in epoch seconds when the preset was last updated.

  • arn :: Maybe Text

    An identifier for this resource that is unique within all of AWS.

  • createdAt :: Maybe POSIX

    The timestamp in epoch seconds for preset creation.

  • category :: Maybe Text

    An optional category you create to organize your presets.

  • type' :: Maybe Type

    A preset can be of two types: system or custom. System (built-in) presets can't be modified or deleted by the user.

  • description :: Maybe Text

    An optional description you create for each preset.

  • settings :: PresetSettings

    Settings for the preset

  • name :: Text

    A name you create for each preset. Each name must be unique within your account.

Instances

Instances details
Eq Preset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Preset

Methods

(==) :: Preset -> Preset -> Bool #

(/=) :: Preset -> Preset -> Bool #

Read Preset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Preset

Show Preset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Preset

Generic Preset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Preset

Associated Types

type Rep Preset :: Type -> Type #

Methods

from :: Preset -> Rep Preset x #

to :: Rep Preset x -> Preset #

NFData Preset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Preset

Methods

rnf :: Preset -> () #

Hashable Preset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Preset

Methods

hashWithSalt :: Int -> Preset -> Int #

hash :: Preset -> Int #

FromJSON Preset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Preset

type Rep Preset Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Preset

newPreset Source #

Create a value of Preset with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:lastUpdated:Preset', preset_lastUpdated - The timestamp in epoch seconds when the preset was last updated.

$sel:arn:Preset', preset_arn - An identifier for this resource that is unique within all of AWS.

$sel:createdAt:Preset', preset_createdAt - The timestamp in epoch seconds for preset creation.

$sel:category:Preset', preset_category - An optional category you create to organize your presets.

$sel:type':Preset', preset_type - A preset can be of two types: system or custom. System (built-in) presets can't be modified or deleted by the user.

$sel:description:Preset', preset_description - An optional description you create for each preset.

$sel:settings:Preset', preset_settings - Settings for the preset

$sel:name:Preset', preset_name - A name you create for each preset. Each name must be unique within your account.

preset_lastUpdated :: Lens' Preset (Maybe UTCTime) Source #

The timestamp in epoch seconds when the preset was last updated.

preset_arn :: Lens' Preset (Maybe Text) Source #

An identifier for this resource that is unique within all of AWS.

preset_createdAt :: Lens' Preset (Maybe UTCTime) Source #

The timestamp in epoch seconds for preset creation.

preset_category :: Lens' Preset (Maybe Text) Source #

An optional category you create to organize your presets.

preset_type :: Lens' Preset (Maybe Type) Source #

A preset can be of two types: system or custom. System (built-in) presets can't be modified or deleted by the user.

preset_description :: Lens' Preset (Maybe Text) Source #

An optional description you create for each preset.

preset_name :: Lens' Preset Text Source #

A name you create for each preset. Each name must be unique within your account.
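
Because Preset values are returned by the service rather than built by callers, the lenses above are mostly used for reading. A small sketch (presetLabel is a hypothetical helper, not part of this package):

  import qualified Data.Text as T
  import Amazonka.MediaConvert.Types (Preset, preset_category, preset_name)
  import Control.Lens ((^.))
  import Data.Maybe (fromMaybe)

  -- Render "category/name", falling back when no category was set.
  presetLabel :: Preset -> T.Text
  presetLabel p =
    fromMaybe (T.pack "uncategorized") (p ^. preset_category)
      <> T.pack "/"
      <> (p ^. preset_name)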

PresetSettings

data PresetSettings Source #

Settings for the preset

See: newPresetSettings smart constructor.

Constructors

PresetSettings' 

Fields

Instances

Instances details
Eq PresetSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetSettings

Read PresetSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetSettings

Show PresetSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetSettings

Generic PresetSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetSettings

Associated Types

type Rep PresetSettings :: Type -> Type #

NFData PresetSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetSettings

Methods

rnf :: PresetSettings -> () #

Hashable PresetSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetSettings

ToJSON PresetSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetSettings

FromJSON PresetSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetSettings

type Rep PresetSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.PresetSettings

type Rep PresetSettings = D1 ('MetaData "PresetSettings" "Amazonka.MediaConvert.Types.PresetSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "PresetSettings'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "captionDescriptions") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [CaptionDescriptionPreset])) :*: S1 ('MetaSel ('Just "videoDescription") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe VideoDescription))) :*: (S1 ('MetaSel ('Just "containerSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ContainerSettings)) :*: S1 ('MetaSel ('Just "audioDescriptions") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [AudioDescription])))))

newPresetSettings :: PresetSettings Source #

Create a value of PresetSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:captionDescriptions:PresetSettings', presetSettings_captionDescriptions - This object holds groups of settings related to captions for one output. For each output that has captions, include one instance of CaptionDescriptions.

$sel:videoDescription:PresetSettings', presetSettings_videoDescription - VideoDescription contains a group of video encoding settings. The specific video settings depend on the video codec that you choose for the codec property. Include one instance of VideoDescription per output.

$sel:containerSettings:PresetSettings', presetSettings_containerSettings - Container-specific settings.

$sel:audioDescriptions:PresetSettings', presetSettings_audioDescriptions - (AudioDescriptions) contains groups of audio encoding settings organized by audio codec. Include one instance of (AudioDescriptions) per output. (AudioDescriptions) can contain multiple groups of encoding settings.

presetSettings_captionDescriptions :: Lens' PresetSettings (Maybe [CaptionDescriptionPreset]) Source #

This object holds groups of settings related to captions for one output. For each output that has captions, include one instance of CaptionDescriptions.

presetSettings_videoDescription :: Lens' PresetSettings (Maybe VideoDescription) Source #

VideoDescription contains a group of video encoding settings. The specific video settings depend on the video codec that you choose for the codec property. Include one instance of VideoDescription per output.

presetSettings_audioDescriptions :: Lens' PresetSettings (Maybe [AudioDescription]) Source #

(AudioDescriptions) contains groups of audio encoding settings organized by audio codec. Include one instance of (AudioDescriptions) per output. (AudioDescriptions) can contain multiple groups of encoding settings.
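
A minimal sketch of assembling PresetSettings from pieces assumed to be built elsewhere (videoDesc and audioDescs), using only the constructor and lenses documented above:

  import Amazonka.MediaConvert.Types
    (AudioDescription, PresetSettings, VideoDescription, newPresetSettings,
     presetSettings_audioDescriptions, presetSettings_videoDescription)
  import Control.Lens ((&), (?~))

  -- Combine one video description and a list of audio descriptions.
  mkPresetSettings :: VideoDescription -> [AudioDescription] -> PresetSettings
  mkPresetSettings videoDesc audioDescs =
    newPresetSettings
      & presetSettings_videoDescription  ?~ videoDesc
      & presetSettings_audioDescriptions ?~ audioDescs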

ProresSettings

data ProresSettings Source #

Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value PRORES.

See: newProresSettings smart constructor.

Constructors

ProresSettings' 

Fields

  • slowPal :: Maybe ProresSlowPal

    Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.

  • parNumerator :: Maybe Natural

    Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parNumerator is 40.

  • telecine :: Maybe ProresTelecine

    When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard telecine (HARD) to create a smoother picture. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 without doing anything with the field polarity to create a smoother picture.

  • interlaceMode :: Maybe ProresInterlaceMode

    Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE) to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first polarity, depending on which of the Follow options you choose.

  • parControl :: Maybe ProresParControl

    Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.

  • scanTypeConversionMode :: Maybe ProresScanTypeConversionMode

    Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).

  • codecProfile :: Maybe ProresCodecProfile

    Use Profile (ProResCodecProfile) to specify the type of Apple ProRes codec to use for this output.

  • framerateDenominator :: Maybe Natural

    When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

  • framerateConversionAlgorithm :: Maybe ProresFramerateConversionAlgorithm

    Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

  • framerateControl :: Maybe ProresFramerateControl

    If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

  • framerateNumerator :: Maybe Natural

    When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

  • chromaSampling :: Maybe ProresChromaSampling

    This setting applies only to ProRes 4444 and ProRes 4444 XQ outputs that you create from inputs that use 4:4:4 chroma sampling. Set Preserve 4:4:4 sampling (PRESERVE_444_SAMPLING) to allow outputs to also use 4:4:4 chroma sampling. You must specify a value for this setting when your output codec profile supports 4:4:4 chroma sampling. Related Settings: When you set Chroma sampling to Preserve 4:4:4 sampling (PRESERVE_444_SAMPLING), you must choose an output codec profile that supports 4:4:4 chroma sampling. These values for Profile (CodecProfile) support 4:4:4 chroma sampling: Apple ProRes 4444 (APPLE_PRORES_4444) or Apple ProRes 4444 XQ (APPLE_PRORES_4444_XQ). When you set Chroma sampling to Preserve 4:4:4 sampling, you must disable all video preprocessors except for Nexguard file marker (PartnerWatermarking). When you set Chroma sampling to Preserve 4:4:4 sampling and use framerate conversion, you must set Frame rate conversion algorithm (FramerateConversionAlgorithm) to Drop duplicate (DUPLICATE_DROP).

  • parDenominator :: Maybe Natural

    Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parDenominator is 33.

Instances

Instances details
Eq ProresSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSettings

Read ProresSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSettings

Show ProresSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSettings

Generic ProresSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSettings

Associated Types

type Rep ProresSettings :: Type -> Type #

NFData ProresSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSettings

Methods

rnf :: ProresSettings -> () #

Hashable ProresSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSettings

ToJSON ProresSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSettings

FromJSON ProresSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSettings

type Rep ProresSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ProresSettings

type Rep ProresSettings = D1 ('MetaData "ProresSettings" "Amazonka.MediaConvert.Types.ProresSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "ProresSettings'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "slowPal") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ProresSlowPal)) :*: (S1 ('MetaSel ('Just "parNumerator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "telecine") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ProresTelecine)))) :*: (S1 ('MetaSel ('Just "interlaceMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ProresInterlaceMode)) :*: (S1 ('MetaSel ('Just "parControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ProresParControl)) :*: S1 ('MetaSel ('Just "scanTypeConversionMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ProresScanTypeConversionMode))))) :*: ((S1 ('MetaSel ('Just "codecProfile") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ProresCodecProfile)) :*: (S1 ('MetaSel ('Just "framerateDenominator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "framerateConversionAlgorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ProresFramerateConversionAlgorithm)))) :*: ((S1 ('MetaSel ('Just "framerateControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ProresFramerateControl)) :*: S1 ('MetaSel ('Just "framerateNumerator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "chromaSampling") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ProresChromaSampling)) :*: S1 ('MetaSel ('Just "parDenominator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))))

newProresSettings :: ProresSettings Source #

Create a value of ProresSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:slowPal:ProresSettings', proresSettings_slowPal - Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.

$sel:parNumerator:ProresSettings', proresSettings_parNumerator - Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parNumerator is 40.

$sel:telecine:ProresSettings', proresSettings_telecine - When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard telecine (HARD) to create a smoother picture. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 without doing anything with the field polarity to create a smoother picture.

$sel:interlaceMode:ProresSettings', proresSettings_interlaceMode - Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE) to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first polarity, depending on which of the Follow options you choose.

$sel:parControl:ProresSettings', proresSettings_parControl - Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.

$sel:scanTypeConversionMode:ProresSettings', proresSettings_scanTypeConversionMode - Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).

$sel:codecProfile:ProresSettings', proresSettings_codecProfile - Use Profile (ProResCodecProfile) to specify the type of Apple ProRes codec to use for this output.

$sel:framerateDenominator:ProresSettings', proresSettings_framerateDenominator - When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

$sel:framerateConversionAlgorithm:ProresSettings', proresSettings_framerateConversionAlgorithm - Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

$sel:framerateControl:ProresSettings', proresSettings_framerateControl - If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

$sel:framerateNumerator:ProresSettings', proresSettings_framerateNumerator - When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

$sel:chromaSampling:ProresSettings', proresSettings_chromaSampling - This setting applies only to ProRes 4444 and ProRes 4444 XQ outputs that you create from inputs that use 4:4:4 chroma sampling. Set Preserve 4:4:4 sampling (PRESERVE_444_SAMPLING) to allow outputs to also use 4:4:4 chroma sampling. You must specify a value for this setting when your output codec profile supports 4:4:4 chroma sampling. Related Settings: When you set Chroma sampling to Preserve 4:4:4 sampling (PRESERVE_444_SAMPLING), you must choose an output codec profile that supports 4:4:4 chroma sampling. These values for Profile (CodecProfile) support 4:4:4 chroma sampling: Apple ProRes 4444 (APPLE_PRORES_4444) or Apple ProRes 4444 XQ (APPLE_PRORES_4444_XQ). When you set Chroma sampling to Preserve 4:4:4 sampling, you must disable all video preprocessors except for Nexguard file marker (PartnerWatermarking). When you set Chroma sampling to Preserve 4:4:4 sampling and use framerate conversion, you must set Frame rate conversion algorithm (FramerateConversionAlgorithm) to Drop duplicate (DUPLICATE_DROP).

$sel:parDenominator:ProresSettings', proresSettings_parDenominator - Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parDenominator is 33.

proresSettings_slowPal :: Lens' ProresSettings (Maybe ProresSlowPal) Source #

Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output. When you enable slow PAL, MediaConvert relabels the video frames to 25 fps and resamples your audio to keep it synchronized with the video. Note that enabling this setting will slightly reduce the duration of your video. Required settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.

proresSettings_parNumerator :: Lens' ProresSettings (Maybe Natural) Source #

Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parNumerator is 40.

proresSettings_telecine :: Lens' ProresSettings (Maybe ProresTelecine) Source #

When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard telecine (HARD) to create a smoother picture. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 without doing anything with the field polarity to create a smoother picture.

proresSettings_interlaceMode :: Lens' ProresSettings (Maybe ProresInterlaceMode) Source #

Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE) to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first polarity, depending on which of the Follow options you choose.

proresSettings_parControl :: Lens' ProresSettings (Maybe ProresParControl) Source #

Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.

proresSettings_scanTypeConversionMode :: Lens' ProresSettings (Maybe ProresScanTypeConversionMode) Source #

Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).

proresSettings_codecProfile :: Lens' ProresSettings (Maybe ProresCodecProfile) Source #

Use Profile (ProResCodecProfile) to specify the type of Apple ProRes codec to use for this output.

proresSettings_framerateDenominator :: Lens' ProresSettings (Maybe Natural) Source #

When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

proresSettings_framerateConversionAlgorithm :: Lens' ProresSettings (Maybe ProresFramerateConversionAlgorithm) Source #

Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

proresSettings_framerateControl :: Lens' ProresSettings (Maybe ProresFramerateControl) Source #

If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

proresSettings_framerateNumerator :: Lens' ProresSettings (Maybe Natural) Source #

When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

proresSettings_chromaSampling :: Lens' ProresSettings (Maybe ProresChromaSampling) Source #

This setting applies only to ProRes 4444 and ProRes 4444 XQ outputs that you create from inputs that use 4:4:4 chroma sampling. Set Preserve 4:4:4 sampling (PRESERVE_444_SAMPLING) to allow outputs to also use 4:4:4 chroma sampling. You must specify a value for this setting when your output codec profile supports 4:4:4 chroma sampling. Related Settings: When you set Chroma sampling to Preserve 4:4:4 sampling (PRESERVE_444_SAMPLING), you must choose an output codec profile that supports 4:4:4 chroma sampling. These values for Profile (CodecProfile) support 4:4:4 chroma sampling: Apple ProRes 4444 (APPLE_PRORES_4444) or Apple ProRes 4444 XQ (APPLE_PRORES_4444_XQ). When you set Chroma sampling to Preserve 4:4:4 sampling, you must disable all video preprocessors except for Nexguard file marker (PartnerWatermarking). When you set Chroma sampling to Preserve 4:4:4 sampling and use framerate conversion, you must set Frame rate conversion algorithm (FramerateConversionAlgorithm) to Drop duplicate (DUPLICATE_DROP).

proresSettings_parDenominator :: Lens' ProresSettings (Maybe Natural) Source #

Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parDenominator is 33.
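
As a sketch of the fractional frame rate workflow described above (23.976 fps expressed as 24000/1001 with framerateControl set to SPECIFIED); the pattern name ProresFramerateControl_SPECIFIED follows the package's enum naming convention and is an assumption here:

  import Amazonka.MediaConvert.Types
    (ProresFramerateControl (..), ProresSettings, newProresSettings,
     proresSettings_framerateControl, proresSettings_framerateDenominator,
     proresSettings_framerateNumerator)
  import Control.Lens ((&), (?~))

  -- 23.976 fps output, specified as the fraction 24000/1001.
  prores23_976 :: ProresSettings
  prores23_976 =
    newProresSettings
      & proresSettings_framerateControl     ?~ ProresFramerateControl_SPECIFIED
      & proresSettings_framerateNumerator   ?~ 24000
      & proresSettings_framerateDenominator ?~ 1001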

Queue

data Queue Source #

You can use queues to manage the resources that are available to your AWS account for running multiple transcoding jobs at the same time. If you don't specify a queue, the service sends all jobs through the default queue. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/working-with-queues.html.

See: newQueue smart constructor.

Constructors

Queue' 

Fields

  • status :: Maybe QueueStatus

    Queues can be ACTIVE or PAUSED. If you pause a queue, the service won't begin processing jobs in that queue. Jobs that are running when you pause the queue continue to run until they finish or result in an error.

  • lastUpdated :: Maybe POSIX

    The timestamp in epoch seconds for when you most recently updated the queue.

  • arn :: Maybe Text

    An identifier for this resource that is unique within all of AWS.

  • createdAt :: Maybe POSIX

    The timestamp in epoch seconds for when you created the queue.

  • reservationPlan :: Maybe ReservationPlan

    Details about the pricing plan for your reserved queue. Required for reserved queues and not applicable to on-demand queues.

  • pricingPlan :: Maybe PricingPlan

    Specifies whether the pricing plan for the queue is on-demand or reserved. For on-demand, you pay per minute, billed in increments of .01 minute. For reserved, you pay for the transcoding capacity of the entire queue, regardless of how much or how little you use it. Reserved pricing requires a 12-month commitment.

  • submittedJobsCount :: Maybe Int

    The estimated number of jobs with a SUBMITTED status.

  • progressingJobsCount :: Maybe Int

    The estimated number of jobs with a PROGRESSING status.

  • type' :: Maybe Type

    Specifies whether this on-demand queue is system or custom. System queues are built in. You can't modify or delete system queues. You can create and modify custom queues.

  • description :: Maybe Text

    An optional description that you create for each queue.

  • name :: Text

    A name that you create for each queue. Each name must be unique within your account.

Instances

Instances details
Eq Queue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Queue

Methods

(==) :: Queue -> Queue -> Bool #

(/=) :: Queue -> Queue -> Bool #

Read Queue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Queue

Show Queue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Queue

Methods

showsPrec :: Int -> Queue -> ShowS #

show :: Queue -> String #

showList :: [Queue] -> ShowS #

Generic Queue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Queue

Associated Types

type Rep Queue :: Type -> Type #

Methods

from :: Queue -> Rep Queue x #

to :: Rep Queue x -> Queue #

NFData Queue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Queue

Methods

rnf :: Queue -> () #

Hashable Queue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Queue

Methods

hashWithSalt :: Int -> Queue -> Int #

hash :: Queue -> Int #

FromJSON Queue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Queue

type Rep Queue Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Queue

newQueue Source #

Arguments

:: Text

$sel:name:Queue'

-> Queue 

Create a value of Queue with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:status:Queue', queue_status - Queues can be ACTIVE or PAUSED. If you pause a queue, the service won't begin processing jobs in that queue. Jobs that are running when you pause the queue continue to run until they finish or result in an error.

$sel:lastUpdated:Queue', queue_lastUpdated - The timestamp in epoch seconds for when you most recently updated the queue.

$sel:arn:Queue', queue_arn - An identifier for this resource that is unique within all of AWS.

$sel:createdAt:Queue', queue_createdAt - The timestamp in epoch seconds for when you created the queue.

$sel:reservationPlan:Queue', queue_reservationPlan - Details about the pricing plan for your reserved queue. Required for reserved queues and not applicable to on-demand queues.

$sel:pricingPlan:Queue', queue_pricingPlan - Specifies whether the pricing plan for the queue is on-demand or reserved. For on-demand, you pay per minute, billed in increments of .01 minute. For reserved, you pay for the transcoding capacity of the entire queue, regardless of how much or how little you use it. Reserved pricing requires a 12-month commitment.

$sel:submittedJobsCount:Queue', queue_submittedJobsCount - The estimated number of jobs with a SUBMITTED status.

$sel:progressingJobsCount:Queue', queue_progressingJobsCount - The estimated number of jobs with a PROGRESSING status.

$sel:type':Queue', queue_type - Specifies whether this on-demand queue is system or custom. System queues are built in. You can't modify or delete system queues. You can create and modify custom queues.

$sel:description:Queue', queue_description - An optional description that you create for each queue.

$sel:name:Queue', queue_name - A name that you create for each queue. Each name must be unique within your account.

queue_status :: Lens' Queue (Maybe QueueStatus) Source #

Queues can be ACTIVE or PAUSED. If you pause a queue, the service won't begin processing jobs in that queue. Jobs that are running when you pause the queue continue to run until they finish or result in an error.

queue_lastUpdated :: Lens' Queue (Maybe UTCTime) Source #

The timestamp in epoch seconds for when you most recently updated the queue.

queue_arn :: Lens' Queue (Maybe Text) Source #

An identifier for this resource that is unique within all of AWS.

queue_createdAt :: Lens' Queue (Maybe UTCTime) Source #

The timestamp in epoch seconds for when you created the queue.

queue_reservationPlan :: Lens' Queue (Maybe ReservationPlan) Source #

Details about the pricing plan for your reserved queue. Required for reserved queues and not applicable to on-demand queues.

queue_pricingPlan :: Lens' Queue (Maybe PricingPlan) Source #

Specifies whether the pricing plan for the queue is on-demand or reserved. For on-demand, you pay per minute, billed in increments of .01 minute. For reserved, you pay for the transcoding capacity of the entire queue, regardless of how much or how little you use it. Reserved pricing requires a 12-month commitment.

queue_submittedJobsCount :: Lens' Queue (Maybe Int) Source #

The estimated number of jobs with a SUBMITTED status.

queue_progressingJobsCount :: Lens' Queue (Maybe Int) Source #

The estimated number of jobs with a PROGRESSING status.

queue_type :: Lens' Queue (Maybe Type) Source #

Specifies whether this on-demand queue is system or custom. System queues are built in. You can't modify or delete system queues. You can create and modify custom queues.

queue_description :: Lens' Queue (Maybe Text) Source #

An optional description that you create for each queue.

queue_name :: Lens' Queue Text Source #

A name that you create for each queue. Each name must be unique within your account.
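
Queue values come back from Get/List responses, so the lenses above are mostly used for reading. A small sketch (queueBacklog is a hypothetical helper, not part of this package):

  import Amazonka.MediaConvert.Types
    (Queue, queue_progressingJobsCount, queue_submittedJobsCount)
  import Control.Lens ((^.))
  import Data.Maybe (fromMaybe)

  -- Jobs that are either waiting in the queue or currently transcoding.
  queueBacklog :: Queue -> Int
  queueBacklog q =
    fromMaybe 0 (q ^. queue_submittedJobsCount)
      + fromMaybe 0 (q ^. queue_progressingJobsCount)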

QueueTransition

data QueueTransition Source #

Description of the source and destination queues between which the job has moved, along with the timestamp of the move

See: newQueueTransition smart constructor.

Constructors

QueueTransition' 

Fields

Instances

Instances details
Eq QueueTransition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueTransition

Read QueueTransition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueTransition

Show QueueTransition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueTransition

Generic QueueTransition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueTransition

Associated Types

type Rep QueueTransition :: Type -> Type #

NFData QueueTransition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueTransition

Methods

rnf :: QueueTransition -> () #

Hashable QueueTransition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueTransition

FromJSON QueueTransition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueTransition

type Rep QueueTransition Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.QueueTransition

type Rep QueueTransition = D1 ('MetaData "QueueTransition" "Amazonka.MediaConvert.Types.QueueTransition" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "QueueTransition'" 'PrefixI 'True) (S1 ('MetaSel ('Just "sourceQueue") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "destinationQueue") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "timestamp") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)))))

newQueueTransition :: QueueTransition Source #

Create a value of QueueTransition with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:sourceQueue:QueueTransition', queueTransition_sourceQueue - The queue that the job was on before the transition.

$sel:destinationQueue:QueueTransition', queueTransition_destinationQueue - The queue that the job was on after the transition.

$sel:timestamp:QueueTransition', queueTransition_timestamp - The time, in Unix epoch format, that the job moved from the source queue to the destination queue.

queueTransition_sourceQueue :: Lens' QueueTransition (Maybe Text) Source #

The queue that the job was on before the transition.

queueTransition_destinationQueue :: Lens' QueueTransition (Maybe Text) Source #

The queue that the job was on after the transition.

queueTransition_timestamp :: Lens' QueueTransition (Maybe UTCTime) Source #

The time, in Unix epoch format, that the job moved from the source queue to the destination queue.
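
A quick sketch of formatting a QueueTransition for a log line with the lenses above (describeTransition is a hypothetical helper, not part of this package):

  import qualified Data.Text as T
  import Amazonka.MediaConvert.Types
    (QueueTransition, queueTransition_destinationQueue, queueTransition_sourceQueue)
  import Control.Lens ((^.))
  import Data.Maybe (fromMaybe)

  -- "source-queue -> destination-queue", with "?" when a side is missing.
  describeTransition :: QueueTransition -> T.Text
  describeTransition qt =
    fromMaybe (T.pack "?") (qt ^. queueTransition_sourceQueue)
      <> T.pack " -> "
      <> fromMaybe (T.pack "?") (qt ^. queueTransition_destinationQueue)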

Rectangle

data Rectangle Source #

Use Rectangle to identify a specific area of the video frame.

See: newRectangle smart constructor.

Constructors

Rectangle' 

Fields

  • height :: Maybe Natural

    Height of rectangle in pixels. Specify only even numbers.

  • width :: Maybe Natural

    Width of rectangle in pixels. Specify only even numbers.

  • x :: Maybe Natural

    The distance, in pixels, between the rectangle and the left edge of the video frame. Specify only even numbers.

  • y :: Maybe Natural

    The distance, in pixels, between the rectangle and the top edge of the video frame. Specify only even numbers.

Instances

Instances details
Eq Rectangle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Rectangle

Read Rectangle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Rectangle

Show Rectangle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Rectangle

Generic Rectangle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Rectangle

Associated Types

type Rep Rectangle :: Type -> Type #

NFData Rectangle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Rectangle

Methods

rnf :: Rectangle -> () #

Hashable Rectangle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Rectangle

ToJSON Rectangle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Rectangle

FromJSON Rectangle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Rectangle

type Rep Rectangle Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Rectangle

type Rep Rectangle = D1 ('MetaData "Rectangle" "Amazonka.MediaConvert.Types.Rectangle" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "Rectangle'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "height") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "width") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "x") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "y") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))

newRectangle :: Rectangle Source #

Create a value of Rectangle with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:height:Rectangle', rectangle_height - Height of rectangle in pixels. Specify only even numbers.

$sel:width:Rectangle', rectangle_width - Width of rectangle in pixels. Specify only even numbers.

$sel:x:Rectangle', rectangle_x - The distance, in pixels, between the rectangle and the left edge of the video frame. Specify only even numbers.

$sel:y:Rectangle', rectangle_y - The distance, in pixels, between the rectangle and the top edge of the video frame. Specify only even numbers.

rectangle_height :: Lens' Rectangle (Maybe Natural) Source #

Height of rectangle in pixels. Specify only even numbers.

rectangle_width :: Lens' Rectangle (Maybe Natural) Source #

Width of rectangle in pixels. Specify only even numbers.

rectangle_x :: Lens' Rectangle (Maybe Natural) Source #

The distance, in pixels, between the rectangle and the left edge of the video frame. Specify only even numbers.

rectangle_y :: Lens' Rectangle (Maybe Natural) Source #

The distance, in pixels, between the rectangle and the top edge of the video frame. Specify only even numbers.
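
A minimal sketch of a 1280x720 crop region offset 10 pixels from the left and top edges; note that every value is even, as the field descriptions above require:

  import Amazonka.MediaConvert.Types
    (Rectangle, newRectangle, rectangle_height, rectangle_width, rectangle_x, rectangle_y)
  import Control.Lens ((&), (?~))

  -- A 1280x720 region whose top-left corner sits at (10, 10).
  cropRegion :: Rectangle
  cropRegion =
    newRectangle
      & rectangle_width  ?~ 1280
      & rectangle_height ?~ 720
      & rectangle_x      ?~ 10
      & rectangle_y      ?~ 10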

RemixSettings

data RemixSettings Source #

Use Manual audio remixing (RemixSettings) to adjust audio levels for each audio channel in each output of your job. With audio remixing, you can output more or fewer audio channels than your input audio source provides.

See: newRemixSettings smart constructor.

Constructors

RemixSettings' 

Fields

  • channelMapping :: Maybe ChannelMapping

    Channel mapping (ChannelMapping) contains the group of fields that hold the remixing value for each channel, in dB. Specify remix values to indicate how much of the content from your input audio channel you want in your output audio channels. Each instance of the InputChannels or InputChannelsFineTune array specifies these values for one output channel. Use one instance of this array for each output channel. In the console, each array corresponds to a column in the graphical depiction of the mapping matrix. The rows of the graphical matrix correspond to input channels. Valid values are within the range from -60 (mute) through 6. A setting of 0 passes the input channel unchanged to the output channel (no attenuation or amplification). Use InputChannels or InputChannelsFineTune to specify your remix values. Don't use both.

  • channelsIn :: Maybe Natural

    Specify the number of audio channels from your input that you want to use in your output. With remixing, you might combine or split the data in these channels, so the number of channels in your final output might be different. If you are doing both input channel mapping and output channel mapping, the number of output channels in your input mapping must be the same as the number of input channels in your output mapping.

  • channelsOut :: Maybe Natural

    Specify the number of channels in this output after remixing. Valid values: 1, 2, 4, 6, 8... 64. (1 and even numbers to 64.) If you are doing both input channel mapping and output channel mapping, the number of output channels in your input mapping must be the same as the number of input channels in your output mapping.

Instances

Instances details
Eq RemixSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RemixSettings

Read RemixSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RemixSettings

Show RemixSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RemixSettings

Generic RemixSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RemixSettings

Associated Types

type Rep RemixSettings :: Type -> Type #

NFData RemixSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RemixSettings

Methods

rnf :: RemixSettings -> () #

Hashable RemixSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RemixSettings

ToJSON RemixSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RemixSettings

FromJSON RemixSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RemixSettings

type Rep RemixSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.RemixSettings

type Rep RemixSettings = D1 ('MetaData "RemixSettings" "Amazonka.MediaConvert.Types.RemixSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "RemixSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "channelMapping") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ChannelMapping)) :*: (S1 ('MetaSel ('Just "channelsIn") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "channelsOut") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))

newRemixSettings :: RemixSettings Source #

Create a value of RemixSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:channelMapping:RemixSettings', remixSettings_channelMapping - Channel mapping (ChannelMapping) contains the group of fields that hold the remixing value for each channel, in dB. Specify remix values to indicate how much of the content from your input audio channel you want in your output audio channels. Each instance of the InputChannels or InputChannelsFineTune array specifies these values for one output channel. Use one instance of this array for each output channel. In the console, each array corresponds to a column in the graphical depiction of the mapping matrix. The rows of the graphical matrix correspond to input channels. Valid values are within the range from -60 (mute) through 6. A setting of 0 passes the input channel unchanged to the output channel (no attenuation or amplification). Use InputChannels or InputChannelsFineTune to specify your remix values. Don't use both.

$sel:channelsIn:RemixSettings', remixSettings_channelsIn - Specify the number of audio channels from your input that you want to use in your output. With remixing, you might combine or split the data in these channels, so the number of channels in your final output might be different. If you are doing both input channel mapping and output channel mapping, the number of output channels in your input mapping must be the same as the number of input channels in your output mapping.

$sel:channelsOut:RemixSettings', remixSettings_channelsOut - Specify the number of channels in this output after remixing. Valid values: 1, 2, 4, 6, 8... 64. (1 and even numbers to 64.) If you are doing both input channel mapping and output channel mapping, the number of output channels in your input mapping must be the same as the number of input channels in your output mapping.

remixSettings_channelMapping :: Lens' RemixSettings (Maybe ChannelMapping) Source #

Channel mapping (ChannelMapping) contains the group of fields that hold the remixing value for each channel, in dB. Specify remix values to indicate how much of the content from your input audio channel you want in your output audio channels. Each instance of the InputChannels or InputChannelsFineTune array specifies these values for one output channel. Use one instance of this array for each output channel. In the console, each array corresponds to a column in the graphical depiction of the mapping matrix. The rows of the graphical matrix correspond to input channels. Valid values are within the range from -60 (mute) through 6. A setting of 0 passes the input channel unchanged to the output channel (no attenuation or amplification). Use InputChannels or InputChannelsFineTune to specify your remix values. Don't use both.

remixSettings_channelsIn :: Lens' RemixSettings (Maybe Natural) Source #

Specify the number of audio channels from your input that you want to use in your output. With remixing, you might combine or split the data in these channels, so the number of channels in your final output might be different. If you are doing both input channel mapping and output channel mapping, the number of output channels in your input mapping must be the same as the number of input channels in your output mapping.

remixSettings_channelsOut :: Lens' RemixSettings (Maybe Natural) Source #

Specify the number of channels in this output after remixing. Valid values: 1, 2, 4, 6, 8... 64. (1 and even numbers to 64.) If you are doing both input channel mapping and output channel mapping, the number of output channels in your input mapping must be the same as the number of input channels in your output mapping.
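
A hedged sketch of a stereo-to-5.1 remix using the lenses above. The per-channel gain values would normally be supplied through remixSettings_channelMapping with a ChannelMapping value (documented elsewhere in this module), which is omitted here; the (&) and (?~) operators are assumed to come from the lens package:

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- Remix a 2-channel input into a 6-channel (5.1) output. A complete job
-- would also set remixSettings_channelMapping with gain values in dB.
stereoToFiveOne :: RemixSettings
stereoToFiveOne =
  newRemixSettings
    & remixSettings_channelsIn ?~ 2
    & remixSettings_channelsOut ?~ 6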

ReservationPlan

data ReservationPlan Source #

Details about the pricing plan for your reserved queue. Required for reserved queues and not applicable to on-demand queues.

See: newReservationPlan smart constructor.

Constructors

ReservationPlan' 

Fields

  • status :: Maybe ReservationPlanStatus

    Specifies whether the pricing plan for your reserved queue is ACTIVE or EXPIRED.

  • expiresAt :: Maybe POSIX

    The timestamp in epoch seconds for when the current pricing plan term for this reserved queue expires.

  • purchasedAt :: Maybe POSIX

    The timestamp in epoch seconds for when you set up the current pricing plan for this reserved queue.

  • commitment :: Maybe Commitment

    The length of the term of your reserved queue pricing plan commitment.

  • reservedSlots :: Maybe Int

    Specifies the number of reserved transcode slots (RTS) for this queue. The number of RTS determines how many jobs the queue can process in parallel; each RTS can process one job at a time. When you increase this number, you extend your existing commitment with a new 12-month commitment for a larger number of RTS. The new commitment begins when you purchase the additional capacity. You can't decrease the number of RTS in your reserved queue.

  • renewalType :: Maybe RenewalType

    Specifies whether the term of your reserved queue pricing plan is automatically extended (AUTO_RENEW) or expires (EXPIRE) at the end of the term.

Instances

Instances details
Eq ReservationPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlan

Read ReservationPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlan

Show ReservationPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlan

Generic ReservationPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlan

Associated Types

type Rep ReservationPlan :: Type -> Type #

NFData ReservationPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlan

Methods

rnf :: ReservationPlan -> () #

Hashable ReservationPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlan

FromJSON ReservationPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlan

type Rep ReservationPlan Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlan

type Rep ReservationPlan = D1 ('MetaData "ReservationPlan" "Amazonka.MediaConvert.Types.ReservationPlan" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "ReservationPlan'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "status") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ReservationPlanStatus)) :*: (S1 ('MetaSel ('Just "expiresAt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: S1 ('MetaSel ('Just "purchasedAt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)))) :*: (S1 ('MetaSel ('Just "commitment") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Commitment)) :*: (S1 ('MetaSel ('Just "reservedSlots") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)) :*: S1 ('MetaSel ('Just "renewalType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe RenewalType))))))

newReservationPlan :: ReservationPlan Source #

Create a value of ReservationPlan with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:status:ReservationPlan', reservationPlan_status - Specifies whether the pricing plan for your reserved queue is ACTIVE or EXPIRED.

$sel:expiresAt:ReservationPlan', reservationPlan_expiresAt - The timestamp in epoch seconds for when the current pricing plan term for this reserved queue expires.

$sel:purchasedAt:ReservationPlan', reservationPlan_purchasedAt - The timestamp in epoch seconds for when you set up the current pricing plan for this reserved queue.

$sel:commitment:ReservationPlan', reservationPlan_commitment - The length of the term of your reserved queue pricing plan commitment.

$sel:reservedSlots:ReservationPlan', reservationPlan_reservedSlots - Specifies the number of reserved transcode slots (RTS) for this queue. The number of RTS determines how many jobs the queue can process in parallel; each RTS can process one job at a time. When you increase this number, you extend your existing commitment with a new 12-month commitment for a larger number of RTS. The new commitment begins when you purchase the additional capacity. You can't decrease the number of RTS in your reserved queue.

$sel:renewalType:ReservationPlan', reservationPlan_renewalType - Specifies whether the term of your reserved queue pricing plan is automatically extended (AUTO_RENEW) or expires (EXPIRE) at the end of the term.

reservationPlan_status :: Lens' ReservationPlan (Maybe ReservationPlanStatus) Source #

Specifies whether the pricing plan for your reserved queue is ACTIVE or EXPIRED.

reservationPlan_expiresAt :: Lens' ReservationPlan (Maybe UTCTime) Source #

The timestamp in epoch seconds for when the current pricing plan term for this reserved queue expires.

reservationPlan_purchasedAt :: Lens' ReservationPlan (Maybe UTCTime) Source #

The timestamp in epoch seconds for when you set up the current pricing plan for this reserved queue.

reservationPlan_commitment :: Lens' ReservationPlan (Maybe Commitment) Source #

The length of the term of your reserved queue pricing plan commitment.

reservationPlan_reservedSlots :: Lens' ReservationPlan (Maybe Int) Source #

Specifies the number of reserved transcode slots (RTS) for this queue. The number of RTS determines how many jobs the queue can process in parallel; each RTS can process one job at a time. When you increase this number, you extend your existing commitment with a new 12-month commitment for a larger number of RTS. The new commitment begins when you purchase the additional capacity. You can't decrease the number of RTS in your reserved queue.

reservationPlan_renewalType :: Lens' ReservationPlan (Maybe RenewalType) Source #

Specifies whether the term of your reserved queue pricing plan is automatically extended (AUTO_RENEW) or expires (EXPIRE) at the end of the term.
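
ReservationPlan only has a FromJSON instance, so you normally read it from a queue description rather than construct it yourself. A small sketch of inspecting a returned plan with the lenses above, assuming the (^.) operator from the lens package:

import Control.Lens ((^.))
import Amazonka.MediaConvert.Types

-- Summarize a plan returned by the service, for example on a Queue in a
-- GetQueue or ListQueues response.
describePlan :: ReservationPlan -> String
describePlan plan =
  "status=" <> show (plan ^. reservationPlan_status)
    <> ", slots=" <> show (plan ^. reservationPlan_reservedSlots)
    <> ", renewal=" <> show (plan ^. reservationPlan_renewalType)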

ReservationPlanSettings

data ReservationPlanSettings Source #

Details about the pricing plan for your reserved queue. Required for reserved queues and not applicable to on-demand queues.

See: newReservationPlanSettings smart constructor.

Constructors

ReservationPlanSettings' 

Fields

  • commitment :: Commitment

    The length of the term of your reserved queue pricing plan commitment.

  • reservedSlots :: Int

    Specifies the number of reserved transcode slots (RTS) for this queue. The number of RTS determines how many jobs the queue can process in parallel; each RTS can process one job at a time. You can't decrease the number of RTS in your reserved queue. You can increase the number of RTS by extending your existing commitment with a new 12-month commitment for the larger number. The new commitment begins when you purchase the additional capacity. You can't cancel your commitment or revert to your original commitment after you increase the capacity.

  • renewalType :: RenewalType

    Specifies whether the term of your reserved queue pricing plan is automatically extended (AUTO_RENEW) or expires (EXPIRE) at the end of the term. When your term is auto renewed, you extend your commitment by 12 months from the auto renew date. You can cancel this commitment.

Instances

Instances details
Eq ReservationPlanSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanSettings

Read ReservationPlanSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanSettings

Show ReservationPlanSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanSettings

Generic ReservationPlanSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanSettings

Associated Types

type Rep ReservationPlanSettings :: Type -> Type #

NFData ReservationPlanSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanSettings

Methods

rnf :: ReservationPlanSettings -> () #

Hashable ReservationPlanSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanSettings

ToJSON ReservationPlanSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanSettings

type Rep ReservationPlanSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ReservationPlanSettings

type Rep ReservationPlanSettings = D1 ('MetaData "ReservationPlanSettings" "Amazonka.MediaConvert.Types.ReservationPlanSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "ReservationPlanSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "commitment") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Commitment) :*: (S1 ('MetaSel ('Just "reservedSlots") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Int) :*: S1 ('MetaSel ('Just "renewalType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 RenewalType))))

newReservationPlanSettings Source #

Create a value of ReservationPlanSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:commitment:ReservationPlanSettings', reservationPlanSettings_commitment - The length of the term of your reserved queue pricing plan commitment.

$sel:reservedSlots:ReservationPlanSettings', reservationPlanSettings_reservedSlots - Specifies the number of reserved transcode slots (RTS) for this queue. The number of RTS determines how many jobs the queue can process in parallel; each RTS can process one job at a time. You can't decrease the number of RTS in your reserved queue. You can increase the number of RTS by extending your existing commitment with a new 12-month commitment for the larger number. The new commitment begins when you purchase the additional capacity. You can't cancel your commitment or revert to your original commitment after you increase the capacity.

$sel:renewalType:ReservationPlanSettings', reservationPlanSettings_renewalType - Specifies whether the term of your reserved queue pricing plan is automatically extended (AUTO_RENEW) or expires (EXPIRE) at the end of the term. When your term is auto renewed, you extend your commitment by 12 months from the auto renew date. You can cancel this commitment.

reservationPlanSettings_commitment :: Lens' ReservationPlanSettings Commitment Source #

The length of the term of your reserved queue pricing plan commitment.

reservationPlanSettings_reservedSlots :: Lens' ReservationPlanSettings Int Source #

Specifies the number of reserved transcode slots (RTS) for this queue. The number of RTS determines how many jobs the queue can process in parallel; each RTS can process one job at a time. You can't decrease the number of RTS in your reserved queue. You can increase the number of RTS by extending your existing commitment with a new 12-month commitment for the larger number. The new commitment begins when you purchase the additional capacity. You can't cancel your commitment or revert to your original commitment after you increase the capacity.

reservationPlanSettings_renewalType :: Lens' ReservationPlanSettings RenewalType Source #

Specifies whether the term of your reserved queue pricing plan is automatically extended (AUTO_RENEW) or expires (EXPIRE) at the end of the term. When your term is auto renewed, you extend your commitment by 12 months from the auto renew date. You can cancel this commitment.
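
All three fields here are required, so the smart constructor takes them as arguments. A hedged sketch, assuming the arguments follow the field order listed above (commitment, reservedSlots, renewalType) and that the enum values use this package's TypeName_VALUE pattern naming:

import Amazonka.MediaConvert.Types

-- A 12-month commitment for one reserved transcode slot that renews
-- automatically. The argument order and the Commitment_ONE_YEAR and
-- RenewalType_AUTO_RENEW pattern names are assumptions based on this
-- package's conventions.
reservedQueuePlan :: ReservationPlanSettings
reservedQueuePlan =
  newReservationPlanSettings
    Commitment_ONE_YEAR
    1
    RenewalType_AUTO_RENEW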

ResourceTags

data ResourceTags Source #

The Amazon Resource Name (ARN) and tags for an AWS Elemental MediaConvert resource.

See: newResourceTags smart constructor.

Constructors

ResourceTags' 

Fields

Instances

Instances details
Eq ResourceTags Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ResourceTags

Read ResourceTags Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ResourceTags

Show ResourceTags Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ResourceTags

Generic ResourceTags Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ResourceTags

Associated Types

type Rep ResourceTags :: Type -> Type #

NFData ResourceTags Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ResourceTags

Methods

rnf :: ResourceTags -> () #

Hashable ResourceTags Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ResourceTags

FromJSON ResourceTags Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ResourceTags

type Rep ResourceTags Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.ResourceTags

type Rep ResourceTags = D1 ('MetaData "ResourceTags" "Amazonka.MediaConvert.Types.ResourceTags" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "ResourceTags'" 'PrefixI 'True) (S1 ('MetaSel ('Just "arn") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "tags") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe (HashMap Text Text)))))

newResourceTags :: ResourceTags Source #

Create a value of ResourceTags with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:arn:ResourceTags', resourceTags_arn - The Amazon Resource Name (ARN) of the resource.

$sel:tags:ResourceTags', resourceTags_tags - The tags for the resource.

resourceTags_arn :: Lens' ResourceTags (Maybe Text) Source #

The Amazon Resource Name (ARN) of the resource.

resourceTags_tags :: Lens' ResourceTags (Maybe (HashMap Text Text)) Source #

The tags for the resource.
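
ResourceTags is a response-only type (FromJSON), so the typical use is reading the tag map off a tagging response. A minimal sketch, assuming (^.) from the lens package and the unordered-containers HashMap API:

import Control.Lens ((^.))
import qualified Data.HashMap.Strict as HashMap
import Data.Maybe (fromMaybe)
import Data.Text (Text)
import Amazonka.MediaConvert.Types

-- Look up a single tag, treating a missing tag map as empty.
lookupTag :: Text -> ResourceTags -> Maybe Text
lookupTag key rt =
  HashMap.lookup key (fromMaybe HashMap.empty (rt ^. resourceTags_tags))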

S3DestinationAccessControl

data S3DestinationAccessControl Source #

Optional. Have MediaConvert automatically apply Amazon S3 access control for the outputs in this output group. When you don't use this setting, S3 automatically applies the default access control list PRIVATE.

See: newS3DestinationAccessControl smart constructor.

Constructors

S3DestinationAccessControl' 

Fields

Instances

Instances details
Eq S3DestinationAccessControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3DestinationAccessControl

Read S3DestinationAccessControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3DestinationAccessControl

Show S3DestinationAccessControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3DestinationAccessControl

Generic S3DestinationAccessControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3DestinationAccessControl

Associated Types

type Rep S3DestinationAccessControl :: Type -> Type #

NFData S3DestinationAccessControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3DestinationAccessControl

Hashable S3DestinationAccessControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3DestinationAccessControl

ToJSON S3DestinationAccessControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3DestinationAccessControl

FromJSON S3DestinationAccessControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3DestinationAccessControl

type Rep S3DestinationAccessControl Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3DestinationAccessControl

type Rep S3DestinationAccessControl = D1 ('MetaData "S3DestinationAccessControl" "Amazonka.MediaConvert.Types.S3DestinationAccessControl" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "S3DestinationAccessControl'" 'PrefixI 'True) (S1 ('MetaSel ('Just "cannedAcl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe S3ObjectCannedAcl))))

newS3DestinationAccessControl :: S3DestinationAccessControl Source #

Create a value of S3DestinationAccessControl with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:cannedAcl:S3DestinationAccessControl', s3DestinationAccessControl_cannedAcl - Choose an Amazon S3 canned ACL for MediaConvert to apply to this output.

s3DestinationAccessControl_cannedAcl :: Lens' S3DestinationAccessControl (Maybe S3ObjectCannedAcl) Source #

Choose an Amazon S3 canned ACL for MediaConvert to apply to this output.
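
A short sketch of granting the bucket owner full control of the job outputs. The S3ObjectCannedAcl_BUCKET_OWNER_FULL_CONTROL pattern name is an assumption based on this package's TypeName_VALUE enum convention; the lens operators come from the lens package:

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- Apply the BUCKET_OWNER_FULL_CONTROL canned ACL to this output group.
-- The pattern name is assumed from the package's enum conventions.
ownerFullControl :: S3DestinationAccessControl
ownerFullControl =
  newS3DestinationAccessControl
    & s3DestinationAccessControl_cannedAcl
        ?~ S3ObjectCannedAcl_BUCKET_OWNER_FULL_CONTROL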

S3DestinationSettings

data S3DestinationSettings Source #

Settings associated with an S3 destination.

See: newS3DestinationSettings smart constructor.

Constructors

S3DestinationSettings' 

Fields

Instances

Instances details
Eq S3DestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3DestinationSettings

Read S3DestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3DestinationSettings

Show S3DestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3DestinationSettings

Generic S3DestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3DestinationSettings

Associated Types

type Rep S3DestinationSettings :: Type -> Type #

NFData S3DestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3DestinationSettings

Methods

rnf :: S3DestinationSettings -> () #

Hashable S3DestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3DestinationSettings

ToJSON S3DestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3DestinationSettings

FromJSON S3DestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3DestinationSettings

type Rep S3DestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3DestinationSettings

type Rep S3DestinationSettings = D1 ('MetaData "S3DestinationSettings" "Amazonka.MediaConvert.Types.S3DestinationSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "S3DestinationSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "accessControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe S3DestinationAccessControl)) :*: S1 ('MetaSel ('Just "encryption") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe S3EncryptionSettings))))

newS3DestinationSettings :: S3DestinationSettings Source #

Create a value of S3DestinationSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:accessControl:S3DestinationSettings', s3DestinationSettings_accessControl - Optional. Have MediaConvert automatically apply Amazon S3 access control for the outputs in this output group. When you don't use this setting, S3 automatically applies the default access control list PRIVATE.

$sel:encryption:S3DestinationSettings', s3DestinationSettings_encryption - Settings for how your job outputs are encrypted as they are uploaded to Amazon S3.

s3DestinationSettings_accessControl :: Lens' S3DestinationSettings (Maybe S3DestinationAccessControl) Source #

Optional. Have MediaConvert automatically apply Amazon S3 access control for the outputs in this output group. When you don't use this setting, S3 automatically applies the default access control list PRIVATE.

s3DestinationSettings_encryption :: Lens' S3DestinationSettings (Maybe S3EncryptionSettings) Source #

Settings for how your job outputs are encrypted as they are uploaded to Amazon S3.

S3EncryptionSettings

data S3EncryptionSettings Source #

Settings for how your job outputs are encrypted as they are uploaded to Amazon S3.

See: newS3EncryptionSettings smart constructor.

Constructors

S3EncryptionSettings' 

Fields

  • encryptionType :: Maybe S3ServerSideEncryptionType

    Specify how you want your data keys managed. AWS uses data keys to encrypt your content. AWS also encrypts the data keys themselves, using a customer master key (CMK), and then stores the encrypted data keys alongside your encrypted content. Use this setting to specify which AWS service manages the CMK. For the simplest setup, choose Amazon S3 (SERVER_SIDE_ENCRYPTION_S3). If you want your master key to be managed by AWS Key Management Service (KMS), choose AWS KMS (SERVER_SIDE_ENCRYPTION_KMS). By default, when you choose AWS KMS, KMS uses the AWS managed customer master key (CMK) associated with Amazon S3 to encrypt your data keys. You can optionally choose to specify a different, customer managed CMK. Do so by specifying the Amazon Resource Name (ARN) of the key for the setting KMS ARN (kmsKeyArn).

  • kmsKeyArn :: Maybe Text

    Optionally, specify the customer master key (CMK) that you want to use to encrypt the data key that AWS uses to encrypt your output content. Enter the Amazon Resource Name (ARN) of the CMK. To use this setting, you must also set Server-side encryption (S3ServerSideEncryptionType) to AWS KMS (SERVER_SIDE_ENCRYPTION_KMS). If you set Server-side encryption to AWS KMS but don't specify a CMK here, AWS uses the AWS managed CMK associated with Amazon S3.

  • kmsEncryptionContext :: Maybe Text

    Optionally, specify the encryption context that you want to use alongside your KMS key. AWS KMS uses this encryption context as additional authenticated data (AAD) to support authenticated encryption. This value must be a base64-encoded UTF-8 string holding JSON which represents a string-string map. To use this setting, you must also set Server-side encryption (S3ServerSideEncryptionType) to AWS KMS (SERVER_SIDE_ENCRYPTION_KMS). For more information about encryption context, see: https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#encrypt_context.

Instances

Instances details
Eq S3EncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3EncryptionSettings

Read S3EncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3EncryptionSettings

Show S3EncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3EncryptionSettings

Generic S3EncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3EncryptionSettings

Associated Types

type Rep S3EncryptionSettings :: Type -> Type #

NFData S3EncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3EncryptionSettings

Methods

rnf :: S3EncryptionSettings -> () #

Hashable S3EncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3EncryptionSettings

ToJSON S3EncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3EncryptionSettings

FromJSON S3EncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3EncryptionSettings

type Rep S3EncryptionSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.S3EncryptionSettings

type Rep S3EncryptionSettings = D1 ('MetaData "S3EncryptionSettings" "Amazonka.MediaConvert.Types.S3EncryptionSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "S3EncryptionSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "encryptionType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe S3ServerSideEncryptionType)) :*: (S1 ('MetaSel ('Just "kmsKeyArn") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "kmsEncryptionContext") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))))

newS3EncryptionSettings :: S3EncryptionSettings Source #

Create a value of S3EncryptionSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:encryptionType:S3EncryptionSettings', s3EncryptionSettings_encryptionType - Specify how you want your data keys managed. AWS uses data keys to encrypt your content. AWS also encrypts the data keys themselves, using a customer master key (CMK), and then stores the encrypted data keys alongside your encrypted content. Use this setting to specify which AWS service manages the CMK. For the simplest setup, choose Amazon S3 (SERVER_SIDE_ENCRYPTION_S3). If you want your master key to be managed by AWS Key Management Service (KMS), choose AWS KMS (SERVER_SIDE_ENCRYPTION_KMS). By default, when you choose AWS KMS, KMS uses the AWS managed customer master key (CMK) associated with Amazon S3 to encrypt your data keys. You can optionally choose to specify a different, customer managed CMK. Do so by specifying the Amazon Resource Name (ARN) of the key for the setting KMS ARN (kmsKeyArn).

$sel:kmsKeyArn:S3EncryptionSettings', s3EncryptionSettings_kmsKeyArn - Optionally, specify the customer master key (CMK) that you want to use to encrypt the data key that AWS uses to encrypt your output content. Enter the Amazon Resource Name (ARN) of the CMK. To use this setting, you must also set Server-side encryption (S3ServerSideEncryptionType) to AWS KMS (SERVER_SIDE_ENCRYPTION_KMS). If you set Server-side encryption to AWS KMS but don't specify a CMK here, AWS uses the AWS managed CMK associated with Amazon S3.

$sel:kmsEncryptionContext:S3EncryptionSettings', s3EncryptionSettings_kmsEncryptionContext - Optionally, specify the encryption context that you want to use alongside your KMS key. AWS KMS uses this encryption context as additional authenticated data (AAD) to support authenticated encryption. This value must be a base64-encoded UTF-8 string holding JSON which represents a string-string map. To use this setting, you must also set Server-side encryption (S3ServerSideEncryptionType) to AWS KMS (SERVER_SIDE_ENCRYPTION_KMS). For more information about encryption context, see: https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#encrypt_context.

s3EncryptionSettings_encryptionType :: Lens' S3EncryptionSettings (Maybe S3ServerSideEncryptionType) Source #

Specify how you want your data keys managed. AWS uses data keys to encrypt your content. AWS also encrypts the data keys themselves, using a customer master key (CMK), and then stores the encrypted data keys alongside your encrypted content. Use this setting to specify which AWS service manages the CMK. For the simplest setup, choose Amazon S3 (SERVER_SIDE_ENCRYPTION_S3). If you want your master key to be managed by AWS Key Management Service (KMS), choose AWS KMS (SERVER_SIDE_ENCRYPTION_KMS). By default, when you choose AWS KMS, KMS uses the AWS managed customer master key (CMK) associated with Amazon S3 to encrypt your data keys. You can optionally choose to specify a different, customer managed CMK. Do so by specifying the Amazon Resource Name (ARN) of the key for the setting KMS ARN (kmsKeyArn).

s3EncryptionSettings_kmsKeyArn :: Lens' S3EncryptionSettings (Maybe Text) Source #

Optionally, specify the customer master key (CMK) that you want to use to encrypt the data key that AWS uses to encrypt your output content. Enter the Amazon Resource Name (ARN) of the CMK. To use this setting, you must also set Server-side encryption (S3ServerSideEncryptionType) to AWS KMS (SERVER_SIDE_ENCRYPTION_KMS). If you set Server-side encryption to AWS KMS but don't specify a CMK here, AWS uses the AWS managed CMK associated with Amazon S3.

s3EncryptionSettings_kmsEncryptionContext :: Lens' S3EncryptionSettings (Maybe Text) Source #

Optionally, specify the encryption context that you want to use alongside your KMS key. AWS KMS uses this encryption context as additional authenticated data (AAD) to support authenticated encryption. This value must be a base64-encoded UTF-8 string holding JSON which represents a string-string map. To use this setting, you must also set Server-side encryption (S3ServerSideEncryptionType) to AWS KMS (SERVER_SIDE_ENCRYPTION_KMS). For more information about encryption context, see: https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#encrypt_context.
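
A hedged sketch that ties S3DestinationSettings and S3EncryptionSettings together: job outputs encrypted with a customer managed KMS key. The S3ServerSideEncryptionType_SERVER_SIDE_ENCRYPTION_KMS pattern name is assumed from the package's enum convention, and the key ARN is a placeholder, not a real key. The same approach applies to s3DestinationSettings_accessControl if you also want a canned ACL:

{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- S3 destination settings that encrypt outputs with a customer managed
-- KMS key. The ARN below is a placeholder.
kmsDestination :: S3DestinationSettings
kmsDestination =
  newS3DestinationSettings
    & s3DestinationSettings_encryption
        ?~ ( newS3EncryptionSettings
               & s3EncryptionSettings_encryptionType
                   ?~ S3ServerSideEncryptionType_SERVER_SIDE_ENCRYPTION_KMS
               & s3EncryptionSettings_kmsKeyArn
                   ?~ "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"
           )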

SccDestinationSettings

data SccDestinationSettings Source #

Settings related to SCC captions. SCC is a sidecar format that holds captions in a file that is separate from the video container. Set up sidecar captions in the same output group, but different output from your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/scc-srt-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to SCC.

See: newSccDestinationSettings smart constructor.

Constructors

SccDestinationSettings' 

Fields

  • framerate :: Maybe SccDestinationFramerate

    Set Framerate (SccDestinationFramerate) to make sure that the captions and the video are synchronized in the output. Specify a frame rate that matches the frame rate of the associated video. If the video frame rate is 29.97, choose 29.97 dropframe (FRAMERATE_29_97_DROPFRAME) only if the video has video_insertion=true and drop_frame_timecode=true; otherwise, choose 29.97 non-dropframe (FRAMERATE_29_97_NON_DROPFRAME).

Instances

Instances details
Eq SccDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationSettings

Read SccDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationSettings

Show SccDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationSettings

Generic SccDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationSettings

Associated Types

type Rep SccDestinationSettings :: Type -> Type #

NFData SccDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationSettings

Methods

rnf :: SccDestinationSettings -> () #

Hashable SccDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationSettings

ToJSON SccDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationSettings

FromJSON SccDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationSettings

type Rep SccDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SccDestinationSettings

type Rep SccDestinationSettings = D1 ('MetaData "SccDestinationSettings" "Amazonka.MediaConvert.Types.SccDestinationSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "SccDestinationSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "framerate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe SccDestinationFramerate))))

newSccDestinationSettings :: SccDestinationSettings Source #

Create a value of SccDestinationSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:framerate:SccDestinationSettings', sccDestinationSettings_framerate - Set Framerate (SccDestinationFramerate) to make sure that the captions and the video are synchronized in the output. Specify a frame rate that matches the frame rate of the associated video. If the video frame rate is 29.97, choose 29.97 dropframe (FRAMERATE_29_97_DROPFRAME) only if the video has video_insertion=true and drop_frame_timecode=true; otherwise, choose 29.97 non-dropframe (FRAMERATE_29_97_NON_DROPFRAME).

sccDestinationSettings_framerate :: Lens' SccDestinationSettings (Maybe SccDestinationFramerate) Source #

Set Framerate (SccDestinationFramerate) to make sure that the captions and the video are synchronized in the output. Specify a frame rate that matches the frame rate of the associated video. If the video frame rate is 29.97, choose 29.97 dropframe (FRAMERATE_29_97_DROPFRAME) only if the video has video_insertion=true and drop_frame_timecode=true; otherwise, choose 29.97 non-dropframe (FRAMERATE_29_97_NON_DROPFRAME).
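
A minimal sketch of SCC sidecar captions for 29.97 fps non-drop-frame video, using the lens above. The SccDestinationFramerate_FRAMERATE_29_97_NON_DROPFRAME pattern name is assumed from the package's enum convention:

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- Match the caption frame rate to a 29.97 fps non-drop-frame video.
sccCaptions :: SccDestinationSettings
sccCaptions =
  newSccDestinationSettings
    & sccDestinationSettings_framerate
        ?~ SccDestinationFramerate_FRAMERATE_29_97_NON_DROPFRAME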

SpekeKeyProvider

data SpekeKeyProvider Source #

If your output group type is HLS, DASH, or Microsoft Smooth, use these settings when doing DRM encryption with a SPEKE-compliant key provider. If your output group type is CMAF, use the SpekeKeyProviderCmaf settings instead.

See: newSpekeKeyProvider smart constructor.

Constructors

SpekeKeyProvider' 

Fields

  • resourceId :: Maybe Text

    Specify the resource ID that your SPEKE-compliant key provider uses to identify this content.

  • certificateArn :: Maybe Text

    If you want your key provider to encrypt the content keys that it provides to MediaConvert, set up a certificate with a master key using AWS Certificate Manager. Specify the certificate's Amazon Resource Name (ARN) here.

  • url :: Maybe Text

    Specify the URL to the key server that your SPEKE-compliant DRM key provider uses to provide keys for encrypting your content.

  • systemIds :: Maybe [Text]

    Relates to SPEKE implementation. DRM system identifiers. DASH output groups support a maximum of two system IDs. Other group types support one system ID. See https://dashif.org/identifiers/content_protection/ for more details.

Instances

Instances details
Eq SpekeKeyProvider Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SpekeKeyProvider

Read SpekeKeyProvider Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SpekeKeyProvider

Show SpekeKeyProvider Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SpekeKeyProvider

Generic SpekeKeyProvider Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SpekeKeyProvider

Associated Types

type Rep SpekeKeyProvider :: Type -> Type #

NFData SpekeKeyProvider Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SpekeKeyProvider

Methods

rnf :: SpekeKeyProvider -> () #

Hashable SpekeKeyProvider Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SpekeKeyProvider

ToJSON SpekeKeyProvider Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SpekeKeyProvider

FromJSON SpekeKeyProvider Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SpekeKeyProvider

type Rep SpekeKeyProvider Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SpekeKeyProvider

type Rep SpekeKeyProvider = D1 ('MetaData "SpekeKeyProvider" "Amazonka.MediaConvert.Types.SpekeKeyProvider" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "SpekeKeyProvider'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "resourceId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "certificateArn") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "url") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "systemIds") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [Text])))))

newSpekeKeyProvider :: SpekeKeyProvider Source #

Create a value of SpekeKeyProvider with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:resourceId:SpekeKeyProvider', spekeKeyProvider_resourceId - Specify the resource ID that your SPEKE-compliant key provider uses to identify this content.

$sel:certificateArn:SpekeKeyProvider', spekeKeyProvider_certificateArn - If you want your key provider to encrypt the content keys that it provides to MediaConvert, set up a certificate with a master key using AWS Certificate Manager. Specify the certificate's Amazon Resource Name (ARN) here.

$sel:url:SpekeKeyProvider', spekeKeyProvider_url - Specify the URL to the key server that your SPEKE-compliant DRM key provider uses to provide keys for encrypting your content.

$sel:systemIds:SpekeKeyProvider', spekeKeyProvider_systemIds - Relates to SPEKE implementation. DRM system identifiers. DASH output groups support a maximum of two system IDs. Other group types support one system ID. See https://dashif.org/identifiers/content_protection/ for more details.

spekeKeyProvider_resourceId :: Lens' SpekeKeyProvider (Maybe Text) Source #

Specify the resource ID that your SPEKE-compliant key provider uses to identify this content.

spekeKeyProvider_certificateArn :: Lens' SpekeKeyProvider (Maybe Text) Source #

If you want your key provider to encrypt the content keys that it provides to MediaConvert, set up a certificate with a master key using AWS Certificate Manager. Specify the certificate's Amazon Resource Name (ARN) here.

spekeKeyProvider_url :: Lens' SpekeKeyProvider (Maybe Text) Source #

Specify the URL to the key server that your SPEKE-compliant DRM key provider uses to provide keys for encrypting your content.

spekeKeyProvider_systemIds :: Lens' SpekeKeyProvider (Maybe [Text]) Source #

Relates to SPEKE implementation. DRM system identifiers. DASH output groups support a maximum of two system IDs. Other group types support one system ID. See https://dashif.org/identifiers/content_protection/ for more details.
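
A hedged sketch of a SPEKE key provider for a DASH output group, using the lenses above. The URL and resource ID are placeholders; the system ID shown is the Widevine identifier from the dashif.org registry linked above, included purely for illustration:

{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- SPEKE-compliant key provider for a DASH output group. The URL and
-- resource ID are placeholders.
spekeDash :: SpekeKeyProvider
spekeDash =
  newSpekeKeyProvider
    & spekeKeyProvider_url ?~ "https://speke.example.com/v1.0/copyProtection"
    & spekeKeyProvider_resourceId ?~ "my-content-id"
    & spekeKeyProvider_systemIds ?~ ["edef8ba9-79d6-4ace-a3c8-27dcd51d21ed"]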

SpekeKeyProviderCmaf

data SpekeKeyProviderCmaf Source #

If your output group type is CMAF, use these settings when doing DRM encryption with a SPEKE-compliant key provider. If your output group type is HLS, DASH, or Microsoft Smooth, use the SpekeKeyProvider settings instead.

See: newSpekeKeyProviderCmaf smart constructor.

Constructors

SpekeKeyProviderCmaf' 

Fields

  • resourceId :: Maybe Text

    Specify the resource ID that your SPEKE-compliant key provider uses to identify this content.

  • dashSignaledSystemIds :: Maybe [Text]

    Specify the DRM system IDs that you want signaled in the DASH manifest that MediaConvert creates as part of this CMAF package. The DASH manifest can currently signal up to three system IDs. For more information, see https://dashif.org/identifiers/content_protection/.

  • certificateArn :: Maybe Text

    If you want your key provider to encrypt the content keys that it provides to MediaConvert, set up a certificate with a master key using AWS Certificate Manager. Specify the certificate's Amazon Resource Name (ARN) here.

  • url :: Maybe Text

    Specify the URL to the key server that your SPEKE-compliant DRM key provider uses to provide keys for encrypting your content.

  • hlsSignaledSystemIds :: Maybe [Text]

    Specify the DRM system ID that you want signaled in the HLS manifest that MediaConvert creates as part of this CMAF package. The HLS manifest can currently signal only one system ID. For more information, see https://dashif.org/identifiers/content_protection/.

Instances

Instances details
Eq SpekeKeyProviderCmaf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SpekeKeyProviderCmaf

Read SpekeKeyProviderCmaf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SpekeKeyProviderCmaf

Show SpekeKeyProviderCmaf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SpekeKeyProviderCmaf

Generic SpekeKeyProviderCmaf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SpekeKeyProviderCmaf

Associated Types

type Rep SpekeKeyProviderCmaf :: Type -> Type #

NFData SpekeKeyProviderCmaf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SpekeKeyProviderCmaf

Methods

rnf :: SpekeKeyProviderCmaf -> () #

Hashable SpekeKeyProviderCmaf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SpekeKeyProviderCmaf

ToJSON SpekeKeyProviderCmaf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SpekeKeyProviderCmaf

FromJSON SpekeKeyProviderCmaf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SpekeKeyProviderCmaf

type Rep SpekeKeyProviderCmaf Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SpekeKeyProviderCmaf

type Rep SpekeKeyProviderCmaf = D1 ('MetaData "SpekeKeyProviderCmaf" "Amazonka.MediaConvert.Types.SpekeKeyProviderCmaf" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "SpekeKeyProviderCmaf'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "resourceId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "dashSignaledSystemIds") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [Text]))) :*: (S1 ('MetaSel ('Just "certificateArn") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "url") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "hlsSignaledSystemIds") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [Text]))))))

newSpekeKeyProviderCmaf :: SpekeKeyProviderCmaf Source #

Create a value of SpekeKeyProviderCmaf with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:resourceId:SpekeKeyProviderCmaf', spekeKeyProviderCmaf_resourceId - Specify the resource ID that your SPEKE-compliant key provider uses to identify this content.

$sel:dashSignaledSystemIds:SpekeKeyProviderCmaf', spekeKeyProviderCmaf_dashSignaledSystemIds - Specify the DRM system IDs that you want signaled in the DASH manifest that MediaConvert creates as part of this CMAF package. The DASH manifest can currently signal up to three system IDs. For more information, see https://dashif.org/identifiers/content_protection/.

$sel:certificateArn:SpekeKeyProviderCmaf', spekeKeyProviderCmaf_certificateArn - If you want your key provider to encrypt the content keys that it provides to MediaConvert, set up a certificate with a master key using AWS Certificate Manager. Specify the certificate's Amazon Resource Name (ARN) here.

$sel:url:SpekeKeyProviderCmaf', spekeKeyProviderCmaf_url - Specify the URL to the key server that your SPEKE-compliant DRM key provider uses to provide keys for encrypting your content.

$sel:hlsSignaledSystemIds:SpekeKeyProviderCmaf', spekeKeyProviderCmaf_hlsSignaledSystemIds - Specify the DRM system ID that you want signaled in the HLS manifest that MediaConvert creates as part of this CMAF package. The HLS manifest can currently signal only one system ID. For more information, see https://dashif.org/identifiers/content_protection/.

spekeKeyProviderCmaf_resourceId :: Lens' SpekeKeyProviderCmaf (Maybe Text) Source #

Specify the resource ID that your SPEKE-compliant key provider uses to identify this content.

spekeKeyProviderCmaf_dashSignaledSystemIds :: Lens' SpekeKeyProviderCmaf (Maybe [Text]) Source #

Specify the DRM system IDs that you want signaled in the DASH manifest that MediaConvert creates as part of this CMAF package. The DASH manifest can currently signal up to three system IDs. For more information, see https://dashif.org/identifiers/content_protection/.

spekeKeyProviderCmaf_certificateArn :: Lens' SpekeKeyProviderCmaf (Maybe Text) Source #

If you want your key provider to encrypt the content keys that it provides to MediaConvert, set up a certificate with a master key using AWS Certificate Manager. Specify the certificate's Amazon Resource Name (ARN) here.

spekeKeyProviderCmaf_url :: Lens' SpekeKeyProviderCmaf (Maybe Text) Source #

Specify the URL to the key server that your SPEKE-compliant DRM key provider uses to provide keys for encrypting your content.

spekeKeyProviderCmaf_hlsSignaledSystemIds :: Lens' SpekeKeyProviderCmaf (Maybe [Text]) Source #

Specify the DRM system ID that you want signaled in the HLS manifest that MediaConvert creates as part of this CMAF package. The HLS manifest can currently signal only one system ID. For more information, see https://dashif.org/identifiers/content_protection/.
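
The CMAF variant is analogous, except that DASH and HLS system IDs are signaled separately. A hedged sketch with a placeholder URL and resource ID; the two UUIDs are commonly cited Widevine and FairPlay identifiers, shown for illustration only:

{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- SPEKE key provider for a CMAF package: one system ID signaled in the
-- DASH manifest and one in the HLS manifest. Values are placeholders.
spekeCmaf :: SpekeKeyProviderCmaf
spekeCmaf =
  newSpekeKeyProviderCmaf
    & spekeKeyProviderCmaf_url ?~ "https://speke.example.com/v1.0/copyProtection"
    & spekeKeyProviderCmaf_resourceId ?~ "my-content-id"
    & spekeKeyProviderCmaf_dashSignaledSystemIds
        ?~ ["edef8ba9-79d6-4ace-a3c8-27dcd51d21ed"]
    & spekeKeyProviderCmaf_hlsSignaledSystemIds
        ?~ ["94ce86fb-07ff-4f43-adb8-93d2fa968ca2"]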

SrtDestinationSettings

data SrtDestinationSettings Source #

Settings related to SRT captions. SRT is a sidecar format that holds captions in a file that is separate from the video container. Set up sidecar captions in the same output group, but different output from your video. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to SRT.

See: newSrtDestinationSettings smart constructor.

Constructors

SrtDestinationSettings' 

Fields

  • stylePassthrough :: Maybe SrtStylePassthrough

    Set Style passthrough (StylePassthrough) to ENABLED to use the available style, color, and position information from your input captions. MediaConvert uses default settings for any missing style and position information in your input captions. Set Style passthrough to DISABLED, or leave blank, to ignore the style and position information from your input captions and use simplified output captions.

Instances

Instances details
Eq SrtDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtDestinationSettings

Read SrtDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtDestinationSettings

Show SrtDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtDestinationSettings

Generic SrtDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtDestinationSettings

Associated Types

type Rep SrtDestinationSettings :: Type -> Type #

NFData SrtDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtDestinationSettings

Methods

rnf :: SrtDestinationSettings -> () #

Hashable SrtDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtDestinationSettings

ToJSON SrtDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtDestinationSettings

FromJSON SrtDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtDestinationSettings

type Rep SrtDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.SrtDestinationSettings

type Rep SrtDestinationSettings = D1 ('MetaData "SrtDestinationSettings" "Amazonka.MediaConvert.Types.SrtDestinationSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "SrtDestinationSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "stylePassthrough") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe SrtStylePassthrough))))

newSrtDestinationSettings :: SrtDestinationSettings Source #

Create a value of SrtDestinationSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:stylePassthrough:SrtDestinationSettings', srtDestinationSettings_stylePassthrough - Set Style passthrough (StylePassthrough) to ENABLED to use the available style, color, and position information from your input captions. MediaConvert uses default settings for any missing style and position information in your input captions. Set Style passthrough to DISABLED, or leave blank, to ignore the style and position information from your input captions and use simplified output captions.

srtDestinationSettings_stylePassthrough :: Lens' SrtDestinationSettings (Maybe SrtStylePassthrough) Source #

Set Style passthrough (StylePassthrough) to ENABLED to use the available style, color, and position information from your input captions. MediaConvert uses default settings for any missing style and position information in your input captions. Set Style passthrough to DISABLED, or leave blank, to ignore the style and position information from your input captions and use simplified output captions.
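
A short sketch of SRT captions that keep the input styling, using the lens above. The SrtStylePassthrough_ENABLED pattern name is assumed from the package's enum convention:

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- Keep the style and position information from the input captions.
srtCaptions :: SrtDestinationSettings
srtCaptions =
  newSrtDestinationSettings
    & srtDestinationSettings_stylePassthrough ?~ SrtStylePassthrough_ENABLED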

StaticKeyProvider

data StaticKeyProvider Source #

Use these settings to set up encryption with a static key provider.

See: newStaticKeyProvider smart constructor.

Constructors

StaticKeyProvider' 

Fields

  • staticKeyValue :: Maybe Text

    Relates to DRM implementation. Use a 32-character hexadecimal string to specify Key Value (StaticKeyValue).

  • url :: Maybe Text

    Relates to DRM implementation. The location of the license server used for protecting content.

  • keyFormat :: Maybe Text

    Relates to DRM implementation. Sets the value of the KEYFORMAT attribute. Must be 'identity' or a reverse DNS string. May be omitted to indicate an implicit value of 'identity'.

  • keyFormatVersions :: Maybe Text

    Relates to DRM implementation. Either a single positive integer version value or a slash delimited list of version values (1/2/3).

Instances

Instances details
Eq StaticKeyProvider Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StaticKeyProvider

Read StaticKeyProvider Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StaticKeyProvider

Show StaticKeyProvider Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StaticKeyProvider

Generic StaticKeyProvider Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StaticKeyProvider

Associated Types

type Rep StaticKeyProvider :: Type -> Type #

NFData StaticKeyProvider Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StaticKeyProvider

Methods

rnf :: StaticKeyProvider -> () #

Hashable StaticKeyProvider Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StaticKeyProvider

ToJSON StaticKeyProvider Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StaticKeyProvider

FromJSON StaticKeyProvider Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StaticKeyProvider

type Rep StaticKeyProvider Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.StaticKeyProvider

type Rep StaticKeyProvider = D1 ('MetaData "StaticKeyProvider" "Amazonka.MediaConvert.Types.StaticKeyProvider" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "StaticKeyProvider'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "staticKeyValue") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "url") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "keyFormat") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "keyFormatVersions") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))))

newStaticKeyProvider :: StaticKeyProvider Source #

Create a value of StaticKeyProvider with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:staticKeyValue:StaticKeyProvider', staticKeyProvider_staticKeyValue - Relates to DRM implementation. Use a 32-character hexadecimal string to specify Key Value (StaticKeyValue).

$sel:url:StaticKeyProvider', staticKeyProvider_url - Relates to DRM implementation. The location of the license server used for protecting content.

$sel:keyFormat:StaticKeyProvider', staticKeyProvider_keyFormat - Relates to DRM implementation. Sets the value of the KEYFORMAT attribute. Must be 'identity' or a reverse DNS string. May be omitted to indicate an implicit value of 'identity'.

$sel:keyFormatVersions:StaticKeyProvider', staticKeyProvider_keyFormatVersions - Relates to DRM implementation. Either a single positive integer version value or a slash delimited list of version values (1/2/3).

staticKeyProvider_staticKeyValue :: Lens' StaticKeyProvider (Maybe Text) Source #

Relates to DRM implementation. Use a 32-character hexadecimal string to specify Key Value (StaticKeyValue).

staticKeyProvider_url :: Lens' StaticKeyProvider (Maybe Text) Source #

Relates to DRM implementation. The location of the license server used for protecting content.

staticKeyProvider_keyFormat :: Lens' StaticKeyProvider (Maybe Text) Source #

Relates to DRM implementation. Sets the value of the KEYFORMAT attribute. Must be 'identity' or a reverse DNS string. May be omitted to indicate an implicit value of 'identity'.

staticKeyProvider_keyFormatVersions :: Lens' StaticKeyProvider (Maybe Text) Source #

Relates to DRM implementation. Either a single positive integer version value or a slash delimited list of version values (1/2/3).
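
A minimal construction sketch, not part of the generated reference: it assumes the OverloadedStrings extension and the (&) and (?~) operators from Control.Lens (any lens-compatible library works), and uses placeholder key and URL values.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Static key with a placeholder 32-character hexadecimal value, a placeholder
-- license-server URL, and the implicit 'identity' key format.
exampleStaticKeyProvider :: StaticKeyProvider
exampleStaticKeyProvider =
  newStaticKeyProvider
    & staticKeyProvider_staticKeyValue ?~ "0123456789abcdef0123456789abcdef"
    & staticKeyProvider_url ?~ "https://license.example.com/keys"
    & staticKeyProvider_keyFormat ?~ "identity"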

TeletextDestinationSettings

data TeletextDestinationSettings Source #

Settings related to teletext captions. Set up teletext captions in the same output as your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/teletext-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to TELETEXT.

See: newTeletextDestinationSettings smart constructor.

Constructors

TeletextDestinationSettings' 

Fields

  • pageTypes :: Maybe [TeletextPageType]

    Specify the page types for this Teletext page. If you don't specify a value here, the service sets the page type to the default value Subtitle (PAGE_TYPE_SUBTITLE). If you pass through the entire set of Teletext data, don't use this field. When you pass through a set of Teletext pages, your output has the same page types as your input.

  • pageNumber :: Maybe Text

    Set pageNumber to the Teletext page number for the destination captions for this output. This value must be a three-digit hexadecimal string; strings ending in -FF are invalid. If you are passing through the entire set of Teletext data, do not use this field.

Instances

Instances details
Eq TeletextDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextDestinationSettings

Read TeletextDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextDestinationSettings

Show TeletextDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextDestinationSettings

Generic TeletextDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextDestinationSettings

Associated Types

type Rep TeletextDestinationSettings :: Type -> Type #

NFData TeletextDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextDestinationSettings

Hashable TeletextDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextDestinationSettings

ToJSON TeletextDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextDestinationSettings

FromJSON TeletextDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextDestinationSettings

type Rep TeletextDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextDestinationSettings

type Rep TeletextDestinationSettings = D1 ('MetaData "TeletextDestinationSettings" "Amazonka.MediaConvert.Types.TeletextDestinationSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "TeletextDestinationSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "pageTypes") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [TeletextPageType])) :*: S1 ('MetaSel ('Just "pageNumber") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))

newTeletextDestinationSettings :: TeletextDestinationSettings Source #

Create a value of TeletextDestinationSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:pageTypes:TeletextDestinationSettings', teletextDestinationSettings_pageTypes - Specify the page types for this Teletext page. If you don't specify a value here, the service sets the page type to the default value Subtitle (PAGE_TYPE_SUBTITLE). If you pass through the entire set of Teletext data, don't use this field. When you pass through a set of Teletext pages, your output has the same page types as your input.

$sel:pageNumber:TeletextDestinationSettings', teletextDestinationSettings_pageNumber - Set pageNumber to the Teletext page number for the destination captions for this output. This value must be a three-digit hexadecimal string; strings ending in -FF are invalid. If you are passing through the entire set of Teletext data, do not use this field.

teletextDestinationSettings_pageTypes :: Lens' TeletextDestinationSettings (Maybe [TeletextPageType]) Source #

Specify the page types for this Teletext page. If you don't specify a value here, the service sets the page type to the default value Subtitle (PAGE_TYPE_SUBTITLE). If you pass through the entire set of Teletext data, don't use this field. When you pass through a set of Teletext pages, your output has the same page types as your input.

teletextDestinationSettings_pageNumber :: Lens' TeletextDestinationSettings (Maybe Text) Source #

Set pageNumber to the Teletext page number for the destination captions for this output. This value must be a three-digit hexadecimal string; strings ending in -FF are invalid. If you are passing through the entire set of Teletext data, do not use this field.
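
A small sketch of setting up Teletext caption output on page 888, assuming the Control.Lens operators and OverloadedStrings; the TeletextPageType value is written with the library's usual TypeName_VALUE pattern-synonym naming, which is assumed here rather than documented on this page.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Teletext caption output on page 888 with the Subtitle page type.
-- TeletextPageType_PAGE_TYPE_SUBTITLE follows the assumed pattern-synonym naming.
exampleTeletextDestination :: TeletextDestinationSettings
exampleTeletextDestination =
  newTeletextDestinationSettings
    & teletextDestinationSettings_pageNumber ?~ "888"
    & teletextDestinationSettings_pageTypes ?~ [TeletextPageType_PAGE_TYPE_SUBTITLE]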

TeletextSourceSettings

data TeletextSourceSettings Source #

Settings specific to Teletext caption sources, including Page number.

See: newTeletextSourceSettings smart constructor.

Constructors

TeletextSourceSettings' 

Fields

  • pageNumber :: Maybe Text

    Use Page Number (PageNumber) to specify the three-digit hexadecimal page number that will be used for Teletext captions. Do not use this setting if you are passing through teletext from the input source to output.

Instances

Instances details
Eq TeletextSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextSourceSettings

Read TeletextSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextSourceSettings

Show TeletextSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextSourceSettings

Generic TeletextSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextSourceSettings

Associated Types

type Rep TeletextSourceSettings :: Type -> Type #

NFData TeletextSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextSourceSettings

Methods

rnf :: TeletextSourceSettings -> () #

Hashable TeletextSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextSourceSettings

ToJSON TeletextSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextSourceSettings

FromJSON TeletextSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextSourceSettings

type Rep TeletextSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TeletextSourceSettings

type Rep TeletextSourceSettings = D1 ('MetaData "TeletextSourceSettings" "Amazonka.MediaConvert.Types.TeletextSourceSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "TeletextSourceSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "pageNumber") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))

newTeletextSourceSettings :: TeletextSourceSettings Source #

Create a value of TeletextSourceSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:pageNumber:TeletextSourceSettings', teletextSourceSettings_pageNumber - Use Page Number (PageNumber) to specify the three-digit hexadecimal page number that will be used for Teletext captions. Do not use this setting if you are passing through teletext from the input source to output.

teletextSourceSettings_pageNumber :: Lens' TeletextSourceSettings (Maybe Text) Source #

Use Page Number (PageNumber) to specify the three-digit hexadecimal page number that will be used for Teletext captions. Do not use this setting if you are passing through teletext from the input source to output.
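
A minimal sketch, assuming the Control.Lens operators and OverloadedStrings, that selects Teletext page 888 of the input as the caption source.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Read captions from Teletext page 888 of the input.
exampleTeletextSource :: TeletextSourceSettings
exampleTeletextSource =
  newTeletextSourceSettings
    & teletextSourceSettings_pageNumber ?~ "888"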

TimecodeBurnin

data TimecodeBurnin Source #

Settings for burning the output timecode and specified prefix into the output.

See: newTimecodeBurnin smart constructor.

Constructors

TimecodeBurnin' 

Fields

  • prefix :: Maybe Text

    Use Prefix (Prefix) to place ASCII characters before any burned-in timecode. For example, a prefix of "EZ-" will result in the timecode "EZ-00:00:00:00". Provide either the characters themselves or the ASCII code equivalents. The supported range of characters is 0x20 through 0x7e. This includes letters, numbers, and all special characters represented on a standard English keyboard.

  • fontSize :: Maybe Natural

    Use Font Size (FontSize) to set the font size of any burned-in timecode. Valid values are 10, 16, 32, 48.

  • position :: Maybe TimecodeBurninPosition

    Use Position (Position) under Timecode burn-in (TimecodeBurnIn) to specify the location of the burned-in timecode on the output video.

Instances

Instances details
Eq TimecodeBurnin Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurnin

Read TimecodeBurnin Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurnin

Show TimecodeBurnin Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurnin

Generic TimecodeBurnin Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurnin

Associated Types

type Rep TimecodeBurnin :: Type -> Type #

NFData TimecodeBurnin Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurnin

Methods

rnf :: TimecodeBurnin -> () #

Hashable TimecodeBurnin Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurnin

ToJSON TimecodeBurnin Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurnin

FromJSON TimecodeBurnin Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurnin

type Rep TimecodeBurnin Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeBurnin

type Rep TimecodeBurnin = D1 ('MetaData "TimecodeBurnin" "Amazonka.MediaConvert.Types.TimecodeBurnin" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "TimecodeBurnin'" 'PrefixI 'True) (S1 ('MetaSel ('Just "prefix") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "fontSize") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "position") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe TimecodeBurninPosition)))))

newTimecodeBurnin :: TimecodeBurnin Source #

Create a value of TimecodeBurnin with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:prefix:TimecodeBurnin', timecodeBurnin_prefix - Use Prefix (Prefix) to place ASCII characters before any burned-in timecode. For example, a prefix of "EZ-" will result in the timecode "EZ-00:00:00:00". Provide either the characters themselves or the ASCII code equivalents. The supported range of characters is 0x20 through 0x7e. This includes letters, numbers, and all special characters represented on a standard English keyboard.

$sel:fontSize:TimecodeBurnin', timecodeBurnin_fontSize - Use Font Size (FontSize) to set the font size of any burned-in timecode. Valid values are 10, 16, 32, 48.

$sel:position:TimecodeBurnin', timecodeBurnin_position - Use Position (Position) under Timecode burn-in (TimecodeBurnIn) to specify the location of the burned-in timecode on the output video.

timecodeBurnin_prefix :: Lens' TimecodeBurnin (Maybe Text) Source #

Use Prefix (Prefix) to place ASCII characters before any burned-in timecode. For example, a prefix of "EZ-" will result in the timecode "EZ-00:00:00:00". Provide either the characters themselves or the ASCII code equivalents. The supported range of characters is 0x20 through 0x7e. This includes letters, numbers, and all special characters represented on a standard English keyboard.

timecodeBurnin_fontSize :: Lens' TimecodeBurnin (Maybe Natural) Source #

Use Font Size (FontSize) to set the font size of any burned-in timecode. Valid values are 10, 16, 32, 48.

timecodeBurnin_position :: Lens' TimecodeBurnin (Maybe TimecodeBurninPosition) Source #

Use Position (Position) under Timecode burn-in (TimecodeBurnIn) to specify the location of the burned-in timecode on the output video.
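
A small sketch, assuming the Control.Lens operators and OverloadedStrings; the TimecodeBurninPosition value uses the library's usual TypeName_VALUE pattern-synonym naming, which is an assumption here.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Burn a 32-point timecode with an "EZ-" prefix into the top centre of the frame.
-- TimecodeBurninPosition_TOP_CENTER follows the assumed pattern-synonym naming.
exampleTimecodeBurnin :: TimecodeBurnin
exampleTimecodeBurnin =
  newTimecodeBurnin
    & timecodeBurnin_prefix ?~ "EZ-"
    & timecodeBurnin_fontSize ?~ 32
    & timecodeBurnin_position ?~ TimecodeBurninPosition_TOP_CENTER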

TimecodeConfig

data TimecodeConfig Source #

These settings control how the service handles timecodes throughout the job. These settings don't affect input clipping.

See: newTimecodeConfig smart constructor.

Constructors

TimecodeConfig' 

Fields

  • start :: Maybe Text

    Only use when you set Source (TimecodeSource) to Specified start (SPECIFIEDSTART). Use Start timecode (Start) to specify the timecode for the initial frame. Use 24-hour format with frame number, (HH:MM:SS:FF) or (HH:MM:SS;FF).

  • timestampOffset :: Maybe Text

    Only applies to outputs that support program-date-time stamp. Use Timestamp offset (TimestampOffset) to overwrite the timecode date without affecting the time and frame number. Provide the new date as a string in the format "yyyy-mm-dd". To use Time stamp offset, you must also enable Insert program-date-time (InsertProgramDateTime) in the output settings. For example, if the date part of your timecodes is 2002-1-25 and you want to change it to one year later, set Timestamp offset (TimestampOffset) to 2003-1-25.

  • anchor :: Maybe Text

    If you use an editing platform that relies on an anchor timecode, use Anchor Timecode (Anchor) to specify a timecode that will match the input video frame to the output video frame. Use 24-hour format with frame number, (HH:MM:SS:FF) or (HH:MM:SS;FF). This setting ignores frame rate conversion. System behavior for Anchor Timecode varies depending on your setting for Source (TimecodeSource). * If Source (TimecodeSource) is set to Specified Start (SPECIFIEDSTART), the first input frame is the specified value in Start Timecode (Start). Anchor Timecode (Anchor) and Start Timecode (Start) are used to calculate the output timecode. * If Source (TimecodeSource) is set to Start at 0 (ZEROBASED), the first frame is 00:00:00:00. * If Source (TimecodeSource) is set to Embedded (EMBEDDED), the first frame is the timecode value on the first input frame of the input.

  • source :: Maybe TimecodeSource

    Use Source (TimecodeSource) to set how timecodes are handled within this job. To make sure that your video, audio, captions, and markers are synchronized and that time-based features, such as image inserter, work correctly, choose the Timecode source option that matches your assets. All timecodes are in a 24-hour format with frame number (HH:MM:SS:FF). * Embedded (EMBEDDED) - Use the timecode that is in the input video. If no embedded timecode is in the source, the service will use Start at 0 (ZEROBASED) instead. * Start at 0 (ZEROBASED) - Set the timecode of the initial frame to 00:00:00:00. * Specified Start (SPECIFIEDSTART) - Set the timecode of the initial frame to a value other than zero. You use Start timecode (Start) to provide this value.

Instances

Instances details
Eq TimecodeConfig Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeConfig

Read TimecodeConfig Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeConfig

Show TimecodeConfig Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeConfig

Generic TimecodeConfig Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeConfig

Associated Types

type Rep TimecodeConfig :: Type -> Type #

NFData TimecodeConfig Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeConfig

Methods

rnf :: TimecodeConfig -> () #

Hashable TimecodeConfig Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeConfig

ToJSON TimecodeConfig Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeConfig

FromJSON TimecodeConfig Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeConfig

type Rep TimecodeConfig Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimecodeConfig

type Rep TimecodeConfig = D1 ('MetaData "TimecodeConfig" "Amazonka.MediaConvert.Types.TimecodeConfig" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "TimecodeConfig'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "start") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "timestampOffset") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "anchor") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "source") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe TimecodeSource)))))

newTimecodeConfig :: TimecodeConfig Source #

Create a value of TimecodeConfig with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:start:TimecodeConfig', timecodeConfig_start - Only use when you set Source (TimecodeSource) to Specified start (SPECIFIEDSTART). Use Start timecode (Start) to specify the timecode for the initial frame. Use 24-hour format with frame number, (HH:MM:SS:FF) or (HH:MM:SS;FF).

$sel:timestampOffset:TimecodeConfig', timecodeConfig_timestampOffset - Only applies to outputs that support program-date-time stamp. Use Timestamp offset (TimestampOffset) to overwrite the timecode date without affecting the time and frame number. Provide the new date as a string in the format "yyyy-mm-dd". To use Time stamp offset, you must also enable Insert program-date-time (InsertProgramDateTime) in the output settings. For example, if the date part of your timecodes is 2002-1-25 and you want to change it to one year later, set Timestamp offset (TimestampOffset) to 2003-1-25.

$sel:anchor:TimecodeConfig', timecodeConfig_anchor - If you use an editing platform that relies on an anchor timecode, use Anchor Timecode (Anchor) to specify a timecode that will match the input video frame to the output video frame. Use 24-hour format with frame number, (HH:MM:SS:FF) or (HH:MM:SS;FF). This setting ignores frame rate conversion. System behavior for Anchor Timecode varies depending on your setting for Source (TimecodeSource). * If Source (TimecodeSource) is set to Specified Start (SPECIFIEDSTART), the first input frame is the specified value in Start Timecode (Start). Anchor Timecode (Anchor) and Start Timecode (Start) are used to calculate the output timecode. * If Source (TimecodeSource) is set to Start at 0 (ZEROBASED), the first frame is 00:00:00:00. * If Source (TimecodeSource) is set to Embedded (EMBEDDED), the first frame is the timecode value on the first input frame of the input.

$sel:source:TimecodeConfig', timecodeConfig_source - Use Source (TimecodeSource) to set how timecodes are handled within this job. To make sure that your video, audio, captions, and markers are synchronized and that time-based features, such as image inserter, work correctly, choose the Timecode source option that matches your assets. All timecodes are in a 24-hour format with frame number (HH:MM:SS:FF). * Embedded (EMBEDDED) - Use the timecode that is in the input video. If no embedded timecode is in the source, the service will use Start at 0 (ZEROBASED) instead. * Start at 0 (ZEROBASED) - Set the timecode of the initial frame to 00:00:00:00. * Specified Start (SPECIFIEDSTART) - Set the timecode of the initial frame to a value other than zero. You use Start timecode (Start) to provide this value.

timecodeConfig_start :: Lens' TimecodeConfig (Maybe Text) Source #

Only use when you set Source (TimecodeSource) to Specified start (SPECIFIEDSTART). Use Start timecode (Start) to specify the timecode for the initial frame. Use 24-hour format with frame number, (HH:MM:SS:FF) or (HH:MM:SS;FF).

timecodeConfig_timestampOffset :: Lens' TimecodeConfig (Maybe Text) Source #

Only applies to outputs that support program-date-time stamp. Use Timestamp offset (TimestampOffset) to overwrite the timecode date without affecting the time and frame number. Provide the new date as a string in the format "yyyy-mm-dd". To use Time stamp offset, you must also enable Insert program-date-time (InsertProgramDateTime) in the output settings. For example, if the date part of your timecodes is 2002-1-25 and you want to change it to one year later, set Timestamp offset (TimestampOffset) to 2003-1-25.

timecodeConfig_anchor :: Lens' TimecodeConfig (Maybe Text) Source #

If you use an editing platform that relies on an anchor timecode, use Anchor Timecode (Anchor) to specify a timecode that will match the input video frame to the output video frame. Use 24-hour format with frame number, (HH:MM:SS:FF) or (HH:MM:SS;FF). This setting ignores frame rate conversion. System behavior for Anchor Timecode varies depending on your setting for Source (TimecodeSource). * If Source (TimecodeSource) is set to Specified Start (SPECIFIEDSTART), the first input frame is the specified value in Start Timecode (Start). Anchor Timecode (Anchor) and Start Timecode (Start) are used to calculate the output timecode. * If Source (TimecodeSource) is set to Start at 0 (ZEROBASED), the first frame is 00:00:00:00. * If Source (TimecodeSource) is set to Embedded (EMBEDDED), the first frame is the timecode value on the first input frame of the input.

timecodeConfig_source :: Lens' TimecodeConfig (Maybe TimecodeSource) Source #

Use Source (TimecodeSource) to set how timecodes are handled within this job. To make sure that your video, audio, captions, and markers are synchronized and that time-based features, such as image inserter, work correctly, choose the Timecode source option that matches your assets. All timecodes are in a 24-hour format with frame number (HH:MM:SS:FF). * Embedded (EMBEDDED) - Use the timecode that is in the input video. If no embedded timecode is in the source, the service will use Start at 0 (ZEROBASED) instead. * Start at 0 (ZEROBASED) - Set the timecode of the initial frame to 00:00:00:00. * Specified Start (SPECIFIEDSTART) - Set the timecode of the initial frame to a value other than zero. You use Start timecode (Start) to provide this value.
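
A minimal sketch that starts output timecodes at 01:00:00:00 instead of following the input, assuming the Control.Lens operators and OverloadedStrings; the TimecodeSource value is written with the assumed TypeName_VALUE pattern-synonym naming.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Start output timecodes at 01:00:00:00 instead of following the input.
-- TimecodeSource_SPECIFIEDSTART follows the assumed pattern-synonym naming.
exampleTimecodeConfig :: TimecodeConfig
exampleTimecodeConfig =
  newTimecodeConfig
    & timecodeConfig_source ?~ TimecodeSource_SPECIFIEDSTART
    & timecodeConfig_start ?~ "01:00:00:00"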

TimedMetadataInsertion

data TimedMetadataInsertion Source #

Enable Timed metadata insertion (TimedMetadataInsertion) to include ID3 tags in any HLS outputs. To include timed metadata, you must enable it here, enable it in each output container, and specify tags and timecodes in ID3 insertion (Id3Insertion) objects.

See: newTimedMetadataInsertion smart constructor.

Constructors

TimedMetadataInsertion' 

Fields

  • id3Insertions :: Maybe [Id3Insertion]

    Id3Insertions contains the array of Id3Insertion instances.

Instances

Instances details
Eq TimedMetadataInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadataInsertion

Read TimedMetadataInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadataInsertion

Show TimedMetadataInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadataInsertion

Generic TimedMetadataInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadataInsertion

Associated Types

type Rep TimedMetadataInsertion :: Type -> Type #

NFData TimedMetadataInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadataInsertion

Methods

rnf :: TimedMetadataInsertion -> () #

Hashable TimedMetadataInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadataInsertion

ToJSON TimedMetadataInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadataInsertion

FromJSON TimedMetadataInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadataInsertion

type Rep TimedMetadataInsertion Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TimedMetadataInsertion

type Rep TimedMetadataInsertion = D1 ('MetaData "TimedMetadataInsertion" "Amazonka.MediaConvert.Types.TimedMetadataInsertion" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "TimedMetadataInsertion'" 'PrefixI 'True) (S1 ('MetaSel ('Just "id3Insertions") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [Id3Insertion]))))

newTimedMetadataInsertion :: TimedMetadataInsertion Source #

Create a value of TimedMetadataInsertion with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:id3Insertions:TimedMetadataInsertion', timedMetadataInsertion_id3Insertions - Id3Insertions contains the array of Id3Insertion instances.

timedMetadataInsertion_id3Insertions :: Lens' TimedMetadataInsertion (Maybe [Id3Insertion]) Source #

Id3Insertions contains the array of Id3Insertion instances.
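
A minimal sketch, assuming the Control.Lens operators; the empty list is a placeholder for Id3Insertion values built with the newId3Insertion smart constructor documented elsewhere in this module.

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Enable timed metadata insertion; replace [] with Id3Insertion values that
-- carry the actual ID3 tags and timecodes.
exampleTimedMetadata :: TimedMetadataInsertion
exampleTimedMetadata =
  newTimedMetadataInsertion
    & timedMetadataInsertion_id3Insertions ?~ []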

Timing

data Timing Source #

Information about when jobs are submitted, started, and finished is specified in Unix epoch format in seconds.

See: newTiming smart constructor.

Constructors

Timing' 

Fields

  • startTime :: Maybe POSIX

    The time, in Unix epoch format, that transcoding for the job began.

  • finishTime :: Maybe POSIX

    The time, in Unix epoch format, that the transcoding job finished.

  • submitTime :: Maybe POSIX

    The time, in Unix epoch format, that you submitted the job.

Instances

Instances details
Eq Timing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Timing

Methods

(==) :: Timing -> Timing -> Bool #

(/=) :: Timing -> Timing -> Bool #

Read Timing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Timing

Show Timing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Timing

Generic Timing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Timing

Associated Types

type Rep Timing :: Type -> Type #

Methods

from :: Timing -> Rep Timing x #

to :: Rep Timing x -> Timing #

NFData Timing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Timing

Methods

rnf :: Timing -> () #

Hashable Timing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Timing

Methods

hashWithSalt :: Int -> Timing -> Int #

hash :: Timing -> Int #

FromJSON Timing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Timing

type Rep Timing Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Timing

type Rep Timing = D1 ('MetaData "Timing" "Amazonka.MediaConvert.Types.Timing" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "Timing'" 'PrefixI 'True) (S1 ('MetaSel ('Just "startTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: (S1 ('MetaSel ('Just "finishTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: S1 ('MetaSel ('Just "submitTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)))))

newTiming :: Timing Source #

Create a value of Timing with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:startTime:Timing', timing_startTime - The time, in Unix epoch format, that transcoding for the job began.

$sel:finishTime:Timing', timing_finishTime - The time, in Unix epoch format, that the transcoding job finished.

$sel:submitTime:Timing', timing_submitTime - The time, in Unix epoch format, that you submitted the job.

timing_startTime :: Lens' Timing (Maybe UTCTime) Source #

The time, in Unix epoch format, that transcoding for the job began.

timing_finishTime :: Lens' Timing (Maybe UTCTime) Source #

The time, in Unix epoch format, that the transcoding job finished.

timing_submitTime :: Lens' Timing (Maybe UTCTime) Source #

The time, in Unix epoch format, that you submitted the job.
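
Timing has a FromJSON instance but no ToJSON instance, so values of this type normally arrive in job responses rather than being constructed. A small sketch of reading them, assuming (^.) from Control.Lens:

import Amazonka.MediaConvert.Types
import Control.Lens ((^.))
import Data.Time (NominalDiffTime, diffUTCTime)

-- Transcode duration of a job, when both timestamps are present.
transcodeDuration :: Timing -> Maybe NominalDiffTime
transcodeDuration t =
  diffUTCTime <$> (t ^. timing_finishTime) <*> (t ^. timing_startTime)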

TrackSourceSettings

data TrackSourceSettings Source #

Settings specific to caption sources that are specified by track number. Currently, this is only IMSC captions in an IMF package. If your caption source is IMSC 1.1 in a separate xml file, use FileSourceSettings instead of TrackSourceSettings.

See: newTrackSourceSettings smart constructor.

Constructors

TrackSourceSettings' 

Fields

  • trackNumber :: Maybe Natural

    Use this setting to select a single captions track from a source. Track numbers correspond to the order in the captions source file. For IMF sources, track numbering is based on the order that the captions appear in the CPL. For example, use 1 to select the captions asset that is listed first in the CPL. To include more than one captions track in your job outputs, create multiple input captions selectors. Specify one track per selector.

Instances

Instances details
Eq TrackSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TrackSourceSettings

Read TrackSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TrackSourceSettings

Show TrackSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TrackSourceSettings

Generic TrackSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TrackSourceSettings

Associated Types

type Rep TrackSourceSettings :: Type -> Type #

NFData TrackSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TrackSourceSettings

Methods

rnf :: TrackSourceSettings -> () #

Hashable TrackSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TrackSourceSettings

ToJSON TrackSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TrackSourceSettings

FromJSON TrackSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TrackSourceSettings

type Rep TrackSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TrackSourceSettings

type Rep TrackSourceSettings = D1 ('MetaData "TrackSourceSettings" "Amazonka.MediaConvert.Types.TrackSourceSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "TrackSourceSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "trackNumber") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))

newTrackSourceSettings :: TrackSourceSettings Source #

Create a value of TrackSourceSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:trackNumber:TrackSourceSettings', trackSourceSettings_trackNumber - Use this setting to select a single captions track from a source. Track numbers correspond to the order in the captions source file. For IMF sources, track numbering is based on the order that the captions appear in the CPL. For example, use 1 to select the captions asset that is listed first in the CPL. To include more than one captions track in your job outputs, create multiple input captions selectors. Specify one track per selector.

trackSourceSettings_trackNumber :: Lens' TrackSourceSettings (Maybe Natural) Source #

Use this setting to select a single captions track from a source. Track numbers correspond to the order in the captions source file. For IMF sources, track numbering is based on the order that the captions appear in the CPL. For example, use 1 to select the captions asset that is listed first in the CPL. To include more than one captions track in your job outputs, create multiple input captions selectors. Specify one track per selector.
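
A minimal sketch, assuming the Control.Lens operators, that selects the captions asset listed first in the IMF CPL.

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Select the captions asset that is listed first in the CPL.
exampleTrackSource :: TrackSourceSettings
exampleTrackSource =
  newTrackSourceSettings
    & trackSourceSettings_trackNumber ?~ 1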

TtmlDestinationSettings

data TtmlDestinationSettings Source #

Settings related to TTML captions. TTML is a sidecar format that holds captions in a file that is separate from the video container. Set up sidecar captions in the same output group, but different output from your video. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/ttml-and-webvtt-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to TTML.

See: newTtmlDestinationSettings smart constructor.

Constructors

TtmlDestinationSettings' 

Fields

  • stylePassthrough :: Maybe TtmlStylePassthrough

    Pass through style and position information from a TTML-like input source (TTML, IMSC, SMPTE-TT) to the TTML output.

Instances

Instances details
Eq TtmlDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlDestinationSettings

Read TtmlDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlDestinationSettings

Show TtmlDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlDestinationSettings

Generic TtmlDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlDestinationSettings

Associated Types

type Rep TtmlDestinationSettings :: Type -> Type #

NFData TtmlDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlDestinationSettings

Methods

rnf :: TtmlDestinationSettings -> () #

Hashable TtmlDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlDestinationSettings

ToJSON TtmlDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlDestinationSettings

FromJSON TtmlDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlDestinationSettings

type Rep TtmlDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.TtmlDestinationSettings

type Rep TtmlDestinationSettings = D1 ('MetaData "TtmlDestinationSettings" "Amazonka.MediaConvert.Types.TtmlDestinationSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "TtmlDestinationSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "stylePassthrough") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe TtmlStylePassthrough))))

newTtmlDestinationSettings :: TtmlDestinationSettings Source #

Create a value of TtmlDestinationSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:stylePassthrough:TtmlDestinationSettings', ttmlDestinationSettings_stylePassthrough - Pass through style and position information from a TTML-like input source (TTML, IMSC, SMPTE-TT) to the TTML output.

ttmlDestinationSettings_stylePassthrough :: Lens' TtmlDestinationSettings (Maybe TtmlStylePassthrough) Source #

Pass through style and position information from a TTML-like input source (TTML, IMSC, SMPTE-TT) to the TTML output.
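
A minimal sketch, assuming the Control.Lens operators; the TtmlStylePassthrough value uses the library's usual TypeName_VALUE pattern-synonym naming, which is an assumption here.

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Keep style and position information from the TTML/IMSC/SMPTE-TT source.
-- TtmlStylePassthrough_ENABLED follows the assumed pattern-synonym naming.
exampleTtmlDestination :: TtmlDestinationSettings
exampleTtmlDestination =
  newTtmlDestinationSettings
    & ttmlDestinationSettings_stylePassthrough ?~ TtmlStylePassthrough_ENABLED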

Vc3Settings

data Vc3Settings Source #

Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value VC3.

See: newVc3Settings smart constructor.

Constructors

Vc3Settings' 

Fields

  • slowPal :: Maybe Vc3SlowPal

    Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output by relabeling the video frames and resampling your audio. Note that enabling this setting will slightly reduce the duration of your video. Related settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.

  • telecine :: Maybe Vc3Telecine

    When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard telecine (HARD) to create a smoother picture. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 without doing anything with the field polarity to create a smoother picture.

  • interlaceMode :: Maybe Vc3InterlaceMode

    Optional. Choose the scan line type for this output. If you don't specify a value, MediaConvert will create a progressive output.

  • scanTypeConversionMode :: Maybe Vc3ScanTypeConversionMode

    Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).

  • framerateDenominator :: Maybe Natural

    When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

  • vc3Class :: Maybe Vc3Class

    Specify the VC3 class to choose the quality characteristics for this output. VC3 class, together with the settings Framerate (framerateNumerator and framerateDenominator) and Resolution (height and width), determines your output bitrate. For example, say that your video resolution is 1920x1080 and your framerate is 29.97. Then Class 145 (CLASS_145) gives you an output with a bitrate of approximately 145 Mbps and Class 220 (CLASS_220) gives you an output with a bitrate of approximately 220 Mbps. VC3 class also specifies the color bit depth of your output.

  • framerateConversionAlgorithm :: Maybe Vc3FramerateConversionAlgorithm

    Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

  • framerateControl :: Maybe Vc3FramerateControl

    If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

  • framerateNumerator :: Maybe Natural

    When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

Instances

Instances details
Eq Vc3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Settings

Read Vc3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Settings

Show Vc3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Settings

Generic Vc3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Settings

Associated Types

type Rep Vc3Settings :: Type -> Type #

NFData Vc3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Settings

Methods

rnf :: Vc3Settings -> () #

Hashable Vc3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Settings

ToJSON Vc3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Settings

FromJSON Vc3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Settings

type Rep Vc3Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vc3Settings

newVc3Settings :: Vc3Settings Source #

Create a value of Vc3Settings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:slowPal:Vc3Settings', vc3Settings_slowPal - Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output by relabeling the video frames and resampling your audio. Note that enabling this setting will slightly reduce the duration of your video. Related settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.

$sel:telecine:Vc3Settings', vc3Settings_telecine - When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard telecine (HARD) to create a smoother picture. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 without doing anything with the field polarity to create a smoother picture.

$sel:interlaceMode:Vc3Settings', vc3Settings_interlaceMode - Optional. Choose the scan line type for this output. If you don't specify a value, MediaConvert will create a progressive output.

$sel:scanTypeConversionMode:Vc3Settings', vc3Settings_scanTypeConversionMode - Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).

$sel:framerateDenominator:Vc3Settings', vc3Settings_framerateDenominator - When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

$sel:vc3Class:Vc3Settings', vc3Settings_vc3Class - Specify the VC3 class to choose the quality characteristics for this output. VC3 class, together with the settings Framerate (framerateNumerator and framerateDenominator) and Resolution (height and width), determines your output bitrate. For example, say that your video resolution is 1920x1080 and your framerate is 29.97. Then Class 145 (CLASS_145) gives you an output with a bitrate of approximately 145 Mbps and Class 220 (CLASS_220) gives you an output with a bitrate of approximately 220 Mbps. VC3 class also specifies the color bit depth of your output.

$sel:framerateConversionAlgorithm:Vc3Settings', vc3Settings_framerateConversionAlgorithm - Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

$sel:framerateControl:Vc3Settings', vc3Settings_framerateControl - If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

$sel:framerateNumerator:Vc3Settings', vc3Settings_framerateNumerator - When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

vc3Settings_slowPal :: Lens' Vc3Settings (Maybe Vc3SlowPal) Source #

Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output by relabeling the video frames and resampling your audio. Note that enabling this setting will slightly reduce the duration of your video. Related settings: You must also set Framerate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.

vc3Settings_telecine :: Lens' Vc3Settings (Maybe Vc3Telecine) Source #

When you do frame rate conversion from 23.976 frames per second (fps) to 29.97 fps, and your output scan type is interlaced, you can optionally enable hard telecine (HARD) to create a smoother picture. When you keep the default value, None (NONE), MediaConvert does a standard frame rate conversion to 29.97 without doing anything with the field polarity to create a smoother picture.

vc3Settings_interlaceMode :: Lens' Vc3Settings (Maybe Vc3InterlaceMode) Source #

Optional. Choose the scan line type for this output. If you don't specify a value, MediaConvert will create a progressive output.

vc3Settings_scanTypeConversionMode :: Lens' Vc3Settings (Maybe Vc3ScanTypeConversionMode) Source #

Use this setting for interlaced outputs, when your output frame rate is half of your input frame rate. In this situation, choose Optimized interlacing (INTERLACED_OPTIMIZE) to create a better quality interlaced output. In this case, each progressive frame from the input corresponds to an interlaced field in the output. Keep the default value, Basic interlacing (INTERLACED), for all other output frame rates. With basic interlacing, MediaConvert performs any frame rate conversion first and then interlaces the frames. When you choose Optimized interlacing and you set your output frame rate to a value that isn't suitable for optimized interlacing, MediaConvert automatically falls back to basic interlacing. Required settings: To use optimized interlacing, you must set Telecine (telecine) to None (NONE) or Soft (SOFT). You can't use optimized interlacing for hard telecine outputs. You must also set Interlace mode (interlaceMode) to a value other than Progressive (PROGRESSIVE).

vc3Settings_framerateDenominator :: Lens' Vc3Settings (Maybe Natural) Source #

When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

vc3Settings_vc3Class :: Lens' Vc3Settings (Maybe Vc3Class) Source #

Specify the VC3 class to choose the quality characteristics for this output. VC3 class, together with the settings Framerate (framerateNumerator and framerateDenominator) and Resolution (height and width), determines your output bitrate. For example, say that your video resolution is 1920x1080 and your framerate is 29.97. Then Class 145 (CLASS_145) gives you an output with a bitrate of approximately 145 Mbps and Class 220 (CLASS_220) gives you an output with a bitrate of approximately 220 Mbps. VC3 class also specifies the color bit depth of your output.

vc3Settings_framerateConversionAlgorithm :: Lens' Vc3Settings (Maybe Vc3FramerateConversionAlgorithm) Source #

Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

vc3Settings_framerateControl :: Lens' Vc3Settings (Maybe Vc3FramerateControl) Source #

If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

vc3Settings_framerateNumerator :: Lens' Vc3Settings (Maybe Natural) Source #

When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.
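
A small sketch that pins the output frame rate to 23.976 fps, expressed as the fraction 24000/1001, assuming the Control.Lens operators; the Vc3FramerateControl value is written with the assumed TypeName_VALUE pattern-synonym naming.

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Fix the output frame rate at 23.976 fps, expressed as the fraction 24000/1001.
-- Vc3FramerateControl_SPECIFIED follows the assumed pattern-synonym naming.
exampleVc3 :: Vc3Settings
exampleVc3 =
  newVc3Settings
    & vc3Settings_framerateControl ?~ Vc3FramerateControl_SPECIFIED
    & vc3Settings_framerateNumerator ?~ 24000
    & vc3Settings_framerateDenominator ?~ 1001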

VideoCodecSettings

data VideoCodecSettings Source #

Video codec settings, (CodecSettings) under (VideoDescription), contains the group of settings related to video encoding. The settings in this group vary depending on the value that you choose for Video codec (Codec). For each codec enum that you choose, define the corresponding settings object. The following list shows each codec enum with its settings object. * AV1, Av1Settings * AVC_INTRA, AvcIntraSettings * FRAME_CAPTURE, FrameCaptureSettings * H_264, H264Settings * H_265, H265Settings * MPEG2, Mpeg2Settings * PRORES, ProresSettings * VC3, Vc3Settings * VP8, Vp8Settings * VP9, Vp9Settings * XAVC, XavcSettings

See: newVideoCodecSettings smart constructor.

Constructors

VideoCodecSettings' 

Fields

  • frameCaptureSettings :: Maybe FrameCaptureSettings

    Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value FRAME_CAPTURE.

  • av1Settings :: Maybe Av1Settings

    Required when you set Codec, under VideoDescription>CodecSettings to the value AV1.

  • codec :: Maybe VideoCodec

    Specifies the video codec. This must be equal to one of the enum values defined by the object VideoCodec.

  • xavcSettings :: Maybe XavcSettings

    Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value XAVC.

  • h265Settings :: Maybe H265Settings

    Settings for H265 codec

  • proresSettings :: Maybe ProresSettings

    Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value PRORES.

  • vp9Settings :: Maybe Vp9Settings

    Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value VP9.

  • h264Settings :: Maybe H264Settings

    Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value H_264.

  • mpeg2Settings :: Maybe Mpeg2Settings

    Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value MPEG2.

  • vp8Settings :: Maybe Vp8Settings

    Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value VP8.

  • vc3Settings :: Maybe Vc3Settings

    Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value VC3.

  • avcIntraSettings :: Maybe AvcIntraSettings

    Required when you choose AVC-Intra for your output video codec. For more information about the AVC-Intra settings, see the relevant specification. For detailed information about SD and HD in AVC-Intra, see https://ieeexplore.ieee.org/document/7290936. For information about 4K/2K in AVC-Intra, see https://pro-av.panasonic.net/en/avc-ultra/AVC-ULTRAoverview.pdf.

Instances

Instances details
Eq VideoCodecSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodecSettings

Read VideoCodecSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodecSettings

Show VideoCodecSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodecSettings

Generic VideoCodecSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodecSettings

Associated Types

type Rep VideoCodecSettings :: Type -> Type #

NFData VideoCodecSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodecSettings

Methods

rnf :: VideoCodecSettings -> () #

Hashable VideoCodecSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodecSettings

ToJSON VideoCodecSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodecSettings

FromJSON VideoCodecSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodecSettings

type Rep VideoCodecSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoCodecSettings

type Rep VideoCodecSettings = D1 ('MetaData "VideoCodecSettings" "Amazonka.MediaConvert.Types.VideoCodecSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "VideoCodecSettings'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "frameCaptureSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe FrameCaptureSettings)) :*: (S1 ('MetaSel ('Just "av1Settings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Av1Settings)) :*: S1 ('MetaSel ('Just "codec") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe VideoCodec)))) :*: (S1 ('MetaSel ('Just "xavcSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe XavcSettings)) :*: (S1 ('MetaSel ('Just "h265Settings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H265Settings)) :*: S1 ('MetaSel ('Just "proresSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ProresSettings))))) :*: ((S1 ('MetaSel ('Just "vp9Settings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Vp9Settings)) :*: (S1 ('MetaSel ('Just "h264Settings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe H264Settings)) :*: S1 ('MetaSel ('Just "mpeg2Settings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Mpeg2Settings)))) :*: (S1 ('MetaSel ('Just "vp8Settings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Vp8Settings)) :*: (S1 ('MetaSel ('Just "vc3Settings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Vc3Settings)) :*: S1 ('MetaSel ('Just "avcIntraSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AvcIntraSettings)))))))

newVideoCodecSettings :: VideoCodecSettings Source #

Create a value of VideoCodecSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:frameCaptureSettings:VideoCodecSettings', videoCodecSettings_frameCaptureSettings - Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value FRAME_CAPTURE.

$sel:av1Settings:VideoCodecSettings', videoCodecSettings_av1Settings - Required when you set Codec, under VideoDescription>CodecSettings to the value AV1.

$sel:codec:VideoCodecSettings', videoCodecSettings_codec - Specifies the video codec. This must be equal to one of the enum values defined by the object VideoCodec.

$sel:xavcSettings:VideoCodecSettings', videoCodecSettings_xavcSettings - Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value XAVC.

$sel:h265Settings:VideoCodecSettings', videoCodecSettings_h265Settings - Settings for the H.265 codec.

$sel:proresSettings:VideoCodecSettings', videoCodecSettings_proresSettings - Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value PRORES.

$sel:vp9Settings:VideoCodecSettings', videoCodecSettings_vp9Settings - Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value VP9.

$sel:h264Settings:VideoCodecSettings', videoCodecSettings_h264Settings - Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value H_264.

$sel:mpeg2Settings:VideoCodecSettings', videoCodecSettings_mpeg2Settings - Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value MPEG2.

$sel:vp8Settings:VideoCodecSettings', videoCodecSettings_vp8Settings - Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value VP8.

$sel:vc3Settings:VideoCodecSettings', videoCodecSettings_vc3Settings - Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value VC3.

$sel:avcIntraSettings:VideoCodecSettings', videoCodecSettings_avcIntraSettings - Required when you choose AVC-Intra for your output video codec. For more information about the AVC-Intra settings, see the relevant specification. For detailed information about SD and HD in AVC-Intra, see https://ieeexplore.ieee.org/document/7290936. For information about 4K/2K in AVC-Intra, see https://pro-av.panasonic.net/en/avc-ultra/AVC-ULTRAoverview.pdf.
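
As a quick illustration of the codec enum / settings object pairing described above, here is a minimal sketch that selects H.264 and attaches its settings object using the lenses from this module, together with the (&) and (?~) operators from Control.Lens. The enum pattern VideoCodec_H_264 and the newH264Settings smart constructor are assumptions based on this package's naming conventions; substitute the names documented in their own sections.

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Sketch only: pair the H_264 codec enum with its H264Settings object.
-- VideoCodec_H_264 and newH264Settings are assumed names, not defined in
-- this section.
h264CodecSettings :: VideoCodecSettings
h264CodecSettings =
  newVideoCodecSettings
    & videoCodecSettings_codec ?~ VideoCodec_H_264
    & videoCodecSettings_h264Settings ?~ newH264Settings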

videoCodecSettings_frameCaptureSettings :: Lens' VideoCodecSettings (Maybe FrameCaptureSettings) Source #

Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value FRAME_CAPTURE.

videoCodecSettings_av1Settings :: Lens' VideoCodecSettings (Maybe Av1Settings) Source #

Required when you set Codec, under VideoDescription>CodecSettings to the value AV1.

videoCodecSettings_codec :: Lens' VideoCodecSettings (Maybe VideoCodec) Source #

Specifies the video codec. This must be equal to one of the enum values defined by the object VideoCodec.

videoCodecSettings_xavcSettings :: Lens' VideoCodecSettings (Maybe XavcSettings) Source #

Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value XAVC.

videoCodecSettings_h265Settings :: Lens' VideoCodecSettings (Maybe H265Settings) Source #

Settings for the H.265 codec.

videoCodecSettings_proresSettings :: Lens' VideoCodecSettings (Maybe ProresSettings) Source #

Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value PRORES.

videoCodecSettings_vp9Settings :: Lens' VideoCodecSettings (Maybe Vp9Settings) Source #

Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value VP9.

videoCodecSettings_h264Settings :: Lens' VideoCodecSettings (Maybe H264Settings) Source #

Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value H_264.

videoCodecSettings_mpeg2Settings :: Lens' VideoCodecSettings (Maybe Mpeg2Settings) Source #

Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value MPEG2.

videoCodecSettings_vp8Settings :: Lens' VideoCodecSettings (Maybe Vp8Settings) Source #

Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value VP8.

videoCodecSettings_vc3Settings :: Lens' VideoCodecSettings (Maybe Vc3Settings) Source #

Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value VC3.

videoCodecSettings_avcIntraSettings :: Lens' VideoCodecSettings (Maybe AvcIntraSettings) Source #

Required when you choose AVC-Intra for your output video codec. For more information about the AVC-Intra settings, see the relevant specification. For detailed information about SD and HD in AVC-Intra, see https://ieeexplore.ieee.org/document/7290936. For information about 4K/2K in AVC-Intra, see https://pro-av.panasonic.net/en/avc-ultra/AVC-ULTRAoverview.pdf.

VideoDescription

data VideoDescription Source #

Settings related to video encoding of your output. The specific video settings depend on the video codec that you choose. When you work directly in your JSON job specification, include one instance of Video description (VideoDescription) per output.

See: newVideoDescription smart constructor.

Constructors

VideoDescription' 

Fields

  • timecodeInsertion :: Maybe VideoTimecodeInsertion

    Applies only to H.264, H.265, MPEG2, and ProRes outputs. Only enable Timecode insertion when the input frame rate is identical to the output frame rate. To include timecodes in this output, set Timecode insertion (VideoTimecodeInsertion) to PIC_TIMING_SEI. To leave them out, set it to DISABLED. Default is DISABLED. When the service inserts timecodes in an output, by default, it uses any embedded timecodes from the input. If none are present, the service will set the timecode for the first output frame to zero. To change this default behavior, adjust the settings under Timecode configuration (TimecodeConfig). In the console, these settings are located under Job > Job settings > Timecode configuration. Note - Timecode source under input settings (InputTimecodeSource) does not affect the timecodes that are inserted in the output. Source under Job settings > Timecode configuration (TimecodeSource) does.

  • height :: Maybe Natural

    Use the Height (Height) setting to define the video resolution height for this output. Specify in pixels. If you don't provide a value here, the service will use the input height.

  • afdSignaling :: Maybe AfdSignaling

    This setting only applies to H.264, H.265, and MPEG2 outputs. Use Insert AFD signaling (AfdSignaling) to specify whether the service includes AFD values in the output video data and what those values are. * Choose None to remove all AFD values from this output. * Choose Fixed to ignore input AFD values and instead encode the value specified in the job. * Choose Auto to calculate output AFD values based on the input AFD scaler data.

  • sharpness :: Maybe Natural

    Use the Sharpness (Sharpness) setting to specify the strength of anti-aliasing. This setting changes the width of the anti-alias filter kernel used for scaling. Sharpness only applies if your output resolution is different from your input resolution. 0 is the softest setting, 100 is the sharpest, and 50 is recommended for most content.

  • crop :: Maybe Rectangle

    Use Cropping selection (crop) to specify the video area that the service will include in the output video frame.

  • width :: Maybe Natural

    Use Width (Width) to define the video resolution width, in pixels, for this output. If you don't provide a value here, the service will use the input width.

  • scalingBehavior :: Maybe ScalingBehavior

    Specify how the service handles outputs that have a different aspect ratio from the input aspect ratio. Choose Stretch to output (STRETCH_TO_OUTPUT) to have the service stretch your video image to fit. Keep the setting Default (DEFAULT) to have the service letterbox your video instead. This setting overrides any value that you specify for the setting Selection placement (position) in this output.

  • respondToAfd :: Maybe RespondToAfd

    Use Respond to AFD (RespondToAfd) to specify how the service changes the video itself in response to AFD values in the input. * Choose Respond to clip the input video frame according to the AFD value, input display aspect ratio, and output display aspect ratio. * Choose Passthrough to include the input AFD values. Do not choose this when AfdSignaling is set to (NONE). A preferred implementation of this workflow is to set RespondToAfd to (NONE) and set AfdSignaling to (AUTO). * Choose None to remove all input AFD values from this output.

  • dropFrameTimecode :: Maybe DropFrameTimecode

    Applies only to 29.97 fps outputs. When this feature is enabled, the service will use drop-frame timecode on outputs. If it is not possible to use drop-frame timecode, the system will fall back to non-drop-frame. This setting is enabled by default when Timecode insertion (TimecodeInsertion) is enabled.

  • antiAlias :: Maybe AntiAlias

    The anti-alias filter is automatically applied to all outputs. The service no longer accepts the value DISABLED for AntiAlias. If you specify that in your job, the service will ignore the setting.

  • fixedAfd :: Maybe Natural

    Applies only if you set AFD Signaling (AfdSignaling) to Fixed (FIXED). Use Fixed (FixedAfd) to specify a four-bit AFD value that the service will write on all frames of this video output.

  • colorMetadata :: Maybe ColorMetadata

    Choose Insert (INSERT) for this setting to include color metadata in this output. Choose Ignore (IGNORE) to exclude color metadata from this output. If you don't specify a value, the service sets this to Insert by default.

  • codecSettings :: Maybe VideoCodecSettings

    Video codec settings, (CodecSettings) under (VideoDescription), contains the group of settings related to video encoding. The settings in this group vary depending on the value that you choose for Video codec (Codec). For each codec enum that you choose, define the corresponding settings object. The following lists the codec enum, settings object pairs. * AV1, Av1Settings * AVC_INTRA, AvcIntraSettings * FRAME_CAPTURE, FrameCaptureSettings * H_264, H264Settings * H_265, H265Settings * MPEG2, Mpeg2Settings * PRORES, ProresSettings * VC3, Vc3Settings * VP8, Vp8Settings * VP9, Vp9Settings * XAVC, XavcSettings

  • videoPreprocessors :: Maybe VideoPreprocessor

    Find additional transcoding features under Preprocessors (VideoPreprocessors). Enable the features at each output individually. These features are disabled by default.

  • position :: Maybe Rectangle

    Use Selection placement (position) to define the video area in your output frame. The area outside of the rectangle that you specify here is black.

Instances

Instances details
Eq VideoDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoDescription

Read VideoDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoDescription

Show VideoDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoDescription

Generic VideoDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoDescription

Associated Types

type Rep VideoDescription :: Type -> Type #

NFData VideoDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoDescription

Methods

rnf :: VideoDescription -> () #

Hashable VideoDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoDescription

ToJSON VideoDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoDescription

FromJSON VideoDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoDescription

type Rep VideoDescription Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoDescription

type Rep VideoDescription = D1 ('MetaData "VideoDescription" "Amazonka.MediaConvert.Types.VideoDescription" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "VideoDescription'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "timecodeInsertion") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe VideoTimecodeInsertion)) :*: (S1 ('MetaSel ('Just "height") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "afdSignaling") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AfdSignaling)))) :*: ((S1 ('MetaSel ('Just "sharpness") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "crop") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Rectangle))) :*: (S1 ('MetaSel ('Just "width") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "scalingBehavior") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ScalingBehavior))))) :*: (((S1 ('MetaSel ('Just "respondToAfd") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe RespondToAfd)) :*: S1 ('MetaSel ('Just "dropFrameTimecode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe DropFrameTimecode))) :*: (S1 ('MetaSel ('Just "antiAlias") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AntiAlias)) :*: S1 ('MetaSel ('Just "fixedAfd") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))) :*: ((S1 ('MetaSel ('Just "colorMetadata") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ColorMetadata)) :*: S1 ('MetaSel ('Just "codecSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe VideoCodecSettings))) :*: (S1 ('MetaSel ('Just "videoPreprocessors") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe VideoPreprocessor)) :*: S1 ('MetaSel ('Just "position") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Rectangle)))))))

newVideoDescription :: VideoDescription Source #

Create a value of VideoDescription with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:timecodeInsertion:VideoDescription', videoDescription_timecodeInsertion - Applies only to H.264, H.265, MPEG2, and ProRes outputs. Only enable Timecode insertion when the input frame rate is identical to the output frame rate. To include timecodes in this output, set Timecode insertion (VideoTimecodeInsertion) to PIC_TIMING_SEI. To leave them out, set it to DISABLED. Default is DISABLED. When the service inserts timecodes in an output, by default, it uses any embedded timecodes from the input. If none are present, the service will set the timecode for the first output frame to zero. To change this default behavior, adjust the settings under Timecode configuration (TimecodeConfig). In the console, these settings are located under Job > Job settings > Timecode configuration. Note - Timecode source under input settings (InputTimecodeSource) does not affect the timecodes that are inserted in the output. Source under Job settings > Timecode configuration (TimecodeSource) does.

$sel:height:VideoDescription', videoDescription_height - Use the Height (Height) setting to define the video resolution height for this output. Specify in pixels. If you don't provide a value here, the service will use the input height.

$sel:afdSignaling:VideoDescription', videoDescription_afdSignaling - This setting only applies to H.264, H.265, and MPEG2 outputs. Use Insert AFD signaling (AfdSignaling) to specify whether the service includes AFD values in the output video data and what those values are. * Choose None to remove all AFD values from this output. * Choose Fixed to ignore input AFD values and instead encode the value specified in the job. * Choose Auto to calculate output AFD values based on the input AFD scaler data.

$sel:sharpness:VideoDescription', videoDescription_sharpness - Use the Sharpness (Sharpness) setting to specify the strength of anti-aliasing. This setting changes the width of the anti-alias filter kernel used for scaling. Sharpness only applies if your output resolution is different from your input resolution. 0 is the softest setting, 100 is the sharpest, and 50 is recommended for most content.

$sel:crop:VideoDescription', videoDescription_crop - Use Cropping selection (crop) to specify the video area that the service will include in the output video frame.

$sel:width:VideoDescription', videoDescription_width - Use Width (Width) to define the video resolution width, in pixels, for this output. If you don't provide a value here, the service will use the input width.

$sel:scalingBehavior:VideoDescription', videoDescription_scalingBehavior - Specify how the service handles outputs that have a different aspect ratio from the input aspect ratio. Choose Stretch to output (STRETCH_TO_OUTPUT) to have the service stretch your video image to fit. Keep the setting Default (DEFAULT) to have the service letterbox your video instead. This setting overrides any value that you specify for the setting Selection placement (position) in this output.

$sel:respondToAfd:VideoDescription', videoDescription_respondToAfd - Use Respond to AFD (RespondToAfd) to specify how the service changes the video itself in response to AFD values in the input. * Choose Respond to clip the input video frame according to the AFD value, input display aspect ratio, and output display aspect ratio. * Choose Passthrough to include the input AFD values. Do not choose this when AfdSignaling is set to (NONE). A preferred implementation of this workflow is to set RespondToAfd to (NONE) and set AfdSignaling to (AUTO). * Choose None to remove all input AFD values from this output.

$sel:dropFrameTimecode:VideoDescription', videoDescription_dropFrameTimecode - Applies only to 29.97 fps outputs. When this feature is enabled, the service will use drop-frame timecode on outputs. If it is not possible to use drop-frame timecode, the system will fall back to non-drop-frame. This setting is enabled by default when Timecode insertion (TimecodeInsertion) is enabled.

$sel:antiAlias:VideoDescription', videoDescription_antiAlias - The anti-alias filter is automatically applied to all outputs. The service no longer accepts the value DISABLED for AntiAlias. If you specify that in your job, the service will ignore the setting.

$sel:fixedAfd:VideoDescription', videoDescription_fixedAfd - Applies only if you set AFD Signaling (AfdSignaling) to Fixed (FIXED). Use Fixed (FixedAfd) to specify a four-bit AFD value that the service will write on all frames of this video output.

$sel:colorMetadata:VideoDescription', videoDescription_colorMetadata - Choose Insert (INSERT) for this setting to include color metadata in this output. Choose Ignore (IGNORE) to exclude color metadata from this output. If you don't specify a value, the service sets this to Insert by default.

$sel:codecSettings:VideoDescription', videoDescription_codecSettings - Video codec settings, (CodecSettings) under (VideoDescription), contains the group of settings related to video encoding. The settings in this group vary depending on the value that you choose for Video codec (Codec). For each codec enum that you choose, define the corresponding settings object. The following lists the codec enum, settings object pairs. * AV1, Av1Settings * AVC_INTRA, AvcIntraSettings * FRAME_CAPTURE, FrameCaptureSettings * H_264, H264Settings * H_265, H265Settings * MPEG2, Mpeg2Settings * PRORES, ProresSettings * VC3, Vc3Settings * VP8, Vp8Settings * VP9, Vp9Settings * XAVC, XavcSettings

$sel:videoPreprocessors:VideoDescription', videoDescription_videoPreprocessors - Find additional transcoding features under Preprocessors (VideoPreprocessors). Enable the features at each output individually. These features are disabled by default.

$sel:position:VideoDescription', videoDescription_position - Use Selection placement (position) to define the video area in your output frame. The area outside of the rectangle that you specify here is black.
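
Putting several of the fields above together, the following is a minimal sketch that describes a 1280x720 output with default scaling behavior and attaches codec settings. It assumes a VideoCodecSettings value built separately (for example, the h264CodecSettings sketch shown earlier) and the (&) and (?~) operators from Control.Lens; plain numeric literals work because width, height, and sharpness are Maybe Natural fields.

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Sketch only: a 1280x720 video description. The codec settings argument
-- is any previously constructed VideoCodecSettings value.
basicVideoDescription :: VideoCodecSettings -> VideoDescription
basicVideoDescription codecSettings =
  newVideoDescription
    & videoDescription_width ?~ 1280
    & videoDescription_height ?~ 720
    & videoDescription_sharpness ?~ 50     -- recommended for most content
    & videoDescription_codecSettings ?~ codecSettings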

videoDescription_timecodeInsertion :: Lens' VideoDescription (Maybe VideoTimecodeInsertion) Source #

Applies only to H.264, H.265, MPEG2, and ProRes outputs. Only enable Timecode insertion when the input frame rate is identical to the output frame rate. To include timecodes in this output, set Timecode insertion (VideoTimecodeInsertion) to PIC_TIMING_SEI. To leave them out, set it to DISABLED. Default is DISABLED. When the service inserts timecodes in an output, by default, it uses any embedded timecodes from the input. If none are present, the service will set the timecode for the first output frame to zero. To change this default behavior, adjust the settings under Timecode configuration (TimecodeConfig). In the console, these settings are located under Job > Job settings > Timecode configuration. Note - Timecode source under input settings (InputTimecodeSource) does not affect the timecodes that are inserted in the output. Source under Job settings > Timecode configuration (TimecodeSource) does.

videoDescription_height :: Lens' VideoDescription (Maybe Natural) Source #

Use the Height (Height) setting to define the video resolution height for this output. Specify in pixels. If you don't provide a value here, the service will use the input height.

videoDescription_afdSignaling :: Lens' VideoDescription (Maybe AfdSignaling) Source #

This setting only applies to H.264, H.265, and MPEG2 outputs. Use Insert AFD signaling (AfdSignaling) to specify whether the service includes AFD values in the output video data and what those values are. * Choose None to remove all AFD values from this output. * Choose Fixed to ignore input AFD values and instead encode the value specified in the job. * Choose Auto to calculate output AFD values based on the input AFD scaler data.

videoDescription_sharpness :: Lens' VideoDescription (Maybe Natural) Source #

Use the Sharpness (Sharpness) setting to specify the strength of anti-aliasing. This setting changes the width of the anti-alias filter kernel used for scaling. Sharpness only applies if your output resolution is different from your input resolution. 0 is the softest setting, 100 is the sharpest, and 50 is recommended for most content.

videoDescription_crop :: Lens' VideoDescription (Maybe Rectangle) Source #

Use Cropping selection (crop) to specify the video area that the service will include in the output video frame.

videoDescription_width :: Lens' VideoDescription (Maybe Natural) Source #

Use Width (Width) to define the video resolution width, in pixels, for this output. If you don't provide a value here, the service will use the input width.

videoDescription_scalingBehavior :: Lens' VideoDescription (Maybe ScalingBehavior) Source #

Specify how the service handles outputs that have a different aspect ratio from the input aspect ratio. Choose Stretch to output (STRETCH_TO_OUTPUT) to have the service stretch your video image to fit. Keep the setting Default (DEFAULT) to have the service letterbox your video instead. This setting overrides any value that you specify for the setting Selection placement (position) in this output.

videoDescription_respondToAfd :: Lens' VideoDescription (Maybe RespondToAfd) Source #

Use Respond to AFD (RespondToAfd) to specify how the service changes the video itself in response to AFD values in the input. * Choose Respond to clip the input video frame according to the AFD value, input display aspect ratio, and output display aspect ratio. * Choose Passthrough to include the input AFD values. Do not choose this when AfdSignaling is set to (NONE). A preferred implementation of this workflow is to set RespondToAfd to (NONE) and set AfdSignaling to (AUTO). * Choose None to remove all input AFD values from this output.

videoDescription_dropFrameTimecode :: Lens' VideoDescription (Maybe DropFrameTimecode) Source #

Applies only to 29.97 fps outputs. When this feature is enabled, the service will use drop-frame timecode on outputs. If it is not possible to use drop-frame timecode, the system will fall back to non-drop-frame. This setting is enabled by default when Timecode insertion (TimecodeInsertion) is enabled.

videoDescription_antiAlias :: Lens' VideoDescription (Maybe AntiAlias) Source #

The anti-alias filter is automatically applied to all outputs. The service no longer accepts the value DISABLED for AntiAlias. If you specify that in your job, the service will ignore the setting.

videoDescription_fixedAfd :: Lens' VideoDescription (Maybe Natural) Source #

Applies only if you set AFD Signaling (AfdSignaling) to Fixed (FIXED). Use Fixed (FixedAfd) to specify a four-bit AFD value that the service will write on all frames of this video output.

videoDescription_colorMetadata :: Lens' VideoDescription (Maybe ColorMetadata) Source #

Choose Insert (INSERT) for this setting to include color metadata in this output. Choose Ignore (IGNORE) to exclude color metadata from this output. If you don't specify a value, the service sets this to Insert by default.

videoDescription_codecSettings :: Lens' VideoDescription (Maybe VideoCodecSettings) Source #

Video codec settings, (CodecSettings) under (VideoDescription), contains the group of settings related to video encoding. The settings in this group vary depending on the value that you choose for Video codec (Codec). For each codec enum that you choose, define the corresponding settings object. The following lists the codec enum, settings object pairs. * AV1, Av1Settings * AVC_INTRA, AvcIntraSettings * FRAME_CAPTURE, FrameCaptureSettings * H_264, H264Settings * H_265, H265Settings * MPEG2, Mpeg2Settings * PRORES, ProresSettings * VC3, Vc3Settings * VP8, Vp8Settings * VP9, Vp9Settings * XAVC, XavcSettings

videoDescription_videoPreprocessors :: Lens' VideoDescription (Maybe VideoPreprocessor) Source #

Find additional transcoding features under Preprocessors (VideoPreprocessors). Enable the features at each output individually. These features are disabled by default.

videoDescription_position :: Lens' VideoDescription (Maybe Rectangle) Source #

Use Selection placement (position) to define the video area in your output frame. The area outside of the rectangle that you specify here is black.

VideoDetail

data VideoDetail Source #

Contains details about the output's video stream

See: newVideoDetail smart constructor.

Constructors

VideoDetail' 

Fields

Instances

Instances details
Eq VideoDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoDetail

Read VideoDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoDetail

Show VideoDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoDetail

Generic VideoDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoDetail

Associated Types

type Rep VideoDetail :: Type -> Type #

NFData VideoDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoDetail

Methods

rnf :: VideoDetail -> () #

Hashable VideoDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoDetail

FromJSON VideoDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoDetail

type Rep VideoDetail Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoDetail

type Rep VideoDetail = D1 ('MetaData "VideoDetail" "Amazonka.MediaConvert.Types.VideoDetail" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "VideoDetail'" 'PrefixI 'True) (S1 ('MetaSel ('Just "heightInPx") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)) :*: S1 ('MetaSel ('Just "widthInPx") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int))))

newVideoDetail :: VideoDetail Source #

Create a value of VideoDetail with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:heightInPx:VideoDetail', videoDetail_heightInPx - Height in pixels for the output

$sel:widthInPx:VideoDetail', videoDetail_widthInPx - Width in pixels for the output

videoDetail_heightInPx :: Lens' VideoDetail (Maybe Int) Source #

Height in pixels for the output

videoDetail_widthInPx :: Lens' VideoDetail (Maybe Int) Source #

Width in pixels for the output
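
Since VideoDetail carries only a FromJSON instance, you typically read it from a service response rather than construct it. A minimal sketch of inspecting the reported resolution with the lenses above, assuming the (^.) view operator from Control.Lens:

import Amazonka.MediaConvert.Types
import Control.Lens ((^.))

-- Sketch only: format the reported output dimensions. The fields are
-- Maybe Int, so absent values are rendered as Nothing by show.
reportResolution :: VideoDetail -> String
reportResolution detail =
  show (detail ^. videoDetail_widthInPx)
    ++ " x "
    ++ show (detail ^. videoDetail_heightInPx)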

VideoPreprocessor

data VideoPreprocessor Source #

Find additional transcoding features under Preprocessors (VideoPreprocessors). Enable the features at each output individually. These features are disabled by default.

See: newVideoPreprocessor smart constructor.

Constructors

VideoPreprocessor' 

Fields

  • timecodeBurnin :: Maybe TimecodeBurnin

    Settings for burning the output timecode and specified prefix into the output.

  • dolbyVision :: Maybe DolbyVision

    Enable Dolby Vision feature to produce Dolby Vision compatible video output.

  • colorCorrector :: Maybe ColorCorrector

    Use these settings to convert the color space or to modify properties such as hue and contrast for this output. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/converting-the-color-space.html.

  • deinterlacer :: Maybe Deinterlacer

    Use the deinterlacer to produce smoother motion and a clearer picture. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/working-with-scan-type.html.

  • noiseReducer :: Maybe NoiseReducer

    Enable the Noise reducer (NoiseReducer) feature to remove noise from your video output if necessary. Enable or disable this feature for each output individually. This setting is disabled by default.

  • imageInserter :: Maybe ImageInserter

    Enable the Image inserter (ImageInserter) feature to include a graphic overlay on your video. Enable or disable this feature for each output individually. This setting is disabled by default.

  • hdr10Plus :: Maybe Hdr10Plus

    Enable HDR10+ analysis and metadata injection. Compatible with HEVC only.

  • partnerWatermarking :: Maybe PartnerWatermarking

    If you work with a third party video watermarking partner, use the group of settings that correspond with your watermarking partner to include watermarks in your output.

Instances

Instances details
Eq VideoPreprocessor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoPreprocessor

Read VideoPreprocessor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoPreprocessor

Show VideoPreprocessor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoPreprocessor

Generic VideoPreprocessor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoPreprocessor

Associated Types

type Rep VideoPreprocessor :: Type -> Type #

NFData VideoPreprocessor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoPreprocessor

Methods

rnf :: VideoPreprocessor -> () #

Hashable VideoPreprocessor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoPreprocessor

ToJSON VideoPreprocessor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoPreprocessor

FromJSON VideoPreprocessor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoPreprocessor

type Rep VideoPreprocessor Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoPreprocessor

newVideoPreprocessor :: VideoPreprocessor Source #

Create a value of VideoPreprocessor with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:timecodeBurnin:VideoPreprocessor', videoPreprocessor_timecodeBurnin - Settings for burning the output timecode and specified prefix into the output.

$sel:dolbyVision:VideoPreprocessor', videoPreprocessor_dolbyVision - Enable Dolby Vision feature to produce Dolby Vision compatible video output.

$sel:colorCorrector:VideoPreprocessor', videoPreprocessor_colorCorrector - Use these settings to convert the color space or to modify properties such as hue and contrast for this output. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/converting-the-color-space.html.

$sel:deinterlacer:VideoPreprocessor', videoPreprocessor_deinterlacer - Use the deinterlacer to produce smoother motion and a clearer picture. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/working-with-scan-type.html.

$sel:noiseReducer:VideoPreprocessor', videoPreprocessor_noiseReducer - Enable the Noise reducer (NoiseReducer) feature to remove noise from your video output if necessary. Enable or disable this feature for each output individually. This setting is disabled by default.

$sel:imageInserter:VideoPreprocessor', videoPreprocessor_imageInserter - Enable the Image inserter (ImageInserter) feature to include a graphic overlay on your video. Enable or disable this feature for each output individually. This setting is disabled by default.

$sel:hdr10Plus:VideoPreprocessor', videoPreprocessor_hdr10Plus - Enable HDR10+ analysis and metadata injection. Compatible with HEVC only.

$sel:partnerWatermarking:VideoPreprocessor', videoPreprocessor_partnerWatermarking - If you work with a third party video watermarking partner, use the group of settings that correspond with your watermarking partner to include watermarks in your output.
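
Because every preprocessor feature is off until you fill in its field, enabling features is just a matter of setting the corresponding records. A minimal sketch, taking the NoiseReducer and ImageInserter values as arguments since their own smart constructors are documented in their respective sections; the (&) and (?~) operators come from Control.Lens:

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Sketch only: enable noise reduction and image insertion for one output;
-- every other preprocessor feature stays disabled (Nothing).
withPreprocessors :: NoiseReducer -> ImageInserter -> VideoPreprocessor
withPreprocessors reducer inserter =
  newVideoPreprocessor
    & videoPreprocessor_noiseReducer ?~ reducer
    & videoPreprocessor_imageInserter ?~ inserter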

videoPreprocessor_timecodeBurnin :: Lens' VideoPreprocessor (Maybe TimecodeBurnin) Source #

Settings for burning the output timecode and specified prefix into the output.

videoPreprocessor_dolbyVision :: Lens' VideoPreprocessor (Maybe DolbyVision) Source #

Enable Dolby Vision feature to produce Dolby Vision compatible video output.

videoPreprocessor_colorCorrector :: Lens' VideoPreprocessor (Maybe ColorCorrector) Source #

Use these settings to convert the color space or to modify properties such as hue and contrast for this output. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/converting-the-color-space.html.

videoPreprocessor_deinterlacer :: Lens' VideoPreprocessor (Maybe Deinterlacer) Source #

Use the deinterlacer to produce smoother motion and a clearer picture. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/working-with-scan-type.html.

videoPreprocessor_noiseReducer :: Lens' VideoPreprocessor (Maybe NoiseReducer) Source #

Enable the Noise reducer (NoiseReducer) feature to remove noise from your video output if necessary. Enable or disable this feature for each output individually. This setting is disabled by default.

videoPreprocessor_imageInserter :: Lens' VideoPreprocessor (Maybe ImageInserter) Source #

Enable the Image inserter (ImageInserter) feature to include a graphic overlay on your video. Enable or disable this feature for each output individually. This setting is disabled by default.

videoPreprocessor_hdr10Plus :: Lens' VideoPreprocessor (Maybe Hdr10Plus) Source #

Enable HDR10+ analysis and metadata injection. Compatible with HEVC only.

videoPreprocessor_partnerWatermarking :: Lens' VideoPreprocessor (Maybe PartnerWatermarking) Source #

If you work with a third party video watermarking partner, use the group of settings that correspond with your watermarking partner to include watermarks in your output.

VideoSelector

data VideoSelector Source #

Input video selectors contain the video settings for the input. Each of your inputs can have up to one video selector.

See: newVideoSelector smart constructor.

Constructors

VideoSelector' 

Fields

  • programNumber :: Maybe Int

    Selects a specific program from within a multi-program transport stream. Note that Quad 4K is not currently supported.

  • alphaBehavior :: Maybe AlphaBehavior

    Ignore this setting unless this input is a QuickTime animation with an alpha channel. Use this setting to create separate Key and Fill outputs. In each output, specify which part of the input MediaConvert uses. Leave this setting at the default value DISCARD to delete the alpha channel and preserve the video. Set it to REMAP_TO_LUMA to delete the video and map the alpha channel to the luma channel of your outputs.

  • colorSpaceUsage :: Maybe ColorSpaceUsage

    There are two sources for color metadata: the input file and the job input settings Color space (ColorSpace) and HDR master display information settings (Hdr10Metadata). The Color space usage setting determines which takes precedence. Choose Force (FORCE) to use color metadata from the input job settings. If you don't specify values for those settings, the service defaults to using metadata from your input. Choose Fallback (FALLBACK) to use color metadata from the source when it is present. If there's no color metadata in your input file, the service defaults to using values you specify in the input settings.

  • hdr10Metadata :: Maybe Hdr10Metadata

    Use these settings to provide HDR 10 metadata that is missing or inaccurate in your input video. Appropriate values vary depending on the input video and must be provided by a color grader. The color grader generates these values during the HDR 10 mastering process. The valid range for each of these settings is 0 to 50,000. Each increment represents 0.00002 in CIE1931 color coordinate. Related settings - When you specify these values, you must also set Color space (ColorSpace) to HDR 10 (HDR10). To specify whether the values you specify here take precedence over the values in the metadata of your input file, set Color space usage (ColorSpaceUsage). To specify whether color metadata is included in an output, set Color metadata (ColorMetadata). For more information about MediaConvert HDR jobs, see https://docs.aws.amazon.com/console/mediaconvert/hdr.

  • pid :: Maybe Natural

    Use PID (Pid) to select specific video data from an input file. Specify this value as an integer; the system automatically converts it to the hexadecimal value. For example, 257 selects PID 0x101. A PID, or packet identifier, is an identifier for a set of data in an MPEG-2 transport stream container.

  • rotate :: Maybe InputRotate

    Use Rotate (InputRotate) to specify how the service rotates your video. You can choose automatic rotation or specify a rotation. You can specify a clockwise rotation of 0, 90, 180, or 270 degrees. If your input video container is .mov or .mp4 and your input has rotation metadata, you can choose Automatic to have the service rotate your video according to the rotation specified in the metadata. The rotation must be within one degree of 90, 180, or 270 degrees. If the rotation metadata specifies any other rotation, the service will default to no rotation. By default, the service does no rotation, even if your input video has rotation metadata. The service doesn't pass through rotation metadata.

  • colorSpace :: Maybe ColorSpace

    If your input video has accurate color space metadata, or if you don't know about color space, leave this set to the default value Follow (FOLLOW). The service will automatically detect your input color space. If your input video has metadata indicating the wrong color space, specify the accurate color space here. If your input video is HDR 10 and the SMPTE ST 2086 Mastering Display Color Volume static metadata isn't present in your video stream, or if that metadata is present but not accurate, choose Force HDR 10 (FORCE_HDR10) here and specify correct values in the input HDR 10 metadata (Hdr10Metadata) settings. For more information about MediaConvert HDR jobs, see https://docs.aws.amazon.com/console/mediaconvert/hdr.

  • sampleRange :: Maybe InputSampleRange

    If the sample range metadata in your input video is accurate, or if you don't know about sample range, keep the default value, Follow (FOLLOW), for this setting. When you do, the service automatically detects your input sample range. If your input video has metadata indicating the wrong sample range, specify the accurate sample range here. When you do, MediaConvert ignores any sample range information in the input metadata. Regardless of whether MediaConvert uses the input sample range or the sample range that you specify, MediaConvert uses the sample range for transcoding and also writes it to the output metadata.

Instances

Instances details
Eq VideoSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoSelector

Read VideoSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoSelector

Show VideoSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoSelector

Generic VideoSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoSelector

Associated Types

type Rep VideoSelector :: Type -> Type #

NFData VideoSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoSelector

Methods

rnf :: VideoSelector -> () #

Hashable VideoSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoSelector

ToJSON VideoSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoSelector

FromJSON VideoSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoSelector

type Rep VideoSelector Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VideoSelector

newVideoSelector :: VideoSelector Source #

Create a value of VideoSelector with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:programNumber:VideoSelector', videoSelector_programNumber - Selects a specific program from within a multi-program transport stream. Note that Quad 4K is not currently supported.

$sel:alphaBehavior:VideoSelector', videoSelector_alphaBehavior - Ignore this setting unless this input is a QuickTime animation with an alpha channel. Use this setting to create separate Key and Fill outputs. In each output, specify which part of the input MediaConvert uses. Leave this setting at the default value DISCARD to delete the alpha channel and preserve the video. Set it to REMAP_TO_LUMA to delete the video and map the alpha channel to the luma channel of your outputs.

$sel:colorSpaceUsage:VideoSelector', videoSelector_colorSpaceUsage - There are two sources for color metadata: the input file and the job input settings Color space (ColorSpace) and HDR master display information settings (Hdr10Metadata). The Color space usage setting determines which takes precedence. Choose Force (FORCE) to use color metadata from the input job settings. If you don't specify values for those settings, the service defaults to using metadata from your input. Choose Fallback (FALLBACK) to use color metadata from the source when it is present. If there's no color metadata in your input file, the service defaults to using values you specify in the input settings.

$sel:hdr10Metadata:VideoSelector', videoSelector_hdr10Metadata - Use these settings to provide HDR 10 metadata that is missing or inaccurate in your input video. Appropriate values vary depending on the input video and must be provided by a color grader. The color grader generates these values during the HDR 10 mastering process. The valid range for each of these settings is 0 to 50,000. Each increment represents 0.00002 in CIE1931 color coordinate. Related settings - When you specify these values, you must also set Color space (ColorSpace) to HDR 10 (HDR10). To specify whether the values you specify here take precedence over the values in the metadata of your input file, set Color space usage (ColorSpaceUsage). To specify whether color metadata is included in an output, set Color metadata (ColorMetadata). For more information about MediaConvert HDR jobs, see https://docs.aws.amazon.com/console/mediaconvert/hdr.

$sel:pid:VideoSelector', videoSelector_pid - Use PID (Pid) to select specific video data from an input file. Specify this value as an integer; the system automatically converts it to the hexadecimal value. For example, 257 selects PID 0x101. A PID, or packet identifier, is an identifier for a set of data in an MPEG-2 transport stream container.

$sel:rotate:VideoSelector', videoSelector_rotate - Use Rotate (InputRotate) to specify how the service rotates your video. You can choose automatic rotation or specify a rotation. You can specify a clockwise rotation of 0, 90, 180, or 270 degrees. If your input video container is .mov or .mp4 and your input has rotation metadata, you can choose Automatic to have the service rotate your video according to the rotation specified in the metadata. The rotation must be within one degree of 90, 180, or 270 degrees. If the rotation metadata specifies any other rotation, the service will default to no rotation. By default, the service does no rotation, even if your input video has rotation metadata. The service doesn't pass through rotation metadata.

$sel:colorSpace:VideoSelector', videoSelector_colorSpace - If your input video has accurate color space metadata, or if you don't know about color space, leave this set to the default value Follow (FOLLOW). The service will automatically detect your input color space. If your input video has metadata indicating the wrong color space, specify the accurate color space here. If your input video is HDR 10 and the SMPTE ST 2086 Mastering Display Color Volume static metadata isn't present in your video stream, or if that metadata is present but not accurate, choose Force HDR 10 (FORCE_HDR10) here and specify correct values in the input HDR 10 metadata (Hdr10Metadata) settings. For more information about MediaConvert HDR jobs, see https://docs.aws.amazon.com/console/mediaconvert/hdr.

$sel:sampleRange:VideoSelector', videoSelector_sampleRange - If the sample range metadata in your input video is accurate, or if you don't know about sample range, keep the default value, Follow (FOLLOW), for this setting. When you do, the service automatically detects your input sample range. If your input video has metadata indicating the wrong sample range, specify the accurate sample range here. When you do, MediaConvert ignores any sample range information in the input metadata. Regardless of whether MediaConvert uses the input sample range or the sample range that you specify, MediaConvert uses the sample range for transcoding and also writes it to the output metadata.
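
For example, selecting video from a multi-program transport stream only requires the pid or programNumber fields; everything else keeps the service defaults, so color space and sample range follow the input. A minimal sketch, assuming the (&) and (?~) operators from Control.Lens:

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Sketch only: select PID 0x101 (decimal 257) and program 2 from an
-- MPEG-2 transport stream input; other fields keep their defaults.
tsVideoSelector :: VideoSelector
tsVideoSelector =
  newVideoSelector
    & videoSelector_pid ?~ 257          -- the service converts this to 0x101
    & videoSelector_programNumber ?~ 2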

videoSelector_programNumber :: Lens' VideoSelector (Maybe Int) Source #

Selects a specific program from within a multi-program transport stream. Note that Quad 4K is not currently supported.

videoSelector_alphaBehavior :: Lens' VideoSelector (Maybe AlphaBehavior) Source #

Ignore this setting unless this input is a QuickTime animation with an alpha channel. Use this setting to create separate Key and Fill outputs. In each output, specify which part of the input MediaConvert uses. Leave this setting at the default value DISCARD to delete the alpha channel and preserve the video. Set it to REMAP_TO_LUMA to delete the video and map the alpha channel to the luma channel of your outputs.

videoSelector_colorSpaceUsage :: Lens' VideoSelector (Maybe ColorSpaceUsage) Source #

There are two sources for color metadata: the input file and the job input settings Color space (ColorSpace) and HDR master display information settings (Hdr10Metadata). The Color space usage setting determines which takes precedence. Choose Force (FORCE) to use color metadata from the input job settings. If you don't specify values for those settings, the service defaults to using metadata from your input. Choose Fallback (FALLBACK) to use color metadata from the source when it is present. If there's no color metadata in your input file, the service defaults to using values you specify in the input settings.

videoSelector_hdr10Metadata :: Lens' VideoSelector (Maybe Hdr10Metadata) Source #

Use these settings to provide HDR 10 metadata that is missing or inaccurate in your input video. Appropriate values vary depending on the input video and must be provided by a color grader. The color grader generates these values during the HDR 10 mastering process. The valid range for each of these settings is 0 to 50,000. Each increment represents 0.00002 in CIE1931 color coordinate. Related settings - When you specify these values, you must also set Color space (ColorSpace) to HDR 10 (HDR10). To specify whether the values you specify here take precedence over the values in the metadata of your input file, set Color space usage (ColorSpaceUsage). To specify whether color metadata is included in an output, set Color metadata (ColorMetadata). For more information about MediaConvert HDR jobs, see https://docs.aws.amazon.com/console/mediaconvert/hdr.

videoSelector_pid :: Lens' VideoSelector (Maybe Natural) Source #

Use PID (Pid) to select specific video data from an input file. Specify this value as an integer; the system automatically converts it to the hexadecimal value. For example, 257 selects PID 0x101. A PID, or packet identifier, is an identifier for a set of data in an MPEG-2 transport stream container.

videoSelector_rotate :: Lens' VideoSelector (Maybe InputRotate) Source #

Use Rotate (InputRotate) to specify how the service rotates your video. You can choose automatic rotation or specify a rotation. You can specify a clockwise rotation of 0, 90, 180, or 270 degrees. If your input video container is .mov or .mp4 and your input has rotation metadata, you can choose Automatic to have the service rotate your video according to the rotation specified in the metadata. The rotation must be within one degree of 90, 180, or 270 degrees. If the rotation metadata specifies any other rotation, the service will default to no rotation. By default, the service does no rotation, even if your input video has rotation metadata. The service doesn't pass through rotation metadata.

videoSelector_colorSpace :: Lens' VideoSelector (Maybe ColorSpace) Source #

If your input video has accurate color space metadata, or if you don't know about color space, leave this set to the default value Follow (FOLLOW). The service will automatically detect your input color space. If your input video has metadata indicating the wrong color space, specify the accurate color space here. If your input video is HDR 10 and the SMPTE ST 2086 Mastering Display Color Volume static metadata isn't present in your video stream, or if that metadata is present but not accurate, choose Force HDR 10 (FORCE_HDR10) here and specify correct values in the input HDR 10 metadata (Hdr10Metadata) settings. For more information about MediaConvert HDR jobs, see https://docs.aws.amazon.com/console/mediaconvert/hdr.

videoSelector_sampleRange :: Lens' VideoSelector (Maybe InputSampleRange) Source #

If the sample range metadata in your input video is accurate, or if you don't know about sample range, keep the default value, Follow (FOLLOW), for this setting. When you do, the service automatically detects your input sample range. If your input video has metadata indicating the wrong sample range, specify the accurate sample range here. When you do, MediaConvert ignores any sample range information in the input metadata. Regardless of whether MediaConvert uses the input sample range or the sample range that you specify, MediaConvert uses the sample range for transcoding and also writes it to the output metadata.

VorbisSettings

data VorbisSettings Source #

Required when you set Codec, under AudioDescriptions>CodecSettings, to the value Vorbis.

See: newVorbisSettings smart constructor.

Constructors

VorbisSettings' 

Fields

  • channels :: Maybe Natural

    Optional. Specify the number of channels in this output audio track. Choosing Mono on the console gives you 1 output channel; choosing Stereo gives you 2. In the API, valid values are 1 and 2. The default value is 2.

  • sampleRate :: Maybe Natural

    Optional. Specify the audio sample rate in Hz. Valid values are 22050, 32000, 44100, and 48000. The default value is 48000.

  • vbrQuality :: Maybe Int

    Optional. Specify the variable audio quality of this Vorbis output from -1 (lowest quality, ~45 kbit/s) to 10 (highest quality, ~500 kbit/s). The default value is 4 (~128 kbit/s). Values 5 and 6 are approximately 160 and 192 kbit/s, respectively.

Instances

Instances details
Eq VorbisSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VorbisSettings

Read VorbisSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VorbisSettings

Show VorbisSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VorbisSettings

Generic VorbisSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VorbisSettings

Associated Types

type Rep VorbisSettings :: Type -> Type #

NFData VorbisSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VorbisSettings

Methods

rnf :: VorbisSettings -> () #

Hashable VorbisSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VorbisSettings

ToJSON VorbisSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VorbisSettings

FromJSON VorbisSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VorbisSettings

type Rep VorbisSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.VorbisSettings

type Rep VorbisSettings = D1 ('MetaData "VorbisSettings" "Amazonka.MediaConvert.Types.VorbisSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "VorbisSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "channels") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "sampleRate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "vbrQuality") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)))))

newVorbisSettings :: VorbisSettings Source #

Create a value of VorbisSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:channels:VorbisSettings', vorbisSettings_channels - Optional. Specify the number of channels in this output audio track. Choosing Mono on the console gives you 1 output channel; choosing Stereo gives you 2. In the API, valid values are 1 and 2. The default value is 2.

$sel:sampleRate:VorbisSettings', vorbisSettings_sampleRate - Optional. Specify the audio sample rate in Hz. Valid values are 22050, 32000, 44100, and 48000. The default value is 48000.

$sel:vbrQuality:VorbisSettings', vorbisSettings_vbrQuality - Optional. Specify the variable audio quality of this Vorbis output from -1 (lowest quality, ~45 kbit/s) to 10 (highest quality, ~500 kbit/s). The default value is 4 (~128 kbit/s). Values 5 and 6 are approximately 160 and 192 kbit/s, respectively.
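
The three fields map directly onto the values described above, so a stereo 48 kHz output at roughly 192 kbit/s takes only a few lines. A minimal sketch, assuming the (&) and (?~) operators from Control.Lens:

import Amazonka.MediaConvert.Types
import Control.Lens ((&), (?~))

-- Sketch only: stereo, 48 kHz, quality level 6 (about 192 kbit/s per the
-- description above).
vorbisStereo :: VorbisSettings
vorbisStereo =
  newVorbisSettings
    & vorbisSettings_channels ?~ 2
    & vorbisSettings_sampleRate ?~ 48000
    & vorbisSettings_vbrQuality ?~ 6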

vorbisSettings_channels :: Lens' VorbisSettings (Maybe Natural) Source #

Optional. Specify the number of channels in this output audio track. Choosing Mono on the console gives you 1 output channel; choosing Stereo gives you 2. In the API, valid values are 1 and 2. The default value is 2.

vorbisSettings_sampleRate :: Lens' VorbisSettings (Maybe Natural) Source #

Optional. Specify the audio sample rate in Hz. Valid values are 22050, 32000, 44100, and 48000. The default value is 48000.

vorbisSettings_vbrQuality :: Lens' VorbisSettings (Maybe Int) Source #

Optional. Specify the variable audio quality of this Vorbis output from -1 (lowest quality, ~45 kbit/s) to 10 (highest quality, ~500 kbit/s). The default value is 4 (~128 kbit/s). Values 5 and 6 are approximately 160 and 192 kbit/s, respectively.

Vp8Settings

data Vp8Settings Source #

Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value VP8.

See: newVp8Settings smart constructor.

Constructors

Vp8Settings' 

Fields

  • qualityTuningLevel :: Maybe Vp8QualityTuningLevel

    Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, multi-pass encoding.

  • parNumerator :: Maybe Natural

    Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parNumerator is 40.

  • gopSize :: Maybe Double

    GOP Length (keyframe interval) in frames. Must be greater than zero.

  • hrdBufferSize :: Maybe Natural

    Optional. Size of buffer (HRD buffer model) in bits. For example, enter five megabits as 5000000.

  • rateControlMode :: Maybe Vp8RateControlMode

    With the VP8 codec, you can use only the variable bitrate (VBR) rate control mode.

  • parControl :: Maybe Vp8ParControl

    Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.

  • bitrate :: Maybe Natural

    Target bitrate in bits/second. For example, enter five megabits per second as 5000000.

  • framerateDenominator :: Maybe Natural

    When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

  • framerateConversionAlgorithm :: Maybe Vp8FramerateConversionAlgorithm

    Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

  • framerateControl :: Maybe Vp8FramerateControl

    If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

  • framerateNumerator :: Maybe Natural

    When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

  • maxBitrate :: Maybe Natural

    Ignore this setting unless you set qualityTuningLevel to MULTI_PASS. Optional. Specify the maximum bitrate in bits/second. For example, enter five megabits per second as 5000000. The default behavior uses twice the target bitrate as the maximum bitrate.

  • parDenominator :: Maybe Natural

    Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parDenominator is 33.

Instances

Instances details
Eq Vp8Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8Settings

Read Vp8Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8Settings

Show Vp8Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8Settings

Generic Vp8Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8Settings

Associated Types

type Rep Vp8Settings :: Type -> Type #

NFData Vp8Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8Settings

Methods

rnf :: Vp8Settings -> () #

Hashable Vp8Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8Settings

ToJSON Vp8Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8Settings

FromJSON Vp8Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8Settings

type Rep Vp8Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp8Settings

type Rep Vp8Settings = D1 ('MetaData "Vp8Settings" "Amazonka.MediaConvert.Types.Vp8Settings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "Vp8Settings'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "qualityTuningLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Vp8QualityTuningLevel)) :*: (S1 ('MetaSel ('Just "parNumerator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "gopSize") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)))) :*: (S1 ('MetaSel ('Just "hrdBufferSize") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "rateControlMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Vp8RateControlMode)) :*: S1 ('MetaSel ('Just "parControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Vp8ParControl))))) :*: ((S1 ('MetaSel ('Just "bitrate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "framerateDenominator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "framerateConversionAlgorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Vp8FramerateConversionAlgorithm)))) :*: ((S1 ('MetaSel ('Just "framerateControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Vp8FramerateControl)) :*: S1 ('MetaSel ('Just "framerateNumerator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "maxBitrate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "parDenominator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))))

newVp8Settings :: Vp8Settings Source #

Create a value of Vp8Settings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:qualityTuningLevel:Vp8Settings', vp8Settings_qualityTuningLevel - Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, multi-pass encoding.

$sel:parNumerator:Vp8Settings', vp8Settings_parNumerator - Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parNumerator is 40.

$sel:gopSize:Vp8Settings', vp8Settings_gopSize - GOP Length (keyframe interval) in frames. Must be greater than zero.

$sel:hrdBufferSize:Vp8Settings', vp8Settings_hrdBufferSize - Optional. Size of buffer (HRD buffer model) in bits. For example, enter five megabits as 5000000.

$sel:rateControlMode:Vp8Settings', vp8Settings_rateControlMode - With the VP8 codec, you can use only the variable bitrate (VBR) rate control mode.

$sel:parControl:Vp8Settings', vp8Settings_parControl - Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.

$sel:bitrate:Vp8Settings', vp8Settings_bitrate - Target bitrate in bits/second. For example, enter five megabits per second as 5000000.

$sel:framerateDenominator:Vp8Settings', vp8Settings_framerateDenominator - When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

$sel:framerateConversionAlgorithm:Vp8Settings', vp8Settings_framerateConversionAlgorithm - Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

$sel:framerateControl:Vp8Settings', vp8Settings_framerateControl - If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

$sel:framerateNumerator:Vp8Settings', vp8Settings_framerateNumerator - When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

$sel:maxBitrate:Vp8Settings', vp8Settings_maxBitrate - Ignore this setting unless you set qualityTuningLevel to MULTI_PASS. Optional. Specify the maximum bitrate in bits/second. For example, enter five megabits per second as 5000000. The default behavior uses twice the target bitrate as the maximum bitrate.

$sel:parDenominator:Vp8Settings', vp8Settings_parDenominator - Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parDenominator is 33.

vp8Settings_qualityTuningLevel :: Lens' Vp8Settings (Maybe Vp8QualityTuningLevel) Source #

Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, multi-pass encoding.

vp8Settings_parNumerator :: Lens' Vp8Settings (Maybe Natural) Source #

Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parNumerator is 40.

vp8Settings_gopSize :: Lens' Vp8Settings (Maybe Double) Source #

GOP Length (keyframe interval) in frames. Must be greater than zero.

vp8Settings_hrdBufferSize :: Lens' Vp8Settings (Maybe Natural) Source #

Optional. Size of buffer (HRD buffer model) in bits. For example, enter five megabits as 5000000.

vp8Settings_rateControlMode :: Lens' Vp8Settings (Maybe Vp8RateControlMode) Source #

With the VP8 codec, you can use only the variable bitrate (VBR) rate control mode.

vp8Settings_parControl :: Lens' Vp8Settings (Maybe Vp8ParControl) Source #

Optional. Specify how the service determines the pixel aspect ratio (PAR) for this output. The default behavior, Follow source (INITIALIZE_FROM_SOURCE), uses the PAR from your input video for your output. To specify a different PAR in the console, choose any value other than Follow source. To specify a different PAR by editing the JSON job specification, choose SPECIFIED. When you choose SPECIFIED for this setting, you must also specify values for the parNumerator and parDenominator settings.

vp8Settings_bitrate :: Lens' Vp8Settings (Maybe Natural) Source #

Target bitrate in bits/second. For example, enter five megabits per second as 5000000.

vp8Settings_framerateDenominator :: Lens' Vp8Settings (Maybe Natural) Source #

When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

vp8Settings_framerateConversionAlgorithm :: Lens' Vp8Settings (Maybe Vp8FramerateConversionAlgorithm) Source #

Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

vp8Settings_framerateControl :: Lens' Vp8Settings (Maybe Vp8FramerateControl) Source #

If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

vp8Settings_framerateNumerator :: Lens' Vp8Settings (Maybe Natural) Source #

When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

vp8Settings_maxBitrate :: Lens' Vp8Settings (Maybe Natural) Source #

Ignore this setting unless you set qualityTuningLevel to MULTI_PASS. Optional. Specify the maximum bitrate in bits/second. For example, enter five megabits per second as 5000000. The default behavior uses twice the target bitrate as the maximum bitrate.

vp8Settings_parDenominator :: Lens' Vp8Settings (Maybe Natural) Source #

Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parDenominator is 33.
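
Tying the frame-rate fields together: the fraction-based example from the descriptions above (24000 / 1001 = 23.976 fps) translates into code roughly as follows. This is a sketch under the assumption that the (&) and (?~) setters from the lens package are in scope; note that, per vp8Settings_framerateControl, the explicit numerator and denominator only take effect when the frame rate control is set to its SPECIFIED value (the exact pattern name comes from the generated Vp8FramerateControl type).

  import Amazonka.MediaConvert.Types
  import Control.Lens ((&), (?~))

  -- Illustrative only: 5 Mb/s VP8 output at 24000/1001 (23.976) fps with a
  -- 90-frame GOP. Also set framerateControl to its SPECIFIED value so the
  -- explicit fraction is used instead of the input frame rate.
  vp8Example :: Vp8Settings
  vp8Example =
    newVp8Settings
      & vp8Settings_bitrate              ?~ 5000000
      & vp8Settings_framerateNumerator   ?~ 24000
      & vp8Settings_framerateDenominator ?~ 1001
      & vp8Settings_gopSize              ?~ 90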

Vp9Settings

data Vp9Settings Source #

Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value VP9.

See: newVp9Settings smart constructor.

Constructors

Vp9Settings' 

Fields

  • qualityTuningLevel :: Maybe Vp9QualityTuningLevel

    Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, multi-pass encoding.

  • parNumerator :: Maybe Natural

    Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parNumerator is 40.

  • gopSize :: Maybe Double

    GOP Length (keyframe interval) in frames. Must be greater than zero.

  • hrdBufferSize :: Maybe Natural

    Size of buffer (HRD buffer model) in bits. For example, enter five megabits as 5000000.

  • rateControlMode :: Maybe Vp9RateControlMode

    With the VP9 codec, you can use only the variable bitrate (VBR) rate control mode.

  • parControl :: Maybe Vp9ParControl

    Optional. Specify how the service determines the pixel aspect ratio for this output. The default behavior is to use the same pixel aspect ratio as your input video.

  • bitrate :: Maybe Natural

    Target bitrate in bits/second. For example, enter five megabits per second as 5000000.

  • framerateDenominator :: Maybe Natural

    When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

  • framerateConversionAlgorithm :: Maybe Vp9FramerateConversionAlgorithm

    Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

  • framerateControl :: Maybe Vp9FramerateControl

    If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

  • framerateNumerator :: Maybe Natural

    When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

  • maxBitrate :: Maybe Natural

    Ignore this setting unless you set qualityTuningLevel to MULTI_PASS. Optional. Specify the maximum bitrate in bits/second. For example, enter five megabits per second as 5000000. The default behavior uses twice the target bitrate as the maximum bitrate.

  • parDenominator :: Maybe Natural

    Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parDenominator is 33.

Instances

Instances details
Eq Vp9Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9Settings

Read Vp9Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9Settings

Show Vp9Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9Settings

Generic Vp9Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9Settings

Associated Types

type Rep Vp9Settings :: Type -> Type #

NFData Vp9Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9Settings

Methods

rnf :: Vp9Settings -> () #

Hashable Vp9Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9Settings

ToJSON Vp9Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9Settings

FromJSON Vp9Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9Settings

type Rep Vp9Settings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Vp9Settings

type Rep Vp9Settings = D1 ('MetaData "Vp9Settings" "Amazonka.MediaConvert.Types.Vp9Settings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "Vp9Settings'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "qualityTuningLevel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Vp9QualityTuningLevel)) :*: (S1 ('MetaSel ('Just "parNumerator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "gopSize") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)))) :*: (S1 ('MetaSel ('Just "hrdBufferSize") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "rateControlMode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Vp9RateControlMode)) :*: S1 ('MetaSel ('Just "parControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Vp9ParControl))))) :*: ((S1 ('MetaSel ('Just "bitrate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "framerateDenominator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "framerateConversionAlgorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Vp9FramerateConversionAlgorithm)))) :*: ((S1 ('MetaSel ('Just "framerateControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Vp9FramerateControl)) :*: S1 ('MetaSel ('Just "framerateNumerator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "maxBitrate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "parDenominator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))))

newVp9Settings :: Vp9Settings Source #

Create a value of Vp9Settings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:qualityTuningLevel:Vp9Settings', vp9Settings_qualityTuningLevel - Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, multi-pass encoding.

$sel:parNumerator:Vp9Settings', vp9Settings_parNumerator - Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parNumerator is 40.

$sel:gopSize:Vp9Settings', vp9Settings_gopSize - GOP Length (keyframe interval) in frames. Must be greater than zero.

$sel:hrdBufferSize:Vp9Settings', vp9Settings_hrdBufferSize - Size of buffer (HRD buffer model) in bits. For example, enter five megabits as 5000000.

$sel:rateControlMode:Vp9Settings', vp9Settings_rateControlMode - With the VP9 codec, you can use only the variable bitrate (VBR) rate control mode.

$sel:parControl:Vp9Settings', vp9Settings_parControl - Optional. Specify how the service determines the pixel aspect ratio for this output. The default behavior is to use the same pixel aspect ratio as your input video.

$sel:bitrate:Vp9Settings', vp9Settings_bitrate - Target bitrate in bits/second. For example, enter five megabits per second as 5000000.

$sel:framerateDenominator:Vp9Settings', vp9Settings_framerateDenominator - When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

$sel:framerateConversionAlgorithm:Vp9Settings', vp9Settings_framerateConversionAlgorithm - Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

$sel:framerateControl:Vp9Settings', vp9Settings_framerateControl - If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

$sel:framerateNumerator:Vp9Settings', vp9Settings_framerateNumerator - When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

$sel:maxBitrate:Vp9Settings', vp9Settings_maxBitrate - Ignore this setting unless you set qualityTuningLevel to MULTI_PASS. Optional. Specify the maximum bitrate in bits/second. For example, enter five megabits per second as 5000000. The default behavior uses twice the target bitrate as the maximum bitrate.

$sel:parDenominator:Vp9Settings', vp9Settings_parDenominator - Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parDenominator is 33.

vp9Settings_qualityTuningLevel :: Lens' Vp9Settings (Maybe Vp9QualityTuningLevel) Source #

Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, multi-pass encoding.

vp9Settings_parNumerator :: Lens' Vp9Settings (Maybe Natural) Source #

Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parNumerator is 40.

vp9Settings_gopSize :: Lens' Vp9Settings (Maybe Double) Source #

GOP Length (keyframe interval) in frames. Must be greater than zero.

vp9Settings_hrdBufferSize :: Lens' Vp9Settings (Maybe Natural) Source #

Size of buffer (HRD buffer model) in bits. For example, enter five megabits as 5000000.

vp9Settings_rateControlMode :: Lens' Vp9Settings (Maybe Vp9RateControlMode) Source #

With the VP9 codec, you can use only the variable bitrate (VBR) rate control mode.

vp9Settings_parControl :: Lens' Vp9Settings (Maybe Vp9ParControl) Source #

Optional. Specify how the service determines the pixel aspect ratio for this output. The default behavior is to use the same pixel aspect ratio as your input video.

vp9Settings_bitrate :: Lens' Vp9Settings (Maybe Natural) Source #

Target bitrate in bits/second. For example, enter five megabits per second as 5000000.

vp9Settings_framerateDenominator :: Lens' Vp9Settings (Maybe Natural) Source #

When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

vp9Settings_framerateConversionAlgorithm :: Lens' Vp9Settings (Maybe Vp9FramerateConversionAlgorithm) Source #

Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

vp9Settings_framerateControl :: Lens' Vp9Settings (Maybe Vp9FramerateControl) Source #

If you are using the console, use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate you specify in the settings FramerateNumerator and FramerateDenominator.

vp9Settings_framerateNumerator :: Lens' Vp9Settings (Maybe Natural) Source #

When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

vp9Settings_maxBitrate :: Lens' Vp9Settings (Maybe Natural) Source #

Ignore this setting unless you set qualityTuningLevel to MULTI_PASS. Optional. Specify the maximum bitrate in bits/second. For example, enter five megabits per second as 5000000. The default behavior uses twice the target bitrate as the maximum bitrate.

vp9Settings_parDenominator :: Lens' Vp9Settings (Maybe Natural) Source #

Required when you set Pixel aspect ratio (parControl) to SPECIFIED. On the console, this corresponds to any value other than Follow source. When you specify an output pixel aspect ratio (PAR) that is different from your input video PAR, provide your output PAR as a ratio. For example, for D1/DV NTSC widescreen, you would specify the ratio 40:33. In this example, the value for parDenominator is 33.
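
Because Vp9Settings has the ToJSON instance listed above, a configured value can be serialised directly into the JSON job specification. A minimal sketch, assuming the aeson and lens packages; fields left as Nothing fall back to the service defaults described above.

  import qualified Data.Aeson as Aeson
  import qualified Data.ByteString.Lazy as BL
  import Amazonka.MediaConvert.Types
  import Control.Lens ((&), (?~))

  -- Encode an illustrative VP9 configuration (5 Mb/s target, 120-frame GOP)
  -- to the JSON shape used in a MediaConvert job specification.
  vp9Json :: BL.ByteString
  vp9Json =
    Aeson.encode $
      newVp9Settings
        & vp9Settings_bitrate ?~ 5000000
        & vp9Settings_gopSize ?~ 120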

WavSettings

data WavSettings Source #

Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to the value WAV.

See: newWavSettings smart constructor.

Constructors

WavSettings' 

Fields

  • bitDepth :: Maybe Natural

    Specify Bit depth (BitDepth), in bits per sample, to choose the encoding quality for this audio track.

  • channels :: Maybe Natural

    Specify the number of channels in this output audio track. Valid values are 1 and even numbers up to 64. For example, 1, 2, 4, 6, and so on, up to 64.

  • format :: Maybe WavFormat

    The service defaults to using RIFF for WAV outputs. If your output audio is likely to exceed 4 GB in file size, or if you otherwise need the extended support of the RF64 format, set your output WAV file format to RF64.

  • sampleRate :: Maybe Natural

    Sample rate in Hz.

Instances

Instances details
Eq WavSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavSettings

Read WavSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavSettings

Show WavSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavSettings

Generic WavSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavSettings

Associated Types

type Rep WavSettings :: Type -> Type #

NFData WavSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavSettings

Methods

rnf :: WavSettings -> () #

Hashable WavSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavSettings

ToJSON WavSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavSettings

FromJSON WavSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavSettings

type Rep WavSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WavSettings

type Rep WavSettings = D1 ('MetaData "WavSettings" "Amazonka.MediaConvert.Types.WavSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "WavSettings'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "bitDepth") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "channels") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "format") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe WavFormat)) :*: S1 ('MetaSel ('Just "sampleRate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))

newWavSettings :: WavSettings Source #

Create a value of WavSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:bitDepth:WavSettings', wavSettings_bitDepth - Specify Bit depth (BitDepth), in bits per sample, to choose the encoding quality for this audio track.

$sel:channels:WavSettings', wavSettings_channels - Specify the number of channels in this output audio track. Valid values are 1 and even numbers up to 64. For example, 1, 2, 4, 6, and so on, up to 64.

$sel:format:WavSettings', wavSettings_format - The service defaults to using RIFF for WAV outputs. If your output audio is likely to exceed 4 GB in file size, or if you otherwise need the extended support of the RF64 format, set your output WAV file format to RF64.

$sel:sampleRate:WavSettings', wavSettings_sampleRate - Sample rate in Hz.

wavSettings_bitDepth :: Lens' WavSettings (Maybe Natural) Source #

Specify Bit depth (BitDepth), in bits per sample, to choose the encoding quality for this audio track.

wavSettings_channels :: Lens' WavSettings (Maybe Natural) Source #

Specify the number of channels in this output audio track. Valid values are 1 and even numbers up to 64. For example, 1, 2, 4, 6, and so on, up to 64.

wavSettings_format :: Lens' WavSettings (Maybe WavFormat) Source #

The service defaults to using RIFF for WAV outputs. If your output audio is likely to exceed 4 GB in file size, or if you otherwise need the extended support of the RF64 format, set your output WAV file format to RF64.

wavSettings_sampleRate :: Lens' WavSettings (Maybe Natural) Source #

Sample rate in Hz.
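
Putting the WAV fields together, the sketch below describes a 24-bit, stereo, 48 kHz track. It assumes the (&) and (?~) setters from the lens package; all fields are optional and default when omitted.

  import Amazonka.MediaConvert.Types
  import Control.Lens ((&), (?~))

  -- Illustrative PCM/WAV audio track: 24-bit, 2 channels, 48 kHz.
  wavExample :: WavSettings
  wavExample =
    newWavSettings
      & wavSettings_bitDepth   ?~ 24
      & wavSettings_channels   ?~ 2
      & wavSettings_sampleRate ?~ 48000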

WebvttDestinationSettings

data WebvttDestinationSettings Source #

Settings related to WebVTT captions. WebVTT is a sidecar format that holds captions in a file that is separate from the video container. Set up sidecar captions in the same output group as your video, but in a different output. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/ttml-and-webvtt-output-captions.html. When you work directly in your JSON job specification, include this object and any required children when you set destinationType to WebVTT.

See: newWebvttDestinationSettings smart constructor.

Constructors

WebvttDestinationSettings' 

Fields

  • stylePassthrough :: Maybe WebvttStylePassthrough

    Set Style passthrough (StylePassthrough) to ENABLED to use the available style, color, and position information from your input captions. MediaConvert uses default settings for any missing style and position information in your input captions. Set Style passthrough to DISABLED, or leave blank, to ignore the style and position information from your input captions and use simplified output captions.

Instances

Instances details
Eq WebvttDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttDestinationSettings

Read WebvttDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttDestinationSettings

Show WebvttDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttDestinationSettings

Generic WebvttDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttDestinationSettings

Associated Types

type Rep WebvttDestinationSettings :: Type -> Type #

NFData WebvttDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttDestinationSettings

Hashable WebvttDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttDestinationSettings

ToJSON WebvttDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttDestinationSettings

FromJSON WebvttDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttDestinationSettings

type Rep WebvttDestinationSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttDestinationSettings

type Rep WebvttDestinationSettings = D1 ('MetaData "WebvttDestinationSettings" "Amazonka.MediaConvert.Types.WebvttDestinationSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "WebvttDestinationSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "stylePassthrough") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe WebvttStylePassthrough))))

newWebvttDestinationSettings :: WebvttDestinationSettings Source #

Create a value of WebvttDestinationSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:stylePassthrough:WebvttDestinationSettings', webvttDestinationSettings_stylePassthrough - Set Style passthrough (StylePassthrough) to ENABLED to use the available style, color, and position information from your input captions. MediaConvert uses default settings for any missing style and position information in your input captions. Set Style passthrough to DISABLED, or leave blank, to ignore the style and position information from your input captions and use simplified output captions.

webvttDestinationSettings_stylePassthrough :: Lens' WebvttDestinationSettings (Maybe WebvttStylePassthrough) Source #

Set Style passthrough (StylePassthrough) to ENABLED to use the available style, color, and position information from your input captions. MediaConvert uses default settings for any missing style and position information in your input captions. Set Style passthrough to DISABLED, or leave blank, to ignore the style and position information from your input captions and use simplified output captions.
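
For instance, turning on style passthrough looks roughly like the sketch below. It assumes the (&) and (?~) setters from the lens package, and that the ENABLED value of WebvttStylePassthrough is exposed under the library's usual TypeName_VALUE pattern-synonym naming; check the WebvttStylePassthrough documentation for the exact name.

  import Amazonka.MediaConvert.Types
  import Control.Lens ((&), (?~))

  -- Keep the style, color, and position information from the input captions.
  -- WebvttStylePassthrough_ENABLED is the assumed pattern-synonym name.
  webvttCaptions :: WebvttDestinationSettings
  webvttCaptions =
    newWebvttDestinationSettings
      & webvttDestinationSettings_stylePassthrough ?~ WebvttStylePassthrough_ENABLED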

WebvttHlsSourceSettings

data WebvttHlsSourceSettings Source #

Settings specific to WebVTT sources in an HLS alternative rendition group. Specify the properties (renditionGroupId, renditionName, or renditionLanguageCode) to identify the unique subtitle track among the alternative rendition groups present in the HLS manifest. If no unique track is found, or if multiple tracks match the specified properties, the job fails. If there is only one subtitle track in the rendition group, the settings can be left empty and the default subtitle track will be chosen. If your caption source is a sidecar file, use FileSourceSettings instead of WebvttHlsSourceSettings.

See: newWebvttHlsSourceSettings smart constructor.

Constructors

WebvttHlsSourceSettings' 

Fields

  • renditionName :: Maybe Text

    Optional. Specify media name.

  • renditionGroupId :: Maybe Text

    Optional. Specify alternative group ID.

  • renditionLanguageCode :: Maybe LanguageCode

    Optional. Specify the ISO 639-2 or ISO 639-3 code in the language property.

Instances

Instances details
Eq WebvttHlsSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttHlsSourceSettings

Read WebvttHlsSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttHlsSourceSettings

Show WebvttHlsSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttHlsSourceSettings

Generic WebvttHlsSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttHlsSourceSettings

Associated Types

type Rep WebvttHlsSourceSettings :: Type -> Type #

NFData WebvttHlsSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttHlsSourceSettings

Methods

rnf :: WebvttHlsSourceSettings -> () #

Hashable WebvttHlsSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttHlsSourceSettings

ToJSON WebvttHlsSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttHlsSourceSettings

FromJSON WebvttHlsSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttHlsSourceSettings

type Rep WebvttHlsSourceSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.WebvttHlsSourceSettings

type Rep WebvttHlsSourceSettings = D1 ('MetaData "WebvttHlsSourceSettings" "Amazonka.MediaConvert.Types.WebvttHlsSourceSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "WebvttHlsSourceSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "renditionName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "renditionGroupId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "renditionLanguageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe LanguageCode)))))

newWebvttHlsSourceSettings :: WebvttHlsSourceSettings Source #

Create a value of WebvttHlsSourceSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:renditionName:WebvttHlsSourceSettings', webvttHlsSourceSettings_renditionName - Optional. Specify media name.

$sel:renditionGroupId:WebvttHlsSourceSettings', webvttHlsSourceSettings_renditionGroupId - Optional. Specify alternative group ID.

$sel:renditionLanguageCode:WebvttHlsSourceSettings', webvttHlsSourceSettings_renditionLanguageCode - Optional. Specify the ISO 639-2 or ISO 639-3 code in the language property.

webvttHlsSourceSettings_renditionName :: Lens' WebvttHlsSourceSettings (Maybe Text) Source #

Optional. Specify media name.

webvttHlsSourceSettings_renditionGroupId :: Lens' WebvttHlsSourceSettings (Maybe Text) Source #

Optional. Specify alternative group ID.

webvttHlsSourceSettings_renditionLanguageCode :: Lens' WebvttHlsSourceSettings (Maybe LanguageCode) Source #

Optional. Specify the ISO 639-2 or ISO 639-3 code in the language property.
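
As a sketch of how these properties combine, the example below pins the caption source to a single rendition by group ID and rendition name; the "subs" and "English" values are purely illustrative, and the (&) and (?~) setters come from the lens package.

  {-# LANGUAGE OverloadedStrings #-}

  import Amazonka.MediaConvert.Types
  import Control.Lens ((&), (?~))

  -- Identify one WebVTT track in the HLS alternative rendition group.
  -- If the combination matches no track, or more than one, the job fails.
  webvttSource :: WebvttHlsSourceSettings
  webvttSource =
    newWebvttHlsSourceSettings
      & webvttHlsSourceSettings_renditionGroupId ?~ "subs"
      & webvttHlsSourceSettings_renditionName    ?~ "English"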

Xavc4kIntraCbgProfileSettings

data Xavc4kIntraCbgProfileSettings Source #

Required when you set (Profile) under (VideoDescription)>(CodecSettings)>(XavcSettings) to the value XAVC_4K_INTRA_CBG.

See: newXavc4kIntraCbgProfileSettings smart constructor.

Constructors

Xavc4kIntraCbgProfileSettings' 

Fields

  • xavcClass :: Maybe Xavc4kIntraCbgProfileClass

    Specify the XAVC Intra 4k (CBG) Class to set the bitrate of your output. Outputs of the same class have similar image quality over the operating points that are valid for that class.

Instances

Instances details
Eq Xavc4kIntraCbgProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileSettings

Read Xavc4kIntraCbgProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileSettings

Show Xavc4kIntraCbgProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileSettings

Generic Xavc4kIntraCbgProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileSettings

Associated Types

type Rep Xavc4kIntraCbgProfileSettings :: Type -> Type #

NFData Xavc4kIntraCbgProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileSettings

Hashable Xavc4kIntraCbgProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileSettings

ToJSON Xavc4kIntraCbgProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileSettings

FromJSON Xavc4kIntraCbgProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileSettings

type Rep Xavc4kIntraCbgProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileSettings

type Rep Xavc4kIntraCbgProfileSettings = D1 ('MetaData "Xavc4kIntraCbgProfileSettings" "Amazonka.MediaConvert.Types.Xavc4kIntraCbgProfileSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "Xavc4kIntraCbgProfileSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "xavcClass") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Xavc4kIntraCbgProfileClass))))

newXavc4kIntraCbgProfileSettings :: Xavc4kIntraCbgProfileSettings Source #

Create a value of Xavc4kIntraCbgProfileSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:xavcClass:Xavc4kIntraCbgProfileSettings', xavc4kIntraCbgProfileSettings_xavcClass - Specify the XAVC Intra 4k (CBG) Class to set the bitrate of your output. Outputs of the same class have similar image quality over the operating points that are valid for that class.

xavc4kIntraCbgProfileSettings_xavcClass :: Lens' Xavc4kIntraCbgProfileSettings (Maybe Xavc4kIntraCbgProfileClass) Source #

Specify the XAVC Intra 4k (CBG) Class to set the bitrate of your output. Outputs of the same class have similar image quality over the operating points that are valid for that class.
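
A short sketch of setting the class, assuming the (&) and (?~) setters from the lens package. Xavc4kIntraCbgProfileClass_CLASS_300 is an assumed pattern-synonym name following the library's TypeName_VALUE convention; confirm it against the generated Xavc4kIntraCbgProfileClass values.

  import Amazonka.MediaConvert.Types
  import Control.Lens ((&), (?~))

  -- Pick the Class 300 operating point for an XAVC Intra 4k (CBG) output.
  -- The pattern-synonym name below is assumed, not taken from this page.
  xavcIntraCbg :: Xavc4kIntraCbgProfileSettings
  xavcIntraCbg =
    newXavc4kIntraCbgProfileSettings
      & xavc4kIntraCbgProfileSettings_xavcClass ?~ Xavc4kIntraCbgProfileClass_CLASS_300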

Xavc4kIntraVbrProfileSettings

data Xavc4kIntraVbrProfileSettings Source #

Required when you set (Profile) under (VideoDescription)>(CodecSettings)>(XavcSettings) to the value XAVC_4K_INTRA_VBR.

See: newXavc4kIntraVbrProfileSettings smart constructor.

Constructors

Xavc4kIntraVbrProfileSettings' 

Fields

  • xavcClass :: Maybe Xavc4kIntraVbrProfileClass

    Specify the XAVC Intra 4k (VBR) Class to set the bitrate of your output. Outputs of the same class have similar image quality over the operating points that are valid for that class.

Instances

Instances details
Eq Xavc4kIntraVbrProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileSettings

Read Xavc4kIntraVbrProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileSettings

Show Xavc4kIntraVbrProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileSettings

Generic Xavc4kIntraVbrProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileSettings

Associated Types

type Rep Xavc4kIntraVbrProfileSettings :: Type -> Type #

NFData Xavc4kIntraVbrProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileSettings

Hashable Xavc4kIntraVbrProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileSettings

ToJSON Xavc4kIntraVbrProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileSettings

FromJSON Xavc4kIntraVbrProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileSettings

type Rep Xavc4kIntraVbrProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileSettings

type Rep Xavc4kIntraVbrProfileSettings = D1 ('MetaData "Xavc4kIntraVbrProfileSettings" "Amazonka.MediaConvert.Types.Xavc4kIntraVbrProfileSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "Xavc4kIntraVbrProfileSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "xavcClass") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Xavc4kIntraVbrProfileClass))))

newXavc4kIntraVbrProfileSettings :: Xavc4kIntraVbrProfileSettings Source #

Create a value of Xavc4kIntraVbrProfileSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:xavcClass:Xavc4kIntraVbrProfileSettings', xavc4kIntraVbrProfileSettings_xavcClass - Specify the XAVC Intra 4k (VBR) Class to set the bitrate of your output. Outputs of the same class have similar image quality over the operating points that are valid for that class.

xavc4kIntraVbrProfileSettings_xavcClass :: Lens' Xavc4kIntraVbrProfileSettings (Maybe Xavc4kIntraVbrProfileClass) Source #

Specify the XAVC Intra 4k (VBR) Class to set the bitrate of your output. Outputs of the same class have similar image quality over the operating points that are valid for that class.

Xavc4kProfileSettings

data Xavc4kProfileSettings Source #

Required when you set (Profile) under (VideoDescription)>(CodecSettings)>(XavcSettings) to the value XAVC_4K.

See: newXavc4kProfileSettings smart constructor.

Constructors

Xavc4kProfileSettings' 

Fields

  • qualityTuningLevel :: Maybe Xavc4kProfileQualityTuningLevel

    Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.

  • hrdBufferSize :: Maybe Natural

    Specify the size of the buffer that MediaConvert uses in the HRD buffer model for this output. Specify this value in bits; for example, enter five megabits as 5000000. When you don't set this value, or you set it to zero, MediaConvert calculates the default by doubling the bitrate of this output point.

  • slices :: Maybe Natural

    Number of slices per picture. Must be less than or equal to the number of macroblock rows for progressive pictures, and less than or equal to half the number of macroblock rows for interlaced pictures.

  • bitrateClass :: Maybe Xavc4kProfileBitrateClass

    Specify the XAVC 4k (Long GOP) Bitrate Class to set the bitrate of your output. Outputs of the same class have similar image quality over the operating points that are valid for that class.

  • flickerAdaptiveQuantization :: Maybe XavcFlickerAdaptiveQuantization

    The best way to set up adaptive quantization is to keep the default value, Auto (AUTO), for the setting Adaptive quantization (XavcAdaptiveQuantization). When you do so, MediaConvert automatically applies the best types of quantization for your video content. Include this setting in your JSON job specification only when you choose to change the default value for Adaptive quantization. Enable this setting to have the encoder reduce I-frame pop. I-frame pop appears as a visual flicker that can arise when the encoder saves bits by copying some macroblocks many times from frame to frame, and then refreshes them at the I-frame. When you enable this setting, the encoder updates these macroblocks slightly more often to smooth out the flicker. This setting is disabled by default. Related setting: In addition to enabling this setting, you must also set Adaptive quantization (adaptiveQuantization) to a value other than Off (OFF) or Auto (AUTO). Use Adaptive quantization to adjust the degree of smoothing that Flicker adaptive quantization provides.

  • codecProfile :: Maybe Xavc4kProfileCodecProfile

    Specify the codec profile for this output. Choose High, 8-bit, 4:2:0 (HIGH) or High, 10-bit, 4:2:2 (HIGH_422). These profiles are specified in ITU-T H.264.

  • gopBReference :: Maybe XavcGopBReference

    Specify whether the encoder uses B-frames as reference frames for other pictures in the same GOP. Choose Allow (ENABLED) to allow the encoder to use B-frames as reference frames. Choose Don't allow (DISABLED) to prevent the encoder from using B-frames as reference frames.

  • gopClosedCadence :: Maybe Natural

    Frequency of closed GOPs. In streaming applications, it is recommended that this be set to 1 so a decoder joining mid-stream will receive an IDR frame as quickly as possible. Setting this value to 0 will break output segmenting.

Instances

Instances details
Eq Xavc4kProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileSettings

Read Xavc4kProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileSettings

Show Xavc4kProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileSettings

Generic Xavc4kProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileSettings

Associated Types

type Rep Xavc4kProfileSettings :: Type -> Type #

NFData Xavc4kProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileSettings

Methods

rnf :: Xavc4kProfileSettings -> () #

Hashable Xavc4kProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileSettings

ToJSON Xavc4kProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileSettings

FromJSON Xavc4kProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileSettings

type Rep Xavc4kProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.Xavc4kProfileSettings

newXavc4kProfileSettings :: Xavc4kProfileSettings Source #

Create a value of Xavc4kProfileSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:qualityTuningLevel:Xavc4kProfileSettings', xavc4kProfileSettings_qualityTuningLevel - Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.

$sel:hrdBufferSize:Xavc4kProfileSettings', xavc4kProfileSettings_hrdBufferSize - Specify the size of the buffer that MediaConvert uses in the HRD buffer model for this output. Specify this value in bits; for example, enter five megabits as 5000000. When you don't set this value, or you set it to zero, MediaConvert calculates the default by doubling the bitrate of this output point.

$sel:slices:Xavc4kProfileSettings', xavc4kProfileSettings_slices - Number of slices per picture. Must be less than or equal to the number of macroblock rows for progressive pictures, and less than or equal to half the number of macroblock rows for interlaced pictures.

$sel:bitrateClass:Xavc4kProfileSettings', xavc4kProfileSettings_bitrateClass - Specify the XAVC 4k (Long GOP) Bitrate Class to set the bitrate of your output. Outputs of the same class have similar image quality over the operating points that are valid for that class.

$sel:flickerAdaptiveQuantization:Xavc4kProfileSettings', xavc4kProfileSettings_flickerAdaptiveQuantization - The best way to set up adaptive quantization is to keep the default value, Auto (AUTO), for the setting Adaptive quantization (XavcAdaptiveQuantization). When you do so, MediaConvert automatically applies the best types of quantization for your video content. Include this setting in your JSON job specification only when you choose to change the default value for Adaptive quantization. Enable this setting to have the encoder reduce I-frame pop. I-frame pop appears as a visual flicker that can arise when the encoder saves bits by copying some macroblocks many times from frame to frame, and then refreshes them at the I-frame. When you enable this setting, the encoder updates these macroblocks slightly more often to smooth out the flicker. This setting is disabled by default. Related setting: In addition to enabling this setting, you must also set Adaptive quantization (adaptiveQuantization) to a value other than Off (OFF) or Auto (AUTO). Use Adaptive quantization to adjust the degree of smoothing that Flicker adaptive quantization provides.

$sel:codecProfile:Xavc4kProfileSettings', xavc4kProfileSettings_codecProfile - Specify the codec profile for this output. Choose High, 8-bit, 4:2:0 (HIGH) or High, 10-bit, 4:2:2 (HIGH_422). These profiles are specified in ITU-T H.264.

$sel:gopBReference:Xavc4kProfileSettings', xavc4kProfileSettings_gopBReference - Specify whether the encoder uses B-frames as reference frames for other pictures in the same GOP. Choose Allow (ENABLED) to allow the encoder to use B-frames as reference frames. Choose Don't allow (DISABLED) to prevent the encoder from using B-frames as reference frames.

$sel:gopClosedCadence:Xavc4kProfileSettings', xavc4kProfileSettings_gopClosedCadence - Frequency of closed GOPs. In streaming applications, it is recommended that this be set to 1 so a decoder joining mid-stream will receive an IDR frame as quickly as possible. Setting this value to 0 will break output segmenting.

xavc4kProfileSettings_qualityTuningLevel :: Lens' Xavc4kProfileSettings (Maybe Xavc4kProfileQualityTuningLevel) Source #

Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.

xavc4kProfileSettings_hrdBufferSize :: Lens' Xavc4kProfileSettings (Maybe Natural) Source #

Specify the size of the buffer that MediaConvert uses in the HRD buffer model for this output. Specify this value in bits; for example, enter five megabits as 5000000. When you don't set this value, or you set it to zero, MediaConvert calculates the default by doubling the bitrate of this output point.

xavc4kProfileSettings_slices :: Lens' Xavc4kProfileSettings (Maybe Natural) Source #

Number of slices per picture. Must be less than or equal to the number of macroblock rows for progressive pictures, and less than or equal to half the number of macroblock rows for interlaced pictures.

xavc4kProfileSettings_bitrateClass :: Lens' Xavc4kProfileSettings (Maybe Xavc4kProfileBitrateClass) Source #

Specify the XAVC 4k (Long GOP) Bitrate Class to set the bitrate of your output. Outputs of the same class have similar image quality over the operating points that are valid for that class.

xavc4kProfileSettings_flickerAdaptiveQuantization :: Lens' Xavc4kProfileSettings (Maybe XavcFlickerAdaptiveQuantization) Source #

The best way to set up adaptive quantization is to keep the default value, Auto (AUTO), for the setting Adaptive quantization (XavcAdaptiveQuantization). When you do so, MediaConvert automatically applies the best types of quantization for your video content. Include this setting in your JSON job specification only when you choose to change the default value for Adaptive quantization. Enable this setting to have the encoder reduce I-frame pop. I-frame pop appears as a visual flicker that can arise when the encoder saves bits by copying some macroblocks many times from frame to frame, and then refreshes them at the I-frame. When you enable this setting, the encoder updates these macroblocks slightly more often to smooth out the flicker. This setting is disabled by default. Related setting: In addition to enabling this setting, you must also set Adaptive quantization (adaptiveQuantization) to a value other than Off (OFF) or Auto (AUTO). Use Adaptive quantization to adjust the degree of smoothing that Flicker adaptive quantization provides.

xavc4kProfileSettings_codecProfile :: Lens' Xavc4kProfileSettings (Maybe Xavc4kProfileCodecProfile) Source #

Specify the codec profile for this output. Choose High, 8-bit, 4:2:0 (HIGH) or High, 10-bit, 4:2:2 (HIGH_422). These profiles are specified in ITU-T H.264.

xavc4kProfileSettings_gopBReference :: Lens' Xavc4kProfileSettings (Maybe XavcGopBReference) Source #

Specify whether the encoder uses B-frames as reference frames for other pictures in the same GOP. Choose Allow (ENABLED) to allow the encoder to use B-frames as reference frames. Choose Don't allow (DISABLED) to prevent the encoder from using B-frames as reference frames.

xavc4kProfileSettings_gopClosedCadence :: Lens' Xavc4kProfileSettings (Maybe Natural) Source #

Frequency of closed GOPs. In streaming applications, it is recommended that this be set to 1 so a decoder joining mid-stream will receive an IDR frame as quickly as possible. Setting this value to 0 will break output segmenting.
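 
A sketch of populating a few of these fields through the lenses above, again assuming Control.Lens for (&) and (?~). The enum values Xavc4kProfileCodecProfile_HIGH and Xavc4kProfileBitrateClass_BITRATE_CLASS_140 follow Amazonka's usual naming and are assumptions, not confirmed by this page.

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

-- Start from the smart constructor and fill in only the fields you need.
xavc4kLongGop :: Xavc4kProfileSettings
xavc4kLongGop =
  newXavc4kProfileSettings
    & xavc4kProfileSettings_codecProfile ?~ Xavc4kProfileCodecProfile_HIGH          -- assumed constructor name
    & xavc4kProfileSettings_bitrateClass ?~ Xavc4kProfileBitrateClass_BITRATE_CLASS_140  -- assumed constructor name
    & xavc4kProfileSettings_gopClosedCadence ?~ 1  -- closed GOP every GOP; recommended for streaming
    & xavc4kProfileSettings_slices ?~ 8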

XavcHdIntraCbgProfileSettings

data XavcHdIntraCbgProfileSettings Source #

Required when you set (Profile) under (VideoDescription)>(CodecSettings)>(XavcSettings) to the value XAVC_HD_INTRA_CBG.

See: newXavcHdIntraCbgProfileSettings smart constructor.

Constructors

XavcHdIntraCbgProfileSettings' 

Fields

  • xavcClass :: Maybe XavcHdIntraCbgProfileClass

    Specify the XAVC Intra HD (CBG) Class to set the bitrate of your output. Outputs of the same class have similar image quality over the operating points that are valid for that class.

Instances

Instances details
Eq XavcHdIntraCbgProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileSettings

Read XavcHdIntraCbgProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileSettings

Show XavcHdIntraCbgProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileSettings

Generic XavcHdIntraCbgProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileSettings

Associated Types

type Rep XavcHdIntraCbgProfileSettings :: Type -> Type #

NFData XavcHdIntraCbgProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileSettings

Hashable XavcHdIntraCbgProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileSettings

ToJSON XavcHdIntraCbgProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileSettings

FromJSON XavcHdIntraCbgProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileSettings

type Rep XavcHdIntraCbgProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileSettings

type Rep XavcHdIntraCbgProfileSettings = D1 ('MetaData "XavcHdIntraCbgProfileSettings" "Amazonka.MediaConvert.Types.XavcHdIntraCbgProfileSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "XavcHdIntraCbgProfileSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "xavcClass") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe XavcHdIntraCbgProfileClass))))

newXavcHdIntraCbgProfileSettings :: XavcHdIntraCbgProfileSettings Source #

Create a value of XavcHdIntraCbgProfileSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:xavcClass:XavcHdIntraCbgProfileSettings', xavcHdIntraCbgProfileSettings_xavcClass - Specify the XAVC Intra HD (CBG) Class to set the bitrate of your output. Outputs of the same class have similar image quality over the operating points that are valid for that class.

xavcHdIntraCbgProfileSettings_xavcClass :: Lens' XavcHdIntraCbgProfileSettings (Maybe XavcHdIntraCbgProfileClass) Source #

Specify the XAVC Intra HD (CBG) Class to set the bitrate of your output. Outputs of the same class have similar image quality over the operating points that are valid for that class.
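 
As with the other XAVC profile records, a minimal sketch, assuming Control.Lens operators and that the class value XavcHdIntraCbgProfileClass_CLASS_100 exists under Amazonka's usual naming.

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

hdIntraCbg :: XavcHdIntraCbgProfileSettings
hdIntraCbg =
  newXavcHdIntraCbgProfileSettings
    & xavcHdIntraCbgProfileSettings_xavcClass ?~ XavcHdIntraCbgProfileClass_CLASS_100  -- assumed constructor name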

XavcHdProfileSettings

data XavcHdProfileSettings Source #

Required when you set (Profile) under (VideoDescription)>(CodecSettings)>(XavcSettings) to the value XAVC_HD.

See: newXavcHdProfileSettings smart constructor.

Constructors

XavcHdProfileSettings' 

Fields

  • qualityTuningLevel :: Maybe XavcHdProfileQualityTuningLevel

    Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.

  • hrdBufferSize :: Maybe Natural

    Specify the size of the buffer that MediaConvert uses in the HRD buffer model for this output. Specify this value in bits; for example, enter five megabits as 5000000. When you don't set this value, or you set it to zero, MediaConvert calculates the default by doubling the bitrate of this output point.

  • slices :: Maybe Natural

    Number of slices per picture. Must be less than or equal to the number of macroblock rows for progressive pictures, and less than or equal to half the number of macroblock rows for interlaced pictures.

  • bitrateClass :: Maybe XavcHdProfileBitrateClass

    Specify the XAVC HD (Long GOP) Bitrate Class to set the bitrate of your output. Outputs of the same class have similar image quality over the operating points that are valid for that class.

  • telecine :: Maybe XavcHdProfileTelecine

    Ignore this setting unless you set Frame rate (framerateNumerator divided by framerateDenominator) to 29.970. If your input framerate is 23.976, choose Hard (HARD). Otherwise, keep the default value None (NONE). For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/working-with-telecine-and-inverse-telecine.html.

  • interlaceMode :: Maybe XavcInterlaceMode

    Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE), to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first polarity, depending on which of the Follow options you choose.

  • flickerAdaptiveQuantization :: Maybe XavcFlickerAdaptiveQuantization

    The best way to set up adaptive quantization is to keep the default value, Auto (AUTO), for the setting Adaptive quantization (XavcAdaptiveQuantization). When you do so, MediaConvert automatically applies the best types of quantization for your video content. Include this setting in your JSON job specification only when you choose to change the default value for Adaptive quantization. Enable this setting to have the encoder reduce I-frame pop. I-frame pop appears as a visual flicker that can arise when the encoder saves bits by copying some macroblocks many times from frame to frame, and then refreshes them at the I-frame. When you enable this setting, the encoder updates these macroblocks slightly more often to smooth out the flicker. This setting is disabled by default. Related setting: In addition to enabling this setting, you must also set Adaptive quantization (adaptiveQuantization) to a value other than Off (OFF) or Auto (AUTO). Use Adaptive quantization to adjust the degree of smoothing that Flicker adaptive quantization provides.

  • gopBReference :: Maybe XavcGopBReference

    Specify whether the encoder uses B-frames as reference frames for other pictures in the same GOP. Choose Allow (ENABLED) to allow the encoder to use B-frames as reference frames. Choose Don't allow (DISABLED) to prevent the encoder from using B-frames as reference frames.

  • gopClosedCadence :: Maybe Natural

    Frequency of closed GOPs. In streaming applications, it is recommended that this be set to 1 so a decoder joining mid-stream will receive an IDR frame as quickly as possible. Setting this value to 0 will break output segmenting.

Instances

Instances details
Eq XavcHdProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileSettings

Read XavcHdProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileSettings

Show XavcHdProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileSettings

Generic XavcHdProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileSettings

Associated Types

type Rep XavcHdProfileSettings :: Type -> Type #

NFData XavcHdProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileSettings

Methods

rnf :: XavcHdProfileSettings -> () #

Hashable XavcHdProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileSettings

ToJSON XavcHdProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileSettings

FromJSON XavcHdProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileSettings

type Rep XavcHdProfileSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcHdProfileSettings

newXavcHdProfileSettings :: XavcHdProfileSettings Source #

Create a value of XavcHdProfileSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:qualityTuningLevel:XavcHdProfileSettings', xavcHdProfileSettings_qualityTuningLevel - Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.

$sel:hrdBufferSize:XavcHdProfileSettings', xavcHdProfileSettings_hrdBufferSize - Specify the size of the buffer that MediaConvert uses in the HRD buffer model for this output. Specify this value in bits; for example, enter five megabits as 5000000. When you don't set this value, or you set it to zero, MediaConvert calculates the default by doubling the bitrate of this output point.

$sel:slices:XavcHdProfileSettings', xavcHdProfileSettings_slices - Number of slices per picture. Must be less than or equal to the number of macroblock rows for progressive pictures, and less than or equal to half the number of macroblock rows for interlaced pictures.

$sel:bitrateClass:XavcHdProfileSettings', xavcHdProfileSettings_bitrateClass - Specify the XAVC HD (Long GOP) Bitrate Class to set the bitrate of your output. Outputs of the same class have similar image quality over the operating points that are valid for that class.

$sel:telecine:XavcHdProfileSettings', xavcHdProfileSettings_telecine - Ignore this setting unless you set Frame rate (framerateNumerator divided by framerateDenominator) to 29.970. If your input framerate is 23.976, choose Hard (HARD). Otherwise, keep the default value None (NONE). For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/working-with-telecine-and-inverse-telecine.html.

$sel:interlaceMode:XavcHdProfileSettings', xavcHdProfileSettings_interlaceMode - Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE), to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first polarity, depending on which of the Follow options you choose.

$sel:flickerAdaptiveQuantization:XavcHdProfileSettings', xavcHdProfileSettings_flickerAdaptiveQuantization - The best way to set up adaptive quantization is to keep the default value, Auto (AUTO), for the setting Adaptive quantization (XavcAdaptiveQuantization). When you do so, MediaConvert automatically applies the best types of quantization for your video content. Include this setting in your JSON job specification only when you choose to change the default value for Adaptive quantization. Enable this setting to have the encoder reduce I-frame pop. I-frame pop appears as a visual flicker that can arise when the encoder saves bits by copying some macroblocks many times from frame to frame, and then refreshes them at the I-frame. When you enable this setting, the encoder updates these macroblocks slightly more often to smooth out the flicker. This setting is disabled by default. Related setting: In addition to enabling this setting, you must also set Adaptive quantization (adaptiveQuantization) to a value other than Off (OFF) or Auto (AUTO). Use Adaptive quantization to adjust the degree of smoothing that Flicker adaptive quantization provides.

$sel:gopBReference:XavcHdProfileSettings', xavcHdProfileSettings_gopBReference - Specify whether the encoder uses B-frames as reference frames for other pictures in the same GOP. Choose Allow (ENABLED) to allow the encoder to use B-frames as reference frames. Choose Don't allow (DISABLED) to prevent the encoder from using B-frames as reference frames.

$sel:gopClosedCadence:XavcHdProfileSettings', xavcHdProfileSettings_gopClosedCadence - Frequency of closed GOPs. In streaming applications, it is recommended that this be set to 1 so a decoder joining mid-stream will receive an IDR frame as quickly as possible. Setting this value to 0 will break output segmenting.

xavcHdProfileSettings_qualityTuningLevel :: Lens' XavcHdProfileSettings (Maybe XavcHdProfileQualityTuningLevel) Source #

Optional. Use Quality tuning level (qualityTuningLevel) to choose how you want to trade off encoding speed for output video quality. The default behavior is faster, lower quality, single-pass encoding.

xavcHdProfileSettings_hrdBufferSize :: Lens' XavcHdProfileSettings (Maybe Natural) Source #

Specify the size of the buffer that MediaConvert uses in the HRD buffer model for this output. Specify this value in bits; for example, enter five megabits as 5000000. When you don't set this value, or you set it to zero, MediaConvert calculates the default by doubling the bitrate of this output point.

xavcHdProfileSettings_slices :: Lens' XavcHdProfileSettings (Maybe Natural) Source #

Number of slices per picture. Must be less than or equal to the number of macroblock rows for progressive pictures, and less than or equal to half the number of macroblock rows for interlaced pictures.

xavcHdProfileSettings_bitrateClass :: Lens' XavcHdProfileSettings (Maybe XavcHdProfileBitrateClass) Source #

Specify the XAVC HD (Long GOP) Bitrate Class to set the bitrate of your output. Outputs of the same class have similar image quality over the operating points that are valid for that class.

xavcHdProfileSettings_telecine :: Lens' XavcHdProfileSettings (Maybe XavcHdProfileTelecine) Source #

Ignore this setting unless you set Frame rate (framerateNumerator divided by framerateDenominator) to 29.970. If your input framerate is 23.976, choose Hard (HARD). Otherwise, keep the default value None (NONE). For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/working-with-telecine-and-inverse-telecine.html.

xavcHdProfileSettings_interlaceMode :: Lens' XavcHdProfileSettings (Maybe XavcInterlaceMode) Source #

Choose the scan line type for the output. Keep the default value, Progressive (PROGRESSIVE), to create a progressive output, regardless of the scan type of your input. Use Top field first (TOP_FIELD) or Bottom field first (BOTTOM_FIELD) to create an output that's interlaced with the same field polarity throughout. Use Follow, default top (FOLLOW_TOP_FIELD) or Follow, default bottom (FOLLOW_BOTTOM_FIELD) to produce outputs with the same field polarity as the source. For jobs that have multiple inputs, the output field polarity might change over the course of the output. Follow behavior depends on the input scan type. If the source is interlaced, the output will be interlaced with the same polarity as the source. If the source is progressive, the output will be interlaced with top field first or bottom field first polarity, depending on which of the Follow options you choose.

xavcHdProfileSettings_flickerAdaptiveQuantization :: Lens' XavcHdProfileSettings (Maybe XavcFlickerAdaptiveQuantization) Source #

The best way to set up adaptive quantization is to keep the default value, Auto (AUTO), for the setting Adaptive quantization (XavcAdaptiveQuantization). When you do so, MediaConvert automatically applies the best types of quantization for your video content. Include this setting in your JSON job specification only when you choose to change the default value for Adaptive quantization. Enable this setting to have the encoder reduce I-frame pop. I-frame pop appears as a visual flicker that can arise when the encoder saves bits by copying some macroblocks many times from frame to frame, and then refreshes them at the I-frame. When you enable this setting, the encoder updates these macroblocks slightly more often to smooth out the flicker. This setting is disabled by default. Related setting: In addition to enabling this setting, you must also set Adaptive quantization (adaptiveQuantization) to a value other than Off (OFF) or Auto (AUTO). Use Adaptive quantization to adjust the degree of smoothing that Flicker adaptive quantization provides.

xavcHdProfileSettings_gopBReference :: Lens' XavcHdProfileSettings (Maybe XavcGopBReference) Source #

Specify whether the encoder uses B-frames as reference frames for other pictures in the same GOP. Choose Allow (ENABLED) to allow the encoder to use B-frames as reference frames. Choose Don't allow (DISABLED) to prevent the encoder from using B-frames as reference frames.

xavcHdProfileSettings_gopClosedCadence :: Lens' XavcHdProfileSettings (Maybe Natural) Source #

Frequency of closed GOPs. In streaming applications, it is recommended that this be set to 1 so a decoder joining mid-stream will receive an IDR frame as quickly as possible. Setting this value to 0 will break output segmenting.
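 
A sketch of a progressive XAVC HD (Long GOP) configuration using the lenses above, assuming Control.Lens operators; the enum values XavcHdProfileBitrateClass_BITRATE_CLASS_35 and XavcInterlaceMode_PROGRESSIVE follow Amazonka's usual naming and are assumptions.

import Control.Lens ((&), (?~))
import Amazonka.MediaConvert.Types

hdLongGop :: XavcHdProfileSettings
hdLongGop =
  newXavcHdProfileSettings
    & xavcHdProfileSettings_bitrateClass ?~ XavcHdProfileBitrateClass_BITRATE_CLASS_35  -- assumed constructor name
    & xavcHdProfileSettings_interlaceMode ?~ XavcInterlaceMode_PROGRESSIVE              -- assumed constructor name
    & xavcHdProfileSettings_gopClosedCadence ?~ 1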

XavcSettings

data XavcSettings Source #

Required when you set (Codec) under (VideoDescription)>(CodecSettings) to the value XAVC.

See: newXavcSettings smart constructor.

Constructors

XavcSettings' 

Fields

  • temporalAdaptiveQuantization :: Maybe XavcTemporalAdaptiveQuantization

    The best way to set up adaptive quantization is to keep the default value, Auto (AUTO), for the setting Adaptive quantization (adaptiveQuantization). When you do so, MediaConvert automatically applies the best types of quantization for your video content. Include this setting in your JSON job specification only when you choose to change the default value for Adaptive quantization. For this setting, keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on temporal variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas of the frame that aren't moving and uses more bits on complex objects with sharp edges that move a lot. For example, this feature improves the readability of text tickers on newscasts and scoreboards on sports matches. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen that doesn't have moving objects with sharp edges, such as sports athletes' faces, you might choose to disable this feature. Related setting: When you enable temporal adaptive quantization, adjust the strength of the filter with the setting Adaptive quantization (adaptiveQuantization).

  • slowPal :: Maybe XavcSlowPal

    Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output by relabeling the video frames and resampling your audio. Note that enabling this setting will slightly reduce the duration of your video. Related settings: You must also set Frame rate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.

  • xavc4kProfileSettings :: Maybe Xavc4kProfileSettings

    Required when you set (Profile) under (VideoDescription)>(CodecSettings)>(XavcSettings) to the value XAVC_4K.

  • xavcHdIntraCbgProfileSettings :: Maybe XavcHdIntraCbgProfileSettings

    Required when you set (Profile) under (VideoDescription)>(CodecSettings)>(XavcSettings) to the value XAVC_HD_INTRA_CBG.

  • xavc4kIntraVbrProfileSettings :: Maybe Xavc4kIntraVbrProfileSettings

    Required when you set (Profile) under (VideoDescription)>(CodecSettings)>(XavcSettings) to the value XAVC_4K_INTRA_VBR.

  • xavc4kIntraCbgProfileSettings :: Maybe Xavc4kIntraCbgProfileSettings

    Required when you set (Profile) under (VideoDescription)>(CodecSettings)>(XavcSettings) to the value XAVC_4K_INTRA_CBG.

  • profile :: Maybe XavcProfile

    Specify the XAVC profile for this output. For more information, see the Sony documentation at https://www.xavc-info.org/. Note that MediaConvert doesn't support the interlaced video XAVC operating points for XAVC_HD_INTRA_CBG. To create an interlaced XAVC output, choose the profile XAVC_HD.

  • softness :: Maybe Natural

    Ignore this setting unless your downstream workflow requires that you specify it explicitly. Otherwise, we recommend that you adjust the softness of your output by using a lower value for the setting Sharpness (sharpness) or by enabling a noise reducer filter (noiseReducerFilter). The Softness (softness) setting specifies the quantization matrices that the encoder uses. Keep the default value, 0, for flat quantization. Choose the value 1 or 16 to use the default JVT softening quantization matrices from the H.264 specification. Choose a value from 17 to 128 to use planar interpolation. Increasing values from 17 to 128 result in increasing reduction of high-frequency data. The value 128 results in the softest video.

  • framerateDenominator :: Maybe Natural

    When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Frame rate. In this example, specify 23.976.

  • framerateConversionAlgorithm :: Maybe XavcFramerateConversionAlgorithm

    Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

  • entropyEncoding :: Maybe XavcEntropyEncoding

    Optional. Choose a specific entropy encoding mode only when you want to override XAVC recommendations. If you choose the value auto, MediaConvert uses the mode that the XAVC file format specifies given this output's operating point.

  • framerateControl :: Maybe XavcFramerateControl

    If you are using the console, use the Frame rate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list. The framerates shown in the dropdown list are decimal approximations of fractions. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate that you specify in the settings FramerateNumerator and FramerateDenominator.

  • adaptiveQuantization :: Maybe XavcAdaptiveQuantization

    Keep the default value, Auto (AUTO), for this setting to have MediaConvert automatically apply the best types of quantization for your video content. When you want to apply your quantization settings manually, you must set Adaptive quantization (adaptiveQuantization) to a value other than Auto (AUTO). Use this setting to specify the strength of any adaptive quantization filters that you enable. If you don't want MediaConvert to do any adaptive quantization in this transcode, set Adaptive quantization to Off (OFF). Related settings: The value that you choose here applies to the following settings: Flicker adaptive quantization (flickerAdaptiveQuantization), Spatial adaptive quantization (spatialAdaptiveQuantization), and Temporal adaptive quantization (temporalAdaptiveQuantization).

  • framerateNumerator :: Maybe Natural

    When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

  • xavcHdProfileSettings :: Maybe XavcHdProfileSettings

    Required when you set (Profile) under (VideoDescription)>(CodecSettings)>(XavcSettings) to the value XAVC_HD.

  • spatialAdaptiveQuantization :: Maybe XavcSpatialAdaptiveQuantization

    The best way to set up adaptive quantization is to keep the default value, Auto (AUTO), for the setting Adaptive quantization (adaptiveQuantization). When you do so, MediaConvert automatically applies the best types of quantization for your video content. Include this setting in your JSON job specification only when you choose to change the default value for Adaptive quantization. For this setting, keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (adaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.

Instances

Instances details
Eq XavcSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSettings

Read XavcSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSettings

Show XavcSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSettings

Generic XavcSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSettings

Associated Types

type Rep XavcSettings :: Type -> Type #

NFData XavcSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSettings

Methods

rnf :: XavcSettings -> () #

Hashable XavcSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSettings

ToJSON XavcSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSettings

FromJSON XavcSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSettings

type Rep XavcSettings Source # 
Instance details

Defined in Amazonka.MediaConvert.Types.XavcSettings

type Rep XavcSettings = D1 ('MetaData "XavcSettings" "Amazonka.MediaConvert.Types.XavcSettings" "libZSservicesZSamazonka-mediaconvertZSamazonka-mediaconvert" 'False) (C1 ('MetaCons "XavcSettings'" 'PrefixI 'True) ((((S1 ('MetaSel ('Just "temporalAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe XavcTemporalAdaptiveQuantization)) :*: S1 ('MetaSel ('Just "slowPal") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe XavcSlowPal))) :*: (S1 ('MetaSel ('Just "xavc4kProfileSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Xavc4kProfileSettings)) :*: S1 ('MetaSel ('Just "xavcHdIntraCbgProfileSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe XavcHdIntraCbgProfileSettings)))) :*: ((S1 ('MetaSel ('Just "xavc4kIntraVbrProfileSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Xavc4kIntraVbrProfileSettings)) :*: S1 ('MetaSel ('Just "xavc4kIntraCbgProfileSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Xavc4kIntraCbgProfileSettings))) :*: (S1 ('MetaSel ('Just "profile") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe XavcProfile)) :*: S1 ('MetaSel ('Just "softness") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))) :*: (((S1 ('MetaSel ('Just "framerateDenominator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "framerateConversionAlgorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe XavcFramerateConversionAlgorithm))) :*: (S1 ('MetaSel ('Just "entropyEncoding") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe XavcEntropyEncoding)) :*: S1 ('MetaSel ('Just "framerateControl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe XavcFramerateControl)))) :*: ((S1 ('MetaSel ('Just "adaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe XavcAdaptiveQuantization)) :*: S1 ('MetaSel ('Just "framerateNumerator") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "xavcHdProfileSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe XavcHdProfileSettings)) :*: S1 ('MetaSel ('Just "spatialAdaptiveQuantization") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe XavcSpatialAdaptiveQuantization)))))))

newXavcSettings :: XavcSettings Source #

Create a value of XavcSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:temporalAdaptiveQuantization:XavcSettings', xavcSettings_temporalAdaptiveQuantization - The best way to set up adaptive quantization is to keep the default value, Auto (AUTO), for the setting Adaptive quantization (adaptiveQuantization). When you do so, MediaConvert automatically applies the best types of quantization for your video content. Include this setting in your JSON job specification only when you choose to change the default value for Adaptive quantization. For this setting, keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on temporal variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas of the frame that aren't moving and uses more bits on complex objects with sharp edges that move a lot. For example, this feature improves the readability of text tickers on newscasts and scoreboards on sports matches. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen that doesn't have moving objects with sharp edges, such as sports athletes' faces, you might choose to disable this feature. Related setting: When you enable temporal adaptive quantization, adjust the strength of the filter with the setting Adaptive quantization (adaptiveQuantization).

$sel:slowPal:XavcSettings', xavcSettings_slowPal - Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output by relabeling the video frames and resampling your audio. Note that enabling this setting will slightly reduce the duration of your video. Related settings: You must also set Frame rate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.

$sel:xavc4kProfileSettings:XavcSettings', xavcSettings_xavc4kProfileSettings - Required when you set (Profile) under (VideoDescription)>(CodecSettings)>(XavcSettings) to the value XAVC_4K.

$sel:xavcHdIntraCbgProfileSettings:XavcSettings', xavcSettings_xavcHdIntraCbgProfileSettings - Required when you set (Profile) under (VideoDescription)>(CodecSettings)>(XavcSettings) to the value XAVC_HD_INTRA_CBG.

$sel:xavc4kIntraVbrProfileSettings:XavcSettings', xavcSettings_xavc4kIntraVbrProfileSettings - Required when you set (Profile) under (VideoDescription)>(CodecSettings)>(XavcSettings) to the value XAVC_4K_INTRA_VBR.

$sel:xavc4kIntraCbgProfileSettings:XavcSettings', xavcSettings_xavc4kIntraCbgProfileSettings - Required when you set (Profile) under (VideoDescription)>(CodecSettings)>(XavcSettings) to the value XAVC_4K_INTRA_CBG.

$sel:profile:XavcSettings', xavcSettings_profile - Specify the XAVC profile for this output. For more information, see the Sony documentation at https://www.xavc-info.org/. Note that MediaConvert doesn't support the interlaced video XAVC operating points for XAVC_HD_INTRA_CBG. To create an interlaced XAVC output, choose the profile XAVC_HD.

$sel:softness:XavcSettings', xavcSettings_softness - Ignore this setting unless your downstream workflow requires that you specify it explicitly. Otherwise, we recommend that you adjust the softness of your output by using a lower value for the setting Sharpness (sharpness) or by enabling a noise reducer filter (noiseReducerFilter). The Softness (softness) setting specifies the quantization matrices that the encoder uses. Keep the default value, 0, for flat quantization. Choose the value 1 or 16 to use the default JVT softening quantization matrices from the H.264 specification. Choose a value from 17 to 128 to use planar interpolation. Increasing values from 17 to 128 result in increasing reduction of high-frequency data. The value 128 results in the softest video.

$sel:framerateDenominator:XavcSettings', xavcSettings_framerateDenominator - When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Frame rate. In this example, specify 23.976.

$sel:framerateConversionAlgorithm:XavcSettings', xavcSettings_framerateConversionAlgorithm - Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

$sel:entropyEncoding:XavcSettings', xavcSettings_entropyEncoding - Optional. Choose a specific entropy encoding mode only when you want to override XAVC recommendations. If you choose the value auto, MediaConvert uses the mode that the XAVC file format specifies given this output's operating point.

$sel:framerateControl:XavcSettings', xavcSettings_framerateControl - If you are using the console, use the Frame rate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list. The framerates shown in the dropdown list are decimal approximations of fractions. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate that you specify in the settings FramerateNumerator and FramerateDenominator.

$sel:adaptiveQuantization:XavcSettings', xavcSettings_adaptiveQuantization - Keep the default value, Auto (AUTO), for this setting to have MediaConvert automatically apply the best types of quantization for your video content. When you want to apply your quantization settings manually, you must set Adaptive quantization (adaptiveQuantization) to a value other than Auto (AUTO). Use this setting to specify the strength of any adaptive quantization filters that you enable. If you don't want MediaConvert to do any adaptive quantization in this transcode, set Adaptive quantization to Off (OFF). Related settings: The value that you choose here applies to the following settings: Flicker adaptive quantization (flickerAdaptiveQuantization), Spatial adaptive quantization (spatialAdaptiveQuantization), and Temporal adaptive quantization (temporalAdaptiveQuantization).

$sel:framerateNumerator:XavcSettings', xavcSettings_framerateNumerator - When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.

$sel:xavcHdProfileSettings:XavcSettings', xavcSettings_xavcHdProfileSettings - Required when you set (Profile) under (VideoDescription)>(CodecSettings)>(XavcSettings) to the value XAVC_HD.

$sel:spatialAdaptiveQuantization:XavcSettings', xavcSettings_spatialAdaptiveQuantization - The best way to set up adaptive quantization is to keep the default value, Auto (AUTO), for the setting Adaptive quantization (adaptiveQuantization). When you do so, MediaConvert automatically applies the best types of quantization for your video content. Include this setting in your JSON job specification only when you choose to change the default value for Adaptive quantization. For this setting, keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (adaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.

xavcSettings_temporalAdaptiveQuantization :: Lens' XavcSettings (Maybe XavcTemporalAdaptiveQuantization) Source #

The best way to set up adaptive quantization is to keep the default value, Auto (AUTO), for the setting Adaptive quantization (adaptiveQuantization). When you do so, MediaConvert automatically applies the best types of quantization for your video content. Include this setting in your JSON job specification only when you choose to change the default value for Adaptive quantization. For this setting, keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on temporal variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas of the frame that aren't moving and uses more bits on complex objects with sharp edges that move a lot. For example, this feature improves the readability of text tickers on newscasts and scoreboards on sports matches. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen that doesn't have moving objects with sharp edges, such as sports athletes' faces, you might choose to disable this feature. Related setting: When you enable temporal adaptive quantization, adjust the strength of the filter with the setting Adaptive quantization (adaptiveQuantization).

xavcSettings_slowPal :: Lens' XavcSettings (Maybe XavcSlowPal) Source #

Ignore this setting unless your input frame rate is 23.976 or 24 frames per second (fps). Enable slow PAL to create a 25 fps output by relabeling the video frames and resampling your audio. Note that enabling this setting will slightly reduce the duration of your video. Related settings: You must also set Frame rate to 25. In your JSON job specification, set (framerateControl) to (SPECIFIED), (framerateNumerator) to 25 and (framerateDenominator) to 1.

xavcSettings_xavc4kProfileSettings :: Lens' XavcSettings (Maybe Xavc4kProfileSettings) Source #

Required when you set (Profile) under (VideoDescription)>(CodecSettings)>(XavcSettings) to the value XAVC_4K.

xavcSettings_xavcHdIntraCbgProfileSettings :: Lens' XavcSettings (Maybe XavcHdIntraCbgProfileSettings) Source #

Required when you set (Profile) under (VideoDescription)>(CodecSettings)>(XavcSettings) to the value XAVC_HD_INTRA_CBG.

xavcSettings_xavc4kIntraVbrProfileSettings :: Lens' XavcSettings (Maybe Xavc4kIntraVbrProfileSettings) Source #

Required when you set (Profile) under (VideoDescription)>(CodecSettings)>(XavcSettings) to the value XAVC_4K_INTRA_VBR.

xavcSettings_xavc4kIntraCbgProfileSettings :: Lens' XavcSettings (Maybe Xavc4kIntraCbgProfileSettings) Source #

Required when you set (Profile) under (VideoDescription)>(CodecSettings)>(XavcSettings) to the value XAVC_4K_INTRA_CBG.

xavcSettings_profile :: Lens' XavcSettings (Maybe XavcProfile) Source #

Specify the XAVC profile for this output. For more information, see the Sony documentation at https://www.xavc-info.org/. Note that MediaConvert doesn't support the interlaced video XAVC operating points for XAVC_HD_INTRA_CBG. To create an interlaced XAVC output, choose the profile XAVC_HD.

xavcSettings_softness :: Lens' XavcSettings (Maybe Natural) Source #

Ignore this setting unless your downstream workflow requires that you specify it explicitly. Otherwise, we recommend that you adjust the softness of your output by using a lower value for the setting Sharpness (sharpness) or by enabling a noise reducer filter (noiseReducerFilter). The Softness (softness) setting specifies the quantization matrices that the encoder uses. Keep the default value, 0, for flat quantization. Choose the value 1 or 16 to use the default JVT softening quantization matrices from the H.264 specification. Choose a value from 17 to 128 to use planar interpolation. Increasing values from 17 to 128 result in increasing reduction of high-frequency data. The value 128 results in the softest video.

xavcSettings_framerateDenominator :: Lens' XavcSettings (Maybe Natural) Source #

When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Frame rate. In this example, specify 23.976.

xavcSettings_framerateConversionAlgorithm :: Lens' XavcSettings (Maybe XavcFramerateConversionAlgorithm) Source #

Choose the method that you want MediaConvert to use when increasing or decreasing the frame rate. We recommend using drop duplicate (DUPLICATE_DROP) for numerically simple conversions, such as 60 fps to 30 fps. For numerically complex conversions, you can use interpolate (INTERPOLATE) to avoid stutter. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence, use FrameFormer (FRAMEFORMER) to do motion-compensated interpolation. FrameFormer chooses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost.

xavcSettings_entropyEncoding :: Lens' XavcSettings (Maybe XavcEntropyEncoding) Source #

Optional. Choose a specific entropy encoding mode only when you want to override XAVC recommendations. If you choose the value auto, MediaConvert uses the mode that the XAVC file format specifies given this output's operating point.

xavcSettings_framerateControl :: Lens' XavcSettings (Maybe XavcFramerateControl) Source #

If you are using the console, use the Frame rate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list. The frame rates shown in the dropdown list are decimal approximations of fractions. If you are creating your transcoding job specification as a JSON file without the console, use FramerateControl to specify which value the service uses for the frame rate for this output. Choose INITIALIZE_FROM_SOURCE if you want the service to use the frame rate from the input. Choose SPECIFIED if you want the service to use the frame rate that you specify in the settings FramerateNumerator and FramerateDenominator.

xavcSettings_adaptiveQuantization :: Lens' XavcSettings (Maybe XavcAdaptiveQuantization) Source #

Keep the default value, Auto (AUTO), for this setting to have MediaConvert automatically apply the best types of quantization for your video content. When you want to apply your quantization settings manually, you must set Adaptive quantization (adaptiveQuantization) to a value other than Auto (AUTO). Use this setting to specify the strength of any adaptive quantization filters that you enable. If you don't want MediaConvert to do any adaptive quantization in this transcode, set Adaptive quantization to Off (OFF). Related settings: The value that you choose here applies to the following settings: Flicker adaptive quantization (flickerAdaptiveQuantization), Spatial adaptive quantization (spatialAdaptiveQuantization), and Temporal adaptive quantization (temporalAdaptiveQuantization).

xavcSettings_framerateNumerator :: Lens' XavcSettings (Maybe Natural) Source #

When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Frame rate. In this example, specify 23.976.
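
To tie the frame rate settings above together, here is a hedged sketch that specifies 23.976 fps explicitly as the fraction 24000/1001 and selects FrameFormer for the conversion. It assumes the amazonka-2.x newXavcSettings constructor and the TypeName_VALUE pattern-synonym naming for XavcFramerateControl and XavcFramerateConversionAlgorithm; the lens names are the ones documented in this section.

    import Control.Lens ((&), (?~))
    import Amazonka.MediaConvert.Types

    -- Sketch (assumed constructor and pattern names): explicit 23.976 fps
    -- expressed as 24000/1001, with FrameFormer chosen for a numerically
    -- complex conversion.
    xavc23_976 :: XavcSettings
    xavc23_976 =
      newXavcSettings
        & xavcSettings_framerateControl ?~ XavcFramerateControl_SPECIFIED
        & xavcSettings_framerateNumerator ?~ 24000
        & xavcSettings_framerateDenominator ?~ 1001
        & xavcSettings_framerateConversionAlgorithm
            ?~ XavcFramerateConversionAlgorithm_FRAMEFORMER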

xavcSettings_xavcHdProfileSettings :: Lens' XavcSettings (Maybe XavcHdProfileSettings) Source #

Required when you set (Profile) under (VideoDescription)>(CodecSettings)>(XavcSettings) to the value XAVC_HD.

xavcSettings_spatialAdaptiveQuantization :: Lens' XavcSettings (Maybe XavcSpatialAdaptiveQuantization) Source #

The best way to set up adaptive quantization is to keep the default value, Auto (AUTO), for the setting Adaptive quantization (adaptiveQuantization). When you do so, MediaConvert automatically applies the best types of quantization for your video content. Include this setting in your JSON job specification only when you choose to change the default value for Adaptive quantization. For this setting, keep the default value, Enabled (ENABLED), to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization (adaptiveQuantization) depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.
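
As a sketch of the related-settings note above: overriding the automatic behaviour means moving Adaptive quantization off Auto and then setting the individual quantization filters. The example assumes the amazonka-2.x newXavcSettings constructor and the TypeName_VALUE pattern-synonym naming (XavcAdaptiveQuantization_HIGH, XavcSpatialAdaptiveQuantization_ENABLED); verify the names against your generated module.

    import Control.Lens ((&), (?~))
    import Amazonka.MediaConvert.Types

    -- Sketch (assumed pattern names): manual quantization control. Adaptive
    -- quantization is set to a value other than AUTO so that the spatial
    -- setting below takes effect.
    xavcManualAq :: XavcSettings
    xavcManualAq =
      newXavcSettings
        & xavcSettings_adaptiveQuantization ?~ XavcAdaptiveQuantization_HIGH
        & xavcSettings_spatialAdaptiveQuantization
            ?~ XavcSpatialAdaptiveQuantization_ENABLED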