amazonka-transcribe
Copyright     (c) 2013-2021 Brendan Hay
License       Mozilla Public License, v. 2.0.
Maintainer    Brendan Hay <brendan.g.hay+amazonka@gmail.com>
Stability     auto-generated
Portability   non-portable (GHC extensions)
Safe Haskell  None

Amazonka.Transcribe.Lens

Description

 
Synopsis

Operations

ListLanguageModels

listLanguageModels_nameContains :: Lens' ListLanguageModels (Maybe Text) Source #

When specified, the custom language model names returned contain the substring you've specified.

listLanguageModels_nextToken :: Lens' ListLanguageModels (Maybe Text) Source #

When included, fetches the next set of custom language models if the result of the previous request was truncated.

listLanguageModels_statusEquals :: Lens' ListLanguageModels (Maybe ModelStatus) Source #

When specified, returns only custom language models with the specified status. Language models are ordered by creation date, with the newest models first. If you don't specify a status, Amazon Transcribe returns all custom language models ordered by date.

listLanguageModels_maxResults :: Lens' ListLanguageModels (Maybe Natural) Source #

The maximum number of language models to return in each page of results. If there are fewer results than the value you specify, only the actual results are returned. If you do not specify a value, the default of 5 is used.

listLanguageModelsResponse_nextToken :: Lens' ListLanguageModelsResponse (Maybe Text) Source #

The operation returns a page of jobs at a time. The maximum size of the list is set by the MaxResults parameter. If there are more language models in the list than the page size, Amazon Transcribe returns the NextPage token. Include the token in the next request to the operation to return the next page of language models.

listLanguageModelsResponse_models :: Lens' ListLanguageModelsResponse (Maybe [LanguageModel]) Source #

A list of objects containing information about custom language models.
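
As a brief illustration of how these lenses fit together, here is a hedged sketch that pages through matching custom language models by hand. It assumes the usual amazonka 2.0 entry points (newEnv, discover, runResourceT, send), the no-argument smart constructor newListLanguageModels, and that ModelStatus_COMPLETED is the generated pattern for the COMPLETED status; only the lenses themselves are taken from this page.

{-# LANGUAGE OverloadedStrings #-}

import qualified Amazonka
import Amazonka.Transcribe
import Amazonka.Transcribe.Lens
import Control.Lens ((&), (.~), (?~), (^.))
import Data.Maybe (fromMaybe)

-- Collect every completed custom language model whose name contains
-- "call-center", following NextToken manually until it is absent.
listAllModels :: IO [LanguageModel]
listAllModels = do
  env <- Amazonka.newEnv Amazonka.discover
  let go token acc = do
        let req =
              newListLanguageModels
                & listLanguageModels_nameContains ?~ "call-center"
                & listLanguageModels_statusEquals ?~ ModelStatus_COMPLETED  -- assumed pattern name
                & listLanguageModels_nextToken .~ token
        resp <- Amazonka.runResourceT (Amazonka.send env req)
        let acc' = acc ++ fromMaybe [] (resp ^. listLanguageModelsResponse_models)
        case resp ^. listLanguageModelsResponse_nextToken of
          Nothing -> pure acc'
          next    -> go next acc'
  go Nothing []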

GetVocabulary

getVocabulary_vocabularyName :: Lens' GetVocabulary Text Source #

The name of the vocabulary to return information about. The name is case sensitive.

getVocabularyResponse_failureReason :: Lens' GetVocabularyResponse (Maybe Text) Source #

If the VocabularyState field is FAILED, this field contains information about why the job failed.

getVocabularyResponse_downloadUri :: Lens' GetVocabularyResponse (Maybe Text) Source #

The S3 location where the vocabulary is stored. Use this URI to get the contents of the vocabulary. The URI is available for a limited time.

getVocabularyResponse_lastModifiedTime :: Lens' GetVocabularyResponse (Maybe UTCTime) Source #

The date and time that the vocabulary was last modified.

DeleteLanguageModel

deleteLanguageModel_modelName :: Lens' DeleteLanguageModel Text Source #

The name of the model you're choosing to delete.

GetTranscriptionJob

getTranscriptionJobResponse_transcriptionJob :: Lens' GetTranscriptionJobResponse (Maybe TranscriptionJob) Source #

An object that contains the results of the transcription job.

StartMedicalTranscriptionJob

startMedicalTranscriptionJob_outputEncryptionKMSKeyId :: Lens' StartMedicalTranscriptionJob (Maybe Text) Source #

The Amazon Resource Name (ARN) of the Amazon Web Services Key Management Service (KMS) key used to encrypt the output of the transcription job. The user calling the StartMedicalTranscriptionJob operation must have permission to use the specified KMS key.

You can use either of the following to identify a KMS key in the current account:

  • KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
  • KMS Key Alias: "alias/ExampleAlias"

You can use either of the following to identify a KMS key in the current account or another account:

  • Amazon Resource Name (ARN) of a KMS key in the current account or another account: "arn:aws:kms:region:account ID:key/1234abcd-12ab-34cd-56ef-1234567890ab"
  • ARN of a KMS Key Alias: "arn:aws:kms:region:account ID:alias/ExampleAlias"

If you don't specify an encryption key, the output of the medical transcription job is encrypted with the default Amazon S3 key (SSE-S3).

If you specify a KMS key to encrypt your output, you must also specify an output location in the OutputBucketName parameter.

startMedicalTranscriptionJob_kmsEncryptionContext :: Lens' StartMedicalTranscriptionJob (Maybe (HashMap Text Text)) Source #

A map of plain text, non-secret key:value pairs, known as encryption context pairs, that provide an added layer of security for your data.

startMedicalTranscriptionJob_outputKey :: Lens' StartMedicalTranscriptionJob (Maybe Text) Source #

You can specify a location in an Amazon S3 bucket to store the output of your medical transcription job.

If you don't specify an output key, Amazon Transcribe Medical stores the output of your transcription job in the Amazon S3 bucket you specified. By default, the object key is "your-transcription-job-name.json".

You can use output keys to specify the Amazon S3 prefix and file name of the transcription output. For example, specifying the Amazon S3 prefix, "folder1/folder2/", as an output key would lead to the output being stored as "folder1/folder2/your-transcription-job-name.json". If you specify "my-other-job-name.json" as the output key, the object key is changed to "my-other-job-name.json". You can use an output key to change both the prefix and the file name, for example "folder/my-other-job-name.json".

If you specify an output key, you must also specify an S3 bucket in the OutputBucketName parameter.
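
A small, hedged sketch of the OutputBucketName/OutputKey interaction described above. No request is sent; this only sets fields on an already-constructed request value, assuming the lens package for the operators.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Transcribe
import Amazonka.Transcribe.Lens
import Control.Lens ((&), (.~), (?~))

-- Store the transcript as
-- s3://DOC-EXAMPLE-BUCKET/folder1/folder2/my-other-job-name.json
-- by combining an output bucket with an output key.
withOutputLocation :: StartMedicalTranscriptionJob -> StartMedicalTranscriptionJob
withOutputLocation req =
  req
    & startMedicalTranscriptionJob_outputBucketName .~ "DOC-EXAMPLE-BUCKET"
    & startMedicalTranscriptionJob_outputKey ?~ "folder1/folder2/my-other-job-name.json"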

startMedicalTranscriptionJob_contentIdentificationType :: Lens' StartMedicalTranscriptionJob (Maybe MedicalContentIdentificationType) Source #

You can configure Amazon Transcribe Medical to label content in the transcription output. If you specify PHI, Amazon Transcribe Medical labels the personal health information (PHI) that it identifies in the transcription output.

startMedicalTranscriptionJob_tags :: Lens' StartMedicalTranscriptionJob (Maybe (NonEmpty Tag)) Source #

Add tags to an Amazon Transcribe medical transcription job.

startMedicalTranscriptionJob_mediaSampleRateHertz :: Lens' StartMedicalTranscriptionJob (Maybe Natural) Source #

The sample rate, in Hertz, of the audio track in the input media file.

If you do not specify the media sample rate, Amazon Transcribe Medical determines the sample rate. If you specify the sample rate, it must match the rate detected by Amazon Transcribe Medical. In most cases, you should leave the MediaSampleRateHertz field blank and let Amazon Transcribe Medical determine the sample rate.

startMedicalTranscriptionJob_medicalTranscriptionJobName :: Lens' StartMedicalTranscriptionJob Text Source #

The name of the medical transcription job. You can't use the strings "." or ".." by themselves as the job name. The name must also be unique within an Amazon Web Services account. If you try to create a medical transcription job with the same name as a previous medical transcription job, you get a ConflictException error.

startMedicalTranscriptionJob_languageCode :: Lens' StartMedicalTranscriptionJob LanguageCode Source #

The language code for the language spoken in the input media file. US English (en-US) is the valid value for medical transcription jobs. Any other value you enter for language code results in a BadRequestException error.

startMedicalTranscriptionJob_outputBucketName :: Lens' StartMedicalTranscriptionJob Text Source #

The Amazon S3 location where the transcription is stored.

You must set OutputBucketName for Amazon Transcribe Medical to store the transcription results. Your transcript appears in the S3 location you specify. When you call the GetMedicalTranscriptionJob, the operation returns this location in the TranscriptFileUri field. The S3 bucket must have permissions that allow Amazon Transcribe Medical to put files in the bucket. For more information, see Permissions Required for IAM User Roles.

You can specify an Amazon Web Services Key Management Service (KMS) key to encrypt the output of your transcription using the OutputEncryptionKMSKeyId parameter. If you don't specify a KMS key, Amazon Transcribe Medical uses the default Amazon S3 key for server-side encryption of transcripts that are placed in your S3 bucket.

startMedicalTranscriptionJob_specialty :: Lens' StartMedicalTranscriptionJob Specialty Source #

The medical specialty of any clinician speaking in the input media.

startMedicalTranscriptionJob_type :: Lens' StartMedicalTranscriptionJob Type Source #

The type of speech in the input audio. CONVERSATION refers to conversations between two or more speakers, e.g., conversations between doctors and patients. DICTATION refers to single-speaker dictated speech, such as clinical notes.

ListCallAnalyticsJobs

listCallAnalyticsJobs_status :: Lens' ListCallAnalyticsJobs (Maybe CallAnalyticsJobStatus) Source #

When specified, returns only call analytics jobs with the specified status. Jobs are ordered by creation date, with the most recent jobs returned first. If you don't specify a status, Amazon Transcribe returns all analytics jobs ordered by creation date.

listCallAnalyticsJobs_nextToken :: Lens' ListCallAnalyticsJobs (Maybe Text) Source #

If you receive a truncated result in the previous request of ListCallAnalyticsJobs, include NextToken to fetch the next set of jobs.

listCallAnalyticsJobs_jobNameContains :: Lens' ListCallAnalyticsJobs (Maybe Text) Source #

When specified, the jobs returned in the list are limited to jobs whose name contains the specified string.

listCallAnalyticsJobs_maxResults :: Lens' ListCallAnalyticsJobs (Maybe Natural) Source #

The maximum number of call analytics jobs to return in each page of results. If there are fewer results than the value you specify, only the actual results are returned. If you do not specify a value, the default of 5 is used.

listCallAnalyticsJobsResponse_status :: Lens' ListCallAnalyticsJobsResponse (Maybe CallAnalyticsJobStatus) Source #

When specified, returns only call analytics jobs with that status. Jobs are ordered by creation date, with the most recent jobs returned first. If you don't specify a status, Amazon Transcribe returns all call analytics jobs ordered by creation date.

listCallAnalyticsJobsResponse_callAnalyticsJobSummaries :: Lens' ListCallAnalyticsJobsResponse (Maybe [CallAnalyticsJobSummary]) Source #

A list of objects containing summary information for a call analytics job.

listCallAnalyticsJobsResponse_nextToken :: Lens' ListCallAnalyticsJobsResponse (Maybe Text) Source #

The operation returns a page of jobs at a time. The maximum size of the page is set by the MaxResults parameter. If there are more jobs in the list than the page size, Amazon Transcribe returns the NextPage token. Include the token in your next request to the operation to return the next page of jobs.

ListTagsForResource

listTagsForResource_resourceArn :: Lens' ListTagsForResource Text Source #

Lists all tags associated with a given Amazon Resource Name (ARN).

listTagsForResourceResponse_resourceArn :: Lens' ListTagsForResourceResponse (Maybe Text) Source #

Lists all tags associated with the given Amazon Resource Name (ARN).

listTagsForResourceResponse_tags :: Lens' ListTagsForResourceResponse (Maybe (NonEmpty Tag)) Source #

Lists all tags associated with the given transcription job, vocabulary, or resource.

GetCallAnalyticsCategory

getCallAnalyticsCategory_categoryName :: Lens' GetCallAnalyticsCategory Text Source #

The name of the category you want information about. This value is case sensitive.

DeleteMedicalVocabulary

deleteMedicalVocabulary_vocabularyName :: Lens' DeleteMedicalVocabulary Text Source #

The name of the vocabulary that you want to delete.

UpdateMedicalVocabulary

updateMedicalVocabulary_vocabularyFileUri :: Lens' UpdateMedicalVocabulary (Maybe Text) Source #

The location in Amazon S3 of the text file that contains your custom vocabulary. The URI must be in the same Amazon Web Services Region as the resource that you are calling. The following is the format for a URI:

 https://s3.<aws-region>.amazonaws.com/<bucket-name>/<keyprefix>/<objectkey>

For example:

https://s3.us-east-1.amazonaws.com/AWSDOC-EXAMPLE-BUCKET/vocab.txt

For more information about Amazon S3 object names, see Object Keys in the Amazon S3 Developer Guide.

For more information about custom vocabularies in Amazon Transcribe Medical, see Medical Custom Vocabularies.

updateMedicalVocabulary_vocabularyName :: Lens' UpdateMedicalVocabulary Text Source #

The name of the vocabulary to update. The name is case sensitive. If you try to update a vocabulary with the same name as a vocabulary you've already made, you get a ConflictException error.

updateMedicalVocabulary_languageCode :: Lens' UpdateMedicalVocabulary LanguageCode Source #

The language code of the language used for the entries in the updated vocabulary. US English (en-US) is the only valid language code in Amazon Transcribe Medical.

updateMedicalVocabularyResponse_languageCode :: Lens' UpdateMedicalVocabularyResponse (Maybe LanguageCode) Source #

The language code for the language of the text file used to update the custom vocabulary. US English (en-US) is the only language supported in Amazon Transcribe Medical.

updateMedicalVocabularyResponse_vocabularyState :: Lens' UpdateMedicalVocabularyResponse (Maybe VocabularyState) Source #

The processing state of the update to the vocabulary. When the VocabularyState field is READY, the vocabulary is ready to be used in a StartMedicalTranscriptionJob request.

CreateCallAnalyticsCategory

createCallAnalyticsCategory_categoryName :: Lens' CreateCallAnalyticsCategory Text Source #

The name that you choose for your category when you create it.

createCallAnalyticsCategory_rules :: Lens' CreateCallAnalyticsCategory (NonEmpty Rule) Source #

To create a category, you must specify between 1 and 20 rules. For each rule, you specify a filter to be applied to the attributes of the call. For example, you can specify a sentiment filter to detect if the customer's sentiment was negative or neutral.

DeleteTranscriptionJob

deleteTranscriptionJob_transcriptionJobName :: Lens' DeleteTranscriptionJob Text Source #

The name of the transcription job to be deleted.

DescribeLanguageModel

describeLanguageModel_modelName :: Lens' DescribeLanguageModel Text Source #

The name of the custom language model you want more information about.

describeLanguageModelResponse_languageModel :: Lens' DescribeLanguageModelResponse (Maybe LanguageModel) Source #

The name of the custom language model you requested more information about.

ListCallAnalyticsCategories

listCallAnalyticsCategories_nextToken :: Lens' ListCallAnalyticsCategories (Maybe Text) Source #

When included, NextToken fetches the next set of categories if the result of the previous request was truncated.

listCallAnalyticsCategories_maxResults :: Lens' ListCallAnalyticsCategories (Maybe Natural) Source #

The maximum number of categories to return in each page of results. If there are fewer results than the value you specify, only the actual results are returned. If you do not specify a value, the default of 5 is used.

listCallAnalyticsCategoriesResponse_categories :: Lens' ListCallAnalyticsCategoriesResponse (Maybe [CategoryProperties]) Source #

A list of objects containing information about analytics categories.

listCallAnalyticsCategoriesResponse_nextToken :: Lens' ListCallAnalyticsCategoriesResponse (Maybe Text) Source #

The operation returns a page of jobs at a time. The maximum size of the list is set by the MaxResults parameter. If there are more categories in the list than the page size, Amazon Transcribe returns the NextPage token. Include the token in the next request to the operation to return the next page of analytics categories.

DeleteMedicalTranscriptionJob

deleteMedicalTranscriptionJob_medicalTranscriptionJobName :: Lens' DeleteMedicalTranscriptionJob Text Source #

The name you provide to the DeleteMedicalTranscriptionJob operation to delete a transcription job.

DeleteVocabulary

deleteVocabulary_vocabularyName :: Lens' DeleteVocabulary Text Source #

The name of the vocabulary to delete.

StartCallAnalyticsJob

startCallAnalyticsJob_settings :: Lens' StartCallAnalyticsJob (Maybe CallAnalyticsJobSettings) Source #

A Settings object that provides optional settings for a call analytics job.

startCallAnalyticsJob_outputEncryptionKMSKeyId :: Lens' StartCallAnalyticsJob (Maybe Text) Source #

The Amazon Resource Name (ARN) of the Amazon Web Services Key Management Service key used to encrypt the output of the call analytics job. The user calling the operation must have permission to use the specified KMS key.

You can use either of the following to identify an Amazon Web Services KMS key in the current account:

  • KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
  • KMS Key Alias: "alias/ExampleAlias"

You can use either of the following to identify a KMS key in the current account or another account:

  • Amazon Resource Name (ARN) of a KMS key in the current account or another account: "arn:aws:kms:region:account ID:key/1234abcd-12ab-34cd-56ef-1234567890ab"
  • ARN of a KMS Key Alias: "arn:aws:kms:region:account ID:alias/ExampleAlias"

If you don't specify an encryption key, the output of the call analytics job is encrypted with the default Amazon S3 key (SSE-S3).

If you specify a KMS key to encrypt your output, you must also specify an output location in the OutputLocation parameter.

startCallAnalyticsJob_outputLocation :: Lens' StartCallAnalyticsJob (Maybe Text) Source #

The Amazon S3 location where the output of the call analytics job is stored. You can provide the following location types to store the output of call analytics job:

  • s3://DOC-EXAMPLE-BUCKET1

    If you specify a bucket, Amazon Transcribe saves the output of the analytics job as a JSON file at the root level of the bucket.

  • s3://DOC-EXAMPLE-BUCKET1/folder/

    If you specify a path, Amazon Transcribe saves the output of the analytics job as s3://DOC-EXAMPLE-BUCKET1/folder/your-transcription-job-name.json.

    If you specify a folder, you must provide a trailing slash.

  • s3://DOC-EXAMPLE-BUCKET1/folder/filename.json

    If you provide a path that has the filename specified, Amazon Transcribe saves the output of the analytics job as s3://DOC-EXAMPLE-BUCKET1/folder/filename.json.

You can specify an Amazon Web Services Key Management Service (KMS) key to encrypt the output of your analytics job using the OutputEncryptionKMSKeyId parameter. If you don't specify a KMS key, Amazon Transcribe uses the default Amazon S3 key for server-side encryption of the analytics job output that is placed in your S3 bucket.

startCallAnalyticsJob_channelDefinitions :: Lens' StartCallAnalyticsJob (Maybe (NonEmpty ChannelDefinition)) Source #

When you start a call analytics job, you must pass an array that maps the agent and the customer to specific audio channels. The values you can assign to a channel are 0 and 1. The agent and the customer must each have their own channel. You can't assign more than one channel to an agent or customer.

startCallAnalyticsJob_callAnalyticsJobName :: Lens' StartCallAnalyticsJob Text Source #

The name of the call analytics job. You can't use the strings "." or ".." by themselves as the job name. The name must also be unique within an Amazon Web Services account. If you try to create a call analytics job with the same name as a previous call analytics job, you get a ConflictException error.

startCallAnalyticsJob_dataAccessRoleArn :: Lens' StartCallAnalyticsJob Text Source #

The Amazon Resource Name (ARN) of a role that has access to the S3 bucket that contains your input files. Amazon Transcribe assumes this role to read queued audio files. If you have specified an output S3 bucket for your transcription results, this role should have access to the output bucket as well.

startCallAnalyticsJobResponse_callAnalyticsJob :: Lens' StartCallAnalyticsJobResponse (Maybe CallAnalyticsJob) Source #

An object containing the details of the asynchronous call analytics job.
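
A hedged sketch of the channel mapping described above. Only startCallAnalyticsJob_channelDefinitions comes from this page; newChannelDefinition, channelDefinition_channelId, and the ParticipantRole_* patterns are assumptions based on the generated naming style.

import qualified Data.List.NonEmpty as NE

import Amazonka.Transcribe
import Amazonka.Transcribe.Lens
import Control.Lens ((&), (?~))

-- Map channel 0 to the agent and channel 1 to the customer on an
-- already-constructed StartCallAnalyticsJob request.
withChannels :: StartCallAnalyticsJob -> StartCallAnalyticsJob
withChannels req =
  req & startCallAnalyticsJob_channelDefinitions ?~ NE.fromList [agent, customer]
  where
    agent =
      newChannelDefinition                                 -- assumed smart constructor
        & channelDefinition_channelId ?~ 0                 -- assumed lens name
        & channelDefinition_participantRole ?~ ParticipantRole_AGENT
    customer =
      newChannelDefinition
        & channelDefinition_channelId ?~ 1
        & channelDefinition_participantRole ?~ ParticipantRole_CUSTOMER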

UpdateVocabulary

updateVocabulary_vocabularyFileUri :: Lens' UpdateVocabulary (Maybe Text) Source #

The S3 location of the text file that contains the definition of the custom vocabulary. The URI must be in the same region as the API endpoint that you are calling. The general form is:

 https://s3.<aws-region>.amazonaws.com/<bucket-name>/<keyprefix>/<objectkey>

For example:

https://s3.us-east-1.amazonaws.com/AWSDOC-EXAMPLE-BUCKET/vocab.txt

For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.

For more information about custom vocabularies, see Custom Vocabularies.

updateVocabulary_phrases :: Lens' UpdateVocabulary (Maybe [Text]) Source #

An array of strings containing the vocabulary entries.

updateVocabulary_vocabularyName :: Lens' UpdateVocabulary Text Source #

The name of the vocabulary to update. The name is case sensitive. If you try to update a vocabulary with the same name as a previous vocabulary, you will receive a ConflictException error.

updateVocabulary_languageCode :: Lens' UpdateVocabulary LanguageCode Source #

The language code of the vocabulary entries. For a list of languages and their corresponding language codes, see transcribe-whatis.

updateVocabularyResponse_lastModifiedTime :: Lens' UpdateVocabularyResponse (Maybe UTCTime) Source #

The date and time that the vocabulary was updated.

updateVocabularyResponse_vocabularyState :: Lens' UpdateVocabularyResponse (Maybe VocabularyState) Source #

The processing state of the vocabulary. When the VocabularyState field contains READY, the vocabulary is ready to be used in a StartTranscriptionJob request.

CreateVocabularyFilter

createVocabularyFilter_vocabularyFilterFileUri :: Lens' CreateVocabularyFilter (Maybe Text) Source #

The Amazon S3 location of a text file used as input to create the vocabulary filter. Only use characters from the character set defined for custom vocabularies. For a list of character sets, see Character Sets for Custom Vocabularies.

The specified file must be less than 50 KB of UTF-8 characters.

If you provide the location of a list of words in the VocabularyFilterFileUri parameter, you can't use the Words parameter.

createVocabularyFilter_words :: Lens' CreateVocabularyFilter (Maybe (NonEmpty Text)) Source #

The words to use in the vocabulary filter. Only use characters from the character set defined for custom vocabularies. For a list of character sets, see Character Sets for Custom Vocabularies.

If you provide a list of words in the Words parameter, you can't use the VocabularyFilterFileUri parameter.

createVocabularyFilter_tags :: Lens' CreateVocabularyFilter (Maybe (NonEmpty Tag)) Source #

Adds one or more tags, each in the form of a key:value pair, to a new Amazon Transcribe vocabulary filter at the time you create this new vocabulary filter.

createVocabularyFilter_vocabularyFilterName :: Lens' CreateVocabularyFilter Text Source #

The vocabulary filter name. The name must be unique within the account that contains it. If you try to create a vocabulary filter with the same name as another vocabulary filter, you get a ConflictException error.

createVocabularyFilter_languageCode :: Lens' CreateVocabularyFilter LanguageCode Source #

The language code of the words in the vocabulary filter. All words in the filter must be in the same language. The vocabulary filter can only be used with transcription jobs in the specified language.

createVocabularyFilterResponse_lastModifiedTime :: Lens' CreateVocabularyFilterResponse (Maybe UTCTime) Source #

The date and time that the vocabulary filter was modified.

GetMedicalTranscriptionJob

GetVocabularyFilter

getVocabularyFilter_vocabularyFilterName :: Lens' GetVocabularyFilter Text Source #

The name of the vocabulary filter for which to return information.

getVocabularyFilterResponse_languageCode :: Lens' GetVocabularyFilterResponse (Maybe LanguageCode) Source #

The language code of the words in the vocabulary filter.

getVocabularyFilterResponse_downloadUri :: Lens' GetVocabularyFilterResponse (Maybe Text) Source #

The URI of the list of words in the vocabulary filter. You can use this URI to get the list of words.

getVocabularyFilterResponse_lastModifiedTime :: Lens' GetVocabularyFilterResponse (Maybe UTCTime) Source #

The date and time that the contents of the vocabulary filter were updated.

GetMedicalVocabulary

getMedicalVocabulary_vocabularyName :: Lens' GetMedicalVocabulary Text Source #

The name of the vocabulary that you want information about. The value is case sensitive.

getMedicalVocabularyResponse_failureReason :: Lens' GetMedicalVocabularyResponse (Maybe Text) Source #

If the VocabularyState is FAILED, this field contains information about why the job failed.

getMedicalVocabularyResponse_downloadUri :: Lens' GetMedicalVocabularyResponse (Maybe Text) Source #

The location in Amazon S3 where the vocabulary is stored. Use this URI to get the contents of the vocabulary. You can download your vocabulary from the URI for a limited time.

getMedicalVocabularyResponse_vocabularyName :: Lens' GetMedicalVocabularyResponse (Maybe Text) Source #

The name of the vocabulary returned by Amazon Transcribe Medical.

getMedicalVocabularyResponse_lastModifiedTime :: Lens' GetMedicalVocabularyResponse (Maybe UTCTime) Source #

The date and time that the vocabulary was last modified with a text file different from the one that was previously used.

getMedicalVocabularyResponse_vocabularyState :: Lens' GetMedicalVocabularyResponse (Maybe VocabularyState) Source #

The processing state of the vocabulary. If the VocabularyState is READY then you can use it in the StartMedicalTranscriptionJob operation.

DeleteCallAnalyticsJob

deleteCallAnalyticsJob_callAnalyticsJobName :: Lens' DeleteCallAnalyticsJob Text Source #

The name of the call analytics job you want to delete.

CreateMedicalVocabulary

createMedicalVocabulary_tags :: Lens' CreateMedicalVocabulary (Maybe (NonEmpty Tag)) Source #

Adds one or more tags, each in the form of a key:value pair, to a new medical vocabulary at the time you create this new vocabulary.

createMedicalVocabulary_vocabularyName :: Lens' CreateMedicalVocabulary Text Source #

The name of the custom vocabulary. This case-sensitive name must be unique within an Amazon Web Services account. If you try to create a vocabulary with the same name as a previous vocabulary, you get a ConflictException error.

createMedicalVocabulary_languageCode :: Lens' CreateMedicalVocabulary LanguageCode Source #

The language code for the language used for the entries in your custom vocabulary. The language code of your custom vocabulary must match the language code of your transcription job. US English (en-US) is the only language code available for Amazon Transcribe Medical.

createMedicalVocabulary_vocabularyFileUri :: Lens' CreateMedicalVocabulary Text Source #

The location in Amazon S3 of the text file you use to define your custom vocabulary. The URI must be in the same Amazon Web Services Region as the resource that you're calling. Enter information about your VocabularyFileUri in the following format:

 https://s3.<aws-region>.amazonaws.com/<bucket-name>/<keyprefix>/<objectkey>

The following is an example URI for a vocabulary file that is stored in Amazon S3:

https://s3.us-east-1.amazonaws.com/AWSDOC-EXAMPLE-BUCKET/vocab.txt

For more information about Amazon S3 object names, see Object Keys in the Amazon S3 Developer Guide.

For more information about custom vocabularies, see Medical Custom Vocabularies.

createMedicalVocabularyResponse_failureReason :: Lens' CreateMedicalVocabularyResponse (Maybe Text) Source #

If the VocabularyState field is FAILED, this field contains information about why the job failed.

createMedicalVocabularyResponse_languageCode :: Lens' CreateMedicalVocabularyResponse (Maybe LanguageCode) Source #

The language code for the entries in your custom vocabulary. US English (en-US) is the only valid language code for Amazon Transcribe Medical.

createMedicalVocabularyResponse_vocabularyName :: Lens' CreateMedicalVocabularyResponse (Maybe Text) Source #

The name of the vocabulary. The name must be unique within an Amazon Web Services account and is case sensitive.

createMedicalVocabularyResponse_vocabularyState :: Lens' CreateMedicalVocabularyResponse (Maybe VocabularyState) Source #

The processing state of your custom vocabulary in Amazon Transcribe Medical. If the state is READY, you can use the vocabulary in a StartMedicalTranscriptionJob request.

ListMedicalVocabularies

listMedicalVocabularies_nameContains :: Lens' ListMedicalVocabularies (Maybe Text) Source #

Returns vocabularies whose names contain the specified string. The search is not case sensitive. ListMedicalVocabularies returns both "vocabularyname" and "VocabularyName".

listMedicalVocabularies_nextToken :: Lens' ListMedicalVocabularies (Maybe Text) Source #

If the result of your previous request to ListMedicalVocabularies was truncated, include the NextToken to fetch the next set of vocabularies.

listMedicalVocabularies_stateEquals :: Lens' ListMedicalVocabularies (Maybe VocabularyState) Source #

When specified, returns only vocabularies with the VocabularyState equal to the specified vocabulary state. Use this field to see which vocabularies are ready for your medical transcription jobs.

listMedicalVocabularies_maxResults :: Lens' ListMedicalVocabularies (Maybe Natural) Source #

The maximum number of vocabularies to return in each page of results. If there are fewer results than the value you specify, only the actual results are returned. If you do not specify a value, the default of 5 is used.

listMedicalVocabulariesResponse_vocabularies :: Lens' ListMedicalVocabulariesResponse (Maybe [VocabularyInfo]) Source #

A list of objects that describe the vocabularies that match your search criteria.

listMedicalVocabulariesResponse_nextToken :: Lens' ListMedicalVocabulariesResponse (Maybe Text) Source #

The ListMedicalVocabularies operation returns a page of vocabularies at a time. You set the maximum number of vocabularies to return on a page with the MaxResults parameter. If there are more vocabularies in the list than will fit on a page, Amazon Transcribe Medical returns the NextPage token. To return the next page of vocabularies, include the token in the next request to the ListMedicalVocabularies operation.

DeleteCallAnalyticsCategory

deleteCallAnalyticsCategory_categoryName :: Lens' DeleteCallAnalyticsCategory Text Source #

The name of the call analytics category that you're choosing to delete. The value is case sensitive.

UpdateCallAnalyticsCategory

updateCallAnalyticsCategory_categoryName :: Lens' UpdateCallAnalyticsCategory Text Source #

The name of the analytics category to update. The name is case sensitive. If you try to update a call analytics category with the same name as a previous category, you will receive a ConflictException error.

updateCallAnalyticsCategory_rules :: Lens' UpdateCallAnalyticsCategory (NonEmpty Rule) Source #

The rules used for the updated analytics category. The rules that you provide in this field replace the ones that are currently being used.

updateCallAnalyticsCategoryResponse_categoryProperties :: Lens' UpdateCallAnalyticsCategoryResponse (Maybe CategoryProperties) Source #

The attributes describing the analytics category. You can see information such as the rules that you've used to update the category and when the category was originally created.

GetCallAnalyticsJob

getCallAnalyticsJob_callAnalyticsJobName :: Lens' GetCallAnalyticsJob Text Source #

The name of the analytics job you want information about. This value is case sensitive.

getCallAnalyticsJobResponse_callAnalyticsJob :: Lens' GetCallAnalyticsJobResponse (Maybe CallAnalyticsJob) Source #

An object that contains the results of your call analytics job.

TagResource

tagResource_resourceArn :: Lens' TagResource Text Source #

The Amazon Resource Name (ARN) of the Amazon Transcribe resource you want to tag.

tagResource_tags :: Lens' TagResource (NonEmpty Tag) Source #

The tags you are assigning to a given Amazon Transcribe resource.

ListTranscriptionJobs

listTranscriptionJobs_status :: Lens' ListTranscriptionJobs (Maybe TranscriptionJobStatus) Source #

When specified, returns only transcription jobs with the specified status. Jobs are ordered by creation date, with the newest jobs returned first. If you don’t specify a status, Amazon Transcribe returns all transcription jobs ordered by creation date.

listTranscriptionJobs_nextToken :: Lens' ListTranscriptionJobs (Maybe Text) Source #

If the result of the previous request to ListTranscriptionJobs is truncated, include the NextToken to fetch the next set of jobs.

listTranscriptionJobs_jobNameContains :: Lens' ListTranscriptionJobs (Maybe Text) Source #

When specified, the jobs returned in the list are limited to jobs whose name contains the specified string.

listTranscriptionJobs_maxResults :: Lens' ListTranscriptionJobs (Maybe Natural) Source #

The maximum number of jobs to return in each page of results. If there are fewer results than the value you specify, only the actual results are returned. If you do not specify a value, the default of 5 is used.

listTranscriptionJobsResponse_nextToken :: Lens' ListTranscriptionJobsResponse (Maybe Text) Source #

The ListTranscriptionJobs operation returns a page of jobs at a time. The maximum size of the page is set by the MaxResults parameter. If there are more jobs in the list than the page size, Amazon Transcribe returns the NextPage token. Include the token in the next request to the ListTranscriptionJobs operation to return the next page of jobs.

listTranscriptionJobsResponse_transcriptionJobSummaries :: Lens' ListTranscriptionJobsResponse (Maybe [TranscriptionJobSummary]) Source #

A list of objects containing summary information for a transcription job.
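
The NextToken flow above can also be driven with amazonka's pager support. A hedged sketch, assuming ListTranscriptionJobs has an AWSPager instance and that TranscriptionJobStatus_COMPLETED is the generated pattern for the COMPLETED status; the lenses are the ones documented above.

{-# LANGUAGE OverloadedStrings #-}

import qualified Amazonka
import Amazonka.Transcribe
import Amazonka.Transcribe.Lens
import Control.Lens ((&), (?~), (^.))
import Data.Conduit (runConduit, (.|))
import qualified Data.Conduit.List as CL
import Data.Maybe (fromMaybe)

-- Stream every page of completed jobs whose names contain "meeting" and
-- collect the per-job summaries.
completedMeetingJobs :: IO [TranscriptionJobSummary]
completedMeetingJobs = do
  env <- Amazonka.newEnv Amazonka.discover
  let req =
        newListTranscriptionJobs
          & listTranscriptionJobs_status ?~ TranscriptionJobStatus_COMPLETED  -- assumed pattern name
          & listTranscriptionJobs_jobNameContains ?~ "meeting"
  Amazonka.runResourceT . runConduit $
    Amazonka.paginate env req
      .| CL.concatMap (fromMaybe [] . (^. listTranscriptionJobsResponse_transcriptionJobSummaries))
      .| CL.consume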

ListMedicalTranscriptionJobs

listMedicalTranscriptionJobs_status :: Lens' ListMedicalTranscriptionJobs (Maybe TranscriptionJobStatus) Source #

When specified, returns only medical transcription jobs with the specified status. Jobs are ordered by creation date, with the newest jobs returned first. If you don't specify a status, Amazon Transcribe Medical returns all transcription jobs ordered by creation date.

listMedicalTranscriptionJobs_nextToken :: Lens' ListMedicalTranscriptionJobs (Maybe Text) Source #

If you receive a truncated result in the previous request of ListMedicalTranscriptionJobs, include NextToken to fetch the next set of jobs.

listMedicalTranscriptionJobs_jobNameContains :: Lens' ListMedicalTranscriptionJobs (Maybe Text) Source #

When specified, the jobs returned in the list are limited to jobs whose name contains the specified string.

listMedicalTranscriptionJobs_maxResults :: Lens' ListMedicalTranscriptionJobs (Maybe Natural) Source #

The maximum number of medical transcription jobs to return in each page of results. If there are fewer results than the value you specify, only the actual results are returned. If you do not specify a value, the default of 5 is used.

listMedicalTranscriptionJobsResponse_nextToken :: Lens' ListMedicalTranscriptionJobsResponse (Maybe Text) Source #

The ListMedicalTranscriptionJobs operation returns a page of jobs at a time. The maximum size of the page is set by the MaxResults parameter. If the number of jobs exceeds what can fit on a page, Amazon Transcribe Medical returns the NextPage token. Include the token in the next request to the ListMedicalTranscriptionJobs operation to return the next page of jobs.

UntagResource

untagResource_resourceArn :: Lens' UntagResource Text Source #

The Amazon Resource Name (ARN) of the Amazon Transcribe resource you want to remove tags from.

untagResource_tagKeys :: Lens' UntagResource (NonEmpty Text) Source #

A list of tag keys you want to remove from a specified Amazon Transcribe resource.

DeleteVocabularyFilter

ListVocabularyFilters

listVocabularyFilters_nameContains :: Lens' ListVocabularyFilters (Maybe Text) Source #

Filters the response so that it only contains vocabulary filters whose name contains the specified string.

listVocabularyFilters_nextToken :: Lens' ListVocabularyFilters (Maybe Text) Source #

If the result of the previous request to ListVocabularyFilters was truncated, include the NextToken to fetch the next set of collections.

listVocabularyFilters_maxResults :: Lens' ListVocabularyFilters (Maybe Natural) Source #

The maximum number of filters to return in each page of results. If there are fewer results than the value you specify, only the actual results are returned. If you do not specify a value, the default of 5 is used.

listVocabularyFiltersResponse_nextToken :: Lens' ListVocabularyFiltersResponse (Maybe Text) Source #

The ListVocabularyFilters operation returns a page of collections at a time. The maximum size of the page is set by the MaxResults parameter. If there are more jobs in the list than the page size, Amazon Transcribe returns the NextPage token. Include the token in the next request to the ListVocabularyFilters operation to return the next page of jobs.

listVocabularyFiltersResponse_vocabularyFilters :: Lens' ListVocabularyFiltersResponse (Maybe [VocabularyFilterInfo]) Source #

The list of vocabulary filters. It contains at most MaxResults number of filters. If there are more filters, call the ListVocabularyFilters operation again with the NextToken parameter in the request set to the value of the NextToken field in the response.

UpdateVocabularyFilter

updateVocabularyFilter_vocabularyFilterFileUri :: Lens' UpdateVocabularyFilter (Maybe Text) Source #

The Amazon S3 location of a text file used as input to create the vocabulary filter. Only use characters from the character set defined for custom vocabularies. For a list of character sets, see Character Sets for Custom Vocabularies.

The specified file must be less than 50 KB of UTF-8 characters.

If you provide the location of a list of words in the VocabularyFilterFileUri parameter, you can't use the Words parameter.

updateVocabularyFilter_words :: Lens' UpdateVocabularyFilter (Maybe (NonEmpty Text)) Source #

The words to use in the vocabulary filter. Only use characters from the character set defined for custom vocabularies. For a list of character sets, see Character Sets for Custom Vocabularies.

If you provide a list of words in the Words parameter, you can't use the VocabularyFilterFileUri parameter.

updateVocabularyFilter_vocabularyFilterName :: Lens' UpdateVocabularyFilter Text Source #

The name of the vocabulary filter to update. If you try to update a vocabulary filter with the same name as another vocabulary filter, you get a ConflictException error.

updateVocabularyFilterResponse_lastModifiedTime :: Lens' UpdateVocabularyFilterResponse (Maybe UTCTime) Source #

The date and time that the vocabulary filter was updated.

ListVocabularies

listVocabularies_nameContains :: Lens' ListVocabularies (Maybe Text) Source #

When specified, the vocabularies returned in the list are limited to vocabularies whose name contains the specified string. The search is not case sensitive; ListVocabularies returns both "vocabularyname" and "VocabularyName" in the response list.

listVocabularies_nextToken :: Lens' ListVocabularies (Maybe Text) Source #

If the result of the previous request to ListVocabularies was truncated, include the NextToken to fetch the next set of jobs.

listVocabularies_stateEquals :: Lens' ListVocabularies (Maybe VocabularyState) Source #

When specified, only returns vocabularies with the VocabularyState field equal to the specified state.

listVocabularies_maxResults :: Lens' ListVocabularies (Maybe Natural) Source #

The maximum number of vocabularies to return in each page of results. If there are fewer results than the value you specify, only the actual results are returned. If you do not specify a value, the default of 5 is used.

listVocabulariesResponse_vocabularies :: Lens' ListVocabulariesResponse (Maybe [VocabularyInfo]) Source #

A list of objects that describe the vocabularies that match the search criteria in the request.

listVocabulariesResponse_nextToken :: Lens' ListVocabulariesResponse (Maybe Text) Source #

The ListVocabularies operation returns a page of vocabularies at a time. The maximum size of the page is set in the MaxResults parameter. If there are more jobs in the list than will fit on the page, Amazon Transcribe returns the NextPage token. To return the next page of jobs, include the token in the next request to the ListVocabularies operation.

CreateVocabulary

createVocabulary_vocabularyFileUri :: Lens' CreateVocabulary (Maybe Text) Source #

The S3 location of the text file that contains the definition of the custom vocabulary. The URI must be in the same region as the API endpoint that you are calling. The general form is:

 https://s3.<aws-region>.amazonaws.com/<bucket-name>/<keyprefix>/<objectkey>

For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.

For more information about custom vocabularies, see Custom vocabularies.

createVocabulary_phrases :: Lens' CreateVocabulary (Maybe [Text]) Source #

An array of strings that contains the vocabulary entries.

createVocabulary_tags :: Lens' CreateVocabulary (Maybe (NonEmpty Tag)) Source #

Adds one or more tags, each in the form of a key:value pair, to a new Amazon Transcribe vocabulary at the time you create this new vocabulary.

createVocabulary_vocabularyName :: Lens' CreateVocabulary Text Source #

The name of the vocabulary. The name must be unique within an Amazon Web Services account. The name is case sensitive. If you try to create a vocabulary with the same name as a previous vocabulary, you will receive a ConflictException error.

createVocabulary_languageCode :: Lens' CreateVocabulary LanguageCode Source #

The language code of the vocabulary entries. For a list of languages and their corresponding language codes, see transcribe-whatis.

createVocabularyResponse_failureReason :: Lens' CreateVocabularyResponse (Maybe Text) Source #

If the VocabularyState field is FAILED, this field contains information about why the job failed.

createVocabularyResponse_lastModifiedTime :: Lens' CreateVocabularyResponse (Maybe UTCTime) Source #

The date and time that the vocabulary was created.

createVocabularyResponse_vocabularyState :: Lens' CreateVocabularyResponse (Maybe VocabularyState) Source #

The processing state of the vocabulary. When the VocabularyState field contains READY, the vocabulary is ready to be used in a StartTranscriptionJob request.
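
A hedged end-to-end sketch of creating a vocabulary from inline phrases. It assumes the smart constructor newCreateVocabulary takes the two required fields above in the order vocabulary name, then language code, and that LanguageCode_En_US is the generated pattern for en-US; the lenses are the ones documented above.

{-# LANGUAGE OverloadedStrings #-}

import qualified Amazonka
import Amazonka.Transcribe
import Amazonka.Transcribe.Lens
import Control.Lens ((&), (?~), (^.))

-- Create a vocabulary from a short list of phrases and return its initial
-- processing state.
createPhraseVocabulary :: IO (Maybe VocabularyState)
createPhraseVocabulary = do
  env <- Amazonka.newEnv Amazonka.discover
  let req =
        newCreateVocabulary "my-company-terms" LanguageCode_En_US  -- assumed argument order and pattern
          & createVocabulary_phrases ?~ ["Amazonka", "Haddock", "lens"]
  resp <- Amazonka.runResourceT (Amazonka.send env req)
  pure (resp ^. createVocabularyResponse_vocabularyState)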

CreateLanguageModel

createLanguageModel_tags :: Lens' CreateLanguageModel (Maybe (NonEmpty Tag)) Source #

Adds one or more tags, each in the form of a key:value pair, to a new language model at the time you create this new model.

createLanguageModel_languageCode :: Lens' CreateLanguageModel CLMLanguageCode Source #

The language of the input text you're using to train your custom language model.

createLanguageModel_baseModelName :: Lens' CreateLanguageModel BaseModelName Source #

The Amazon Transcribe standard language model, or base model used to create your custom language model.

If you want to use your custom language model to transcribe audio with a sample rate of 16,000 Hz or greater, choose Wideband.

If you want to use your custom language model to transcribe audio with a sample rate that is less than 16,000 Hz, choose Narrowband.

createLanguageModel_modelName :: Lens' CreateLanguageModel Text Source #

The name you choose for your custom language model when you create it.

createLanguageModel_inputDataConfig :: Lens' CreateLanguageModel InputDataConfig Source #

Contains the data access role and the Amazon S3 prefixes to read the required input files to create a custom language model.

createLanguageModelResponse_languageCode :: Lens' CreateLanguageModelResponse (Maybe CLMLanguageCode) Source #

The language code of the text you've used to create a custom language model.

createLanguageModelResponse_modelName :: Lens' CreateLanguageModelResponse (Maybe Text) Source #

The name you've chosen for your custom language model.

createLanguageModelResponse_inputDataConfig :: Lens' CreateLanguageModelResponse (Maybe InputDataConfig) Source #

The data access role and Amazon S3 prefixes you've chosen to create your custom language model.

createLanguageModelResponse_baseModelName :: Lens' CreateLanguageModelResponse (Maybe BaseModelName) Source #

The Amazon Transcribe standard language model, or base model you've used to create a custom language model.

createLanguageModelResponse_modelStatus :: Lens' CreateLanguageModelResponse (Maybe ModelStatus) Source #

The status of the custom language model. When the status is COMPLETED, the model is ready to use.

StartTranscriptionJob

startTranscriptionJob_contentRedaction :: Lens' StartTranscriptionJob (Maybe ContentRedaction) Source #

An object that contains the request parameters for content redaction.

startTranscriptionJob_subtitles :: Lens' StartTranscriptionJob (Maybe Subtitles) Source #

Add subtitles to your batch transcription job.

startTranscriptionJob_languageCode :: Lens' StartTranscriptionJob (Maybe LanguageCode) Source #

The language code for the language used in the input media file.

To transcribe speech in Modern Standard Arabic (ar-SA), your audio or video file must be encoded at a sample rate of 16,000 Hz or higher.

startTranscriptionJob_languageOptions :: Lens' StartTranscriptionJob (Maybe (NonEmpty LanguageCode)) Source #

An object containing a list of languages that might be present in your collection of audio files. Automatic language identification chooses a language that best matches the source audio from that list.

To transcribe speech in Modern Standard Arabic (ar-SA), your audio or video file must be encoded at a sample rate of 16,000 Hz or higher.

startTranscriptionJob_settings :: Lens' StartTranscriptionJob (Maybe Settings) Source #

A Settings object that provides optional settings for a transcription job.

startTranscriptionJob_outputBucketName :: Lens' StartTranscriptionJob (Maybe Text) Source #

The location where the transcription is stored.

If you set the OutputBucketName, Amazon Transcribe puts the transcript in the specified S3 bucket. When you call the GetTranscriptionJob operation, the operation returns this location in the TranscriptFileUri field. If you enable content redaction, the redacted transcript appears in RedactedTranscriptFileUri. If you enable content redaction and choose to output an unredacted transcript, that transcript's location still appears in the TranscriptFileUri. The S3 bucket must have permissions that allow Amazon Transcribe to put files in the bucket. For more information, see Permissions Required for IAM User Roles.

You can specify an Amazon Web Services Key Management Service (KMS) key to encrypt the output of your transcription using the OutputEncryptionKMSKeyId parameter. If you don't specify a KMS key, Amazon Transcribe uses the default Amazon S3 key for server-side encryption of transcripts that are placed in your S3 bucket.

If you don't set the OutputBucketName, Amazon Transcribe generates a pre-signed URL, a shareable URL that provides secure access to your transcription, and returns it in the TranscriptFileUri field. Use this URL to download the transcription.

startTranscriptionJob_outputEncryptionKMSKeyId :: Lens' StartTranscriptionJob (Maybe Text) Source #

The Amazon Resource Name (ARN) of the Amazon Web Services Key Management Service (KMS) key used to encrypt the output of the transcription job. The user calling the StartTranscriptionJob operation must have permission to use the specified KMS key.

You can use either of the following to identify a KMS key in the current account:

  • KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
  • KMS Key Alias: "alias/ExampleAlias"

You can use either of the following to identify a KMS key in the current account or another account:

  • Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:region:account ID:key/1234abcd-12ab-34cd-56ef-1234567890ab"
  • ARN of a KMS Key Alias: "arn:aws:kms:region:account ID:alias/ExampleAlias"

If you don't specify an encryption key, the output of the transcription job is encrypted with the default Amazon S3 key (SSE-S3).

If you specify a KMS key to encrypt your output, you must also specify an output location in the OutputBucketName parameter.

startTranscriptionJob_modelSettings :: Lens' StartTranscriptionJob (Maybe ModelSettings) Source #

Choose the custom language model you use for your transcription job in this parameter.

startTranscriptionJob_kmsEncryptionContext :: Lens' StartTranscriptionJob (Maybe (HashMap Text Text)) Source #

A map of plain text, non-secret key:value pairs, known as encryption context pairs, that provide an added layer of security for your data.

startTranscriptionJob_jobExecutionSettings :: Lens' StartTranscriptionJob (Maybe JobExecutionSettings) Source #

Provides information about how a transcription job is executed. Use this field to indicate that the job can be queued for deferred execution if the concurrency limit is reached and there are no slots available to immediately run the job.

startTranscriptionJob_outputKey :: Lens' StartTranscriptionJob (Maybe Text) Source #

You can specify a location in an Amazon S3 bucket to store the output of your transcription job.

If you don't specify an output key, Amazon Transcribe stores the output of your transcription job in the Amazon S3 bucket you specified. By default, the object key is "your-transcription-job-name.json".

You can use output keys to specify the Amazon S3 prefix and file name of the transcription output. For example, specifying the Amazon S3 prefix, "folder1/folder2/", as an output key would lead to the output being stored as "folder1/folder2/your-transcription-job-name.json". If you specify "my-other-job-name.json" as the output key, the object key is changed to "my-other-job-name.json". You can use an output key to change both the prefix and the file name, for example "folder/my-other-job-name.json".

If you specify an output key, you must also specify an S3 bucket in the OutputBucketName parameter.

startTranscriptionJob_identifyLanguage :: Lens' StartTranscriptionJob (Maybe Bool) Source #

Set this field to true to enable automatic language identification. Automatic language identification is disabled by default. You receive a BadRequestException error if you enter a value for a LanguageCode.

startTranscriptionJob_tags :: Lens' StartTranscriptionJob (Maybe (NonEmpty Tag)) Source #

Add tags to an Amazon Transcribe transcription job.

startTranscriptionJob_mediaSampleRateHertz :: Lens' StartTranscriptionJob (Maybe Natural) Source #

The sample rate, in Hertz, of the audio track in the input media file.

If you do not specify the media sample rate, Amazon Transcribe determines the sample rate. If you specify the sample rate, it must match the sample rate detected by Amazon Transcribe. In most cases, you should leave the MediaSampleRateHertz field blank and let Amazon Transcribe determine the sample rate.

startTranscriptionJob_transcriptionJobName :: Lens' StartTranscriptionJob Text Source #

The name of the job. You can't use the strings "." or ".." by themselves as the job name. The name must also be unique within an Amazon Web Services account. If you try to create a transcription job with the same name as a previous transcription job, you get a ConflictException error.

startTranscriptionJob_media :: Lens' StartTranscriptionJob Media Source #

An object that describes the input media for a transcription job.

startTranscriptionJobResponse_transcriptionJob :: Lens' StartTranscriptionJobResponse (Maybe TranscriptionJob) Source #

An object containing details of the asynchronous transcription job.
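
A hedged sketch that ties the StartTranscriptionJob lenses above together. The required constructor arguments (job name, then media) mirror the two non-optional lenses on this page; newStartTranscriptionJob, newMedia, and media_mediaFileUri are assumptions based on the generated naming style.

{-# LANGUAGE OverloadedStrings #-}

import qualified Amazonka
import Amazonka.Transcribe
import Amazonka.Transcribe.Lens
import Control.Lens ((&), (?~), (^.))

-- Start a job that writes its transcript to your own bucket under an explicit
-- key, encrypted with a KMS key alias.
startMeetingJob :: IO (Maybe TranscriptionJob)
startMeetingJob = do
  env <- Amazonka.newEnv Amazonka.discover
  let media =
        newMedia                                                   -- assumed smart constructor
          & media_mediaFileUri ?~ "s3://DOC-EXAMPLE-BUCKET1/meetings/standup.wav"
      req =
        newStartTranscriptionJob "standup-2021-10-01" media        -- assumed argument order
          & startTranscriptionJob_languageCode ?~ LanguageCode_En_US
          & startTranscriptionJob_outputBucketName ?~ "DOC-EXAMPLE-BUCKET1"
          & startTranscriptionJob_outputKey ?~ "transcripts/standup-2021-10-01.json"
          & startTranscriptionJob_outputEncryptionKMSKeyId ?~ "alias/ExampleAlias"
  resp <- Amazonka.runResourceT (Amazonka.send env req)
  pure (resp ^. startTranscriptionJobResponse_transcriptionJob)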

Types

AbsoluteTimeRange

absoluteTimeRange_first :: Lens' AbsoluteTimeRange (Maybe Natural) Source #

A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.

absoluteTimeRange_startTime :: Lens' AbsoluteTimeRange (Maybe Natural) Source #

A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:

  • StartTime - 10000
  • EndTime - 50000

The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.

absoluteTimeRange_last :: Lens' AbsoluteTimeRange (Maybe Natural) Source #

A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.

absoluteTimeRange_endTime :: Lens' AbsoluteTimeRange (Maybe Natural) Source #

A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:

  • StartTime - 10000
  • EndTime - 50000

The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
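
A small, hedged sketch of the example above, assuming the generated smart constructor newAbsoluteTimeRange takes no arguments (all four fields are optional):

import Amazonka.Transcribe
import Amazonka.Transcribe.Lens
import Control.Lens ((&), (?~))

-- A range covering 10,000 ms to 50,000 ms into the call.
tenToFiftyThousandMs :: AbsoluteTimeRange
tenToFiftyThousandMs =
  newAbsoluteTimeRange
    & absoluteTimeRange_startTime ?~ 10000
    & absoluteTimeRange_endTime ?~ 50000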

CallAnalyticsJob

callAnalyticsJob_creationTime :: Lens' CallAnalyticsJob (Maybe UTCTime) Source #

A timestamp that shows when the analytics job was created.

callAnalyticsJob_failureReason :: Lens' CallAnalyticsJob (Maybe Text) Source #

If the AnalyticsJobStatus is FAILED, this field contains information about why the job failed.

The FailureReason field can contain one of the following values:

  • Unsupported media format: The media format specified in the MediaFormat field of the request isn't valid. See the description of the MediaFormat field for a list of valid values.
  • The media format provided does not match the detected media format: The media format of the audio file doesn't match the format specified in the MediaFormat field in the request. Check the media format of your media file and make sure the two values match.
  • Invalid sample rate for audio file: The sample rate specified in the MediaSampleRateHertz of the request isn't valid. The sample rate must be between 8,000 and 48,000 Hertz.
  • The sample rate provided does not match the detected sample rate: The sample rate in the audio file doesn't match the sample rate specified in the MediaSampleRateHertz field in the request. Check the sample rate of your media file and make sure that the two values match.
  • Invalid file size: file size too large: The size of your audio file is larger than what Amazon Transcribe Medical can process. For more information, see Guidelines and Quotas in the Amazon Transcribe Medical Guide.
  • Invalid number of channels: number of channels too large: Your audio contains more channels than Amazon Transcribe Medical is configured to process. To request additional channels, see Amazon Transcribe Medical Endpoints and Quotas in the Amazon Web Services General Reference.

callAnalyticsJob_identifiedLanguageScore :: Lens' CallAnalyticsJob (Maybe Double) Source #

A value between zero and one that Amazon Transcribe assigned to the language that it identified in the source audio. This value appears only when you don't provide a single language code. Larger values indicate that Amazon Transcribe has higher confidence in the language that it identified.

callAnalyticsJob_languageCode :: Lens' CallAnalyticsJob (Maybe LanguageCode) Source #

If you know the language spoken between the customer and the agent, specify a language code for this field.

If you don't know the language, you can leave this field blank, and Amazon Transcribe will use machine learning to automatically identify the language. To improve the accuracy of language identification, you can provide an array containing the possible language codes for the language spoken in your audio. Refer to Supported languages and language-specific features for additional information.

callAnalyticsJob_settings :: Lens' CallAnalyticsJob (Maybe CallAnalyticsJobSettings) Source #

Provides information about the settings used to run a transcription job.

callAnalyticsJob_startTime :: Lens' CallAnalyticsJob (Maybe UTCTime) Source #

A timestamp that shows when the analytics job started processing.

callAnalyticsJob_completionTime :: Lens' CallAnalyticsJob (Maybe UTCTime) Source #

A timestamp that shows when the analytics job was completed.

callAnalyticsJob_mediaFormat :: Lens' CallAnalyticsJob (Maybe MediaFormat) Source #

The format of the input audio file. Note: for call analytics jobs, only the following media formats are supported: MP3, MP4, WAV, FLAC, OGG, and WebM.

callAnalyticsJob_channelDefinitions :: Lens' CallAnalyticsJob (Maybe (NonEmpty ChannelDefinition)) Source #

Shows numeric values to indicate the channel assigned to the agent's audio and the channel assigned to the customer's audio.

callAnalyticsJob_dataAccessRoleArn :: Lens' CallAnalyticsJob (Maybe Text) Source #

The Amazon Resource Name (ARN) that you use to get access to the analytics job.

CallAnalyticsJobSettings

callAnalyticsJobSettings_languageOptions :: Lens' CallAnalyticsJobSettings (Maybe (NonEmpty LanguageCode)) Source #

When you run a call analytics job, you can specify the language spoken in the audio, or you can have Amazon Transcribe identify the language for you.

To specify a language, specify an array with one language code. If you don't know the language, you can leave this field blank and Amazon Transcribe will use machine learning to identify the language for you. To improve the ability of Amazon Transcribe to correctly identify the language, you can provide an array of the languages that can be present in the audio. Refer to Supported languages and language-specific features for additional information.

callAnalyticsJobSettings_vocabularyName :: Lens' CallAnalyticsJobSettings (Maybe Text) Source #

The name of a vocabulary to use when processing the call analytics job.

callAnalyticsJobSettings_languageModelName :: Lens' CallAnalyticsJobSettings (Maybe Text) Source #

The name of the custom language model to use when processing the call analytics job.

callAnalyticsJobSettings_vocabularyFilterName :: Lens' CallAnalyticsJobSettings (Maybe Text) Source #

The name of the vocabulary filter to use when running a call analytics job. The filter that you specify must have the same language code as the analytics job.

callAnalyticsJobSettings_vocabularyFilterMethod :: Lens' CallAnalyticsJobSettings (Maybe VocabularyFilterMethod) Source #

Set to mask to remove filtered text from the transcript and replace it with three asterisks ("***") as placeholder text. Set to remove to remove filtered text from the transcript without using placeholder text. Set to tag to mark the word in the transcription output that matches the vocabulary filter. When you set the filter method to tag, the words matching your vocabulary filter are not masked or removed.
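
As a rough sketch of how these lenses compose (the newCallAnalyticsJobSettings smart constructor comes from the package's Types module rather than this Lens module, the Control.Lens operators are assumed, and the vocabulary, filter, and model names are placeholders):

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Transcribe
import Amazonka.Transcribe.Lens
import Control.Lens ((&), (?~))

-- Placeholder names; substitute the custom vocabulary, vocabulary filter,
-- and custom language model that you created.
analyticsSettings :: CallAnalyticsJobSettings
analyticsSettings =
  newCallAnalyticsJobSettings
    & callAnalyticsJobSettings_vocabularyName ?~ "my-call-vocabulary"
    & callAnalyticsJobSettings_vocabularyFilterName ?~ "my-vocabulary-filter"
    & callAnalyticsJobSettings_languageModelName ?~ "my-custom-language-model"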

CallAnalyticsJobSummary

callAnalyticsJobSummary_creationTime :: Lens' CallAnalyticsJobSummary (Maybe UTCTime) Source #

A timestamp that shows when the call analytics job was created.

callAnalyticsJobSummary_failureReason :: Lens' CallAnalyticsJobSummary (Maybe Text) Source #

If the CallAnalyticsJobStatus is FAILED, a description of the error.

callAnalyticsJobSummary_languageCode :: Lens' CallAnalyticsJobSummary (Maybe LanguageCode) Source #

The language of the transcript in the source audio file.

callAnalyticsJobSummary_startTime :: Lens' CallAnalyticsJobSummary (Maybe UTCTime) Source #

A timestamp that shows when the job began processing.

callAnalyticsJobSummary_completionTime :: Lens' CallAnalyticsJobSummary (Maybe UTCTime) Source #

A timestamp that shows when the job was completed.

CategoryProperties

categoryProperties_rules :: Lens' CategoryProperties (Maybe (NonEmpty Rule)) Source #

The rules used to create a call analytics category.

categoryProperties_categoryName :: Lens' CategoryProperties (Maybe Text) Source #

The name of the call analytics category.

categoryProperties_lastUpdateTime :: Lens' CategoryProperties (Maybe UTCTime) Source #

A timestamp that shows when the call analytics category was most recently updated.

categoryProperties_createTime :: Lens' CategoryProperties (Maybe UTCTime) Source #

A timestamp that shows when the call analytics category was created.

ChannelDefinition

channelDefinition_participantRole :: Lens' ChannelDefinition (Maybe ParticipantRole) Source #

Indicates whether the person speaking on the audio channel is the agent or customer.

channelDefinition_channelId :: Lens' ChannelDefinition (Maybe Natural) Source #

A value that indicates the audio channel.
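
A minimal sketch of a pair of channel definitions, assuming the newChannelDefinition smart constructor and the ParticipantRole pattern synonyms exported with the types; the 0/1 channel assignment is only an example and must match how your audio is recorded:

import Amazonka.Transcribe
import Amazonka.Transcribe.Lens
import Control.Lens ((&), (?~))
import Data.List.NonEmpty (NonEmpty (..))

-- Channel 0 carries the agent and channel 1 the customer in this example.
agentChannel, customerChannel :: ChannelDefinition
agentChannel =
  newChannelDefinition
    & channelDefinition_channelId ?~ 0
    & channelDefinition_participantRole ?~ ParticipantRole_AGENT

customerChannel =
  newChannelDefinition
    & channelDefinition_channelId ?~ 1
    & channelDefinition_participantRole ?~ ParticipantRole_CUSTOMER

channelMap :: NonEmpty ChannelDefinition
channelMap = agentChannel :| [customerChannel]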

ContentRedaction

contentRedaction_redactionType :: Lens' ContentRedaction RedactionType Source #

Request parameter that defines the entities to be redacted. The only accepted value is PII.

contentRedaction_redactionOutput :: Lens' ContentRedaction RedactionOutput Source #

The output transcript file stored in either the default S3 bucket or in a bucket you specify.

When you choose redacted, Amazon Transcribe outputs only the redacted transcript.

When you choose redacted_and_unredacted, Amazon Transcribe outputs both the redacted and unredacted transcripts.
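
Because both fields are required, they are passed to the smart constructor rather than set through lenses. A hedged sketch: the argument order and the pattern-synonym names (RedactionType_PII, RedactionOutput_Redacted_and_unredacted) follow the usual amazonka naming conventions and should be checked against the generated Types module:

import Amazonka.Transcribe

-- Redact PII and keep both the redacted and unredacted transcripts.
piiRedaction :: ContentRedaction
piiRedaction =
  newContentRedaction RedactionType_PII RedactionOutput_Redacted_and_unredacted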

InputDataConfig

inputDataConfig_tuningDataS3Uri :: Lens' InputDataConfig (Maybe Text) Source #

The Amazon S3 prefix you specify to access the plain text files that you use to tune your custom language model.

inputDataConfig_s3Uri :: Lens' InputDataConfig Text Source #

The Amazon S3 prefix you specify to access the plain text files that you use to train your custom language model.

inputDataConfig_dataAccessRoleArn :: Lens' InputDataConfig Text Source #

The Amazon Resource Name (ARN) that uniquely identifies the permissions you've given Amazon Transcribe to access your Amazon S3 buckets containing your media files or text data.
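
A sketch of an input data configuration; the S3 prefixes and role ARN are placeholders, and the required arguments are assumed to be taken in the order S3Uri then DataAccessRoleArn:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Transcribe
import Amazonka.Transcribe.Lens
import Control.Lens ((&), (?~))

-- Training data is required; tuning data is optional and set through its lens.
clmInputData :: InputDataConfig
clmInputData =
  newInputDataConfig
    "s3://DOC-EXAMPLE-BUCKET/clm-training-data/"
    "arn:aws:iam::111122223333:role/ExampleTranscribeAccessRole"
    & inputDataConfig_tuningDataS3Uri ?~ "s3://DOC-EXAMPLE-BUCKET/clm-tuning-data/"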

InterruptionFilter

interruptionFilter_participantRole :: Lens' InterruptionFilter (Maybe ParticipantRole) Source #

Indicates whether the caller or customer was interrupting.

interruptionFilter_relativeTimeRange :: Lens' InterruptionFilter (Maybe RelativeTimeRange) Source #

An object that allows percentages to specify the proportion of the call where there was an interruption. For example, you can specify the first half of the call. You can also specify the period of time between halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.

interruptionFilter_negate :: Lens' InterruptionFilter (Maybe Bool) Source #

Set to TRUE to look for a time period where there was no interruption.

interruptionFilter_absoluteTimeRange :: Lens' InterruptionFilter (Maybe AbsoluteTimeRange) Source #

An object you can use to specify a time range (in milliseconds) for when you'd want to find the interruption. For example, you could search for an interruption between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
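
A sketch of an interruption filter that looks for the customer interrupting during the first half of the call, assuming the newInterruptionFilter and newRelativeTimeRange smart constructors:

import Amazonka.Transcribe
import Amazonka.Transcribe.Lens
import Control.Lens ((&), (?~))

-- All fields are optional, so the empty constructor is refined with lenses.
customerInterruptions :: InterruptionFilter
customerInterruptions =
  newInterruptionFilter
    & interruptionFilter_participantRole ?~ ParticipantRole_CUSTOMER
    & interruptionFilter_relativeTimeRange
      ?~ ( newRelativeTimeRange
             & relativeTimeRange_startPercentage ?~ 0
             & relativeTimeRange_endPercentage ?~ 50
         )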

JobExecutionSettings

jobExecutionSettings_dataAccessRoleArn :: Lens' JobExecutionSettings (Maybe Text) Source #

The Amazon Resource Name (ARN) of a role that has access to the S3 bucket that contains the input files. Amazon Transcribe assumes this role to read queued media files. If you have specified an output S3 bucket for the transcription results, this role should have access to the output bucket as well.

If you specify the AllowDeferredExecution field, you must specify the DataAccessRoleArn field.

jobExecutionSettings_allowDeferredExecution :: Lens' JobExecutionSettings (Maybe Bool) Source #

Indicates whether a job should be queued by Amazon Transcribe when the concurrent execution limit is exceeded. When the AllowDeferredExecution field is true, jobs are queued and executed when the number of executing jobs falls below the concurrent execution limit. If the field is false, Amazon Transcribe returns a LimitExceededException exception.

Note that job queuing is enabled by default for call analytics jobs.

If you specify the AllowDeferredExecution field, you must specify the DataAccessRoleArn field.
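
A sketch that queues jobs instead of failing when the concurrent execution limit is reached; the role ARN is a placeholder, and as noted above it must be supplied whenever AllowDeferredExecution is set:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Transcribe
import Amazonka.Transcribe.Lens
import Control.Lens ((&), (?~))

queuedExecution :: JobExecutionSettings
queuedExecution =
  newJobExecutionSettings
    & jobExecutionSettings_allowDeferredExecution ?~ True
    & jobExecutionSettings_dataAccessRoleArn
      ?~ "arn:aws:iam::111122223333:role/ExampleTranscribeAccessRole"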

LanguageModel

languageModel_failureReason :: Lens' LanguageModel (Maybe Text) Source #

The reason why the custom language model couldn't be created.

languageModel_languageCode :: Lens' LanguageModel (Maybe CLMLanguageCode) Source #

The language code you used to create your custom language model.

languageModel_modelName :: Lens' LanguageModel (Maybe Text) Source #

The name of the custom language model.

languageModel_lastModifiedTime :: Lens' LanguageModel (Maybe UTCTime) Source #

The most recent time the custom language model was modified.

languageModel_upgradeAvailability :: Lens' LanguageModel (Maybe Bool) Source #

Whether the base model used for the custom language model is up to date. If this field is true, you are running the most up-to-date version of the base model in your custom language model.

languageModel_inputDataConfig :: Lens' LanguageModel (Maybe InputDataConfig) Source #

The data access role and Amazon S3 prefixes for the input files used to train the custom language model.

languageModel_baseModelName :: Lens' LanguageModel (Maybe BaseModelName) Source #

The Amazon Transcribe standard language model, or base model, used to create the custom language model.

languageModel_modelStatus :: Lens' LanguageModel (Maybe ModelStatus) Source #

The creation status of a custom language model. When the status is COMPLETED, the model is ready for use.

languageModel_createTime :: Lens' LanguageModel (Maybe UTCTime) Source #

The time the custom language model was created.

Media

media_mediaFileUri :: Lens' Media (Maybe Text) Source #

The S3 object location of the input media file. The URI must be in the same region as the API endpoint that you are calling. The general form is s3://bucket-name/keyprefix/objectkey; for example, s3://DOC-EXAMPLE-BUCKET/example.mp4.

For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.

media_redactedMediaFileUri :: Lens' Media (Maybe Text) Source #

The S3 object location for your redacted output media file. This is only supported for call analytics jobs.
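
A sketch of a media value pointing at an input file in S3; the bucket and key are placeholders and must be in the same region as the endpoint you call:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Transcribe
import Amazonka.Transcribe.Lens
import Control.Lens ((&), (?~))

inputMedia :: Media
inputMedia =
  newMedia
    & media_mediaFileUri ?~ "s3://DOC-EXAMPLE-BUCKET/calls/example-call.wav"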

MedicalTranscript

medicalTranscript_transcriptFileUri :: Lens' MedicalTranscript (Maybe Text) Source #

The S3 object location of the medical transcript.

Use this URI to access the medical transcript. This URI points to the S3 bucket you created to store the medical transcript.

MedicalTranscriptionJob

medicalTranscriptionJob_creationTime :: Lens' MedicalTranscriptionJob (Maybe UTCTime) Source #

A timestamp that shows when the job was created.

medicalTranscriptionJob_specialty :: Lens' MedicalTranscriptionJob (Maybe Specialty) Source #

The medical specialty of any clinicians providing a dictation or having a conversation. Refer to Transcribing a medical conversation for a list of supported specialties.

medicalTranscriptionJob_failureReason :: Lens' MedicalTranscriptionJob (Maybe Text) Source #

If the TranscriptionJobStatus field is FAILED, this field contains information about why the job failed.

The FailureReason field contains one of the following values:

  • Unsupported media format - The media format specified in the MediaFormat field of the request isn't valid. See the description of the MediaFormat field for a list of valid values.
  • The media format provided does not match the detected media format - The media format of the audio file doesn't match the format specified in the MediaFormat field in the request. Check the media format of your media file and make sure the two values match.
  • Invalid sample rate for audio file - The sample rate specified in the MediaSampleRateHertz of the request isn't valid. The sample rate must be between 8,000 and 48,000 Hertz.
  • The sample rate provided does not match the detected sample rate - The sample rate in the audio file doesn't match the sample rate specified in the MediaSampleRateHertz field in the request. Check the sample rate of your media file and make sure that the two values match.
  • Invalid file size: file size too large - The size of your audio file is larger than what Amazon Transcribe Medical can process. For more information, see Guidelines and Quotas in the Amazon Transcribe Medical Guide.
  • Invalid number of channels: number of channels too large - Your audio contains more channels than Amazon Transcribe Medical is configured to process. To request additional channels, see Amazon Transcribe Medical Endpoints and Quotas in the Amazon Web Services General Reference.

medicalTranscriptionJob_languageCode :: Lens' MedicalTranscriptionJob (Maybe LanguageCode) Source #

The language code for the language spoken in the source audio file. US English (en-US) is the only supported language for medical transcriptions. Any other value you enter for language code results in a BadRequestException error.

medicalTranscriptionJob_startTime :: Lens' MedicalTranscriptionJob (Maybe UTCTime) Source #

A timestamp that shows when the job started processing.

medicalTranscriptionJob_completionTime :: Lens' MedicalTranscriptionJob (Maybe UTCTime) Source #

A timestamp that shows when the job was completed.

medicalTranscriptionJob_type :: Lens' MedicalTranscriptionJob (Maybe Type) Source #

The type of speech in the transcription job. CONVERSATION is generally used for patient-physician dialogues. DICTATION is the setting for physicians speaking their notes after seeing a patient. For more information, see What is Amazon Transcribe Medical?.

medicalTranscriptionJob_contentIdentificationType :: Lens' MedicalTranscriptionJob (Maybe MedicalContentIdentificationType) Source #

Shows the type of content that you've configured Amazon Transcribe Medical to identify in a transcription job. If the value is PHI, you've configured the job to identify personal health information (PHI) in the transcription output.

medicalTranscriptionJob_transcript :: Lens' MedicalTranscriptionJob (Maybe MedicalTranscript) Source #

An object that contains the MedicalTranscript. The MedicalTranscript contains the TranscriptFileUri.

medicalTranscriptionJob_tags :: Lens' MedicalTranscriptionJob (Maybe (NonEmpty Tag)) Source #

A key:value pair assigned to a given medical transcription job.

medicalTranscriptionJob_mediaSampleRateHertz :: Lens' MedicalTranscriptionJob (Maybe Natural) Source #

The sample rate, in Hertz, of the source audio containing medical information.

If you don't specify the sample rate, Amazon Transcribe Medical determines it for you. If you choose to specify the sample rate, it must match the rate detected by Amazon Transcribe Medical. In most cases, you should leave the MediaSampleRateHertz field blank and let Amazon Transcribe Medical determine the sample rate.

MedicalTranscriptionJobSummary

medicalTranscriptionJobSummary_creationTime :: Lens' MedicalTranscriptionJobSummary (Maybe UTCTime) Source #

A timestamp that shows when the medical transcription job was created.

medicalTranscriptionJobSummary_specialty :: Lens' MedicalTranscriptionJobSummary (Maybe Specialty) Source #

The medical specialty of the transcription job. Refer to Transcribing a medical conversation for a list of supported specialties.

medicalTranscriptionJobSummary_failureReason :: Lens' MedicalTranscriptionJobSummary (Maybe Text) Source #

If the TranscriptionJobStatus field is FAILED, a description of the error.

medicalTranscriptionJobSummary_outputLocationType :: Lens' MedicalTranscriptionJobSummary (Maybe OutputLocationType) Source #

Indicates the location of the transcription job's output. If the value is CUSTOMER_BUCKET, the output is stored in the S3 bucket you specified when you started the job; if the value is SERVICE_BUCKET, the output is stored by Amazon Transcribe.

medicalTranscriptionJobSummary_startTime :: Lens' MedicalTranscriptionJobSummary (Maybe UTCTime) Source #

A timestamp that shows when the job began processing.

medicalTranscriptionJobSummary_type :: Lens' MedicalTranscriptionJobSummary (Maybe Type) Source #

The type of speech in the clinician's input audio, either a conversation or a dictation.

medicalTranscriptionJobSummary_contentIdentificationType :: Lens' MedicalTranscriptionJobSummary (Maybe MedicalContentIdentificationType) Source #

Shows the type of information you've configured Amazon Transcribe Medical to identify in a transcription job. If the value is PHI, you've configured the transcription job to identify personal health information (PHI).

MedicalTranscriptionSetting

medicalTranscriptionSetting_vocabularyName :: Lens' MedicalTranscriptionSetting (Maybe Text) Source #

The name of the vocabulary to use when processing a medical transcription job.

medicalTranscriptionSetting_maxAlternatives :: Lens' MedicalTranscriptionSetting (Maybe Natural) Source #

The maximum number of alternatives that you tell the service to return. If you specify the MaxAlternatives field, you must set the ShowAlternatives field to true.

medicalTranscriptionSetting_channelIdentification :: Lens' MedicalTranscriptionSetting (Maybe Bool) Source #

Instructs Amazon Transcribe Medical to process each audio channel separately and then merge the transcription output of each channel into a single transcription.

Amazon Transcribe Medical also produces a transcription of each item detected on an audio channel, including the start time and end time of the item and alternative transcriptions of the item. The alternative transcriptions also come with confidence scores provided by Amazon Transcribe Medical.

You can't set both ShowSpeakerLabels and ChannelIdentification in the same request. If you set both, your request returns a BadRequestException.

medicalTranscriptionSetting_showAlternatives :: Lens' MedicalTranscriptionSetting (Maybe Bool) Source #

Determines whether alternative transcripts are generated along with the transcript that has the highest confidence. If you set the ShowAlternatives field to true, you must also set the maximum number of alternatives to return in the MaxAlternatives field.

medicalTranscriptionSetting_maxSpeakerLabels :: Lens' MedicalTranscriptionSetting (Maybe Natural) Source #

The maximum number of speakers to identify in the input audio. If there are more speakers in the audio than this number, multiple speakers are identified as a single speaker. If you specify the MaxSpeakerLabels field, you must set the ShowSpeakerLabels field to true.

medicalTranscriptionSetting_showSpeakerLabels :: Lens' MedicalTranscriptionSetting (Maybe Bool) Source #

Determines whether the transcription job uses speaker recognition to identify different speakers in the input audio. Speaker recognition labels individual speakers in the audio file. If you set the ShowSpeakerLabels field to true, you must also set the maximum number of speaker labels in the MaxSpeakerLabels field.

You can't set both ShowSpeakerLabels and ChannelIdentification in the same request. If you set both, your request returns a BadRequestException.
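
A sketch that turns on speaker labelling for up to two speakers, assuming the newMedicalTranscriptionSetting smart constructor; ShowSpeakerLabels and MaxSpeakerLabels are set together and ChannelIdentification is left unset, per the constraints above:

import Amazonka.Transcribe
import Amazonka.Transcribe.Lens
import Control.Lens ((&), (?~))

dictationSetting :: MedicalTranscriptionSetting
dictationSetting =
  newMedicalTranscriptionSetting
    & medicalTranscriptionSetting_showSpeakerLabels ?~ True
    & medicalTranscriptionSetting_maxSpeakerLabels ?~ 2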

ModelSettings

modelSettings_languageModelName :: Lens' ModelSettings (Maybe Text) Source #

The name of your custom language model.

NonTalkTimeFilter

nonTalkTimeFilter_relativeTimeRange :: Lens' NonTalkTimeFilter (Maybe RelativeTimeRange) Source #

An object that allows percentages to specify the proportion of the call where there was silence. For example, you can specify the first half of the call. You can also specify the period of time between halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.

nonTalkTimeFilter_negate :: Lens' NonTalkTimeFilter (Maybe Bool) Source #

Set to TRUE to look for a time period when people were talking.

nonTalkTimeFilter_threshold :: Lens' NonTalkTimeFilter (Maybe Natural) Source #

The duration of the period when neither the customer nor agent was talking.

nonTalkTimeFilter_absoluteTimeRange :: Lens' NonTalkTimeFilter (Maybe AbsoluteTimeRange) Source #

An object you can use to specify a time range (in milliseconds) for when no one is talking. For example, you could specify a time period between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.

RelativeTimeRange

relativeTimeRange_endPercentage :: Lens' RelativeTimeRange (Maybe Natural) Source #

A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:

  • StartPercentage - 10
  • EndPercentage - 50

This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.

relativeTimeRange_first :: Lens' RelativeTimeRange (Maybe Natural) Source #

A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.

relativeTimeRange_last :: Lens' RelativeTimeRange (Maybe Natural) Source #

A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.

relativeTimeRange_startPercentage :: Lens' RelativeTimeRange (Maybe Natural) Source #

A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:

  • StartPercentage - 10
  • EndPercentage - 50

This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
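
The 10%-50% range from the example above, written with the assumed newRelativeTimeRange smart constructor; for a 100,000 millisecond call it covers the 10,000 millisecond mark through the 50,000 millisecond mark:

import Amazonka.Transcribe
import Amazonka.Transcribe.Lens
import Control.Lens ((&), (?~))

middleOfCall :: RelativeTimeRange
middleOfCall =
  newRelativeTimeRange
    & relativeTimeRange_startPercentage ?~ 10
    & relativeTimeRange_endPercentage ?~ 50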

Rule

rule_nonTalkTimeFilter :: Lens' Rule (Maybe NonTalkTimeFilter) Source #

A condition for a time period when neither the customer nor the agent was talking.

rule_transcriptFilter :: Lens' Rule (Maybe TranscriptFilter) Source #

A condition that catches particular words or phrases based on an exact match. For example, if you set the phrase "I want to speak to the manager", only that exact phrase is returned.

rule_sentimentFilter :: Lens' Rule (Maybe SentimentFilter) Source #

A condition that is applied to a particular customer sentiment.

rule_interruptionFilter :: Lens' Rule (Maybe InterruptionFilter) Source #

A condition for a time period when either the customer or agent was interrupting the other person.
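
A sketch of a rule that flags 30 seconds or more of silence, assuming the newRule and newNonTalkTimeFilter smart constructors and that the threshold, like the absolute time ranges, is expressed in milliseconds; each Rule typically sets just one of its filter fields:

import Amazonka.Transcribe
import Amazonka.Transcribe.Lens
import Control.Lens ((&), (?~))

longSilenceRule :: Rule
longSilenceRule =
  newRule
    & rule_nonTalkTimeFilter
      ?~ (newNonTalkTimeFilter & nonTalkTimeFilter_threshold ?~ 30000)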

SentimentFilter

sentimentFilter_participantRole :: Lens' SentimentFilter (Maybe ParticipantRole) Source #

A value that determines whether the sentiment belongs to the customer or the agent.

sentimentFilter_relativeTimeRange :: Lens' SentimentFilter (Maybe RelativeTimeRange) Source #

The time range, set in percentages, that corresponds to a proportion of the call.

sentimentFilter_negate :: Lens' SentimentFilter (Maybe Bool) Source #

Set to TRUE to look for sentiments that weren't specified in the request.

sentimentFilter_absoluteTimeRange :: Lens' SentimentFilter (Maybe AbsoluteTimeRange) Source #

The time range, measured in seconds, of the sentiment.

sentimentFilter_sentiments :: Lens' SentimentFilter (NonEmpty SentimentValue) Source #

An array that enables you to specify sentiments for the customer or agent. You can specify one or more values.
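
A sketch of a sentiment filter that looks for negative customer sentiment in the last quarter of the call; Sentiments is the only required field, so it is passed to the assumed newSentimentFilter smart constructor:

import Amazonka.Transcribe
import Amazonka.Transcribe.Lens
import Control.Lens ((&), (?~))
import Data.List.NonEmpty (NonEmpty (..))

negativeEnding :: SentimentFilter
negativeEnding =
  newSentimentFilter (SentimentValue_NEGATIVE :| [])
    & sentimentFilter_participantRole ?~ ParticipantRole_CUSTOMER
    & sentimentFilter_relativeTimeRange
      ?~ ( newRelativeTimeRange
             & relativeTimeRange_startPercentage ?~ 75
             & relativeTimeRange_endPercentage ?~ 100
         )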

Settings

settings_vocabularyName :: Lens' Settings (Maybe Text) Source #

The name of a vocabulary to use when processing the transcription job.

settings_maxAlternatives :: Lens' Settings (Maybe Natural) Source #

The number of alternative transcriptions that the service should return. If you specify the MaxAlternatives field, you must set the ShowAlternatives field to true.

settings_channelIdentification :: Lens' Settings (Maybe Bool) Source #

Instructs Amazon Transcribe to process each audio channel separately and then merge the transcription output of each channel into a single transcription.

Amazon Transcribe also produces a transcription of each item detected on an audio channel, including the start time and end time of the item and alternative transcriptions of the item including the confidence that Amazon Transcribe has in the transcription.

You can't set both ShowSpeakerLabels and ChannelIdentification in the same request. If you set both, your request returns a BadRequestException.

settings_showAlternatives :: Lens' Settings (Maybe Bool) Source #

Determines whether the transcription contains alternative transcriptions. If you set the ShowAlternatives field to true, you must also set the maximum number of alternatives to return in the MaxAlternatives field.

settings_maxSpeakerLabels :: Lens' Settings (Maybe Natural) Source #

The maximum number of speakers to identify in the input audio. If there are more speakers in the audio than this number, multiple speakers are identified as a single speaker. If you specify the MaxSpeakerLabels field, you must set the ShowSpeakerLabels field to true.

settings_vocabularyFilterName :: Lens' Settings (Maybe Text) Source #

The name of the vocabulary filter to use when transcribing the audio. The filter that you specify must have the same language code as the transcription job.

settings_showSpeakerLabels :: Lens' Settings (Maybe Bool) Source #

Determines whether the transcription job uses speaker recognition to identify different speakers in the input audio. Speaker recognition labels individual speakers in the audio file. If you set the ShowSpeakerLabels field to true, you must also set the maximum number of speaker labels in the MaxSpeakerLabels field.

You can't set both ShowSpeakerLabels and ChannelIdentification in the same request. If you set both, your request returns a BadRequestException.

settings_vocabularyFilterMethod :: Lens' Settings (Maybe VocabularyFilterMethod) Source #

Set to mask to remove filtered text from the transcript and replace it with three asterisks ("***") as placeholder text. Set to remove to remove filtered text from the transcript without using placeholder text. Set to tag to mark the word in the transcription output that matches the vocabulary filter. When you set the filter method to tag, the words matching your vocabulary filter are not masked or removed.
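
A sketch of transcription settings that enable speaker labelling for up to four speakers and attach a custom vocabulary; the vocabulary name is a placeholder and the newSettings smart constructor is assumed from the package's Types module:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Transcribe
import Amazonka.Transcribe.Lens
import Control.Lens ((&), (?~))

-- ShowSpeakerLabels and MaxSpeakerLabels must be set together, and
-- ChannelIdentification must be left unset, per the constraints above.
jobSettings :: Settings
jobSettings =
  newSettings
    & settings_showSpeakerLabels ?~ True
    & settings_maxSpeakerLabels ?~ 4
    & settings_vocabularyName ?~ "my-domain-vocabulary"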

Subtitles

subtitles_formats :: Lens' Subtitles (Maybe [SubtitleFormat]) Source #

Specify the output format for your subtitle file.

SubtitlesOutput

subtitlesOutput_formats :: Lens' SubtitlesOutput (Maybe [SubtitleFormat]) Source #

Specify the output format for your subtitle file; if you select both SRT and VTT formats, two output files are generated.

subtitlesOutput_subtitleFileUris :: Lens' SubtitlesOutput (Maybe [Text]) Source #

Choose the output location for your subtitle file. This location must be an S3 bucket.

Tag

tag_key :: Lens' Tag Text Source #

The first part of a key:value pair that forms a tag associated with a given resource. For example, in the tag 'Department':'Sales', the key is 'Department'.

tag_value :: Lens' Tag Text Source #

The second part of a key:value pair that forms a tag associated with a given resource. For example, in the tag 'Department':'Sales', the value is 'Sales'.

Transcript

transcript_redactedTranscriptFileUri :: Lens' Transcript (Maybe Text) Source #

The S3 object location of the redacted transcript.

Use this URI to access the redacted transcript. If you specified an S3 bucket in the OutputBucketName field when you created the job, this is the URI of that bucket. If you chose to store the transcript in Amazon Transcribe, this is a shareable URL that provides secure access to that location.

transcript_transcriptFileUri :: Lens' Transcript (Maybe Text) Source #

The S3 object location of the transcript.

Use this URI to access the transcript. If you specified an S3 bucket in the OutputBucketName field when you created the job, this is the URI of that bucket. If you chose to store the transcript in Amazon Transcribe, this is a shareable URL that provides secure access to that location.

TranscriptFilter

transcriptFilter_participantRole :: Lens' TranscriptFilter (Maybe ParticipantRole) Source #

Determines whether the customer or the agent is speaking the phrases that you've specified.

transcriptFilter_relativeTimeRange :: Lens' TranscriptFilter (Maybe RelativeTimeRange) Source #

An object that allows percentages to specify the proportion of the call where you would like to apply a filter. For example, you can specify the first half of the call. You can also specify the period of time between halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.

transcriptFilter_negate :: Lens' TranscriptFilter (Maybe Bool) Source #

If TRUE, the rule that you specify is applied to everything except for the phrases that you specify.

transcriptFilter_absoluteTimeRange :: Lens' TranscriptFilter (Maybe AbsoluteTimeRange) Source #

A time range, set in seconds, between two points in the call.

transcriptFilter_transcriptFilterType :: Lens' TranscriptFilter TranscriptFilterType Source #

Matches the phrase to the transcription output in a word-for-word fashion. For example, if you specify the phrase "I want to speak to the manager", Amazon Transcribe attempts to match that specific phrase to the transcription.

transcriptFilter_targets :: Lens' TranscriptFilter (NonEmpty Text) Source #

The phrases that you're specifying for the transcript filter to match.
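
A sketch of an exact-match filter for a single phrase spoken by the customer. Both required fields are passed to the smart constructor; the argument order (filter type, then targets) and the TranscriptFilterType_EXACT pattern name follow the usual amazonka conventions and should be checked against the generated Types module:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Transcribe
import Amazonka.Transcribe.Lens
import Control.Lens ((&), (?~))
import Data.List.NonEmpty (NonEmpty (..))

escalationFilter :: TranscriptFilter
escalationFilter =
  newTranscriptFilter
    TranscriptFilterType_EXACT
    ("I want to speak to the manager" :| [])
    & transcriptFilter_participantRole ?~ ParticipantRole_CUSTOMER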

TranscriptionJob

transcriptionJob_creationTime :: Lens' TranscriptionJob (Maybe UTCTime) Source #

A timestamp that shows when the job was created.

transcriptionJob_failureReason :: Lens' TranscriptionJob (Maybe Text) Source #

If the TranscriptionJobStatus field is FAILED, this field contains information about why the job failed.

The FailureReason field can contain one of the following values:

  • Unsupported media format - The media format specified in the MediaFormat field of the request isn't valid. See the description of the MediaFormat field for a list of valid values.
  • The media format provided does not match the detected media format - The media format of the audio file doesn't match the format specified in the MediaFormat field in the request. Check the media format of your media file and make sure that the two values match.
  • Invalid sample rate for audio file - The sample rate specified in the MediaSampleRateHertz of the request isn't valid. The sample rate must be between 8,000 and 48,000 Hertz.
  • The sample rate provided does not match the detected sample rate - The sample rate in the audio file doesn't match the sample rate specified in the MediaSampleRateHertz field in the request. Check the sample rate of your media file and make sure that the two values match.
  • Invalid file size: file size too large - The size of your audio file is larger than Amazon Transcribe can process. For more information, see Limits in the Amazon Transcribe Developer Guide.
  • Invalid number of channels: number of channels too large - Your audio contains more channels than Amazon Transcribe is configured to process. To request additional channels, see Amazon Transcribe Limits in the Amazon Web Services General Reference.

transcriptionJob_contentRedaction :: Lens' TranscriptionJob (Maybe ContentRedaction) Source #

An object that describes content redaction settings for the transcription job.

transcriptionJob_identifiedLanguageScore :: Lens' TranscriptionJob (Maybe Double) Source #

A value between zero and one that Amazon Transcribe assigned to the language that it identified in the source audio. Larger values indicate that Amazon Transcribe has higher confidence in the language it identified.

transcriptionJob_subtitles :: Lens' TranscriptionJob (Maybe SubtitlesOutput) Source #

Generate subtitles for your batch transcription job.

transcriptionJob_languageCode :: Lens' TranscriptionJob (Maybe LanguageCode) Source #

The language code for the input speech.

transcriptionJob_languageOptions :: Lens' TranscriptionJob (Maybe (NonEmpty LanguageCode)) Source #

An object that shows the optional array of languages you provided for transcription jobs with automatic language identification enabled.

transcriptionJob_settings :: Lens' TranscriptionJob (Maybe Settings) Source #

Optional settings for the transcription job. Use these settings to turn on speaker recognition, to set the maximum number of speakers that should be identified and to specify a custom vocabulary to use when processing the transcription job.

transcriptionJob_startTime :: Lens' TranscriptionJob (Maybe UTCTime) Source #

A timestamp that shows when the job started processing.

transcriptionJob_completionTime :: Lens' TranscriptionJob (Maybe UTCTime) Source #

A timestamp that shows when the job completed.

transcriptionJob_media :: Lens' TranscriptionJob (Maybe Media) Source #

An object that describes the input media for the transcription job.

transcriptionJob_modelSettings :: Lens' TranscriptionJob (Maybe ModelSettings) Source #

An object containing the details of your custom language model.

transcriptionJob_jobExecutionSettings :: Lens' TranscriptionJob (Maybe JobExecutionSettings) Source #

Provides information about how a transcription job is executed.

transcriptionJob_identifyLanguage :: Lens' TranscriptionJob (Maybe Bool) Source #

A value that shows if automatic language identification was enabled for a transcription job.

transcriptionJob_transcript :: Lens' TranscriptionJob (Maybe Transcript) Source #

An object that describes the output of the transcription job.

transcriptionJob_tags :: Lens' TranscriptionJob (Maybe (NonEmpty Tag)) Source #

A key:value pair assigned to a given transcription job.

transcriptionJob_mediaSampleRateHertz :: Lens' TranscriptionJob (Maybe Natural) Source #

The sample rate, in Hertz, of the audio track in the input media file.

TranscriptionJobSummary

transcriptionJobSummary_creationTime :: Lens' TranscriptionJobSummary (Maybe UTCTime) Source #

A timestamp that shows when the job was created.

transcriptionJobSummary_failureReason :: Lens' TranscriptionJobSummary (Maybe Text) Source #

If the TranscriptionJobStatus field is FAILED, a description of the error.

transcriptionJobSummary_contentRedaction :: Lens' TranscriptionJobSummary (Maybe ContentRedaction) Source #

The content redaction settings of the transcription job.

transcriptionJobSummary_identifiedLanguageScore :: Lens' TranscriptionJobSummary (Maybe Double) Source #

A value between zero and one that Amazon Transcribe assigned to the language it identified in the source audio. A higher score indicates that Amazon Transcribe is more confident in the language it identified.

transcriptionJobSummary_outputLocationType :: Lens' TranscriptionJobSummary (Maybe OutputLocationType) Source #

Indicates the location of the output of the transcription job.

If the value is CUSTOMER_BUCKET then the location is the S3 bucket specified in the outputBucketName field when the transcription job was started with the StartTranscriptionJob operation.

If the value is SERVICE_BUCKET then the output is stored by Amazon Transcribe and can be retrieved using the URI in the GetTranscriptionJob response's TranscriptFileUri field.

transcriptionJobSummary_startTime :: Lens' TranscriptionJobSummary (Maybe UTCTime) Source #

A timestamp that shows when the job started processing.

transcriptionJobSummary_completionTime :: Lens' TranscriptionJobSummary (Maybe UTCTime) Source #

A timestamp that shows when the job was completed.

transcriptionJobSummary_transcriptionJobStatus :: Lens' TranscriptionJobSummary (Maybe TranscriptionJobStatus) Source #

The status of the transcription job. When the status is COMPLETED, use the GetTranscriptionJob operation to get the results of the transcription.

transcriptionJobSummary_identifyLanguage :: Lens' TranscriptionJobSummary (Maybe Bool) Source #

Whether automatic language identification was enabled for a transcription job.

VocabularyFilterInfo

vocabularyFilterInfo_languageCode :: Lens' VocabularyFilterInfo (Maybe LanguageCode) Source #

The language code of the words in the vocabulary filter.

vocabularyFilterInfo_lastModifiedTime :: Lens' VocabularyFilterInfo (Maybe UTCTime) Source #

The date and time that the vocabulary was last updated.

vocabularyFilterInfo_vocabularyFilterName :: Lens' VocabularyFilterInfo (Maybe Text) Source #

The name of the vocabulary filter. The name must be unique in the account that holds the filter.

VocabularyInfo

vocabularyInfo_languageCode :: Lens' VocabularyInfo (Maybe LanguageCode) Source #

The language code of the vocabulary entries.

vocabularyInfo_lastModifiedTime :: Lens' VocabularyInfo (Maybe UTCTime) Source #

The date and time that the vocabulary was last modified.

vocabularyInfo_vocabularyState :: Lens' VocabularyInfo (Maybe VocabularyState) Source #

The processing state of the vocabulary. If the state is READY you can use the vocabulary in a StartTranscriptionJob request.